To perform accurate movements, the sensorimotor system must maintain a delicate calibration of the mapping between visual inputs and motor outputs. Previous work has focused on the mapping between visual inputs and individual locations in egocentric space, but little attention has been paid to the mappings that support interactions with 3D objects. In this study, we investigated sensorimotor adaptation of grasping movements targeting the depth dimension of 3D paraboloid objects. Object depth was specified by separately manipulating binocular disparity (stereo) and texture gradients. At the end of each movement, the fingers closed down on a physical object consistent with one of the two cues, depending on the condition (haptic-for-texture or haptic-for-stereo). Unlike traditional adaptation paradigms, where relevant spatial properties are determined by a single dimension of visual information, this method enabled us to investigate whether adaptation processes can selectively adjust the influence of different sources of visual information depending on their relationship to physical depth. In two experiments, we found short-term changes in grasp performance consistent with a process of cue-selective adaptation: the slope of the grip aperture with respect to a reliable cue (correlated with physical reality) increased, whereas the slope with respect to the unreliable cue (uncorrelated with physical reality) decreased. In contrast, slope changes did not occur during exposure to a set of stimuli where both cues remained correlated with physical reality, but one was rendered with a constant bias of 10 mm; the grip aperture simply became uniformly larger or smaller, as in standard adaptation paradigms. Overall, these experiments support a model of cue-selective adaptation driven by correlations between error signals and input values (i.e., supervised learning), rather than mismatched haptic and visual signals.
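The supervised-learning account favored here can be illustrated with a minimal delta-rule sketch. All names, parameter values, and the linear cue-combination form below are our own illustrative assumptions, not the study's fitted model: the grip depth estimate is a weighted sum of two cue signals, and each trial's haptic error updates the weights in proportion to the cue values, so credit flows to whichever cue correlates with physical depth.

```python
import random

def adapt(haptic_cue, n_trials=2000, lr=0.001, seed=1):
    """Hypothetical delta-rule sketch of cue-selective adaptation.

    The cue matched to haptics tracks physical depth; the other cue is
    redrawn at random, i.e. uncorrelated with physical depth.
    """
    rng = random.Random(seed)
    w = {"stereo": 0.5, "texture": 0.5}       # start with equal cue weights
    for _ in range(n_trials):
        depth = rng.uniform(20.0, 60.0)       # physical object depth (mm)
        cues = {
            name: depth if name == haptic_cue else rng.uniform(20.0, 60.0)
            for name in w
        }
        centered = {k: v - 40.0 for k, v in cues.items()}  # center on mean depth
        estimate = 40.0 + sum(w[k] * centered[k] for k in w)
        error = depth - estimate              # haptic depth minus visual estimate
        for k in w:                           # delta rule: dw ∝ error × cue value
            w[k] += lr * error * centered[k]
    return w

w = adapt("stereo")
print(w)  # weight on the haptic-matched cue grows; the uncorrelated cue's decays
```

Because the update multiplies the error by each cue's value, the weight on the cue correlated with physical depth rises toward 1 while the uncorrelated cue's weight decays toward 0, mirroring the slope increase and decrease reported for the reliable and unreliable cues. A constant 10 mm bias, by contrast, produces an error uncorrelated with either cue's variation, so in this scheme it would shift the baseline rather than the cue weights.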