Spatial Coding of the Predicted Impact Location of a Looming Object

Current Biology, Vol. 14, 1174–1180, July 13, 2004, © 2004 Elsevier Ltd. All rights reserved.

DOI 10.1016/j.cub.2004.06.047

Marco Neppi-Mòdona,1,2 David Auclair,1 Angela Sirigu,1 and Jean-René Duhamel1,*
1 Institut des Sciences Cognitives, CNRS-Université de Lyon 1, 67 Bd Pinel, 69675 Bron, France
2 Dipartimento di Psicologia, Università di Torino, Via Po 14, 10123 Torino, Italia

Summary

Avoiding or intercepting looming objects implies a precise estimate of both time until contact and impact location [1–4]. In natural situations, extrapolating a movement trajectory relative to some egocentric landmark requires taking into account variations in retinal input associated with moment-to-moment changes in body posture [5–7]. Here, human observers predicted the impact location on their face of an approaching stimulus mounted on a robotic arm, while we systematically manipulated the relation between eye, head, and trunk orientation. The projected impact point on the observer's face was estimated most accurately when the target originated from a location aligned with both the head and eye axes. Eccentric targets with respect to either axis resulted in a systematic perceptual bias ipsilateral to the trajectory's origin. We conclude (1) that predicting the impact point of a looming target requires combining retinal information with eye position information, (2) that this computation is accomplished accurately for some, but not all, possible combinations of these cues, (3) that the representation of looming trajectories is not formed in a single, canonical reference frame, and (4) that the observed perceptual biases could reflect an automatic adaptation for interceptive/defensive actions within near peripersonal space.

Results and Discussion

The perception of the projected point of impact of an approaching object was studied by using a robotic arm to displace a spot of light toward the face of an observer in an otherwise completely dark environment (Figure 1A). The moving target was turned off halfway between its starting position and the head of the observer, who was asked to report whether the target would have touched the left or right half of his/her face. We varied the spatial origin and end point of the looming target and the orientation of the observer's gaze and trunk, defining several viewing conditions.
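The observer's task amounts to linearly extrapolating the visible half of the trajectory to the plane of the face. The sketch below is our reconstruction of that geometry, not the authors' code; the dimensions (40 cm start distance, 17° eccentric origins, extinction at the halfway point) come from Experimental Procedures, while treating the 40 cm as distance along the approach axis and converting the 17° eccentricity with tan() are simplifying assumptions.

```python
import math

# Our reconstruction of the task geometry: trajectories start 40 cm from
# the face at 0 or 17 degrees left/right, are extinguished halfway, and the
# observer must extrapolate the unseen half to the face plane (z = 0).

START_Z = 400.0                                # start distance along the approach axis, mm

def projected_impact(x_origin, x_last_seen, z_last_seen):
    """Extrapolate the straight line through (x_origin, START_Z) and
    (x_last_seen, z_last_seen) down to the face plane z = 0; return the
    lateral impact coordinate (positive = right of the head midline)."""
    slope = (x_last_seen - x_origin) / (z_last_seen - START_Z)
    return x_origin + slope * (0.0 - START_Z)

# A right-origin target aimed 15 mm left of the midline, switched off at z = 200 mm:
x0 = START_Z * math.tan(math.radians(17.0))    # ~122 mm right of the midline (assumption)
x_mid = (x0 + (-15.0)) / 2.0                   # lateral position halfway along the path
print(round(projected_impact(x0, x_mid, 200.0), 1))  # -15.0, the veridical impact point
```

A left/right response then simply reads off the sign of the returned coordinate; the paper's central finding is that human extrapolation deviates from this veridical geometry when the origin is eccentric.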
*Correspondence: [email protected]

In the foregoing presentation, the spatial origins of the target are defined as left, center, or right with respect to the head axis.

Eyes, Head, and Trunk Coaligned

Impact-point prediction was most accurate when the target originated from a straight-ahead location (Figure 1B, dotted line). Observers discriminated reliably between left- and right-projected impact locations. In terms of just noticeable difference (JND, the distance between the 50% and 75% "right-impact" decision points on the best-fitting psychometric function), they could distinguish between impact points separated by 2.2 mm (SD = 3.3 mm; Figure 2B, shaded central panel, white bar). The point of subjective equality (50% probability of right-impact responses) was closest to the objective midline for straight-ahead target origins (Figure 2A, shaded central panel, white bar; mean bias = 2 mm to the right, SD = 5.8 mm, not significantly different from zero on a one-sample t test). Stimuli originating from eccentric locations induced strong, systematic biases. Trajectories starting from the left and aimed at the midline or the right side of the face were often perceived as directed toward the left side (Figure 1B, solid line). The mean leftward bias across subjects was 18.4 mm (SD = 15.5 mm; Figure 2A, shaded panel, black bar). A mirror-symmetric pattern was observed for right target origins (Figure 1B, dashed line), with a mean rightward bias of 18.2 mm (SD = 13.1 mm; Figure 2A, shaded panel, striped bar). These biases were not accompanied by a significant increase in JNDs (Figure 2B), ruling out the possibility that prediction errors were related to elevated discrimination thresholds for stimuli moving in the peripheral visual field. Hence, the subjective impact location of a looming target is systematically displaced ipsilaterally to its spatial origin.
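Both summary statistics follow in closed form from the fitted logistic P(x) = exp(a + bx)/(1 + exp(a + bx)) described in Experimental Procedures: the point of subjective equality is the x at which P = 0.5, and the JND is the distance from there to the 75% point. A short sketch; the parameter values are made up for illustration and are not fitted to the paper's data.

```python
import math

def psychometric(x, a, b):
    """Logistic psychometric function: P(x) = exp(a + b*x) / (1 + exp(a + b*x))."""
    z = a + b * x
    return math.exp(z) / (1.0 + math.exp(z))

def pse_and_jnd(a, b):
    """PSE: solve a + b*x = 0 (P = 0.5). JND: the 75% point lies where
    a + b*x = log(0.75/0.25), so the 50%-to-75% distance is log(3)/b."""
    return -a / b, math.log(3.0) / b

# Hypothetical fit parameters (x in mm, positive = right of the midline):
a, b = -1.0, 0.5
pse, jnd = pse_and_jnd(a, b)
print(round(pse, 1))   # 2.0 -> a 2 mm rightward bias
print(round(jnd, 1))   # 2.2 -> impact points ~2.2 mm apart are discriminable
```

Note that the JND depends only on the slope parameter b, which is why the paper can report biases (shifts of the PSE) that change across conditions while the JNDs stay flat.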
Eyes Deviated Left or Right, Head and Trunk Coaligned

In the data presented so far, the spatial origin of the target with respect to the head and trunk is confounded with its retinal location: the image of a target originating from an eccentric spatial location is also formed in the peripheral visual field. In order to determine whether prediction errors are related to the eye- or the head-centered origin of the trajectory, we had subjects maintain eccentric fixation during its presentation. This defined a set of viewing conditions in which the same retinal motion was produced by stimuli originating from the left, central, and right regions of space. If impact prediction is accomplished in eye-centered coordinates, then equivalent retinal stimuli should produce equivalent percepts. By contrast, if it is accomplished in head-centered coordinates, prediction errors should be most similar for targets having the same head-centered location even if their retinal trajectories differ. The results show that the perception of looming stimuli jointly depends on the head- and eye-centered origins of the target. Whenever the looming target originated from the left part of space or moved within the left visual field, subjects reported more left impacts, with this bias being strongest when both conditions were associated (Figure 2A, black column on the left panel, white column on the right panel, black column on the central panel). A symmetrical pattern was found for right egocentric origins and right visual-field trajectories (Figure 2A, striped column on the right panel, white column on the left panel, striped column on the central panel). Fixation direction and target origin interacted significantly (F[4,68] = 26.09; p < 0.0001); as compared to straight-ahead fixation, the size of prediction errors was reduced by gaze alignment with the left origin (Figure 2A, black bars in the central versus left panels, p < 0.01). A similar tendency was observed for gaze alignment with the right origin (striped bars in the central versus right panels, p = 0.08). Yet this did not eliminate the prediction bias totally: errors remained larger than for central-origin targets presented during straight-ahead fixation (Figure 2A, black bar on the left panel or striped bar on the right panel versus white bar on the central panel; for both, p < 0.04). Also, equivalent peripheral retinal trajectories led to smaller prediction errors when the target's origin coincided with the head axis than when it started from the left or right of the head midline (Figure 2A, white bar on the left panel versus striped bar on the central panel, p < 0.05; white bar on the right panel versus black bar on the middle panel, p = 0.07). JNDs (Figure 2B) varied between 1.7 and 3.8 mm in the different viewing conditions, but none of the comparisons yielded significant differences, again excluding the possibility that prediction biases were related to elevated trajectory discrimination thresholds.

Figure 1. Experimental Setup for the Measurement of Impact-Point Prediction
(A) Experimental setup. Lateral view: (a) robotic arm, (b) LED stimulus mounted on the robot's "finger" (gray circle), (c) three possible spatial origins of the target (0°, 17° left and right), (d) three possible locations of the fixation LEDs (black circles; 0°, 17° left and right), fixed on a table-mounted frame and positioned 3.5° above the target movement plane, (e) translucent liquid crystal goggles, (f) head rest, (g) video camera, (i) infrared light source, and (j) video monitor for eye position tracking. Top view: viewed (solid line) and extrapolated (dotted line) portions of stimulus trajectories and layout of the seven projected impact points on the subject's face (small circles). Impact points were spaced 15 mm apart, from 45 mm left to 45 mm right of the subject's head midline.
(B) Influence of stimulus spatial origin on impact-point prediction accuracy. Logistic regression of left-right impact judgments made by a representative subject as a function of the spatial origin of the target for the straight-ahead fixation condition. Individual data points represent the observer's percentage of correct responses for each of the seven projected impact points on the face. Transition slopes between perceived left and right impacts are steep, but the location at which this transition occurs differs for the left (solid line), central (dotted line), and right (dashed line) spatial origins.

Interobserver Variability

In order to investigate the relative contribution of retinal and nonretinal cues to impact-point prediction in individual observers, we used multiple regression analysis with the retinal and the head-centered locations of the target as regressors. In the majority of subjects, high partial-correlation coefficients were obtained between prediction errors and both variables (Figure 3A, upper right quadrant). In some subjects, performance depended predominantly on the target's head- or eye-centered location (lower right and upper left quadrants, respectively). Finally, in three subjects, the error patterns seemed unrelated to either of these spatial dimensions (lower left portion of the graph). Figure 3B shows the performance of three representative subjects. All showed an ipsilateral bias. Each plot contains the three psychometric functions associated with the central fixation condition as well as the curves obtained for central target trajectories


Figure 2. Impact-Point Prediction as a Function of Target Origin and Eye Fixation
The cartoon on top of the data columns schematizes, from top to bottom, gaze direction, target origin, and the resulting head-centered and eye-centered (i.e., retinal) locations of the looming stimulus.
(A) Prediction bias. Positive values of the bars correspond to a rightward deviation of the perceived impact location. The shaded central panel summarizes average group data obtained for the straight-ahead fixation condition (the same condition as shown in Figure 1B). Impact points of trajectories with a central origin are perceived at their near-veridical locations, and significant, mirror-symmetric biases are observed for left and right eccentric target origins. Lateral panels represent data obtained by using eccentric fixations for two stimulation conditions that yielded retinal stimulus trajectories comparable to those tested with central fixation (e.g., black and white bars on the left panel versus white and striped bars on the central panel, and white and striped bars on the right panel versus black and white bars on the central panel). Through this comparison, one can estimate the effects of target-origin alignment with either the gaze or the head axis; for instance, it can be seen that aligning the gaze axis with the target origin reduces the size of the prediction error and that aligning the head axis with the target origin (while gaze is deviated) also reduces prediction errors (see text for details).
(B) JNDs, used as a measure of the discrimination threshold between adjacent impact points, show no significant variations across the different spatial origins and fixation locations.

viewed from an eccentric fixation. In subject 4, the psychometric curves obtained for the central target trajectories are perfectly aligned, indicating that the prediction bias is related to the head-centered, not the eye-centered, location of the looming target. By contrast, subject 17 exemplifies impact prediction dominated by the eye-centered location of the target, with a better alignment of psychometric curves corresponding to target origins with matching retinal rather than head-centered locations. Finally, subject 12 shows an intermediate pattern, where the prediction bias seems to be a compromise between the two reference frames. This heterogeneity in individual performance suggests that different observers might weight the available retinal and eye-position cues differently.

Eyes and Head Coaligned, Trunk Deviated Left or Right

In the experimental conditions described so far, the trunk and head were coaligned. In order to test whether head-on-trunk information is relevant to predicting the point of impact of an object on the head, we partly replicated the experiment with the subject's trunk deviated 34° leftward or rightward while the head and eyes remained directed straight ahead. This determined viewing conditions in which a target originating from a central position with respect to the head had a peripheral, right or left, origin with respect to the trunk. This had no significant effect on performance (Figure 4).

Figure 3. Interobserver Variability
(A) Scatter plot of squared partial-correlation coefficients calculated from individual multiple linear-regression analyses on prediction errors as a function of the head-centered (abscissa) and eye-centered (ordinate) location of the moving target. Each symbol represents a single subject.
(B) Prediction performance for three different observers. The presentation format is the same as in Figure 2. In addition to the three trajectory types (central, left, and right target origins) presented during central fixation, we show sigmoid fits for the central trajectories presented during left and right eccentric fixation. Impact-prediction performance for central-origin targets depends principally on the head-centered origin in subject 4 (H > E), on the eye-centered origin in subject 17 (E > H), and on both factors in subject 12 (E and H). Abbreviations: E = eye-centered origin; H = head-centered origin.

In this study we asked (1) how accurately an observer can predict the impact location of a target approaching the head when presented with only a portion of its trajectory, and (2) what frame of reference is used to perform this prediction. The underlying hypothesis was that in a task designed to measure what is essentially an anticipated tactile percept, accurate prediction of the projected impact point would jointly depend on the visual analysis of the 3D trajectory of the target (e.g., its relative rate and direction of expansion on the retina) and on the mapping of this trajectory onto an internal representation of the observer's own head by using postural information. Motion-in-depth perception has been investigated in studies that did not focus specifically on impact-point prediction. In the present study we found average impact-point JNDs of 1.7–3.8 mm corresponding, for trajectories starting 40 cm from the eyes, to angular differences of 0.25°–0.54°. These threshold values are close to those reported in somewhat similar experiments by Beverley and Regan [8] (0.20°–0.80°) but higher than in Regan and Kaushal [9] (0.03°–0.12°). Important methodological differences between these two studies and our own, such as the type of stimuli and the level of subject expertise, make direct comparisons quite difficult. More surprising is the systematic bias in impact-point perception observed for eccentric target origins, which could not have been predicted from the standard visual psychophysics of 3D motion perception. How can we account for this finding? A first possibility is that it reflects

poorer perception of movement trajectories in the peripheral retina. However, this explanation is unlikely, since discrimination thresholds did not increase significantly for eccentric trajectories. Three alternative explanations can be evaluated. (1) Attentional bias: covert shifts of attention in the direction of a stimulus moving in the periphery may have some influence on subjective estimates of spatial location. This issue has not been investigated very thoroughly in the literature. In static line-bisection or landmark tasks, the perceived midpoint is displaced toward the attentional focus by about 0.5° ([10]; Wardak et al., 2000, Soc. Neurosci., abstract). This is a relatively small effect as


Figure 4. Influence of Trunk Orientation on Prediction Accuracy
Leftward (continuous line) or rightward (dashed line) deviation of the trunk has no influence on impact-point prediction.

compared to the present observations (18 mm at 40 cm viewing distance = 2.6°). (2) Absolute (egocentric) distance underestimation: judging the stimulus to be nearer than it really is would have little consequence for central trajectories but, as shown in Figure 5A, would lead to mislocalizing the projected impact point of eccentric targets toward their spatial origin. Direct evidence that subjects systematically underestimated target distances is lacking in the present study but has been reported with stationary stimuli [11, 12]. (3) Implicit target interception: rather than attempting to predict the impact point of the target in the plane of the face, subjects might covertly estimate the location of the contact point of the stimulus with the hand at some distance in front of the face (Figure 5B). The interception plane, as inferred from the size of the bias, is located about 5 cm from the eyes, corresponding to a time to collision of 250 ms. Interestingly, subregions of the parietal [13] and premotor cortex [14, 15] in monkeys contain neurons with visual receptive fields of restricted spatial extent, discharging for stimuli within 5–10 cm of the face. Also, microstimulation of these cortical fields evokes defensive motor responses, suggesting that they may encode a margin of safety around the body [16, 17]. As illustrated in Figure 5, the distance-underestimation and interception hypotheses are geometrically related, and our results cannot distinguish definitively between the two. Computing the projected point of impact of a looming object does not depend solely on visual factors. It involves a processing stage in which retinal information about the target is combined with postural (eye position) information. Our results are consistent with studies of

ego-motion perception showing that whenever retinal or extraretinal information sources are altered or suppressed, heading judgments deteriorate [18–20]. In the present study, prediction accuracy was not homogeneous across visual space; it was highest when the target, eye, and head axes coincided and lowest when targets were eccentric with respect to both the eyes and the head. Alignment of the visual axis with other reference frames is an ecologically viable strategy that can be observed in many species showing coordinated responses of the head, trunk, eyes, and ears toward the spatial origin of a sensory event. The alignment of sensory receptor surfaces results in an alignment of internal sensory maps, facilitating the integration of multisensory cues across these maps and thereby enhancing spatial localization performance [21]. We also found interobserver differences in impact-point prediction related to reference-axes alignment. In most subjects the error distribution indicates a partial integration of visual and eye-position cues, but in some cases the perception of impact location was nearly constant across the different eye fixation locations, suggesting a stable, head-centered representation of the target's trajectory. Such integration of retinal and extraretinal signals could be related to the properties of motion-in-depth-sensitive neurons in the primate parietal cortex [22, 23]. The activity of these neurons is modulated by postural signals and exhibits different degrees of spatial invariance of the encoded visual trajectories with respect to the head, the arm, or the hand [24–26]. Our present result may represent a psychophysical signature of the intermediate-stage operations postulated by models of space representation [27–29], which could be accomplished by cortical structures such as the parietal cortex.

Figure 5. Two Possible Interpretations of the Directional Specificity of the Prediction Bias
(A) According to the target-distance-underestimation hypothesis, subjects would judge absolute target distance as nearer than it really was. For eccentric trajectories, this would result in predicted target impacts on the face erroneously displaced ipsilaterally to the target's spatial origin. Distance underestimation would have no effect, however, on central target origins in the eye/head axis.
(B) According to the covert-interception hypothesis, the observers' apparent prediction bias would reflect the estimated point of interception of the target by the hand at some distance in front of the face rather than its actual impact location on the face. This interception plane, which could represent a safe-distance margin, is located approximately 5 cm in front of the eyes in the central fixation condition.
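The arithmetic linking the 18 mm bias to the interception-plane figures quoted above can be checked directly. This is our reconstruction of the geometry, not the authors' calculation; assuming a midline-aimed trajectory and converting the 17° origin eccentricity with tan() are simplifications, which is why the numbers land near, rather than exactly on, the ~5 cm and 250 ms quoted in the text.

```python
import math

# Numbers from the text: trajectories start 40 cm away at 17 deg eccentricity,
# the target moves at 20 cm/s, and the mean lateral bias is ~18 mm.
SPEED_MM_S = 200.0
x0 = 400.0 * math.tan(math.radians(17.0))   # lateral offset of an eccentric origin, mm

# Angular size of the 18 mm bias at the 40 cm viewing distance:
bias_deg = math.degrees(math.atan(18.0 / 400.0))

# Covert-interception reading: a trajectory aimed at the midline, read off at a
# plane d mm in front of the face, is displaced laterally by x0 * d / 400, so an
# 18 mm bias implies an interception plane at:
d_mm = 18.0 * 400.0 / x0
ttc_ms = d_mm / SPEED_MM_S * 1000.0         # time to collision at that plane

print(round(bias_deg, 1))   # 2.6 degrees, as quoted in the text
print(round(d_mm))          # 59 mm, i.e. roughly the ~5 cm interception plane
print(round(ttc_ms))        # 294 ms, the same order as the quoted 250 ms
```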

Received: November 24, 2003
Revised: April 15, 2004
Accepted: May 5, 2004
Published: July 13, 2004

Experimental Procedures

All experiments, apparatus, and safety procedures were approved by the CNRS human subjects committee.

Apparatus
The apparatus consisted of a green LED covered with a 5 mm wide, custom-made Fresnel lens attached to a robotic arm (CRS Robotics A-255, Burlington, Ontario). The robot was programmed to execute purely rectilinear horizontal movements toward the observer's head at a constant speed of 20 cm/s. The observer sat in a medical chair with his/her head immobilized by a forehead rest and chin rest (see Figure 1). During stimulus presentation he/she fixated one of three horizontally aligned red LEDs (0°, 17° left and right) fixed on a table-mounted metal frame 3.5° above the movement plane of the target LED and 40 cm from the subject. Eye and head position were continuously monitored with an infrared video camera to control for steadiness of ocular fixation. Trials in which subjects broke fixation were repeated. The observer wore liquid crystal shutter goggles (model P-1, Translucent Technologies, Toronto, Ontario) and viewed the fixation and moving stimuli binocularly in an otherwise completely dark and frameless visual environment. The goggles allowed us to control the timing of exposure to the stimuli and prevented dark adaptation; at the end of each stimulus excursion, a computer-controlled switch commanded shutter closure and turned on the room lights, thus illuminating the retina while preventing pattern vision.

Procedure
Stimulus trajectories originated from one of three possible spatial locations (0°, 17° left and right) horizontally aligned at eye level, 40 cm away from the subject, and were aimed at one of seven possible impact points spaced 15 mm apart on the observer's face, from 45 mm left to 45 mm right of the head midsagittal plane. The observer was instructed to mentally complete the trajectory after goggle closure and to report which side of his/her face would have been touched by the stimulus had it continued along its path (left/right forced choice). Exposure time was 1 s, so that the perceived and the mentally completed distances were identical (20 cm). Psychophysical data indicate that within this time window subjects' prediction ability is highest, progressively declining for longer extrapolated intervals [30]. Each combination of looming target origin (n = 3), end point (n = 7), and eye fixation position (n = 3) was repeated at least four times and tested over two or three 20–50 min sessions in a group of 18 untrained, naive subjects (eight males and ten females; mean age 24.89 years, SD = 3.29; mean education 17 years). A subset of 12 subjects was further tested in a condition in which the eyes and head were in central alignment but the trunk was deviated 34° left or right.

Data Analysis
For each viewing condition, a logistic regression curve was fitted to the distribution of subjects' correct answers for each projected collision point on the face by using the equation P(x) = exp(a + bx)/(1 + exp(a + bx)), where P is the probability of a correct answer and a and b are the parameters of the fit. Standard statistical analyses were conducted on systematic errors (prediction bias, defined as the deviation of the 50% correct-answer point from the actual midline) and on just noticeable differences (JND, defined as the difference between the 50% and 75% points).

Acknowledgments

Research was supported by a postdoctoral fellowship from the Fyssen Foundation to M.N.-M. and by a grant from the Italian Ministero dell'Istruzione, dell'Università e della Ricerca to Anna Berti. We thank Lionel Granjon, Marc Thévenet, and Belkacem Messaoudi for their invaluable technical assistance.

References

1. Lee, D.N. (1976). A theory of visual control of braking based on information about time-to-collision. Perception 5, 437–459.
2. Regan, D. (1997). Visual factors in hitting and catching. J. Sports Sci. 15, 533–558.
3. Tresilian, J.R. (1990). Perceptual information for the timing of interceptive action. Perception 19, 223–239.
4. Oudejans, R.R.D., Michaels, C.F., Bakker, F.C., and Davids, K. (1999). Shedding some light on catching in the dark: perceptual mechanisms for catching fly balls. J. Exp. Psychol. Hum. Percept. Perform. 25, 531–542.
5. Crowell, J.A., Banks, M.S., Shenoy, K.V., and Andersen, R.A. (1998). Visual self-motion perception during head turns. Nat. Neurosci. 1, 732–737.
6. Peper, L., Bootsma, R.J., Mestre, D.R., and Bakker, F.C. (1994). Catching balls: how to get the hand to the right place at the right time. J. Exp. Psychol. Hum. Percept. Perform. 20, 591–612.
7. Oudejans, R.R.D., Michaels, C.F., Bakker, F.C., and Dolné, N.A. (1996). The relevance of action in perceiving affordances: perception of catchableness of flying balls. J. Exp. Psychol. Hum. Percept. Perform. 22, 879–891.
8. Beverley, K.I., and Regan, D. (1975). The relation between discrimination and sensitivity in the perception of motion in depth. J. Physiol. 249, 387–398.
9. Regan, D., and Kaushal, S. (1994). Monocular discrimination of the direction of motion in depth. Vision Res. 34, 163–177.
10. Nielsen, K.E., Intriligator, J., and Barton, J.J. (1999). Spatial representation in the normal visual field: a study of hemifield line bisection. Neuropsychologia 37, 267–277.
11. Johnston, E.B. (1991). Systematic distortions of shape from stereopsis. Vision Res. 31, 1351–1360.
12. Foley, J.M., Ribeiro-Filho, N.P., and Da Silva, J.A. (2004). Visual perception of extent and the geometry of visual space. Vision Res. 44, 147–156.
13. Duhamel, J.-R., Colby, C.L., and Goldberg, M.E. (1991). Congruent representations of visual and somatosensory space in single neurons of monkey ventral intraparietal cortex (area VIP). In Brain and Space, J. Paillard, ed. (Oxford: Oxford University Press), pp. 223–236.
14. Fogassi, L., Gallese, V., di Pellegrino, G., Fadiga, L., Gentilucci, M., Luppino, G., Matelli, M., Pedotti, A., and Rizzolatti, G. (1992). Space coding by premotor cortex. Exp. Brain Res. 89, 686–690.
15. Graziano, M.S., and Gross, C.G. (1998). Visual responses with and without fixation: neurons in premotor cortex encode spatial locations independently of eye position. Exp. Brain Res. 118, 373–380.
16. Graziano, M.S., Taylor, C.S., and Moore, T. (2002). Complex movements evoked by microstimulation of precentral cortex. Neuron 34, 841–851.
17. Cooke, D.F., Taylor, C.S., Moore, T., and Graziano, M.S. (2003). Complex movements evoked by microstimulation of the ventral intraparietal area. Proc. Natl. Acad. Sci. USA 100, 6163–6168.
18. Royden, C.S., Banks, M.S., and Crowell, J.A. (1992). The perception of heading during eye movements. Nature 360, 583–585.
19. Crowell, J.A., Banks, M.S., Shenoy, K.V., and Andersen, R.A. (1998). Visual self-motion perception during head turns. Nat. Neurosci. 1, 732–737.
20. van den Berg, A.V. (2000). Human ego-motion perception. Int. Rev. Neurobiol. 44, 3–24.
21. Stein, B.E., and Meredith, M.A. (1993). The Merging of the Senses (Cambridge: MIT Press).
22. Colby, C.L., Duhamel, J.-R., and Goldberg, M.E. (1993). Ventral intraparietal area of the macaque: anatomic location and visual response properties. J. Neurophysiol. 69, 902–914.
23. Bremmer, F., Duhamel, J.-R., Ben Hamed, S., and Graf, W. (2002). Heading encoding in the macaque ventral intraparietal area (VIP). Eur. J. Neurosci. 16, 1569–1586.
24. Graziano, M.S., and Gross, C.G. (1998). Spatial maps for the control of movement. Curr. Opin. Neurobiol. 8, 195–201.
25. Galletti, C., Gamberini, M., Kutz, D.F., Fattori, P., Luppino, G., and Matelli, M. (2001). The cortical connections of area V6: an occipito-parietal network processing visual information. Eur. J. Neurosci. 13, 1572–1588.
26. Duhamel, J.-R., Bremmer, F., Ben Hamed, S., and Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurones. Nature 389, 845–848.
27. Andersen, R.A., and Zipser, D. (1988). The role of the posterior parietal cortex in coordinate transformations for visual-motor integration. Can. J. Physiol. Pharmacol. 66, 488–501.
28. Pouget, A., and Sejnowski, T.J. (1994). A neural model of the cortical representation of egocentric distance. Cereb. Cortex 4, 314–329.
29. Pouget, A., Deneve, S., and Duhamel, J.-R. (2002). A computational perspective on the neural basis of multisensory spatial representations. Nat. Rev. Neurosci. 3, 741–747.
30. Peterken, C., Brown, B., and Bowman, K. (1991). Predicting the future position of a moving target. Perception 20, 5–16.
