
COGNITIVE NEUROPSYCHOLOGY, 2002, 19 (1), 31–47

THE EFFECTS OF ROTATION AND INVERSION ON FACE PROCESSING IN PROSOPAGNOSIA

J.J. Marotta, T.J. McKeeff, and M. Behrmann
Carnegie Mellon University, Pittsburgh, USA

The current study investigated the sensitivity of face recognition to two changes of the stimulus, a rotation in depth and an inversion, by comparing the performance of two prosopagnosic patients, RN and CR, with non-neurological control subjects on a face-matching task. The control subjects showed an effect of depth rotation, with errors and reaction times increasing systematically with rotation angle, and the traditional inversion effect, with errors and reaction times increasing under inverted conditions. In contrast, RN showed no effect of rotation or inversion on his error data but did show a less sensitively graded effect of rotation and the traditional inversion effect on reaction times. CR did not show a graded effect of rotation on his errors or reaction times. Although CR showed the traditional inversion effect on his error data, he displayed an inversion superiority effect on his reaction time data, which supports the claim that the damaged holistic processing systems continue to dominate face processing in prosopagnosia even though they are malfunctioning. These results suggest that the damage that occurs to the ventral temporal cortex in prosopagnosia may have forced the patients to rely on sources of information that are not dependent on the view of the face and, moreover, cannot be adapted to deal with rotated faces under both upright and inverted conditions.

INTRODUCTION

When we look for familiar faces at a crowded party, we can pick out our friends with relative ease, even if they are not looking directly at us. In addition, if we are speaking to a person we have just met at the party and she looks away while picking up a drink, we do not think that someone else has suddenly replaced her. How one derives an invariant representation of an image despite substantial differences in retinal input is one of the crucial questions in vision science. Our success in these tasks depends on robust face representations that are resilient to a variety of spatial transformations. We must also have a way of

acquiring the information necessary to form these representations and make comparisons with the faces we are currently viewing. Most of the previous research on face representations has focused on whether the underlying representation is viewpoint dependent or independent. A viewpoint-dependent representation of a face would capture how the face appears from a particular vantage point using a viewer-centred (egocentric) coordinate system. In contrast, a viewpoint-independent representation would characterise the three-dimensional structure of a face regardless of viewpoint using a face-centred (allocentric) coordinate system. Much of the work on face perception has focused on front views of faces.

Requests for reprints should be addressed to Dr Jonathan Marotta, Dept of Psychology/Baker Hall, Carnegie Mellon University, Pittsburgh, PA 15213, USA (Email: [email protected]). This research was supported by grants from the McDonnell-Pew Program in Cognitive Neuroscience and NSERC to JJM and by a grant from the NIH to MB. We would like to thank Niko Troje and Heinrich Bülthoff for access to the Max-Planck Face Database and Michael Tarr for his input. Finally, we would like to thank the subjects for the cooperation they showed throughout the testing. 2002 Psychology Press Ltd http://www.tandf.co.uk/journals/pp/02643294.html


DOI:10.1080/02643290143000079


A face, however, is a complex 3-D object that must be recognised from all directions. One approach to studying how faces are represented has been to examine how robust face perception is when the face undergoes a change in orientation—either a rotation in depth or in the frontal plane. If faces are represented in a viewpoint-dependent manner, then these types of transformations should have a deleterious effect on face recognition.

Transformation in depth

Previous research has shown that even though we are able to recognise objects from different vantage points, it is not done without cost. As the angle of rotation increases from the preferred or canonical view of an object, so does reaction time for recognition, at a reported rate of between 2 and 4 ms/degree (Bülthoff & Edelman, 1992; Shepard & Metzler, 1971; Tarr & Pinker, 1989). It has been suggested that the mechanisms for face recognition may operate in similar ways. A similar cost in reaction time and sensitivity has been observed when subjects learn faces and then generalisation to differently oriented faces is tested. For example, Hill, Schyns, and Akamatsu (1997) investigated generalisation from single views and found that there was a reduction in sensitivity as the angle of rotation increased away from the full-face view. When the view of a face deviates from a “typical view,” the visual system might use view-invariant feature properties (e.g., skin tone and texture), local shape cues (e.g., size of nose and mouth), or global three-dimensional shape—or a mixture of several of these properties—to facilitate recognition, although there is clearly a cost associated with the use of these other visual properties. The fact that there is an advantage for recognising a face from a particular vantage point is also observed in single unit recording studies with nonhuman primates. An investigation of the sensitivity of cells in the macaque superior temporal sulcus (STS) to the sight of different perspective views of the head (Perrett et al., 1991) found that a majority of cells in the STS are viewer centred and exhibit unimodal tuning to one view, with more of the cells



being tuned to the full-face and profile views than to other views. A smaller number (25%) of the cells responding to faces were face centred. The view-invariant coding of such cells, however, may be established by combining the outputs of cells selective for particular views. Face-centred descriptions could, therefore, be established by combining the outputs of several viewer-centred descriptions (Perrett et al., 1991). In addition, many cells in the inferotemporal cortex show considerable selectivity in their responses when stimuli change in size, position, colour, and orientation in the image plane (Ullman, 1998). This body of evidence suggests that a viewpoint-dependent face representation is primarily used for face recognition. Of particular relevance for the current investigation is the finding that more of the view-specific cells were tuned to the full-face and profile views than to other views. Although there appears to be a cost incurred with recognising faces from unfamiliar orientations, there seems to be some advantage in recognition of the three-quarter view (Bruce, Valentine, & Baddeley, 1987; Bruyer & Galvez, 1989; O’Toole, Edelman, & Bülthoff, 1998). This is partly because such views lie between frontal and profile views and, therefore, there is less possible change in orientation between learning and test views (Hancock, Bruce, & Burton, 2000). The three-quarter view of the face is also thought to be a canonical view, since it is often used to portray faces in pictures and it reveals more of the parts useful for face recognition than either full-face or profile views (Hill et al., 1997). Despite the fact that there are more neurons tuned to full-face and profile views, relatively broad tuning of these neurons would result in the three-quarter view activating both sets of neurons—leading to faster recognition times (Perrett, Oram, & Ashbridge, 1998). In addition, a three-quarter view may reveal the most useful features to recover the identity of a face, or give the best representation of its three-dimensional shape. To put it simply, the three-quarter view may be canonical because of the number of face-selective neurons it activates, the number of features represented, or because a better three-dimensional representation of a face can be constructed from an angled view.
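As a purely illustrative calculation (ours, not the authors'), the rotation cost cited above can be expressed as a linear reaction-time penalty and evaluated at the 45° and 90° view differences used later in this study:

```latex
% Illustrative only: linear RT cost implied by the reported 2-4 ms/degree rate.
\[
  \Delta RT \approx k\,\theta, \qquad k \approx 2\text{--}4\ \mathrm{ms/deg}
\]
\[
  \theta = 45^{\circ}:\ \Delta RT \approx 90\text{--}180\ \mathrm{ms}, \qquad
  \theta = 90^{\circ}:\ \Delta RT \approx 180\text{--}360\ \mathrm{ms}
\]
```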


Neuropsychological investigations

Studies with patients who are no longer able to recognise faces also provide us with the opportunity to understand the nature of the representations that mediate face recognition. These patients are significantly impaired at recognising faces, a deficit referred to as prosopagnosia. This impairment is usually associated with bilateral damage to the inferior aspect of the occipitotemporal regions (Damasio, 1985), in the region of the fusiform gyrus, although a unilateral right-hemisphere lesion may also suffice for the deficit (De Renzi, 1986). In one investigation of the effects of rotation on the face-processing abilities of such patients, Benton and Van Allen (1968) used a face-matching task that consisted of a series of photographs of unfamiliar faces, viewed from the front. Subjects were required to match the target with a set of photographs that could differ in presentation angle. Although there was a range of performance across different cases, they found that the individuals with prosopagnosia were generally able to perform this complex perceptual task (see also Schweich & Bruyer, 1993, p. 108). A more dramatic effect of rotation was observed in a study by Sergent and Poncet (1990) in which a prosopagnosic patient, PV, was required to memorise and then match front-view faces or to memorise and then match a front-view face with a three-quarters-view face. In the former condition, PV identified the learned front-view faces from a set of 40 front-view faces. Her performance was far from perfect, being slightly above chance but inferior to the poorest normal subject. Her impairment was greatly magnified, however, when the learned front-view faces were matched to three-quarters-view faces. Under this condition, PV’s immediate and delayed performance was at chance levels. Thus, although PV appeared to be able to code the exact face perceptually and be successful to some extent at the front-view matching, she was unable to generalise across viewpoint. These findings suggest that PV was particularly unable to derive a representation that was robust across rotations in depth. In the current study, we further examine the effects of a rotation in depth on the face-matching

ability of two individuals with prosopagnosia. Two aspects of the patients’ performance are considered. The first concerns the fall-off in performance as rotation in depth increases. If the representation derived by these patients is strongly feature-based, then a rotation of the face in depth could produce impairments in tests of rotated views, as in the case of PV. The second aspect concerns the extent to which patients with prosopagnosia have a preferred view of a face. Although the three-quarter view allows for a large number of features to be represented and provides information for a three-dimensional representation of a face, these advantages may not be usable by prosopagnosic patients.

Planar transformation

The data reported earlier suggest that there is an influence of depth transformation on face processing. Previous research has shown that a planar transformation also has significant effects on face recognition. Yin (1969) was the first to find that faces were more difficult to recognise when they were inverted, the inversion effect, leading him to conclude that faces are not represented in a face-centred or view-invariant way. Yin also found that face recognition was disproportionately impaired by stimulus inversion when compared to recognition of other classes of visual stimuli. This result has been replicated many times and is a standard in the literature (for review, see Valentine, 1988; Valentine & Bruce, 1988). The findings that inverted faces are harder for subjects to learn, as measured by recognition performance, and harder to perceive, as measured by simultaneous matching performance, have been interpreted in terms of specialisation of face recognition for upright faces and the existence of a face-specific processor. Additional evidence for a face-specific processor comes from Yin’s (1970) findings that patients with certain right-hemisphere lesions were impaired, relative to neurologically intact individuals and left-hemisphere-damaged patients, at recognising upright faces but not at recognising inverted faces or upright or inverted houses. Whether or not there is a face-specific processor has been the subject of much debate recently.



Challenges to the claim of specialisation for faces come from the findings that recognition of dogs by expert dog show judges (Diamond & Carey, 1986) and natural handwriting by handwriting experts (Bruyer & Crispeels, 1992) were also sensitive to inversion. These data suggest that the system may not be specialised for faces per se but, instead, for stimuli for which the observer has become an expert. Thus, the coding strategy used by observers as they become increasingly familiar with an object category may not be the same as that used by observers when they first encounter the category. Instead, expertise with an object may allow for a different form of coding that is sensitive to inversion. Recently, Gauthier, Tarr, Anderson, Skudlarski, and Gore (1999) provided functional neuroimaging evidence that expertise recruits the fusiform gyrus “face area.” The acquisition of expertise with novel objects (Greebles) led to increased activation in the right-hemisphere “face areas” for matching of upright Greebles as compared to matching of inverted Greebles—an inversion effect. Experts may consider objects more as a whole than novices, who may take more of a parts-based approach (Tanaka & Gauthier, 1997). Further evidence of the influence of part-based and holistic processing on functional activation comes from the finding that when subjects match whole faces, the right middle fusiform region is activated, whereas the opposite pattern is observed in the left homologous region when subjects match face parts (Rossion et al., 2000). In other words, the fusiform gyrus in the right hemisphere is involved in the holistic processing of faces, whereas the fusiform gyrus in the left hemisphere tends to process a feature-based representation of the face. Moscovitch, Winocur, and Behrmann (1997) concluded that face recognition normally depends on a holistic, face-specific system but that a part-based object-recognition system can contribute to face recognition when the stimulus does not activate the face-specific system. Their results indicate that inverted faces do not initially engage mechanisms specialised for dealing with faces but instead are handled first by mechanisms used to identify objects. The inversion effect may be the result of representing the complex pattern information of an



upright face holistically, that is, with little or no parts decomposition. This noncomponent process would be made much less efficient, or disabled entirely, by inversion of the face. If that were the case, then perception of inverted faces would instead have to proceed by a time-consuming analysis of the individual parts of the face (Barton, Keenan, & Bass, 2001). In a recent functional neuroimaging investigation by Haxby et al. (1999), evidence was provided that an increased response in “object-sensitive” areas occurs when viewing inverted faces, which further suggests that the failure of the face perception system with inverted faces leads to the recruitment of processing resources in object perception systems. Murray, Yong, and Rhodes (2000) have suggested that face- and object-processing systems may differ in their reliance on the encoding of spatial-relational information. Evidence for this comes from the finding that there is a qualitative difference in the processing of upright and inverted faces, which may be due to disproportionate effects of inversion on the encoding of spatial-relational information. Thus far, we have separately discussed transformations in depth and those along the frontal plane, but the combined effects of these transformations have previously been investigated in intact control subjects. Moses, Ullman, and Edelman (1996) found that the fall-off in recognition with increasing change of viewpoint between learning and test was larger for inverted faces than for upright ones, suggesting that face-specific processes are normally used to help solve the viewpoint problem. This finding adds additional support to the suggestion that humans appear to have a class-general, viewpoint-specific (i.e., upright only) representation for faces.

Neuropsychological investigations

One theory as to why individuals with prosopagnosia perform poorly in face recognition tasks is that damage to their face-specific processor results in an impairment at representing shape holistically (Farah, Wilson, Drain, & Tanaka, 1995b). If their impairment is the result of a damaged holistic processor, how do prosopagnosic


patients process inverted faces, which would not trigger the damaged face processor? Farah et al. found that prosopagnosic patient LH actually performed better at matching inverted faces than upright faces, the opposite of the face inversion effect—an “inversion superiority effect.” These findings have recently been confirmed and extended to objects; De Gelder, Bachoud-Lévi, and Degos (1998) found the inversion superiority effect in the recognition of faces and shoes in a patient with visual agnosia, BC, and later in patient LH (De Gelder & Rouw, 2000). With both upright faces and upright shoes, BC’s and LH’s performance was at chance levels, but when the stimuli were presented upside down, there was a significant inversion superiority effect—their performance improved. These studies also incorporated a limited orientation in depth manipulation, since frontal views of the faces and shoes were learned but three-quarters views were tested (also see Farah, Levinson, & Klein, 1995a). Farah takes inversion superiority in prosopagnosia as support for a face module that continues to dominate face processing even though it is malfunctioning and therefore maladaptive (Farah et al., 1995b). As a consequence, the intact general visual procedures the patient is using in successfully matching inverted faces cannot be used in the presence of an upright face. LH, therefore, could not perceive (let alone develop recognition strategies for) upright faces using the more general visual pattern perception mechanisms that he applied to inverted faces. De Gelder and Rouw (2000), on the other hand, have put forward the argument that the inversion effect (and thus the inversion superiority effect) does not present decisive evidence for the existence of a face module. Instead, the fact that LH can reliably match inverted but not upright stimuli suggests that a parts-based processing route is intact but that there is interference on its application to upright stimuli from a damaged processing route that targets the whole stimulus and focuses on configuration—not a face module, since it occurs for both faces and objects. Common to these two views is that these patients appear to be handicapped by their spared face categorisation and prevented from using intact

parts-based processes with faces. The latter are successfully used with inverted faces but are clearly of no use to deal with an upright face. Presumably inverting a face makes it object-like and no longer triggers face-specific processes, therefore giving a chance to part-based routines. Under upright conditions, the impaired holistic processing systems in these patients may continue to process visual forms but the required operations are no longer performed adequately (De Gelder et al., 1998). Under inverted orientation conditions, where inverted faces do not engage holistic recognition systems but instead are recognised piecemeal, by part-based, part-dependent processes (Moscovitch et al., 1997), interference from the faulty processor would not impair the patients’ performance—leading to improved performance. Previous neuropsychological studies have not performed a thorough investigation of the joint effects of rotation in depth and a planar inversion. The current study investigated the sensitivity of face recognition to two changes in orientation, a rotation in depth and inversion, by comparing the face-matching ability of two neurologically impaired individuals (RN and CR) and a group of neurologically intact control subjects. Do the patients have a preferred view of a face? If damage to the patients’ holistic face processing systems produces impairments in tests of rotated views, does their performance improve for inverted faces, when intact parts-based processing mechanisms are operating? On the other hand, if the prosopagnosic patients rely on matching low-level perceptual cues (like texture or colour), a rotation of the face in depth could produce impairments in tests of rotated views under both upright and inverted conditions.

METHOD

Participants

Sixteen male university students (age range 18–23 years; mean age 19.6 years) with normal or corrected-to-normal vision participated in the experiment, for which they received course credit. All but three subjects were right-handed. Two male



individuals with visual agnosia, RN and CR, also consented to participate. RN is a 43-year-old right-handed male who suffered a stroke following a myocardial infarction in May 1998. A CT scan in July 1998 showed no abnormalities (normal ventricles, no areas of increased or decreased density and no masses or midline shifts).1 RN shows no visual field deficits and performs normally on low-level visual tasks (line orientation, visual discrimination), and shows good performance on higher-level tasks like subtests of the Birmingham Object Recognition Battery (BORB; Riddoch & Humphreys, 1993) that require matching across viewpoint or minimal features. RN does, however, have a marked visual agnosia for objects. In October 1998, he obtained a score of 42/60 on the Boston Naming Test (Goodglass, Kaplan, & Weintraub, 1983) and 121/235 on a subset of the Snodgrass and Vanderwart black-and-white line drawings (Snodgrass & Vanderwart, 1980). This poor performance was attributed to a failure to recognise objects, rather than to an anomia, as he had no naming impairment for stimuli presented in any other modality. His ability to recognise living things (34%) was significantly worse than for nonliving things (60%). RN performed at the borderline level on the Benton Facial Recognition Test (score 40/54) and recognised only 4 out of a set of 50 difficult famous faces (his wife, who served as a control, recognised 14). On a face-matching task that varied the similarity of the comparison stimuli, RN produced the longest reaction times when two identical faces were presented for discrimination (unpublished observations). CR is a 19-year-old right-handed male who suffered from a right temporal lobe abscess with a complicated medical course including a history of Group A toxic shock syndrome, pneumonia, cardiac arrest, candida bacteremia, and metabolic encephalopathy in May 1996. MR scans reveal a right temporal lobe lesion consistent with acute micro-abscesses of the right temporal lobe and medial occipital lobe (see Figure 1).

1. The absence of a lesion following a myocardial infarction is not surprising; although the neurons may be affected by underperfusion, the extensive neuropil is less vulnerable and hence no lesion may be evident on a CT or MRI (Coslett, personal communication).

Figure 1. Coronal section of CR’s brain revealing a right temporal lobe lesion and damage to the right fusiform gyrus. MR scan is presented in radiological format (the right hemisphere is on the left side of the image).

CR shows no visual field deficits and performs within the normal range on all tests of low-level visual processing (judging size, length, and orientation of stimuli) as well as on the BORB subtests that require matching of objects from different viewpoints or along a foreshortened axis. CR is impaired at recognising some common familiar objects, as was made evident by his poor recognition scores of 46/60 on the Boston Naming Test and 149/185 on the Snodgrass and Vanderwart black-and-white line drawings. His ability to recognise living things (64%) was worse than for nonliving things (89%). CR’s performance on tests of face recognition is worse than RN’s performance, with scores in the “severely impaired” range on the Benton Facial Recognition Test (scores of 36/54 and 37/54 on two separate administrations of the test), and he is also unable to recognise pictures of any famous people (such as former





President Bill Clinton). CR has participated in several previous studies (Gauthier, Behrmann, & Tarr, 1999; Marotta et al., 1999; Williams & Behrmann, 2001) that highlight his use of part-based recognition mechanisms. CR has now completed high school (with considerable support to assist him in processing the material visually) and is attending a community college for vocational training. Although both of our prosopagnosic patients also exhibit object agnosia, this does not negate the fact that they have face recognition deficits. Whereas many prosopagnosic patients also have object recognition deficits, the presence of object agnosia in RN and CR may be an indication of an early perceptual impairment.

Materials

The experiment was conducted on a Power Macintosh G3 computer, with subjects making their responses on a Button Box (New Micros, Dallas, TX). Stimuli were presented on a 17-inch colour monitor using PsyScope experimental software version 1.2.1 (Cohen, MacWhinney, Flatt, & Provost, 1993). The stimuli consisted of colour pictures of male and female faces obtained from the Max-Planck Face Database. This database consists of a series of three-dimensional (3D) models of real faces in three rotations in depth around the vertical axis—full-face (0°), right three-quarter (45°), and right profile (90°) (see Figure 2). The faces were collected as 3D models and colour maps using a

Cyberware™ 3D laser-scanner. Hair was trimmed from the images, leaving only the face area. Each face was positioned on a black square background (7.5 cm × 7.5 cm). A total of 97 faces were used from the database to produce the experimental trials. Three stimuli appeared during a trial: a target face (centred at 16.5 cm from the left side of the screen, 5.5 cm from the top of the screen) and two choice faces, with one presented on the lower left side of the screen (9.5 cm from left, 16 cm from top) and the other presented on the lower right side of the screen (22.5 cm from left, 16 cm from top). All three stimuli appeared at once on a grey background. In one set of trials, all stimuli (target and choices) were presented upright; in the other set of trials, all stimuli were inverted. In other words, vertical orientation was consistent throughout a block of trials and all three stimuli were presented in the same vertical orientation on any one trial. Viewing distance was approximately 50 cm. Each trial began with a fixation cross appearing on the computer screen for 250 ms, followed by the three stimuli, which remained on the screen until the subjects made a response. Whereas the rotation angle of the target face could differ from the rotation angle of the choice faces by 0°, 45°, or 90°, the two choice faces were always rotated to the same angle within a trial. This resulted in a total of nine possible target face × choice faces rotation combinations.

Figure 2. Sample face from Max-Planck Face Database rotated in depth around the vertical axis. From left to right: full face (0°), right three-quarter (45°), and right profile (90°).



In three of these combinations, the rotation angles of the target face and choice faces did not differ (0°)—target frontal/choice frontal (FF), three-quarter/three-quarter (TT), and profile/profile (PP). In four of the combinations, the rotation angles of the target face and choice faces differed by 45°—frontal/three-quarter (FT), three-quarter/frontal (TF), three-quarter/profile (TP), and profile/three-quarter (PT). Finally, in two of the combinations, the rotation angles of the target face and choice faces differed by 90°—frontal/profile (FP) and profile/frontal (PF). These conditions were randomly mixed within each block of upright and inverted trials. In a 1-hour session, the control subjects were presented with a block of 360 upright trials and a block of 360 inverted trials, which were counterbalanced across control subjects. The trials in each block were made up of equal numbers (40) of the nine possible target face × choice faces rotation combinations. RN and CR were presented with 720 upright trials and 720 inverted trials (80 trials/condition) over four separate sessions.
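The original experiment was programmed in PsyScope; the following Python sketch is only an illustration of the design just described (the dictionary keys and function names are ours, not the authors'), showing how the nine target × choice rotation combinations and the upright and inverted trial blocks could be generated.

```python
# Illustrative sketch only (the study itself used PsyScope, not this code):
# building the nine target-view x choice-view rotation combinations and the
# upright and inverted trial lists described above.
import itertools
import random

VIEWS = {"F": 0, "T": 45, "P": 90}  # frontal, three-quarter, profile

def build_block(orientation, trials_per_condition):
    """One block of trials; every stimulus in a block shares the same vertical orientation."""
    trials = []
    for target_view, choice_view in itertools.product(VIEWS, repeat=2):
        for _ in range(trials_per_condition):
            trials.append({
                "orientation": orientation,  # "upright" or "inverted"
                "target_view": target_view,
                "choice_view": choice_view,
                "rotation_difference": abs(VIEWS[target_view] - VIEWS[choice_view]),
            })
    random.shuffle(trials)  # the nine conditions were randomly mixed within a block
    return trials

# Controls: 360 upright and 360 inverted trials (40 per condition);
# RN and CR completed 720 of each (80 per condition) over four sessions.
upright_block = build_block("upright", 40)
inverted_block = build_block("inverted", 40)
assert len(upright_block) == len(inverted_block) == 360
```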

Design and procedure

The subjects were told that a fixation cross would appear on the computer screen, followed by three faces (one on top and two on the bottom) and that their task was to determine which of the two bottom faces (left or right) was the same person as the picture above, regardless of how the faces were rotated. The subjects were told to make their responses on a Button Box as fast as possible without sacrificing accuracy. Reaction time was measured from the time the pictures were presented on the screen until a response was made.

RESULTS

In the following section, we present the error and reaction time data for the control subjects, followed by patients RN and CR. The error and reaction time data are presented in an order that pertains to our questions of preferred view, effects of rotation and inversion, and their combined effects.



Control subjects

Errors

For each of the control subjects, mean values of the percentage of correct trials were calculated for every condition. These values were entered into a 2 × 3 × 3 (planar orientation × target face angle × choice faces angle) repeated-measures analysis of variance. Post hoc Newman-Keuls analyses were performed where necessary at an alpha level of .05. The control subjects showed a significant effect of target face angle, F(2, 30) = 16.1, p < .0005, and choice faces angle, F(2, 30) = 17.9, p < .0005, with the fewest errors being produced when the faces were presented in the three-quarter view. In general, the number of errors increased as the rotation angle between the target face and the choice faces increased, F(4, 60) = 52.67, p < .0005. The control subjects produced the most errors when the rotation angles of the target face and the choice faces differed by 90° (FP and PF), fewer errors were produced when the difference was 45° (TF, FT, PT, and TP), and the fewest errors were produced when the angles did not differ (FF, TT, and PP). There was a significant effect of vertical orientation on the control subjects’ error data, F(1, 15) = 27.66, p < .0005. The control subjects exhibited the inversion effect, producing fewer errors under the upright (95.8% correct) than the inverted (90.5%) orientation conditions. As can be seen in Figure 3, under upright orientation conditions, the control subjects produced the most errors when the angles of rotation for the target face and the choice faces differed by 90°. There were no significant differences in the number of errors produced under the other upright conditions. In contrast, under inverted orientation conditions, the number of errors increased with increasing differences in the angles of rotation between the target face and choice faces, F(4, 60) = 10.36, p < .0005. The 90° difference conditions produced the most errors, the 45° difference conditions produced fewer errors, and the fewest errors were produced under the 0° difference conditions. In addition, when no transformation in depth was required (0°), the upright and inverted conditions did not differ significantly (98% and 97%, respectively).

Figure 3. The effects of target face angle and choice faces angle on the percentage of correct matches made by the control subjects under (a) upright and (b) inverted orientation conditions (error bars: SEMs).

This result may be due to the subjects using a fast feature-matching strategy or may simply be a result of the overall ease of the task when no rotation transformation was required. In either case, what is more important is what happened when a rotation in depth was required. The ease of the task was an intentional feature of the design that was necessary to tease out the effects of rotation and inversion on the patients’ performance.
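As a rough illustration of the analysis described above, the following sketch shows how the 2 × 3 × 3 repeated-measures ANOVA on per-condition accuracy could be run with statsmodels; the paper does not state what software was used, and the file and column names here are assumptions.

```python
# Minimal sketch of the 2 x 3 x 3 repeated-measures ANOVA on accuracy
# (file and column names are assumptions, not from the paper).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per subject x orientation x target angle x choice angle,
# with 'accuracy' holding that subject's mean percentage correct for the cell.
data = pd.read_csv("control_accuracy_long.csv")

model = AnovaRM(
    data,
    depvar="accuracy",
    subject="subject",
    within=["orientation", "target_angle", "choice_angle"],
)
print(model.fit())  # F and p values for main effects and interactions
```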

Reaction times

The trials in which the control subjects made an incorrect response were excluded from the reaction time analysis. For each of the control subjects, mean values of the reaction times were calculated for each condition and entered into a 2 × 3 × 3 (planar orientation × target face angle × choice faces angle) repeated-measures analysis of variance. Post hoc Newman-Keuls analyses were performed where necessary at an alpha level of .05. There was a significant effect of target face angle, F(2, 30) = 19.94, p < .0005, and choice faces angle, F(2, 30) = 21.74, p < .0005, with the fastest reaction times being produced when the faces were presented in the three-quarter view. With a pattern similar to that seen in the error data, the control subjects’ reaction times increased with increasing differences in the angles of rotation between the target face and choice faces, F(4, 60) = 46.91, p < .0005. The 90° difference conditions produced the longest reaction times, the 45° difference conditions produced shorter reaction times, and the shortest reaction times were produced under the 0° difference conditions. There was a significant effect of vertical orientation on the control subjects’ reaction time data, F(1, 15) = 11.54, p < .005. The control subjects exhibited the inversion effect, producing shorter reaction times under the upright (2082.59 ms) than the inverted orientation conditions (2927.10 ms). A three-way interaction can be seen in Figure 4, F(4, 60) = 5.47, p < .001, which shows that even though there was an effect of vertical orientation on reaction times, when no transformation in depth was required (0°), the upright and inverted reaction times did not differ significantly, although there was a trend for longer reaction times when the faces were inverted (1805.65 ms) than upright (1414.10 ms). The slopes of the lines describing the log of the control subjects’ reaction times as a function of rotation in depth indicate that they were sensitive to rotation under both upright, y = 0.0033x + 3.15, and inverted, y = 0.0038x + 3.26, conditions. To summarise, the control subjects showed a three-quarters view advantage, sensitively graded effects of rotation, an inversion effect, and an



exaggerated effect of rotation under inverted conditions in both their accuracy and reaction time data.

Figure 4. The effects of target face angle and choice faces angle on the reaction times produced by the control subjects under (a) upright and (b) inverted orientation conditions (error bars: SEMs).
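The slopes quoted above (e.g., y = 0.0033x + 3.15 for the upright controls) describe log10 reaction time as a function of the angular difference between target and choice views. A minimal sketch of how such a slope can be estimated, with toy numbers standing in for the real data:

```python
# Sketch of the reaction-time/rotation slope: regress log10(RT) on the angular
# difference (0, 45, 90 degrees) between target and choice views.
import numpy as np

def rt_rotation_slope(angles_deg, rts_ms):
    """Least-squares slope and intercept of log10(RT in ms) vs. rotation angle."""
    slope, intercept = np.polyfit(np.asarray(angles_deg, dtype=float),
                                  np.log10(np.asarray(rts_ms, dtype=float)), 1)
    return slope, intercept

# Toy numbers only (not the study's raw data), chosen to roughly resemble the
# reported upright-control function y = 0.0033x + 3.15:
angles = [0, 0, 45, 45, 90, 90]
rts = [1410, 1440, 2000, 2100, 2800, 2900]
print(rt_rotation_slope(angles, rts))
```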

Patient RN

Errors

The error frequencies per condition for each patient were entered into separate 2 × 3 × 3 (planar orientation × target face angle × choice faces angle) loglinear analyses. The number of RN’s errors did not differ as a factor of target face angle, choice faces

angle, or vertical orientation, although there was a general trend similar to that seen in the control subjects’ data of increasing errors as the angles of rotation increased under inverted orientation conditions (see Figure 5). This lack of main effects may be attributable to RN’s long reaction times, as we shall see below. RN is an overly cautious subject, possibly due to abnormal processing or a compensation for his deficit, and the accuracy of his results may come at the expense of his response speed.

Figure 5. The effects of target face angle and choice faces angle on the percentage of correct matches made by RN under (a) upright and (b) inverted orientation conditions.


Reaction times

The reaction times of the correct trials in each condition were entered into separate 2 × 3 × 3 (planar orientation × target face angle × choice faces angle) repeated-measures analyses of variance for each patient. Post hoc Newman-Keuls analyses were performed where necessary at an alpha level of .05. There was a significant effect of target face angle, F(2, 1243) = 11.11, p < .0005, and choice faces angle, F(2, 1243) = 14.79, p < .0005, with the longest reaction times being produced when the faces were presented in the frontal view. This result is similar to the finding described in the Methods section that detailed RN’s problem in matching identical stimuli. One possibility is that if RN does not find an immediate difference between the stimuli, he gets caught in a loop of checking back and forth between the two items. If this were the case, then the frontal view of the face would be particularly difficult for RN, since it provides more parts to compare than the other views of the face. Although RN was slowest when the angle of rotation between the target face and the choice faces was 90°, the rest of the target face angle by choice faces angle results were not as clear-cut as the pattern seen in the control subjects, F(4, 1243) = 45.36, p < .0005. When the target face and the choice faces were presented at the same angle (i.e., FF, TT, PP), the reaction times did not differ. The FF condition, however, was also not significantly different from the TP or PT conditions. There was a significant effect of vertical orientation on RN’s reaction time data, F(1, 1243) = 118.893, p < .0005. RN showed the inversion effect in his data, producing shorter reaction times under the upright (4843 ms) than the inverted orientation conditions (7058.67 ms). RN’s data shows a vertical orientation by choice faces angle interaction, F(2, 1243) = 7.49, p < .001, which was not present in the control subjects’ data. Although there was a three-quarter view advantage in the reaction times produced under upright orientation conditions, under inverted orientation conditions there was no significant difference between the reaction times produced for the three-quarter and the profile views. In addition, although there was no significant difference in reaction times between the frontal and

profile views under upright conditions, under inverted conditions, frontal views produced the longest reaction times. As can be seen in Figure 6, RN took longer to match the FP condition than any other condition. Although reaction times for the 45° difference conditions were not significantly different from one another, several of them were also not different from the reaction times when the target face and choice faces were rotated to the same view.

Figure 6. The effects of target face angle and choice faces angle on the reaction times produced by RN under (a) upright and (b) inverted orientation conditions (error bars: SEMs).



RN appears to be less influenced by the rotation of the faces under upright orientation conditions than the control subjects. This is also generally true for inverted conditions, F(4, 1243) = 4.56, p < .005. Whereas RN produced the longest reaction times for the PF and FP (90°) conditions, the reaction times for the 45° difference conditions were not as clearly separated as they were for the control subjects. Once again, the frontal view of the face appeared to be problematic for RN, since both FT and TF produced longer reaction times than TP and PT. The slopes of the lines describing the log of RN’s reaction time-rotation function indicate that RN was sensitive to rotation under both upright, y = 0.0032x + 3.5, and inverted, y = 0.00352x + 3.6, conditions. In summary, although RN showed no effect of rotation or inversion on his error data, he did show a mild three-quarters view advantage, a large frontal view disadvantage, a less sensitively graded effect of rotation than control subjects, and the inversion effect on his reaction time data. To verify that the pattern of RN’s results was due to his brain damage and not his age, we ran four age-matched control subjects (age range 40–45 years; mean age 41.8 years) on the face-matching task. The older control subjects’ errors and reaction times were not significantly different from those of the younger controls and there was no orientation × rotation × subject interaction present. Like the younger controls, they still showed the traditional inversion effect on their error (p < .005) and reaction time (p < .004) data. They also showed a systematically graded effect of rotation in depth on their error (p < .0001) and reaction time data (p < .002), with the 90° difference conditions producing the most errors and longest reaction times, the 45° difference conditions producing fewer errors and shorter reaction times, and the fewest errors and fastest reaction times being produced under the 0° difference conditions.

Patient CR

Errors

CR did not show a significant effect of target face angle or choice faces angle in the errors he produced. He did, however, show an interaction between the target face angle and the choice faces angle, χ²(4) = 22.96, p < .0005, with the fewest errors produced when the rotation angles of the target face and the choice faces were identical—where no transformation in depth was required (see Figure 7). There was a significant effect of vertical orientation on CR’s error data, χ²(1) = 10.58, p < .005. CR also showed the inversion effect in his data, producing fewer errors under upright (84.2% correct) than inverted (69.2%) orientation conditions.



Figure 7. The effects of target face angle and choice faces angle on the percentage of correct matches made by CR under (a) upright and (b) inverted orientation conditions.


Reaction times

CR did not show a significant effect of target face angle or choice faces angle in his reaction times. A target face by choice faces interaction was present in CR’s reaction time data, F(4, 1086) = 17.212, p < .0005, with the shortest reaction times being produced when the angles of rotation for the target face angle and the choice faces angle did not differ. There were no differences between the other target face angle by choice faces angle conditions. Although there was an effect of vertical orientation on CR’s data, F(1, 1086) = 67.123, p < .0005, it was in a pattern opposite to that seen for the control subjects and RN; the reaction times were longer under upright (1828.03 ms) than inverted orientation conditions (1557.43 ms). These results were not due to a speed-accuracy trade-off, since under both upright, F(1, 718) = 6.72, p < .01, and inverted conditions, F(1, 718) = 12.60, p < .0005, CR produced longer reaction times when he made an error. Like RN, CR did show a vertical orientation by choice faces angle interaction, F(2, 1086) = 3.68, p < .05. Under upright conditions, CR’s reaction times were longer when the choice faces were presented as frontal views than when they were presented as three-quarter views. The reaction times produced when the choice faces were presented as profile views were not significantly different from the reaction times for both the frontal and the three-quarter views. Under inverted conditions, there were no significant differences in the reaction times for the choice faces conditions. Unlike the control subjects and RN, there was no three-way interaction present in CR’s data, F(4, 1086) = 1.381, p > .05 (see Figure 8). CR’s lack of sensitivity to rotation in depth was also indicated by the slopes of the lines describing the log of CR’s reaction time-rotation function under both upright, y = 0.0009x + 3.2, and inverted, y = 0.0006x + 3.16, conditions. To summarise, CR did not show a preferred view of the face. He did not show a sensitively graded effect of rotation, although he did produce the fewest errors and the fastest reaction times when the target face and choice faces were presented at the same angle.

Figure 8. The effects of target face angle and choice faces angle on the reaction times produced by CR under (a) upright and (b) inverted orientation conditions (error bars: SEMs).

Whereas CR showed the inversion effect in his error data, he showed an inversion superiority effect in his reaction time data, producing faster reaction times when the faces were inverted. To confirm that the differences in the pattern of results between the control subjects and the patients were due to perceptual changes caused by prosopagnosia and not to alterations in task sensitivity at different performance levels, an additional group of 10 control subjects (age range 18–22 years; mean age 19.8 years) was run with restricted presentation times (1.25 s per trial) to try to match overall control accuracy to that of RN (87.6%



correct) and CR (77% correct). Whereas the control subjects’ errors increased significantly (p < .0001) when the presentation time was restricted (81.2% from 93% correct under unlimited conditions), there was no orientation × rotation × subject interaction present. The control subjects showed the traditional inversion effect (p < .001) and an effect of rotation in depth (p < .0001) on their error data. Furthermore, as in the unlimited presentation condition, the effect of rotation was systematically graded with the 90° difference conditions producing the most errors, the 45° difference conditions producing fewer errors, and the fewest errors being produced under the 0° difference conditions. In summary, even when task difficulty was equated for the control subjects and the prosopagnosic patients, their patterns of results still differed.

DISCUSSION

The present study was designed to determine the individual and combined effects of two transformations, a rotation in depth and a planar inversion, on the face-matching ability of individuals with prosopagnosia and neurologically intact control subjects. In addition, we wanted to determine if patients with prosopagnosia have a preferred view of a face. The rotation of the faces in depth did not impair the control subjects’ matching of upright faces; only the largest difference in the angle of rotation between the target face and choice faces (90°) produced an increase in errors. In contrast, the rotation of the faces in depth appeared to have a systematic influence on the processing of inverted faces; the control subjects’ errors increased with increasing angles of rotation. This systematic influence of rotation was also evident in the control subjects’ reaction times under both upright and inverted conditions. The effects of rotation on RN were not as straightforward as they were in the control subjects. RN did not show a significant effect of rotation in his error data but did show an influence of rotation in his reaction time data, although it was less systematically graded than the pattern shown by the control subjects. Although the effects of rotation on CR’s performance can be considered systematic,



they were the coarsest effects exhibited by any of the subjects, with his errors and reaction times increasing equally for both 45° and 90° rotations. CR may be using different sources of information to the control subjects in the face-matching task, which become unusable when any rotation in depth occurs. We will discuss this in more detail later. Given the evidence that viewer-centred representations in the ventral temporal cortex account for face recognition from typical and rotated views, the damage to the ventral stream that occurs in these patients could explain this insensitivity to rotation. This would also explain why the effect of rotation in CR is less sensitively graded than in RN, since CR has more severe damage to his temporal cortex. This damage may also force the patients to use different information to the control subjects in their processing that cannot be adapted for a strategy to deal with rotated faces. As discussed in the Introduction, when the view of a face deviates from a “typical view” the visual system may use several properties to facilitate recognition. Under inverted conditions, the control subjects may parse up the faces into meaningful components (e.g., nose, eyes, mouth, etc . . .) and predict how they would look when transformed in depth. This may not be possible for RN and CR. CR, in particular, seems reliant on a source of information (e.g., texture or colour) that is not adaptable when a rotation in depth occurs. CR’s difficulty with face rotation appears to be a form of viewpoint-dependency; when he was faced with any rotation in depth, he produced equally poor results. The control subjects, and to some extent RN, exhibited a three-quarter view advantage in the face-matching task. As discussed in the Introduction, the three-quarter view may be canonical because of the number of features represented, or because a better three-dimensional representation of a face can be constructed from an angled view. Interestingly, RN produced the longest reaction times when he was presented with frontal views of the target and choice faces. As we explained earlier, this may be the result of a repetitive comparison of the target and choice faces stimuli by RN. Unlike the control subjects and RN, CR showed no main effect of target face or choice faces angles. One


reason that CR may not show effects of target or choice faces angles is that the information he is using in the face-matching task does not rely on a particular view. It does not matter which way the target and choice faces happen to be facing, as long as they are facing the same way. This is further evidence that CR is using a form of information that is not dependent on the view of the face and, moreover, cannot deal with any rotation of the face. The control subjects showed the traditional inversion effect in their error and reaction time data, supporting the argument that face recognition normally depends on a holistic, face-specific system but can be achieved through a time-consuming analysis of individual parts in situations where the stimuli do not activate the face-specific system, as is the case with inverted faces (Barton et al., 2001; Moscovitch et al., 1997). In contrast, CR showed an inversion superiority effect in his reaction times—producing faster reaction times when matching inverted faces than upright faces, which supports the claim that the face module continues to dominate face processing in prosopagnosia even though it is malfunctioning and therefore maladaptive (Farah et al., 1995b). CR’s damaged face processor may have continued to process upright faces, which resulted in increased reaction times. Under inverted orientation conditions, where inverted faces do not engage holistic recognition systems but instead are recognised piecemeal by part-based, part-dependent processes (Moscovitch et al., 1997), CR’s faulty processor would not be activated and, therefore, would not impair his performance. It appears, however, that even though the damaged face processor may have increased reaction times, it still allowed for more accurate matching of the upright faces than the parts-based analysis did of the inverted faces. The performance differences between RN and CR may be attributable to the differences in their lesions and resulting deficits. RN has generally performed better than CR on previous tests of face recognition, which could be the result of RN having a less damaged face-specific processor than CR. Another possibility is that RN was able to partially compensate for his face-processing deficit by taking more time to make his responses. It is possible that

this strategy allowed RN enough time to perform a painstaking parts-based comparison that overcame the different experimental manipulations. This type of approach could explain why RN was as accurate, although much slower, during the inverted orientation conditions as he was during the upright orientation conditions. If this were the case, one might expect that if the time to view the stimuli were restricted, RN would show the inversion effect in his errors, as well as his reaction times. Alternatively, RN’s overall performance would likely deteriorate if his viewing times were restricted, which could result in a performance drop to a level closer to that seen in CR. In summary, the damage that occurs in prosopagnosia to the ventral temporal areas that normally account for face recognition from typical and rotated views may have forced CR to rely on sources of information that are not dependent on the view of the face and, moreover, cannot be adapted to deal with rotated faces under both upright and inverted conditions. In addition, the inversion superiority effect seen in CR’s data supports the claim that the holistic processing systems continue to dominate face processing in prosopagnosia even though they are malfunctioning (Farah et al., 1995b). The effects of planar inversion on the control subjects’ data support the theory that face recognition normally depends on a holistic, face-specific system but can be achieved through a time-consuming analysis of individual parts when the stimuli do not activate the face-specific processors.

Manuscript received 10 October 2000
Revised manuscript received 20 February 2001
Revised manuscript accepted 10 April 2001

REFERENCES

Barton, J.J.S., Keenan, J.P., & Bass, T. (2001). Discrimination of spatial relations and features in faces: Effects of inversion and viewing duration. British Journal of Psychology, 92, 527–549.



Benton, A., & Van Allen, M. (1968). Impairment in facial recognition in patients with cerebral disease. Transactions of the American Neurological Association, 93, 38–42.
Bruce, V., Valentine, T., & Baddeley, A. (1987). The basis of the 3/4 view advantage in face recognition. Applied Cognitive Psychology, 1, 109–120.
Bruyer, R., & Crispeels, G. (1992). Expertise in person recognition. Bulletin of the Psychonomic Society, 30, 501–504.
Bruyer, R., & Galvez, C. (1989). The structural orientation of the mental representation of faces. Archives de Psychologie, 57, 259–269.
Bülthoff, H.H., & Edelman, S. (1992). Psychophysical support for a two-dimensional view theory of object recognition. Proceedings of the National Academy of Sciences, 89, 60–64.
Cohen, J.D., MacWhinney, B., Flatt, M., & Provost, J. (1993). PsyScope: A new graphic interactive environment for designing psychology experiments. Behavior Research Methods, Instruments, & Computers, 25, 257–271.
Damasio, A. (1985). Disorders of complex visual processing: Agnosia, achromatopsia, Balint’s syndrome, and related difficulties of orientation and construction. In M. Mesulam (Ed.), Principles of behavioral neurology (pp. 259–288). Philadelphia, PA: F.A. Davis Company.
De Gelder, B., Bachoud-Lévi, A., & Degos, J. (1998). Inversion superiority in visual agnosia may be common to a variety of orientation polarised objects besides faces. Vision Research, 38, 2855–2861.
De Gelder, B., & Rouw, R. (2000). Paradoxical inversion effect for faces and objects in prosopagnosia. Neuropsychologia, 38, 1271–1279.
De Renzi, E. (1986). Prosopagnosia in two patients with CT scan evidence of damage confined to the right hemisphere. Neuropsychologia, 24, 385–389.
Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117.
Farah, M., Levinson, K., & Klein, K. (1995a). Face perception and within-category discrimination in prosopagnosia. Neuropsychologia, 33, 661–674.
Farah, M.J., Wilson, K.D., Drain, H.M., & Tanaka, J.R. (1995b). The inverted face inversion effect in prosopagnosia: Evidence for mandatory, face-specific perceptual mechanisms. Vision Research, 35, 2089–2093.



Gauthier, I., Behrmann, M., & Tarr, M.J. (1999). Can face recognition really be dissociated from object recognition? Journal of Cognitive Neuroscience, 11, 349–370.
Gauthier, I., Tarr, M., Anderson, A., Skudlarski, P., & Gore, J. (1999). Activation of the middle fusiform “face area” increases with expertise in recognising novel objects. Nature Neuroscience, 2, 568–573.
Goodglass, H., Kaplan, E., & Weintraub, S. (1983). Boston Naming Test. New York: Lea & Febiger.
Hancock, P.J.B., Bruce, V., & Burton, A.M. (2000). Recognition of unfamiliar faces. Trends in Cognitive Sciences, 4, 330–337.
Hill, H., Schyns, P.G., & Akamatsu, S. (1997). Information and viewpoint dependence in face recognition. Cognition, 62, 201–222.
Marotta, J.J., Voyvodic, J.T., Gauthier, I., Tarr, M.J., Thulborn, K.R., & Behrmann, M. (1999). A functional MRI study of face recognition in patients with prosopagnosia. Cognitive Neuroscience Society Annual Meeting Program 1999, 83.
Moscovitch, M., Winocur, G., & Behrmann, M. (1997). What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition. Journal of Cognitive Neuroscience, 9, 555–604.
Moses, Y., Ullman, S., & Edelman, S. (1996). Generalisation to novel images in upright and inverted faces. Perception, 25, 443–461.
Murray, J.E., Yong, E., & Rhodes, G. (2000). Revisiting the perception of upside-down faces. Psychological Science, 11, 492–496.
O’Toole, A.J., Edelman, S., & Bülthoff, H.H. (1998). Stimulus-specific effects in face recognition over changes in viewpoint. Vision Research, 38, 2351–2363.
Perrett, D.I., Oram, M.W., & Ashbridge, E. (1998). Evidence accumulation in cell populations responsive to faces: An account of generalisation of recognition without mental transformations. Cognition, 67, 111–145.
Perrett, D.I., Oram, M.W., Harries, M.H., Bevan, R., Hietanen, J.K., Benson, P.J., & Thomas, S. (1991). Viewer-centred and object-centred coding of heads in the macaque temporal cortex. Experimental Brain Research, 86, 159–173.
Riddoch, M.J., & Humphreys, G.W. (1993). Birmingham Object Recognition Battery. Hove, UK: Lawrence Erlbaum Associates Ltd.
Rossion, B., De Gelder, B., Dricot, L., Zoontjes, R., De Volder, A., Bodart, J., & Crommelinck, M. (2000). Hemispheric asymmetries for whole-based and part-based face processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 12, 793–802.


Schweich, M., & Bruyer, R. (1993). Heterogeneity in the cognitive manifestation of prosopagnosia: The study of a group of single cases. Cognitive Neuropsychology, 10, 529–547.
Sergent, J., & Poncet, M. (1990). From covert to overt recognition of faces in a prosopagnosic patient. Brain, 113, 989–1004.
Shepard, R.N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701–703.
Snodgrass, J.G., & Vanderwart, M. (1980). A standardised set of 260 pictures: Norms for name agreement, image agreement, familiarity and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215.
Tanaka, J., & Gauthier, I. (1997). Expertise in object and face recognition. In G.S. Medin (Ed.), Psychology of learning and motivation series, Special Volume: Perceptual mechanisms of learning, Vol. 36 (pp. 83–125). San Diego, CA: Academic Press.

Tarr, M.J., & Pinker, S. (1989). Mental rotation and orientation dependence in shape recognition. Cognitive Psychology, 21, 233–282.
Ullman, S. (1998). Three-dimensional object recognition based on the combination of views. In M. Tarr & H. Bülthoff (Eds.), Object recognition in man, monkey, and machine (pp. 21–44). Cambridge, MA: The MIT Press.
Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79, 471–491.
Valentine, T., & Bruce, V. (1988). Mental rotation of faces. Memory and Cognition, 16, 556–566.
Williams, P., & Behrmann, M. (2001). Object categorisation and part integration: Experiments with normal perceivers and patients with visual agnosia. Manuscript submitted for publication.
Yin, R.K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Yin, R.K. (1970). Face recognition by brain-injured patients: A dissociable ability? Neuropsychologia, 8, 395–402.


