Experimental Aging Research, 36: 1–22, 2010
Copyright © Taylor & Francis Group, LLC
ISSN: 0361-073X print/1096-4657 online
DOI: 10.1080/03610730903418372
Downloaded By: [University of Otago] At: 23:13 13 January 2010
AGING AND THE PERCEPTION OF EMOTION: PROCESSING VOCAL EXPRESSIONS ALONE AND WITH FACES
Melissa Ryan, Janice Murray, and Ted Ruffman

Department of Psychology, University of Otago, Dunedin, New Zealand

This study investigated whether the difficulties older adults experience when recognizing specific emotions from facial expressions also occur with vocal expressions of emotion presented in isolation or in combination with facial expressions. When matching vocal expressions of six emotions to emotion labels, older adults showed worse performance on sadness and anger. When matching vocal expressions to facial expressions, older adults showed worse performance on sadness, anger, happiness, and fear. Older adults’ poorer performance when matching faces to voices was independent of declines in fluid ability. Results are interpreted with reference to the neuropsychology of emotion recognition and the aging brain.
Received 6 November 2006; accepted 12 July 2008.
Address correspondence to Janice Murray, Department of Psychology, University of Otago, 95A Union Street, Dunedin 9054, New Zealand. E-mail: [email protected]

People express emotion through their facial expressions, their bodies, or their voices. Emotional expression can alter the meaning of speech, and the ability to accurately identify emotional content is particularly important in social interaction. Thus, it is important to understand how this ability changes across the life span. This study examined age differences in recognition of vocal expressions of emotion, as well as matching of vocal to facial expressions of emotion.

A small number of studies have investigated how the ability to identify vocal expressions of emotion fares with age. Some studies have compared
overall recognition of auditory expressions in young versus older adults, and have found worse recognition in older adults (usually defined as 60+ years), but have not isolated individual emotions (Kiss & Ennis, 2001; Mitchell, 2007; Orbelo, Grim, Talbott, & Ross, 2005; Orbelo, Testa, & Ross, 2003; Oscar-Berman, Hancock, Mildworf, Hutner, & Altman Weber, 1990; Raithel & Hielscher-Fastabend, 2004). Other authors have isolated individual emotions. Brosgole and Weisman (1995) found that older adults were less accurate than younger adults when identifying angry, sad, and happy emotion voices (the only three types of emotion they tested), and Wong, Cronin-Golomb, and Neargarder (2005) found that older adults experienced difficulties relative to young adults when presented with happy and sad voices, but not angry, fearful, surprised, or disgusted voices.

It is rare for emotions to be expressed solely in the voice during social interactions in the real world; usually, when trying to recognize what emotion is expressed, it is necessary to combine information from both face and voice cues. Numerous studies have examined older adults’ recognition of facial emotion displays (e.g., Moreno, Borod, Welkowitz, & Alpert, 1993; McDowell, Harrison, & Demaree, 1994; Brosgole & Weisman, 1995; MacPherson, Phillips, & Della Sala, 2002; Phillips, MacLean, & Allen, 2002; Calder et al., 2003; Sullivan & Ruffman, 2004a, 2004b). A summary of the relevant research in this area by Sullivan and Ruffman (2004b) indicated that older adults are consistently less accurate than younger adults when recognizing facial expressions of anger (82% of studies) and sadness (75% of studies), and are frequently worse when recognizing fear (50% of studies). This poorer performance is offset by areas of relative strength in older adults.
For example, older adults’ recognition of happiness and surprise from static images of faces appears to be as accurate as that of younger adults, although ceiling effects might cloud the issue for happiness (Sullivan & Ruffman, 2004b), and there is some evidence that older adults’ recognition of disgust is better than young adults’ (Calder et al., 2003).

To date, only one study has examined older adults’ ability to process two sources of emotion information simultaneously in an emotion-matching task. In that study, Sullivan and Ruffman (2004b) found that older adults were less accurate than younger adults in matching vocal to facial expressions of emotion when matching expressions of anger, sadness, and disgust. It seems plausible that matching voices to faces will be particularly difficult for older adults, insofar as problems processing either vocal or facial expressions of emotion could create difficulties on the emotion-matching
task. However, Sullivan and Ruffman’s (2004b) study did not determine whether the poorer performance of the older adults was due to difficulty in combining emotion information from two modalities or whether it simply reflected difficulty with recognizing emotion from the auditory stimuli alone.

The first aim of the present study was to determine whether older adults’ poor performance when matching emotion sounds to emotion faces is merely due to difficulty identifying emotion sounds independent of any difficulty identifying facial expressions. If older adults have difficulty labeling emotion sounds, then this might underlie their difficulties matching emotion sounds to facial expressions of emotion.

Our emotion-matching tasks also allowed an investigation of which specific emotions older adults find more difficult compared to young adults. It is not clear, for instance, whether older adults experience the same difficulty with an emotion regardless of whether the match is to a facial expression or to a verbal label. It is possible, however, to make predictions based on previous research. Given that anger and sadness have proved difficult for older adults when recognizing facial expressions, vocal expressions, and when matching the two, we would expect that, regardless of task, matching expressions of sadness and anger would be particularly difficult for older adults. Nevertheless, it is possible, based on the past single-modality research described above, that matching vocal expressions of happiness and fear to facial expressions may also create difficulties. It is also possible, based on the one previous study employing a face-voice matching task (Sullivan & Ruffman, 2004b), that matching vocal expressions of disgust to facial expressions will create difficulties. Thus, a second aim of the present study was to clarify the specific pattern of emotion recognition difficulties experienced by older adults when asked to integrate two sources of emotional information.
One explanation for the specific emotion recognition difficulties observed in previous studies is age-related brain changes. There is some consensus in the literature as to the brain areas involved in processing emotion from each modality. For example, processing emotions from vocal and auditory cues activates a network of voice-selective brain regions (Schirmer & Kotz, 2006). Detection of a vocal expression initially engages a processing pathway from the auditory cortex to the superior temporal sulcus (Liebenthal, Binder, Spitzer, Possing, & Medler, 2005; Mitchell, Elliott, Barry, Cruttenden, & Woodruff, 2003; Parker et al., 2005; Scott & Wise, 2004). The frontal lobes, particularly the inferior frontal gyrus (Buchanan et al., 2000; George et al., 1996; Wildgruber et al., 2005; Wildgruber, Pihan, Ackermann, Erb, & Grodd, 2002; but see Mitchell et al., 2003) and
the orbitofrontal cortex (OFC; Van Hoesen, Parvizi, & Chu, 2000; Öngür, Ferry, & Price, 2003), are then involved in making an emotional judgment to determine and label the emotion present in the vocal stimuli. There is some evidence from neuroimaging studies that the OFC is implicated in making judgments of any vocal expression of emotion (Buchanan et al., 2000; Wildgruber et al., 2002, 2005). On the other hand, Hornak et al. (2003) found that people with bilateral OFC lesions had selective difficulty recognizing vocal expressions of sadness and anger. A study by Sander et al. (2005) also found that the OFC is activated by angry voices. Overall, this research suggests that the OFC may be particularly involved in recognizing angry and sad vocal expressions.

Similarly, there is a network of brain areas that processes facial expressions of emotion. This includes the ventral striatum (Calder, Keane, Lawrence, & Manes, 2004), the OFC (particularly for anger; Blair & Cipolotti, 2000; Blair, Morris, Frith, Perrett, & Dolan, 1999; Fine & Blair, 2000; Iidaka et al., 2001; Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998), and visual processing areas in the parietal and occipital lobes (Posamentier & Abdi, 2003). The basal ganglia and insula are specifically involved in decoding disgust (e.g., Calder, Lawrence, & Young, 2001; Phan, Wager, Taylor, & Liberzon, 2002), and there is some consensus (e.g., Adolphs, Tranel, Damasio, & Damasio, 1994, 1995; Adolphs et al., 1999; Adolphs, 2002; Anderson & Phelps, 2000; Calder et al., 1996, 2001; Phan et al., 2002; Posamentier & Abdi, 2003; Sprengelmeyer et al., 1998) that the amygdala is particularly implicated in decoding facial expressions of fear. It seems clear, then, that a number of brain regions situated in the frontal and temporal lobes are implicated in the processing of emotions expressed in faces and voices. What is important is what happens to these regions in an aging brain.
It is generally recognized that frontal and temporal regions undergo consistent age-related change (e.g., Bartzokis et al., 2001; Raz et al., 2005), making it possible that brain change accounts for older adults’ emotion recognition difficulties. In particular, it is frequently argued that brain volume losses occur earlier and more rapidly in frontal areas (e.g., Allen, Bruss, Brown, & Damasio, 2005a, 2005b; Dimberger et al., 2000; Grieve, Clark, Williams, Peduto, & Gordon, 2005; Moscovitch & Winocur, 1992; Phillips & Henry, 2005; Raz, 2000; West, 2000), and there is evidence that the OFC degrades even more rapidly than other frontal areas (Convit et al., 2001; Lamar & Resnick, 2004; Raz et al., 1997; Resnick et al., 2000; Resnick, Pham, Kraut, Zonderman, & Davatzikos, 2003; Tisserand et al., 2002). Such decline would lead to
older adults having difficulty recognizing facial expressions of anger given the role of the OFC in processing such stimuli (Blair & Cipolotti, 2000; Blair et al., 1999; Fine & Blair, 2000; Iidaka et al., 2001; Sprengelmeyer et al., 1998). Although the amygdala might not decline as rapidly as some brain areas such as the frontal lobes (Good et al., 2001; Grieve et al., 2005), a number of studies indicate that there are also linear reductions in amygdala volume with age (Allen et al., 2005a, 2005b; Grieve et al., 2005; Mu, Xie, Wen, Weng, & Shuyun, 1999; Tisserand, Visser, van Boxtel, & Jolles, 2000; Wright, Wedig, Williams, Rauch, & Albert, 2006; Zimmerman et al., 2006). These reductions might lead older adults to have difficulty recognizing facial expressions of fear and anger, and vocal expressions of anger and sadness.

Thus, there seems to be some convergence between behavioral studies of young-old emotion recognition differences, neuropsychological studies investigating the brain regions involved in recognizing specific emotions, and studies investigating age-related brain region decline. Together, these studies lead to the prediction that expressions of anger and sadness will create particular difficulties for older adults in both modalities. Further, based on empirical results obtained thus far in single modality or matching tasks, one might predict additional difficulties matching expressions of fear, disgust, and happiness. In the present study, we used labeling of auditory expressions and matching of auditory expressions to facial expressions to test these predictions.

A third aim was to investigate whether worsening emotion recognition is experienced by all older adults or just a subset of this population. Whereas many studies have found poorer performance in older adults on facial recognition tasks, no study has examined whether difficulties are experienced by all or most older adults.
We examined this question by splitting the younger adult group into quartiles based on their emotion recognition scores in each task, and then examining how many older adults’ scores fell into each quartile. If the same number of younger and older adults scored in the top quartile, but more older adults were in the bottom quartile, this would suggest that emotion recognition abilities worsen only in some older adults. Alternatively, the occurrence of fewer older adults in the top quartile and more in the lower three quartiles (particularly in the bottom quartile) would suggest emotion recognition abilities worsen in most or all older adults. This analysis relies on the assumption that it is unlikely that the most able group of young adults would go on to become the least able group of older adults. This assumption seems justifiable because
although many mental abilities tend to decline over the life span, individual differences are stable throughout life, such that those who do better on mental tests at one age relative to their peers also tend to do better at a later age relative to their peers. For instance, in one study the correlation between mental abilities at age 11 and age 80 was .66 (Deary, Whiteman, Whalley, Fox, & Starr, 2004). Given the stability of mental abilities over age, it would be implausible to suggest that a pattern of many older adults in the bottom quartile and few in the top quartile could be explained by the most able group joining the least able group over time, and the middle two quartiles remaining stable.

Finally, a fourth aim was to investigate whether an age-related decline in emotion recognition ability is related to general cognitive decline. It is well established that fluid ability, those processes associated with greater mental effort, novelty, and information complexity, declines with age (Burke & Mackay, 1997; Salthouse, 2000), but only a few studies have investigated whether this fluid ability decline is responsible for poorer emotion recognition in older adults. At present, there are three studies suggesting independence of fluid ability and emotion recognition (Keightley, Winocur, Burianova, Hongwanishkul, & Grady, 2006; Sullivan & Ruffman, 2004a, 2004b), and one study suggesting older adults’ emotion recognition difficulties may be underpinned by declining fluid ability (Phillips & Allen, 2004). Although the reasons why researchers have obtained discrepant findings are not clear, our study will help to clarify the relation between fluid ability and understanding of emotion.

METHODS

Participants

Participants consisted of 40 younger adults (25 female and 15 male; 17 to 29 years; M = 21.63 years) and 40 older adults (25 female and 15 male; 60 to 84 years; M = 65.60 years). All participants spoke English as their first language.
Older adults were recruited from the community via newspaper advertisements and all lived independently. The younger adults were undergraduate students at the University of Otago. The eyesight of all older adults was tested on a Snellen chart and all had average to good (corrected-to-normal) visual acuity (from 20/30 to 20/16). None of the older adults had a history of stroke or other illnesses that may have affected performance on the tasks below. No participants
reported any hearing difficulties in general or having difficulty hearing the stimuli presented.
Materials

Vocal Stimuli

We used 12 emotion soundtracks, each running for 20 s, in the "label + voice" condition. The tracks were taken from Hobson et al.'s (1988) study and were digitized from the tape recordings. The recordings could be divided into two groups with six soundtracks in each group. For the first group of sounds, a male actor enacted six emotions (happiness, sadness, fear, disgust, surprise, and anger) to create a series of nonverbal expressions (for example, a happy humming sound or high-pitched gasps of fear). The second group of sounds consisted of a female actor reading the following sentence while expressing each of the six emotions through prosody: "I was walking down the road yesterday, when I saw a large red car in front of me. It stopped and a small man in a blue coat got out."

Facial Stimuli

We used the voices described above along with six photos of faces depicting emotions from the Ekman and Friesen (1976) set of emotion faces in the "face + voice" condition. The six faces represented the six emotions portrayed by "JJ" (happiness, sadness, anger, surprise, disgust, and fear). He was photographed facing the camera and was devoid of glasses and facial hair. When viewed at a distance of 80 cm, the faces subtended a visual angle of 6.6° vertically and 6.1° horizontally. Labels describing the six emotions expressed in the faces (e.g., the word "Happy") were used. The labels were capitalized and typed in bold, 32-point, Arial font. Each face and label was displayed in a box, 285 × 253 pixels in size, on the screen.

Measures of Intelligence

The Culture Fair Intelligence Test (Cattell & Cattell, 1959) provided the measure of fluid intelligence. This test involves four types of spatial problems (odd-man-out, matrices, series completion, and topology) and is a commonly used measure of fluid intelligence (e.g., Duncan, Burgess, & Emslie, 1995; Rouaux & Juhel, 1995; Tan, Tan, & Atatuerk, 1998).
The vocabulary subtest of the Wechsler Adult Intelligence Scale—Revised (Wechsler, 1981) was used as a measure of crystallized intelligence. This task requires participants, without a time limit, to provide definitions for 35 words.
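As a consistency check on the stimulus geometry reported above, the visual angles follow from elementary trigonometry. The sketch below (ours, not part of the original procedure) recovers the approximate physical size the face images would need to subtend the reported angles at the 80-cm viewing distance:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (in degrees) subtended by a stimulus of the given
    size viewed at the given distance (both in cm)."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def stimulus_size_cm(angle_deg: float, distance_cm: float) -> float:
    """Invert the relation: physical size needed to subtend angle_deg."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

# At the 80-cm viewing distance used in the study, faces subtending
# 6.6 deg vertically and 6.1 deg horizontally would measure roughly:
height = stimulus_size_cm(6.6, 80)  # ~9.2 cm tall
width = stimulus_size_cm(6.1, 80)   # ~8.5 cm wide
```

The exact image sizes are inferred here, not reported in the article; only the angles and viewing distance come from the text.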
Procedure

In two separate blocks of trials, participants were presented with either six photos of facial expressions of emotion or six emotion labels on the computer screen. The six photographs or emotion labels were presented simultaneously; three across the top of the computer screen (angry, disgusted, fearful) and three across the bottom (happy, sad, surprised). A vocal expression was presented simultaneously with either the face or label display, and the participants were instructed to match the vocal expression, on the basis of emotional content, to the appropriate face (face + voice condition) or label (label + voice condition). The participants indicated their choice by clicking the mouse on one of the six faces or labels. In each condition, a block of 24 trials consisted of two presentations of each of the two types of vocal expression for each of the six emotions. Order of presentation of blocks was counterbalanced across participants. The order of the experimental task, intelligence, and visual acuity measures was also counterbalanced across participants.

RESULTS

Table 1 includes the proportion correct in the different conditions. We conducted a 2 (age group: younger adult, older adult) × 2 (task: face + voice, label + voice) × 2 (stimulus type: female voice, male voice) analysis of variance (ANOVA). The dependent variable was
Table 1. Mean proportion correct (SD) for younger and older adults across the stimuli conditions for the voice and face-voice tasks

                         Young adults    Older adults
  Label + Voice Task
    Male voice           0.91 (0.09)     0.83 (0.12)
    Female voice         0.78 (0.15)     0.73 (0.17)
    Difference score*    0.13 (0.17)     0.11 (0.16)
  Face + Voice Task
    Male voice           0.76 (0.19)     0.66 (0.19)
    Female voice         0.76 (0.15)     0.56 (0.22)
    Difference score*    0.00 (0.18)     0.10 (0.16)

  *"Male voice" minus "Female voice" score.
proportion correct. Stimulus type was examined because in the face + voice task the female voice produced a gender mismatch with male faces, whereas for the male voice, gender of voice and face matched.

This ANOVA revealed a significant difference between the age groups, F(1, 78) = 18.75, p < .001, indicating that the older adults performed less accurately than the younger adults overall. There was also a main effect of task, F(1, 78) = 50.88, p < .001, indicating that both groups found the face + voice task more difficult than the label + voice task. In addition, there was a significant Age Group × Task interaction, F(1, 78) = 8.11, p < .01. Older adults were less accurate than younger adults on both the label + voice, t(78) = 2.74, p < .01, and face + voice task, t(78) = 4.19, p < .001. However, as can be seen in Table 1, older adults were comparatively worse on the face + voice task than the label + voice task relative to younger adults. An evaluation of relative performance across task confirmed this observation. The difference score (label + voice minus face + voice) for older adults was significantly larger than that for younger adults, t(78) = 2.84, p < .01.

There was also a significant main effect of stimulus type, F(1, 78) = 41.26, p < .001, with worse performance by all participants for the female voice. Stimulus type also interacted with task, F(1, 78) = 7.59, p < .01, as well as with task and age group, F(1, 78) = 5.55, p < .05. To explore the three-way interaction further, we examined whether older adults experienced more difficulty on the female voice in the face + voice task (when they had to match a female voice to a male face half the time, and a male voice to a male face half the time) compared to the label + voice task (when they had to assess a female voice on its own half the time, and a male voice on its own half the time). To this end, we created a "male voice minus female voice" difference score in each task.
As can be seen in Table 1, for the older adults, the difference scores did not differ significantly in the face + voice and label + voice tasks, t < 1. Older adults, like younger adults, showed less accurate performance for the female voice in the label + voice task, but exhibited no additional disadvantage when gender incongruity was introduced with the matching of the female voice to a male face in the face + voice task. In contrast, for the younger adults, the difference scores were significantly different, t(78) = 3.42, p < .01, indicating that the advantage in the label + voice task when matching a male voice compared to a female voice disappeared in the face + voice task. Thus, it was the younger adults’ reduced accuracy for the male voice in the face + voice task compared to the label + voice task that gave rise to the three-way interaction described above.
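The between-group comparison of difference scores can be sketched in a few lines. This is an illustration with simulated per-participant data (means and SDs loosely patterned on Table 1), not the study's actual analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "male voice minus female voice" difference scores for each
# participant (n = 40 per group); the numbers are illustrative only.
young_diff = rng.normal(0.13, 0.17, 40)
old_diff = rng.normal(0.11, 0.16, 40)

def independent_t(a: np.ndarray, b: np.ndarray) -> float:
    """Pooled-variance independent-samples t statistic, as in the
    between-group comparisons reported in the text (df = n1 + n2 - 2)."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var * (1 / n1 + 1 / n2))

t = independent_t(young_diff, old_diff)
```

With 40 participants per group, the resulting statistic would be evaluated against a t distribution with 78 degrees of freedom, matching the t(78) values reported above.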
Figure 1. Proportion correct for each emotion in the label + voice condition.
We next compared the performance of the two age groups on each of the six emotions for the face + voice and label + voice conditions. Figures 1 and 2 show younger and older adults’ performance on the different emotions for the two tasks. Two-tailed t tests using the Holm correction were used. When matching voices to labels, older adults were significantly less accurate than younger adults only when recognizing sadness, t(78) = 3.46, p < .01, and anger, t(78) = 2.92, p < .01. In contrast, when matching voices to faces, older adults were significantly less accurate than younger adults when recognizing sadness,
Figure 2. Proportion correct for each emotion in the face + voice condition.
Figure 3. Number of younger and older adults in each quartile for the label + voice condition.
t(78) = 4.70, p < .001; anger, t(78) = 2.41, p < .05; fear, t(78) = 2.56, p < .05; and happiness, t(78) = 2.80, p < .01.

To investigate whether emotion recognition difficulties were experienced by all of the older adults or by just a subset, we grouped the younger adults’ mean scores for each task into approximate quartiles and then calculated the number of older adults whose scores fell into the same four groups. The resulting frequency counts for each task condition are displayed in Figures 3 and 4. As can be seen in Figure 3 for the label + voice condition, 40% of older adults’ scores fell into the lowest quartile compared with 20% of younger adults. There was a significant difference in the number
Figure 4. Number of younger and older adults in each quartile for the face + voice condition.
of younger and older adults falling into each quartile for the label + voice condition, Mann-Whitney U test: p < .05. A similar but more striking pattern can be seen for the face + voice task in Figure 4, where 3% of older adults’ scores fell into the top quartile compared with 28% of the younger adults. In addition, 63% of older adults’ scores fell into the lowest quartile compared to 23% of younger adults. Again there was a significant difference in the number of younger and older adults falling into each quartile for the face + voice condition, Mann-Whitney U test: p < .001. Whereas no younger adults scored less than 45% correct in the face + voice condition, nine older adults (23% of the sample) achieved a score lower than this.

We then examined fluid and crystallized ability. Consistent with normal cognitive aging (Salthouse, 2000), the younger adults were superior on the fluid measure, t(78) = 8.31, p < .001 (younger adult M = 27.08, SD = 4.23; older adult M = 18.98, SD = 4.48), whereas the older adults were superior on the crystallized measure, F(1, 78) = 19.22, p < .001 (younger adult M = 50.34, SD = 8.64; older adult M = 57.70, SD = 6.09).

To determine the relation between age, fluid ability, and task performance, we first calculated the correlations between these variables on both emotion-matching tasks. The correlation between age and performance on the label + voice task was r = .31, p < .01, and the correlation between fluid ability and performance on this task was r = .38, p < .01. On the face + voice task, the correlation between age and performance was r = .44, p < .01, and the correlation between fluid ability and performance was r = .39, p < .01. Age and fluid ability were highly correlated (r = .69, p < .01). We next used regression to examine whether age alone was an independent predictor of performance on the two tasks or whether fluid ability contributed to the observed performance difference.
In the first regression, performance on the label + voice task was the dependent variable. With both fluid ability and age group entered into the model predicting performance on the label + voice task, only fluid ability predicted independent variance in task performance, t(77) = 2.14, pr = .24, p < .05, b = 0.31. Age was not an independent predictor of task performance, t(77) = .70, pr = .08, p > .49, b = 0.10. In contrast, for the face + voice task, only age group was an independent predictor of task performance, t(77) = 2.33, pr = .26, p < .05, b = 0.32; fluid ability was not, t(77) = 1.19, pr = .13, p > .05, b = 0.17. In sum, age group differences on the face + voice task were not simply a function of changes in fluid ability.
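The regression logic just described (does age group still predict task performance once fluid ability is entered into the same model?) can be sketched as follows. The data are simulated and every variable name and coefficient below is illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80

# Simulated predictors: age group (0 = young, 1 = old) and a fluid-ability
# score that declines with age, plus a task-performance score driven by both.
age_group = np.repeat([0, 1], n // 2)
fluid = 27 - 8 * age_group + rng.normal(0, 4, n)
performance = 0.60 + 0.005 * fluid - 0.08 * age_group + rng.normal(0, 0.05, n)

# Ordinary least squares with both predictors entered simultaneously,
# mirroring the simultaneous-entry regressions reported above.
X = np.column_stack([np.ones(n), age_group, fluid])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
intercept, b_age, b_fluid = coef
```

Because age group and fluid ability are strongly correlated, each coefficient here reflects the variance a predictor explains over and above the other, which is exactly the question the reported regressions address.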
DISCUSSION

Although there are now many studies demonstrating that, on the whole, older adults are less accurate than younger adults in recognizing specific emotions from facial stimuli, there are few studies that have examined emotion recognition using nonfacial stimuli such as vocal expressions, or have investigated recognition of emotion from voice and face cues presented simultaneously. The present study adds to the growing body of research suggesting that most older adults experience an age-related reduction in emotion recognition. We found that most older adults had difficulty recognizing some vocal expressions of emotion and matching these vocal expressions to corresponding facial expressions. Matching voices to faces in particular created difficulties for older adults. Although the face + voice task tended to be harder than the label + voice task for both age groups, older adults showed a greater decrement in performance.

When asked to label the emotional content of vocal expressions, older adults were worse than young adults on expressions of sadness and anger. Recall that in previous studies isolating individual emotions, older adults had difficulties recognizing sad vocal expressions (Brosgole & Weisman, 1995; Wong et al., 2005), and in one of these studies they also had difficulties recognizing angry vocal expressions (Brosgole & Weisman, 1995). In our second task, requiring matching of faces to voices, older adults were worse than young adults when matching happiness and fear in addition to sadness and anger. Recall also that in the only previous study requiring matching of faces to voices, older adults were worse than young adults when matching anger, sadness, and disgust (Sullivan & Ruffman, 2004b). Thus, difficulties matching anger and sadness were obtained in both this previous study and the present study.
Greater difficulty on the face + voice task for older adults was possibly due to the additional need to recognize facial expressions in the face + voice task, or specifically to difficulty integrating faces and voices in the face + voice task. If the latter, one could argue that the cognitive resources needed to simultaneously process vocal and facial expressions of emotion are greater than those required for vocal expressions alone, and it is this increased cognitive demand that gives rise to the difficulties experienced by the older adults. However, several findings are inconsistent with the idea that face + voice difficulties might be reducible to increased demands on fluid ability. As in the present study, Sullivan and Ruffman (2004b) included a task in which participants matched emotion sounds to faces, but in addition, included a task in which older adults matched
nonemotion sounds (e.g., raking, sweeping, clipping) to nonemotion photographs. Although each task required matching, and the two tasks were of roughly equal difficulty for young adults, they found that only the emotion-matching task created difficulties for older adults. Moreover, in both the present study and that of Sullivan and Ruffman, fluid ability was not a significant predictor of task performance for the face + voice task after accounting for age (see also Ruffman, Henry, Livingstone, & Phillips, 2008).

Older adults may alternatively have had greater difficulty on the face + voice task simply due to the introduction of facial expressions of emotions. Recall that older adults were not worse than younger adults when recognizing vocal expressions of fear in the label + voice task, but were worse when asked to simultaneously process vocal and facial expressions of fear in the face + voice task. Given that recognition of fear from facial expressions has been shown to be difficult for older adults in 50% of studies (Sullivan & Ruffman, 2004b), the introduction of faces, rather than having to integrate faces and voices, might have accounted for the older adults’ difficulty recognizing fear in the face + voice task. Recall also that older adults were not worse than younger adults when recognizing vocal expressions of happiness in the label + voice task, but were worse when asked to simultaneously process vocal and facial expressions of happiness in the face + voice task. Unlike fear, recognition of happiness from facial expressions is not typically worse in older adults compared to younger adults (Sullivan & Ruffman, 2004b). Therefore, a plausible explanation for the worse performance on happiness in the face + voice task is that the need to integrate faces and voices creates additional difficulties for older adults. Why might integrating facial and vocal expressions of emotion pose a problem for older adults?
There is general evidence for the idea that combining two sources of emotion information requires greater activation of emotion areas of the brain. For instance, Pourtois, de Gelder, Bol, and Crommelinck (2005) found that two sources of emotion (faces and voices) promoted greater activation of the amygdala and medial temporal gyrus than a single source (face only or voice only). Similarly, Kreifelts, Ethofer, Grodd, Erb, and Wildgruber (2007) found that younger adults show greater activation of the medial temporal areas and the thalamus when integrating visual and auditory emotion information and that greater activation was correlated with better emotion recognition. If the process of integrating emotion information across modalities requires greater activation of emotion areas of the brain, but older adults show generally reduced
volume and activation of such areas, we would expect them to show proportionally worse performance when matching vocal to facial expressions of emotion, as was found in the present study. Consistent with this explanation are findings of less amygdala activation in older adults when viewing emotional, and particularly negative emotional, expressions (Fischer et al., 2005; Gunning-Dixon et al., 2003; Iidaka et al., 2001; Mather et al., 2004; Tessitore et al., 2005; although see Wright et al., 2006).

Yet claims of decreased activation must be reconciled with research suggesting greater activation of other brain regions with age when viewing emotional stimuli. Relative to younger adults, older adults show increased activation of the inferior and middle frontal regions during emotion processing tasks (Gunning-Dixon et al., 2003), and increased activation of the medial prefrontal cortex when reflecting on whether emotional descriptions apply to the self (Fossati et al., 2003) and during face perception tasks (Grady, 2002). Researchers argue that recruitment of these other frontal areas may reflect a compensatory mechanism in older adults to offset less efficient processing of facial stimuli and increased cognitive effort (Grady, 2002; Gunning-Dixon et al., 2003). In contrast, emotion processing areas such as the amygdala show decreased activation, as described above, and the same possibility of decreased activation exists for emotion processing regions within the frontal lobes, such as the orbitofrontal region. Thus, there seem to be decreases in activation in emotion processing areas independent of concurrent increases in activation in brain areas mediating working memory, attention, and inhibitory ability.

A final aim of our study was to determine whether emotion recognition difficulties were experienced by most older adults or only a few.
We grouped the younger adults' scores into approximate quartiles and then counted the number of older adults who fell into the same four quartiles for each task condition. Fewer older adults than younger adults were highly accurate (in the top quartile) on both task conditions, but this was particularly marked in the face + voice integration task, where almost two thirds of the older adults performed at a level comparable to or worse than the lowest quartile of younger adults.

Although many mental abilities tend to decline over the life span, they are stable throughout life in the sense that those who do better on mental tests at one age relative to their peers also tend to do better at a later age relative to their peers (see above). For this reason, it is not plausible that the most able younger adults become, over time, the least able older adults. Instead, the most plausible interpretation is that all or most adults become worse over time, so that there are
few older adults occupying the highest young adult quartile and a preponderance of older adults in the lowest quartile.

One caveat regarding the present study is that the hearing of the older adults was not specifically tested. It is well established that high-frequency hearing loss begins in the fifth decade of life, particularly in males (Fransen, Lemkens, Van Laer, & Van Camp, 2003), and it is possible that age-related hearing loss contributed to the older adults' performance difficulties in the present study. However, a recent study by Orbelo, Grim, Talbott, and Ross (2005) investigated the impact of such hearing loss on the ability to complete affective prosodic recognition tasks and found that hearing loss was not related to the older adults' performance difficulties. Orbelo et al. concluded that provided an older adult can communicate well in general conversation and the stimuli are presented at an appropriate volume, hearing tests are unnecessary because hearing ability should not affect performance. In the present study, the older adults could all hear well enough to hold a conversation and adjusted the volume to their preferred level on the auditory tasks. In fact, no older adult reported difficulty hearing or understanding the stimuli. Thus, it is unlikely that the performance differences between older and younger adults can be explained by age-related hearing loss.

In summary, our data provide several novel findings. They suggest that older adults are worse than younger adults at recognizing emotions from vocal expressions in isolation and have proportionally greater difficulty simultaneously processing emotional expressions from voices and faces. The results suggest some similarities in the emotions (anger and sadness) older adults find difficult to recognize across facial expressions in isolation, vocal expressions in isolation, and when asked to combine voices and faces.
In addition, our data provide converging evidence that difficulties in emotion recognition are often independent of general cognitive (fluid) decline. Finally, our study is the first to examine whether emotion recognition difficulties are experienced by only a few, or by many, older adults. Our results are consistent with the idea that worsening performance might be experienced by most or all older adults.

REFERENCES

Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. R. (1995). Fear and the human amygdala. Journal of Neuroscience, 15, 5879–5892.
Adolphs, R., Tranel, D., Hamann, S., Young, A., Calder, A., Anderson, A., Phelps, E., Lee, G. P., & Damasio, A. R. (1999). Recognition of facial emotion in nine subjects with bilateral amygdala damage. Neuropsychologia, 37, 1111–1117.
Allen, J. S., Bruss, J., Brown, C. K., & Damasio, H. (2005a). Normal neuroanatomical variation due to age: The major lobes and a parcellation of the temporal region. Neurobiology of Aging, 26, 1245–1260.
Allen, J. S., Bruss, J., Brown, C. K., & Damasio, H. (2005b). Methods for studying the aging brain: Volumetric analyses versus VBM. Neurobiology of Aging, 26, 1275–1278.
Anderson, A. K., & Phelps, E. A. (2000). Expression without recognition: Contributions of the human amygdala to emotional communication. Psychological Science, 11, 106–111.
Bartzokis, G., Beckson, M., Lu, P. H., Nuechterlein, K. H., Edwards, N., & Mintz, J. (2001). Age-related changes in frontal and temporal lobe volumes in men. Archives of General Psychiatry, 58, 461–465.
Blair, R. J. R., & Cipolotti, L. (2000). Impaired social response reversal. A case of 'acquired sociopathy'. Brain, 123, 1122–1141.
Blair, R. J. R., Morris, J. S., Frith, C. D., Perrett, D. I., & Dolan, R. J. (1999). Dissociable neural responses to facial expressions of sadness and anger. Brain, 122, 883–893.
Brosgole, L., & Weisman, J. (1995). Mood recognition across the ages. International Journal of Neuroscience, 82, 169–189.
Buchanan, T. W., Lutz, K., Mirzazade, S., Specht, K., Shah, N. J., Zilles, K., & Jäncke, L. (2000). Recognition of emotional prosody and verbal components of spoken language: An fMRI study. Cognitive Brain Research, 9, 227–238.
Burke, D. M., & Mackay, D. G. (1997). Memory, language, and ageing. Philosophical Transactions of the Royal Society of London B, 352, 1845–1856.
Calder, A. J., Keane, J., Manly, T., Sprengelmeyer, R., Scott, S., Nimmo-Smith, I., & Young, A. W. (2003). Facial expression recognition across the adult life span. Neuropsychologia, 41, 195–202.
Calder, A. J., Keane, J., Lawrence, A. D., & Manes, F. (2004). Impaired recognition of anger following damage to the ventral striatum. Brain, 127, 1958–1969.
Calder, A. J., Lawrence, A. D., & Young, A. W. (2001). Neuropsychology of fear and loathing. Nature Reviews Neuroscience, 2, 352–363.
Calder, A. J., Young, A. W., Rowland, D., Perrett, D. I., Hodges, J. R., & Etcoff, N. L. (1996). Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cognitive Neuropsychology, 13, 699–745.
Cattell, R., & Cattell, A. K. (1959). The culture fair intelligence test. USA: The Institute for Personality and Ability Testing.
Convit, A., Wolf, O. T., de Leon, M. J., Patalinjug, M., Kandil, E., Caraos, C., Scherer, A., Saint Louis, L. A., & Cancro, R. (2001). Volumetric analysis of the pre-frontal regions: Findings in aging and schizophrenia. Psychiatry Research: Neuroimaging, 107, 61–73.
Deary, I. J., Whiteman, M. C., Whalley, L. J., Fox, H. C., & Starr, J. M. (2004). The impact of childhood intelligence on later life: Following up the Scottish
Mental Surveys of 1932 and 1947. Journal of Personality and Social Psychology, 86, 130–147.
Dirnberger, G., Lalouschek, W., Lindinger, G., Egkher, A., Deecke, L., & Lang, W. (2000). Reduced activation of midline frontal areas in human elderly subjects: A contingent negative variation study. Neuroscience Letters, 280, 61–64.
Duncan, J., Burgess, P., & Emslie, H. (1995). Fluid intelligence after frontal lobe lesions. Neuropsychologia, 33, 261–268.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Fine, C., & Blair, R. J. R. (2000). Mini review: The cognitive and emotional effects of amygdala damage. Neurocase, 6, 435–450.
Fischer, H., Sandblom, J., Gavazzeni, J., Fransson, P., Wright, C. I., & Bäckman, L. (2005). Age-differential patterns of brain activation during perception of angry faces. Neuroscience Letters, 386, 99–104.
Fossati, P., Hevenor, S. J., Graham, S. J., Grady, C., Keightley, M. L., Craik, F., & Mayberg, H. (2003). In search of the emotional self: An fMRI study using positive and negative emotional words. American Journal of Psychiatry, 160, 1938–1945.
Fransen, E., Lemkens, N., Van Laer, L., & Van Camp, G. (2003). Age-related hearing impairment (ARHI): Environmental risk factors and genetic prospects. Experimental Gerontology, 38, 353–359.
George, M. S., Parekh, P. I., Rosinsky, N., Ketter, T. A., Kimbrell, T. A., Heilman, K. M., Herscovitch, P., & Post, R. M. (1996). Understanding emotional prosody activates right hemisphere regions. Archives of Neurology, 53, 339–352.
Good, C. D., Johnsrude, I. S., Ashburner, J., Henson, R. N. A., Friston, K. J., & Frackowiak, R. S. J. (2001). A voxel-based morphometric study of ageing in 465 normal adult human brains. NeuroImage, 14, 21–36.
Grady, C. L. (2002). Age-related differences in face processing: A meta-analysis of three functional neuroimaging experiments. Canadian Journal of Experimental Psychology, 56, 208–220.
Grieve, S. M., Clark, C. R., Williams, L. M., Peduto, A. J., & Gordon, E. (2005). Preservation of limbic and paralimbic structures in aging. Human Brain Mapping, 25, 391–401.
Gunning-Dixon, F. M., Gur, R. C., Perkins, A. C., Schroeder, L., Turner, T., Turetsky, B. I., Chan, R. M., Loughhead, J. W., Alsop, D. C., Maldjian, J., & Gur, R. E. (2003). Age-related differences in brain activation during emotional face processing. Neurobiology of Aging, 24, 285–295.
Hobson, R., Ouston, J., & Lee, A. (1988). Emotion recognition in autism: Coordinating faces and voices. Psychological Medicine, 18, 911–923.
Hornak, J., Bramham, J., Rolls, E. T., Morris, R. G., O'Doherty, J., Bullock, P. R., & Polkey, C. E. (2003). Changes in emotion after circumscribed surgical lesions of the orbitofrontal and cingulate cortices. Brain, 126, 1691–1712.
Iidaka, T., Omori, M., Murata, T., Kosaka, H., Yonekura, Y., Tomohisa, O., & Sadato, N. (2001). Neural interaction of the amygdala with the prefrontal and temporal cortices in the processing of facial expressions as revealed by fMRI. Journal of Cognitive Neuroscience, 13, 1035–1047.
Keightley, M. L., Winocur, G., Burianova, H., Hongwanishkul, D., & Grady, C. L. (2006). Age effects on social cognition: Faces tell a different story. Psychology and Aging, 21, 558–572.
Kiss, I., & Ennis, T. (2001). Age-related decline in perception of prosodic affect. Applied Neuropsychology, 8, 251–254.
Kreifelts, B., Ethofer, T., Grodd, W., Erb, M., & Wildgruber, D. (2007). Audiovisual integration of emotional signals in voice and face: An event-related fMRI study. NeuroImage, 37, 1445–1456.
Lamar, M., & Resnick, S. M. (2004). Aging and prefrontal functions: Dissociating orbitofrontal and dorsolateral abilities. Neurobiology of Aging, 25, 553–558.
Liebenthal, E., Binder, J. R., Spitzer, S. M., Possing, E. T., & Medler, D. A. (2005). Neural substrates of phonemic perception. Cerebral Cortex, 15, 1621–1631.
MacPherson, S. E., Phillips, L. H., & Della Sala, S. (2002). Age, executive function, and social decision making: A dorsolateral prefrontal theory of cognitive aging. Psychology and Aging, 17, 598–609.
Mather, M., Canli, T., English, T., Whitfield, S., Wais, P., Ochsner, K., Gabrieli, J. D., & Carstensen, L. L. (2004). Amygdala responses to emotionally valenced stimuli in older and younger adults. Psychological Science, 15, 259–263.
McDowell, C. L., Harrison, D. W., & Demaree, H. A. (1994). Is right hemisphere decline in the perception of emotion a function of aging? International Journal of Neuroscience, 79, 1–11.
Mitchell, R. L. C. (2007). Age-related decline in the ability to decode emotional prosody: Primary or secondary phenomenon? Cognition and Emotion, 21, 1435–1454.
Mitchell, R. L. C., Elliot, R., Barry, M., Cruttenden, A., & Woodruff, P. W. R. (2003). The neural response to emotional prosody, as revealed by functional magnetic resonance imaging. Neuropsychologia, 41, 1410–1421.
Moreno, C., Borod, J., Welkowitz, J., & Alpert, M. (1993). The perception of facial emotion across the adult life span. Developmental Neuropsychology, 9, 305–319.
Moscovitch, M., & Winocur, G. (1992). The neuropsychology of memory and aging. In F. I. M. Craik & T. A. Salthouse (Eds.), The handbook of aging and cognition (pp. 315–372). Hillsdale, NJ: Erlbaum.
Mu, Q., Xie, J., Wen, Z., Weng, Y., & Shuyun, Z. (1999). A quantitative MR study of the hippocampal formation, the amygdala, and the temporal horn of the lateral ventricle in healthy subjects 40 to 90 years of age. American Journal of Neuroradiology, 20, 207–211.
Öngür, D., Ferry, A. T., & Price, J. L. (2003). Architectonic subdivision of the human orbital and medial prefrontal cortex. Journal of Comparative Neurology, 460, 425–449.
Orbelo, D. M., Grim, M. A., Talbott, R. E., & Ross, E. D. (2005). Impaired comprehension of affective prosody in elderly subjects is not predicted by age-related hearing loss or age-related cognitive decline. Journal of Geriatric Psychiatry and Neurology, 18, 25–32.
Oscar-Berman, M., Hancock, M., Mildworf, B., Hutner, N., & Altman Weber, D. (1990). Emotional perception and memory in alcoholism and aging. Alcoholism: Clinical and Experimental Research, 14, 383–393.
Parker, G. J. M., Luzzi, S., Alexander, D. C., Wheeler-Kingshott, C. A. M., Ciccarelli, O., & Lambon Ralph, M. A. (2005). Lateralization of ventral and dorsal auditory-language pathways in the human brain. NeuroImage, 24, 656–666.
Phan, K. L., Wager, T., Taylor, S. F., & Liberzon, I. (2002). Functional neuroanatomy of emotion: A meta-analysis of emotion activation studies in PET and fMRI. NeuroImage, 16, 331–348.
Phillips, L. H., & Allen, R. (2004). Adult aging and the perceived intensity of emotions in faces and stories. Aging: Clinical and Experimental Research, 16, 190–199.
Phillips, L. H., & Henry, J. D. (2005). An evaluation of the frontal lobe theory of cognitive aging. In J. Duncan, L. H. Phillips, & P. McLeod (Eds.), Measuring the mind: Speed, control and age (pp. 191–216). Oxford, UK: Oxford University Press.
Phillips, L. H., MacLean, R. D. J., & Allen, R. (2002). Age and the understanding of emotions: Neuropsychological and sociocognitive perspectives. Journal of Gerontology: Psychological Sciences, 57B, P526–P530.
Posamentier, M. T., & Abdi, H. (2003). Processing faces and facial expressions. Neuropsychology Review, 13, 113–143.
Pourtois, G., de Gelder, B., Bol, A., & Crommelinck, M. (2005). Perception of facial expressions and voices and of their combination in the human brain. Cortex, 41, 49–59.
Raithel, V., & Hielscher-Fastabend, M. (2004). Emotional and linguistic perception of prosody. Folia Phoniatrica et Logopaedica, 56, 7–13.
Raz, N. (2000). Aging of the brain and its impact on cognitive performance: Integration of structural and functional findings. In F. I. M. Craik & T. A. Salthouse (Eds.), The handbook of aging and cognition (pp. 1–90). Mahwah, NJ: Erlbaum.
Raz, N., Gunning, F. M., Head, D., Dupuis, J. H., McQuain, J., Briggs, S. D., Loken, W. J., Thornton, A. E., & Acker, J. D. (1997). Selective aging of the human cerebral cortex observed in vivo: Differential vulnerability of the prefrontal gray matter. Cerebral Cortex, 7, 268–282.
Raz, N., Lindenberger, U., Rodrigue, K. M., Kennedy, K. M., Head, D., Williamson, A., Dahle, C., Gerstorf, D., & Acker, J. D. (2005). Regional brain changes in aging healthy adults: General trends, individual differences and modifiers. Cerebral Cortex, 15, 1676–1689.
Resnick, S. M., Goldszal, A. F., Davatzikos, C., Golski, S., Kraut, M. A., Metter, E. J., Bryan, R. N., & Zonderman, A. B. (2000). One-year changes in MRI brain volumes in older adults. Cerebral Cortex, 10, 464–472.
Resnick, S. M., Pham, D. L., Kraut, M. A., Zonderman, A. B., & Davatzikos, C. (2003). Longitudinal magnetic resonance imaging studies of older adults: A shrinking brain. Journal of Neuroscience, 23, 3295–3301.
Rouaux, N., & Juhel, J. (1995). Aging, working memory capacity, information-processing speed and cognitive performance: A comparative study. Bulletin de Psychologie, 49, 1–3.
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H. (2008). A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neuroscience and Biobehavioral Reviews, 32, 863–881.
Salthouse, T. A. (2000). Steps towards the explanation of adult age differences in cognition. In T. J. Perfect & E. A. Maylor (Eds.), Models of cognitive aging (pp. 19–49). Oxford, UK: Open University Press.
Sander, D., Grandjean, D., Pourtois, G., Schwartz, S., Seghier, M. L., Scherer, K. R., & Vuilleumier, P. (2005). Emotion and attention interactions in social cognition: Brain regions involved in processing anger prosody. NeuroImage, 28, 848–858.
Schirmer, A., & Kotz, S. A. (2006). Beyond the right hemisphere: Brain mechanisms mediating vocal emotional processing. Trends in Cognitive Sciences, 10, 24–30.
Scott, S. K., & Wise, R. J. S. (2004). The functional neuroanatomy of prelexical processing in speech perception. Cognition, 92, 13–45.
Sprengelmeyer, R., Rausch, M., Eysel, U. T., & Przuntek, H. (1998). Neural structures associated with recognition of facial expressions of basic emotions. Proceedings of the Royal Society of London B: Biological Sciences, 265, 1927–1931.
Sullivan, S., & Ruffman, T. (2004a). Social understanding: How does it fare with advancing years? British Journal of Psychology, 95, 1–18.
Sullivan, S., & Ruffman, T. (2004b). Emotion recognition deficits in the elderly. International Journal of Neuroscience, 114, 94–102.
Tan, U., Tan, M., & Atatuerk, U. (1998). The curvilinear correlations between the total testosterone levels and fluid intelligence in men and women. International Journal of Neuroscience, 94, 55–61.
Tessitore, A., Hariri, A. R., Fera, F., Smith, W. G., Das, S., Weinberger, D. R., & Mattay, V. S. (2005). Functional changes in the activity of brain regions underlying emotion processing in the elderly. Psychiatry Research: Neuroimaging, 139, 9–18.
Tisserand, D. J., Pruessner, J. C., Arigita, E. J. S., van Boxtel, M. P. J., Evans, A. C., Jolles, J., & Uylings, H. B. M. (2002). Regional frontal cortical volumes decrease differentially in aging: An MRI study to compare volumetric approaches and voxel-based morphometry. NeuroImage, 17, 657–669.
Tisserand, D. J., Visser, P. J., van Boxtel, M. P. J., & Jolles, J. (2000). The relation between global and limbic brain volumes on MRI and cognitive performance in healthy individuals across the age range. Neurobiology of Aging, 21, 569–576.
Van Hoesen, G. W., Parvizi, J., & Chu, C.-C. (2000). Orbitofrontal cortex pathology in Alzheimer's disease. Cerebral Cortex, 10, 243–251.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale—Revised. New York: Psychological Corporation.
West, R. (2000). In defense of the frontal lobe hypothesis of cognitive aging. Journal of the International Neuropsychological Society, 6, 727–729.
Wildgruber, D., Pihan, H., Ackermann, H., Erb, M., & Grodd, W. (2002). Dynamic brain activation during processing of emotional intonation: Influence of acoustic parameters, emotional valence, and sex. NeuroImage, 15, 856–869.
Wildgruber, D., Riecker, A., Hertrich, I., Erb, M., Grodd, W., Ethofer, T., & Ackermann, H. (2005). Identification of emotional intonation evaluated by fMRI. NeuroImage, 24, 1233–1241.
Wong, B., Cronin-Golomb, A., & Neargarder, S. (2005). Patterns of visual scanning as predictors of emotion identification in normal aging. Neuropsychology, 19, 739–749.
Wright, C. I., Wedig, M. M., Williams, D., Rauch, S. L., & Albert, M. S. (2006). Novel fearful faces activate the amygdala in healthy young and elderly adults. Neurobiology of Aging, 27, 361–374.
Zimmerman, M. E., Brickman, A. M., Paul, R. H., Grieve, S. M., Tate, D. F., Gunstad, J., Cohen, R. A., Aloia, M. S., Williams, L. M., Clark, C. R., Whitford, T. J., & Gordon, E. (2006). The relationship between frontal gray matter volume and cognition varies across the healthy adult lifespan. American Journal of Geriatric Psychiatry, 14, 823–833.