NeuroImage 103 (2014) 20–32
Mood-dependent integration in discourse comprehension: Happy and sad moods affect consistency processing via different brain networks

Giovanna Egidi a,⁎, Alfonso Caramazza a,b

a Center for Mind/Brain Sciences (CIMeC), University of Trento, Via delle Regole 101, 38123 Mattarello, TN, Italy
b Cognitive Neuropsychology Laboratory, Harvard University, 33 Kirkland St., Cambridge, MA 02138, USA
Article info

Article history: Accepted 5 September 2014. Available online 16 September 2014.
Keywords: Cognition–emotion interaction; Context; Inconsistency detection; Individual differences; Language network
Abstract

According to recent research on language comprehension, the semantic features of a text are not the only determinants of whether incoming information is understood as consistent. Listeners' pre-existing affective states play a crucial role as well. The current fMRI experiment examines the effects of happy and sad moods during comprehension of consistent and inconsistent story endings, focusing on brain regions previously linked to two integration processes: inconsistency detection, evident in stronger responses to inconsistent endings, and fluent processing (accumulation), evident in stronger responses to consistent endings. The analysis evaluated whether differences in the BOLD response for consistent and inconsistent story endings correlated with self-reported mood scores after a mood induction procedure. Mood strongly affected regions previously associated with inconsistency detection. Happy mood increased sensitivity to inconsistency in regions specific for inconsistency detection (e.g., left IFG, left STS), whereas sad mood increased sensitivity to inconsistency in regions less specific for language processing (e.g., right medFG, right SFG). Mood affected regions involved in the accumulation of information more weakly. These results show that mood can influence activity in areas mediating well-defined language processes, and they highlight that integration is the result of context-dependent mechanisms. The finding that language comprehension can involve different networks depending on people's mood highlights the brain's ability to reorganize its functions.

© 2014 Elsevier Inc. All rights reserved.
Introduction

The effect of mood, in particular happy and sad moods, is pervasive across several aspects of cognition (for reviews, see Clore and Huntsinger, 2007; Martin and Clore, 2001), including language processes at the sentence and discourse level. This effect of mood on language has been shown in behavioral and ERP studies (behavioral: e.g., Beukeboom and Semin, 2006; Egidi and Gerrig, 2009; ERP: e.g., Chwilla et al., 2011; Egidi and Nusbaum, 2012). In these studies, particular attention has been given to the comprehension of information that is congruent or incongruent with a prior sentence, discourse, or mood context (Chwilla et al., 2011; Egidi and Gerrig, 2009; Egidi and Nusbaum, 2012; Federmeier et al., 2001). This research has provided converging evidence that consistency processing in comprehension is highly dependent on comprehenders' mood; that is, whether a sentence or a word is understood as consistent with prior context depends not solely on the features of the text, but also on the affective state comprehenders bring to language understanding.
While this research has shown that mood affects the processing of consistency, the neuroanatomical basis of this effect is not known. To date, no neuroimaging study has examined the brain systems that underlie mood's influence on integration processes during language comprehension. The present study constitutes the first attempt at bridging this gap. Specifically, this study examines whether, during the comprehension of information that is consistent or inconsistent with prior discourse context, the brain regions whose activity is modulated by happy and sad moods are also those involved in the integration of linguistic stimuli. We focused our research on happy and sad moods, which are known to influence activity in a set of brain regions involved in multiple cognitive tasks (for a review on the interactions between emotion and cognition, see Dolcos et al., 2011).1 Dorsolateral prefrontal cortex (dlPFC), for example, is differentially sensitive to happy and sad emotional states (Habel et al., 2005). It is also sensitive to happy and sad moods during the performance of working memory tasks and executive functions in general (Aoki et al., 2011; Mitchell and Phillips, 2007; Robinson et al.,
⁎ Corresponding author at: Center for Mind/Brain Sciences (CIMeC), University of Trento, Via delle Regole 101, 38123 Trento, Italy. Fax: +39 0461 283066. E-mail address: [email protected] (G. Egidi).
1 In this paragraph and the next, which describe brain regions' sensitivity to happiness and sadness, we do not discuss the direction of the regions' sensitivity. The goal of this section is to introduce the reader to the regions that show a differentiation between the two affective states across different tasks and the literature reported is based on results which may show very different patterns.
http://dx.doi.org/10.1016/j.neuroimage.2014.09.008 1053-8119/© 2014 Elsevier Inc. All rights reserved.
2010) and to working memory tasks with emotional stimuli (Grimm et al., 2012). Medial frontal areas also respond differently under happiness and sadness states (Vytal and Hamann, 2010). In particular, orbitofrontal and striatal regions are particularly sensitive to happiness, as they are to positive rewards (Britton et al., 2006; Mitterschiffthaler et al., 2007; Schaefer et al., 2003). More generally, it has been suggested that left prefrontal areas are particularly sensitive to positive affect, whereas right prefrontal areas to negative affect (Davidson, 2003; Davidson and Irwin, 1999). Central and posterior medial areas, posterior insula and portions of the superior, middle, and inferior temporal gyri (STG, MTG, ITG), and parahippocampal gyrus (PHG) are also sensitive to happiness and sadness, both in the processing of these emotions per se, and in performing cognitive tasks under the influence of these affective states (Dolcos et al., 2011; Habel et al., 2005; Mitterschiffthaler et al., 2007; Vytal and Hamann, 2010). While occipital and parietal regions seem to be less sensitive to happiness and sadness (Phan et al., 2002; Vytal and Hamann, 2010), cerebellar regions and subcortical areas including the amygdala, hippocampus, thalamus, caudate and putamen are also sensitive to the two affective states (Britton et al., 2006; Habel et al., 2005; Mitterschiffthaler et al., 2007; Phan et al., 2002; Vytal and Hamann, 2010). The above results have particular relevance for understanding the neurobiological mechanisms of language comprehension, because several of the cortical regions sensitive to happiness and sadness mentioned above are also involved in processing consistency in language comprehension. 
Specifically, much of the neuroimaging literature on language comprehension has emphasized how the comprehension of inconsistencies (both at the sentence and discourse level) is associated with activation in a network that includes inferior frontal gyrus (IFG), medial areas of the prefrontal cortex (medFG), dlPFC, a middle region of the superior temporal sulcus, STG and MTG, the anterior temporal lobe (aTL), and posterior midline regions such as precuneus and posterior cingulate gyrus, especially on the left (Deen and McCarthy, 2010; Ferstl and von Cramon, 2001, 2002; Ferstl et al., 2008; Hasson et al., 2007; Kuperberg et al., 2003). The pattern of activation in this network is usually of greater signal change for inconsistent than consistent information. As a consequence, this network has been associated with detection of inconsistencies and subsequent attempt to reconcile coherence breaks during integration (which may occur both at the sentence and the discourse level). Given the overlap between several of these regions and those sensitive to happiness and sadness mentioned earlier, we expected that happy and sad mood would modulate the functioning of the frontal, temporal, and medial regions in this network. While we aimed to identify networks where mood affects the processing of inconsistencies, it was beyond the scope of this fMRI experiment to tease apart the detection of inconsistencies from the attempt at reconciling them. For this reason, we refer to them jointly as inconsistency processing or processing of inconsistencies. The difficulty in teasing the two apart is also due to the fact that the inferencing likely involved in reconciliation relies on regions linked to different aspects of language processing (e.g., IFG, MTG, AG; Mason and Just, 2004, 2011) and to semantic processing more generally (e.g., Binder et al., 2009). It is therefore difficult to tell which specific role these regions have in this context. 
Beyond examining the modulation of inconsistency, we also examined whether mood affects activity in a set of regions that show stronger responses to consistent endings. We have elsewhere proposed and provided evidence for a neural network involved in fluent processing, which is associated with the inverse pattern of activation, that is, greater signal change for consistent than inconsistent information in regions with above-baseline activation (Egidi and Caramazza, 2013). This pattern is less well understood and is found in a network that includes precentral gyrus (PreCG), posterior superior frontal gyrus (SFG), middle postcentral gyrus (PostCG), angular gyrus (AG), superior parietal lobule (SPL), central cingulate gyrus (CinG), anterior cuneus and the anterior
parieto-occipital sulcus (POS). On the left, this network also includes a portion of the ventromedial PFC (vmPFC), and temporal areas such as posterior MTG, middle temporal sulcus (MTS) and ITG. Based on corroborating evidence (Ferstl and von Cramon, 2001, 2002; Hasson et al., 2007; Perani et al., 1996, 1998; Vingerhoets et al., 2003), we have suggested that activity in these areas indicates an integration process of continuous, fluent, monotonic updating of a knowledge base (Egidi and Caramazza, 2013), which we will call here accumulation of information. Some of the studies that have found a pattern of increased activation for consistent information in these regions have also found the opposite pattern in the inconsistency processing network (e.g., Hasson et al., 2007). Based on this evidence, we further propose that both inconsistency processing and accumulation may occur during integration. Our study tests the effect of mood on both the inconsistency processing and the accumulation networks. Based on current knowledge of the brain regions that are sensitive to happiness and sadness, we expect that moods will affect regions of this network as well. Specifically, we expect modulation in frontal and temporal areas, but less so in parietal and occipital regions.

From a functional point of view, happiness and sadness influence cognition in different ways. A general principle is that mood promotes processing of information congruent with it in valence (e.g., Egidi and Nusbaum, 2012; Fiedler, 2001; Forgas and Locke, 2005). In addition, beyond this preference for congruent content, mood also influences processing style. Happy mood is thought to promote more flexible and creative processing that relies on broad knowledge structures and heuristics, whereas sad mood promotes more systematic and careful processing of external stimuli.
As a consequence, happy mood promotes a more comprehensive and top-down processing style, whereas sad mood promotes a narrower and more bottom-up approach to information processing (Bless, 2000; Clore and Huntsinger, 2007; Fiedler, 2001). That said, the processing strategy people employ largely depends on task demands, and the influence of mood is only secondary: Mood acts mostly as a modulator (Forgas, 1995, 2001). These conclusions have been reached mostly by studying social judgments and memory processes (for reviews, see Clore and Huntsinger, 2007; Martin and Clore, 2001), and the picture for language processing is still incomplete. With respect to the processes examined here, both inconsistency processing and accumulation rely on careful processing of external stimuli. However, in monitoring the fit of incoming information with the context of the discourse, inconsistency processing requires more flexibility, a more comprehensive view of the discourse context and, in general, more top-down processing. It requires anticipating future incoming information to some degree, and then detecting, signaling, and perhaps even overcoming mismatches. Predictive processing at different levels of language comprehension has been extensively shown in behavioral, ERP and, to some extent, fMRI studies (e.g., Federmeier, 2007; Gold et al., 2006; Hagoort and Van Berkum, 2007; Kamide, 2008; Lau et al., 2013; Rommers et al., 2013). Although the specificity of the predictions and the mechanisms by which these predictions occur are still under debate (e.g., Van Petten and Luka, 2012), there is wide agreement that prior context allows anticipating incoming information to a certain degree. Unlike inconsistency processing, accumulation reflects monotonic integration of contextually consistent information as it arrives. It shows little sensitivity to mismatching information and does not require preparation for the most likely incoming information.
It responds to information that is easily assimilated, and ignores the rest. Given the features of the two processes, we hypothesized that happy mood is more likely to enhance the system's ability to detect inconsistencies, whereas sad mood is more likely to enhance the system's ability for accumulation. In our fMRI experiment, we first induced participants to experience a happy or a sad mood, and then asked them to listen to short stories that ended with either a consistent or an inconsistent sentence (see Table 1 for sample stimuli). We tested our hypotheses with a whole
brain analysis that, for each voxel, correlated the difference in activity associated with the comprehension of consistent and inconsistent story endings to participants' mood scores. This analysis revealed the existence of two networks modulated by mood: One in which inconsistent endings were associated with increased activity as compared to consistent endings (inconsistency processing) and one in which they were associated with decreased activity (accumulation). Our experiment was designed to also test whether mood can modulate integration processes in the absence of coherence breaks. To this end, we performed an additional whole brain analysis that, for each voxel, correlated the activity associated with the comprehension of consistent text with the participant's mood score. These texts were the bodies of the stories that preceded the consistent or inconsistent endings compared in the previous analysis, and therefore included only consistent information with minimal integration demands. We expected mood modulation of areas involved in basic semantic and integrative linguistic tasks, in particular frontal and temporal areas (e.g., SFG, IFG, MTG, STG; Price, 2010, 2012). It is crucial, however, to appreciate the difference between this analysis and the one on the story endings. The analysis on the story bodies examines the relation between mood and activation magnitude to a simple text block in a way commensurate with examining the relation between mood and response magnitudes to stimuli of any type whose accrual does not require sensitivity to their consistency (or lack thereof) with a narrative structure. In contrast, the analysis of the story endings probes for an interaction between mood and the differential processing of consistent and inconsistent linguistic information with respect to a plot. 
It therefore probes for a specific effect of mood on processes that most often occur in language comprehension (but may also occur in other communicative media, e.g., Cohn et al., 2012; Sitnikova et al., 2008). For this reason, we did not expect that the regions modulated by mood in the analysis of the story bodies would necessarily overlap with either of the two networks identified by the analysis of the story endings.
Materials and methods

Participants

Twenty-eight native Italian speakers participated in the fMRI study: 14 were randomly assigned to the happy mood-induction group (M age = 22.43; SD = 3.74; m = 4) and 14 to the sad mood-induction group (M age = 25.36; SD = 4.80; m = 4). They were all right-handed, had good or corrected vision, good hearing, no mood or attention disorders, and did not take psychotropic medications. Prior to the scan they underwent a medical interview to evaluate other criteria of exclusion from the fMRI procedure (e.g., presence of metal in the body, claustrophobia, pregnancy). The experimental procedures were approved by the Ethics Committee for Research Involving Human Subjects of the University of Trento. All participants signed a
written consent before beginning the experiment and were fully debriefed at the end.
Stimuli and design

The stimuli consisted of 20 stories in Italian, 6 to 10 sentences long, describing simple events. All ending sentences were matched for length (between 12 and 14 syllables, M = 13.5), syntactic structure, lexical overlap with prior context, and number of content words (M = 3.4). We did not a priori match the endings for word frequency, but we performed a post hoc check of the frequency of the content words in the endings of half of the stories (randomly chosen), using the CoLFIS database (Bertinetto et al., 2005). Their frequency was highly similar (M(consistent) = 448, M(inconsistent) = 498, t(19) = .27, p = .79). A series of normings also verified that while the two types of endings differed in consistency with respect to prior context, they were still highly similar in (1) valence, (2) mismatch in valence with prior context, (3) mismatch in arousal with prior context, and (4) imaginability. We took these steps so that it would be possible to compare BOLD activity associated with the comprehension of the endings in the different moods. The results of these normings are reported in Section A of the Supplementary materials. Table 1 shows an example of the stories we used. In addition to the main experimental materials, we wrote 20 stories similar in length and structure to the experimental narratives, which we used as practice materials and fillers. We assigned the experimental stories to two lists so that each participant would be presented with only one version of each story. Each list contained 20 experimental stories, 20 stories unrelated to the current study, and 12 filler stories. In each list, 10 experimental stories were followed by a consistent ending and 10 by an inconsistent ending. The order of all the stories in the lists was random. Each of the two lists was presented to half of the participants in each mood group, randomly chosen. Presentation was auditory.
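The post hoc frequency check described above can be sketched as a paired t-test over per-item content-word frequencies. The frequency values below are simulated for illustration; they are not the study's actual CoLFIS counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 20  # df = n_items - 1 = 19, matching the reported t(19)

# Simulated mean CoLFIS frequencies for content words in the two ending types
freq_cons = rng.normal(448, 150, n_items)
freq_inco = freq_cons + rng.normal(50, 150, n_items)

# Paired t statistic: endings are matched within item
d = freq_inco - freq_cons
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n_items))
```

A paired test is the natural choice here because the two ending types were written for the same stories and matched on length and structure.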
The speaker who recorded the stories was blind to the purpose of the experiment, and recorded the story endings separately from the experimental stories, so as not to bias intonation in reading the story bodies as a function of the endings. We also used 4 video clips of about 8 min each to induce happy and sad moods. They were extracts from movies or shorts. The happy videos were humorous cartoons and the sad videos were about negative life-changing experiences (e.g., a death in the family). There was no overlap between the subjects of the videos and those of the experimental stories. These videos were also normed to ensure that they would elicit happy and sad moods. The results of these normings are also reported in Section A of the Supplementary materials. The design included one within-participant variable with two levels: Ending Consistency (Consistent, Inconsistent) and one between-participant variable with two levels: Mood (Happy, Sad). To allow maximal power in the analysis, a continuous version of the mood variable was created, based on the mood scores provided by each participant
Table 1
Sample of stories used in the experiment. The two types of endings differed in consistency with respect to prior context, but were highly similar in (1) valence, (2) mismatch in valence with prior context, (3) mismatch in arousal with prior context, and (4) imaginability (see Section A of the Supplementary materials for more details).

Story 1: In his tent, Garibaldi wrote the last few lines in his diary. Then he finished the letters for his family that he had started writing in the morning. Then he wore his red shirt, prepared his weapons, and called his second in command. He felt victory in his pocket and was impatient to throw himself into battle.
Consistent ending: He gave the order to attack right away.
Inconsistent ending: He gave the order to wait longer.

Story 2: During the flight, the stewards distributed drinks and lunches. They then strolled with the carts of the duty-free products so that interested passengers would have the chance to see them and buy them. The airplane was flying over Greece. It was a beautiful day and there were no clouds. From the loudspeaker the pilot announced,
Consistent ending: “We will certainly arrive on time.”
Inconsistent ending: “Prepare for emergency maneuvers.”
on a questionnaire. This scale was correlated against participants' BOLD measures.
Procedure for the fMRI scan

In the first part of the experiment, we acquired fMRI data while participants listened to the experimental materials. Fig. 1 shows a schematic of the procedure for both experimental and filler stories in the scanner. To ensure that participants would process the experimental and filler stories similarly, we used the same procedure for their presentation, and the filler stories' procedure only departed from that of the experimental ones once the story had been presented in its entirety: Filler stories were followed by a comprehension question requiring a yes/no response by a button press, and experimental stories were followed by a visual stimulus indicating that a button press was required. We used comprehension questions in the filler stories to ensure that participants would pay attention to the content of the stories throughout the experiment, and button presses to ensure that participants would follow the progress of the experiment. Comprehension questions were not asked after experimental trials to avoid contamination of the hemodynamic response of the critical final sentences with the processing associated with responding to a question. The procedure was identical to that used in Egidi and Caramazza (2013). Before entering the scanner, participants were given instructions and practiced the procedure on a block of 4 filler stories. Participants responded using a two-key button box. Half of the participants used their left hand, and half the right hand. Each trial began with a warning sound and the appearance of a cross in the center of the screen,
followed by a voice narrating the story. Participants were instructed to pay attention to the story as to a friend's voice recounting an anecdote. At the end of each story, the cross disappeared and participants were instructed to do one of two things. They were told that in some cases, a question about the story's content would appear on the screen for 4 s and that they should answer Yes or No by pressing one of two keys. In other cases, they were told that two circles would appear on the screen, and in that case they could press either key. These circles too were displayed for 4 s. When this screen or the question screen disappeared, participants saw a gray screen for several seconds, during which they were instructed to rest. We also informed participants of the structure of the experiment: We told them that they would watch a video clip for about 8 min, followed by two blocks of stories of 10 min each, followed by another video clip for 8 min, and two more blocks of stories of 10 min each. The video clips were presented during the acquisition of the structural images. Participants were asked to pay attention to the video clips and stories equally, and were told that they would be asked questions about both once they finished the part of the experiment in the scanner. Before presenting the second and fourth blocks of stories, to refresh participants' induced mood we showed a film still (single frame) of the video clip that participants had most recently watched followed by the question: “What was this clip about? We will ask you to give us your answer later”. We did not inform participants about the actual purpose of the videos until the end of the experiment, because we wanted to avoid directing participants' attention to their affective states and thus trigger strategic processing that reduces the effects of mood (or introduces noise). We also wanted to avoid obtaining biased replies to the questions assessing participants' mood after the fMRI scan. 
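The trial structure described above can be sketched as an ordered event list. Only the 4 s response screen duration is stated in the text; the story and rest durations below are hypothetical placeholders.

```python
# Sketch of one trial's event sequence; durations are in seconds.
RESPONSE_SCREEN_S = 4.0  # stated in the text for both screen types

def trial_events(story_s: float, is_filler: bool, rest_s: float):
    """Return the ordered (label, duration) events of one trial."""
    screen = ("yes/no comprehension question" if is_filler
              else "two circles (press either key)")
    return [
        ("warning sound + fixation cross onset", 0.5),  # placeholder duration
        ("auditory story (cross on screen)", story_s),   # varies per item
        (screen, RESPONSE_SCREEN_S),
        ("rest (gray screen)", rest_s),                  # placeholder duration
    ]

schedule = trial_events(story_s=30.0, is_filler=False, rest_s=6.0)
```

The same function generates experimental and filler trials, mirroring the design choice that the two trial types diverge only after the story ends.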
Procedure for the subsequent behavioral part
Fig. 1. Procedure for auditory story presentation in the MR scanner. The procedure was identical during the presentation of experimental and filler stories. It differed only after the stories had ended: Comprehension questions followed the presentation of filler stories to verify that participants listened to the content of the stories; button presses followed the presentation of the experimental stories to maintain vigilance over the study. Critical window: 4.4 s between the onset of the ending sentence and the onset of the button press screen.
After the fMRI experiment, participants completed a questionnaire that contained questions about the videos and the experimental stories, whose purpose was to evaluate the success of the mood induction. Participants first described the general subject and the location where the events of each video clip took place. This procedure was meant to revive the mood induction. Next, in a cued recall paradigm, participants were presented with the text of each of the stories and were asked to write how the story ended. We used these memory protocols to evaluate the degree to which participants in both groups paid attention to the stimuli presented. As a final task, participants filled out a survey about the experiment they had just completed and some personal preferences. This questionnaire contained several questions assessing the success of the mood induction. They were embedded within filler questions to minimize demand effects on the mood ratings. The critical questions asked participants whether they were feeling happy, sad, in a positive mood, or a negative mood at four times during the experiment; namely, when they arrived at the lab, immediately after watching the first clip, immediately after watching the second clip, and while they were completing the questionnaire itself. We used this combination of questions to ensure that the elicited mood was colored by happiness or sadness (and not, for example, by hope or anger) and that it was in fact a mood (a generally positive or negative affective state), rather than an intense emotion. Participants provided responses on a scale ranging from 1 (not at all) to 9 (very much). This mood validation method was the same we used in prior work (Egidi and Nusbaum, 2012). Finally, during debriefing we interviewed participants to assess whether they had had any intuition about the experimental mood manipulation. None of them had. 
fMRI data acquisition

We acquired fMRI data with a Bruker MedSpec 4 T scanner with an eight-channel head coil at the Center for Mind/Brain Sciences (CIMeC)
of the University of Trento. The functional EPI sequences consisted of 37 axial T2-weighted functional images in ascending interleaved order covering the entire brain, slightly tilted to run parallel to the AC–PC line (TR = 2200 ms; isotropic voxel size = 3 mm, gap size between slices = .45 mm, TE = 33 ms, FA = 75°, FoV = 192 × 192 mm). Each participant completed 4 runs of 270 volumes. Each functional run was preceded by an additional brief scan that measured the point-spread function (PSF) of the acquired sequence and allowed correcting for the distortions that the inhomogeneity of the magnetic field might induce in certain regions (Zaitsev et al., 2004). The anatomical images consisted of two high-resolution images acquired with a T1-weighted MP-RAGE sequence that were averaged to maximize the quality of the structural image. These were 176 sagittal images (TR = 2700 ms, isotropic voxel size = 1 mm, TE = 4 ms, FA = 7°, FoV = 256 × 254 mm). One structural image was acquired at the beginning of the experiment, and one after participants had completed two experimental runs.

fMRI data processing and first-level analysis

We performed functional data analyses using AFNI's procedures (Cox, 1996). For each participant's functional data, we first removed the initial 11 volumes of each run, which we had acquired before the beginning of the experimental task, to allow for T1 stabilization. After registration of all runs to a reference time point in the first run, we performed motion correction, applied a Gaussian blur with a full width at half maximum (FWHM) of 6.0 mm, and mean-normalized the time series to obtain percent signal change values. Analysis of time series was done using simultaneous multiple regression implemented with AFNI's 3dDeconvolve regression routines.
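The mean-normalization step can be sketched as follows: each voxel's time series is expressed as percent signal change around its run mean. This is a simplified stand-in for AFNI's internal scaling, shown on simulated data.

```python
import numpy as np

def to_percent_signal_change(ts):
    """ts: (n_voxels, n_timepoints) raw BOLD time series for one run.

    Returns the same array expressed as percent signal change relative
    to each voxel's mean over the run.
    """
    baseline = ts.mean(axis=1, keepdims=True)
    return 100.0 * (ts - baseline) / baseline

rng = np.random.default_rng(1)
raw = 1000 + rng.normal(0, 10, size=(5, 270))  # 5 voxels, 270 volumes per run
psc = to_percent_signal_change(raw)
# Each voxel's normalized series now has mean 0, in percent units
```

Normalizing per voxel and per run puts Beta values from the regression on a common, interpretable scale across participants.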
Regressors were waveforms with similarity to the hemodynamic response, generated by convolving a gamma-variate function with the onset time and duration of the endings and with the onset time and duration of the stories. There were three regressors of interest: one for each experimental condition (the two types of endings) and one whose timing matched the portion of the story that preceded the ending and the timing of the filler stories (which were identical in structure and therefore were not modeled separately). The regression solution returned a Beta value for each regressor. These Beta values were based on partial correlations modeling whether the presentation of each of the elements of interest accounted for variance above and beyond the variance associated with the other events occurring in the experiment. We modeled all story bodies with a single regressor, on the assumption that story content was quite similar up to the final sentences. Other regressors reflected factors of no interest and included the features of the other experimental stories irrelevant to the current experiment, button presses, comprehension questions to the filler stories, the 6
motion parameters estimated during head motion correction, and 1st–4th order polynomial trends fitted for each run separately to account for instrumentation-induced drifts in the signal. We also removed from the regression functional acquisitions associated with major head movement (> 2 mm) and acquisitions that presented outlier values in a large number of voxels (> 2000). This removal accounted for 2.9% of the data. For each participant, we co-registered and averaged the two MP-RAGE images. We then transposed both the MP-RAGE and the results of the regressions to Talairach space, where we conducted a second-level group analysis.

fMRI second-level analyses

The main analysis of the study aimed to identify brain regions where the difference between BOLD responses to consistent and inconsistent endings was modulated by participants' mood. To this end, for each voxel we first derived a difference measure (delta = inconsistent minus consistent). We conducted an analysis in three steps, as schematically shown in Fig. 2. As a first step, we conducted a whole brain voxelwise analysis where, for each voxel, we regressed the Beta values of the difference between inconsistent and consistent endings against participants' mood scores (see next section for a detailed description of the mood scores). In a second step, we split the regions identified in Step 1 into two subsets: one set of regions where delta was positive (i.e., INCO > CONS) and correlated with mood, and one set of regions where delta was negative (i.e., CONS > INCO) and correlated with mood. This split aimed to offer interpretations for regions linked with inconsistency processing and accumulation, as explained in the Introduction section. As a third and final step, we split the sets of regions from Step 2 depending on their correlation with mood.
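The logic of the three-step ending analysis can be sketched per voxel: form delta = Beta(inconsistent) minus Beta(consistent), regress delta on mood, then classify voxels by the sign of delta and the direction of the mood relation. Ordinary least squares stands in here for the robust regression used in the study, and all data (including the happy-positive/sad-negative mood coding) are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox = 28, 6
mood = rng.uniform(-4, 4, n_subj)             # hypothetical coding: happy > 0, sad < 0
beta_inco = rng.normal(0, 1, (n_subj, n_vox))  # Beta values, inconsistent endings
beta_cons = rng.normal(0, 1, (n_subj, n_vox))  # Beta values, consistent endings

delta = beta_inco - beta_cons                  # Step 1 input: per-voxel difference

labels = []
for v in range(n_vox):
    slope = np.polyfit(mood, delta[:, v], 1)[0]                    # mood-delta slope
    sign = "INCO>CONS" if delta[:, v].mean() > 0 else "CONS>INCO"  # Step 2: sign of delta
    direction = "happier" if slope > 0 else "sadder"               # Step 3: correlation direction
    labels.append((sign, direction))
```

In the actual analysis the classification applies to clusters surviving whole-brain thresholding, not to every voxel, but the two-way split (sign of delta × direction of correlation) is the same.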
We divided the regions where the INCO > CONS contrast correlated with mood into two subsets: one subset included the regions in which the contrast increased with increasing happiness, and the other included the regions in which the contrast increased with increasing sadness. Analogously, we split the regions where the CONS > INCO contrast correlated with mood into two subsets: those in which the contrast increased with increasing happiness and those in which it increased with increasing sadness. In summary, this analysis differentiated the impact of mood based on the sign of delta and the direction of the correlation. Finally, we performed an analysis in which Mood was modeled as a between-participant variable. It is reported in detail in the Supplementary materials (the results are in Table A and Figure A). We note that this analysis is not statistically independent of the interindividual difference analysis based on the robust regression. It is important to note that, in this experiment, it is not possible to perform an analysis identifying the inconsistency processing network
Fig. 2. Schematic of the three stages of the story ending analysis.
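The three-step classification schematized in Fig. 2 can be illustrated in code. The region names, delta values, and correlation coefficients below are hypothetical placeholders, chosen only to show how the sign of delta and the direction of the mood correlation jointly assign a region to one of the four subsets; they are not the study's data.

```python
# Sketch of the three-step region classification: Step 1 yields regions whose
# delta (inconsistent minus consistent Beta) correlates with mood; Steps 2-3
# split them by the sign of delta and the direction of the correlation.
# All values are hypothetical illustrations.

regions = {
    "left IFG":  {"delta":  0.8, "mood_r":  0.6},   # INCO > CONS, scales with happiness
    "right SFG": {"delta":  0.5, "mood_r": -0.4},   # INCO > CONS, scales with sadness
    "right ACC": {"delta": -0.3, "mood_r":  0.5},   # CONS > INCO, scales with happiness
    "left IPL":  {"delta": -0.6, "mood_r": -0.7},   # CONS > INCO, scales with sadness
}

def classify(delta, mood_r):
    """Step 2: split by sign of delta; Step 3: split by correlation direction
    (positive mood scores index happiness, negative scores index sadness)."""
    process = "inconsistency detection" if delta > 0 else "accumulation"
    mood = "happiness" if mood_r > 0 else "sadness"
    return process, mood

for name, r in regions.items():
    process, mood = classify(r["delta"], r["mood_r"])
    print(f"{name}: {process}, contrast increases with {mood}")
```

The four resulting subsets correspond to the four region groups reported in the Results section.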
or the accumulation network independent of the mood modulation: if happy and sad moods affect different processes and networks in contrasting ways, such an analysis would not produce reliable effects. An independent, second analysis identified brain regions where activity during comprehension of consistent information was modulated by participants' mood. To this end, we conducted a whole-brain voxel-wise analysis where, for each voxel, we regressed the Beta values capturing the processing of the story bodies that preceded the endings against participants' mood scores. For all correlation analyses we used a robust regression method (as implemented in the R package robustbase; Rousseeuw et al., 2012) that is less sensitive to outliers than least-squares regression and has been proposed for use in neuroimaging analyses (Wager et al., 2005). Robust regression detects univariate outliers based on their probability of belonging to the same data-generating process as the rest of the data, and consequently assigns a weight to each data point. This allows a more accurate measure of the correlation between independent and dependent variables. All whole-brain neuroimaging analyses were controlled for familywise error (FWE) using cluster-based constraints: cluster thresholding was based on Monte Carlo simulation methods (Forman et al., 1995) that controlled for a FWE rate of p < .05. These simulations take into account the smoothing in the data, the allowed distance between statistically significant vertices (2 mm), and voxel thresholding. For the analysis of the story endings, we used a single-vertex threshold set at an alpha level of p < .05 to identify large, distributed clusters. For the analysis of the story bodies, for which the data had greater power, we used a single-vertex threshold set at an alpha level of p < .005, which allowed us to identify smaller, more focal clusters.
Note that using cluster-based thresholding for FWE correction means that cluster size is not chosen arbitrarily but depends tightly on the selected voxel threshold, so that the strictness of the cluster-extent criterion balances that of the single-voxel threshold (more liberal single-voxel thresholds require larger minimum cluster sizes).
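The robust regression used for these correlation analyses relies on the R package robustbase; as a rough Python sketch of the general idea, the following implements iteratively reweighted least squares (IRLS) with Huber weights, so that outlying participants are down-weighted rather than dominating the fit. The data, tuning constant, and function name are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def huber_irls(x, y, k=1.345, n_iter=50):
    """Fit y = b0 + b1*x by iteratively reweighted least squares with Huber
    weights: residuals larger than k robust-scale units are down-weighted,
    instead of dominating the fit as they would in ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    beta = np.zeros(2)
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        resid = y - X @ beta
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(resid) / scale
        w = np.where(u <= k, 1.0, k / u)  # Huber weight function
    return beta, w

# Hypothetical data: mood scores vs. delta Betas for 20 "participants",
# with one outlying data point injected.
rng = np.random.default_rng(0)
mood = np.linspace(-4, 4, 20)
delta = 0.5 * mood + rng.normal(0, 0.1, 20)
delta[3] += 5.0  # inject an outlier
beta, weights = huber_irls(mood, delta)
print(beta, weights[3])  # slope stays near 0.5; the outlier gets weight < 1
```

The outlier receives a small weight, so the estimated slope remains close to the true relation, which is the property that motivates using robust regression for interindividual-difference analyses.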
Coding of recall protocols

Two raters blind to the mood induction independently coded the recall protocols (as in Egidi and Caramazza, 2013). They divided each ending into meaningful units, each capturing an idea that could be remembered as a whole, and assigned one point to each idea fully recovered, half a point to ideas partially recovered, and zero points to missing ideas, misrecalls, and guesses. For each meaningful unit, raters also assigned one point for perfect verbatim memory and half a point for partial verbatim memory. Each rater then added together her gist and verbatim scores and transformed this sum into a percentage score. Inter-rater reliability was very high (Cronbach's α = .99); the scores of only one rater were therefore used for the analyses.

Behavioral results

Validation of the mood induction

Mood probes asked about participants' feelings at four different times during the study: prior to the mood induction, immediately after each mood induction (after video 1 and after video 2), and at the end of the study. The trajectory of these ratings revealed that the mood manipulation was highly effective, consistent with similar effects in our prior work (Egidi and Nusbaum, 2012). Fig. 3 shows the results of the mood induction. The happy and sad groups differed significantly
Construction of the mood variable for the second-level analyses

The mood score used as an independent variable in the regression was a composite measure derived from the ratings participants provided to the questions in the final questionnaire, which assessed their mood after watching the two video clips. The composite measure was constructed as follows: we first averaged the ratings given after the first and the second video clip for each of these four questions: (1) how happy did you feel, (2) how sad did you feel, (3) to what extent were you in a positive mood, (4) to what extent were you in a negative mood. The scale for each rating ranged from 1 to 9. Since the responses to the four questions were correlated, we constructed a measure reflecting the difference between a participant's ratings of happiness and sadness: we subtracted the rating of (2) from that of (1) and the rating of (4) from that of (3), and averaged these two differences. This scale is intuitively understood as follows: participants in a happy mood should rate themselves as more happy than sad and more in a positive than in a negative mood, thus producing positive numbers on the scale. Participants in a sad mood should provide the opposite ratings, producing negative values. Therefore, positive numbers on the obtained scale reflect an increasingly happy mood and negative numbers an increasingly sad mood. This measure is a close variant of reverse coding: if the measure is written as a, then a reverse-coding procedure in the current study corresponds to (a + 10) / 2, a linear transformation that does not affect the correlation measure. With respect to reverse coding, however, the current formulation numerically reflects the difference between the self-reported happiness and sadness of each participant, and communicates increased happiness as positive scores and increased sadness as negative scores.
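As a minimal sketch, the composite score just described amounts to the following arithmetic; the ratings are invented examples on the 1–9 scale, not data from the study.

```python
# Sketch of the composite mood score: average each question's two ratings
# (after video 1 and after video 2), take happiness minus sadness and
# positive minus negative, then average the two differences.

def mood_score(happy, sad, positive, negative):
    """Each argument is a pair: (rating after video 1, rating after video 2)."""
    avg = lambda pair: sum(pair) / 2.0
    h, s, p, n = map(avg, (happy, sad, positive, negative))
    return ((h - s) + (p - n)) / 2.0

# A participant in a happy mood yields a positive score ...
print(mood_score(happy=(8, 7), sad=(2, 2), positive=(8, 8), negative=(1, 2)))  # 6.0
# ... and one in a sad mood a negative score.
print(mood_score(happy=(2, 3), sad=(7, 8), positive=(2, 2), negative=(7, 7)))  # -5.0
```

Since each average lies in [1, 9], the score ranges from −8 to +8, and the reverse-coding transformation (a + 10) / 2 maps it back onto the 1–9 scale without changing any correlation computed from it.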
Fig. 3. Top panel: Participants' mean mood ratings at three times during the experiment. The ratings given after the mood induction were used as the independent variable against which to correlate Betas in the fMRI analyses. Bottom panel: distribution of mood ratings after the mood induction for all 28 participants in the experiment.
when considering the average ratings provided after video 1 and video 2 (t(26) = 11.06, p < .001). Thus, the videos clearly achieved the intended effect. We used these rating values in the whole-brain robust regression. To evaluate the time course of the mood induction, we also calculated the same composite measure from the four relevant questions for the ratings participants provided at the beginning and at the end of the experiment. Based on these measures, the two groups were in the same mood at the beginning of the experiment and in different moods at the end of the experiment (t(26) = 2.56, p < .02). It is also important to note that after the mood inductions both groups showed a significant change in the intended direction with respect to their mood on arrival in the lab (happy group: t(13) = 4.29, p < .001; sad group: t(13) = −9.04, p < .001). This means that after the mood manipulation both groups were in a mood that differed from neutral. To summarize, participants began the experiment in the same mood, were successfully induced to experience a happy or a sad mood by watching the different video clips, and this difference remained until the end of the experiment (though reduced).
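For illustration, the group comparisons above are standard independent-samples t-tests with pooled variance; the sketch below uses invented mood scores for two groups of 14 participants, matching the study's group sizes (hence df = 26) but not its data.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Independent-samples t-test with pooled variance (df = n1 + n2 - 2),
    the form of test used for the group comparisons reported above."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical composite mood scores for 14 happy and 14 sad participants.
happy = [4.5, 3.0, 5.5, 2.5, 6.0, 4.0, 3.5, 5.0, 4.5, 2.0, 5.5, 3.0, 4.0, 6.5]
sad = [-3.5, -2.0, -5.0, -4.5, -1.5, -3.0, -6.0, -2.5, -4.0, -3.5, -5.5, -2.0, -4.5, -1.0]
t, df = two_sample_t(happy, sad)
print(f"t({df}) = {t:.2f}")  # df = 26, large positive t
```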
Comprehension questions and recall scores as indicators of mood-dependent modulations of attention

Happy and sad moods are known to orient preferential attention to different stimuli (especially mood-congruent ones) and to promote a more or less systematic elaboration of external stimuli (Clore and Huntsinger, 2007; Martin and Clore, 2001). It may therefore be that participants in one mood group listened more attentively than participants in the other. In the current paradigm, however, this appeared not to be the case. Responses to comprehension questions following the filler stories showed that the two mood groups were equally attentive: both responded with 92% accuracy (SDHAPPY = 0.09; SDSAD = 0.10). A more nuanced possibility is that, in the two moods, consistent and inconsistent endings were processed with different attention levels. Recall scores, however, showed that both mood groups remembered consistent and inconsistent endings equally well (happy: MCONS = 0.27, MINCO = 0.30; sad: MCONS = 0.29, MINCO = 0.30; ps > .65). A correlation between the mood scores used for the analysis of the fMRI data and the difference in recall scores between consistent and inconsistent endings gave no hint of mood modulation (p = .96). We believe that differences in attention are expected when people adopt mood regulation strategies, which are unlikely when mild implicit moods are induced and participants are not aware of what they feel, as was the case in this experiment.
fMRI results: neural activity associated with comprehension of consistent and inconsistent endings as a function of mood

This analysis identified brain regions in which the difference in Betas between inconsistent and consistent endings correlated with mood scores (delta = inconsistent minus consistent). The analysis therefore identified areas showing correlations between delta and the combined mood scale indexing participants' mood after the mood inductions. In the next sections we first focus on regions where a positive delta (i.e., INCO > CONS) correlated positively and negatively with the combined mood scale. We then focus on regions where a negative delta (i.e., CONS > INCO) correlated positively or negatively with the mood scale. In all the mentioned clusters, activity for both consistent and inconsistent conditions was above baseline, so the mood modulations in question are modulations of activation rather than of deactivation. Fig. 2 explains the analysis workflow in detail.
Clusters where greater activity for inconsistent endings correlated with mood

We first consider the regions in which inconsistent endings were associated with greater activation than consistent endings (INCO > CONS) and where the magnitude of this difference correlated with mood. These regions constituted the largest network in our results. They are shown in Figs. 4 and 5 and Table 2.² The regions in which the difference between consistent and inconsistent endings was greater with increased happiness extended bilaterally in temporal and cerebellar areas, specifically to part of the culmen, fusiform gyrus (FUS), posterior MTG (extending to the middle occipital gyrus, MOG), middle STG, and posterior insula. On the left, they extended subcortically to part of the caudate, and in the cerebellum to part of the declive and tonsil. In the temporal lobe, the correlation extended on the left to the posterior ITG and on the right to the transverse temporal gyrus (TTG). Correlations were also found bilaterally in the inferior part of PreCG; on the right they extended also to the superior part of PreCG, to PostCG, and to the adjacent SPL, the superior part of the precuneus, and CingG. Correlations in frontal areas were found mostly in the left hemisphere: these regions included part of the middle frontal gyrus (MFG), SFG, and IFG-pars triangularis (IFG-Tri), and extended to part of the anterior cingulate gyrus (ACC) and of the medial frontal gyrus (medFG), which also extended to the right hemisphere. In a few regions, greater activity for inconsistent endings scaled positively with increased sadness. These were found only in the right hemisphere. They included part of medFG, SFG, IFG-Orb, and IFG-Tri. Thus, happy mood modulated most of the regions of the inconsistency processing network, whereas sad mood modulated additional areas less specific for this function.
Clusters where greater activity for consistent endings correlated with mood

Several regions showed greater activity for consistent endings (CONS > INCO) that correlated with mood. They are shown in Fig. 6 and Table 2. In some areas, the degree of increased activity for consistent endings increased with the degree of participants' self-rated happiness. The cluster extended to medial and subcortical regions on the right: ACC, subcallosal gyrus (SuG), nucleus accumbens, and lentiform nucleus; it also included a small portion of the SuG on the left. In other regions, the increased activity for consistent endings was greater the sadder participants rated themselves. These extended parietally on the left and frontally on the right and included: PostCG and the inferior parietal lobule (IPL) on the left, and the inferior part of PreCG, part of MFG and medFG, and part of ACC on the right. Thus, the two moods modulated different areas of the accumulation-of-information network.
Clusters where activity correlated with mood during listening to story bodies

Our final analysis examined in which regions the Betas of the story bodies themselves (prior to the final sentences) correlated with mood scores. The signal change estimates for the story bodies reflected increases or decreases in BOLD response relative to the implicit baseline while listening to the story bodies. The implicit baseline comprised the residual BOLD fluctuations after all other events occurring in the experiment had been modeled (i.e., activity associated with listening to story endings, pressing the button, resting, and processing the comprehension questions).

² Figs. 4, 5, 6, and 7 use standard abbreviations for names of brain regions. All labels on these figures indicate the region in the cluster shown by the slice, according to the Talairach atlas. All labels in the tables indicate instead the cluster centers according to the Talairach atlas.
Fig. 4. Clusters where the difference in activity between inconsistent and consistent endings (INCO > CONS) increased with happiness. The results reported in this and the following figures are obtained with robust regression analyses carried out at the single-voxel level (see text). Talairach coordinates from top left slice to bottom: axial: z = 11, 11, 0, 5, 47, 13; coronal: y = −2, −38, 32, 4, 28, 32; sagittal: x = 40, 25, 25, −36, −29, −36. In this and all following figures, activation is presented over both gray and white matter underlays due to the transposition of individuals' data to the common Talairach template.
The analysis revealed areas where greater change (higher Betas) was found for increased levels of happiness, and areas where higher Betas were found for increased levels of sadness. The results are shown in Fig. 7 and Table 3. The clusters in which activity increased with greater happiness included part of medFG, SFG, and PreCG bilaterally. On the left they also included parts of MFG, IFG-Ope, PHG, and the culmen. On the right, a cluster in MTG was also found. The clusters in which activity increased with greater sadness included part of ACC, FUS, and MTG on the left, and the cerebellar tonsil on the right.

Discussion

The theoretical motivation for this experiment was to test whether happy and sad moods influence the functioning of the language
integration networks during discourse comprehension. To recap the theoretical points made in the Introduction section, on the basis of prior research, we focused on two networks associated with two distinct aspects of integration processing: One network for detection and resolution of inconsistencies and one network for continuous, fluent assimilation, which we have called accumulation. Given that prior research suggests that regions affected by mood states overlap with those showing sensitivity to consistent and inconsistent content in language processing, we had predicted a strong influence of moods on the function of those regions during language integration. From a cognitive point of view, inconsistency processing requires more flexibility and a more comprehensive view of the context, whereas accumulation requires monotonic integration with little contextual influence. Because happy mood promotes a more comprehensive processing style and sad
Fig. 5. Cluster where the difference in activity between inconsistent and consistent endings (INCO > CONS) increased with sadness. Talairach coordinates of slices: axial: z = 2; coronal: y = −42; sagittal: x = −47.
Table 2
Talairach coordinates of centers of clusters where the difference in BOLD activity between listening to consistent and inconsistent story endings correlated with mood scores. Clusters match regions in Figs. 4, 5, and 6.

Cognitive process | BOLD difference delta | Greater discrimination with increasing | Region focus point | Volume (mm3) | x | y | z
Inconsistency processing | INCO > CONS | Happiness | L caudate | 9299 | −22 | −16 | 25
 | | | L medial frontal gyrus/BA9 | 9110 | −14 | 37 | 26
 | | | L insula | 2563 | −39 | −2 | 3
 | | | R culmen | 86,788 | 14 | −52 | −16
 | | | R postcentral gyrus/BA40 | 17,847 | 28 | −36 | 52
 | | | R insula/BA13 | 2828 | 40 | −21 | 13
 | | Sadness | R middle frontal gyrus | 3557 | 37 | 34 | −4
Accumulation | CONS > INCO | Happiness | R anterior cingulate/BA25 | 3128 | 5 | 9 | −7
 | | Sadness | L inferior parietal lobule/BA40 | 2575 | −30 | −37 | 55
 | | | R anterior cingulate | 1961 | 25 | 27 | 22

Note. The results reported in this and the following tables are obtained with robust regression methods, which are less sensitive to outliers than Pearson's correlation coefficient. See text for details.
mood a narrower approach to information processing, we had also expected that happy mood would be more likely to enhance inconsistency processing, whereas sad mood would improve accumulation. Our results are mostly consistent with these predictions.
Mood-dependent processing of inconsistencies Inconsistency processing is typically associated with greater activity for inconsistent than consistent information in a network that includes
Fig. 6. Clusters where the difference in activity between consistent and inconsistent endings (CONS > INCO) increased with happiness and sadness. Talairach coordinates of slices: axial: z = −11, 51, 24; coronal: y = −6, 36, −26; sagittal: x = 2, 28, −26.
Fig. 7. Clusters where the BOLD activity during listening to story bodies correlated with mood scores. Talairach coordinates of slices: axial: z = 15, 19; coronal: y = 23, 36, −15, 60; sagittal: x = −12, 43.
Table 3
Talairach coordinates of centers of clusters where the BOLD activity during listening to story bodies correlated with mood scores. Clusters match regions in Fig. 7.

Greater activation with increasing | Region focus point | Volume (mm3) | x | y | z
Happiness | L parahippocampal gyrus | 773 | −34 | −27 | −23
 | L precentral gyrus | 390 | −11 | −26 | 65
 | L middle frontal gyrus/BA10 | 385 | −25 | 59 | 12
 | L inferior frontal gyrus/BA44 | 256 | −52 | 14 | 20
 | L middle frontal gyrus/BA6 | 230 | −35 | 8 | 47
 | R medial frontal gyrus | 1127 | 1 | −22 | 54
 | R precentral gyrus | 854 | 14 | −19 | 67
 | R medial frontal gyrus | 845 | 19 | 44 | 14
 | R middle temporal gyrus | 427 | 65 | −37 | −6
 | R superior frontal gyrus/BA6 | 337 | 4 | 18 | 60
 | R medial frontal gyrus/BA10 | 253 | 9 | 63 | 14
Sadness | L anterior cingulate | 511 | −18 | 30 | 17
 | L fusiform gyrus/BA20 | 411 | −43 | −2 | −25
 | R cerebellar tonsil | 727 | 21 | −56 | −40
IFG, medial PFC, dlPFC, STS/STG, MTG, aTL, and posterior midline regions (Ferstl et al., 2008; Hasson et al., 2007; Mason and Just, 2006), particularly on the left (Binder et al., 2009). As these areas are also differentially sensitive to happiness and sadness (e.g., Vytal and Hamann, 2010), we had predicted a mood-related modulation of this network with respect to this differentiation between consistent and inconsistent endings. As expected, we identified an extensive network previously linked to inconsistency processing where happier mood was associated with a stronger inconsistency effect. Modulation by mood occurred in all the frontal areas and most of the temporal areas predicted. The effect did not extend to the anterior temporal lobe, an area involved in inconsistency processing (Ferstl et al., 2008) but not in mood (Phan et al., 2002). The regions identified did, however, extend ventrally to the FUS and ITG, regions that have been associated with semantic and integration processes (Binder et al., 1997; Binder et al., 2009). The midline regions identified were also consistent with the predicted areas, although their location was shifted more centrally and extended to more lateral and dorsal parietal areas; the shift is not necessarily informative, as it could be due to differences in thresholding between this and prior studies. These parietal areas have typically not been found to be involved in affective processing, at least as far as happiness and sadness are concerned (Phan et al., 2002; Vytal and Hamann, 2010), but they are linked to language processing, particularly to verbal working memory demands (Wager and Smith, 2003) and semantic processes (Binder et al., 2009; Ferstl et al., 2008). Their involvement in the current study suggests that the ability to keep information readily available facilitates the detection and attempted reconciliation of incoming information that does not fit the prior context. Importantly, this ability is better utilized in a happier mood.
Our analysis also found a mood modulation in a widespread cluster of cerebellar areas. Some of these regions have been associated with discourse integration processes (Giraud et al., 2000; Lindenberg and Scheef, 2007; Xu et al., 2005), are often involved in different types of language tasks (Murdoch, 2010), and are known to contribute to affect processing (Habel et al., 2005; Vytal and Hamann, 2010). Consistent with prior literature showing that the function of these cerebellar areas is to detect and correct errors in both behavior (Schmahmann, 2000) and cognitive processing (Murdoch, 2010), we tentatively suggest that their contribution to the current task is monitoring consistency errors. It has also been proposed that these cerebellar regions, in conjunction with some of the cortical areas we found, such as MFG, FUS, MOG, and SPL (among others), constitute a semantic association network (Ghosh et al., 2010). Finally, although some subcortical areas are sensitive to happiness and sadness (Habel et al., 2005; Mitterschiffthaler et al., 2007; Vytal and Hamann, 2010), these were only minimally recruited for the task; only part of the left caudate was involved, and no limbic areas. This might depend on the fact that high-level linguistic processes are predominantly performed at the cortical level (see e.g., Fedorenko and Thompson-Schill, 2014; Hagoort and Indefrey, 2014; Price, 2012); sensitivity of subcortical areas to this type of cognitive task may not be easily achieved. The regions mentioned above, showing greater discrimination between inconsistent and consistent endings with increased happiness, constituted the widest network. In contrast, regions showing greater activity for inconsistent endings with increased sadness included only right hemisphere frontal regions. Our results therefore reveal a hemispheric asymmetry in how mood affects linguistic integration in frontal cortex.
Left hemisphere areas showed an inconsistency processing advantage for happy mood, whereas right areas showed an advantage for sad moods. This finding is in line with theories of prefrontal affect processing arguing that PFC regions are more likely to respond to positive affect on the left and to negative affect on the right (Davidson, 2003; Davidson and Irwin, 1999). Importantly, this finding also suggests that the differences in hemispheric processing posited by predictive theories of language comprehension (according to which the left
hemisphere is more apt to top-down processing and the right hemisphere to bottom-up; Federmeier, 2007; Wlotko and Federmeier, 2007) most likely apply to inconsistency processing. To conclude, as the largest network of inconsistency detection is associated with increasing happiness bilaterally, happiness seems to be the best inconsistency detection enhancer. The different modulations of left and right frontal areas by positive and negative moods suggest that the two moods promote different mechanisms for inconsistency processing: while happy mood enhances the functioning of the language network most specialized for language processing (Binder et al., 2009; Vigneau et al., 2006, 2011), sad mood promotes the recruitment of a right hemisphere network supportive of functions that are less language-specific (Vigneau et al., 2011). It is important to note that, although inconsistency processing is most often studied with linguistic stimuli, some studies (e.g., Cohn et al., 2012; Sitnikova et al., 2008) have shown that image sequences can evoke neural responses (N400, P600) similar to those generated by language. Whether semantic integration as identified with language paradigms is indeed language-specific or is shared by other domains is a fundamental issue that, to date, has not been extensively examined. The impact of mood on semantic processing in those domains is a question for future studies that might shed light on the issue.
Mood-dependent accumulation of information Accumulation of information is associated with greater activity for consistent than inconsistent information in a network that includes PreCG, posterior SFG, middle portion of PostCG, AG, SPL, CinG, anterior cuneus and anterior POS, a portion of vmPFC, posterior MTG/MTS, and ITG (e.g., Egidi and Caramazza, 2013; Ferstl and von Cramon, 2001, 2002; Hasson et al., 2007). We had predicted a mood-dependent modulation of the frontal and temporal areas of this network, but not of parietal and occipital regions, as these are less sensitive to the influence of happiness and sadness (Phan et al., 2002; Vytal and Hamann, 2010). Only part of the accumulation network was modulated by mood. A subset of regions in this network showed mood modulation with one of two patterns: A greater contrast for happier mood (medial frontal regions and their subcortical appendices), or a greater contrast for sadder mood (right frontal and left parietal regions). The pattern in frontal areas indicates that the sensitivity of the right prefrontal areas in this task is primarily to negative affect, though not exclusively, consistent again with what is generally suggested by theories of prefrontal affect (Davidson, 2003; Davidson and Irwin, 1999). In comparison with the pattern found for inconsistency processing, however, the right frontal areas showed more differentiation in susceptibility to both moods than the left areas. The modulation of ventromedial regions by happy mood is consistent with the literature showing that orbitofrontal and striatal regions are particularly sensitive to happiness and positive rewards (Britton et al., 2006; Mitterschiffthaler et al., 2007; Schaefer et al., 2003). Their connection to neighboring subcortical regions also explains the increased susceptibility to happy mood in the SuG and lentiform nucleus. 
The ventral parietal areas involved in accumulation that showed increased activity for sadder mood (IPL on the left) are not usually sensitive to happiness or sadness. They have however been linked to the semantic assimilation of incoming information with prior context (e.g., Binder et al., 2009; Ferstl et al., 2008; Humphries et al., 2007; Xu et al., 2005). This suggests that the more likely role of these areas in linguistic integration is that of contributing to the accumulation of information rather than to the processing of inconsistencies. To conclude, although we had predicted that sad mood would more likely modulate the accumulation process, both moods seemed to have an almost equal influence on it. We suggest that the two moods promote different aspects of accumulative processing: While sad mood enhances semantic integration (Binder et al., 2009), happy mood
attributes a value to the incoming information and enhances positive valuation of consistent information (Roy et al., 2012).

Mood-dependent general integration of consistent information

The last analysis identified brain regions in which activation during listening to consistent discourse (the story bodies) correlated with mood scores. This analysis tested whether mood modulates integration processes in the absence of coherence breaks. We had hypothesized that mood might modulate the frontal and temporal areas that are usually involved in integrative language tasks (e.g., IFG, MTG; Price, 2010, 2012). The analysis did in fact reveal that core regions mediating language processing (left IFG-Ope, right MTG) are sensitive to mood. The network where greater activity scaled with happiness was the most prominent, and it included mostly frontal and temporal regions. The network where greater activity scaled with sadness was much more limited, including not only frontal and temporal areas but also some cerebellar regions. A conjunction of the results of this network with those identified for mood-dependent inconsistency processing and accumulation revealed very minimal overlap. This finding was consistent with our hypothesis that the network identified in this analysis shows mood modulation of integration that is not sensitive to coherence or incoherence with a narrative structure and that may be similarly performed (in the same regions) with different types of input.

Implications for mood-dependent processing and language comprehension

Beyond identifying areas implicated in inconsistency processing and accumulation (replicating e.g., Egidi and Caramazza, 2013; Ferstl and von Cramon, 2001, 2002; Ferstl et al., 2008; Hasson et al., 2007), our findings make clear that these networks are differently sensitive to moods. Several general conclusions can be drawn. First, mood affects regions involved in inconsistency processing more than it affects those involved in accumulation.
In addition, inconsistency processing appears amplified by happier mood, whereas for accumulation the picture is more balanced, with several regions showing increased dissociation for happier mood and others for sad mood. As mentioned in the Introduction section, we attribute this difference to the cognitive features of the two processes and the different tendencies of happy and sad moods to influence specific aspects of cognitive functions. Specifically, we have argued that processing of inconsistency and accumulation share careful processing of external stimuli, but processing of inconsistencies requires a more comprehensive view of the context in which incoming information is integrated, requires top-down anticipation of future information, and entails the detection and signaling of mismatches when inconsistent information is presented. Because happy mood promotes a more comprehensive top-down processing style (Clore and Huntsinger, 2007; Martin and Clore, 2001), it is not surprising that happy mood more widely affects the performance of inconsistency processing, and that it does so by influencing the regions of the language network specific for the process. The fact that sad mood also modulates the function of less specific regions for the same process suggests that sad mood may also affect the performance of inconsistency processing, but only via regions not typically involved in inconsistency processing. A possibility is that both moods are able to act equally effectively or in similar ways on relatively simple processes (e.g., accumulation); a divergence between moods' effects might occur only when the processes become more complex (e.g., inconsistency processing). Taken together, these results have important implications for both our knowledge of the influence of mood on cognitive functions and our knowledge of language processing. 
Contrary to the belief that different moods affect different processes, the data reported here show that happy and sad moods rely on different regions for achieving similar functions. Studies of the effects of mood on cognition are usually based on the assumption that a modulation of function is tantamount to modulation of always the same set of regions. In this respect, our data make two important points: First, additional, non-language-specific regions can be recruited to perform linguistic processes when the general conditions of the entire system change, in this case when the affective context of the entire system is shifted from neutral. Second, the brain reorganizes the way it achieves its functions, as different processes can be mediated by different networks depending on mood. Thus, rather than speaking of core inconsistency processing or accumulation networks that are relatively context independent, our findings suggest that mood may determine how inconsistency processing and accumulation operate, and that the networks involved change as a function of mood.

Our experiment was designed to identify brain networks associated with specific comprehension processes and their interaction with mood. For this reason, we have treated as networks the sets of regions identified in our analyses, assuming that each mood mediates in the same way the function of the regions in each network. It is possible, however, that mood plays a more focused and well-defined role in specific regions: a better understanding of the cognitive processes that may be mediated by mood in each of these areas is an interesting question for future research.

Conclusion

Our study shows that mood is a powerful modulator of language processing. It affects activity in core regions generally involved in language processing, such as the middle temporal or inferior frontal gyri, and also in brain systems specifically linked to the processing of inconsistencies or to the monotonic accumulation of information.
Furthermore, being in a happy or sad mood fundamentally changes the balance between two integration processes underlying language comprehension: happy mood boosts inconsistency processing and enhances the evaluation of consistent information (in different systems); sad mood influences areas less specialized for inconsistency processing and boosts semantic integration of consistent information. This indicates a strong connection between cognition and affect in performing a high-level function such as language comprehension, which is basic and ubiquitous in all aspects of daily life.

Acknowledgments

This research was partially supported by the Fondazione Cassa di Risparmio di Trento e Rovereto. We thank Silvia Modenese for her help in coding the recall protocols and Uri Hasson for suggesting the use of a robust regression method.

Appendix A. Supplementary data

Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.neuroimage.2014.09.008.

References

Aoki, R., Sato, H., Katura, T., Utsugi, K., Koizumi, H., Matsuda, R., Maki, A., 2011. Relationship of negative mood with prefrontal cortex activity during working memory tasks: an optical topography study. Neurosci. Res. 70, 189–196.
Bertinetto, P.M., Burani, C., Laudanna, A., Marconi, L., Ratti, D., Rolando, C., Thornton, A.M., 2005. Corpus e lessico di frequenza dell'italiano scritto (CoLFIS). Scuola Normale Superiore di Pisa.
Beukeboom, C.J., Semin, G.R., 2006. How mood turns on language. J. Exp. Soc. Psychol. 42, 553–566.
Binder, J.R., Frost, J.A., Hammeke, T.A., Cox, R.W., Rao, S.M., Prieto, T., 1997. Human brain language areas identified by functional magnetic resonance imaging. J. Neurosci. 17, 353–362.
Binder, J.R., Desai, R.H., Graves, W.W., Conant, L.L., 2009. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796.
Bless, H., 2000. The interplay of affect and cognition: the mediating role of general knowledge structures. In: Forgas, J.P. (Ed.), Feeling and Thinking: The Role of Affect in Social Cognition. Cambridge University Press, New York, NY, pp. 201–222.
Britton, J.C., Phan, K.L., Taylor, S.F., Welsh, R.C., Berridge, K.C., Liberzon, I., 2006. Neural correlates of social and nonsocial emotions: an fMRI study. Neuroimage 31, 397–409.
Chwilla, D.J., Virgillito, D., Vissers, C.T.W.M., 2011. The relationship of language and emotion: N400 support for an embodied view of language comprehension. J. Cogn. Neurosci. 23, 2400–2414.
Clore, G.L., Huntsinger, J.R., 2007. How emotions inform judgment and regulate thought. Trends Cogn. Sci. 11, 393–399.
Cohn, N., Paczynski, M., Jackendoff, R., Holcomb, P.J., Kuperberg, G.R., 2012. (Pea)nuts and bolts of visual narrative: structure and meaning in sequential image comprehension. Cogn. Psychol. 65, 1–38.
Cox, R.W., 1996. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput. Biomed. Res. 29, 162–173.
Davidson, R.J., 2003. Affective neuroscience and psychophysiology: toward a synthesis. Psychophysiology 40, 655–665.
Davidson, R.J., Irwin, W., 1999. The functional neuroanatomy of emotion and affective style. Trends Cogn. Sci. 3, 11–21.
Deen, B., McCarthy, G., 2010. Reading about the actions of others: biological motion imagery and action congruency influence brain activity. Neuropsychologia 48, 1607–1615.
Dolcos, F., Iordan, A.D., Dolcos, S., 2011. Neural correlates of emotion–cognition interactions: a review of evidence from brain imaging investigations. J. Cogn. Psychol. 23, 669–694.
Egidi, G., Caramazza, A., 2013. Cortical systems for local and global integration in discourse comprehension. Neuroimage 71, 59–74.
Egidi, G., Gerrig, R.J., 2009. How valence affects language processing: negativity bias and mood congruence in narrative comprehension. Mem. Cogn. 37, 547–555.
Egidi, G., Nusbaum, H.C., 2012. Emotional language processing: how mood affects integration processes during discourse comprehension. Brain Lang. 122, 199–210.
Federmeier, K.D., 2007. Thinking ahead: the role and roots of prediction in language comprehension. Psychophysiology 44, 491–505.
Federmeier, K.D., Kirson, D.A., Moreno, E.M., Kutas, M., 2001. Effects of transient, mild mood states on semantic memory organization and use: an event-related potential investigation in humans. Neurosci. Lett. 305, 149–152.
Fedorenko, E., Thompson-Schill, S.L., 2014. Reworking the language network. Trends Cogn. Sci. 18, 120–126.
Ferstl, E.C., Neumann, J., Bogler, C., von Cramon, D.Y., 2008. The extended language network: a meta-analysis of neuroimaging studies on text comprehension. Hum. Brain Mapp. 29, 581–593.
Ferstl, E.C., von Cramon, D.Y., 2001. The role of coherence and cohesion in text comprehension: an event-related fMRI study. Cogn. Brain Res. 11, 325–340.
Ferstl, E.C., von Cramon, D.Y., 2002. What does the frontomedian cortex contribute to language processing: coherence or theory of mind? Neuroimage 17, 1599–1612.
Fiedler, K., 2001. Affective states trigger processes of assimilation and accommodation. In: Martin, L.L., Clore, G.L. (Eds.), Theories of Mood and Cognition: A User's Guidebook. Erlbaum, Hillsdale, NJ, pp. 85–98.
Forgas, J.P., 1995. Mood and judgment: the affect infusion model (AIM). Psychol. Bull. 117, 39–66.
Forgas, J.P., 2001. The affect infusion model (AIM): an integrative theory of mood effects on cognition and judgments. In: Martin, L.L., Clore, G.L. (Eds.), Theories of Mood and Cognition: A User's Guidebook. Erlbaum, Hillsdale, NJ, pp. 99–134.
Forgas, J.P., Locke, J., 2005. Affective influences on causal inferences: the effects of mood on attributions for positive and negative interpersonal episodes. Cogn. Emot. 19, 1071–1081.
Forman, S.D., Cohen, J.D., Fitzgerald, M., Eddy, W.F., Mintun, M.A., Noll, D.C., 1995. Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): use of a cluster-size threshold. Magn. Reson. Med. 33, 636–647.
Ghosh, S., Basu, A., Kumaran, S.S., Khushu, S., 2010. Functional mapping of language networks in the normal brain using a word-association task. Indian J. Radiol. Imaging 20, 182–187.
Giraud, A.L., Truy, E., Frackowiak, R.S.J., Grégoire, M.C., Pujol, J.F., Collet, L., 2000. Differential recruitment of the speech processing system in healthy subjects and rehabilitated cochlear implant patients. Brain 123, 1391–1402.
Gold, B.T., Balota, D.A., Jones, S.J., Powell, D.K., Smith, C.D., Andersen, A.H., 2006. Dissociation of automatic and strategic lexical-semantics: functional magnetic resonance imaging evidence for differing roles of multiple frontotemporal regions. J. Neurosci. 26, 6523–6532.
Grimm, S., Weigand, A., Kazzer, P., Jacobs, A.M., Bajbouj, M., 2012. Neural mechanisms underlying the integration of emotion and working memory. Neuroimage 61, 1188–1194.
Habel, U., Klein, M., Kellermann, T., Shah, N.J., Schneider, F., 2005. Same or different? Neural correlates of happy and sad mood in healthy males. Neuroimage 26, 206–214.
Hagoort, P., Indefrey, P., 2014. The neurobiology of language beyond single words. Annu. Rev. Neurosci. 37.
Hagoort, P., van Berkum, J., 2007. Beyond the sentence given. Philos. Trans. R. Soc. B 362, 801–811.
Hasson, U., Nusbaum, H.C., Small, S.L., 2007. Brain networks subserving the extraction of sentence information and its encoding to memory. Cereb. Cortex 17, 2899–2913.
Humphries, C., Binder, J.R., Medler, D.A., Liebenthal, E., 2007. Time course of semantic processes during sentence comprehension: an fMRI study. Neuroimage 36, 924–932.
Kamide, Y., 2008. Anticipatory processes in sentence processing. Lang. Linguis. Compass 2, 647–670.
Kuperberg, G.R., Holcomb, P.J., Sitnikova, T., Greve, D., Dale, A.M., Caplan, D., 2003. Distinct patterns of neural modulation during the processing of conceptual and syntactic anomalies. J. Cogn. Neurosci. 15, 272–293.
Lau, E.F., Gramfort, A., Hämäläinen, M.S., Kuperberg, G.R., 2013. Automatic semantic facilitation in anterior temporal cortex revealed through multimodal neuroimaging. J. Neurosci. 33, 17174–17181.
Lindenberg, R., Scheef, L., 2007. Supramodal language comprehension: role of the left temporal lobe for listening and reading. Neuropsychologia 45, 2407–2415.
Martin, L.L., Clore, G.L., 2001. Theories of Mood and Cognition: A User's Guidebook. Lawrence Erlbaum Associates, Mahwah, NJ.
Mason, R.A., Just, M.A., 2004. How the brain processes causal inferences in text. Psychol. Sci. 15, 1–7.
Mason, R.A., Just, M.A., 2006. Neuroimaging contributions to the understanding of discourse processes. In: Traxler, M., Gernsbacher, M.A. (Eds.), Handbook of Psycholinguistics, second edition, pp. 765–799.
Mason, R.A., Just, M.A., 2011. Differentiable cortical networks for inferences concerning people's intentions versus physical causality. Hum. Brain Mapp. 32, 313–329.
Mitchell, R.L., Phillips, L.H., 2007. The psychological, neurochemical and functional neuroanatomical mediators of the effects of positive and negative mood on executive functions. Neuropsychologia 45, 617–629.
Mitterschiffthaler, M.T., Fu, C.H., Dalton, J.A., Andrew, C.M., Williams, S.C., 2007. A functional MRI study of happy and sad affective states induced by classical music. Hum. Brain Mapp. 28, 1150–1162.
Murdoch, B.E., 2010. The cerebellum and language: historical perspective and review. Cortex 46, 858–868.
Perani, D., Dehaene, S., Grassi, F., Cohen, L., Cappa, S.F., Dupoux, E., …, Mehler, J., 1996. Brain processing of native and foreign languages. Neuroreport 7, 2439–2444.
Perani, D., Paulesu, E., Galles, N.S., Dupoux, E., Dehaene, S., Bettinardi, V., …, Mehler, J., 1998. The bilingual brain. Proficiency and age of acquisition of the second language. Brain 121, 1841–1846.
Phan, K.L., Wager, T., Taylor, S.F., Liberzon, I., 2002. Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage 16, 331–348.
Price, C.J., 2010. The anatomy of language: a review of 100 fMRI studies published in 2009. Ann. N. Y. Acad. Sci. 1191, 62–88.
Price, C.J., 2012. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 62, 816–847.
Robinson, O.J., Cools, R., Crockett, M.J., Sahakian, B.J., 2010. Mood state moderates the role of serotonin in cognitive biases. J. Psychopharmacol. 24, 573–583.
Rommers, J., Meyer, A.S., Praamstra, P., Huettig, F., 2013. The contents of predictions in sentence comprehension: activation of the shape of objects before they are referred to. Neuropsychologia 51, 437–447.
Rousseeuw, P., Croux, C., Todorov, V., Ruckstuhl, A., Salibian-Barrera, M., Verbeke, T., …, Maechler, M., 2012. robustbase: basic robust statistics. R package version 0.9-4. URL: http://CRAN.R-project.org/package=robustbase.
Roy, M., Shohamy, D., Wager, T.D., 2012. Ventromedial prefrontal–subcortical systems and the generation of affective meaning. Trends Cogn. Sci. 16, 147–156.
Schaefer, A., Collette, F., Philippot, P., van der Linden, M., Laureys, S., Delfiore, G., …, Salmon, E., 2003. Neural correlates of “hot” and “cold” emotional processing: a multilevel approach to the functional anatomy of emotion. Neuroimage 18, 938–949.
Schmahmann, J., 2000. The role of the cerebellum in affect and psychosis. J. Neurolinguistics 13, 189–214.
Sitnikova, T., Holcomb, P.J., Kiyonaga, K.A., Kuperberg, G.R., 2008. Two neurocognitive mechanisms of semantic integration during the comprehension of visual real-world events. J. Cogn. Neurosci. 20, 2037–2057.
Van Petten, C., Luka, B.J., 2012. Prediction during language comprehension: benefits, costs, and ERP components. Int. J. Psychophysiol. 83, 176–190.
Vigneau, M., Beaucousin, V., Hervé, P.Y., Duffau, H., Crivello, F., Houdé, O., …, Tzourio-Mazoyer, N., 2006. Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. Neuroimage 30, 1414–1432.
Vigneau, M., Beaucousin, V., Hervé, P.Y., Jobard, G., Petit, L., Crivello, F., …, Tzourio-Mazoyer, N., 2011. What is the right-hemisphere contribution to phonological, lexico-semantic, and sentence processing? Insights from a meta-analysis. Neuroimage 54, 577–593.
Vingerhoets, G., van Borsel, J., Tesink, C., van den Noort, M., Deblaere, K., Seurinck, …, Achten, E., 2003. Multilingualism: an fMRI study. Neuroimage 20, 2181–2196.
Vytal, K., Hamann, S., 2010. Neuroimaging support for discrete neural correlates of basic emotions: a voxel-based meta-analysis. J. Cogn. Neurosci. 22, 2864–2885.
Wager, T.D., Smith, E.E., 2003. Neuroimaging studies of working memory. Cogn. Affect. Behav. Neurosci. 3, 255–274.
Wager, T.D., Keller, M.C., Lacey, S.C., Jonides, J., 2005. Increased sensitivity in neuroimaging analyses using robust regression. Neuroimage 26, 99–113.
Wlotko, E.W., Federmeier, K.D., 2007. Finding the right word: hemispheric asymmetries in the use of sentence context information. Neuropsychologia 45, 3001–3014.
Xu, J., Kemeny, S., Park, G., Frattali, C., Braun, A., 2005. Language in context: emergent features of word, sentence, and narrative comprehension. Neuroimage 25, 1002–1015.
Zaitsev, M., Hennig, J., Speck, O., 2004. Point spread function mapping with parallel imaging techniques and high acceleration factors: fast, robust, and flexible method for echo-planar imaging distortion correction. Magn. Reson. Med. 52, 1156–1166.