JSLHR, Volume 41, 213–227, February 1998

Murray et al.: Spoken Language and Attention

Spoken Language of Individuals With Mild Fluent Aphasia Under Focused and Divided-Attention Conditions

Laura L. Murray
Indiana University, Bloomington

Audrey L. Holland
Pelagie M. Beeson
National Center for Neurogenic Communication Disorders, University of Arizona, Tucson

The spoken language of individuals with mild aphasia and age-matched control subjects was studied under conditions of isolation, focused attention, and divided attention. A picture-description task was completed alone and in competition with a tone-discrimination task. Regardless of condition, individuals with aphasia performed more poorly on most morphosyntactic, lexical, and pragmatic measures of spoken language than control subjects. Increasing condition complexity resulted in little quantitative or qualitative change in the spoken language of the control group. In contrast, the individuals with aphasia showed dual-task interference; as they shifted from isolation to divided-attention conditions, they produced fewer syntactically complete and complex utterances, fewer words, and poorer word-finding accuracy. In pragmatic terms, their communication was considered less successful and less efficient. These results suggest that decrements of attentional capacity or its allocation may negatively affect the quantity and quality of the spoken language of individuals with mild aphasia. KEY WORDS: aphasia, attention, spoken language, automatic and controlled processing

It is well documented that individuals with aphasia may have a number of concomitant impairments, including memory, vision, abstraction, and construction deficits (Beeson, Bayles, Rubens, & Kaszniak, 1992; Lesser, 1989; Rosenbek, LaPointe, & Wertz, 1989). Recently, researchers have examined the integrity of the attention skills of individuals with aphasia. Individuals with aphasia may display difficulties completing tasks that require engaging, orienting, sustaining, focusing, and dividing attention (Cohen, Woll, & Ehrenstein, 1981; Glosser & Goodglass, 1990; Loverso & Prescott, 1981; Petry, Crosson, Gonzalez-Rothi, Bauer, & Schauer, 1994; Robin & Rizzo, 1987; Tseng, McNeil, & Milenkovic, 1993). With reference to a limited-capacity theory of attention (Kahneman, 1973), McNeil and colleagues have proposed that individuals with aphasia have difficulty with these attention tasks, specifically those with linguistic demands, because of their inefficient or inappropriate allocation of attentional resources (Arvedson & McNeil, 1986a, 1986b; Campbell & McNeil, 1985; McNeil, 1983; McNeil, Odell, & Tseng, 1991; Tseng et al., 1993). These researchers have also suggested that many characteristics of aphasic behavior, such as performance variability, may be related to attentional rather than linguistic impairments.

©1998, American Speech-Language-Hearing Association

1092-4388/98/4101-0213


Journal of Speech, Language, and Hearing Research

How do these attention deficits interact with communication skills? Results from several studies have suggested that attention deficits can negatively affect the auditory processing skills of even mildly aphasic individuals. For example, LaPointe and colleagues required individuals to complete an auditory-vigilance task alone and in competition with a card-sorting task (Bowles, Erickson, & LaPointe, 1992; LaPointe & Erickson, 1991). The auditory-vigilance task involved identification of a target word randomly interspersed among other monosyllabic words, and the card-sorting task involved sorting stimuli from the Wisconsin Card Sorting Test (Grant & Berg, 1981) according to color. In both studies, the aphasic and normal control groups had similar accuracy scores on the auditory-vigilance task in isolation; however, under the dual-task or divided-attention condition, the aphasic group performed significantly more poorly.

How deficits of attention or its allocation interact with the spoken language skills of individuals with aphasia has yet to be examined directly. A study by Klingman and Sussman (1983) provided some indirect evidence that attention deficits may negatively affect the verbal output of individuals with aphasia. Aphasic and normal control subjects completed manual and verbal tasks under three conditions: (a) a silent control, single-task condition in which subjects tapped as fast as possible with their right and left hands separately; (b) an expressive language, dual-task condition in which subjects tapped while verbalizing (e.g., reciting days of the week, describing pictures); and (c) a receptive language, dual-task condition in which subjects tapped while listening to prerecorded task instructions (e.g., pointing to pictures, following directions) and then, after tapping, carried out a comprehension task.
Using finger-tapping rates as the dependent variable, Klingman and Sussman found that control subjects demonstrated right-hand-only disruption during dual-task conditions, whereas the aphasic subjects demonstrated disruption of both hands (i.e., intimating bilateral language processing). Interestingly, the aphasic subjects displayed greater manual disruption during concurrent expressive language and tapping tasks than control individuals, who showed equivalent disruption during expressive and receptive language tasks. Klingman and Sussman’s findings suggested that the spoken language of individuals with aphasia is sensitive to variation in attentional demands. However, the subjects in their study were not required to complete the language tasks in isolation, and no analyses of verbal performance were made. Therefore, no comparisons could be made between single- and dual-task language performances, and the direct impact of the dual-task condition on various parameters (e.g., syntax) of the aphasic subjects’ spoken language could not be determined.


The purpose of the present study was to examine the relation between attention and the spoken language skills of individuals with aphasia using a divided-attention or dual-task paradigm. The capacity theory of attention predicts that the degree of impairment with dual tasks is the result of competition for the same limited-capacity attentional resources (Kahneman, 1973; Wickens, 1984, 1989); if the amount of resources (i.e., task demands) required to complete both tasks simultaneously exceeds attentional capacity, performance decrements or dual-task interference will occur. Task demands are partially determined by the degree to which a given task draws upon automatic versus controlled processes. According to Schneider and Shiffrin (1977), automatic processes are fast and involuntary and are thought to require little attention; therefore, behaviors performed automatically do not drain our limited capacity of attentional resources (Hasher & Zacks, 1979; Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977). In contrast, controlled processes are slow, under voluntary control, and require attentional resources for successful completion (Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977). Because controlled processes are limited by available capacity, combining tasks that require controlled processes should lead to interference and, consequently, dual-task performance decrements. Whereas interpretation of early studies frequently characterized automaticity as an all-or-none phenomenon, more recent research has indicated that it is more likely a continuous phenomenon that may vary as a function of task practice (Cohen, Dunbar, & McClelland, 1990; Kahneman & Treisman, 1984). Within this continuum of automaticity, predictions may be made concerning which aspects of spoken language would be expected to show dual-task interference.
Language parameters dependent on relatively automatic processes should be unaffected or less affected by dual-task demands in comparison with language parameters dependent on relatively controlled processes. Morphosyntactic skills have been hypothesized to be relatively automatic in normal adult speakers (Bayles & Kaszniak, 1987; Bock, 1982; Kempler, Curtiss, & Jackson, 1987; Whitaker, 1976). Children are able to master syntactic and morphologic processes early because the rules that govern their use are limited and predictable. By adulthood, use of these rules is highly practiced, routine, and, thus, relatively automatic; minimal attentional resources are assigned or reserved by the normal adult speaker to plan and produce grammatical aspects of his or her spoken language. In contrast, researchers have suggested that semantic and pragmatic language skills are more dependent upon relatively controlled rather than automatic processes (Bayles & Kaszniak, 1987; Daneman & Green, 1986; Jorm, 1986; Schwartz, Marin, & Saffran, 1979; Ulatowska & Bond Chapman, 1995).



Semantic and pragmatic skills are hypothesized to require conscious attention because of the infinite number of ideas that may be communicated and the array of contexts in which they may be communicated. Over time, we develop expectations and contextual sensitivity that help us predict and plan linguistic content and communicative purpose. However, drawing upon previous experiences and tracking contextual variables (e.g., physical setting, the relationship between conversational participants) also require conscious mediation and, thus, rely to some extent on controlled processes.

It is important to note that whereas morphosyntactic skills are suggested to be relatively more automatic in nature than semantic and pragmatic skills, there most likely exists a gradation of automaticity within these language domains (Bates, Harris, Marchman, Wulfeck, & Kritchevsky, 1995; Kempler et al., 1987). For example, certain semantic tasks (e.g., counting, reciting the pledge of allegiance) and certain pragmatic speech acts (e.g., social greetings, leave taking) are performed and processed relatively automatically—most likely because of their frequency and predictability (Jackson, 1878; Lesser, 1989; Lum & Ellis, 1994). Likewise, production or processing of certain morphosyntactic forms (e.g., in a given sentence, deciding whether whom versus who is the grammatically correct choice; producing complex sentences with a noncanonical order of thematic roles) can be more volitional and, thus, more resource-demanding (Bates et al., 1995; Waters, Caplan, & Hildebrandt, 1987).

Because controlled processes are limited by available capacity, combining tasks that are reliant upon relatively controlled processes should result in dual-task interference (Kahneman, 1973; Wickens, 1989). Therefore, if dual-task demands approach or exceed capacity, breakdown in spoken language might be expected. Specifically, an increase in word-finding errors and pragmatic inappropriateness should be observed during dual-task conditions, because semantic and pragmatic aspects of language formulation are dependent on relatively controlled processes (Bayles & Kaszniak, 1987; Schwartz et al., 1979; Ulatowska & Bond Chapman, 1995). In contrast, morphosyntactic accuracy and complexity should be consistent across single- and dual-task conditions, because morphosyntactic functions are executed relatively automatically (Bock, 1982; Kempler et al., 1987; Whitaker, 1976). The purpose of the current study was to investigate the effects of varying attention demands on the spoken language of individuals with aphasia. Picture descriptions of aphasic and normal control subjects were compared under isolation, focused-attention, and divided-attention conditions. It was predicted that across conditions, the aphasic group would perform significantly more poorly than controls. Because we hypothesized that normally aging adults could complete our speaking and secondary tasks without taxing their attentional capacity or allocation skills, we predicted that increasing condition complexity and, thus, attention demands would have little effect on their spoken language. In contrast, because of previously documented decrements of attentional capacity or its allocation (e.g., LaPointe & Erickson, 1991; Tseng et al., 1993), we predicted that increasing condition complexity would have a negative impact on the spoken language of individuals with aphasia.

A variety of linguistic measures were used to quantify and qualify changes in picture descriptions across speaking conditions. These measures were selected to examine morphosyntactic completeness and complexity (i.e., proportion of syntactically complete utterances, proportion of simple and complex sentences, morphological complexity of verb phrases) and lexical and pragmatic performance (i.e., total number of words, proportion of word-finding errors, percentage of correct information units, proportion of unsuccessful utterances). Given the notion of a continuum of processing automaticity, the following predictions were tested:

1. The picture descriptions of individuals with aphasia would show little change in morphosyntax across speaking conditions because of the relatively automatic processes upon which morphosyntactic performance depends.

2. As condition complexity increased (i.e., shifting from isolation to focused-attention to divided-attention conditions), significant decrements in terms of quantity and quality of lexical and pragmatic performance would occur because these aspects of spoken language are dependent upon relatively controlled, attention-demanding processes. That is, individuals with aphasia were expected to demonstrate decreased output, increased word-finding problems, decreased percentage of correct information units, and increased production of unsuccessful utterances.

Method

Subjects

Participants included 8 control subjects with no history of neurological impairment and 14 aphasic subjects. The first author interviewed all subjects and reviewed the medical records of aphasic subjects to determine that subjects met the following selection criteria: negative history for traumatic brain injury, alcohol or substance abuse, pre-existing communication or memory impairment, and psychiatric illness or clinical depression within the past 6 months; premorbid right-handedness; native speakers of English; and normal or


Table 1. Group characteristics.

                        Age       Education   IQ        MPO      ADP subtests
Group                   (years)   (years)     (est.)             Aphasia       Lexical        Aud.         Phrase
                                                                 severity (a)  retrieval (b)  comp. (b)    length (b)
Aphasia (n = 14)
  M                     64.07     13.92       116.39    33.29    117.21        13.00          13.79        12.50
  SD                    11.43      3.36         8.19    36.02      7.78         1.47           1.81         1.70
  Range                 40–78      8–20      104–126    6–132   108–135        11–15          11–17         9–16
Control (n = 8)
  M                     62.50     15.00       118.96
  SD                    14.20      2.62         6.80
  Range                 39–76     12–19      107–126

Notes. MPO = months post stroke; ADP = Aphasia Diagnostic Profiles (Helm-Estabrooks, 1992). IQ = premorbid IQ as estimated from a regression equation based on demographic variables such as occupation and years of education (Barona et al., 1984). (a) Standard score with M = 100, SD = 15, based on a standardization sample of 222 stroke patients. (b) Standard score with M = 10, SD = 3, based on a sample of 140 right-handed patients with left-hemisphere stroke.

corrected vision (no visual hemianopsia or agnosia). All subjects passed visual and hearing screening tests. For the visual screening test, subjects were shown four line drawings and had to find an identical picture among an array of four visually similar pictures. The criterion for inclusion was 100% accuracy. Using a pure-tone air-conduction hearing screening, it was determined that all subjects had hearing thresholds (aided or unaided) of 35 dB HL or better at 0.5, 1.0, and 2.0 kHz in at least one ear. Table 1 summarizes group characteristics. There were no significant differences between aphasic and control

groups for age [t(10.8) = .935, p > .05], years of education [t(17.5) = –.760, p > .05], or estimated IQ (Barona, Reynolds, & Chastain, 1984) [t(16.7) = –1.013, p > .05]. Gender representation differed between groups, with 4/14 in the aphasic group and 3/8 in the control group being women. Aphasic subjects’ demographic and clinical characteristics are provided in Table 2. Aphasic subjects were identified through the University of Arizona Aphasia Clinic and two Tucson area hospitals that provided outpatient speech-language pathology services. All aphasic subjects had a left-hemisphere stroke (as documented by CT or

Table 2. Aphasic subject characteristics.

                                Right                                                      ADP subtests (percentile rank)
Ss  Age  Educ.  Sex  IQ    MPO  hemi-    Lesion location             Aphasia type          Aphasia   Lexical    Aud.   Phrase
         (yrs)       (est.)     paresis                                                    severity  retrieval  comp.  length
 1   67   12    M    108    75    +      Frontal; Pre/Post Rolandic  Anomic                   84        75       95      75
 2   74   20    M    125    47    +      Frontal; Pre/Post Rolandic  Anomic                   87        91       91      63
 3   48   12    F    107     6    +      Frontal; Caudate Nucleus    Anomic                   75        84       63      75
 4   65   12    M    108    16    +      Frontal; Ant. Deep White    Anomic                   94        95       91      95
 5   57   12    F    112    10    +      Frontal; Basal Ganglia      Borderline Fluent        79        84       84      50
 6   65   13    F    121     6    +      Frontal; Ant. Deep White    Anomic                   92        91       99      84
 7   79   16    M    126    22    –      Temporo-parietal            Conduction               95        91       95      63
 8   75   12    M    114    12    –      Posterior Parietal          Anomic                   77        63       75      91
 9   74   12    M    107    47    –      Temporo-parietal            Conduction               70        63       75      75
10   74   19    M    125    51    –      Temporo-parietal; Insula    Anomic                   73        63       84      84
11   51   16    M    122     6    –      Temporo-parietal            Anomic                   99        91       99      98
12   70    8    M    106    13    –      Temporo-parietal            Anomic                   70        63       75      75
13   72   16    F    124    17    –      Temporal                    Conduction               87        91       91      75
14   78   16    M    129    15    –      Parietal                    Anomic                   97        91       95      75

Notes. MPO = months post stroke. ADP = Aphasia Diagnostic Profiles (Helm-Estabrooks, 1992).


MRI scan report) and were at least 6 months post-stroke. To prevent apraxic errors from confounding key-pressing responses, all aphasic subjects were given the Limb Apraxia subtest of the Apraxia Battery for Adults (Dabul, 1979); all subjects performed with 100% accuracy, and no instances of searching behavior were observed. The Aphasia Diagnostic Profiles (ADP; Helm-Estabrooks, 1992) were used to determine aphasia severity (Aphasia Severity Standard Score) and to classify aphasia type. All aphasic subjects achieved an Aphasia Severity Standard Score of at least 108 (percentile rank of at least 70) and, thus, were considered to present with relatively mild language impairments. The majority of the subjects presented with fluent aphasia: 10 with anomic aphasia and 3 with conduction aphasia. The exception was one subject who presented with a borderline fluent aphasia type.

Procedures

In this study, subjects were asked to complete a picture-description task alone and in competition with a secondary, tone-discrimination task. Both speaking and listening tasks were performed under each of four conditions.

Conditions

In the Isolation Condition, subjects completed the speaking or listening task without distraction. In the Focused-Attention Condition, the picture and tone stimuli were presented simultaneously, but subjects completed only one task (i.e., the subject either described the picture or responded to the tone stimuli) and were instructed to ignore the secondary, competing stimuli. In the Divided-Attention Condition #1, the picture and tone stimuli were presented simultaneously, and subjects were required to complete both tasks. They were instructed to give attentional priority to the picture-description task and to guess at the competing listening task, if necessary. In the Divided-Attention Condition #2, the picture and tone stimuli were presented simultaneously, and subjects were again required to complete both tasks. In contrast with the preceding condition, subjects were instructed to give attentional priority to the listening task rather than to the picture-description task. See Appendix A for specific instructions.

Tasks

A Tone-Discrimination Task required subjects to determine whether a tone stimulus was high or low. Twenty 500-ms pure tones (ten at 500 Hz and ten at 2000 Hz) were presented in random order. During divided-attention conditions, a larger number of tone stimuli were presented so that the distractor task was completed over the entire duration of the picture-description task (i.e., 2 min). Subjects were instructed to press a computer key labelled HIGH (#7 key on the number pad) when they heard a high tone and to press a key labelled LOW (#9 key on the number pad) when they heard a low tone. Subjects were encouraged to respond as quickly and accurately as they could using their unaffected hand. They were given as much time as needed to respond.

Pictures from the Western Aphasia Battery (Kertesz, 1982), ADP (Helm-Estabrooks, 1992), Brief Test of Head Injury (Helm-Estabrooks & Hotz, 1991), and ABA (Dabul, 1979), as well as two single pictures developed by Nicholas and Brookshire (1993), served as stimuli for the Picture-Description Task. The ADP picture was always used as the stimulus during the focused-attention condition, in which subjects attended and responded only to the tone-discrimination task (i.e., no verbal output was required). This procedure was adopted because all aphasic subjects had described the ADP picture as part of the formal language assessment; therefore, we wanted to avoid any practice effect. The remaining picture stimuli were randomized across practice and experimental conditions. Each picture was described during each speaking condition by at least one control subject and by at least two aphasic subjects. Subjects were instructed to describe each picture as completely as possible. They were given 2 minutes to complete the task. If subjects stopped talking before the allotted time, they were prompted with “Is there anything else you can tell me about this picture?” No feedback concerning accuracy or appropriateness was given, although occasional social lubricants (e.g., uh-huh, head nods) were provided.
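The design of the tone-discrimination task can be sketched as follows. This is only an illustration of the trial structure and scoring described above (the original task ran in PsyScope); the function names and the mapping of 2000 Hz to HIGH and 500 Hz to LOW are our own assumptions.

```python
import random

# Hedged sketch of the tone-discrimination task: twenty 500-ms tones
# (ten at 500 Hz, ten at 2000 Hz) in random order, scored against
# HIGH/LOW key presses.

def make_trial_list(seed=None):
    """Build a randomized list of twenty tone frequencies in Hz."""
    tones = [500] * 10 + [2000] * 10
    random.Random(seed).shuffle(tones)
    return tones

def score_responses(tones, key_presses):
    """Return proportion correct; key_presses holds 'HIGH' or 'LOW' per tone."""
    correct = sum(
        1 for tone, key in zip(tones, key_presses)
        if (tone == 2000 and key == "HIGH") or (tone == 500 and key == "LOW")
    )
    return correct / len(tones)

trials = make_trial_list(seed=1)
perfect = ["HIGH" if t == 2000 else "LOW" for t in trials]
print(score_responses(trials, perfect))  # 1.0
```

A subject at the 80% practice criterion would score 0.8 or better on such a 20-trial list.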

Order of Presentation

Practice preceded experimental conditions. All subjects performed the tone-discrimination task (in isolation) with 80% (16/20) accuracy or better during the practice condition. Subjects also practiced the picture-description task in isolation; this practice condition was given to familiarize subjects with the 2-min time limit and to encourage subjects to talk about all details depicted in the picture. If subjects stopped talking before the allotted time, related personal anecdotes or narratives, or did both during this practice trial, feedback and cues were provided concerning other appropriate details that they could describe. There was no accuracy criterion for the picture-description task, but during the practice trial subjects had to be capable of producing at least one complete independent clause (i.e., subject and predicate) that pertained to the picture stimulus. Experimental conditions were completed in a fixed order, progressing from isolation to focused attention to divided attention #1 to divided attention #2. This order

Downloaded From: http://jslhr.pubs.asha.org/ by a University of Arizona - Library User on 09/29/2015 Terms of Use: http://pubs.asha.org/ss/rights_and_permissions.aspx

reflected a hypothesized hierarchy of difficulty in which progressively increasing demands were placed on subjects’ attention capacity and allocation skills. The fixed, as opposed to random, order also helped to avoid confusion over task and condition requirements. This order was also thought to circumvent fatigue and feelings of failure. That is, by presenting the simplest (isolation) condition first, the likelihood of initial success was increased. Brookshire (1972, 1976) found that following failure, aphasic individuals’ performances on subsequent items were negatively affected.

Equipment and Stimulus Recording Procedures

Secondary listening-task stimuli were tones generated with SoundEdit software on a Macintosh IIcx computer. Tone stimuli were presented via free field using a speaker placed approximately one meter in front of the subject. Duration of the tones was set at 500 ms (a duration found to be of sufficient length to allow accurate discrimination during pilot testing), and intensity was adjusted according to each subject’s preference during practice trials. Tone presentation was controlled by a Macintosh IIcx or a Power Macintosh 6100/60 computer using PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993). The silent intertrial interval, defined as the time from the response offset to the onset of the subsequent tone stimulus, was 1000 ms. The PsyScope software also permitted on-line computation of accuracy and reaction time (RT) for the secondary task. RTs for the secondary task were calculated from the tone onset.

Picture stimuli were presented on 8.5 × 11 inch cards. Following task instructions, each picture was placed in front of the computer monitor and held there by the experimenter until task completion. An audiocassette recorder with a lapel microphone was used to record the subjects’ picture descriptions for off-line transcription and analyses.

Visual and auditory instructions were provided for each condition. Visual instructions were presented on the computer monitor in an 18-point bold font. At the same time, the experimenter explained the instructions (see Appendix A) and encouraged subjects to ask questions. The experimenter told subjects that once they began a task, no questions could be asked and no feedback would be given.

Language Analyses

The spoken picture descriptions were transcribed and coded into a format designed for computer analyses. The CHAT (Codes for the Human Analysis of Transcripts) formatting system was used to code the speech samples for automatic analyses by various CLAN (Computerized Language Analysis) programs (MacWhinney, 1995). The software ran on a Power Macintosh 7200/75 computer. The following linguistic components were analyzed.

Number of Utterances

Each speech sample was segmented into utterances according to criteria described by Glosser et al. (1988) and Saffran et al. (1989). Briefly, utterances were identified primarily on the basis of syntactic and prosodic boundary features (e.g., well-formed sentences with falling intonation); when segmentation was difficult because of distorted syntactic form or prosody, pausal patterns and semantic features were considered as well. Utterance data were not statistically analyzed but instead were used to calculate certain measures of morphosyntax (i.e., proportion of syntactically complete utterances) and pragmatics (i.e., proportion of unsuccessful utterances).

Morphosyntactic Components

Syntactic grammaticality and complexity were analyzed by determining the proportion of grammatical utterances, the proportion of simple sentences to grammatical sentences, and the proportion of complex sentences to grammatical sentences, using procedures described by Thompson and colleagues (1995). An utterance was scored as grammatically complete if it included at least one independent clause (i.e., complete subject and predicate) and no syntactic errors. Utterances that were syntactically well-formed but included semantic violations were scored as grammatically complete. Grammatical sentences that included no embedding or moved sentence constituents were coded as simple sentences, and those that included at least one embedded clause or were produced in noncanonical form were coded as complex sentences.

Morphological aspects of the subjects’ spoken language were investigated by determining the morphological complexity of verb phrases. Because aphasic subjects have shown relatively greater difficulty producing bound morphology associated with verbs as opposed to nouns or adjectives (Goodglass & Berko, 1960; Haarmann & Kolk, 1992), we hypothesized that verb morphology would be a sensitive indicator of possible morphological change across speaking conditions. Restricting our analysis to morphological aspects of verb production also assisted in limiting the number of statistical comparisons and thus helped in controlling the probability of Type I error (Keppel, 1991). An AUX score was calculated according to the procedures described by Saffran et al. (1989); this score is computed by assigning each verb element (e.g., auxiliary, main verb, negation) a point to determine the extent to which the verb has been modified over and above its baseline or left in an uninflected and unmarked form (see Appendix B). An AUX score was calculated for each main verb in utterances with minimal indication of subject/predicate structure (i.e., noun/pronoun + main verb, noun/pronoun + copula + adjective/prepositional phrase). Therefore, the utterance did not have to be syntactically correct in order for the main verb to be scored. For example, a main verb that failed to agree in number with its subject was scored as long as it was part of an utterance with at least minimal subject/predicate structure.
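The point-per-verb-element logic of the AUX score can be illustrated schematically. This is a simplified sketch based on the description above, not Saffran et al.'s (1989) actual scoring manual; the element labels and inventory are our own assumptions.

```python
# Hedged sketch of an AUX-style score: one point per verb element that
# marks the verb beyond its bare, uninflected baseline form. The
# element labels below are illustrative, not the published scheme.

def aux_score(verb_elements):
    """Count scoreable verb elements for one main verb's verb phrase."""
    scoreable = {"aux", "modal", "copula", "negation", "inflected_main_verb"}
    return sum(1 for element in verb_elements if element in scoreable)

# "The boy is not running": auxiliary + negation + inflected main verb
print(aux_score(["aux", "negation", "inflected_main_verb"]))  # 3

# "Boy run" (bare, unmarked verb): nothing beyond baseline
print(aux_score(["bare_main_verb"]))  # 0
```

Higher scores thus index verb phrases that are more heavily marked for tense, aspect, modality, or negation.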

subjects × 4 speaking conditions) were then randomly selected for retranscription by a second listener blind to subject group or condition identity. Point-to-point interjudge agreement for utterance boundaries and words transcribed was 97% (range = 93–100%) and 90% (range = 87–99%), respectively; any disagreements were discussed and resolved. Most transcription disagreements concerned the presence of fillers (e.g., er, uh) or the presence of sound or word repetitions.

Lexical and Pragmatic Components

A second set of 20 transcripts (3 aphasic subjects and 2 control subjects × 4 speaking conditions) was randomly chosen to be re-scored by a second listener. All linguistic components were recoded. Point-to-point interjudge agreement was as follows: 99% (range = 93– 100%) for grammatical sentences, 98% (range = 87– 100%) for simple sentences, 96% (range = 87–100%) for complex sentences, 94% (range = 83–100%) for AUX score, 98% (range = 95–100%) for total word counts, 98% (range = 91–100%) for word-finding errors, 92% (range = 83–99%) for CIUs, and 96% (range = 84–100%) for unsuccessful utterances. Those measures found to be in disagreement were resolved through discussion.

Total word counts and the number of word-finding problems were examined. Rules described by Nicholas and Brookshire (1993) were used to determine which words to include in the total word counts; for example, nonword fillers such as um or uh were excluded. To determine the frequency of word-finding problems, the total number of word-finding errors was divided by the total number of words produced. The informativeness and efficiency of the speech samples were determined by examining the percentage of correct information units (CIUs); CIUs are “words that are intelligible in context, accurate in relation to the picture(s) or topic, and relevant to and informative about the content of the picture(s) or the topic” (Nicholas & Brookshire, 1993, p. 348). The percent CIUs (i.e., number of CIUs divided by total word count) was calculated according to the criteria developed by Nicholas and Brookshire. The number of unsuccessful utterances, an adaptation of the “tangler functions” described by Holland et al. (1985), was also determined. Generally, an utterance was considered unsuccessful if the speaker failed to communicate accurate and novel information depicted in the target picture or failed to follow task instructions (i.e., to describe only the picture stimulus and to refrain from asking questions while completing the task). The following types of utterances were coded as unsuccessful: (a) incomplete or abandoned utterances (e.g., “Here’s a um…,” “The children are….”), (b) incoherent utterances (e.g., “That must be rover ’cause there’s a car.”), (c) utterances including inaccurate information, (d) off-task comments about the task or personal value judgments (e.g., “Here’s a low tone.” “Yeah I had to think about it.”), (e) repetitions of previous information or utterances, and (f) questions regarding the task or picture (e.g., “Want more?”). The proportion of unsuccessful utterances was determined by dividing the total number of unsuccessful utterances by the total number of utterances.
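Once the counts are tallied, the measures above reduce to simple ratios. A minimal sketch (the function name and the sample counts are hypothetical, not drawn from the study's data):

```python
def language_measures(total_words, word_finding_errors,
                      cius, total_utterances, unsuccessful_utterances):
    """Compute the percentage measures described above from raw counts."""
    return {
        # frequency of word-finding problems: errors per 100 words produced
        "% word-finding problems": 100 * word_finding_errors / total_words,
        # informativeness/efficiency: correct information units per 100 words
        "% CIUs": 100 * cius / total_words,
        # pragmatic success: share of utterances coded unsuccessful
        "% unsuccessful utterances": 100 * unsuccessful_utterances / total_utterances,
    }

# Hypothetical 2-min sample: 150 words (6 word-finding errors, 110 CIUs)
# across 20 utterances, 7 of which were coded unsuccessful
measures = language_measures(150, 6, 110, 20, 7)
print(measures["% word-finding problems"])    # 4.0
print(measures["% unsuccessful utterances"])  # 35.0
```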

Interjudge and Intrajudge Agreement

Picture descriptions were audiotaped and then transcribed on the same day by the first author. Twenty-four transcripts (i.e., 4 aphasic subjects and 2 control subjects × 4 speaking conditions) were randomly selected for transcription-reliability analysis.

To determine intrajudge agreement, a third set of 12 transcripts (2 aphasic subjects and 1 control subject × 4 speaking conditions) was randomly selected and reanalyzed by the first author at least one month after the original linguistic and pragmatic coding. Point-to-point intrajudge agreement was 100% for grammatical sentences, 99% (range = 92–100%) for simple sentences, 97% (range = 92–100%) for complex sentences, 95% (range = 89–100%) for AUX score, 99% (range = 95–100%) for total word counts, 100% for word-finding errors, 95% (range = 88–100%) for CIUs, and 97% (range = 86–100%) for unsuccessful utterances.
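Point-to-point agreement, as reported above, is simply the percentage of aligned scoring decisions on which two passes matched. A sketch, assuming each transcript's codes are stored as equal-length sequences (the code labels here are invented for illustration):

```python
def point_to_point_agreement(codes_a, codes_b):
    """Percentage of aligned scoring decisions on which two judges agree."""
    if len(codes_a) != len(codes_b):
        raise ValueError("scoring decisions must be aligned point for point")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

# Hypothetical utterance-level grammaticality codes from two scoring passes
first_pass  = ["complete", "complete", "fragment", "complete", "fragment"]
second_pass = ["complete", "complete", "fragment", "fragment", "fragment"]
print(point_to_point_agreement(first_pass, second_pass))  # 80.0
```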

Data Analyses

The ratio of the largest to the smallest variance (Fmax) was calculated for each data set to determine whether the ANOVA assumption of variance homogeneity was met. When Fmax exceeds 3, variance heterogeneity inflates Type I error (Keppel, 1991). Therefore, data sets that failed to meet this Fmax criterion were transformed: all percentage data were arcsine transformed, and some raw data, such as RT data, were logarithmically transformed. Fmax was then recalculated to verify that the transformed data met the criterion. Next, group means and 95% confidence intervals were calculated and plotted. For several linguistic and pragmatic variables, the control group's confidence intervals failed to overlap with those of the aphasic group. This finding indicated that, for these variables, the control group's performance was significantly different from that of the aphasic group, and, therefore, only the aphasic group's


data needed to be submitted to statistical testing. Excluding the control group from the ANOVA also decreased Type I error in two ways: (a) for some data sets, removing the control-group data resulted in Fmax < 3; (b) removing the control-group data resulted in fewer post hoc comparisons and therefore decreased familywise Type I error (Keppel, 1991). Statistical analyses consisted of a series of two-factor mixed ANOVAs, with group as the between-subjects factor and condition (isolation, focused attention, divided attention #1, divided attention #2) as the within-subjects factor. Significant main or interaction effects were further examined via Tukey post hoc pairwise comparisons with alpha set at .05.
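The homogeneity screen and the two transformations can be sketched as follows. The 2·arcsin(√p) form shown is the textbook arcsine transform; the authors' exact implementation is not specified, and the cell data here are invented:

```python
import math
from statistics import variance

def f_max(*cells):
    """Ratio of the largest to the smallest cell variance (Keppel, 1991)."""
    variances = [variance(cell) for cell in cells]
    return max(variances) / min(variances)

def arcsine(p):
    """Variance-stabilizing transform for a proportion p in [0, 1]."""
    return 2 * math.asin(math.sqrt(p))

# Two hypothetical cells of proportion data from the design
cell_a = [0.72, 0.75, 0.54, 0.88]
cell_b = [0.81, 0.83, 0.80, 0.79]

if f_max(cell_a, cell_b) > 3:  # heterogeneity threatens Type I error
    transformed = [[arcsine(p) for p in cell] for cell in (cell_a, cell_b)]
    # re-screen: the transformed data must meet the same Fmax criterion
    print(f_max(*transformed))
```

Log transformation of positively skewed raw data such as RTs follows the same pattern, with `math.log` in place of `arcsine`.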

Results

Morphosyntactic Components

Table 3 displays group data for the proportion of syntactically complete utterances. Significant effects were found for group [F(1, 20) = 10.196, p = .005], condition [F(3, 60) = 2.884, p = .043], and the group-by-condition interaction [F(3, 60) = 5.497, p = .002]. The control group produced a significantly larger proportion of syntactically complete utterances than the aphasic group during divided-attention conditions only; during isolation and focused-attention conditions, group differences did not reach significance. The proportion of well-formed utterances produced by the control group failed to differ significantly across conditions. The aphasic group displayed a significant decrease in the proportion of syntactically complete utterances during divided-attention condition #2 (i.e., the condition in which attentional priority was to be given to the tone-discrimination task) compared to the isolation and focused-attention conditions. Twelve of the 14 aphasic subjects showed a decrease in their percentage of well-formed utterances as they shifted from divided-attention condition #1 to #2, even though the aphasic group's means for these two conditions failed to differ significantly.

Proportions of simple and complex sentences were examined to identify changes in syntactic complexity across speaking conditions (see Table 3). Because the proportion of complex sentences is the inverse of the proportion of simple sentences, and because we wanted to restrict the number of statistical tests, only the proportion of simple sentences was submitted to statistical analysis. Main effects for group [F(1, 20) = 13.878, p = .001] and condition [F(3, 60) = 4.547, p = .006] and the group-by-condition interaction effect [F(3, 60) = 4.842, p = .004] were significant. During isolation and focused-attention conditions, the proportions of simple sentences produced by the aphasic and control groups failed to differ significantly. However, during divided-attention conditions, the aphasic group produced larger proportions of simple sentences than the control group. The proportion of simple sentences produced by the control group did not differ significantly as a function of speaking condition. In contrast, the aphasic group produced significantly larger proportions of simple sentences during divided-attention conditions than during isolation and focused-attention conditions. For example, the grammatical sentence production of half of the aphasic subjects was limited to simple sentences only during divided-attention condition #2.

Table 3. Group means, standard deviations, and ranges of total utterances, proportion of grammatical sentences/utterances, proportion of simple sentences/grammatical sentences, and proportion of complex sentences/grammatical sentences.

                                 Isolation            Focused Attention    Divided Attention #1   Divided Attention #2
                                 Aphasic   Control    Aphasic   Control    Aphasic    Control     Aphasic    Control
Total utterances       M          20.50     25.13      20.57     28.75      15.07      24.25       12.79      24.25
                       SD          5.16      8.24       6.86      5.17       4.29       5.75        7.02       6.43
                       Range      12–31     17–43      10–31     19–37       9–21      15–32        5–31      14–32
Grammatical sentences  M           .725      .812       .758      .810       .646       .843        .540       .830
                       SD          .122      .109       .128      .091       .175       .120        .223       .091
                       Range    .54–.88   .58–.93    .45–.93  .67–1.00    .23–.89   .67–1.00     .20–.82    .67–.93
Simple sentences       M           .737      .652       .708      .591       .864       .630        .858       .597
                       SD          .152      .136       .118      .102       .171       .122        .160       .133
                       Range    .35–.88   .50–.82   .58–1.00   .38–.67   .56–1.00    .50–.80    .63–1.00    .33–.72
Complex sentences      M           .263      .348       .292      .409       .136       .370        .142       .403
                       SD          .152      .136       .118      .102       .171       .122        .160       .133
                       Range    .13–.64   .18–.50   0.00–.42   .33–.63   0.00–.44    .20–.50    0.00–.38    .28–.67
AUX score              M            .99      1.22       1.01      1.06        .85       1.10         .89       1.11
                       SD           .43       .34        .54       .31        .42        .34         .44        .34
                       Range     .2–2.0    .9–1.9      0–1.8    .8–1.7     .3–1.6     .8–1.6      .3–1.6     .8–1.7


Table 3 also presents the group means, standard deviations, and ranges for the AUX score data. Neither the main effect of group [F(1, 20) = 1.852, p = .189], the main effect of condition [F(3, 60) = .586, p = .626], nor the group-by-condition interaction [F(3, 60) = .409, p = .747] was significant. These findings indicated that the morphological complexity of verb phrases did not significantly differ between groups and that, for both groups, changing attentional requirements as a function of speaking condition had little impact on this aspect of their morphosyntactic production skills.

Lexical and Pragmatic Components

Group means, standard deviations, and ranges for total words produced and percentage of word-finding problems are displayed in Table 4. The control group consistently produced more words than the aphasic group (as indicated by confidence intervals), and their word totals did not vary across conditions [F(3, 21) = 2.053, p = .137]. Statistical analysis of the aphasic group's logarithmically transformed data indicated a significant condition effect [F(3, 39) = 19.854, p < .001]. The aphasic group displayed significant decreases in word production as its members shifted from the isolation or focused-attention condition to either divided-attention condition. The aphasic group's decrease in total words as they shifted from divided-attention condition #1 to divided-attention condition #2 also approached significance (p = .062). An interesting performance pattern was observed when individual word-production data were examined; 6 aphasic subjects and 5 control subjects produced more words during the focused-attention condition than in isolation. For these subjects, the tones (presentation rate of approximately 1 tone/s) may have assisted their speech output by providing a set speaking rate. It is unlikely that this increased output was related to a practice effect, because no such increase was observed as these subjects moved from the practice to isolation conditions. This finding was also unrelated to the stimuli, because the pictures were randomly presented across conditions as well as across individuals.

The control group produced few word-finding errors, and their confidence intervals did not overlap with those of the aphasic group, regardless of condition. Therefore, only the arcsine-transformed word-finding data of the aphasic group were submitted to an ANOVA. The condition effect was significant [F(3, 39) = 10.788, p < .001], and post hoc testing indicated that significantly more word-finding errors were produced by the aphasic group during divided-attention condition #2 than during any other condition. During the isolation condition, only one aphasic subject displayed word-finding errors on 10% or more of his total words produced. In contrast, during divided-attention condition #2, the frequency of word-finding errors was 10% or more for 11 of the 14 aphasic subjects.

Table 4 also displays group performances in terms of communicative efficiency, or percent correct information units. The control group produced a significantly greater %CIUs than the aphasic group (as indicated by confidence intervals), and their performance varied little across conditions [F(3, 21) = 0.360, p = .793]. In contrast, statistical analysis of the aphasic group's %CIUs indicated that the condition effect was significant [F(3, 39) =

Table 4. Group means, standard deviations, and ranges for lexical and pragmatic measures.

                                    Isolation            Focused Attention     Divided Attention #1   Divided Attention #2
                                    Aphasic   Control    Aphasic    Control    Aphasic    Control     Aphasic    Control
Total words                M        142.14    258.50     144.36     269.63      86.64     240.38       76.29     245.39
                           SD        48.81     72.90      53.31      87.44      30.62      88.22       43.56      85.97
                           Range    70–235   186–395     54–221    184–413     37–134    163–364      11–138    147–365
% Word-finding problems    M          4.36      0.61       6.13       0.91       9.47       1.50       15.40       0.59
                           SD         2.57      0.83       3.90       0.62       4.16       1.12       11.05       0.87
                           Range      2–11       0–2       2–17        0–2       4–16        0–3        5–44        0–2
% Unsuccessful utterances  M         34.29     12.36      35.18      11.00      54.29      11.02       60.37      11.92
                           SD        15.55      7.01      13.86      10.18      17.39       7.16       17.36       8.17
                           Range     12–67      4–27      14–61       0–33      28–86       0–23       19–80       3–25
Correct information units  M        109.64    217.75     104.57     221.63      56.79     202.13       47.14     204.50
                           SD        42.23     50.10      41.20      64.85      25.08      64.64       33.82      67.01
                           Range    46–201   140–287     26–188    145–318     16–105    155–310       3–117    141–315
% Correct information      M         76.31     85.18      71.63      82.79      63.77      85.04       54.85      84.40
  units                    SD         6.56      7.66       9.18       4.85      11.73       5.40       18.19       7.47
                           Range     64–85     73–94      48–80      77–89      35–81      77–93       25–85      73–96


15.593, p < .001]. A significantly greater proportion of the aphasic group's words were considered to be CIUs during the isolation condition as compared to both divided-attention conditions. The aphasic group also achieved a significantly greater %CIUs during the focused-attention condition than during divided-attention condition #2.

The percentage of unsuccessful utterances was analyzed to investigate changes in pragmatic competence as a function of speaking condition (see Table 4). Examination of 95% confidence intervals indicated that the control group consistently produced a significantly smaller percentage of unsuccessful utterances than the aphasic group and that increasing condition complexity had little impact on the control group's performance [F(3, 21) = 0.062, p = .979]. Analysis of the aphasic group's data showed a significant condition effect [F(3, 39) = 16.405, p < .001]. The aphasic group produced significantly more unsuccessful utterances during divided-attention conditions than during either the isolation or focused-attention condition. During divided-attention conditions, over 50% of the aphasic group's utterances were judged to be unsuccessful in terms of accurately conveying information about the target picture stimulus. The unsuccessful utterances of the control group typically consisted of off-task comments such as judgments regarding the events or people depicted in the picture stimuli (e.g., “That’s quite a situation to be in.” or “I’m not sure why he would do that.”). The aphasic group's unsuccessful utterances primarily consisted of incomplete or incoherent utterances, as well as off-task comments.

Table 5. Percent correct accuracy and reaction time (RT) group means, standard deviations, and ranges for the tone discrimination task.

Measure        Condition                            Aphasic         Control
Accuracy (%)   Isolation              M               93.57          100.00
                                      SD               6.36            0.00
                                      Range          80–100         100–100
               Focused attention      M               82.86           98.13
                                      SD              19.49            2.59
                                      Range          45–100          95–100
               Divided attention #1   M               53.24           95.07
                                      SD              20.44            5.87
                                      Range            8–98          85–100
               Divided attention #2   M               62.70           96.28
                                      SD              16.26            3.46
                                      Range           25–95          90–100
RT (ms)        Isolation              M                 898             541
                                      SD                328              78
                                      Range        518–1777         405–663
               Focused attention      M                1022             649
                                      SD                360             124
                                      Range        566–1772         452–808
               Divided attention #1   M                1839            1502
                                      SD                508             311
                                      Range        653–2547       1135–2074
               Divided attention #2   M                1679            1413
                                      SD                507             277
                                      Range        714–2594        985–1784

Tone Discrimination Task

Group accuracy and RT means, standard deviations, and ranges for the secondary tone-discrimination task are displayed in Table 5. No a priori hypotheses were made with respect to performance because interest was limited to the secondary task's effect on speaking performances; therefore, the following represent exploratory statistical analyses. Analysis of the arcsine-transformed accuracy data showed significant main effects for group [F(1, 20) = 38.685, p < .001], condition [F(3, 60) = 23.744, p < .001], and the group-by-condition interaction [F(3, 60) = 6.224, p = .001]. The control group performed significantly more accurately than the aphasic group except during the isolation condition, in which there was no significant group difference. The aphasic group discriminated tones significantly more accurately during isolation and focused-attention conditions than during either divided-attention condition; no other comparisons across conditions were significant. The control group's accuracies were similar across conditions.

Statistical testing of the logarithmically transformed RT data¹ revealed significant effects for group [F(1, 20) = 7.333, p = .014], condition [F(3, 60) = 98.831, p < .001], and the group-by-condition interaction [F(3, 60) = 3.404, p = .023]. The control group responded significantly more quickly than the aphasic group during isolation and focused-attention conditions; the RTs of the two groups did not significantly differ during divided-attention conditions. Both groups responded significantly more slowly during divided-attention conditions than during isolation or focused-attention conditions.
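The RT-exclusion rule noted in footnote 1 (drop error trials, then any RT more than 2 standard deviations from that individual's condition mean) might look like this in outline; the trial data are invented:

```python
from statistics import mean, stdev

def trim_rts(rts, correct, n_sd=2):
    """Drop RTs from incorrect responses, then drop any remaining RT more
    than n_sd standard deviations from the individual's condition mean."""
    kept = [rt for rt, ok in zip(rts, correct) if ok]
    m, s = mean(kept), stdev(kept)
    return [rt for rt in kept if abs(rt - m) <= n_sd * s]

# Hypothetical tone-discrimination trials for one subject in one condition
rts = [650, 700, 720, 680, 640, 660, 690, 710, 2900, 670]
correct = [True, True, True, False, True, True, True, True, True, True]
print(trim_rts(rts, correct))  # the error trial and the 2900-ms outlier drop out
```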

¹RTs from incorrect responses and extreme-value RTs (i.e., greater than 2 standard deviations above or below the given individual's mean RT for that condition) were excluded from all analyses. Less than 2% of each group's RT data were outliers.

Discussion

The present findings suggest that increased attentional demands may have a negative effect on the


spoken language of individuals with aphasia. Morphosyntactic, lexical, and pragmatic components of the aphasic individuals' verbal output were affected by manipulating the demands placed upon attentional capacity or allocation. Therefore, these results extend the literature that has documented a negative interaction between the attentional impairments and auditory processing skills of individuals with aphasia (Arvedson & McNeil, 1986a, b; LaPointe & Erickson, 1991; Murray, Holland, & Beeson, 1997, in press; Tseng et al., 1993).

As hypothesized, the aphasic group performed significantly more poorly on our linguistic and pragmatic speaking measures than the control group across most conditions. Overall, control subjects produced more syntactically complete and complex utterances, said more (i.e., produced a greater number of words), and committed fewer word-finding errors than the aphasic subjects. The control subjects also produced a smaller proportion of unsuccessful utterances than the aphasic subjects, and their spoken language was considered to be more efficient (i.e., a greater percentage of CIUs) than that of the aphasic subjects.

Whereas the above results quantified expected differences between normal and aphasic speech, the more interesting finding is that several of these differences were exaggerated during divided-attention conditions. That is, shifting from isolation to focused-attention to divided-attention conditions (i.e., increasing task or resource demands) resulted in little change in morphosyntactic, lexical, or pragmatic aspects of the control subjects' spoken language. It is possible that more sensitive language measures would have detected changes in the control group's verbal output as a function of condition complexity; previous studies have documented negative effects of divided attention on the language skills of normal, aging adults (Morris, Gick, & Craik, 1988; Tun, Wingfield, & Stine, 1991).
In contrast to our null findings for the control group, our measures indicated that increasing condition complexity had a negative impact on the verbal output of the aphasic individuals; as they shifted from isolation to divided-attention conditions, they produced fewer syntactically complete and complex utterances and fewer words, exhibited poorer word-finding accuracy, and, pragmatically, their communication was considered less successful and less efficient. Therefore, the individuals with aphasia experienced dual-task interference, as evidenced by increased difficulty with the syntactic, lexical, and pragmatic aspects of their speech. We do not deny the possible existence of a specific linguistic impairment in aphasia, given that differences were observed between the picture descriptions of the aphasic and control groups even during optimal environmental conditions (i.e., the isolation condition). However, whereas the quality and quantity of spoken language remained

constant for control subjects, it declined for the aphasic subjects. For example, aphasic and control speech samples differed significantly in terms of the proportion of syntactically complete utterances during divided-attention conditions only. Thus, deficits of attentional capacity or its allocation are implicated in the individuals with aphasia because they displayed greater performance decrements under dual-task conditions.

The aphasic group's performance pattern on the secondary task was also indicative of deficits of attentional capacity or its allocation. For example, significant group differences in tone-discrimination accuracy were observed only during focused and divided-attention conditions. Furthermore, only the aphasic group showed a significant condition effect, performing less accurately in the divided-attention conditions than in the other listening conditions. Therefore, only the aphasic group showed dual-task accuracy decrements. Interestingly, although control subjects responded more quickly than the aphasic subjects during isolation and focused-attention conditions, no significant group differences were found during divided-attention conditions. This finding suggests that the control subjects successfully used a speed/accuracy trade-off. Regardless of condition, the accuracy of their verbal output and listening skills remained stable; in contrast, during divided-attention conditions, their RTs for the tone task were significantly slower. The aphasic group also responded more slowly during divided-attention conditions. However, there was less apparent benefit to their speaking or listening skills, given that during dual-task conditions they also showed accuracy decrements in their picture-description and tone-discrimination performances.

Because the condition order was not counterbalanced, it is possible that the aphasic group's dual-task decrements were a product of fatigue rather than attentional deficits.
If the pattern of performance decrements were primarily a function of underlying time-based factors such as fatigue or noise build-up, one would expect to see a tuning-out or fading-out pattern both within and across conditions (Brookshire, 1978; McNeil, 1982). That is, individuals with aphasia should have performed initial items or conditions most accurately, efficiently, and quickly, and shown performance decrements on subsequent items and conditions. However, the aphasic group failed to show a progressive decline in speaking or listening performance on any dependent variable. Furthermore, the most common within-condition performance pattern among aphasic and control subjects was an intermittent one (i.e., random fading in and out). For example, review of individual picture descriptions indicated that syntactically complete utterances were interspersed throughout the 2-min sample, as opposed to appearing only in the initial minute or 30 s of that sample. Although we propose that fatigue effects were minimal in this study, future research should be designed to control for order effects.


In addition to significant group differences, we also predicted that there would be differential effects of speaking condition on the morphosyntactic, lexical, and pragmatic skills of the aphasic individuals. Given the assumption of automaticity in cognitive processing, we expected to observe dual-task interference in lexical and pragmatic performance because these language domains are dependent on relatively controlled, resource-demanding processes (Bayles & Kaszniak, 1987; Schwartz et al., 1979; Wickens, 1989). As hypothesized, individuals with aphasia showed the greatest lexical and pragmatic difficulty during dual-task conditions. Also as predicted, the morphological complexity of the aphasic group's verb phrases varied little across conditions. This finding supports the proposal that morphological processes are completed with little strain on attentional capacity (Bock, 1982; Jorm, 1986). That is, increasing attentional demands did not affect morphological complexity because this aspect of language formulation draws upon relatively automatic rather than relatively controlled cognitive processes. However, we must acknowledge that we analyzed morphological production as it pertains to the verb only. Therefore, conclusions regarding the relation between attentional demands and other aspects of morphology (e.g., use of noun- and adjective-bound morphemes) would be premature.

The automaticity hypothesis failed to predict the negative impact of dual-task conditions on the syntactic completeness and complexity of the aphasic group's utterances. These aspects of productive grammar, like morphological complexity, were expected to be dependent on relatively automatic processes and, thus, resistant to dual-task interference (Bayles & Kaszniak, 1987; Bock, 1982; Jorm, 1986; Kempler et al., 1987).
One possible explanation for the aphasic group’s syntactic decrements during dual-task conditions is that their lexical and pragmatic breakdowns ultimately limited or affected their choice and production of syntactic structures (Bastiaanse, Edwards, & Kiss, 1996; Blanken, Dittmann, Haas, & Wallesch, 1987; Parisi, 1987; Penn, 1988; Saffran et al., 1989). For example, Penn (1988) noted that the frequent self-correction attempts of her fluent aphasic subjects presented syntactically as incomplete sentences; she concluded from her in-depth pragmatic and syntactic analyses that there was an “interdependence between form and function” (p. 198). Another possibility is that during dual-task conditions, the individuals with aphasia tried to minimize attentional demands by simplifying their syntactic output. A similar proposition has been advanced by Hofstede and Kolk (1994; Kolk & Hofstede, 1994); their “strategy” hypothesis submits that agrammatic speech is the product of an aphasic speaker’s attempts to avoid using complete sentences. Individuals with aphasia may prefer to use either simple sentences or sentence fragments because


of the reduced resource requirements of these forms in comparison with grammatically complete or more complex utterance forms. Although originally formulated to describe agrammatic output, versions of the adaptation hypothesis have also been advanced to account for quantitative differences in the syntactic skills of normal speakers and those with fluent aphasia (Bastiaanse et al., 1996; Penn, 1988). For example, Nadeau (1988) described two patients with mild fluent aphasia whose spontaneous language was “certainly not agrammatic… [but] gave the impression of simplification and lack of variability in syntax” (p. 1128). When these patients were required to use a variety of more complex syntactic structures, both displayed syntactic impairments. Therefore, the adaptation hypothesis provides a viable explanation for the negative impact of divided-attention conditions on the grammatical output of the aphasic group. However, it must be noted that inherent in this hypothesis is the notion of a continuum of processing automaticity: Production of incomplete or simple utterances has fewer costs in terms of processing resources than production of more complete and complex syntactic forms (Bates et al., 1995; Bock, 1982). What the adaptation hypothesis brings to the notion of automaticity is that individuals with aphasia may develop and use strategies that capitalize on this continuum. That is, the individuals with aphasia may have compromised their syntactic form in an attempt to reduce the attentional demands of the dual-task conditions and, consequently, to avoid or minimize communication breakdown. In the future, a more detailed analysis of language functions will be completed to determine whether, during dual-task conditions, individuals with aphasia used simpler speech not only in terms of syntactic structure but also in terms of lexical and pragmatic forms (e.g., reduced lexical diversity).
The present findings suggest that impairments of attentional capacity or its allocation can negatively affect the quantity and quality of aphasic individuals' spoken language. Individuals with aphasia frequently comment about the adverse effects of distraction on their language skills (Marshall, 1993; Skelly, 1975). For example, Moore (1994) wrote, “A mild stroke indeed, but I cannot today do any two things simultaneously…. Noise is brutality, every aphasic will tell. And truly there should be a law against it” (p. 102). Clinically, our results quantify the complaints of Moore and other individuals with mild aphasia; they also highlight the importance of assessing and treating aphasic individuals' spoken language skills in both optimal and suboptimal environments.

This study represents an initial attempt to quantify and qualify the relation between impairments of attentional capacity or its allocation and aphasic speakers' linguistic and pragmatic skills. These data were drawn from a larger study that incorporated complex


listening and speaking tasks (Murray, 1994); consequently, only aphasic individuals with relatively mild impairments met performance criteria. Therefore, generalization of these findings to more severely language-impaired individuals or to aphasic individuals presenting with different patterns of language impairment (e.g., individuals with agrammatic aphasia) would be premature. Because of statistical constraints (i.e., the threat of substantially increasing the probability of Type I error), we were also limited in the number of linguistic analyses we could complete. Confining future analyses to one linguistic domain (e.g., syntax only) would allow a more detailed specification of which, if any, language structures and processes within that domain are most or least sensitive to manipulations of attentional demands. Our findings may also have been restricted by the nature of our speaking task. Picture-description tasks tend to elicit labeling behavior, and this may have limited the complexity of morphosyntactic structures as well as the number and variety of lexical and pragmatic behaviors (Glosser et al., 1988; Li, Ritterman, Della Volpe, & Williams, 1996; Shadden, Burnette, Eikenberry, & DiBrezzo, 1991; Tompkins, 1995). Future studies should incorporate conversational discourse, story-retelling, video-narration, or procedural speaking tasks to determine whether impairments of attentional capacity or its allocation interact with linguistic and pragmatic behaviors (e.g., narrative knowledge, discourse coherence, speech acts) that we did not observe.

Acknowledgments

We thank Scott Jackson and Debbie Johnson of St. Joseph's Hospital, Janet Hawley of St. Mary's Hospital, and Shannon Bryant of the University of Arizona for their help in recruiting subjects. We would also like to thank Drs. Malcolm McNeil, Connie Tompkins, and Cynthia Thompson, and one anonymous reviewer for their helpful comments on an earlier version of this paper. This project was supported in part by the National Center for Neurogenic Communication Disorders (Grant DC-01409).

References

Arvedson, J. C., & McNeil, M. R. (1986a). Accuracy and response times for semantic judgments and lexical decisions with left and right hemisphere lesions. Clinical Aphasiology, 15, 188–200.

Arvedson, J. C., & McNeil, M. R. (1986b). Response interference of auditory processing with left/right hemisphere lesions. Paper presented at the annual convention of the American Speech-Language-Hearing Association, Detroit, MI.

Barona, A., Reynolds, C., & Chastain, R. (1984). A demographically based index of premorbid intelligence for the WAIS–R. Journal of Consulting and Clinical Psychology, 52, 885–887.

Bastiaanse, R., Edwards, S., & Kiss, K. (1996). Fluent aphasia in three languages: Aspects of spontaneous speech. Aphasiology, 10, 561–575.

Bates, E., Harris, C., Marchman, V., Wulfeck, B., & Kritchevsky, M. (1995). Production of complex syntax in normal ageing and Alzheimer's Disease. Language and Cognitive Processes, 10, 487–539.

Bayles, K. A., & Kaszniak, A. W. (1987). Communication and cognition in normal aging and dementia. San Diego, CA: College-Hill.

Beeson, P. M., Bayles, K. A., Rubens, A. B., & Kaszniak, A. W. (1993). Memory impairment and executive control in individuals with stroke-induced aphasia. Brain and Language, 45, 253–275.

Blanken, G., Dittmann, J., Haas, J. C., & Wallesch, C. W. (1987). Spontaneous speech in senile dementia and aphasia: Implications for a neurolinguistic model of language production. Cognition, 27, 247–274.

Bock, J. K. (1982). Toward a cognitive psychology of syntax: Information processing contributions to sentence formulation. Psychological Review, 89, 1–47.

Bowles, M. E., Erickson, R. J., & LaPointe, L. L. (1992). Auditory vigilance during divided task attention in individuals with aphasia and with right hemisphere damage. Paper presented at the Clinical Aphasiology Conference, Durango, CO.

Brookshire, R. H. (1972). Effects of task difficulty on naming performance of aphasic subjects. Journal of Speech and Hearing Research, 15, 551–558.

Brookshire, R. H. (1976). Effects of task difficulty on sentence comprehension performance of aphasic subjects. Journal of Communication Disorders, 9, 167–173.

Brookshire, R. H. (1978). Auditory comprehension and aphasia. In D. F. Johns (Ed.), Clinical management of neurogenic communicative disorders (pp. 103–128). Boston: Little, Brown and Company.

Campbell, T. F., & McNeil, M. R. (1985). Effects of presentation rate and divided attention on auditory comprehension in children with an acquired language disorder. Journal of Speech and Hearing Research, 28, 513–520.

Cohen, J. D., Dunbar, K., & McClelland, J. L. (1990). On the control of automatic processes: A parallel distributed processing account of the Stroop effect. Psychological Review, 97, 332–361.

Cohen, J. D., MacWhinney, B., Flatt, M., & Provost, J. (1993). PsyScope: A new graphic interactive environment for designing psychology experiments. Behavioral Research Methods, Instruments and Computers, 25, 257–271.

Cohen, R., Woll, G., & Ehrenstein, W. H. (1981). Recognition deficits resulting from focused attention in aphasia. Psychological Research, 43, 391–405.

Dabul, B. (1979). Apraxia battery for adults. Austin, TX: Pro-Ed.

Daneman, M., & Green, I. (1986). Individual differences in comprehending and producing words in context. Journal of Memory and Language, 25, 1–18.

Glosser, G., & Goodglass, H. (1990). Disorders in executive control functions among aphasic and other brain-damaged patients. Journal of Clinical and Experimental Neuropsychology, 12, 485–501.



Glosser, G., Wiener, M., & Kaplan, E. (1988). Variations in aphasic language behaviors. Journal of Speech and Hearing Disorders, 53, 115–124.

Goodglass, H., & Berko, J. (1960). Agrammatism and inflectional morphology in English. Journal of Speech and Hearing Research, 3, 257–267.

Grant, D. A., & Berg, E. A. (1981). Wisconsin card sorting test. Odessa, FL: Psychological Assessment Resources.

Haarmann, H. J., & Kolk, H. H. J. (1992). The production of grammatical morphology in Broca's and Wernicke's aphasics: Speed and accuracy factors. Cortex, 28, 97–112.

Hasher, L., & Zacks, R. (1979). Automatic and effortful processes in memory. Journal of Experimental Psychology: General, 108, 356–388.

Helm-Estabrooks, N. (1992). Aphasia diagnostic profiles. Chicago: Riverside.

Helm-Estabrooks, N., & Hotz, G. (1991). Brief test of head injury. Chicago: Riverside.

Hofstede, B. T. M., & Kolk, H. H. J. (1994). The effects of task variation on the production of grammatical morphology in Broca's aphasia: A multiple case study. Brain and Language, 46, 278–328.

Holland, A. L., Miller, J., Reinmuth, O. M., Bartlett, C., Fromm, D., Pashek, G., Stein, D., & Swindell, C. (1985). Rapid recovery from aphasia: A detailed language analysis. Brain and Language, 24, 156–173.

Jackson, J. H. (1878). On affections of speech from disease of the brain. Brain, 1, 304–330.

Jorm, A. F. (1986). Controlled and automatic information processing in senile dementia: A review. Psychological Medicine, 16, 77–88.

Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.

Kahneman, D., & Treisman, A. (1984). Changing views of attention and automaticity. In R. Parasuraman, D. R. Davies, & J. Beatty (Eds.), Varieties of attention (pp. 29–61). Toronto: Academic Press.

Kempler, D., Curtiss, S., & Jackson, C. (1987). Syntactic preservation in Alzheimer's Disease. Journal of Speech and Hearing Research, 30, 343–350.

Keppel, G. (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice Hall.

Kertesz, A. (1982). Western aphasia battery. New York: Grune & Stratton.

Klingman, K. C., & Sussman, H. M. (1983). Hemisphericity in aphasic language recovery. Journal of Speech and Hearing Research, 26, 249–256.

Kolk, H. H. J., & Hofstede, B. T. M. (1994). The choice of ellipsis: A case study of stylistic shifts in an agrammatic speaker. Brain and Language, 47, 507–509.

LaPointe, L. L., & Erickson, R. J. (1991). Auditory vigilance during divided task attention in aphasic individuals. Aphasiology, 5, 511–520.

Lesser, R. (1989). Linguistic investigations of aphasia. London: Whurr Publications.

Li, E. C., Ritterman, S., Della Volpe, A., & Williams, S. E. (1996). Variation in grammatic complexity across three types of discourse. Journal of Speech-Language Pathology and Audiology, 20, 180–186.

Loverso, F. L., & Prescott, T. E. (1981). The effect of alerting signals on left brain damaged (aphasic) and normal subjects' accuracy and response time to visual stimuli. Clinical Aphasiology, 10, 55–67.

Lum, C. C., & Ellis, A. W. (1994). Is "nonpropositional" speech preserved in aphasia? Brain and Language, 46, 368–391.

MacWhinney, B. (1995). The CHILDES project: Tools for analyzing talk. Hillsdale, NJ: Lawrence Erlbaum.

Marshall, R. C. (1993). Problem-focused group treatment for clients with mild aphasia. American Journal of Speech-Language Pathology, 2, 31–37.

McNeil, M. R. (1982). The nature of aphasia in adults. In N. J. Lass, L. V. McReynolds, J. L. Northern, & D. E. Yoder (Eds.), Speech, language and hearing volume II: Pathologies of speech and language (pp. 692–739). Toronto: W. B. Saunders.

McNeil, M. R. (1983). Aphasia: Neurological considerations. Topics in Language Disorders, 1, 1–19.

McNeil, M. R., Odell, K., & Tseng, C. H. (1991). Toward the integration of resource allocation into a general theory of aphasia. Clinical Aphasiology, 20, 21–39.

Moore, D. (1994). A second start. Topics in Stroke Rehabilitation, 1, 100–103.

Morris, R. G., Gick, M. L., & Craik, F. I. M. (1988). Processing resources and age differences in working memory. Memory and Cognition, 16, 362–366.

Murray, L. L. (1994). Attention impairments of individuals with aphasia due to anterior versus posterior left hemisphere lesions. Unpublished doctoral dissertation, University of Arizona, Tucson.

Murray, L. L., Holland, A. L., & Beeson, P. M. (1997). Accuracy monitoring and task demand evaluation in aphasia. Aphasiology, 11, 401–414.

Murray, L. L., Holland, A. L., & Beeson, P. M. (in press). Auditory processing in individuals with aphasia: A study of resource allocation. Journal of Speech, Language, and Hearing Research.

Nadeau, S. E. (1988). Impaired grammar with normal fluency and phonology: Implications for Broca's aphasia. Brain, 111, 1111–1137.

Nicholas, L. E., & Brookshire, R. H. (1993). A system for quantifying the informativeness and efficiency of the connected speech of adults with aphasia. Journal of Speech and Hearing Research, 36, 338–350.

Parisi, D. (1987). Grammatical disturbances of speech production. In M. Coltheart, G. Sartori, & R. Job (Eds.), The cognitive neuropsychology of language (pp. 201–219). Hillsdale, NJ: Erlbaum.

Penn, C. (1988). The profiling of syntax and pragmatics in aphasia. Clinical Linguistics and Phonetics, 2, 179–207.

Petry, M. C., Crosson, B., Gonzalez-Rothi, L. J., Bauer, R. M., & Schauer, C. A. (1994). Selective attention and aphasia in adults: Preliminary findings. Neuropsychologia, 32, 1397–1408.

Robin, D. A., & Rizzo, M. (1987). The effects of focal lesions on intramodal and cross-modal orienting of attention. Clinical Aphasiology, 18, 62–74.

Rosenbek, J. C., LaPointe, L. L., & Wertz, R. T. (1989). Aphasia: A clinical approach. Austin, TX: Pro-Ed.


Saffran, E. M., Berndt, R. S., & Schwartz, M. F. (1989). The quantitative analysis of agrammatic production: Procedure and data. Brain and Language, 37, 440–479.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66.

Schwartz, M. F., Marin, O. S., & Saffran, E. M. (1979). Dissociations of language function in dementia: A case study. Brain and Language, 7, 277–306.

Shadden, B. B., Burnette, R. B., Eikenberry, B. R., & DiBrezzo, R. (1991). All discourse tasks are not created equal. Clinical Aphasiology, 20, 327–342.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127–190.

Skelly, M. (1975). Aphasic patients talk back. American Journal of Nursing, 75, 1140–1142.

Thompson, C. K., Shapiro, L. P., Tait, M. E., Jacobs, B. J., Schneider, S. L., & Ballard, K. J. (1995). A system for the linguistic analysis of agrammatic language production. Brain and Language, 51, 124–129.

Tompkins, C. A. (1995). Right hemisphere communication disorders: Theory and management. San Diego, CA: Singular.

Tseng, C. H., McNeil, M. R., & Milenkovic, P. (1993). An investigation of attention allocation deficits in aphasia. Brain and Language, 45, 276–296.

Tun, P. A., Wingfield, A., & Stine, E. A. L. (1991). Speech-processing capacity in young and older adults: A dual-task study. Psychology and Aging, 6, 3–9.

Ulatowska, H. K., & Bond Chapman, S. (1995). Discourse studies. In R. Lubinski (Ed.), Dementia and communication (pp. 115–132). San Diego, CA: Singular.

Waters, G. S., Caplan, D., & Hildebrandt, N. (1987). Working memory and written sentence comprehension. In M. Coltheart (Ed.), Attention and performance XII (pp. 531–555). London: Erlbaum.

Whitaker, H. (1976). A case of the isolation of the language function. In H. Whitaker & H. A. Whitaker (Eds.), Studies in neurolinguistics, Vol. 2 (pp. 1–58). New York: Academic Press.

Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman, D. R. Davies, & J. Beatty (Eds.), Varieties of attention (pp. 63–102). Toronto: Academic Press.

Wickens, C. D. (1989). Attention and skilled performance. In D. Holding (Ed.), Human skills (pp. 72–105). New York: John Wiley & Sons.

Received March 15, 1996
Accepted July 10, 1997

Contact author: Laura L. Murray, PhD, Department of Speech and Hearing Sciences, Indiana University, Bloomington, IN 47405

Appendix A. Task instructions.

1. Isolation

a) Picture-Description Task. I'm going to show you a picture. I want you to describe this picture as completely as possible until I say stop. You have lots of time so don't feel rushed. You don't have to press any keys.

b) Tone-Discrimination Task. You're going to hear some tones. Some of the tones are high like this (subject hears example), and some of the tones are low like this (subject hears example). When you hear the high tone, press the HIGH key (gesture to key). When you hear the low tone, press the LOW key (gesture to key). Try to respond as accurately and as quickly as you can.

2. Focused Attention

a) Picture-Description Task. At the same time, I'm going to show you a picture and you're going to hear some tones. I want you to describe the picture as completely as possible until I say stop. Ignore the tones. You don't have to press any keys, just describe the picture.

b) Tone-Discrimination Task. At the same time, I'm going to show you a picture and you're going to hear some tones. I want you to respond to the tones as accurately and as quickly as you can. You don't have to say anything, just respond to the tones.

3. Divided Attention #1

At the same time, I'm going to show you a picture and you're going to hear some tones. This time you have to do both tasks. I want you to attend primarily to describing the picture until I say stop. Try to describe this picture as completely and as accurately as you did before, when you did the picture description all by itself. At the same time as you are speaking, I also want you to try to respond to the tones. Feel free to guess if you are unsure about the tone. Remember I want you to do both tasks, but your primary job is to describe the picture.

4. Divided Attention #2

At the same time, I'm going to show you a picture and you're going to hear some tones. This time you have to do both tasks. I want you to attend and respond primarily to the tones. Try to identify the tones as quickly and as accurately as you did before, when all you had to do was respond to the tones. At the same time as you are responding to the tones, I also want you to try to describe the picture. Remember I want you to do both tasks, but your primary job is to respond to the tones.
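The tone-discrimination conditions in Appendix A are scored on response accuracy and speed. As a rough illustration of that scoring logic only (the study itself ran under PsyScope; the `Trial` record and `score_block` function here are our own hypothetical names, not the authors' implementation):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    tone: str      # stimulus presented: "high" or "low"
    response: str  # key pressed: "high", "low", or "" if no response
    rt_ms: float   # response time in milliseconds (0 if no response)

def score_block(trials):
    """Return (accuracy, mean RT over correct responses) for one condition block."""
    correct = [t for t in trials if t.response == t.tone]
    accuracy = len(correct) / len(trials)
    mean_rt = mean(t.rt_ms for t in correct) if correct else float("nan")
    return accuracy, mean_rt

# A hypothetical four-trial block: two correct, one error, one miss.
block = [Trial("high", "high", 420.0), Trial("low", "high", 510.0),
         Trial("low", "low", 455.0), Trial("high", "", 0.0)]
print(score_block(block))  # -> (0.5, 437.5)
```

Computing accuracy and RT per block allows the tone task to be compared across the isolation, focused, and divided-attention conditions, which is the dual-task comparison the instructions are designed to support.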

Appendix B. Formula for the Morphological Complexity (AUX) Score (Saffran et al., 1989).

AUX score = (total AUX score / number of matrix verbs) – 1
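As a worked example of the formula (the function name and sample values here are ours, chosen for illustration, not drawn from Saffran et al.): a sample that accumulates a total AUX score of 15 over 10 matrix verbs yields 15/10 – 1 = 0.5.

```python
def aux_score(total_aux: float, matrix_verbs: int) -> float:
    """Morphological complexity (AUX) score, per the Appendix B formula:
    mean AUX points per matrix verb, minus 1."""
    if matrix_verbs == 0:
        raise ValueError("sample must contain at least one matrix verb")
    return total_aux / matrix_verbs - 1

# Total AUX score of 15 over 10 matrix verbs:
print(aux_score(15, 10))  # -> 0.5
```

The subtraction of 1 means a sample whose every matrix verb carries exactly one AUX point scores 0, so positive values index auxiliary elaboration beyond the bare verb.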
