BRAIN AND LANGUAGE 34, 279-293 (1988)

Phonological Processing and Lexical Access in Aphasia

WILLIAM MILBERG

Aphasia Research Center and VA Medical Center, Boston, and Geriatric Research, Education and Clinical Center, VA Medical Center, West Roxbury, Massachusetts

SHEILA BLUMSTEIN

Aphasia Research Center and VA Medical Center, Boston, and Department of Cognitive and Linguistic Sciences, Brown University

AND BARBARA DWORETZKY

Aphasia Research Center and VA Medical Center, Boston

This study explored the relationship between on-line processing of phonological information and lexical access in aphasic patients. A lexical decision paradigm was used in which subjects were presented auditorily with pairs of words or word-like stimuli and were asked to make a lexical decision about the second stimulus in the pair. The initial phonemes of the first word primes, which were semantically related to the real word targets, were systematically changed by one or more than one phonetic feature, e.g., cat-dog, gat-dog, wat-dog. Each of these priming conditions was compared to an unrelated word baseline condition, e.g., nurse-dog. Previous work with normals showed that even a nonword stimulus receives a lexical interpretation if it shares a sufficient number of phonetic features with an actual word in the listener's lexicon: results indicated a monotonically decreasing degree of facilitation as a function of phonological distortion. In contrast, fluent aphasics showed priming in all phonological distortion conditions relative to the unrelated word baseline. Nonfluent aphasics showed priming only in the undistorted, related word condition relative to the unrelated word baseline. Nevertheless, in a secondary task requiring patients to make a lexical decision on the nonword primes presented singly, all aphasics showed phonological feature sensitivity. These results suggest deficits for aphasic patients in the various processes contributing to lexical access, rather than impairments at the level of lexical organization or phonological organization. © 1988 Academic Press, Inc.

This research was supported by Grant NS 22282 to Sheila Blumstein at Brown University, Grant NS 06209 to Harold Goodglass at Boston University School of Medicine, and by VA Merit Review 097-44-3765-001 and NIA Grant RO1 AG 03354-03 to William Milberg. Thanks to Steven Kosslyn for comments on an earlier version of this paper, and to Allyson Rosen and Anna Barrett for their help in the preparation of the manuscript. Address correspondence and reprint requests to William Milberg, GRECC, 1400 VFW Parkway, VA Medical Center, West Roxbury, MA 02132.

0093-934X/88 $3.00
Copyright © 1988 by Academic Press, Inc. All rights of reproduction in any form reserved.

A great deal of research has been conducted to determine the extent to which speech perception impairments may contribute to auditory language comprehension deficits in aphasic patients. Results of these studies have generally shown that nearly all aphasic patients display some impairments in the perception of phonemic or segmental contrasts. Surprisingly, however, speech perception abilities do not seem to correlate with language comprehension abilities (Blumstein, Baker, & Goodglass, 1977a). A number of studies have shown that Wernicke's aphasics, who typically show profound auditory language comprehension impairments, may not necessarily show the poorest performance in such speech perception tasks (Basso, Casati, & Vignolo, 1977; Blumstein et al., 1977a; Jauhianen & Nuutila, 1977; Miceli, Caltagirone, Gainotti, & Payer-Rigo, 1978; Miceli, Gainotti, Caltagirone, & Masullo, 1980). Thus, it has been suggested that speech perception deficits alone seem to be an unlikely basis for auditory language comprehension deficits, particularly in Wernicke's aphasics. Nevertheless, further studies have suggested that aphasic patients do display speech perception impairments, particularly as speech processing may interact with higher levels of linguistic processing. In particular, studies using synthetic speech continua have shown a dissociation between aphasics' ability to discriminate and label stimuli varying along a particular acoustic dimension (Blumstein, Cooper, Zurif, & Caramazza, 1977b, 1984). This dissociation has been interpreted as reflecting the fact that the perception of the acoustic parameters defining phonetic categories may be spared in aphasia, but the ability to use these dimensions to categorize the sounds in a linguistically relevant way may be impaired. This possibility was explored by Baker, Blumstein, and Goodglass (1981), who found an interaction between phonological and semantic factors in lexical access in their patients.
In this study, they devised a series of tasks designed to increase systematically the demands for semantic processing of auditorily presented words. Within each experiment, the test words were varied in terms of the phonological distance of the correct responses from the test foils. Results showed that for all groups, as semantic demands increased, phonological discrimination suffered; as phonological discrimination became more difficult, semantic processing suffered. While both Wernicke's and Broca's aphasics showed similar patterns, these effects were significantly greater for Wernicke's than for Broca's aphasics. These results suggest an intimate interaction between phonological and semantic factors in word processing that obtains across aphasia types. Thus, while speech perception deficits per se may not account for aphasic patients' language comprehension failures, an impairment in speech processing as it relates to lexical access may be a prime candidate in explicating some of the language impairments displayed by aphasic patients.

What is not clear from the studies conducted to date is the extent to which differences may emerge in aphasic patients as a function of such an impairment. One limitation of all of the previous studies is that they required judgments or decisions that were based on the "end product" of processing by the entire language system. Language processing may be subject to a number of time-limited component processes corresponding to phonological, lexical, and syntactic analysis. All of these ultimately contribute to this so-called cognitive "end product." Unless these processes are sampled on-line, that is, while they are in operation, it is impossible to distinguish the contribution of these various processes and their potential impairments in aphasic patients.

In a recent study using normal subjects, Milberg, Blumstein, and Dworetzky (1988) employed a lexical decision paradigm that permitted the exploration of the relationship between phonological information and lexical access while these processes were ongoing. This study provided information about how phonetic feature information is used in the course of lexical access. It was demonstrated that even a nonword stimulus receives a lexical interpretation if it shares a sufficient number of phonetic features with an actual word in the listener's vocabulary. A paradigm was used in which subjects were presented auditorily with pairs of words or word-like stimuli and were asked to make a lexical decision about the second stimulus in the pair (the first word of each pair will henceforth be referred to as the "prime" and the second word of each pair as the "target").
The initial phonemes of word primes that were semantically related to real word targets were changed by one or more phonetic features, keeping the succeeding rhyming part of the word constant. This change also altered the lexical status of the prime from a real word to a nonword (e.g., "cat" to "gat" or "wat"). A single feature change of voicing in the initial phoneme of the word "cat" (which would be semantically related to the target "dog") would form the nonword "gat." A change of more than one phonetic feature of the initial phoneme of "cat" would result in the nonword prime "wat." Each of these priming conditions (cat-dog, gat-dog, wat-dog) was compared to an unrelated word baseline such as "nurse" (nurse-dog). In this manner it was possible to examine the sensitivity of the semantic facilitation effect to the explicit alteration of a speech-related phonetic variable. In particular, it was possible to explore whether semantic facilitation would occur for a nonword that was phonologically similar to a real word. Normal subjects were found to show a strongly monotonic relationship between the nature of the phonetic alteration of the prime and the size of the semantic facilitation effect compared to the related word prime and the unrelated word baseline. The greater the distortion of the initial consonant, the smaller the priming effect, although a change of more than one feature was needed to reduce the facilitation effect significantly below that of the undistorted related word. These results showed that even nonwords access the lexicon, as long as the nonword shares some phonetic features with an actual lexical entry. Although this lexical access seems to be slowed as a function of the phonetic distance between the initial consonant of the nonword and an actual real word lexical entry, semantic facilitation nonetheless takes place.

The current study was an attempt to extend these observations to patients with a variety of aphasic disorders. Aphasic patients were administered a lexical decision task identical in design to that used by Milberg et al. (1988). The functional relationship between lexical access (in the lexical decision task) and phonological distortion was examined across a number of clinical language variables. Of particular interest was whether patients who were differentiated by such clinical variables as fluency and comprehension would produce the monotonic relationship between lexical decision latency and phonological distortion that is characteristic of the normal listener. Dimensions such as fluency and comprehension have served as useful variables in the characterization of patterns of language breakdown in aphasia (e.g., Blumstein et al., 1977a). In addition, a preliminary examination of the clinical dimension of agrammatism was also included in the current study because of its central role in characterizing syntactic deficits in aphasia (e.g., Kean, 1985).

METHOD

Subjects. Seventeen aphasic patients from the Aphasia Research Center of the Boston Veterans Administration Medical Center served as subjects. All but one patient became aphasic as a consequence of stroke.
One patient with progressive aphasic symptomatology who met the classic criteria for Wernicke's aphasia was also included. The subjects were exclusively male, with an age range of 39 to 73 and a mean age of 59.6 years. All of the aphasic patients were administered the Boston Diagnostic Aphasia Exam (BDAE) (Goodglass & Kaplan, 1972) by speech pathologists, as well as complete clinical and neurological examinations by the staff. On the basis of these measures the patients were diagnosed by aphasia type. Three subjects were classified as Wernicke's, four as Broca's, one as mixed anterior, one as transcortical sensory, one as transcortical motor, one as mixed transcortical, two as conduction, two as global, one as anomic, and one as pure alexic. The subjects that were tested were representative of a wide range of clinical syndromes, none of which formed a sufficiently large group for the purposes of statistical analysis. Therefore, for the purpose of group analysis subjects were divided into high- and low-comprehension groups based on the Word Discrimination score from the BDAE (patients with z scores of 0 or less were classified as low-comprehension; patients with z scores above 0 were classified as high-comprehension). Subjects were also divided into high- and low-fluency groups based on BDAE ratings. Table 1 shows a summary of the patients' ages, etiologies, z scores for the auditory comprehension subtest, and fluency classifications based on phrase length ratings from the BDAE.

In addition to the standardized tests, subjects were also classified based on the presence of productive agrammatism in their speech output. That is, any patient who used telegraphic speech, or who incompletely used tenses, plurals, or grammatical words in the interview, the Cookie Theft description, or the repetition subtest of the BDAE, was classified as agrammatic in his speech output. There was no quantification of these observations. All patients were classified as to the agrammatic quality of their speech output independent of their clinical diagnosis. As a result, one transcortical motor aphasic was classified by the criteria listed above as agrammatic in his speech output. The agrammatism ratings are also shown in Table 1.

TABLE 1
SUBJECTS

Aphasia type            Age   Etiology     Comprehension   Agrammatic   Fluency
                                           (z score)
Wernicke's              59    Prog.         0              No           Fluent
Wernicke's              62    CVA          -1.6            No           Fluent
Wernicke's              57    CVA          -0.33           No           Fluent
Broca's                 72    CVA           0.83           Yes          Nonfluent
Broca's                 73    CVA           0.5            Yes          Nonfluent
Broca's                 54    CVA           1.04           Yes          Nonfluent
Broca's                 61    CVA           0.28           Yes          Nonfluent
Global                  63    CVA          -0.82           Yes          Nonfluent
Global                  39    CVA          -0.45           Yes          Nonfluent
Transcortical sensory   55    CVA          -0.43           No           Fluent
Transcortical motor     53    Hemorrhage    0.51           Yes          Fluent
Mixed transcortical     60    Hemorrhage   -0.68           No           Fluent
Conduction              60    CVA           0.62           No           Fluent
Conduction              65    CVA           0.51           No           Fluent
Pure alexic             70    CVA           0.94           No           Fluent
Anomic                  49    Hematoma      0.79           No           Fluent
Anterior                61    CVA           0.34           Yes          Fluent

Stimuli. The stimuli used in this study were identical to those described in Milberg et al. (1988). They consisted of high-frequency pairs of real words and nonwords, with the first member of the pair considered the prime and the second member considered the target. Six types of these "prime-target" pairs were constructed in order to create four priming conditions for the real words (YES responses) and two priming conditions for the nonwords (NO responses).
The first four conditions contained the same 15 real word monosyllabic targets preceded by either a related (semantically associated) word, e.g., "cat"-"dog" (henceforth "0" distortion); a nonword differing from the semantically related prime by one distinctive feature (either voicing or place) in the initial phoneme position, e.g., "gat"-"dog" (henceforth "1" distortion); a nonword differing from the semantically related prime by more than one distinctive feature in the initial phoneme position, e.g., "wat"-"dog" (henceforth "2" distortion); or a real word that was semantically unrelated to the target word, e.g., "table"-"dog" (henceforth "unrelated baseline"). The last two types of pairs (totaling 45) consisted of nonword targets preceded by either a real word prime (15 in number), e.g., "cat"-"jand," or a nonword prime (30 in number), e.g., "wat"-"naib." These latter two conditions served as lexical decision foils and were not subjected to statistical analysis. The nonwords were all pronounceable English sequences differing from real words by one phoneme, e.g., flower-flowem, circle-pircle. Table 2 contains a list of the test stimuli.

TABLE 2
STIMULI

                     Distance from prime (in features)
Target      Zero        One         Two^a       Unrelated
Apple       Fruit       Vruit       Cruit       Canary
Bird        Canary      Tanary      Sanary      Dance
Bread       Butter      Dutter      Yutter      Day
Chair       Table       Pable       Rable       Boy
Dog         Cat         Gat         Wat         Table
Fish        Trout       Prout       Frout       Flower
Girl        Boy         Poy         Loy         War
Hammer      Saw         Zaw         Gaw         Love
Hate        Love        Rove        Jove        Tiger
Lion        Tiger       Kiger       Miger       Gas
Night       Day         Tay         Shay        Fruit
Oil         Gas         Das         Vas         Bread
Rose        Flower      Slower      Blower      Saw
Waltz       Dance       Tance       Mance       Cat
War         Peace       Beace       Yeace       Trout

^a Summed over several phonetic feature dimensions (see text).
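The one- and two-feature distortions in Table 2 can be made concrete with a small distinctive-feature distance function. The sketch below is illustrative only: the feature assignments (voicing, place, manner) are simplified assumptions for three consonants, not the authors' exact feature system.

```python
# Sketch: distinctive-feature distance between initial consonants.
# The feature values below are illustrative assumptions, not the authors'
# feature matrix; a real analysis would use a full phonological feature set.
FEATURES = {
    # phoneme: (voicing, place, manner)
    "k": ("voiceless", "velar",  "stop"),   # initial consonant of "cat"
    "g": ("voiced",    "velar",  "stop"),   # initial consonant of "gat"
    "w": ("voiced",    "labial", "glide"),  # initial consonant of "wat"
}

def feature_distance(c1, c2):
    """Count the feature dimensions on which two consonants differ."""
    return sum(a != b for a, b in zip(FEATURES[c1], FEATURES[c2]))

# "cat" -> "gat": a single-feature change (voicing only)
assert feature_distance("k", "g") == 1
# "cat" -> "wat": a change summed over several feature dimensions
assert feature_distance("k", "w") > 1
```

Under this toy scheme, "gat" sits one feature from "cat" while "wat" differs on every dimension, mirroring the "1" and "2" distortion conditions.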

In summary, the test stimuli consisted of 105 trials: 15 real word targets were preceded by 15 semantically related words, 15 nonwords derived by changing one distinctive feature in the initial phoneme position of the semantically related word, 15 nonwords derived by changing more than one distinctive feature in the initial phoneme position of the semantically related words, and 15 semantically unrelated real words, as well as 45 nonword targets, 15 of which were preceded by real word primes and the remaining 30 of which were preceded by nonword primes. The stimulus pairs were presented in four blocks, each of which contained the 15 real word targets (YES responses) and 11 different nonword targets (NO responses). On the basis of a previous pilot study, the order of presentation of stimulus pairs was shown to have an effect on response time. In order to control for order effects, two different test tapes were constructed in which the order of the stimulus presentation was reversed. Within each test tape, stimulus pairs were pseudorandomized to maximize the distance between repeated targets. In addition, all possible orders of types of pairs were represented across the two tapes. That is, the phonologically derived primes were ordered such that half of them appeared before their related real word prime and half of them appeared after their related real word prime. Analysis of normal subjects' performance indicated that there were no significant tape by condition differences or interactions (Milberg et al., 1987a). There was a 1-sec interval of silence between each stimulus pair and a 0.5-sec interval of silence between the members of each pair. Following the test items, subjects heard the same items from the test randomly presented one at a time, with a 6-sec interval of silence between each item.
Thirty of these items were real words (YES responses) and were presented to determine whether all of the test stimuli, and in particular, the phonologically derived nonword primes, were in fact perceived correctly as either words or nonwords. Subjects were provided with five practice trials before beginning the test. These trials contained pairs of unrelated items, none of which appeared in the testing session.
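The pseudorandomization constraint described above (maximizing the distance between repeated targets within a tape) can be sketched as a greedy reordering. This is a sketch only: the `min_gap` value and the greedy fallback are assumptions, and the original tapes were constructed by an unspecified procedure.

```python
import random

def pseudorandomize(trials, key=lambda t: t[1], min_gap=3, seed=0):
    """Greedy pseudorandomization: shuffle the trials, then emit them so
    that trials sharing the same target (key) are separated by at least
    min_gap intervening trials where possible (falling back when not)."""
    rng = random.Random(seed)
    pool = list(trials)
    rng.shuffle(pool)
    out = []
    while pool:
        # Prefer a trial whose target has not appeared in the last min_gap slots.
        recent = {key(t) for t in out[-min_gap:]}
        pick = next((t for t in pool if key(t) not in recent), pool[0])
        pool.remove(pick)
        out.append(pick)
    return out

# Hypothetical (prime, target) pairs drawn from the design:
trials = [("cat", "dog"), ("gat", "dog"), ("wat", "dog"), ("table", "dog"),
          ("fruit", "apple"), ("vruit", "apple")]
order = pseudorandomize(trials)
```

The output is always a permutation of the input; the spacing constraint is satisfied whenever the mix of targets allows it.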


Apparatus. A male speaker read several exemplars of each of the stimuli. His productions were tape-recorded and then digitized on a PDP 11/34 computer at the Brown University Phonetics Laboratory. The stimuli were sampled at 10 kHz using a 4.8-kHz filter setting with 10-bit quantization. The best exemplar of each stimulus was used for the creation of the test tapes. The test tapes consisted of the stimuli recorded on one channel and a 50-msec tone occurring simultaneously with the onset of the second, target stimulus. There was a 0.5-sec interstimulus interval and a 6-sec intertrial interval. The apparatus for the experiment consisted of a Sony TC 630-D stereo tape recorder, a Pioneer SA 500A stereo amplifier, a Lafayette Instruments voice-operated relay, a Gerbrands millisecond timer, two pairs of headphones, and a subject response board. The response board consisted of two keys marked "YES" (real word) and "NO" (nonword), respectively. The output of the tape recorder was amplified with the stereo amplifier. The output of the amplifier was split so that the channel containing the tone activated the voice-operated relay. With the onset of the target word in each trial, the tone activated the voice-operated relay, which in turn activated the millisecond timer. Subjects heard the stimuli from both channels through sealed headphones. The timer was stopped by pressing one of the two 2 x 4-in. keys marked YES and NO mounted on a moveable board.
The subject was trained to respond as quickly as possible and to disregard the first item in the pair as it was irrelevant to the task, using procedures described previously (Blumstein, Milberg, & Shrier, 1982). The procedure for the lexical decision task for the prime words alone was identical to the above.

RESULTS

All data were analyzed with two-way (Group x Priming Condition) repeated-measures analyses of variance and were based on the mean latencies of correct lexical decisions to real word targets. Differences between priming conditions within each subject group were analyzed using Fisher's Protected Least Significant Difference t test (Fisher's PLSD test) with a .05 significance level. In the current study the presence of semantic facilitation served as a baseline condition from which to explore the effects of phonetic distortion on lexical access. Thus, any subject who did not show semantic facilitation, as determined by a faster reaction time to a target word preceded by a semantically related prime word (cat-dog) compared to the semantically unrelated prime word condition (nurse-dog), was eliminated from the study. On this basis the results of three patients, a mixed anterior and two global aphasics, were not included in the subsequent analyses. Analyses are based on the scores of the patients described in Table 1.

The aphasics as a group made very few lexical decision errors when presented with real word targets. The mean numbers of errors for the whole sample in the 0, 1, 2, and unrelated baseline priming conditions were 1, 1.5, 2.1, and 2.0, respectively. There was no apparent relationship between these errors and patient group. These factors make a speed-accuracy trade-off or other error-related artifacts very unlikely, so the real word target errors were not subjected to further analysis.

FIG. 1. Mean lexical decision latencies as a function of phonological distortion for high- and low-comprehension patients.

Comprehension: High vs. Low

The results as a function of priming condition and comprehension level are seen in Fig. 1. Although the effect of priming condition was significant, F(3, 45) = 6.319, p < .001, there was no effect of comprehension, F(1, 15) = 3.731, p > .10, nor was there a significant comprehension by condition interaction, F(3, 45) < 1, p > .10. As a group, the aphasics appeared to show some facilitation in the 0, 1, and 2 distortion conditions relative to the unrelated word baseline condition, though to a lesser degree in the latter two conditions. Post hoc analysis using Fisher's PLSD test indicated that as a group the aphasics showed faster reaction times in the related word prime condition than in the unrelated word baseline. In addition, the 1 and 2 feature distortion conditions were each approximately 140 msec faster than the unrelated word baseline condition. These differences were also significant. Although the reaction times for the 1 and 2 feature distortion conditions were each approximately 100 msec slower than in the related prime condition, these differences were not statistically significant.

The results of the single word lexical decision task for the nonword primes appear in Table 2. When asked to make a lexical decision about the nonword prime words alone, all subjects made significantly more errors (i.e., nonwords were classified as words) in the 1 feature distortion condition than in the 2 feature distortion condition, F(1, 14) = 34.773, p < .001. However, there was no effect of comprehension ability, F(1, 14) = 3.11, p = .099, nor was there a comprehension by distortion interaction, F(1, 14) < 1.

FIG. 2. Mean lexical decision latencies as a function of phonological distortion for fluent and nonfluent patients.

Fluency: Fluent vs. Nonfluent

Overall, the effect of priming condition was significant, F(3, 45) = 6.719, p < .001. Although there was no effect of fluency, F(1, 15) = 1.587, p > .10, there was a significant fluency by condition interaction, F(3, 45) = 3.096, p < .05. Inspection of the data depicted in Fig. 2 suggests that the fluent aphasic group produced reaction times in the 0, 1, and 2 feature distortion conditions 246 msec faster than in the unrelated word baseline condition. The reaction times within these three conditions were within 20 msec of each other. Post hoc analysis with Fisher's PLSD test confirmed that the 0, 1, and 2 feature conditions each differed significantly from the unrelated baseline but did not differ significantly from each other. In contrast, inspection of the data for the nonfluent aphasic group suggests that the 0 distortion condition resulted in reaction times 200 msec faster than the mean of the 1 and 2 feature distortion conditions and the unrelated baseline. These latter three conditions showed reaction times within 60 msec of each other. Fisher's PLSD test confirmed that only the 0 distortion (related word) condition differed from the unrelated baseline. Further, the 1, 2, and unrelated baseline conditions did not differ significantly from each other. When asked to make a lexical decision about the nonword prime words alone, all subjects made significantly more errors (i.e., nonwords were classified as words) in the 1 feature distortion condition than in the 2 feature distortion condition, F(1, 14) = 41.00, p < .001. There was no effect of verbal fluency, F(1, 14) < 1, nor was there a fluency by distortion interaction, F(1, 14) = 3.19, p = .09.
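The post hoc comparisons above use Fisher's protected LSD, which (after a significant omnibus F) tests each pair of condition means against a single least significant difference. A minimal sketch follows, with hypothetical cell means echoing the fluent group's pattern; the `n`, `mse`, and `t_crit` values are invented for illustration, not taken from the study.

```python
import math
from itertools import combinations

def fisher_plsd(means, n, mse, t_crit):
    """Fisher's protected LSD: declare a pair of means different when
    |m_a - m_b| exceeds LSD = t_crit * sqrt(2 * MSE / n).
    t_crit is the two-tailed critical t at the ANOVA error df."""
    lsd = t_crit * math.sqrt(2 * mse / n)
    sig = {}
    for (a, ma), (b, mb) in combinations(means.items(), 2):
        sig[(a, b)] = abs(ma - mb) > lsd
    return lsd, sig

# Hypothetical condition means (msec) mimicking the fluent pattern:
# 0, 1, and 2 distortion all well below the unrelated baseline.
means = {"0": 1200.0, "1": 1210.0, "2": 1220.0, "unrelated": 1450.0}
lsd, sig = fisher_plsd(means, n=8, mse=40000.0, t_crit=2.014)
```

With these invented numbers the LSD is about 201 msec, so each distortion condition differs from the unrelated baseline while the distortion conditions do not differ from one another, which is the qualitative shape of the fluent group's result.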

FIG. 3. Mean lexical decision latencies as a function of phonological distortion for agrammatic and nonagrammatic patients.

Agrammatism: Agrammatic vs. Nonagrammatic

As noted earlier, for the purposes of analysis patients were classified as agrammatic if they showed agrammatic characteristics in spontaneous speech (see Table 1). Although these patients included one transcortical motor aphasic and three global aphasics who are not generally classified as agrammatic, it was felt that it would be of interest to determine whether there was a relationship between performance on the lexical decision task and any evidence of specific syntactic difficulties. This division of subjects produced groups almost identical to those formed with the fluency classification and hence produced very similar results. As in the case of the fluency analysis, the effect of priming condition was significant, F(3, 39) = 6.292, p < .001. There was no effect of agrammatism, F(1, 15) = 1.587, p > .10. However, the agrammatism by condition interaction was marginally significant, F(3, 39) = 2.536, p = .07, and the pattern of reaction times was similar to that obtained when the patients were divided as a function of fluency. Inspection of the data depicted in Fig. 3 suggests that for the nonagrammatic aphasic group the mean reaction time in the 0, 1, and 2 feature distortion conditions was 226 msec faster than in the unrelated word baseline condition. The reaction times within these three conditions were within 40 msec of each other. Again, Fisher's PLSD test confirmed the statistical significance of this pattern: the 0, 1, and 2 distortion priming conditions differed significantly from the unrelated baseline but did not differ significantly from each other. In contrast, inspection of the data for the agrammatic aphasic group suggests that the 0 distortion condition resulted in reaction times 170 msec faster than the mean of the 1 and 2 distortion conditions and the unrelated baseline, whereas the 1 and 2 distortion conditions and the unrelated baseline were within 20 msec of each other.
As would be expected, Fisher's PLSD test revealed that only the 0 distortion (related word) priming condition differed from the unrelated word baseline. The 1, 2, and unrelated baseline conditions did not differ from each other. Again, when asked to make a lexical decision about the nonword prime words alone, all subjects made significantly more errors (i.e., nonwords were classified as words) in the 1 feature distortion condition than in the 2 feature distortion condition, F(1, 14) = 38.83, p < .001. There was no effect of agrammatism, F(1, 14) < 1, nor was there an agrammatism by distortion interaction, F(1, 14) = 1.98, p > .10.

DISCUSSION

This study indicates that the pattern of lexical decision latencies resulting from the alteration of the phonological characteristics of related prime words is different for aphasic patients than was previously reported for normals. While normal subjects showed a phonetic feature-based sensitivity to phonological distance (Milberg et al., 1988), aphasic patients did not. Two distinct patterns emerged for the aphasic patients. Fluent aphasics (and possibly nonagrammatic patients) showed priming in all phonological distortion conditions relative to the unrelated baseline. Nonfluent (and possibly agrammatic) patients showed priming only in the undistorted related word condition relative to the unrelated word baseline. Because both fluent and nonfluent aphasics showed semantic priming relative to the unrelated word baseline, it can be concluded that the differences between these groups were due to differential sensitivity to phonological distortion.

The effect of the phonological manipulations in the lexical decision paradigm was quite different from the effect of these same manipulations in the second task, which required a lexical decision on the prime words presented alone. First, the patients made relatively few errors in perceiving the nonwords as nonwords. That their performance level was so good on this task suggests that the pattern of performance obtained in the priming experiment could not be attributed to a failure to perceive the nonword primes correctly. Moreover, all patients made more lexical decision errors in the second task for those nonwords which were distinguished from a real word by one phonetic feature compared to several phonetic features. Therefore, the different patterns of results obtained in the fluent and nonfluent patients could not be due to patterns of misperceptions of these nonword primes.
If this were the case, a feature effect should have emerged for both groups of patients, as was found with normals, since one-feature changes were more likely to be misperceived as real words than were words containing several feature changes. These results are consistent with other studies that have examined aphasics' phonological abilities (Basso et al., 1977; Blumstein et al., 1977a; Jauhianen & Nuutila, 1977; Miceli et al., 1978, 1980). In these studies, aphasic patients, irrespective of clinical diagnosis, are more likely to show discrimination failures for words distinguished by only one phonetic feature compared to words distinguished by several phonetic features.

Milberg et al. (1988) found that normal subjects showed a monotonic relationship between the degree to which a word was phonetically distinct from a potential real word and lexical access. Lexical access seems to occur in normals as long as there are some shared features between the initial consonant of the actual word and the phonetically manipulated words, although the phonetic manipulations slow this access. There are several classes of lexical access models that posit some form of input normalization from a "noisy environment" of the sort suggested by these results (e.g., McClelland & Elman, 1986). Although the exact means by which this normalization occurs is unclear, the results of the Milberg et al. (1988) study suggest that normal listeners can rapidly overcome the effect of distortion as long as some of the phonetic feature information contained in the first phonetic segment is preserved.

The pattern of results obtained for the fluent (and nonagrammatic) patients suggests that the initial consonant may be irrelevant for lexical access in these patients, and that rhyming (a shared vowel plus consonant) is sufficient to influence lexical access. These results imply that a fairly nonspecific access system is in operation in these patients (e.g., all words rhyming with "cat" are presumably accessed when the listener hears "cat"). Operating alone, a system based on such nonspecific lexical access would produce frequent lexical access errors. These errors could contribute importantly to, and perhaps form the basis of, some lexical deficits and, more generally, deficits in auditory comprehension. Moreover, assuming that word production obeys the same rules of lexical access, such a deficit could contribute to the occurrence of paraphasic errors.
(Shattuck-Hufnagel (1987) recently demonstrated that normal subjects seem to accord separate status to initial consonants in the production of slips of the tongue.) Therefore, both lexical and auditory comprehension deficits, as well as potential paraphasic errors, may occur as a result of a lexical access deficit without assuming deficits at the level of lexical or phonological organization. These patterns of deficits are consistent with the clinical symptomatology of fluent aphasics, and particularly Wernicke’s aphasics.

In contrast, the nonfluent and agrammatic aphasics showed semantic facilitation relative to the unrelated word baseline if and only if the subject was presented with a phonologically correct real word that was semantically related to the target. Such a lexical system will only access entries specified by the exact phonological content of the input. If the input is distorted in such a way that it resembles another word in the listener’s vocabulary, then presumably that word will be accessed (e.g., BAT for CAT); if the distortion leaves no entry corresponding to the stimulus (e.g., WAT for CAT), a lexical entry will not be directly accessed. The emerging context may eventually allow for the generation of hypotheses for the
ultimate access of a lexical entry. However, the consequences of such a system would be that lexical access, although often accurate, would be greatly slowed, would fail to activate as full an array of lexical entries as in normals, and would be expected to fail frequently without the aid of an informative semantic context. Such are some of the clinical characteristics of nonfluent aphasics. This account is also consistent with previous reports of inconsistent semantic priming in Broca’s aphasics (Milberg & Blumstein, 1981; Milberg, Blumstein, & Dworetzky, 1987b). Again, it is not necessary to posit a deficit at the level of lexical or phonological organization to account for at least some of these patients’ impairments.

The results obtained for the aphasic patients suggest at least two possible sources of impairment. One source may be an impairment in one of the stages or mechanisms postulated for normal lexical access. Some models of lexical access in normals have proposed a first-stage mechanism that is relatively insensitive to specific phonological information and places only general limitations on the number of lexical candidates chosen for further processing. For example, Forster (1978) reviews a lexical access model that uses “feature detectors” with fairly low thresholds as an early stage in the process of lexical access. Although this mechanism permits some limitation on the number of lexical candidates chosen for further processing, operating alone it would result in the access of a large number of related but incorrect lexical candidates. Forster argues that it is “necessary to postulate a further stage of processing which takes the set of detectors activated by a given item and evaluates them each in turn to see whether the correct detector has been activated” (p. 265).
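This two-stage division of labor can be illustrated with a deliberately simple sketch. The toy lexicon, the segment-matching rule, and the threshold value are assumptions made for illustration; only the architecture (loose candidate generation followed by exact verification) reflects the Forster-style model described above.

```python
# Sketch of a Forster-style two-stage lookup (an illustration, not Forster's
# actual implementation): stage 1 uses loose matching to gather candidates,
# stage 2 verifies each candidate against the full input.

LEXICON = ["cat", "bat", "rat", "cap", "can", "dog"]

def stage1_candidates(signal, min_shared=2):
    """Low-threshold 'detectors': any entry sharing enough segments fires."""
    return [w for w in LEXICON
            if sum(a == b for a, b in zip(signal, w)) >= min_shared]

def stage2_verify(signal, candidates):
    """Evaluate each activated detector in turn for an exact match."""
    return [w for w in candidates if w == signal]

print(stage1_candidates("cat"))                        # broad candidate set
print(stage2_verify("cat", stage1_candidates("cat")))  # only the exact match
print(stage1_candidates("gat"))                        # distortion still fires detectors
print(stage2_verify("gat", stage1_candidates("gat")))  # nothing verifies
```

On this sketch, a selective failure of stage 2 leaves the broad stage-1 candidate set unchecked (the fluent pattern), whereas a selective failure of stage 1 delivers no candidates for a distorted input (the nonfluent pattern).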
The second part of his model describes a lexical access mechanism that uses abstract representations of words to derive and test hypotheses about the final identity of the speech signal. Aphasic patients may display impairments in either of these two stages or mechanisms: the nonfluent aphasics in the first stage, which provides a fairly global, nonspecific set of alternatives for further analysis, and the fluent patients in the second stage, which uses abstract representations to derive and test hypotheses about the final identity of the speech signal. Nevertheless, it is worth noting that not all models of lexical access propose these two stages or mechanisms (McClelland & Elman, 1986).

Alternatively, the impairments displayed by the aphasic patients may reflect impairments in the processing mechanisms contributing to lexical access. In particular, there may be a change in the threshold of sensitivity for activation of the lexicon. The fluent aphasics could be characterized as having a decreased threshold of sensitivity for lexical access. Thus, they would show a lessened sensitivity to phonological distortion, subsequently accessing more words in the lexicon than normal. In contrast, the nonfluent aphasics could be characterized as having an increased threshold of sensitivity for lexical access. Thus, they would show an increased sensitivity to phonological distortion, subsequently accessing fewer words in the lexicon than normal.

What is clear from the results obtained in this experiment is that impairments in the use of phonological information to access the lexicon can manifest themselves in different ways in aphasic patients. In one case, too many lexical items seem to be accessed; in the other, too few. Most importantly, these impairments reflect difficulties in accessing the lexicon without necessarily affecting the organization or structural properties of the lexicon itself.

No matter what the relationship to normal language processing, the current results suggest that alterations in the ability to use phonological information to access word meaning are central to the understanding of aphasia. Although a case has been made to relate lexical access functions specifically to the clinical dimension of verbal fluency, there are other aphasic symptom dimensions, such as auditory comprehension and agrammatism, that must be explained. It should perhaps not be surprising that in the present study comprehension level was not related to semantic facilitation under phonological distortion. It has already been noted that the preponderance of previous evidence indicates that phonological deficits are not directly related to deficits in language comprehension (e.g., Blumstein et al., 1977a). It may also be, as Zurif and Caramazza (1976) and others have argued, that comprehension deficits can be determined by a number of different linguistic factors, each with independent structural and clinical correlates.
The high- and low-comprehension groups included both nonfluent and fluent aphasics: the high-comprehension group included Broca’s as well as anomic aphasics, and the low-comprehension group included global as well as Wernicke’s aphasics. It seems plausible that these groups would be heterogeneous with respect to underlying lexical and phonological abilities.

Agrammatism has been a critical clinical symptom in recent theoretical analyses of neural language representation (e.g., Kean, 1985). The current study provides some preliminary evidence that agrammatic patients may be distinguished from nonagrammatic patients by the nature of their lexical access. At this stage it is difficult to incorporate this finding into existing models of agrammatism. Nevertheless, a number of recent experiments have provided evidence of impairments in lexical access in nonfluent aphasics, particularly Broca’s aphasics (cf. Milberg & Blumstein, 1981; Milberg, Blumstein, & Dworetzky, 1987b; Swinney, Zurif, Rosenberg, & Nicol, unpublished manuscript). The current study needs to be replicated with patients with better-defined and more homogeneous agrammatic symptoms. However, if similar results are obtained, the role of the interaction of phonological and lexical information in agrammatism will have to be given serious consideration.
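The threshold-of-sensitivity account proposed above lends itself to a closing toy illustration. The lexicon, the similarity measure, and the three threshold values below are hypothetical choices made for this sketch; the point is only that raising or lowering a single activation threshold yields, respectively, too few or too many accessed entries for the same distorted input.

```python
# Toy threshold-of-sensitivity sketch (illustrative assumptions, not a model
# fitted to the data): lexical entries are activated in proportion to their
# similarity to the input and accessed when activation clears a threshold.

LEXICON = ["cat", "bat", "rat", "cap", "dog"]

def activation(signal, word):
    """Proportion of matching segments, a crude stand-in for phonetic overlap."""
    return sum(a == b for a, b in zip(signal, word)) / max(len(signal), len(word))

def accessed(signal, threshold):
    """Entries whose activation reaches the listener's access threshold."""
    return [w for w in LEXICON if activation(signal, w) >= threshold]

distorted = "gat"  # nonword one feature away from "cat"
print(accessed(distorted, threshold=0.9))  # raised threshold: nothing accessed
print(accessed(distorted, threshold=0.6))  # intermediate: close neighbors only
print(accessed(distorted, threshold=0.3))  # lowered threshold: many entries
```

The raised-threshold case mirrors the nonfluent pattern (accurate but frequently empty access), and the lowered-threshold case mirrors the fluent pattern (access of many related but incorrect candidates).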

REFERENCES
Baker, E., Blumstein, S. E., & Goodglass, H. 1981. Interaction between phonological and semantic factors in auditory comprehension. Neuropsychologia, 19, 1-15.
Basso, A., Casati, B., & Vignolo, L. A. 1977. Phonemic identification defects in aphasia. Cortex, 13, 84-95.
Blumstein, S. E., Baker, E. B., & Goodglass, H. 1977a. Phonological factors in auditory comprehension in aphasia. Neuropsychologia, 15, 19-30.
Blumstein, S. E., Cooper, W. E., Zurif, E., & Caramazza, A. 1977b. The perception and production of voice-onset time in aphasia. Neuropsychologia, 15, 371-383.
Forster, K. I. 1978. Accessing the mental lexicon. In R. J. Wales & E. C. T. Walker (Eds.), New approaches to language mechanisms. Amsterdam: North-Holland.
Goodglass, H., & Kaplan, E. 1972. The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger.
Jauhianen, T., & Nuutila, A. 1977. Auditory perception of speech and speech sounds in recent and recovered cases of aphasia. Brain and Language, 4, 572-579.
Kean, M. L. (Ed.). 1985. Agrammatism. New York: Academic Press.
Marslen-Wilson, W. D., & Welsh, A. 1978. Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29-63.
McClelland, J. L., & Elman, J. L. 1986. The TRACE model of speech perception. Cognition, 18, 1-86.
Miceli, G., Caltagirone, C., Gainotti, G., & Payer-Rigo, P. 1978. Discrimination of voice versus place contrasts in aphasia. Brain and Language, 6, 47-51.
Miceli, G., Gainotti, G., Caltagirone, C., & Masullo, C. 1980. Some aspects of phonological impairment in aphasia. Brain and Language, 11, 159-169.
Milberg, W., & Blumstein, S. E. 1981. Lexical decision and aphasia: Evidence for semantic processing. Brain and Language, 14, 371-385.
Milberg, W., Blumstein, S. E., & Dworetzky, B. 1987b. Processing of lexical ambiguities in aphasia. Brain and Language, 31, 138-150.
Milberg, W., Blumstein, S. E., & Dworetzky, B. 1988. Phonological factors in semantic facilitation: Evidence from an auditory lexical decision task. Bulletin of the Psychonomic Society, in press.
Shattuck-Hufnagel, S. 1987. The role of word-onset consonants in speech production planning: New evidence from speech error patterns. In E. Keller & M. Gopnik (Eds.), Motor and sensory processes of language. Hillsdale, NJ: Erlbaum.
Swinney, D., Zurif, E., Rosenberg, B., & Nicol, J. Lexical processing during sentence comprehension in agrammatic and Wernicke's aphasia. Unpublished manuscript.
Zurif, E. B., & Caramazza, A. 1976. Psycholinguistic structures in aphasia: Studies in syntax and semantics. In H. Whitaker & H. Whitaker (Eds.), Studies in neurolinguistics (Vol. 1). New York: Academic Press.