Journal of Psycholinguistic Research, Vol. 22, No. 2, 1993
Processing a Dynamic Visual-Spatial Language: Psycholinguistic Studies of American Sign Language

Karen Emmorey
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge.
This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article. Address all correspondence to Karen Emmorey, Laboratory for Cognitive Neuroscience, The Salk Institute, 10010 North Torrey Pines Rd., La Jolla, California 92037.

Language modality may dramatically influence the nature of language processing. The different perceptual and production systems of signed and spoken languages may result in differing constraints on the nature of linguistic processing as well as on language structure. In this article,
I present an overview of some recent research on sign language processing, focusing on American Sign Language (ASL). ASL is a dynamic visual-spatial language which has developed outside the mainstream of spoken language. Unlike written English, which has been described as a "visual" language, ASL consists of dynamic and constantly changing forms rather than static symbols. Because ASL is produced with the hands and face and perceived by eye rather than by ear, it presents a challenge to models of language processing which have traditionally focused on vocal speech production, auditory perception, or reading. American Sign Language presents a natural opportunity to explore how the modality in which a language is expressed crucially affects the psychological mechanisms required to decode the linguistic signal.

Another issue is whether the nature of the processing system itself affects the nature of the grammar. How might the differing capacities of visual vs. auditory processing influence the form of the linguistic structures that a language exploits? Several researchers have argued that language modality does not affect the underlying form of language--signed and spoken languages encode the same sorts of grammatical properties and conform to the same universal principles (Klima & Bellugi, 1979; Lillo-Martin, 1991). However, the differing processing constraints for signed and spoken languages may force or favor particular linguistic encodings. For example, all signed languages (that we are aware of) use space for reference; in fact, deaf children spontaneously create idiosyncratic spatial reference systems when they are exposed only to Manually Coded English, an invented visual-gestural language which does not use referential space (S. Supalla, 1991). In contrast to processing constraints, the structural nature of the language itself may command certain processing requirements. For example, French has regular and clearly bounded syllables while English does not; Cutler, Mehler, Norris, and Segui (1986) have argued that French speakers, but not English speakers, exploit a syllable-based segmentation process during speech perception. The nature of the phonological system thus appears to determine the selection of alternative segmentation routines for language processing. An important avenue of research is to determine the nature of the interplay between linguistic structure, processing constraints, and grammatical function; the study of signed languages offers an ideal research tool with which to investigate these relationships.

In addition, American Sign Language provides a unique opportunity to examine the role of early language experience on adult linguistic processing. Although most congenitally deaf people use ASL as their primary language in adulthood, they vary widely as to when they were
first exposed to the language. The majority of deaf individuals are born to hearing parents who do not know any sign language, and therefore these signers generally had little or no exposure to language in infancy and early childhood. Previous research with these late learners by Elissa Newport and colleagues has focused on a possible critical period for the acquisition of linguistic competence and grammatical knowledge (Newport, 1990, 1991). In contrast, I have been investigating whether late exposure to a first language may have a selective and critical effect on the ability to understand and produce language in real time. One major aim of this research is to try to tease apart what aspects of the processing system must be acquired early in order to function efficiently in adulthood and what aspects are more robust and do not require early linguistic input.

Finally, our Salk research group is also investigating the interplay between general spatial cognition and the linguistic processing systems involved in knowing and using a visual-spatial language. These studies address the question of whether processing within one cognitive domain (language) exerts an influence on processing within a separate cognitive domain (spatial cognition). Our early results indicate that experience with American Sign Language affects certain aspects of mental imagery and face perception. These specific domains play a critical and unique role in processing ASL: Facial expression conveys required grammatical information, and ASL syntax and discourse make constant demands on forming and manipulating mental images. We hypothesize that the selective enhancement of certain nonlinguistic visual abilities is tied to the specific linguistic processing requirements of ASL.

In sum, I will discuss three major issues that should be considered in the study of language processing in American Sign Language: (1) the effect of language modality on linguistic processing and structure, (2) the effect of late acquisition on language processing, and (3) the interplay between language processing and spatial cognition. These three research topics arise naturally and inevitably when studying ASL, given its fundamental visual-spatial nature and the composition of the deaf signing population (10% native signers, 90% nonnative signers). Moreover, this research program forms a unique and many-pronged approach to studying the nature of human language and language processing.

MODALITY EFFECTS
Modality Effects on Lexical Recognition

Like spoken languages, ASL exhibits structure at the sublexical or phonological level. Figure 1 illustrates phonological distinctions conveyed by hand configuration, place of articulation, and movement. Figure 1a illustrates signs that differ only in handshape--ASL contrasts about 36 different handshapes (not all sign languages have the same handshape inventory). Figure 1b shows three signs that differ according to where they are made on the body, and Fig. 1c illustrates signs that differ only in their movement. ASL permits several different path movement types (e.g., circling, arc, straight), and signs may also contain "internal" movement such as wiggling of the fingers or changes in handshape. Current approaches to both signed and oral phonology make use of sequential, linear representations, as well as nonlinear or simultaneous phonological structure (Perlmutter, 1988; Sandler, 1989). However, signed and spoken languages differ in the capacity that each modality has for expressing simultaneous and sequential information. Signed languages appear to have a greater capacity for expressing information simultaneously, which may be an inherent property of the visual system; the auditory system, in contrast, appears to be particularly adept at distinguishing fast temporal distinctions. The differing capacities that sign and speech exhibit for expressing sequential and parallel information may be reflected by differences in the organizational properties of the lexicon as well as differences in lexical recognition processes. For example, models of auditory word recognition seem to be highly conditioned by the serial nature of speech. Most models of word recognition, such as the cohort model (Marslen-Wilson, 1987) or the TRACE model (McClelland & Elman, 1986), hypothesize that an acoustic representation of some sort is sequentially mapped onto lexical entries. These kinds of models strongly reflect the serial nature of speech perception. Since signed languages are less dependent upon serial linguistic distinctions, does this difference have an effect upon lexical access and sign recognition?

Emmorey and Corina (1990) used a gating technique to address this question (see also Grosjean, 1981). Isolated signs were presented repeatedly, and the length of each presentation was increased by one videoframe (33 msec). After each presentation, deaf signing subjects reported what they thought the sign was and how confident they were. This study revealed some important differences between sign and speech. Signs were isolated surprisingly rapidly (see Fig. 2). Although signs are much longer than words, only about 240 msec of a sign had to be seen before the sign was identified. This is significantly faster than word recognition for speech. Grosjean (1980) found that for English, approximately 330 msec had to be heard before the word could be isolated. There are at least two reasons why signs may be identified earlier
Fig. 1. (a)-(c) Illustration of part of the phonological system of ASL: (a) CANDY, APPLE, and JEALOUS, signs contrasting only in Hand Configuration; (b) SUMMER, UGLY, and DRY, signs contrasting only in Place of Articulation; (c) TAPE, CHAIR, and TRAIN, signs contrasting only in Movement. These signs minimally contrast different phonological parameters of ASL.
than spoken words. First, the nature of the visual signal for sign provides a large amount of lexical information very early, such that a great deal of phonological information is available simultaneously, compared to the serial nature of phonemes and syllables in speech. The availability of this information can dramatically narrow the set of lexical candidates for the incoming sign. Second, the phonotactics and morphotactics of a visual language such as ASL may be different from those of speech. Sign-initial cohorts seem to be much more limited by phonotactic structure. Unlike English, in which many initial strings have large cohorts (e.g., the initial strings [kan] or [maen] are each shared by 30 or more words), ASL has few signs which share an initial phonological shape. This phonotactic structure limits the size of the initial cohort in ASL. Thus, the more constrained phonotactics and the early and simultaneous availability of phonological information may conspire to produce numerically and proportionally faster identification times for ASL signs.

Fig. 2. ASL signs are identified earlier than English words in gating tasks. [The figure plots mean isolation points in msec from the lexical identification gating task.]
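As a rough illustration of this cohort logic, consider the following sketch (in Python), which shows how sequentially arriving segments shrink a candidate set, and why packing more distinctive information into the earliest portion of the signal yields earlier isolation points. The toy lexicons and segment labels below are invented for illustration; they are not real ASL or English inventories.

    # Sketch: cohort-style narrowing of lexical candidates over a toy lexicon.
    # Each entry is a tuple of "segments": phoneme-like units for the speech
    # lexicon; bundles of simultaneously visible parameters (handshape plus
    # location, then movement) for the sign lexicon.

    def isolation_point(target, lexicon):
        """Number of sequential segments needed before the target is the
        only candidate still consistent with the input seen so far."""
        candidates = set(lexicon)
        for i, segment in enumerate(target, start=1):
            candidates = {w for w in candidates
                          if len(w) >= i and w[i - 1] == segment}
            if candidates == {target}:
                return i
        return len(target)

    # Speech-like lexicon: several words share the initial string [kan].
    speech = [("k", "a", "n", "d"), ("k", "a", "n", "t"),
              ("k", "a", "n", "s"), ("m", "ae", "n")]
    # Sign-like lexicon: the first "segment" already bundles handshape and
    # location, so few entries share an initial phonological shape.
    sign = [("F+chin", "circle"), ("5+chest", "arc"), ("B+temple", "straight")]

    print(isolation_point(("k", "a", "n", "d"), speech))  # 4 segments needed
    print(isolation_point(("F+chin", "circle"), sign))    # 1 segment suffices

On this toy view, the earlier isolation of signs falls out of the two properties identified in the text: more information per early segment and sparser initial cohorts.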
Another effect of language modality appears to be the unique role of movement in lexical recognition for sign language. Emmorey and Corina (1990) found that the isolation of one particular phonological parameter, movement, led directly to lexical identification. The handshape, orientation, and location of a sign were identified nearly simultaneously after about 170 msec, but the time to identify phonological movement did not differ from the time to identify the sign itself. Such a direct correlation between lexical identification and a phonological element does not occur in English and may not occur in any spoken language. That is, there appears to be no phonological feature or segment, the identification of which leads directly to word recognition. Furthermore, movement appeared to play a special role in identifying morphologically complex signs. Subjects identified the base form of morphologically complex signs much earlier than they identified monomorphemic signs. This result is comparable to finding that English speakers identify the base verb in following earlier than they can identify the monomorphemic verb follow. Such a result seems highly unlikely--the phonological cues to the base verb are nearly the same for both the inflected and uninflected form. However, in ASL such a result was observed and is explained in terms of the role of movement in lexical recognition. The identification of phonological movement is required to identify a monomorphemic sign, but identification of the inflectional movement is not required to identify the base of an inflected sign. A base sign could be recognized without waiting for the inflectional movement to be resolved, but recognition of a monomorphemic sign required the subject to wait until the movement was identified. Thus, movement plays a different role in the identification of monomorphemic signs compared to the identification of base signs within an inflected form.2 This relation between lexical recognition and a morphophonemic element has no obvious parallel in spoken language.

In summary, language modality appears to affect the speed of lexical identification--the visual modality may have an advantage, at least for the early stages of lexical recognition. In addition, unlike spoken languages, the identification of a particular phonological element--movement--leads directly to lexical identification. However, I should make clear that our results overall suggest that there are more similarities than differences between signed and spoken word recognition. Despite the recognition time differences, both sign and word recognition are temporally dependent, and the same kind of sequential matching between mental representations and sensory (visual or acoustic) information appears to take place. Furthermore, the same kinds of lexical priming effects (phonological, morphological, and semantic) have been found for sign languages (Corina & Emmorey, 1993; Emmorey, 1991).

2 This is only the case when movement signals the inflectional information. ASL allows other nonmovemental morphological markings as well.
Modality Effects and the Use of Spatial Representations

The use of space is a unique resource afforded by the visual modality, and signed languages tend to exploit spatial mechanisms in their reference systems. In a separate set of studies, we have focused on the spatial properties of the ASL reference system. Pronominal reference in ASL involves the association of nominal elements with spatial loci. Nominals introduced into ASL discourse may be assigned an arbitrary locus in the plane of signing space, and a pronominal sign directed to that locus clearly refers back to the previously mentioned nominal. In addition, these spatial loci can be manipulated, shifted, rotated, or split (such that one referent is associated with two loci; van Hoek, 1989). Signers must be able to track these spatial loci and their associated nominals as they shift and rotate in signing space. Our research has been concerned with how signers understand and maintain the association between referents and their spatial loci during real-time processing.

Recently, the relationship between a referent and a locus in ASL has become the topic of debate within the linguistic community. Traditionally, this relationship has been treated as one of referential equality between the locus and the referent (Friedman, 1975; Klima & Bellugi, 1979; Lillo-Martin & Klima, 1990; Wilbur, 1987). On this view, a spatial locus can be said to represent or stand for a particular referent. Referential equality predicts a strong relationship between a referent and spatial locus because the referent and locus are not only associated but also referentially equal. In contrast, Liddell (1990) argued against referential equality, in favor of the concept of "location fixing." Location fixing is a function that fixes the location for a referent in space rather than establishing a specific point in space which is referentially equivalent to the referent. Liddell suggested that an appropriate analogy for "referential equality" is the relationship between the legal use of the term the borrower and Mr. Jones, in which both noun phrases have equal referential status. Referring to the borrower is the same as referring to Mr. Jones. In contrast, location fixing is analogous to the relationship between an actor and a physical mark on the stage upon which the actor stands. Under this analysis, the relationship between a referent and a spatial locus is a physical locative relationship. In this case, there is only an association between a referent and a location in space.

Emmorey, Norman, and O'Grady (1991) investigated the strength of the relationship between a referent and spatial locus (or location in Liddell's terminology) by examining interference effects in a probe
recognition task. In our task, deaf subjects viewed signed ASL sentences on videotape and judged as quickly as possible whether or not a probe sign had occurred in the sentence. Probe signs were spliced in before the end of the sentence and were articulated either at a locus that was congruent with the noun phrase in the test sentence or at an incongruent locus. Spatial location was indicated by articulating the probe sign at a specific location (rather than in neutral space) and with eye gaze and body shift toward that location (see Fig. 3). Each noun phrase in the test sentence was associated with a specific locus, and this association was also signaled by eye gaze, body shift, and articulation of the nominal at a particular location in space. Subjects were instructed to make their decisions based on lexical content and ignore the spatial location information (thus, the decision would be yes for both the spatially congruent and incongruent probes because the sign itself appeared in the sentence). Assuming that the association between the nominal and its spatial locus must be maintained in memory for future discourse reference, we predicted that it should take subjects longer to recognize a probe which had been marked for an incorrect spatial locus with respect to the test sentence. However, our results did not support this prediction--response times were very similar for probes articulated at congruent and incongruent spatial locations. There was little evidence of an interference effect for probes produced with incorrect spatial loci.

This result is more consistent with Liddell's (1990) location fixing hypothesis than with the referential equality hypothesis. Liddell's hypothesis proposes a much weaker relationship between spatial loci and their associated nominals compared to the referential equality hypothesis. Location fixing does not explicitly predict interference effects when there is a mismatch between the spatial location and referent because locations and referents are not equal in meaning. Spatial location plays a minor role under this analysis and does not carry the same referential meaning as that proposed under the referential equality hypothesis.

In a second experiment, I further investigated the nature of the association between a referent and its spatial locus by contrasting the meaningfulness of spatial loci. Spatial loci in ASL can serve at least two purposes: to represent arbitrary syntactic distinctions, as I have described, and to represent topographic relationships in real-world space itself. The sentences from the above study contained spatial loci that were established arbitrarily, and there was no inherent topographic relation between the loci.
Fig. 3. [Example test sentence and probe signs from the probe recognition task; the probe sign was articulated at a spatially congruent or incongruent locus, marked by eye gaze and body shift.]
In Fig. 3, the loci for the two nominals (BUG3 and BUTTERFLY) could be reversed without a corresponding change in meaning. Spatial loci in these sentences serve the syntactic function of establishing referential distinctions for both pronouns and verbs [agreement verbs in ASL move between these loci to indicate grammatical role (subject and object); see below]. However, the space within which signs are articulated can also be used to describe the layout of objects in space. In such mapping, spatial relations among signs correspond in a topographic manner to actual relations among the objects described. The linguistic conventions used in spatial mapping specify the position of objects in a highly geometric and nonarbitrary fashion by situating sign forms in space such that they maintain the topographic relations of the "world-space" being described. Furthermore, the use of space to convey syntactic relations and the use of space to convey topographic relations can be differentially affected by brain damage (Poizner, Klima, & Bellugi, 1987). Damage to the right hemisphere can impair expression and comprehension of topographic space, leaving intact the ability to use space for syntactic contrasts (Bellugi, Poizner, & Klima, 1990; Corina, Bellugi, Kritchevsky, O'Grady-Batch, & Norman, 1990; Corina, Kritchevsky, & Bellugi, 1992). Bellugi and colleagues have reported several deaf signers with right-hemisphere lesions who correctly associated nominals with arbitrary spatial loci and also correctly used agreement verbs which must be articulated with respect to these loci. In contrast, these same right-hemisphere-damaged signers were unable to use space to convey topographic relationships. For example, when asked to describe a room, a right-hemisphere-damaged signer might completely neglect the left half of signing space, piling all of the spatial locations for the furniture onto the right half of space. However, when spatial loci were used for syntactic distinctions, they might use all of signing space (including the left half of space). Such results suggest that topographic and syntactic (arbitrary) uses of space may have different mental representations and processing demands.

In this second study, I again used the probe recognition technique to investigate how topographic and arbitrary space might be processed in ASL (Emmorey, 1992). As in the previous experiment by Emmorey et al. (1991), probe signs were articulated either at a locus that was congruent with the noun phrase in the test sentence or at an incongruent locus. In contrast to the previous experiment, however, the probe was articulated simultaneously with an index that was directed toward the correct or incorrect locus (see Fig. 4). In the previous study, body shift and eye gaze alone were used to establish the association between a nominal and a locus. In this second study, the spatial incongruity of the probe sign was enhanced by adding overt indexation information to the probe. Subjects were told to decide whether the probe word itself was used in the sentence regardless of indexation. Figure 4 illustrates the sentence types that were contrasted in this study. In the top sentence, spatial loci represent a topographic mapping of a scene in which the spatial loci stand in specific relationships to each other, and these relationships are nonarbitrary. The second example illustrates a test sentence in which the loci are arbitrary and bear no inherent relation to each other. Great care was taken to ensure that these sentences could not be construed as having any kind of real-world spatial representation.

I predicted that the probe interference effect would be most severe for sentences using topographic space compared to arbitrary space because in topographic sentences a locus actually represents a spatial location and performs a semantic function. In the arbitrary space sentences, the locus does not play the same kind of semantic role but merely serves the syntactic function of establishing an association, "location fixing" in Liddell's terms. Because of the semantic role played by loci in the topographic sentences, we should observe much slower response times to probes which are incongruent with the loci in the test sentence. This prediction was confirmed. Incongruent probe loci produced much greater interference effects when the sentence used topographic space compared to arbitrary space. As in the previous experiment, the interference effect observed in sentences using arbitrary space was not significant. In contrast, when the test sentence used topographic space, response times to probes with incongruent spatial indexing were significantly slower. These results suggest that the mental representations for topographic and arbitrary spatial loci differ in a nontrivial way.

3 Words in capital letters represent English glosses for ASL signs. The gloss represents the meaning of the unmarked, unmodulated root form of a sign. A bracketed word following a sign gloss indicates that the sign is made with some regular change in form associated with a systematic change in meaning, and thus indicates grammatical morphology in ASL (e.g., SPILL[careless]). Multiword glosses connected by hyphens are used when more than one English word is required to translate a single sign (e.g., TAKE-CARE-OF). Subscripts are used to indicate spatial loci; nouns, pronouns, and agreeing verbs are marked with a subscript to indicate the loci at which they are signed (e.g., INDEXa). Subscripted numerals may also be used to indicate first, second, or third person (e.g., INDEX1st, CHILDREN3rd). Superscripts indicate the syntactic or discourse function of a particular word or clause [e.g., topic (t)], and the scope of the function is indicated by a raised line covering the word or phrase. Classifier forms are abbreviated CL, followed by the handshape of the classifier and a description of the meaning in quotes. Forms marked with (2h) are produced with two hands.
Fig. 4. Examples of topographic and arbitrary uses of space in ASL; these sentences and probe signs were presented in a probe recognition task. Topographic example (glosses include TABLE, MESS, BLUSH, NAIL-POLISH, SPILL[careless], PERFUME; probes: GONE, PERFUME). English translation: "My (vanity) table is a mess. The case for my blush which is on the right is broken. My nail polish on the left has spilled, and my perfume bottle in the center is empty." Syntactic (arbitrary) example (glosses include MOTHER-FATHER, HOUSE, ELECTRICITY, HIGH, WATER; probe: WATER). English translation: "My parents' house is very expensive--their electric bills are high, they use a lot of gas and go through a lot of water."
The findings suggest that there is a much stronger relationship between a spatial locus and its associated nominal when that locus represents a possible real-world spatial layout. Subjects were unable to suppress their recognition of a spatially incongruent probe in the topographic sentences, but in sentences in which the locus was arbitrary and played no particular semantic function, subjects could suppress recognition of an incongruent index and showed little interference effect.

These studies have laid the groundwork for future investigations of how signers represent and process linguistic structure that is encoded spatially. The spatial reference system of ASL is extremely complex, and these first experiments have examined only a very basic property of referential space--the association of loci with nominal referents. However, these loci can be reassigned by spatially shifting frames of reference, and tracking reference in ASL requires coordination and integration of several different linguistic subsystems that are spatially expressed. Furthermore, the ability to use space to directly encode spatial relationships linguistically is a unique property of signed languages, unavailable to spoken languages. We are currently investigating the consequences of the use of space to encode space for language processing and representation, as well as the effect this linguistic property might have on general spatial cognition (see below).
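The logic of these probe studies can be summarized with a simple computation: the interference effect is the mean response time to spatially incongruent probes minus the mean response time to congruent probes, calculated separately for topographic and arbitrary-space sentences. A minimal sketch follows; the response times are invented placeholders, not the data from these experiments.

    from statistics import mean

    # Hypothetical response times (msec) for a 2 x 2 design:
    # sentence space type (arbitrary vs. topographic) x probe congruency.
    rts = {
        ("arbitrary", "congruent"): [812, 798, 845],
        ("arbitrary", "incongruent"): [821, 806, 850],    # little slowdown
        ("topographic", "congruent"): [809, 791, 842],
        ("topographic", "incongruent"): [903, 882, 930],  # large slowdown
    }

    def interference(space_type):
        """Interference = mean incongruent RT minus mean congruent RT."""
        return (mean(rts[(space_type, "incongruent")])
                - mean(rts[(space_type, "congruent")]))

    for space_type in ("arbitrary", "topographic"):
        print(space_type, round(interference(space_type), 1), "msec")

A near-zero difference in the arbitrary condition and a large positive difference in the topographic condition corresponds to the pattern of results reported above.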
EFFECTS OF AGE OF ACQUISITION ON LANGUAGE PROCESSING

The study of adult deaf signers can provide insight into the role that early language experience plays in the establishment of language processing mechanisms. As noted earlier, the majority of deaf signers began acquiring ASL not from birth, but later in childhood. In our studies of sign language processing, we have compared native signers born to deaf signing parents with nonnative signers who had hearing parents and who learned ASL when they entered residential schools sometime between the ages of 4 and 16. In this section, I describe some of our recent results which suggest that late exposure to a primary language influences not only the grammatical competence achieved (as found by Newport and colleagues) but also aspects of linguistic processing.

In the sign gating study described earlier, Emmorey and Corina (1990) found that late signers were significantly delayed compared to native signers in identifying ASL signs (see Fig. 5). Late signers were also delayed in identifying the individual phonological parameters of a
sign (i.e., handshape, location, orientation, and movement). These results suggest that late signers may not be able to exploit visual information that is present early in the sign signal. Native signers were able to make use of early phonetic or phonological information to uniquely identify a target sign, whereas late signers needed to see more of the sign before they could identify it. However, there was no difference between the groups in the point at which they confidently recognized a sign. This result suggests that early language experience may be particularly critical to the initial access of the lexicon during sign recognition.

Fig. 5. Late signers took significantly longer than native signers to identify isolated signs in a gating task.

Rachel Mayberry has also found that phonological processing seems to be particularly vulnerable to the effects of late acquisition of language (Mayberry, 1992; Mayberry & Eichen, 1991; Mayberry & Fischer, 1989). For both sign shadowing and recall tasks, late signers made many more phonological substitutions compared to native signers, who tended to produce more semantically based errors. This different error pattern was not due to general memory or visual-perceptual differences between the two groups. Late signers appeared to devote much more attention to the phonological structure of signs, which interfered with their ability to quickly
access lexical meanings. Native signers, on the other hand, were able to automatically access and integrate lexical meaning, and their error patterns indicated they did not focus on decoding the phonology of signs. Note that, as in our studies, Mayberry controlled for practice effects, such that the only difference between native and late learners was the time at which they began language acquisition. These findings suggest that childhood language acquisition is critical for automatic and effortless phonological processing.

In addition, we have found that late exposure to language can also affect processing at higher lexical and syntactic levels. Emmorey, Bellugi, Friederici, and Horn (1992) used a sign monitoring task to investigate the on-line integration of morphological and syntactic information, focusing on verb agreement and aspect morphology. In this task, subjects watched videotaped signed sentences and monitored for a particular target sign, pushing a telegraph key when they detected the sign. The nature of the grammatical context that preceded the target sign was manipulated: the immediately preceding context was either correct or contained an error in ASL verb agreement or aspect morphology. We were interested in whether native and late signers might be differentially sensitive to these grammatical errors.

ASL agreement morphology operates within the spatially organized reference system of ASL. As described above, noun phrases introduced into ASL sentences may be associated with points in a plane of signing space. ASL agreeing verbs move between these abstract loci and bear obligatory markers for person and number. The verbs are called "agreeing" verbs because their spatial endpoints mark agreement with the spatial loci established for the subject and object of the sentence. Figure 6 provides an illustration of a sentence used in the sign monitoring experiment by Emmorey et al. (1992). At the beginning of each sentence, subjects were told which sign to monitor for, and in the test sentences the target sign either followed the correct verb or a verb marked with incorrect agreement. In the illustrated example, the target sign was FEED, which followed either a correctly inflected agreeing verb or the same verb with reversed agreement. In the correct condition, the preceding verb TAKE-CARE-OF requires subject agreement with first person and object agreement with third person. Note that the grammatical context is established with the use of a first person index immediately before the verb. In the incorrect verb, subject agreement is with third person and object agreement with first person.

Fig. 6. [Example test sentence from the sign monitoring experiment: the target sign FEED follows either a correctly inflected agreeing verb (TAKE-CARE-OF) or the same verb with reversed agreement.]
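To make the agreement facts concrete, the sketch below treats an agreeing verb as well formed only when its spatial start and end points match the loci established for its subject and object. The locus names, the "signer" locus for first person, and the sentence encoding are simplifications invented for illustration, not a claim about how ASL grammar is actually represented.

    from dataclasses import dataclass

    @dataclass
    class AgreeingVerb:
        gloss: str
        start_locus: str  # spatial endpoint marking subject agreement
        end_locus: str    # spatial endpoint marking object agreement

    # Loci established earlier in the discourse (e.g., by indexing).
    loci = {"1st": "signer", "CHILD": "right"}

    def agreement_ok(verb, subject, obj):
        """True if the verb's endpoints match its subject and object loci."""
        return (verb.start_locus == loci[subject]
                and verb.end_locus == loci[obj])

    correct = AgreeingVerb("TAKE-CARE-OF", "signer", "right")
    reversed_agr = AgreeingVerb("TAKE-CARE-OF", "right", "signer")

    print(agreement_ok(correct, "1st", "CHILD"))       # True
    print(agreement_ok(reversed_agr, "1st", "CHILD"))  # False: reversed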
In addition, we investigated the sensitivity of late signers to violations of ASL aspect morphology. Aspect morphology tends to express the temporal features of the verb, unlike agreement morphology, which identifies the participants in an action. Comrie (1976) suggested that grammatical aspect describes the "internal temporal constituency of a situation" and refers exclusively to the action or state described by the verb. In ASL, aspect and agreement morphology also tend to have different morphophonemic forms. Agreement morphology connects loci within a plane of space, whereas aspect morphology is expressed by superimposing dynamic movement contours on sign stems. Figure 7 illustrates an example test sentence from this experiment which contains an aspect violation. In this example, a context is set up which requires a continuous marking for the verb TEASE-EACH-OTHER; that is, this action occurred all night. The punctual inflection, which means roughly "gotcha" or to "tease once," is incorrect.

We predicted slower response times for targets which followed an error in either verb agreement or aspect. If subjects were sensitive to these errors, then encountering an error should slow recognition of the sign which followed. However, we hypothesized that native signers might be more sensitive to violations of verb agreement and aspect morphology compared to signers who acquired ASL later in childhood. In order to determine more specifically the age of critical language exposure, we refined the subject population to include both early signers who began to acquire ASL between ages 2 and 7 and truly late signers who acquired ASL between ages 10 and 20. In addition, in order to control for practice effects, subjects were matched as closely as possible for the number of years of experience with ASL.

The results of the study indicated that native signers, but not early or late signers, were sensitive to errors in spatial verb agreement (see Fig. 8A). For native signers, when a verb agreement error was encountered, it slowed recognition of the following sign--response times were longer in the error context than in the correct context. Recognition times may be slower because language processing mechanisms have more difficulty interpreting an incorrectly inflected verb. For both early and late signers, there was no evidence of disruption. There are at least two possible explanations for the failure of these nonnative signers to exhibit grammatical sensitivity. One possibility is that these signers simply did not recognize the error in verb agreement as an error. Another explanation is that they do not integrate verb agreement information into the sentence as efficiently as native signers. It may be that late signers' syntactic and semantic interpretation mechanisms are relatively slow compared to native signers'. If this is the case, then nonnative signers may recognize the verbal errors in an off-line grammaticality judgment
Fig. 7. [Example test sentence containing an aspect violation: a context requiring continuous aspectual marking on TEASE-EACH-OTHER is followed by the incorrect punctual inflection.]
complexity, not the iconicity, of verbs which determines the order and timing of their acquisition (Meier, 1981). Given these facts, will adult signers, who began the acquisition process late, be sensitive to the iconic properties of the visual-geometric classifiers? Or will late signers fail to process the internal morphological components of a classifier form and therefore be insensitive to the internal violation?

The results from the sign monitoring task indicated that both native and early signers (exposed to ASL at a mean age of 4 years) were sensitive to errors involving either categorizing or visual-geometric classifiers (see Fig. 9). For both groups, categorizing classifier errors appeared to create more disruption than visual-geometric classifier errors. In contrast, for late signers (exposed to ASL at a mean age of 13 years), we found no evidence of a disruption in processing for either classifier type. In addition, late signers were less able to consciously detect both types of classifier violations on the off-line grammaticality judgment task. Thus, despite the iconic properties of visual-gestural classifiers, late signers did not exploit this aspect of the morphological form during processing. This result suggests that, even for late signers, iconicity may not be exploited during either real-time or off-line processing.

Although late signers performed poorly on the grammaticality judgment task (see Fig. 9), their lack of sensitivity to grammatical errors on the sign monitoring task may not have been due solely to a lack of grammatical knowledge. When the data were reanalyzed to include only those items judged correctly on the grammaticality task, our results did not change. Thus, although late signers were consciously aware of the classifier errors for these items, they did not show evidence of such sensitivity when they were processing the sentence in real time.

Both the sign monitoring and the grammaticality judgment data indicate that errors involving visual-geometric classifiers were harder to detect than errors involving categorizing classifiers. This was true for all subject groups. Here again, we find that signers are not capitalizing on the iconicity of visual-geometric classifiers. Despite the fact that there was a clear visual mismatch between the visual-geometric classifier and the object classified (see Fig. 9), signers did not find these errors easier to detect. In fact, just the opposite was found--errors involving categorizing classifiers, which do not have a strong iconic value, were actually the easiest to identify. One possible explanation for the difference between these classifier types is that categorizing classifiers form a much more restricted set with almost no variation in the choice of classifier for a given noun. For example, only the airplane classifier can be used with
airplanes (see Fig. 9). Visual-geometric classifiers, on the other hand, form a much larger set with greater variability in choice of classifier. For example, in the sentence illustrated in Fig. 9, other visual-geometric classifiers could have been used for cereal boxes, depending upon the nature of the action and the orientation, shape, and size of the boxes. Because the mapping between classifier and noun is more straightforward for categorizing classifiers, signers may be more sensitive to violations of this mapping. In contrast, because several different visual-geometric classifiers may be appropriate for the same noun, signers may be less ready to reject a particular choice of classifier and may take longer to recognize a violation during processing (see the sketch below).

The pattern of results that is emerging from these studies suggests that the effects of late exposure to language may be selective and only affect certain aspects of linguistic processing. Late exposure to language appears to affect the ability to quickly interpret phonological information and to quickly integrate morphosyntactic information conveyed by agreement and classifier morphology within a sentence. In contrast, late acquisition of ASL appears to leave intact the ability to interpret temporal aspect morphology, as well as some aspects of referential processing (referent activation by overt and null pronouns is not affected by late acquisition; see Emmorey et al., 1991; Emmorey & Lillo-Martin, 1991).

Late acquisition of language appears to affect both the ultimate grammatical competence achieved in a language and the processing mechanisms required to efficiently and quickly interpret linguistic input. We found different effects of late exposure for linguistic knowledge compared to language processing using off-line and on-line tasks. That is, although late signers may have been overtly aware of grammatical errors, they did not show an on-line disruption when these errors were encountered during real-time language processing. We hypothesize that late signers may be slower and less efficient in integrating lexical and syntactic information. Furthermore, efficient processing is not simply a matter of practice or experience with language, since the subject groups did not differ in number of years of signing practice; rather, the groups differed only in when they were first exposed to ASL. Furthermore, at least in the study involving classifiers, signers who acquired ASL early in childhood performed better on both the on-line and off-line tasks, compared to signers who acquired ASL quite late in childhood (after age 10). These findings indicate that the particular maturational state of the brain during childhood is important for the establishment of language processing mechanisms.
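Returning to the classifier contrast above, the restricted-set explanation can be pictured as a difference in mapping cardinality: categorizing classifiers map (nearly) one-to-one onto nouns, whereas several visual-geometric classifiers can be licensed for the same noun, so rejecting a particular choice requires ruling out more alternatives. The inventories below are invented stand-ins, not a real ASL classifier inventory.

    # Invented classifier inventories for illustration only.
    categorizing = {
        "AIRPLANE": {"CL:ILY"},
        "CAR": {"CL:3"},
    }
    visual_geometric = {
        "CEREAL-BOX": {"CL:C", "CL:B"},  # choice varies with orientation,
        "PAPER": {"CL:B", "CL:L"},       # shape, and size of the object
    }

    def is_violation(noun, classifier, inventory):
        """A classifier is a violation if it is not licensed for the noun."""
        return classifier not in inventory[noun]

    # One-to-one mapping: any other classifier is immediately a violation.
    print(is_violation("AIRPLANE", "CL:3", categorizing))        # True
    # One-to-many mapping: a given choice is harder to reject outright.
    print(is_violation("CEREAL-BOX", "CL:B", visual_geometric))  # False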
THE INTERPLAY BETWEEN SPATIAL COGNITION AND VISUAL-SPATIAL LANGUAGE

Finally, language modality may affect not only certain aspects of linguistic structure and processing; it may also affect nonlinguistic processing. In general, signers are faced with the dual task of spatial perception, spatial memory, and spatial transformation, on the one hand, and processing grammatical structure on the other--in one and the same visual event. In several sets of studies, we have investigated the possible effects of experience with a visual-spatial language--or, alternatively, the possible effects of auditory deprivation from birth--on nonlanguage visual perception and imagery. To distinguish the effects of using ASL from the effects of being deaf from birth, we tested a group of subjects who were hearing and were born to deaf parents. These subjects learned ASL as their first language and have continued to use ASL in their daily lives. If these hearing native signers have visual-spatial skills that are similar to those found for the deaf signers, this would suggest that differences in spatial cognition arise from the use of a visual-spatial language. On the other hand, if these signers have skills similar to those found in hearing subjects, this would suggest that differences in spatial cognition may be due to auditory deprivation from birth.

In one set of studies, Ursula Bellugi and colleagues have investigated deaf and hearing subjects' ability to discriminate between different human faces. Interpretation of faces is a critical aspect of processing ASL because different facial expressions are used to convey linguistic contrasts. A significant portion of the syntactic structure of ASL is conveyed by facial expression. Specific facial expressions serve to signal relative clauses, the scope of wh questions, conditionals, and topicalization, as well as several adverbial forms (see Liddell, 1980; Lillo-Martin, 1991). Bellugi, L. O'Grady, Lillo-Martin, M. O'Grady, van Hoek, and Corina (1990) found that deaf signing children could discriminate faces under different conditions of spatial orientation and lighting better than hearing children. Bellugi et al. used the Benton Test of Facial Recognition, in which subjects must choose a target face from an array of six faces which have different orientations and shadowing. In addition, Bellugi and Emmorey (1993) and Bettger (1992) have found that the enhanced ability of deaf children to discriminate faces carries over into adulthood. On the same task, adult deaf signers performed significantly better than hearing nonsigners. Furthermore, Bettger has found that hearing subjects who have deaf parents and learned ASL as their first language also outperformed hearing nonsigners who had no knowledge of ASL. The fact that
both deaf and hearing signers discriminate faces better than nonsigners suggests that acquiring the ability to detect grammatical distinctions expressed on the face enhances other (nonlinguistic) aspects of face recognition. These results also indicate that some aspects of visual processing may subserve both linguistic and nonlinguistic functions.

In a second set of studies, Emmorey, Kosslyn, and Bellugi (1993) examined the relation between processing ASL and the use of visual mental imagery. Visual-spatial perception, memory, and mental transformations are prerequisites to grammatical processing in ASL, and also are central to visual mental imagery (Farah, 1988; Finke & Shepard, 1986; Kosslyn, 1980). We have studied three visual mental imagery abilities that we hypothesize are integral to ASL production and comprehension: image generation, maintenance, and transformation. These abilities also reflect the typical progression of processing when imagery is used in cognition: An image is first generated, and it must be maintained in short-term memory in order to be manipulated. If ASL recruits these abilities, and thus signers practice them frequently, then signers should be better at these aspects of imagery than nonsigners.

Image generation is the process whereby an image (i.e., a short-term visual memory representation) is created on the basis of information stored in long-term memory (see Kosslyn, Brunn, Cave, & Wallach, 1985). In ASL, image generation may be an important process underlying the use of classifier constructions. As described earlier, classifier verbs indicate the movement and location of objects in space and often require precise representation of visual-spatial relationships within a scene; such explicit linguistic encoding may necessitate the generation of detailed visual images. Moreover, Liddell (1990) has argued that a visual conception of a referent is generated for certain syntactic constructions utilizing agreement verbs. Thus, the ability to generate visual mental images of referents and spatial scenes may play a role in the production of ASL.

Second, we investigated the ability of deaf and hearing subjects to maintain mental images. As noted earlier, nominals are associated with loci in signing space, and this association must be remembered throughout the discourse. We hypothesized that the signer must maintain a visual-spatial representation of these loci during discourse production and comprehension. This linguistic requirement may heighten deaf signers' ability to maintain nonlinguistic mental images in short-term memory. Finally, once spatial loci have been established, there are syntactic and discourse rules that allow a signer to shift these loci to convey perspective shift or a change in location (see van Hoek, in press). Moreover, during sign perception the perceiver may mentally reverse these spatial arrays
in order to reflect the signer's perspective. We hypothesized that these linguistic requirements may enhance signers' ability to mentally rotate and/or reverse nonlinguistic visual images. In sum, we hypothesized that three imagery abilities (image generation, maintenance, and rotation) play crucial roles in sign language processing, and therefore ASL signers may be relatively adept at these abilities, even if they are recruited in tasks that have no relation to sign language.

The image generation task that Emmorey et al. (1993) used is illustrated at the top of Fig. 10. Subjects first memorized uppercase block letters and then were shown a series of grids (or sets of brackets) that contained an X mark. A lowercase letter preceded each grid, and subjects were asked to decide as quickly as possible whether the corresponding uppercase block letter would cover the X if it were in the grid. The crucial aspect of the experiment was that the probe mark appeared in the grid only 500 msec after the lowercase cue letter was presented. This was not enough time for the subjects to complete forming the letter image. Thus, response times reflected in part the time to generate the image. Kosslyn and colleagues have used this task to show that visual mental images are constructed serially from parts (e.g., Kosslyn, Cave, Provost, & Von Gierke, 1988; Roth & Kosslyn, 1988). Subjects tend to generate letter images segment by segment in the same order that the letter is drawn. Therefore, when the probe X is covered by a segment that is generated early (e.g., on the first stroke of the letter F), subjects have faster reaction times, compared to when the probe is located under a late-imaged segment. Crucially, this difference in response time based on probe location is not found when image generation is not involved, i.e., when both the probe X and letter (shaded gray) are physically present.

We hypothesized that ASL signers might be better at generating images than nonsigners because the production of certain constructions in ASL may require the formation of detailed mental images. As noted in the previous section, classifier constructions, in conjunction with the mapping function of space in ASL, convey very specific information about the location and movement of objects and persons in space. Unlike English, ASL requires spatial relations to be encoded linguistically and specified explicitly when describing the layout of a scene. For example, within the classifier system of ASL, it is impossible to sign The bed is on the right and the chair on the left without also specifying the orientation and location of the bed and chair as well as their relationship to each other. When a signer describes a scene, the language appears to require him or her to create a more detailed mental image than is required of an English speaker.
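The segment-by-segment generation finding just described amounts to a simple response time prediction: probes falling under early-imaged strokes should be verified sooner than probes under late-imaged strokes. The sketch below encodes a block letter F as an ordered list of strokes; the grid coordinates and timing constants are invented for illustration.

    # Block letter F on a small grid, as an ordered list of strokes,
    # each stroke a set of (col, row) cells, in the order the letter is drawn.
    F_SEGMENTS = [
        {(0, r) for r in range(5)},   # stroke 1: vertical bar (imaged first)
        {(c, 4) for c in range(4)},   # stroke 2: top bar
        {(c, 2) for c in range(3)},   # stroke 3: middle bar (imaged last)
    ]
    MSEC_PER_SEGMENT = 50  # assumed time to add one stroke to the image

    def predicted_rt(probe, segments, base_rt=400):
        """RT grows with how many strokes must be imaged before the probe
        cell is covered; uncovered probes require imaging the whole letter."""
        for i, stroke in enumerate(segments, start=1):
            if probe in stroke:
                return base_rt + i * MSEC_PER_SEGMENT
        return base_rt + len(segments) * MSEC_PER_SEGMENT

    print(predicted_rt((0, 0), F_SEGMENTS))  # early-imaged stroke: 450 msec
    print(predicted_rt((2, 2), F_SEGMENTS))  # late-imaged stroke: 550 msec

On the account in the text, signers' advantage shows up as a lower overall generation time, while the early-versus-late segment effect remains, since both groups image segments in the same order.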
Fig. 10. [Illustrations of the image generation task (block letters in grids with probe X marks), the image maintenance task, and the mental rotation task, together with results for deaf signers, hearing native signers, and hearing nonsigners.]
require him or her to create a more detailed mental image than is required of an English speaker. English does not demand the same kind of explicit spatial information to describe a similar scene; indeed, to be as explicit, several adjunct phrases must be added within each sentence. We found that both deaf and hearing signers formed images of complex letters significantly faster than nonsigners (see Fig. 10). This result suggests that experience with ASL can affect the ability to mentally generate visual images. Results from a perceptual baseline task indicated that this enhancement was due to a difference in image generation ability, rather than to differences in scanning or inspection--signers and nonsigners did not differ in their ability to evaluate probe marks when the shape was physically present. The signing and nonsigning subjects were equally accurate, which suggests that although signers create complex images faster than nonsigners, they generate equally good images. The fact that the hearing native signers performed like the deaf signers shows that the enhancement of image generation is not a consequence of auditory deprivation. Rather, it appears to be due to experience with a visual language. Furthermore, deaf and hearing subjects appeared to image letters in the same way: Both groups of subjects required more time and made more errors for probes located on late-imaged segments, and these effects were of comparable magnitude in the two groups. This finding indicates that neither group of subjects generated images of letters as complete wholes, and both groups imaged segments in the same order. Unlike our results with image generation, we found no difference between groups on a task of short-term maintenance of mental images. In this task, subjects were shown a nonverbalizable block pattern which they had to memorize. After either a short (500 msec) or long (2500 msec) delay, subjects were shown a blank grid (or set of brackets) containing a probe X. Subjects had to decide as quickly as possible whether the pattern (if it were present) would cover the X. In this task, subjects did not have to retrieve information from long-term memory or generate the image; they simply needed to retain an image of the pattern in visual short-term memory. Similarly, ASL signers may retain visual information about linguistic spatial loci in short-term memory. However, we found that deaf signers were not faster or more accurate on this task compared to hearing subjects. Experience with a visual language does not appear to enhance the ability to maintain a pattern in a visual image, at least in the task we examined. There are at least two reasons for this result. This task measured image maintenance over relatively short time spans, and in ASL discourse, spatial loci must be maintained over much longer intervals. Second, it is possible that signers did not show en-
Processing Dynamic Visual-Spatial Language
183
hancement for image maintenance because images of referential spatial loci may not be maintained as visual images but rather may be quickly transferred to a more abstract representation which is not located in visual short-term memory. Further research with ASL signers will help to determine the relationship between memory for nonlinguistic visual images and memory for linguistic structure that is visually and spatially coded. Finally, we examined the ability of deaf and hearing subjects to mentally rotate imaged objects. We used a mental rotation task similar to the one devised by Shepard and Metzler (1971). Subjects were shown two forms created by juxtaposing cubes to form angular shapes, and they were asked to decide whether the two shapes were the same or mirrorimages, regardless of orientation (see Fig. 10). We were interested in using the mirror-image judgment task because ASL makes use of reversal transformations, and we therefore hypothesized that deaf signers might be faster at making these judgments. For example, during sign comprehension, the perceiver (i.e., the addressee) must mentally r e v e r s e the spatial arrays created by the signer such that a spatial locus established on the right of the signer (and thus on the left of the addressee) is understood as on the right in the scene being described by the signer. The scene is normally understood from the signer's perspective, not the addressee's. This problem is not unlike that facing understanders of spoken languages who have to keep in mind the referent directions left and right with regard to the speaker. The crucial difference for ASL is that these direction are encoded spatially by the signer. The spatial loci used by the signer to depict a scene (e.g., describing the position of objects and people) must therefore be understood as the reverse of what the addressee actually o b s e r v e s during discourse (assuming a face-to-face interaction). Furthermore, in order to understand and process sign, the addressee must perceive the reverse of what they themselves would produce. Anecdotally, hearing subjects have great difficultly with this aspect of learning ASL; they do not easily transform a signer's articulations into the reversal that must be used to produce the signs. Given these linguistic processing requirements, we hypothesized that signers would be better than hearing subjects at making the required reversal judgment at all degrees of rotation, including the situation where no rotation was necessary. Our results supported this hypothesis are are illustrated at the bottom of Fig. 10. The data indicated that deaf signers were not faster at mental rotation per se because the slopes for angle of rotation did not differ between groups. Rather, the results suggest that both deaf and hearing signers were faster overall in detecting mirror reversals. Again, the fact
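This reversal can be expressed as a very simple spatial transformation. The sketch below is my own illustration, not an analysis from the study: if a locus is coded by its horizontal offset from the midline as seen by the viewer, then recovering the signer's intended layout from the addressee's viewpoint amounts to negating that offset.

```python
# Illustrative sketch (not from the study): mapping a spatial locus from
# the addressee's viewpoint back to the signer's intended layout in a
# face-to-face interaction. A locus is coded by its horizontal offset
# from the midline, from the viewer's own perspective:
# negative = viewer's left, positive = viewer's right.

def as_signer_intended(locus_seen_by_addressee: float) -> float:
    """A locus the addressee sees on the left was articulated on the
    signer's right, so understanding the scene from the signer's
    perspective mirrors the horizontal axis."""
    return -locus_seen_by_addressee

# The signer establishes a referent on her right (+1.0); the addressee
# sees it on his or her own left (-1.0) but must understand it as being
# on the right in the described scene.
print(as_signer_intended(-1.0))  # 1.0
```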
Given these linguistic processing requirements, we hypothesized that signers would be better than hearing subjects at making the required reversal judgment at all degrees of rotation, including when no rotation was necessary. Our results supported this hypothesis and are illustrated at the bottom of Fig. 10. The data indicated that deaf signers were not faster at mental rotation per se, because the slopes for angle of rotation did not differ between groups. Rather, the results suggest that both deaf and hearing signers were faster overall in detecting mirror reversals.
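The logic of this slope analysis can be shown with a toy example (the numbers are invented for illustration; they are not the study's data). Fitting response time as a linear function of rotation angle, the slope estimates the rate of mental rotation, while the intercept absorbs encoding, comparison, and the mirror/same decision itself; equal slopes with a lower intercept therefore correspond to being faster overall without rotating any faster.

```python
# Toy illustration (invented numbers, not the study's data) of the
# standard chronometric analysis: RT = intercept + slope * angle.
# Equal slopes with a lower intercept correspond to "not faster at
# mental rotation per se, but faster overall at the reversal judgment."

import numpy as np

angles = np.array([0, 45, 90, 135, 180])  # degrees of rotation

# Hypothetical mean response times (msec) at each rotation angle.
rt_signers = np.array([900, 1150, 1400, 1650, 1900])
rt_nonsigners = np.array([1100, 1350, 1600, 1850, 2100])

for label, rts in (("signers", rt_signers), ("nonsigners", rt_nonsigners)):
    slope, intercept = np.polyfit(angles, rts, 1)
    print(f"{label}: {slope:.2f} msec/deg, intercept {intercept:.0f} msec")

# Both groups rotate at the same rate (~5.6 msec/deg); the signers'
# intercept is 200 msec lower, i.e., they are faster at every angle.
```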
Again, the fact that the hearing signers performed similarly to the deaf signers suggests that this enhanced ability is due to experience with ASL rather than to auditory deprivation. In addition, our results indicated that native signers were more accurate than both early and late signers. Early and native signers did not differ in the number of years of signing practice, which suggests that the increased accuracy of the native signers was due to exposure to a visual-spatial language from birth. However, the hearing native signers performed more like the early signers, which indicates that it may be the combination of auditory deprivation and exposure to a signed language from birth that results in an enhanced ability to make accurate judgments of mirror reversal. We found no effects of age of acquisition on any of our other visual-spatial tasks (including the studies involving face discrimination; see Bettger, 1992).

To summarize, ASL signers are better than nonsigners in specific aspects of visual mental imagery and face perception. Both deaf children and adults show an enhanced ability to discriminate faces compared to hearing children and adults. Further, both deaf and hearing signers show an enhanced ability to generate visual mental images and to detect mirror reversals in a mental rotation task. In contrast, these groups did not differ in the ability to retain information in images for brief periods of time or to imagine objects rotating. We hypothesize that ASL signers' enhanced visual-spatial abilities may be tied to specific linguistic requirements (e.g., the use of grammatical facial expression, topographic uses of space, and perspective transformations). This research program is just beginning, and our ultimate goal is to develop a general model of the interplay between language and nonlanguage spatial cognitive processing. Our early studies have uncovered some important links and dissociations between linguistic visual-spatial processing and nonlanguage visual-spatial abilities.
FINAL COMMENTS

The aim of the Salk research program, and of the work I have presented here, is to use the study of ASL (and of other signed languages) as a tool for understanding the nature of human language and language processing, as well as the relationship between language and cognition. By studying languages such as ASL, which have evolved within a completely different biological medium, we can gain tremendous insight into the biological foundations of language. Such studies will help us to determine what aspects of language processing may be universal and thus
reflect the human capacity for language and what aspects are dependent upon language modality.
REFERENCES

Bellugi, U., & Emmorey, K. (1993). Enhanced spatial abilities in adult deaf signers.
Bellugi, U., O'Grady, L., Lillo-Martin, D., O'Grady, M., van Hoek, K., & Corina, D. (1990). Enhancement of spatial cognition in deaf children. In V. Volterra & C. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 278-298). New York: Springer-Verlag.
Bellugi, U., Poizner, H., & Klima, E. S. (1990). Mapping brain function for language: Evidence from sign language. In G. M. Edelman, W. E. Gall, & W. M. Cowan (Eds.), Signal and sense: Local and global order in perceptual maps (pp. 521-543). New York: Wiley-Liss.
Bettger, J. (1992). The effects of experience on spatial cognition: Deafness and knowledge of ASL. Unpublished doctoral dissertation, University of Illinois, Urbana-Champaign.
Bybee, J. (1985). Morphology. Amsterdam: John Benjamins.
Comrie, B. (1976). Aspect. Cambridge, England: Cambridge University Press.
Corina, D. P., Bellugi, U., Kritchevsky, M., O'Grady-Batch, L., & Norman, F. (1990). Spatial relations in signed versus spoken language: Clues to right parietal functions.
Corina, D. P., & Emmorey, K. (1993). Phonological and semantic priming in ASL.
Corina, D. P., Kritchevsky, M., & Bellugi, U. (1992). Linguistic permeability of unilateral neglect: Evidence from American Sign Language. In Proceedings of the 14th annual conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum.
Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable's role in the segmentation of French and English. Journal of Memory and Language, 25(4), 385-400.
Emmorey, K. (1991). Repetition priming with aspect and agreement morphology in American Sign Language. Journal of Psycholinguistic Research, 20(5), 365-388.
Emmorey, K. (1992). Processing topographic vs. arbitrary space in ASL. Paper presented at the International Conference on Theoretical Issues in Sign Language Research, San Diego, CA.
Emmorey, K., Bellugi, U., Friederici, A., & Horn, P. (1992). Effects of age of acquisition on grammatical sensitivity: Evidence from on-line and off-line tasks. Manuscript submitted for publication.
Emmorey, K., & Corina, D. (1990). Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills, 71, 1227-1252.
Emmorey, K., & Corina, D. (1992). Differential sensitivity to classifier morphology in ASL signers. Paper presented at the Linguistic Society of America, Chicago, IL.
Emmorey, K., Kosslyn, S. M., & Bellugi, U. (1993). Visual imagery and visual-spatial language: Enhanced imagery ability in deaf and hearing ASL signers. Cognition, 46, 139-181.
Emmorey, K., & Lillo-Martin, D. (1991). Processing spatial anaphora: Referent activation from overt and null pronouns in ASL. Paper presented at the CUNY Sentence Processing Conference, Rochester, NY.
Emmorey, K., Norman, F., & O'Grady, L. (1991). The activation of spatial antecedents from overt pronouns in American Sign Language. Language and Cognitive Processes, 6(3), 207-228.
Farah, M. (1988). Is visual imagery really visual? Overlooked evidence from neuropsychology. Psychological Review, 95(3), 307-317.
Finke, R. A., & Shepard, R. N. (1986). Visual functions of mental imagery. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance. New York: Wiley-Interscience.
Friedman, L. A. (1975). Space, time, and person reference in American Sign Language. Language, 51, 940-961.
Galvan, D. (1989). A sensitive period for the acquisition of complex morphology: Evidence from American Sign Language. Papers and Reports on Child Language Development, 28.
Grosjean, F. (1980). Spoken word recognition processes and the gating paradigm. Perception and Psychophysics, 28, 267-283.
Grosjean, F. (1981). Sign and word recognition: A first comparison. Sign Language Studies, 32, 195-219.
Klima, E., & Bellugi, U. (1979). The signs of language. Cambridge, MA: Harvard University Press.
Kosslyn, S. M. (1980). Image and mind. Cambridge, MA: Harvard University Press.
Kosslyn, S. M., Brunn, J. L., Cave, K. R., & Wallach, R. W. (1985). Individual differences in mental imagery ability: A computational analysis. Cognition, 18, 195-243.
Kosslyn, S., Cave, C., Provost, D., & Von Gierke, S. (1988). Sequential processes in image generation. Cognitive Psychology, 20, 319-343.
Liddell, S. (1980). American Sign Language syntax. The Hague: Mouton.
Liddell, S. (1990). Four functions of a locus: Re-examining the structure of space in ASL. In C. Lucas (Ed.), Sign language research: Theoretical issues (pp. 176-198). Washington, DC: Gallaudet College Press.
Lillo-Martin, D., & Klima, E. (1990). Pointing out differences: ASL pronouns in syntactic theory. In S. D. Fischer & P. Siple (Eds.), Theoretical issues in sign language research (Vol. 1, pp. 191-210). Chicago, IL: University of Chicago Press.
Lillo-Martin, D. (1991). Universal grammar and American Sign Language: Setting the null argument parameters. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Marslen-Wilson, W. (1987). Functional parallelism in spoken word recognition. Cognition, 25, 71-102.
Marslen-Wilson, W., & Tyler, L. K. (1981). Central processes in speech understanding. Philosophical Transactions of the Royal Society of London, B, 295, 317-332.
Mayberry, R. (1992). Mental phonology in sign language. Paper presented at Theoretical Issues in Sign Language Research, University of California, San Diego, La Jolla, CA.
Mayberry, R., & Eichen, E. (1991). The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition. Journal of Memory and Language, 30, 486-512.
Mayberry, R., & Fischer, S. (1989). Looking through phonological shape to sentence meaning: The bottleneck of non-native sign language processing. Memory and Cognition, 17, 740-754.
McClelland, J. L., & Elman, J. L. (1986). A distributed model of human learning and memory. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.
Meier, R. (1981). Icons and morphemes: Models of the acquisition of verb agreement in ASL. Papers and Reports on Child Language Development, 20, 92-99.
Newport, E. (1990). Maturational constraints on language learning. Cognitive Science, 14, 11-28.
Newport, E. (1991). Contrasting conceptions of the critical period for language. In S. Carey & R. Gelman (Eds.), The epigenesis of mind: Essays in biology and cognition. Hillsdale, NJ: Erlbaum.
Perlmutter, D. (1988). A moraic theory of ASL syllable structure. Paper presented at Theoretical Issues in Sign Language Research, Gallaudet University, Washington, DC.
Poizner, H., Klima, E. S., & Bellugi, U. (1987). What the hands reveal about the brain. Cambridge, MA: MIT Press.
Roth, J., & Kosslyn, S. M. (1988). Construction of the third dimension in mental imagery. Cognitive Psychology, 20, 344-361.
Sandler, W. (1989). Phonological representation of the sign: Linearity and nonlinearity in American Sign Language. Dordrecht, The Netherlands: Foris.
Shepard, R., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701-703.
Supalla, S. (1991). Manually Coded English: The modality question in signed language development. In P. Siple & S. D. Fischer (Eds.), Theoretical issues in sign language research (Vol. 2, pp. 85-109). Chicago, IL: University of Chicago Press.
Supalla, T. (1982). Structure and acquisition of verbs of motion and location in American Sign Language. Unpublished doctoral dissertation, University of California, San Diego.
Supalla, T. (1986). The classifier system in American Sign Language. In C. Craig (Ed.), Noun classification and categorization. Amsterdam: John Benjamins.
Supalla, T. (in press). Structure and acquisition of verbs of motion and location in American Sign Language. Cambridge, MA: MIT Press/Bradford Books.
van Hoek, K. (1989). Locus splitting in American Sign Language. In R. Carlson, S. DeLancey, S. Gildea, D. Payne, & A. Saxena (Eds.), Proceedings of the fourth meeting of the Pacific Linguistics Conference (pp. 239-255). Eugene: University of Oregon Press.
van Hoek, K. (in press). Conceptual locations for referents in ASL. In E. Sweetser & G. Fauconnier (Eds.), Spaces, worlds, and grammars.
Wilbur, R. (1987). American Sign Language: Linguistic and applied dimensions. Boston, MA: College-Hill Publications.