Researcher’s Resources
Handshape coding made easier: A theoretically based notation for phonological transcription
Petra Eccarius and Diane Brentari
Purdue University
This paper describes a notation system for the handshapes of sign languages that is theoretically motivated, grounded in empirical data, and economical in design. The system was constructed using the Prosodic Model of Sign Language Phonology. Handshapes from three lexical components — core, fingerspelling, and classifiers — were sampled from ten different sign languages, resulting in a system that is relatively comprehensive and cross-linguistic. The system uses only characters on a standard keyboard, which makes it compatible with any database program. The notation is relatively easy to learn and implement because the handshapes, along with their notations, are provided in convenient charts of photographs from which the notation can be copied, making the system quickly learnable even by inexperienced transcribers. Keywords: transcription, research tools, sign language phonology, handshape
Sign Language & Linguistics 11:1 (2008), 69–101. doi 10.1075/sl&l.11.1.11ecc. issn 1387–9316 / e-issn 1569–996x. © John Benjamins Publishing Company

1. Introduction

Methods for transcribing sign language data are needed at all levels of linguistic analysis. Currently, pictures and glosses are still the most common ways of representing signs, neither of which adequately captures their phonology at a featural level. Notation systems that capture this sort of linguistic information are needed not only in research to depict phonological commonalities and contrasts (thus enabling more efficient analyses), but also in academic publications so that other researchers can more fully understand (and continue to question) new advances in the field. Adequate phonological transcription is especially important for researchers because it is phonological information that expresses distinctions throughout the grammar of a language. Consequently, knowing which kinds of
properties to look for, the best way to transcribe them, and how to search for individual properties in the transcriptions, is crucial. Because it is in the data transcription that observable patterns emerge, transcribing too much, too little, or the incorrect properties can make the difference between formulating useful analyses and either drawing incorrect conclusions or finding no systematicity at all.

Notation systems in general (regardless of language modality) vary greatly depending on the requirements of the research project involved. In the case of sign language research, however, the problem of finding a way to transcribe data is doubly difficult — not only do the needs of specific projects vary, but the visual, more simultaneous nature of these languages is also difficult to capture by the usual “spoken language” means (i.e. mostly linear arrangements of single symbols based on familiar written alphabets). Furthermore, researchers are still determining which aspects of these languages are linguistically relevant to their phonology, morphology, syntax, etc. Thus, knowing what to transcribe in these languages becomes as difficult a task as knowing how to transcribe it.

In this paper, we describe a notation system developed for use in studying one phonological aspect of sign languages, namely handshape. This system was created to aid in the phonological analysis of handshape in a cross-linguistic research project studying classifier forms, but ultimately it was expanded to include handshapes found throughout the lexicons of ten sign languages — Hong Kong Sign Language, Japanese Sign Language, British Sign Language, Swedish Sign Language, Israeli Sign Language, Danish Sign Language, German Sign Language, Swiss German Sign Language, and American Sign Language. A detailed analysis of handshape was needed to aid us in identifying phonological and morphological patterns because this parameter can vary in its morphophonemic properties depending on context (e.g.
potentially morphological features in classifier handshapes can be purely phonological features in core lexical signs; see Eccarius 2008 and Brentari & Eccarius, in press). The required details included aspects like the number of finger groups in a handshape, the fingers involved in each group, their joint configurations, and the position of the thumb. To perform this kind of analysis, however, our data first had to be transcribed in a manner that carried this kind of detail in a consistent and easily searchable format. The system presented here was developed with these specific needs in mind and was based on the following goals:
– It should have sound theoretical grounding so that natural classes will be apparent.
– It should be searchable within commonly used database systems (i.e. it should contain no characters other than those on a standard keyboard).
– It should be economical (i.e. the representation should be as compact as possible while continuing to convey important linguistic information).
– It should be relatively easy to use, even for inexperienced signers or inexperienced transcribers, so that training is as fast as possible and the transcriptions are as reliable as possible.

In this work, we describe the main characteristics of the notation system ultimately developed for our project, explaining how it meets the goals stated above. We begin by providing some background about why our research project required a new system, followed by an explanation of the system’s theoretical grounding. We then describe the design of the system, showing how it facilitates detailed database searches without causing the transcriptions to become too unwieldy. Finally, we briefly discuss the practical application of our system with regard to use by transcribers in hopes that it will prove beneficial to other researchers with similar notational requirements.

2. Background

2.1 Terminology

To keep the aims of this paper clear, we maintain a distinction between the terms ‘notation system’, ‘transcription’ and ‘theoretical model’. Notation systems and transcriptions are both written abstract renderings of a linguistic signal linked to specific properties (or values) in production (van der Hulst & Channon, in press). In this paper, we use the former term to refer to the symbolic system itself, and the latter to refer to the end result of that system’s use (i.e. written representations of actual data). Theoretical models, on the other hand, are representations based on abstract principles of linguistic organization. For example, a feature geometry (e.g. Clements 1985) used to represent the hierarchical organization of phonological features of a spoken or signed language would be considered a theoretical model. Notation systems may or may not be based on theoretical models (i.e. they can represent visual or physiological characteristics of an utterance whether or not they have linguistic significance).
Our handshape notation system is linked to such a model, namely, the Prosodic Model of Sign Language Phonology (Brentari 1998). Debating the merits of various theoretical models for sign languages is not one of the aims of this paper, and consequently, we restrict our discussion of models to two points: (1) we have made the decision to base our notation system on a theoretical model, which we present as an advantage of the system; and (2) since our notation system is based on a particular phonological model, we describe it here only to the extent needed to understand the notation and how to use it. This issue will be taken up again in the next section.
2.2 Existing notation systems

Before describing the specifics of our notation system, we must first explain why we felt it necessary to develop a new one. Many notation systems exist for use with sign languages, but their representations of handshape were not adequate to address the particular research needs of our project. In this section, we will provide three examples of existing systems frequently used by sign language researchers and then briefly explain why they were not sufficient for our purposes.1

One of the oldest notation systems developed for a sign language is the one devised by Stokoe and used in the Dictionary of American Sign Language (Stokoe, Casterline & Croneberg 1965). At the time it was developed, this system (and the analysis it represented) was far beyond anything else available in terms of the linguistic complexity it could represent; it represented the handshape, movement, location and orientation of a sign instead of merely describing or depicting the sign as a whole as most others did. However, when compared to more recent research in sign language phonology, it is found to be incomplete in some important respects. For example, the 19 handshape symbols used in Stokoe’s system represent only a subset of more current versions of ASL’s contrastive handshape inventory. Also, because of its original purpose (distinguishing between minimal pairs of lexical items), it offers very little linguistic detail in its handshape symbols. Most symbols are named based on their visual similarity to ASL fingerspelling handshapes (e.g. ‘B’ for all handshapes visually resembling the fingerspelled letter w ) rather than on more linguistically descriptive categories like the number of selected fingers in the handshape (i.e. those that are active or important in a handshape; see Mandel 1981) or their joint specifications.
Furthermore, what few diacritics are used to represent variants of the “basic” symbols are not always consistently applied.2 Consequently, the system fails to capture many of the important relationships between the handshapes being examined by our project. Another attempt at transcribing (and writing) sign languages, SignWriting (e.g. Sutton 2002), has its own disadvantages for this sort of linguistic research. It is highly iconic, and, while it does represent in its symbols some (but not all) of the phonological features important to a study of handshape, the features themselves 1. Other notations may soon exist that might also be compatible with this type of research. For example, Liddell & Johnson’s notation system (1989) is currently in the process of evolving from earlier versions, (see Johnson & Liddell in prep). However, these revisions were not available at the time we were developing our system. 2. For example, three dots over a character can represent a spread version of an already bent (or ‘clawed’) base handshape (0 vs. ) ), a curved version of an extended, unspread base handshape (w vs. =) or a bent version of an extended and spread base handshape (Y vs. b ).
are not always depicted consistently and cannot be teased apart for use in searches. The resulting number of separate handshape symbols (110 are listed on their webpage) can also make it cumbersome for transcribers to learn and use. In addition, special fonts and keyboards are needed to use the system, making it incompatible with many searchable databases.

HamNoSys (Prillwitz et al. 1989) is another notation system used by researchers. This system, although developed more particularly for use in sign language research than SignWriting, shares many of the same disadvantages in terms of our needs. It is largely iconic and requires special fonts and keyboards, again making detailed searches more difficult. Also, while it is much more detailed and more consistent in its featural representations than SignWriting (its symbols sometimes contain more phonetic information than even our project requires), because it was not based on any particular theoretical model, it still misses some of the basic generalities between handshapes that we were looking for. For example, many theoretical models of sign language phonology distinguish between ‘selected’ and ‘non-selected’ fingers in handshapes, differentiating between those that are active or foregrounded in a handshape and those that remain (Mandel 1981). HamNoSys does not utilize this distinction in its representation of handshape, preferring a more atheoretical approach; therefore, handshapes with the same selected fingers (e.g. the beginning and ending handshapes in the ASL sign send, 6 and >) have unrelated base symbols.3 For these reasons, this system, like the others, did not meet the needs of our project.

In our survey of these and other notation systems, we were unable to find any for handshape which focused exclusively on that parameter in great enough detail to be a useful tool in our phonological or morphological analysis.
One reason for this lack of detail is that most notation systems currently available to the research community strive to represent the whole sign — the handshape representation is only a small part of the overall transcription. Because of this, they understandably make their handshape notation as compact as possible (usually one character representing the whole handshape) so that the entire transcription is more space efficient. Unfortunately, this efficiency can only be achieved at the expense of detail, and it is exactly that detail that we needed for our project. Therefore, our project needed something new.
3. See Takkinen (2005) for a more detailed discussion of these sorts of limitations.
3. Theoretical grounding

The first of our goals in the development of this notation system, theoretical grounding, was of paramount importance for the purposes of our project; we needed a notation that would convey very specific phonological information about each handshape in the data. To this end, we used the Prosodic Model of Sign Language Phonology (Brentari 1998) and expansions of that model’s handshape branch for cross-linguistic use by Eccarius (2002), as well as more recent research pertaining to handshape contrast (Eccarius 2008). The Prosodic Model (hereafter abbreviated as PM) represents handshapes (among other things) by means of a binary branching feature hierarchy combined with a set of distinctive features. (See Figure 1 in the next section for an illustration of this structure; the features of the model, their placement, and their definitions can be found in Brentari 1998.) This is an area of sign language phonological representation in which there is a reasonably wide degree of consensus — van der Hulst (1995), Sandler (1996), Channon (2002) and van der Kooij (2002) have very similar handshape representations — but the capabilities of the PM were best suited to our notation’s requirements.

Minimally, our notation needed to represent the number of separate finger groups present in a handshape, the digits that belonged to each group, and the joint specifications of each one. Unlike other models, the handshape portion of the expanded PM includes branches for three groups of fingers per handshape — primary selected fingers (PSF), secondary selected fingers (SSF), and nonselected fingers (NSF) — as well as combinations of branching structures and features to represent which digits and joint configurations are involved in each group.
The SSF group, not in the original PM, is a branch of structure added by Eccarius (2002) to account for the most complex handshapes found throughout the languages of the project, those in which the selected fingers must be divided into two groups because they assume separate joint postures (e.g. } from Hong Kong Sign Language).

3.1 Attested vs. possible forms

But why base the notation system on a theoretical model in the first place? First, for our project we wanted a notation system that would allow us to transcribe all attested forms in the languages in question. This is different from the set of all possible forms. Allowing the system to generate every physiologically possible form results in a list that is much larger than what actually occurs. For example, not all finger combinations are attested as selected fingers (e.g. index+ring), and not all joint configurations are attested in every finger group (e.g. [spread] and [stacked] are not found in the SSF). We wanted a notation system that (as much
as possible) included only handshapes that were known to occur, while still being flexible enough to expand easily if/when more are found.

3.2 Markedness and complexity

We also wanted a system that (at least in part) could reflect linguistic markedness through the complexity of its transcriptions. A phonologically based notation system has built into it factors such as ease of articulation, ease of perception, order of acquisition, and frequency of occurrence, while a purely phonetic notation system does not. As a result, the transcription of a complex, or ‘marked’, form with regard to the factors just mentioned will actually appear more complex than that of an ‘unmarked’ form if the notation system used to transcribe it is based on phonological principles. Conversely, transcriptions of marked and unmarked forms using a phonetic system would be roughly equivalent in terms of their complexity. Since there is now sufficient evidence across a wide number of sign languages in the Americas, Asia, and Europe to show that factors such as these exert common pressures on their respective handshape systems (e.g. Mandel 1981; Ann 2006; van der Kooij 2002; Greftegreff 1993; Eccarius 2008), relative markedness was important for our analysis. We needed a notation system that could reflect these differences.

Our notation system is phonologically based rather than phonetic, but each of the whole handshapes represented and transcribed in our charts (see Appendix) is not a phoneme in and of itself. These whole handshapes are not phonemic because handshape systems from many sign languages are represented here, and the concept of phonemic opposition holds for just one language at a time. Moreover, the term ‘phonemic’ has come to mean ‘lexically contrastive’ (i.e. it creates a minimal pair in the core lexicon).
While each handshape in our set has a meaningful contrast within at least one language — either in the lexicon, the fingerspelling system, or the classifier system — they are not necessarily contrastive in the core lexicons of all ten languages of the project.

3.3 Contrast

Here, we must clarify what we mean by ‘contrast’, since the ability to represent contrasts is another important requirement of the notation system. The notion of ‘contrast’ itself is in the process of evolving. Starting with the theory of Structuralism, only properties involved in minimal pairs were considered candidates for contrastiveness; this practice continued through the 1950s and 1960s and still persists today (Jakobson, Fant & Halle 1951; Jakobson & Halle 1956; Chomsky & Halle 1968). In the 1970s, theories of phonological representation began to change, and with that work, categories of features emerged besides phonemic (contrastive) and
phonetic (redundant). For example, with the advent of Autosegmental Phonology (Goldsmith 1976) and Feature Geometry (Clements 1985), some features (i.e. tone) were shown to have special abilities, and as a result, all features no longer had the same status or type of representation. In addition, the observation that features may be contrastive in one position in a word (e.g. word-initial position), while redundant in another (e.g. word-final position), became more important. To make sense of these new discoveries, Clements (2001) suggested that three conceptual distinctions be made among phonological features: distinctive, active and prominent.4

Anytime a feature is used to establish a minimal pair it is distinctive (i.e. when the feature distinguishes two unrelated meanings in the core lexicon). For instance, y and < in ASL are found in the minimal pair kiss-on-the-cheek vs. thick (consistency). In this case, we could call the feature for the contact between the thumb and fingers something akin to [closed] or [loop]. In contrast, anytime a feature is involved in a phonological operation (i.e. a rule or constraint), it is active. For example, the feature [stacked] operates in the phonological rule changing a ‘plain’ handshape Y to d in signs like see and verb (ASL) when the middle finger contacts the face (Eccarius 2008; Brentari & Eccarius, in press). Finally, a property is prominent if it participates in certain types of phonological operations, one of which is morphological status.5 Considering [stacked] again in ASL, this feature represents specific differences in meaning in related classifier forms (e.g. in the body part classifiers for ‘legs stand’ Y vs. ‘legs leap’ d). Moreover, the concepts of distinctive, active and prominent can hold for an entire language, or only for a particular part of the lexicon. This is particularly true for languages with multiple origins, as is the case with many sign languages.
These languages have a foreign component of the lexicon based on the manual alphabet and/or written characters, a core vocabulary, and a spatial lexicon containing spatial verbs and classifier forms (Brentari & Padden 2001). Let us return again to the feature [stacked] as an example. This feature is distinctive in the manual alphabet of ASL (‘V’ Y vs. ‘K’ d), prominent in classifier forms (‘legs stand’ Y vs. ‘legs leap’ d), and active in the phonological rules of the whole lexicon, (Y becomes d in see (core) and verb (foreign), and Z becomes stacked in the classifier form ‘car on its side’ (spatial) due to its orientation). In our project we wanted to include data from all parts of the lexicon utilizing all different kinds of contrastive relationships. If a feature is distinctive, active, or prominent in any part of the lexicon, it is a part of the PM. Consequently, handshapes utilizing all of these 4. These concepts were used for sign language even before spoken language (Brentari 1998) though not labeled as such. 5. Prominent status is granted to a feature if it meets the criteria to be an autosegmental tier, established in Goldsmith (1976).
contrast types can be represented by our notation system and are included in our appendix charts.

3.4 Empirical basis

While our notation system, like the major research questions of our project, was informed by an existing phonological theory (a ‘top-down’ approach to issues of contrast), we depend upon actual data (a ‘bottom-up’ approach) to test the system. After all, a theoretical approach without empirical grounding is worth very little. The data that helped serve as a basis for this notation came from the ten languages in the cross-linguistic classifier project mentioned in the introduction. Handshapes were taken from the three parts of each sign language lexicon following Brentari & Padden (2001) — the core lexicon (from dictionary vocabulary), the foreign lexicon (from the manual alphabet or forms based on Chinese characters), and the spatial lexicon (from classifier predicates).

First, for the core and foreign parts of the lexicon, a native signer of each language articulated handshapes from the standard dictionary and was photographed by Brentari while in each country. Native signers were then interviewed about the use of each handshape, and a list was made indicating in which of the three components of the lexicon the handshape was used. Our initial handshape chart was created from these handshapes. We then examined elicited classifier data for each language, specifically, data from a picture description task developed by Zwitserlood (2003). This was extremely important because, while most of the dictionaries for these sign languages were very good (it was one of the criteria for inclusion in the project), very few researchers had elaborated on the foreign or core handshapes by looking carefully at the classifier system. We then used this expanded set of data to test our theoretical assumptions.
When additional handshapes were found that appeared to be contrastive in some way, we added them to our original handshape charts and made alterations to the notation system as necessary.

This section has described the theoretical and empirical bases for our notation system. To maximize data coverage, we used as wide a range of handshapes as possible from the ten sign languages of our study. An expanded notion of ‘contrast’ was also employed, and a variety of sampling methods used so that we would have access to all three lexical components of each language. At the same time, we did our best to ensure that the system did not over-generate, and that it represented the relative markedness of handshapes as much as possible. In the next section we describe the design of the system itself in more detail.
4. Searchable and economical design

4.1 Searchable characters

The second of our goals for the system, choosing the symbols that would represent each theoretical feature/feature group, required a little more creativity. First, each needed to be represented by characters available on a standard keyboard. Meeting this requirement ensured that the system could be used in the text fields of almost any database program without additional fonts or scripts, unlike more iconic notation systems such as SignWriting and HamNoSys. Second, the linguistic aspects of each handshape needed to be independently represented in the notation to facilitate searches for specific phonological feature bundles. In other words, each of the three groups of fingers (PSF, SSF, and NSF) and their joint specifications needed to be represented by separate characters. Figure 1 shows the cross-linguistic version of the PM tree structure for handshape (Eccarius 2002) as well as the relationships
[Figure 1 depicts the Prosodic Model's HAND tree: a NONSELECTED FINGERS node (features [extended], [flexed]); a SELECTED FINGERS node dominating THUMB ([opposed], [unopposed]), PRIMARY SELECTED FINGERS, and SECONDARY SELECTED FINGERS ([loop], [flexed]); JOINTS nodes with base/nonbase positions and the features [flexed], [spread], [stacked], [crossed]; and FINGERS nodes with QUANTITY ([all], [one]) and POINT OF REFERENCE ([mid], [ulnar]). Ovals mark the areas represented by the notation's base symbols; rectangles mark those represented by its joint symbols.]
Figure 1. The Prosodic Model’s Hand branch and its relationship to the notation.
between parts of the model and various aspects of the notation system.6 Twenty-six characters were chosen from the standard US keyboard and mapped onto the possible finger combinations and joint configurations predicted by PM or based on contrasts found in subsequent work (Eccarius 2008). These characters fall into two groups: (1) base symbols (which represent the areas of the tree surrounded by ovals), and (2) joint symbols (which represent the areas surrounded by rectangles). The divisions between the finger groups themselves are represented by the organization of these characters as described in Section 4.2.

4.1.1 Base symbols

The base symbols of this system indicate which digits are included in particular finger groups. They do this by representing the Fingers node (a combination of the Quantity and Point of Reference branches, used to indicate the number and location of fingers) and the Thumb node of both the PSF and SSF structures in the model. (NSF groups do not require base symbols since the group is comprised of all fingers not in the other groups.) Table 1 lists all 13 base symbols, the digits included in the combination they represent, and the theoretical features used in PM to indicate those combinations.7

Table 1. Base symbols with the specific selected fingers and PM features they represent.
Base symbol | Selected fingers | Quantity (PM) | Point of reference (PM)
B | IMRP | [all] |
M | IMR | [all]/[one] | [ulnar]
D | MRP | [all]/[one] |
U | IM | [one]/[all] |
H | IP | [one]/[all] | [ulnar]
A | MR | [one]/[all] | [mid]
P | MP | [one]/[all] | [mid]/[ulnar]
2 | RP | [one]/[all] | [ulnar]/[mid]
1 | I | [one] |
8 | M | [one] | [mid]
7 | R | [one] | [ulnar]/[mid]
J | P | [one] | [ulnar]
T | T | thumb |
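Because the base symbols map one-to-one onto selected-finger combinations, Table 1 can be stored directly in a database-friendly form. The following sketch is our own illustration (not part of the published system); it encodes the mapping from Table 1 and shows the kind of feature-based query the notation is designed to support:

```python
# Base symbols from Table 1, mapped to their selected digits
# (I = index, M = middle, R = ring, P = pinky, T = thumb).
BASE_SYMBOLS = {
    "B": "IMRP", "M": "IMR", "D": "MRP",
    "U": "IM", "H": "IP", "A": "MR", "P": "MP", "2": "RP",
    "1": "I", "8": "M", "7": "R", "J": "P",
    "T": "T",  # thumb; used alongside the finger symbols
}

def symbols_selecting(digit):
    """All base symbols whose selected-finger combination includes `digit`."""
    return sorted(sym for sym, digits in BASE_SYMBOLS.items() if digit in digits)

# Every base symbol whose group selects the ring finger:
print(symbols_selecting("R"))  # -> ['2', '7', 'A', 'B', 'D', 'M']
```

Because each symbol is a single searchable keyboard character, queries like this can be run in the text fields of an ordinary database program, which is precisely the design goal described above.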
6. This section only presents a very basic description of the Prosodic Model’s capabilities. For more information about the model as a whole, see Brentari (1998), and for more information about the cross-linguistic expansion of the model’s Hand branch, see Eccarius (2002). 7. I=index, M=middle, R=ring, P=pinky, and T=thumb.
To help our transcribers more easily learn the system, the twelve symbols in Table 1 representing finger combinations were chosen primarily because of their relationship to ASL fingerspelling and number handshapes with the same combinations of fingers in their PSF group. In instances where a combination of fingers did not occur as selected fingers in an ASL handshape, mnemonics from ASL classifiers or from other languages were used when possible.8 The thumb base symbol (‘T’) was chosen for obvious reasons, and is set apart because it is used in conjunction with the other base symbols, whereas the symbols used to represent the finger combinations themselves are typically not used in combination with each other.9

4.1.2 Joint symbols

Joint symbols represent the joint specifications of the various finger combinations. The number of symbols possible for each finger group varies depending on the joint features available in PM’s tree structure; accordingly, there are ten joint symbols possible for use with the fingers in the PSF group, two possibilities for use with the SSF group, and two possibilities for the NSF group. The thumb, which can also use the aforementioned joint symbols, can additionally be accompanied by the ‘unopposed’ symbol. Table 2 lists all thirteen joint symbols (organized by finger group possibilities), as well as the joint configurations they signify and the theoretical features used by PM to represent those configurations.10 (See Brentari (1998) for definitions and illustrations of these joint configurations.)

As with the base symbols, the decision to use the specific characters for the joint symbols was largely mnemonic; in this case, the symbols were chosen because of their relative iconicity, i.e. they look as much like fingers in the particular joint configurations as the standard keyboard allows. The only exceptions are the ‘unopposed’ symbol (-), which we felt was a fairly obvious choice, and the ‘closed’ symbol for NSF (#).
The latter symbol, as well as its extended counterpart (/), was added because in our research, we often need to search the transcribed data for a joint value in a selected vs. nonselected finger group. Using different symbols for NSF joints facilitates these kinds of searches. 8. Mnemonics not from ASL fingerspelling include ‘H’ for ‘horns’, ‘A’ for ‘animal face’, and ‘P’ for a Hong Kong Sign Language variant of ‘airplane’. ‘2’ was chosen somewhat arbitrarily since the RP combination is very rare, hence no good mnemonic could be found. 9. Possible exceptions include IMP or IRP selected finger combinations, which occur very rarely cross-linguistically. If they do appear in a language, they can be represented by using combinations of the base symbols provided, (‘UJ’ and ‘12’ respectively). 10. The distinction between curved-open ‘narrow’ and ‘wide’ is an addition made to the notation based on subsequent research on joint contrasts (Eccarius 2008). Differences in theoretical structure are as yet unclear.
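The search rationale just described is easy to appreciate with a concrete sketch. In the fragment below (our own illustration; the transcription strings are invented and do not follow the system's actual ordering conventions, which are described in Section 4.2), reserving '#' and '/' for NSF joints lets a plain substring search isolate nonselected-finger values without ever matching the PSF or SSF joint symbols:

```python
# '/' marks extended NSF and '#' marks closed NSF; the PSF/SSF groups
# use other characters (e.g. 'o', '@'), so a search for '#' can only
# ever match a nonselected-finger joint value.
NSF_EXTENDED, NSF_CLOSED = "/", "#"

def closed_nsf(transcriptions):
    """Transcriptions whose nonselected fingers are marked closed."""
    return [t for t in transcriptions if NSF_CLOSED in t]

# Invented example strings, for illustration only:
data = ["Uo#", "B", "1/"]
print(closed_nsf(data))  # -> ['Uo#']
```

Had NSF joints reused the selected-finger symbols, the same query would have required first parsing each transcription into its finger groups; dedicating characters to NSF trades two extra symbols for much simpler database searches.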
Table 2. Joint symbols with the configurations and PM features for each finger group.
Joint configuration | PSF | SSF | NSF | PM features
extended | (empty) | (empty) | / | empty (PSF, SSF); [extended] (NSF)
curved-open (narrow) | c | | | nonbase + base
curved-open (wide) | ( | | | nonbase + base
curved-closed | o | o | | [flex] + nonbase + base (PSF); [loop] (SSF)
flat-open | < | | | base
flat-closed | > | | | [flex] + base
bent | [ | | | [flex] + nonbase
closed | @ | @ | # | [flex]
crossed | x | | | [cross]
stacked | k | | | [stack]
spread | ^ | | | [spread]
unopposed (thumb only) | - | | | [unopposed]
In addition to the symbols in the table, the absence of joint symbols can also carry meaning about the positioning of the joints. In the PSF and SSF groups, not using a joint symbol implies an extended joint configuration (indicated in Table 2 by the cells marked empty), and the absence of the ‘unopposed’ symbol next to the thumb’s base symbol implies an opposed thumb. In the NSF group, no joint symbol means that there are no digits included in that group. These meaningful omissions in the notation mimic PM’s omission of features and/or branches of the tree structure in certain cases (such as extended selected fingers) to indicate less marked forms. Additionally, this characteristic helps the notation fulfill its aforementioned economy requirement.

4.1.3 Status of the thumb

Before discussing how these characters are organized according to finger group, we must first say a few words about how we determine the finger-group membership of the thumb. Very little research currently exists regarding the phonological status of various thumb positions. In fact, thumb positions seem to vary so much from situation to situation that the topic is actively avoided by many researchers. Despite this high degree of variability, thumb positions are not completely unpredictable — some systematicity does exist (Brentari 1998).
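The interpretation of these meaningful omissions can be sketched as a small helper function. The function name and the string-based interface are ours, offered only as an illustration of the conventions just described; the flexion-symbol set follows Table 2.

```python
# Sketch of how the ABSENCE of a joint symbol is read, per finger group
# (helper names are ours; behavior follows the notation's conventions).
FLEXION_SYMBOLS = set("c(o<>[@")  # PSF/SSF degree-of-flexion symbols (Table 2)

def implied_joints(group: str, group_name: str) -> str:
    """Interpret a finger-group substring that may lack a flexion symbol."""
    if group_name in ("PSF", "SSF"):
        # No flexion symbol in a selected-finger group implies extension.
        if not any(ch in FLEXION_SYMBOLS for ch in group):
            return "extended (implied by omission)"
        return "overt joint symbol present"
    # NSF: an empty group string means no digits are nonselected at all.
    return "no digits in NSF" if group == "" else "overt joint symbol present"

print(implied_joints("1T", "PSF"))   # extended (implied by omission)
print(implied_joints("1T@", "PSF"))  # overt joint symbol present
print(implied_joints("", "NSF"))     # no digits in NSF
```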
Table 3. Finger group assignments for the thumb in various extended positions. [Table not reproduced here. Rows list the joint configurations of the fingers in the PSF group: open spread (^), open unspread, curved-open (c), curved-closed (o), bent spread (^[), bent unspread ([), flat-open spread (^<), flat-open unspread (<), flat-closed (>), and closed (@). Columns give the resulting group assignment of the thumb (PSF, SSF, or NSF) under four conditions: opposed thumb (T) spread from the palm or palm adjacent, and unopposed thumb (T-) spread from the radial side or radial side adjacent.]
As part of the cross-linguistic expansion of PM, Eccarius (2002) looked at the frequency of various thumb positions in the published handshape inventories of 12 sign languages. Although these conclusions still need to be tested with natural data, we used them as a starting point for the notation of thumb positions in our system. Based on these conclusions, as well as the original predictions of PM, we make two overarching assumptions in our system concerning the notation of the thumb:

1. If the thumb is a member of a selected finger group, it will behave like the other members of the group in terms of the basic joint configurations (i.e. the nonbase, or interphalangeal, joint of the thumb will approximate the nonbase joints of the selected fingers, the thumb will spread away from the hand if the fingers are spread, etc.).

2. In cases where the nonbase joint of the thumb is extended (and thus group membership is not immediately apparent), the thumb is assigned to a finger group according to the information in Table 3.11 (Shading indicates combinations that are unattested and/or not expected to occur. See Eccarius (2002) for further explanation of these group assignments.)

11. Examination of our data so far indicates that the distinction between the curved-open configurations ‘c’ and ‘(’ is only relevant when there is an opposed, selected thumb. If this turns out not to be the case, we would expect the thumb positions of ‘(’ to pattern with ‘c’.
4.2 Economical organization

The final task in the development of this notation system was to ensure that the resulting representations would be relatively compact without losing any important linguistic information. This task, although seemingly more trivial than the first two, is important for the human users of the system. If the string of characters is too long, it cannot be easily interpreted (or easily typed out) by a researcher or transcriber, and it therefore loses a great deal of its usefulness. In addition, the longer a notation string is, the greater the opportunity for error.

To fulfill this final requirement, we allowed the spatial organization of the symbols to convey meaning about finger group assignment. We also allowed the base and joint symbols to represent the same linguistic information regardless of selected finger group, thus reducing the need for additional symbols and more efficiently utilizing space. The organization of the system can be understood as having two tiers: first, there is the basic organization of characters within each finger group, and second, there is the organization of the finger groups themselves into the notation for the whole handshape.

The basic organization within each finger group is composed of five symbol slots, written in a particular order. This order is illustrated in Figure 2. At the beginning of the string, there is a slot for the base symbol indicating the finger combination involved in that group. Immediately following the finger base symbol is a slot for the thumb’s base symbol. The three remaining slots are reserved for various joint symbols. The first of these, adjacent to the symbol for the thumb, is available for the ‘unopposed’ symbol. The second joint slot houses the ‘spread’, ‘stacked’ and/or ‘crossed’ symbols, which receive their own slot because they are the only joint configurations that can occur in conjunction with other configurations (e.g. [bent] + [spread], ^[). The final slot in the string is reserved for the remaining joint symbols (i.e. those representing degrees of flexion). It is important to emphasize that not all of the five possible slots will be filled while
Figure 2. Basic organization of notation (order of base and joint symbols). Slot order, illustrated with B T - ^ [ : finger base symbol, thumb base symbol, ‘unopposed’ joint symbol, ‘spread’/‘stacked’/‘crossed’ joint symbol(s), remaining joint symbol.
Figure 3. Overall organization of notation (division into finger groups): 1T-^@ (Primary Selected Fingers) ; 1T-@ (Secondary Selected Fingers) ; # (Nonselected Fingers).
notating a given handshape, nor will they all be available for every finger group. When present, however, the symbols will occur in the order illustrated above.

Once the possible symbols for each finger group have been organized into strings, those strings must also be placed in a specific order. Not surprisingly, in the final arrangement the PSF group comes first, followed by the SSF group, and ending with the NSF group. These groups are divided by a semicolon (;) as illustrated in Figure 3. (For illustration’s sake, the symbols used in this example serve only as place-holders for specific types of symbols — e.g. ‘1’ for base symbols, ‘@’ for joints, etc.) In the figure, there appears to be a possibility of twelve symbols (including the semicolons) for any given handshape notation — five slots for the PSF group (all those discussed above), four slots for the SSF group ([spread], [stacked] and [crossed] have not been attested in SSF), and one slot for the NSF joint specification. However, an actual notation could never contain all twelve symbols, since there is overlap between the groups. For example, if the thumb base symbol appears in the PSF group, neither of the thumb symbols (base or joint) could be included in the SSF group (and vice versa), since a digit can only belong to one finger group at a time. If this overlap is taken into consideration, the maximum string length for any handshape notation is ten characters, although, in practice, the average notation is only four or five characters long. This relatively short string length, while not optimal for transcribing discourse, is quite manageable when a project focuses on handshape, once again aiding in the fulfillment of this system’s economy requirement.

4.3 Examples

To illustrate the symbols and organizational principles discussed in this section, we have provided some examples of handshapes and their notations in Figure 4.
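Before turning to the examples, the two organizational tiers just described can be sketched programmatically. The helper names, the regular expression, and the simplified base-symbol class below are ours, not part of the notation system; the input string is the schematic place-holder example from Figure 3, and the sketch assumes both semicolon delimiters are present (a full parser would need a policy for empty groups).

```python
import re

# Tier 2: groups appear in the fixed order PSF; SSF; NSF, divided by semicolons.
GROUP_ORDER = ("PSF", "SSF", "NSF")

def split_groups(notation):
    return dict(zip(GROUP_ORDER, notation.split(";")))

# Tier 1: five slots within one group, in order: finger base symbol, thumb base
# 'T', 'unopposed' '-', spread/stacked/crossed (^ k x), remaining joint symbol.
# The leading character class is a simplification of Table 1's twelve symbols.
GROUP_RE = re.compile(r"^([A-Z0-9])?(T)?(-)?([\^kx]*)([co(<>\[@#/]?)$")

def split_slots(group):
    m = GROUP_RE.match(group)
    if m is None:
        raise ValueError(f"not a well-formed group string: {group!r}")
    return tuple(g or "" for g in m.groups())  # empty string for unused slots

groups = split_groups("1T-^@;1T-@;#")
print(groups)                      # {'PSF': '1T-^@', 'SSF': '1T-@', 'NSF': '#'}
print(split_slots(groups["PSF"]))  # ('1', 'T', '-', '^', '@')
```

Note how the fixed slot order lets a single regular expression recover the structure: because each slot draws on a disjoint symbol set, no backtracking ambiguity arises even though every slot is optional.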
These examples were chosen because of their theoretical similarities across a number of parameters (digits involved, joint configurations, etc.) to demonstrate
Handshape coding made easier
1T@;#   1To;#   1To;/   1Tc;/   1T>;/
1T