Augmentative and Alternative Communication systems for the motor disabled Alexandros Pino National and Kapodistrian University of Athens, Accessibility Unit, Greece
Abstract This chapter discusses Augmentative and Alternative Communication (AAC) for individuals with motor disabilities. Motor disabilities do not only affect movement, but very often also affect speech. In cases where the voice is very weak, speech is unintelligible, or motor problems in the human speech production system do not allow a person to speak, AAC is introduced. Aided and unaided communication is explained, and low and high tech AAC examples are illustrated. The ITHACA framework for building AAC applications is used as a paradigm in order to highlight the AAC software lifecycle. The same framework is also used to highlight AAC software design issues concerning component-based development, the open source model and the Design for All principles. Key features of an AAC application like virtual keyboards, scanning techniques, symbol dashboards, symbolic communication systems, message editors, symbol translation, word prediction, text to speech, and remote communication are presented. Finally, practical hints for choosing an AAC system are given, and a case study of its informal evaluation is presented.
INTRODUCTION For people with complex communication needs, those with motor and speech impairment, daily routine as well as rehabilitation and educational programs often include the use of Augmentative and Alternative Communication (AAC) aids (Beukelman & Mirenda, 2005). AAC is an umbrella term that encompasses the communication methods used to supplement or replace speech or writing for those with impairments in the production of comprehensible spoken or written language (Fossett & Mirenda, 2009). AAC is used in a wide range of speech and language impairments, including congenital impairments such as cerebral palsy (McNaughton, Light, & Arnold, 2002), intellectual impairment and autism (Shook & Coker, 2006; Mirenda, 2003), and acquired conditions such as amyotrophic lateral sclerosis (Doyle & Phillips, 2001) and Parkinson's disease (Beukelman & Garrett, 1988). AAC can be a permanent addition to a person's communication or a temporary aid. Modern use of AAC began in the 1950s with systems for those who had lost the ability to speak following surgical procedures (Lloyd, Fuller, & Arvidson, 1997). During the 1960s and 1970s, spurred by an increasing commitment towards the inclusion of disabled individuals in mainstream society and towards developing the skills required for independence, the use of manual sign language and then graphic symbol communication grew greatly (Koul, Corwin, & Hayes, 2005). It was not until the 1980s that AAC began to emerge as a field in its own right. Rapid progress in technology, including microcomputers and speech synthesis, has paved the way for communication devices with speech output and multiple access options for those with physical disabilities. AAC systems are diverse: unaided communication uses no equipment and includes signing and body language, while aided approaches use external tools and range from pictures and paper communication boards to speech generating devices (Light, 1988). The symbols used in AAC include gestures, photographs, pictures, line drawings, letters and words, which can be used alone or in combination. Body parts, adapted mice, or eye tracking can be used to select target symbols directly, and
switch access scanning is often used for indirect selection. Message generation is generally much slower than spoken communication, and as a result rate enhancement techniques may be used to reduce the number of selections required. These techniques include word prediction or completion, in which the user is offered guesses of the word or phrase being composed. Computer based solutions include on-screen dashboards that present selectable grids of concepts for communicating with others synchronously or at a later time (i.e., composing and storing messages). These grid-based applications offer great opportunities for non-literate users, such as children or language impaired individuals (Hourcade, Pilotte, West, & Parette, 2004). With a set of well-chosen icons, they can still use a sufficient vocabulary to communicate with others. The same or similar grids can be populated with letters of the alphabet, words, and phrases, serving literate users as well. The evaluation of a user's abilities and requirements for AAC includes the individual's motor, visual, cognitive, language and communication strengths and weaknesses. Studies show that AAC use does not impede the development of speech, and may result in a modest increase in speech production (Millar, Light, & Schlosser, 2006). Users who have grown up with AAC report satisfying relationships and life activities; however, they may have poor literacy and are unlikely to be in employment (McNaughton, Light, & Groszyk, 2001). The International Society for Augmentative and Alternative Communication (ISAAC) is a leading organization that promotes awareness and research on AAC, and holds the ISAAC Biennial Conference (ISAAC, 2013). Another important resource for current trends in AAC research is the Augmentative and Alternative Communication journal (Informa Healthcare, 2013). The first section of this chapter will explore special communication needs. In the next section, the various types of AAC and symbols are presented, and the state of the art follows. Next, the AAC product lifecycle is explained taking the example of a framework for developing AAC applications called ITHACA. The AAC application features section includes the description of possible modules like on-screen keyboards, on-screen communication boards, scanning patterns and control techniques, speech synthesis, word prediction, message editors, symbol translation, and remote communication. Next, a number of questions are given that have to be answered by AAC professionals and users' families in order to correctly select an AAC system. The informal evaluation case study illustrates the deployment of four distinct AAC systems for different users and includes a number of observations based on the subsequent use of these systems and conversations with the family members and therapists who were involved.
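As a first taste of the rate enhancement techniques mentioned above, the following minimal Python sketch shows frequency-based word completion; the vocabulary and counts are invented, and a real system would learn them from the user's own messages:

```python
# Minimal sketch of frequency-based word completion for rate enhancement.
# The vocabulary and counts are invented; a real AAC system would learn
# them from the user's own past messages.
from typing import List, Tuple

LEXICON: List[Tuple[str, int]] = [
    ("hello", 120), ("help", 95), ("hungry", 60),
    ("home", 150), ("how", 200), ("happy", 80),
]

def predict(prefix: str, max_guesses: int = 3) -> List[str]:
    """Return the most frequently used words starting with the typed prefix."""
    matches = [(word, freq) for word, freq in LEXICON
               if word.startswith(prefix.lower())]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [word for word, _ in matches[:max_guesses]]

print(predict("h"))   # ['how', 'home', 'hello']
print(predict("hu"))  # ['hungry']
```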
USER NEEDS It is estimated that approximately 3% of the total population have serious communication problems (Royal National Institute of Blind People, 2008). Approximately 40% of cases are due to purely pathological causes: mental or physical disability, deafness, diseases, strokes, accidents, acquired or congenital brain injury, throat surgery, abnormal vocal cords, etc. Many cases are not clearly pathological; the causes may be psychological, sociological or combinations (Bernstein & Tiegerman-Farber, 2008). Prevalence data vary greatly depending on the country, age and disabilities surveyed, but typically between 0.1% and 1.5% of the population are considered to have such severe speech-language impairments that they have difficulty making themselves understood, and thus could benefit from AAC (Lindsay, Dockrell, Desforges, Law, & Peacey, 2010). An estimated 0.05% of children and young people require high technology AAC (Gross, 2010). From the communicative side, people with communication problems can be classified into one of the following three groups according to their needs (von Tetzchner & Martinsen, 1992): 1. Individuals who require expressive language: These are people who have an understanding of the language, understand others, but are unable to express themselves, usually due to severe
motor problems (e.g., due to cerebral palsy) or surgical intervention (such as laryngectomy). They need a way to use their, often limited, motor skills to express themselves and to communicate. Since the causes are predominantly permanent, an AAC system will be used throughout the person's lifetime. 2. Individuals who need supportive language: People who can potentially speak, but whose process of speech development is not normal. This group can be separated into two categories: those who have a speech delay, and those who produce unintelligible speech. In the first category, the use of an alternative form of communication has often been the means to acquire language comprehension and expression. Common examples of this category are children with developmental disabilities and Down syndrome. In the second category are those who have developed speech but, for various reasons, others cannot understand it. Therefore they need means to explain to their listeners what is not understood (for example sign language, writing, etc.). Both categories are very often combined with physical disabilities. 3. Individuals who require alternative language: This category includes persons who are unable to communicate in any form of speech. They are unable to express themselves, and sometimes also to understand the speech of others. The alternative language is the only way of lifelong communication. A common characteristic of this group is agnosia for sounds, which means that it is impossible for them to distinguish sounds as phonetic entities such as words; some are not even capable of distinguishing the types of sounds and their origins. Generally, this category includes cases of autism or severe mental retardation, as well as brain injury or neurological disorders. Classification of diseases, disorders, and disabilities, as well as the exhaustive listing of those that affect speech, is out of the scope of this book chapter. For completeness, Table 1 gives an indicative, non-exhaustive list of speech-related motor conditions, diseases and disorders:
• Cerebral Palsy
• Traumatic Brain Injury (intracranial injury)
• Spinal Cord Injury
• Brain Stem Stroke syndromes
• Muscular Dystrophy (MD)
• Amyotrophic Lateral Sclerosis (ALS - Lou Gehrig's disease)
• Multiple Sclerosis (MS)
• Parkinson's Disease
• Dysarthria
• Morphological Abnormalities in the Vocal Organs
• Rett Syndrome
• Apraxia of Speech
• Ataxia
• Aphasia
Table 1: The main motor or neurogenic conditions, diseases and disorders associated with communication problems

The World Health Organization's International Classification of Functioning, Disability and Health, known more commonly as ICF, provides a standard language and framework for the description of health and health-related states. It is a classification of health and health-related domains - domains that help us to describe changes in body function and structure, what a person with a health condition can do in a standard environment (their level of capacity), as well as what they actually do in their usual environment (their level of performance). These domains are classified from body, individual and societal perspectives by means of two lists: a list of body functions and structure, and a list of domains of activity and participation. In ICF, the term functioning refers to all body functions, activities and participation, while disability is similarly an umbrella term for impairments, activity limitations and participation restrictions. ICF also lists environmental factors that interact with all these components. Table 2 illustrates the "Communication" chapter of the "Activities and Participation" component of ICF. This ICF chapter covers the general and specific characteristics of communicating through language, meanings and symbols, including the production and reception of messages, discussions, and the use of devices and communication techniques.

Communicating - receiving
d310 Receiving spoken messages: Comprehending literal and implied meanings of messages in spoken language, such as understanding that a statement asserts a fact or is an idiomatic expression.
d315 Receiving nonverbal messages: Comprehending the literal and implied meanings of messages conveyed by gestures, symbols and drawings, such as realizing that a child is tired when she rubs her eyes or that a warning bell means that there is a fire. Includes: communicating with - receiving - body gestures, general signs and symbols, drawings and photographs.
d325 Receiving written messages: Comprehending the literal and implied meanings of messages that are conveyed through written language, such as following political events in the daily newspaper or understanding the intent of religious scripture.

Communicating - producing
d330 Speaking: Producing words, phrases and longer passages in spoken messages with literal and implied meaning, such as expressing a fact or telling a story in oral language.
d335 Producing nonverbal messages: Using gestures, symbols and drawings to convey messages, such as shaking one's head to indicate disagreement or drawing a picture or diagram to convey a fact or complex idea. Includes: producing body gestures, signs, symbols, drawings and photographs.
d345 Writing messages: Producing the literal and implied meanings of messages that are conveyed through written language, such as writing a letter to a friend.

Conversation and use of communication devices and techniques
d350 Conversation: Starting, sustaining and ending an interchange of thoughts and ideas, carried out by means of spoken, written, sign or other forms of language, with one or more people one knows or who are strangers, in formal or casual settings. Includes: starting, sustaining and ending a conversation; conversing with one or many people.
d355 Discussion: Starting, sustaining and ending an examination of a matter, with arguments for or against, or debate carried out by means of spoken, written, sign or other forms of language, with one or more people one knows or who are strangers, in formal or casual settings. Includes: discussion with one person or many people.
d360 Using communication devices and techniques: Using devices, techniques and other means for the purposes of communicating, such as calling a friend on the telephone. Includes: using telecommunication devices, using writing machines and communication techniques.

Table 2: The communication-related identifiers of the ICF

A rating system for each of the codes above can be used in the context of the ICF to describe the condition of a person. More information on the ICF, the codes, and its use can be found in (WHO, 2001; Bornman, 2004; Raghavendra, Bornman, Granlund, & Björck-Åkesson, 2007; Cook, Polgar, & Hussey, 2008).
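As an illustration of how such ratings can be recorded, the short Python sketch below pairs some of the Table 2 codes with the generic ICF 0-4 qualifier scale (0 = no difficulty, 4 = complete difficulty); the client profile itself is invented:

```python
# Sketch of recording ICF qualifiers for the communication codes of
# Table 2. The client profile is invented; the 0-4 scale is the generic
# ICF qualifier (0 = no difficulty ... 4 = complete difficulty).
QUALIFIER = {0: "no difficulty", 1: "mild difficulty",
             2: "moderate difficulty", 3: "severe difficulty",
             4: "complete difficulty"}

# Hypothetical assessment of a client with severe dysarthria
profile = {
    "d310": 0,  # Receiving spoken messages
    "d330": 4,  # Speaking
    "d335": 1,  # Producing nonverbal messages
    "d345": 2,  # Writing messages
    "d360": 1,  # Using communication devices and techniques
}

for code, q in profile.items():
    # ICF notation places the qualifier after a point, e.g. d330.4
    print(f"{code}.{q} ({QUALIFIER[q]})")
```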
For someone to learn and use AAC effectively, it needs to be part of everyday life, not a 'task' done occasionally. Communication doesn't happen in isolation. Each person using AAC will have a network of people around them – some with a formal remit to 'work' on communication and others who have communication links with that person on a more personal, social, work related or educational level. Murphy, Scott, Moodie, & McCall (1994) found that most of the people in their study could identify a group of individuals who had some communication remit in their work with the AAC user; however, within each group there was confusion as to who the other members were and what role each played. For AAC use to be maximized, good coordination is required, with a speech and language therapist or other rehabilitation professional acting as coordinator (Murphy, Markova, Collins, & Moodie, 1996). The following people have a very important role to play in helping the AAC system to function effectively, and in helping the user learn to communicate efficiently with AAC:
Parents, families, spouses, friends
Helpers or facilitators
Speech and language therapists
Teachers and classroom assistants
Occupational therapists
Physiotherapists
Rehabilitation engineers
Computer programmers/assistive technology specialists
Volunteers
AAC users
Blackstone, Williams, & Wilkins (2007) introduced the key principles underlying research and practice in AAC:
1. People who rely on AAC participate actively in AAC research and practice.
2. Widely accepted theoretical constructs are specifically addressed in the design and development of AAC technologies and instructional strategies.
3. AAC technologies and instructional strategies are designed to support and foster the abilities, preferences, and priorities of individuals with complex communication needs, taking into account motor, sensory, cognitive, psychological, linguistic, and behavioral skills, strengths, and challenges.
4. AAC technologies and instructional strategies are designed so as to recognize the unique roles communication partners play during interactions.
5. AAC technologies and instructional strategies enable individuals with complex communication needs to maintain, expand, and strengthen existing social networks and relationships, and to fulfill societal roles.
6. AAC outcomes are realized in practical forms, such as guidelines for clinical practice, design specifications, and commercial products. The social validity of these outcomes is determined by individuals with complex communication needs, their family members, AAC manufacturers, and the broader AAC community.
The main problems that disabled users face with existing computer-based commercial solutions include: costly products; absence of multilingual support; difficulty in locating products because of the geographically dispersed and fragmented market; lack of proper support for customization; low adaptability of the user interface; difficulty in adding or removing functionality or components when needed; and a limited number of complete products to choose from. Moreover, designing and developing interpersonal communication aids for people with disabilities is a domain in which modern software engineering approaches, such as those that combine Component-Based Development (CBD) and the open source model, have not been widely applied.
TYPES OF AAC The field was originally called "Augmentative Communication"; the term served to indicate that such communication systems were to supplement natural speech rather than to replace it. The addition of "Alternative" followed later, when it became clear that, for some individuals, these systems were their only means of communication (Vanderheiden, 2002). Motor impaired users may need AAC systems based on the written word; users with additional cognitive impairments may need systems based on symbols. AAC users typically utilize a variety of aided and unaided communication strategies, depending on the communication partners and the context (Fossett & Mirenda, 2009). The term 'AAC' includes four interlinking strands (Royal College of Speech and Language Therapists, 2006):
The communication medium – how the meaning of the message is being transmitted. This can be unaided, for instance by using gestures, facial expression, signing, etc., or it can be aided, where the person communicates using some sort of device other than their body, for instance via a communication chart, or an electronic device with speech output.
A means of access to the communication medium – this may be via a keyboard or touch screen, or by using a switch to scan from an array of letter, words, or pictures.
A system of representing meaning – when people speak, their meaning is represented by spoken words which act as 'symbols'. Where a person is unable to speak, their meaning has to be represented by a different set of symbols. These symbols may be traditional orthography (letters or words), or a set of pictorial symbols.
Strategies for interacting with a communication partner, for example being able to start up a conversation, or to sort things out when the other person does not understand.
Most AAC users use a number of different forms of AAC – a mixture of unaided and aided communication systems, and a mixture of low tech and high tech aids – depending on the situation (Millar & Scott, 1998). Unaided Communication This term is used for an augmentative method of communication which does not require the use of any additional material or equipment. Unaided communication includes the whole range of expressive things we can do with our bodies, such as gestures, facial expression, eye gaze, and body postures, and might include some mime-like movements and signs. At the simplest level, gesture is intuitive for everybody, and often immediately intelligible. It may be used by people with profound difficulties. More sophisticated gestural codes can also be developed. The biggest advantage of gestures and signing is that they are quick, immediate and practical – you can’t forget to take these systems with you; you can use them wherever you are; you don’t need any expensive or cumbersome equipment; they can’t break down. The disadvantage of gesture for transmitting information is, as Michael Williams (1994), who is himself an AAC user, says: “Gestures can get you a cup of coffee in the morning, but they do a poor job of telling your friend about that delicious piece of cake you had the other night. Gestures can only express things in the here and now. Also, gestures are poor candidates for expressing things like truth and beauty.” Signing is a much more sophisticated form of communication, a whole specialist area in itself, which is beyond the scope of this book. There are a number of different forms of signing – some use restricted numbers of signs as a support for speech, while at the other end of the scale others provide complex and powerful language, with enormously rich expressive capabilities. While signing is of course a primary AAC choice for people with deafness and hearing impairment, it is not always quite as useful for people with other motor and physical difficulties. The disadvantage of sign language is above all that not everyone in the communication impaired person’s environment – in fact, sometimes hardly anyone – may sign well themselves or understand sign to an advanced degree. Staff needs continual training in signing. Furthermore, many people who need augmentative communication systems have some degree of physical and/or neurological impairment, which may make the formation of recognizable signs physically difficult. Aided Communication This term refers to systems which involve some physical object or equipment such as symbol charts or books; it also refers to computer based systems or Voice Output Communication aids (VOCAs). An aided communication system can be something very simple (e.g., the alphabet written on a plain post-card) or it may involve a highly sophisticated microelectronic system specially programmed with a large vocabulary. The biggest advantages of aided communication are the flexibility and the richness of communication that can be achieved by creating and customizing vocabulary sets; employing
sophisticated methods of storage and retrieval; and providing users with special means of accessing them. Aided communication can be used by very young children, non-readers, and individuals with severe intellectual and sensory disabilities, as many systems are based on simple pictures and symbols. Systems based on alphabetic symbols, for those who can use them, give access to a limitless range of communication. Low tech systems can be very quick and simple to use. High tech aids can be designed for operation by very minimal movements (e.g., a single switch press), so can be accessible to individuals with severe physical disabilities. Rate enhancement techniques may be included in the design of an electronic aid, to try to help users approach a higher speed of communication. Voice output increases users' independence. Use of high tech systems greatly increases the range of types of communication available (e.g., group discussions, phone use, use in employment, connection with other computers, email, chatting online, etc.) beyond personal face to face communication. The biggest disadvantage of aided communication is the equipment itself. Having to remember and carry objects around with you inevitably means something can get forgotten, left behind, lost or broken. Sometimes equipment can be bulky, or heavy. If the communication equipment is electronic, there may be a need to keep track of wheelchair mountings, battery chargers or spare batteries, on top of the basic equipment – and there is always the specter of technical failure. For this reason, it is vital to have a) a non-electronic back-up, and b) insurance. Another disadvantage, to the user, of high tech aids is that acquisition of a sophisticated piece of technology may be very expensive. Low tech Low tech is anything that doesn't involve electricity or electronics (Vanderheiden, 1984). Low tech communication systems may take many forms, and might include, for example:
Tangible symbols (e.g., real objects, miniature objects or parts of objects, on an activity calendar)
Picture/photo boards or books
Symbol communication charts or books, topic boards
Letter, word or phrase boards
Communication cards (e.g., clipped on a key ring on a belt)
Eye-blink or eye-pointing codes: fix pictures, symbols or letters, or a code, to a frame in front of the user, who eye-points to the item they want to communicate (see Figure 1a).
Features of a low tech system to look out for are the choice of representational system (i.e., what kind of pictures, symbols or codes suit the user best) and the method of selection of items (e.g., direct pointing, saying 'yes' or 'no' when a helper points, switch use, etc.). High tech High tech is anything using electricity/electronics. This category covers a wide spectrum, starting with very low 'high tech' devices, which do contain some technological element, like a battery or a switch, but are very simple. For example:
Pointer boards (hit a switch to stop the pointer going round when it is at the object, picture or symbol required).
Switches connected to battery-operated devices or simple environmental control devices such as attention-getting beepers, cassette players, single message tape-loops or other simple message players.
Switches connected to a ‘Mains Switcher’ to allow a user to control the environment and devices like a television, or a lamp.
Toys or books that speak when certain areas are pressed.
Simple, single message playback devices such as talking press switches.
State of the art in high technology AAC is computer-based systems with text to speech software and sophisticated access methods like eye gaze. Terms like Voice Output Communication Aid (VOCA), Speech Generating Device (SGD), and Communicator can be used to denote such systems, but they are also sometimes used for lower tech, non computer-based devices. Initially the computer-based systems were housed in desktop computers, later in notebooks, and the most successful configuration came when industrial or consumer tablet computers were used. Both industrial and consumer tablet computers have the important feature of the touch screen that can be used for direct selection, and the industrial ones have the advantage of being shock, drop, dust and moisture tolerant. In the very common case where users cannot use a normal keyboard, tablet computers without a keyboard are much more compact and robust and require limited space, which is very important when mobility is needed. Lately, the explosion of the tablet market, with products from Apple Inc. (like the iPad) and Samsung (like the Galaxy series), has offered very convenient and portable new platforms for AAC systems. Furthermore, Android-based smartphones and iPhones have raised high tech systems one level higher, with hundreds of AAC apps available online. Features to look out for in high 'high tech' communication systems would include:
Portability and robustness.
Range and type of possible input methods (keyboard; overlay keyboard; switch input; a range of scan options).
Type of screen display (none; static or dynamic; displaying only text or also symbols).
Techniques used to store and retrieve messages.
Output (transient or permanent); what type of screen, if any; digitized voice; synthetic voice; text; hard copy printout; storage on disk.
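As a hint of how the voice output of such a system can be driven programmatically, here is a minimal sketch using the open source pyttsx3 text to speech library for Python (one option among many; the stored phrases are invented):

```python
# Sketch of the voice output core of a high tech aid, using the open
# source pyttsx3 text-to-speech library (any comparable TTS engine
# would do). The stored messages are invented examples.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # speaking rate; tuned per user

# Pre-stored messages that the user retrieves with a single selection
messages = {
    "greet": "Hello, nice to meet you.",
    "help": "Please call my assistant.",
}

def speak(key: str) -> None:
    """Send a stored message to the speech synthesizer."""
    engine.say(messages[key])
    engine.runAndWait()  # blocks until the utterance has been spoken

speak("greet")
```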
High tech communication aids also vary in the degree to which they demand of the user more or less sophisticated techniques of visual perception, memory, sequencing skills, language processing, meaning associations, grammar or encoding. Table 3 summarizes the communication options for individuals with physical and speech disabilities, and Figure 1 illustrates drawings of examples of AAC systems.

Communication and selection techniques:
• Natural gestures (e.g., eye gazing, pointing)
• Signing
• Encoding (e.g., Morse code)
• Direct selection: eye gaze; light or optical pointer; head pointer; manual pointing (e.g., finger, hand, etc.); switch-activated; electronic head pointing with switch or dwell selection
• Indirect selection: linear/step scanning; auto scanning; auto row/column scanning; group scanning; directed scanning; auditory scanning

Unaided communication (no tech) - no external aids or devices:
• Gestural communication: facial expressions, eye gaze, body postures, hand gestures
• Sign language
• Limited speech

Aided communication - aids and devices with symbols (objects, photographs, pictures, text, written and/or spoken words):

Low tech - non-voice output aids or devices:
• Eye gaze board/frame
• Object communication board
• Picture communication cards, board or book
• Communication cards, board or book with text or pictures
• Clock communicator or rotary scanner
• Paper and pencil or pen
• Dry erase board and marker
• Alphabet board

High tech - VOCA or SGD:
• Voice amplification system
• Simple voice output device with a single message (record and playback)
• Simple voice output device with multiple messages (record and playback)
• Voice output device with icon sequencing
• Voice output device with dynamic display
• Voice output device with interactive visual scenes and hot spots
• Voice output device with word-based vocabulary
• Device with speech synthesis for typing, with or without word prediction
• Software: text to speech

Table 3: Communication options for individuals with physical and speech impairment (Hawaii Department of Education, 2008)
Figure 1: Examples of AAC systems: (a) low-tech non-electronic picture frame for eye gaze selection, (b) high-tech electronic symbol selection and pre-recorded message output devices (non-computer based), (c) high-tech computer based VOCA with scanning and text to speech.
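To make the indirect selection modes of Table 3 concrete, the following Python sketch implements auto row/column scanning with a single switch; the grid contents and timing are invented, and the switch is simulated with a scripted press sequence:

```python
# Minimal sketch of auto row/column scanning: the highlight steps
# through rows at a fixed interval, a switch press selects a row, and
# a second press picks a cell within it. A real aid would render the
# grid and read a physical switch.
import itertools
import time

GRID = [["yes", "no", "help"],
        ["eat", "drink", "toilet"],
        ["happy", "sad", "tired"]]
SCAN_INTERVAL = 0.5  # seconds per highlight step, tuned per user

# Scripted presses for the demo: skip row 1, select row 2,
# skip "eat", select "drink".
presses = iter([False, True, False, True])

def wait_for_switch() -> bool:
    """Wait one highlight interval; report whether the switch was pressed."""
    time.sleep(SCAN_INTERVAL)
    return next(presses, False)

def scan_select(grid):
    for row in itertools.cycle(grid):          # phase 1: step through rows
        print("highlighting row:", row)
        if wait_for_switch():
            for cell in itertools.cycle(row):  # phase 2: step through cells
                print("highlighting cell:", cell)
                if wait_for_switch():
                    return cell

print("selected:", scan_select(GRID))  # -> selected: drink
```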
SYMBOLS AAC can rely on the written word for literate users, but it can also utilize representational graphic symbols as a means of communication for non-literate users. These symbols, due to their different nature, may vary in
characteristics and properties. Each symbolic communication system has a set of properties that mainly relate to the ease of recognition, decoding and learning: iconicity, learnability, vocabulary and language expressiveness, need for knowledge of the system by all communication partners, recommended user age, and system applications. Iconicity The primary and most obvious property of a representational symbol is the ability to make the concept that it represents obvious: whether there is an obvious relationship between the reference and the referent. Iconicity is the measure of this relationship (von Tetzchner & Martinsen, 1992). Iconicity is measured on a continuum, at one end of which are the obvious symbols, and at the other end completely abstract and unrecognizable symbols. Symbols whose meaning is obvious are called transparent. With transparent symbols it is easy for someone to guess their meaning even if they are uninitiated in the system. For example, an iconographic symbol showing a cat is most likely to mean just that. At the other end of the iconicity continuum are opaque symbols. With opaque symbols it is very difficult for someone to guess the meaning to which they refer, and usually the connection of the symbol with the concept has been established by some sort of convention. Opaque symbols have no resemblance to the referent. In this light, written words can be considered opaque symbols. In the interval between transparent and opaque symbols are the translucent symbols. These symbols could perhaps be deciphered by someone uninitiated in the system, but with difficulty and the need for additional information. But once someone has come into contact with the symbol system, the meanings become obvious and easily identifiable. The ease or difficulty of guessing the referent only from the reference is directly linked to the concept of iconicity. Learnability The second main property of a symbol is learnability. Of course, this property can only be seen in relation to the person who teaches or studies a symbol or a symbolic communication system, and his/her physical and mental abilities. However, studies with various groups of people have shown that the ability to initially guess the meaning of a symbol and the ability to remember it afterwards usually go together (Reichle, York, Sigafoos, & York-Barr, 1991). Vocabulary and Language Expressiveness The iconicity and learnability of symbols are properties mainly related to the graphics/symbols of a symbolic communication system as separate entities. But as symbolic systems seek - and often manage - to qualify as languages, there are many additional elements that characterize them. Among the main elements is the vocabulary offered. Some systems provide a vocabulary limited to certain concepts, mostly corresponding to the basic needs of an individual (e.g., food, cold, etc.), while others provide almost complete vocabularies (Allen, et al., 1992). It is obvious that the vocabulary is directly linked to the needs that the system covers, and the categories of individuals using the given system. The size of the vocabulary is generally proportional to the mental capacity and the breadth of experiences of the person who communicates or will learn to communicate using the system. The number of words one uses on a daily basis depends on the person's need for expression. The need begins with expressing basic needs and extends to thoughts and feelings.
For this reason, the various symbolic systems usually provide the vocabulary that corresponds to the capabilities and needs of the user group that each mainly addresses. The number of words is also determined by the type of included words. If the system's core words refer only to concrete concepts, the vocabulary is considerably smaller than one that includes abstractions, verbs and other more complex parts of speech.
A vocabulary is characterized not only by the number of individual words it comprises, but also by how flexible it is, i.e., whether words or concepts can be used with multiple meanings, be synonymous or have metaphorical uses, and whether they can create new concepts by combination. Such capabilities offer flexibility and expressive potential close to the possibilities of traditional languages. In some symbolic systems, the graphics show no correlation with the written word. Other systems have been created on the model of traditional languages and are in line with the written word, and others include syntactic and grammatical structures. The degree of correlation depends on the cognitive abilities of the people that each system addresses and their need for expression. The systems that have some sort of correspondence with the written word often serve as a first step towards creating a perception of grammar and syntax, and a deeper understanding of the structures of language, for people who are not familiar with such concepts. Need for knowledge of the system by all communication partners Symbolic systems usually require not only training of the person who will use them as a primary mode of communication, but also the training of his/her interlocutors, even if they are able to use the traditional language. The degree to which this training is necessary varies. In some systems, communication is impossible if the "listener" has no familiarity with the symbolic system. In others, communication is greatly facilitated by familiarity, but without it communication is still possible (e.g., by showing the meaning of each symbol in a legend). Age The age at which a user is able to get acquainted with a symbolic system depends on cognitive skills, but also depends significantly on the system itself, how complex it is, and its iconicity. In the case of people with mental problems or retardation, mental age is also relevant. System applications Almost all symbolic systems are accompanied by methods and techniques for their use. Some are easily applicable and require few resources; others are more complicated. The development of computer-based AAC systems was revolutionary in the assistance provided. Generally, the system applications are divided, as described in the previous sections, into no-tech, low-tech, and high-tech. Taxonomies AAC has involved the use of over 100 different aided and unaided symbol sets and systems. To our knowledge, the first published English-language AAC symbol classifications were by Kiernan (1979) and subsequently by Kiernan, Reid and Jones (1982). From 1977 through 1986 a number of AAC taxonomies were used (Lloyd & Fuller, 1986). Vanderheiden and Lloyd (1986) proposed a static vs. dynamic symbol taxonomy. Although such a classification can be of clinical or research value, it was flawed because several symbol sets and systems can be placed in either of the two categories. Lloyd and Fuller (1986) proposed the aided vs. unaided AAC symbol taxonomy, which does not have this ambiguity of classification. In an attempt to develop other types of classifications, Fuller, Lloyd & Schlosser (1992) elaborated on the taxonomy from Lloyd, Quist, and Windsor (1990) and published factors of classification (i.e., aided vs. unaided, static vs. dynamic, iconic vs. opaque, and set vs. system).
In attempting to further conceptualize AAC for research and practice, Fuller, Lloyd & Schlosser (1992) proposed a model which was primarily based on a robust multimodal communication model used in aural rehabilitation (Sanders, 1982) but which also includes an AAC transmission process interface: the means to represent symbols, the means to select a symbol, and the means to transmit a symbol.
Although the taxonomies published to date have some clinical and research value, they lack an in-depth consideration of language use. In an attempt to overcome this drawback, the latest semantic encoding approaches distinguish three types (Baker & Lloyd, 2011):
Type I: no defined elements and an indefinite total number of symbols (Real objects, Miniature objects, Photographs, Simple line drawings, Picture Communication Symbols, Oakland Picture Dictionary, Pictogram, Ideogram, Communication PIC, Makaton, Sigsymbols, DynaSyms, Lingraphica Concept-Images, Widget Symbols, Imagine Symbols, PMLS Symbols, Tech/Syms, American Sign Language);
Type II: defined and restricted number of elements but an indefinite total number of possible symbols (Yerkish Lexigrams, PICSYMS, Blissymbolics, Pixons, Premack Symbols, Sign Writing, Jet Era Glyphs/CyberGlyphs, or outside the field of AAC, Mandarin Chinese); and
Type III: a restricted number of symbols that recombine to provide an indefinite number of total symbol sequences (Unity Word Strategy, Deutsche Wortstrategie, Svenska Blisstrategi, Blissymbol Component Minspeak Word Strategy, Minspeak, Unity). Examples also include phonetic (not semantic) systems: Morse Code, Aided Representation of Finger Spelling, Traditional Orthography, Braille, Phonetic Alphabets.
Each of the three encoding types has differing approaches to iconicity (McDougall & Curry, 2004). Signs which represent a concept sufficiently to be recognized without any interpretation are "transparent". "Translucent" signs refer to the represented concept but require interpretation to be understood. "Opaque" signs bear no direct resemblance to the represented concept. Type I encodings strive for high iconicity. Some common actions ("kiss") and objects ("toaster") are picture-producers that are easy to represent transparently, whereas other common actions and objects (such as "need" and "trouble") cannot be represented transparently. Type I encodings omit high-frequency vocabulary, perhaps because there are few picture-producing words among the 400 most common words in a language. Type I focuses on large collections of nouns designating objects. Non-picture-producing vocabulary is represented by symbols of low translucency and sounds-like strategies with additional phonetic labels to guide instructors. Type II encodings are systems that constrain the relationships between primitive symbol elements. An example is the Mandarin character system, "Hanzi," which has a limited number of strokes and constraints on stroke placement. All elements of Mandarin surface structure are represented faithfully by the various Hanzi characters. Iconic transparency is not a high goal in Mandarin, although many mnemonic rationales are used to teach the meanings of Hanzi. An example of a Type II symbol system in AAC is Blissymbolics, a logical, universal language based on semantic primes, many of which were derived from Mandarin Hanzi. Type III symbol systems use a restricted selection set (rarely exceeding 100 icons) to represent an indefinite number of concepts in a language. Selections can combine with each other following a set of grammar rules. Whereas Blissymbolics and Mandarin compose individual primitives to form a selection with translucent properties, Type III encodings form short, rule-driven sequences representing an indefinite number of concepts. Type III systems simultaneously reduce the size of the selection set and the number of selections in a string. For example, Minspeak is a way of representing language in a communication device. If you show people a picture of something simple, like an apple, they will naturally associate more than one idea with that picture. People usually say the most obvious idea first – "apple" – but then they start associating more ideas – "fruit," "red," "eat," "bite," and "hungry." Minspeak takes advantage of this natural tendency by using a small set of pictures to represent a large number of words in a communication device. With a small number of pictures – called icons – the person using Minspeak can communicate a large vocabulary without having to spell or to learn and navigate through a large set of pictures.
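The following Python sketch illustrates the mechanics of a Type III encoding: a small icon set whose short sequences retrieve a much larger vocabulary. The icons and pairings are invented and only mimic the spirit of semantic compaction systems such as Minspeak, not any actual vocabulary program:

```python
# Sketch of Type III semantic encoding: a small icon set recombined
# into short sequences that retrieve a larger vocabulary. The icons
# and pairings are invented for illustration.
from typing import Tuple

SEQUENCES = {
    ("apple",): "apple",
    ("apple", "verb"): "eat",
    ("apple", "adjective"): "hungry",
    ("apple", "category"): "fruit",
    ("sun", "adjective"): "happy",
}

def decode(sequence: Tuple[str, ...]) -> str:
    """Map a short icon sequence to the word it encodes."""
    return SEQUENCES.get(sequence, "<no entry>")

print(decode(("apple",)))              # apple
print(decode(("apple", "verb")))       # eat
print(decode(("apple", "adjective")))  # hungry
```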
Likewise, the Unity system focuses on frequently used words. Research has shown that approximately 400 frequently used words make up more than 75% of our speech, regardless of age, gender, or background. Eighty percent of the words we use come from a small set of 400 to 500 core words. Core words are used across many situations by people of all ages. The rest of the words we use are called fringe words, and consist mostly of infrequently used nouns that are activity specific. "I want that." All the words in this sentence are core words. "I want that cookie." Cookie is a fringe word. All the other words are core words. The Unity language system contains many core and fringe vocabulary words. The most frequently used core words are always available on the first screen. The location for a core word is always in the same place. A beginning communicator might start with a Unity vocabulary where one picture represents one word. A more advanced communicator might use a Unity vocabulary that uses short sequences of pictures to represent words or phrases. Today, there is a smaller number of widely used symbolic systems, which include (short names in alphabetical order): ARASAAC, BLISS, DO2LEARN, DYNASYMS, IMAGINE, LEXIGRAMS, MAKATON, MINSPEAK, MULBERRY, OAKLAND, PCS, PIC, PICSYMS, PICTURESET, PREMACK, REBUS, SCLERA, SELF-TALK, SIGSYM, SYMBOLSTIX, TALKING PICTURES, TECH/SYM, UNITY, WIDGIT.
Figure 2: Indicative symbolic representations of six concepts in seven symbolic communication systems
Figure 2 depicts the concepts Bed, Book, Home, See, Television, and Water in the symbolic set of the Aragonese Portal of Augmentative and Alternative Communication (ARASAAC, 2013), Blissymbolics (Bliss, 1978), Makaton (Grove & Walker, 1990), Mulberry (Straight-Street, 2013), Picture Communication Symbols (PCS) (Mayer-Johnson, 2012), Sclera (Sclera NPO, 2013), and Tech/Syms.
STATE OF THE ART In the past, AAC was dominated by low-technology or non-electronic devices (Reichle, York, Sigafoos, & York-Barr, 1991). A few decades ago, several electronic aids were introduced in the international market, incorporating voice recording and playback capabilities. Such non-computer-based products are still widely used and are available from companies such as Ablenet (AbleNet, Inc., 2013), Attainment (Attainment Company, Inc., 2013), and Mayer-Johnson (Mayer-Johnson, 2013). Although these devices are considered to be very useful for some persons using AAC, they provide a limited vocabulary, need extra effort from facilitators to add new recordings, and cannot keep up with the nontrivial progress usually achieved by their users (Cumley & Swanson, 1999). Recently, a number of computer-based communication aids that support a range of symbolic communication systems, incorporate special I/O devices, configurable user interfaces, and speech synthesis, have been developed by various companies such as Ablenet (AbleNet, Inc., 2013), Tobii (Tobii Technology, 2013), Dynavox (DynaVox, 2013), Gus Communications (Gus Communication Devices Inc., 2013), and Prentke Romich (Prentke Romich Company, 2013). All these devices have large vocabularies, but they support a limited number of natural languages. These modern computer-mediated interpersonal communication systems are adaptable in order to satisfy the wide variety of the users' changing needs and specific user profiles (Light, 1989); nevertheless, from the software engineering perspective, they are usually monolithic and rather difficult to modify or extend. The traditional AAC market changed rapidly when iPhones, iPads, and Android tablets appeared, offering new ways of interacting, like multitouch, but above all a very high degree of mobility and much lower hardware cost (Higginbotham, 2011). The fact that these devices are not especially designed as AAC platforms poses some disadvantages, for example the lack of very loud built-in speakers, but the widespread availability of the hardware and the very high number of AAC apps developed (app lists are enriched every day) compensates for that. As research advances it becomes more and more evident that this might be the future of AAC (Sennott & Bowker, 2009), (Flores, et al., 2012). In Table 4 several devices are listed, sorted from heavier to lighter. For each device, the name (model), manufacturer (company), operating system, and weight are given, followed by indicative features, e.g., access methods, pre-installed software, screen size, and specifications. Most companies have a full product line comprising alternatives covering the entire range of weights and functionalities; the products shown here have been chosen randomly.
C15 (Tobii Technology) - Microsoft Windows 7 - weight 4 kg: optional CEye eye control module; touch screen and head mouse access; swappable batteries; 15'' LED screen; environmental control; cell phone calls and SMSs; wheelchair mountable; 4 integrated speakers, 2 microphones, and a camera.

ECO2 with ECOpoint (Prentke Romich Company) - Microsoft Windows 7 - weight 3.5 kg: optional ECOpoint eye control; 14.1'' XGA TFT display; infrared environmental control; integrated head pointing system by Madentec; integrated Bluetooth connectivity; single or dual switch scanning; Vocabulary Builder and Unity language system software with multiple voices. A dedicated version without the computer capabilities is also available to meet Medicare/Medicaid funding requirements.

Accent1200 (Prentke Romich Company) - Microsoft Windows 7 - weight 2.5 kg: pre-loaded with NuVoice and AAC Language Lab; context-sensitive help; two cameras for picture-taking; 12'' touchscreen display; high quality sound; carrying handle; single- or dual-switch scanning; multiple voices.

Communicator TB19 (Gus Communication Devices, Inc.) - Microsoft Windows XP - weight 2.3 kg: based on the Panasonic Toughbook, with a 3-year warranty; fully rugged: shock, drop and water resistant; 10.4'' touchscreen viewable both indoors and outdoors, even in direct sunlight; 8+ hours of battery life; loud internal speaker; built-in carrying handle; included software: SpeechPRO, Communicator, Overboard.

C8 (Tobii Technology) - Microsoft Windows 7 - weight 1.8 kg: moisture resistant; mountable; swappable batteries; environmental control; 8.4'' resistive touch screen; powerful stereo speakers.

Tango (Dynavox) - dedicated device - weight 1.1 kg: lightweight and portable with a built-in desk stand; natural-sounding Acapela speech; 5.8 x 2.1'' touch screen; direct selection, or scanning with seven patterns; speakers, microphone, and camera; symbol or text communication; number of visible buttons adjustable to 6, 12 or 18; 6 hardware navigation and function buttons.

Skytouch (Words+) - Windows 8 - weight 940 g: Say-it! SAM communication software; AT&T synthetic voice output; direct 10.1'' touch screen access; can be locked to speech generation only, avoiding inappropriate access; live initial and ongoing training and support; wheelchair mounting.

Papoo Touch (Smartio) - dedicated device - weight 180 g: portable, with switch access; 4.3'' TFT touch screen; symbol and text communication with Grid 2; Nuance text to speech in 14 languages; media player, games, and calendar; lockable front buttons.
Table 4: AAC devices and indicative features

In Table 5 a limited number of AAC applications are briefly presented. For each application, the software name, the developer (or company), and the operating system on which the software runs are given, followed by prominent features such as purpose, symbols contained, access methods supported, and built-in text to speech engines.

Communicator (Tobii Technology) - Microsoft Windows: communication board creation; visual scene creation; speech synthesis engine; 15,000 SymbolStix symbols; on-screen custom keyboards; word prediction (frequency based); built-in dictionary; grammar functions and word conjugation; mouse emulation for switch users; access to Windows; environmental control; sound and speech recording; calculator, calendar and address book; Facebook, chat, Skype, email, and SMS; switch scanning; touch screen, mouse click, mouse dwell, joystick, or head mouse access; eye control ready; auditory prompts and undo function.

Sono Flex (Tobii Technology) - Apple iOS, Google Android, Microsoft Windows: communication board creation; 50 pre-made context vocabularies; topic board linking; 11,000 SymbolStix symbols; camera and photo album integration; 5 Acapela voices (boy, girl, woman, man); separation of seldom and often used words.

The Grid 2 (Sensory Software International Ltd.) - Microsoft Windows: uses symbols or text; access to the Windows desktop and other programs; pre-made grid sets; Widgit symbols and SeeSense picture library included; PCS, Blissymbolics, SymbolStix, Makaton, and Snaps optional; virtual keyboards; word prediction with phonetic matching, similar spelling, and verb morphology options; stored messages; can display symbols next to words; variety of scanning options; Acapela voices; 22 languages.

Clarity Symbol Set (Liberator - Prentke Romich Company) - exclusive to Prentke Romich AAC devices: 9,000 symbols; topic-based items such as countries, math, chemistry; creation of user-specific keyboards; includes homophones and homonyms; allows changing the picture-word labels; creation of cheat sheets, flashcards, manual boards, and vocabulary sorts.

Boardmaker with Speaking Dynamically Pro (Mayer-Johnson) - Microsoft Windows, MacOS: ideal for symbol training; 4,500 Picture Communication Symbols; optional additional PCS symbols; visual scene creation; 44 languages; virtual keyboards with word prediction; 600 interactive sample boards; text to speech or recorded speech; switch access with auditory and visual scanning; ability to link boards and print messages; abbreviation expansion and customizable dictionaries; building sentences with symbols or text; highlighting words as it reads; QuickTime movie playback; launching of other programs; button magnification for low vision.

SpeechPRO (Gus Communication Devices, Inc.) - Microsoft Windows: combines an on-screen keyboard (top half) and a dynamic display expanding grid (bottom half); designed for text and/or symbol based communication; includes 5,500 communication symbols and NeoSpeech premium voices; free software upgrades and technical support.

Overboard (Gus Communication Devices, Inc.) - Microsoft Windows, MacOSX: speech output; communication board creation; 5,500 communication symbols; 250 communication device templates; links to 150,000 images and symbols from Microsoft's clipart library; page linking; resizing of pictures and symbols to fit any page size.

Point To Pictures-PC (RJ Cooper) - Microsoft Windows: board layouts from 1 to 64 cells per board; super-size cells; zoom into a full screen image upon selection; playback of MP3s; 2 NeoSpeech text-to-speech voices; word completion and preview; auto and step-scanning built in; environmental control.

InterAACt (Dynavox) - Microsoft Windows: language framework; core words and dictionaries; visual scenes; page and pop-up templates; auditory scanning cues; wide range of word prediction options; on-screen keyboards; the level of complexity can be chosen to suit the user's age and ability, as well as the context or setting of the interaction.

Athena AAC (University of Athens) - Microsoft Windows: switch access with several scanning options, including 3-level scanning; fully customizable; supports all symbolic languages; advanced translation from telegraphic text to natural language; Microsoft SAPI support; emergency and dialogue bars.
Table 5: AAC applications and indicative features

There are numerous AAC apps for Android and iOS. In (AppsForAAC, 2013) 269 iPod, iPhone, and iPad apps are listed in total, 53 of which are free. Almost the same number of apps for the same platforms is listed in (Farall, 2012). Android AAC apps are listed in (AppsForAAC, 2012), and in (Speech and Accessibility Group, University of Athens, 2013). A different approach in AAC that is gaining ground is Visual Scene Displays (VSDs). VSDs, like other types of AAC displays, may be used to enhance communication on either low-tech boards or high-tech devices. VSDs are meant primarily to address the needs of beginning communicators and individuals with significant cognitive and/or linguistic limitations (Blackstone, 2004). They are considered valuable tools in teaching symbols or training for language skills. VSDs are designed to provide a high level of contextual support and to enable communication partners to be active and supportive participants in the communication process. Take, for example, a VSD showing a photo of picnic tables and family members eating, drinking and having fun at a reunion. If used on a low-tech board, the photo provides a shared context for interactants to converse about topics related to the reunion. When used on an AAC device, spoken messages can be embedded under related elements in the digitized photo. These are known as "hot spots". Someone with aphasia may use a VSD to converse with a friend by touching hot spots to access speech output, e.g., "This is my brother Ben," or "Do you go to family reunions?" Or, a three-year-old child with cerebral palsy may tell how "Grandpa" ate "pie" at the reunion by touching the related "hot spots" on the VSD. All of the major AAC manufacturers – for example, Dynavox with InterAACt, Mayer-Johnson with Boardmaker and Speaking Dynamically Pro, Enabling Devices with low tech VSD devices – are offering VSDs on their devices. Some companies are even offering them to the exclusion of traditional grids of symbols, or are making it very difficult to use traditional grids within their software, requiring hours of reprogramming to turn VSDs into grids. The research on VSDs seems to be primarily done on AAC users who have aphasia (Dietz & McKelvey, 2006) and secondarily on those with autism spectrum disorders (Shook & Coker, 2006). There is little research on VSDs in other populations (cerebral palsy, brain injury, Down syndrome, etc.). There is an assumption that visual scene displays reduce cognitive load, but this likely varies by individual, i.e., some individuals may find VSDs easier, but others may find traditional grids – with no questions as to what is selectable vs. what isn't – easier. This varies from person to person and task to task. The need for VSDs may also change based on the person's style of learning: highly visual people may prefer VSDs, while others may prefer traditional grids. There is also an assumption that visual scenes act as a visual cue to prompt conversation, but while this is possible, it is also possible that a VSD acts as a distractor and leads away from the point that needed to be made.
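A VSD hot spot layer can be modeled as photo regions bound to spoken messages, as in this Python sketch (the scene, coordinates and messages are invented for illustration):

```python
# Sketch of a visual scene display "hot spot" layer: rectangular regions
# of a digitized photo, each bound to a spoken message. The scene,
# coordinates and messages are invented for illustration.
from dataclasses import dataclass

@dataclass
class HotSpot:
    x: int          # region origin and size, in image pixels
    y: int
    w: int
    h: int
    message: str    # utterance handed to the text-to-speech engine

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.w and
                self.y <= py < self.y + self.h)

reunion_scene = [
    HotSpot(40, 80, 120, 200, "This is my brother Ben."),
    HotSpot(300, 150, 180, 120, "Do you go to family reunions?"),
]

def on_touch(px: int, py: int) -> None:
    """Speak the message of whichever hot spot the touch falls inside."""
    for spot in reunion_scene:
        if spot.contains(px, py):
            print("speak:", spot.message)  # hand off to TTS in a real aid
            return

on_touch(90, 150)  # speak: This is my brother Ben.
```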
The computer-based AAC systems that originated during the past decades can be classified as: a) dedicated AAC systems housed in specially designed hardware, b) proprietary AAC software applications running on a mainstream Operating System (OS) and housed in specially designed hardware, and c) proprietary AAC software applications running on a mainstream OS and housed in a mainstream device. DeRuyter, McNaughton, Caves, Bryen, & Williams (2007) sketch out the main strengths and drawbacks of these three approaches, concluding that the third approach is the most applicable and affordable. Finally, they point to the development of open source software that runs on mainstream computers as a better alternative for providing maximum flexibility and accessibility. Open source, along with CBD and the Design for All approach, might be the key to the future of AAC software design and development (Pino & Kouroupetroglou, 2010).

Software engineering offers several technical solutions (Larsen, 2000), including turnkey solutions, custom implementations, and Component Based Development (CBD). Turnkey solutions are ready-to-go, "out of the box" applications that address a wide market. However, specific and restricted user groups are not easily catered for: these applications usually do not have enough adaptation capabilities, and when they do, they require extensive configuration work. Custom implementation solutions typically involve building most of the system functionality from scratch, according to specific user needs, resulting in final products with a very steep price and high maintenance costs. CBD solutions, on the other hand, can result in families of products with interchangeable pieces of software. Such a modular approach can address a variety of user groups and needs. Component-based frameworks are partial implementations, specifying the nature and interconnectivity of software components and the way to extend the resulting applications with pluggable features. Final products can incorporate components from independent developers, encourage software reuse, and allow each specialized developer or manufacturer to deal only with their area of expertise.

Software components can be considered a mature software engineering concept (Hopkins, 2000). D'Souza and Wills (1999) define a software component as: "A coherent package of software artifacts that can be independently developed and delivered as a unit and that can be composed, unchanged, with other components to build something larger." They generalize to a component-based framework description (D'Souza & Wills, 1999): "In general, a component-based framework is a collaboration in which all the components are specified with type models; some of them may come with their own implementations. To use the framework you plug in components that fulfill the specifications."

The next key principles come from the open source model. Open source can be described from three different standpoints (solely or in conjunction): a) software protected under special copyright licenses aimed at ensuring availability and free (re)distribution of the source code; b) a process of software development that incorporates some unique technical and social characteristics, such as volunteer programming and the ability of users to suggest new features, report faults in programs, etc.; c) a movement based on the ideals of the hacker culture, premised upon the freedom to use, create, and tinker with software (Kollock, 1998).
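To make the D'Souza and Wills definition concrete, here is a minimal sketch of a component-based framework in Python; the OutputComponent type model and all names are illustrative assumptions, not part of any actual AAC framework:

```python
from abc import ABC, abstractmethod

class OutputComponent(ABC):
    # The framework specifies components only by their "type model" (interface).
    @abstractmethod
    def emit(self, message: str) -> None: ...

class ConsoleSpeech(OutputComponent):
    # An independently developed component that fulfils the specification.
    def emit(self, message: str) -> None:
        print(f"[speaking] {message}")

class CommunicationAid:
    # The framework composes whatever conforming components are plugged in,
    # unchanged, as in the D'Souza & Wills description quoted above.
    def __init__(self, outputs):
        self.outputs = outputs

    def speak(self, message: str) -> None:
        for component in self.outputs:
            component.emit(message)

aid = CommunicationAid([ConsoleSpeech()])
aid.speak("I want water")  # any other OutputComponent could be swapped in
```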
Initially, most of the software produced by the open source movement had an infrastructural character. As Castells (2002) indicates, this meant that its users consisted of programmers and system administrators
and very few applications were addressed to the average, non-technical user. However, this is rapidly changing. Open source is being adopted by a growing number of public and corporate organizations, and is reaching a wider and more diverse non-technical user base compared to its earlier phases of development.

Last but not least, Design for All, or Universal Design, constitutes an approach for building modern applications that need to accommodate heterogeneity in user characteristics, devices and contexts of use (Stephanidis, 2001). In the AAC domain, a Design for All approach can be implemented in such a manner that everybody can communicate with everybody, either face-to-face or at a distance, asynchronously (e.g., by exchanging e-mail) or in real time (e.g., by chatting). Most people do not need to use AAC aids, but they occasionally meet users of AAC devices or persons who use symbolic languages. In these cases, a Design for All system should allow both parties to understand each other. The same holds for people of different ages, speaking different languages or using different symbolic communication systems. However, the goal of Universal Design is not a single product; instead, it is a design approach that takes into account alternative user and usage-context characteristics (Emiliani, 2006).

In the research field, the COMSPEC (1998) and ACCESS (Kouroupetroglou, Viglas, Anagnostopoulos, Stamatis, & Pentaris, 1996) projects made significant steps towards component-based development. The outcomes of these projects, ComLink (Lundälv, Lysley, Head, & Hekstra, 1999) and ATIC (Kouroupetroglou, Viglas, Stamatis, & Pentaris, 1997), were component-based approaches. Although both frameworks were characterized as "open" (meaning that third-party developers could theoretically develop compatible components), they were essentially closed source, as their code was not freely available. ComLink was an AAC application generator that imposed a rather rigid user interface on the resulting aids; further drawbacks were an immature software platform and a limited starter set of software components. ATIC was introduced as a proprietary development environment for communication aids based on components and software agents. The main disadvantages of ATIC were that it required all components to be available at integration time, and that too much programming was involved in the integration process.

ATIC was the predecessor of the ULYSSES framework (Kouroupetroglou & Pino, 2001), and was based on the "proprietary AAC software functioning on a mainstream OS and housed in a mainstream device" approach. The main difference from the newer ULYSSES approach was that ATIC used a proprietary message manager and a complex communication protocol between components, making conformance with the architecture difficult for developers. ULYSSES, on the other hand, used a widely available and well-known infrastructure and messaging system (COM+) that was embedded in the OS, and a simpler object model (COM), making its guidelines and specifications straightforward. Nevertheless, ULYSSES had the main drawback that the whole AAC industry needed to become accustomed to its proprietary guidelines and code in order to comply with the framework. This is very unlikely to happen, especially when the framework is closed source, as were all three frameworks described so far.
ULYSSES evolved into the ITHACA framework, which is described in the next section and is used as a paradigm in several sections that follow.

The World Wide Augmentative and Alternative Communication (WWAAC) project has contributed towards open source development in the domain of Internet accessibility for AAC users (Poulson & Nicolle, 2004). The most important contribution of the WWAAC project was the Concept Coding Framework (CCF). CCF provided direct support for symbol users on web pages through its open source concept coding infrastructure and protocol (Judson, Hine, Lundälv, & Farre, 2005). The vision of concept coding was that, instead of images and symbols having to be transferred from one computer to another, it should be possible to transmit a unique code designating the meaning of the symbol to be transferred. Using this infrastructure, the WWAAC project also developed a Web authoring tool which enables Web developers to embellish their web pages with symbols using the online concept coding database (Lundälv & Judson, 2005). These important issues are also being discussed within the W3C's Web Content Accessibility Guidelines Working Group (World Wide Web
Consortium, 2008). The WWAAC Web browser and CCF have recently been hosted in the Open Source Assistive Technology Software repository (Judge & Lysley, 2005). The CCF effort is now continued in the European AEGIS project (Gkemou & Bekiaris, 2011); this, together with the current vocabulary efforts within Blissymbolics Communication International, highlights that issues of AAC vocabulary content, management and interoperability are principal (Lundälv & Derbring, 2012).

The ITHACA AAC framework

The ITHACA open source framework for the development of AAC applications (Pino, 2008) introduced the fusion of three modern trends in software engineering: CBD, open source software and Design for All (Savidis & Stephanidis, 2006). ITHACA relieved programmers of the burden of familiarizing themselves with complex interfaces and rules for building AAC software components, while facilitating the creation of sophisticated applications from off-the-shelf, independently pre-manufactured parts. ITHACA's vision was to give end users access to a large variety of state-of-the-art, inexpensive and easily accessible AAC products with versatile and adaptable UIs that fit their exact needs (Pino & Kouroupetroglou, 2010). Two main user groups of the ITHACA framework were identified: the AAC manufacturers or software developers, and the AAC systems' integrators or communication aid resellers. Although both user groups were aware of the basic characteristics of the framework, each had to know different aspects of ITHACA in detail.

The ITHACA framework was used to develop .NET applications that use COM+ services; the most important feature the framework provided was a simple communication protocol between software components (Figure 3). This protocol was open and easily modified according to the application's needs for data exchange between its components. The protocol was based on the consideration that the fundamental data used in AAC are "Concepts" (an idea widely accepted in the AAC domain). Abstract/logical concepts can engage various data types at the presentation level; that is, concepts may be represented by strings (i.e., words or phrases), video, sound, or images. A concept that is conveyed from one component to another, locally or remotely, may be processed and may change data type and/or language between components.
Figure 3: ITHACA data interfaces and component communication model.

To simplify the situation, we defined a base language (a database of concepts) named "Interlingua", i.e., a pseudo-language based on English, in which all concepts can be represented as character strings. Language-independent components can communicate using Interlingua, while output components or language-aware components use the equivalent representation (in any form) of the Interlingua concept in their own natural language or symbolic system, according to the database's relations. The database of natural and symbolic languages (including Interlingua and picture-, photograph-, sound- or video-based languages) is the heart of the framework and allows inter-component communication to be simplified to strings only. End users and/or facilitators and educators can easily add new content to the database, and add or modify concepts of the Interlingua.

The ITHACA concept transmission protocol consists of a set of interfaces used as a common channel for propagating strings of data, i.e., characters, words, sentences or complete messages that the user composes. This communication protocol and its interfaces, in cooperation with the ITHACA database and the Interlingua concepts, help to overcome the translation and inter-component communication challenges that AAC component-based framework research had encountered for many years (Pino & Kouroupetroglou, 2010). There are four interfaces named according to the string data type they convey: Character, Word, Sentence and Message. Furthermore, a fifth interface, Configuration, is used to convey control messages to the components (e.g., "initiate" or "terminate"). Components that need access to data (write or read) simply implement the appropriate interfaces and Publish (write) or Subscribe (read) to them, gaining access to the corresponding "communication channels".
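A minimal sketch of the publish/subscribe pattern just described, with a simple in-process broker standing in for the COM+ services; the names are illustrative, not ITHACA's actual API:

```python
from collections import defaultdict

# The five ITHACA-style interfaces, named after the string data they convey.
CHANNELS = ("Character", "Word", "Sentence", "Message", "Configuration")

class ConceptBus:
    # Hypothetical broker: components publish to and subscribe from channels.
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, handler):
        # A component that reads from a channel registers a handler for it.
        assert channel in CHANNELS, f"unknown channel: {channel}"
        self.subscribers[channel].append(handler)

    def publish(self, channel, concept):
        # Concepts travel as plain strings (Interlingua), whatever the source.
        assert channel in CHANNELS, f"unknown channel: {channel}"
        for handler in self.subscribers[channel]:
            handler(concept)

bus = ConceptBus()
bus.subscribe("Word", lambda w: print("message editor received:", w))
bus.publish("Word", "WATER")              # a symbol board publishes a concept
bus.publish("Configuration", "initiate")  # control message, no subscribers yet
```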
The AAC Software Lifecycle

Traditionally, software application developers in the domain of communication aids created standalone, monolithic applications based on their studies of user needs and market research. Retailers did not get actively involved in the development or in the configuration and adaptation of communication aids. The only feedback loop in the product life cycle was between the end user and the reseller, and that feedback was seldom propagated to the developer. Furthermore, a decade ago AAC products were very few and expensive due to the small market and the lack of software reuse, as many manufacturers developed the same functionality and features from scratch again and again. Throughout each product's life cycle, from the original idea to the end user, there was no significant feedback and evaluation. Finally, finding the right product for specific user needs was difficult due to dispersed information and selling points.

ATIC proposed a different life cycle that solved some of these problems and introduced an extended role for communication aids resellers. They were considered an important user group (namely, the integrators) with an essential part in the life cycle of AAC products: the task of assembling the whole AT system from available software components and suitable I/O devices and techniques. ULYSSES introduced the important role of the Internet as a widely accessible medium for gathering and propagating information about the framework and the available software components and I/O devices. In ATIC, the stores specialized in AT products played the role of the component repository; ULYSSES replaced the traditional stores with a specialized website offering a higher degree of availability, variety and flexibility.

The decision as to which AAC device or software is purchased, how instruction is provided, and how the device is maintained and developed typically involves many individuals, including the person who uses AAC, family members, communication and education professionals, and funding agencies (Rackensperger, Krezman, McNaughton, Williams, & D'Silva, 2005). All stakeholders in the AAC product lifecycle can participate and contribute to open source software development, on both technical and non-technical aspects. Developers (including software companies, individual programmers, and high-technology rehabilitation product manufacturers) concentrate on the technical aspects of a framework (ITHACA will be used as an example): the open source software engineering techniques, interfaces, and guidelines. Integrators (including AAC resellers, rehabilitation consulting centers, specialized therapists, and facilitators), on the other hand, focus on the proposed product life cycle, integration methods, and administrative tools for installing, configuring, modifying and maintaining the applications. ITHACA proposed an extended and upgraded (compared to the traditional) AAC product life cycle (Figure 4). The online components may be open source (community or commercial) or closed source, modifying the lifecycle into a new, hybrid open source/closed source product lifecycle.
A substantial aspect of the new life cycle is the high importance given to information propagation among all stakeholders and across all stages (Kouroupetroglou & Pino, 2002).
Figure 4: ITHACA AAC product Life Cycle. The framework allows for a new viewpoint from the side of AAC production stakeholders (AAC companies, designers, developers, vendors). The following procedures are positively affected:
Design. Design for All principles make the AAC components more usable. For example, if AAC component designers have those principles in mind, they take care that the Graphical User Interface of the components is adaptable to various user needs and can be modified to follow the users' progress and preferences (Kouroupetroglou, Pino, & Viglas, 2001).
Development. Open source code, provided through the framework's knowledge base and the online component inventory, allows for code reuse and facilitates the developers' job (Spinellis & Szyperski, 2004). Code reuse denotes that a part of a computer program of any size, written at one time, can be used in another program written at a later time.
It is often achieved through common libraries, components, subroutines, and interfaces. The reuse of programming code is a common technique that attempts to save time and energy by reducing redundant programming work. Furthermore, the component model allows developers to create software modules (i.e., software components) according to their expertise, in contrast to developing complete applications from scratch. This speeds up the development process and lowers its cost. For example, a company that develops a word prediction component for the Greek language need not worry if it does not possess a Greek Text-to-Speech system to complete an AAC application; another company may have already published a Greek speech synthesizer in the component inventory. The open source development model is a key aspect, opening new ways and possibilities for volunteer or other institutional contributions (Freeman, 2007).
Distribution. AAC components are catalogued and available online, and there is no need for shipping or physical packaging of software and documentation, which speeds up the distribution procedure. Licensing, as well as payment for components that are not free, can also be managed online. For free components, the GNU General Public License is used (Free Software Foundation, 2007). Both open source and commercial components can coexist, maintaining the companies' interests while increasing competition (Shah, 2006).
From the end-users' viewpoint (people with complex communication needs, facilitators, therapists, integrators, special education professionals and organizations), the framework also allows for innovative approaches in the lifecycle:
Search. AAC products and components are no longer geographically dispersed; they are gathered in a web-based inventory, providing easy and accurate detection. This is important, especially given the usual mobility difficulties of the potential users; shopping can be done from home.
Selection. On such a framework's website, users should be able to rate the components they use; this feature, along with easy pricing and ordering systems, will help potential clients easily decide what to choose from the variety of available components. Users will not have to worry about compatibility issues, as all components will interoperate as expected.
Modification. Components are pluggable so they can be added or removed from an AAC application at any time. Integrators or facilitators can download new components to replace the old ones as user needs change or technology advances are made. They can also download new vocabulary sets or new languages. These advanced modification features lead to a more versatile lifecycle and longevity of the AAC product.
Maintenance. New updates of the framework core files or new versions of components can be downloaded, or automatically distributed and installed remotely from the framework's website. Component licensing may also be remotely managed for the commercial components. Developers, facilitators with programming skills, and even hobbyist volunteers can contribute to the testing, evolution and upgrading of the software from the community side, continuously improving the quality and reliability of the components.
AAC APPLICATION FEATURES

The most common functionality of an AAC application can be developed in the form of pluggable components (Kouroupetroglou & Pino, 2002) or be a set of features in a monolithic software product. All of the main features described in the following sections can be included in a single AAC application, or only a few of them may be supported; the highest rated products on the AAC market usually incorporate most of them. Each feature may accommodate different user needs, but the features are usually used in conjunction.
On-screen keyboard

As the name implies, an on-screen keyboard is a virtual keyboard with active graphical keys displayed on the computer screen. It is used as an alternative to the physical keyboard, so people can type using a mouse or another input device. The on-screen keyboard can be configured to various layouts, such as alphabetic, QWERTY, or sorted by most frequently used letters. It can contain only letters, or add numbers and symbols. The virtual keys may or may not resemble physical keyboard keys; they can be quite big, and in some cases their color can be configured. An on-screen keyboard can support multiple typing modes, such as:
Clicking mode, in which you click the on-screen keys to type text, or directly select keys on a touch screen.
Scanning mode, in which the virtual keyboard continually scans the keys and highlights areas where you can select characters by pressing a hot key or using a switch-input device.
Hovering mode, in which you use a mouse, joystick or other device that can move the mouse pointer to point at a key for a predefined period of time, after which the selected character is typed automatically (see the sketch below).
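Hovering (dwell) selection is simple to sketch; in the minimal Python loop below, key_under_pointer and type_key are hypothetical callbacks standing in for whatever GUI toolkit is in use:

```python
import time

DWELL_SECONDS = 1.0  # hover time before a key counts as "pressed" (tunable)

def dwell_loop(key_under_pointer, type_key, poll=0.05):
    # key_under_pointer() -> key name or None; type_key(k) emits the character.
    current, since = None, time.monotonic()
    while True:
        key = key_under_pointer()
        now = time.monotonic()
        if key != current:
            current, since = key, now          # pointer moved: restart timer
        elif key is not None and now - since >= DWELL_SECONDS:
            type_key(key)                      # dwell elapsed: type the key
            since = now                        # re-arm (allows key repeat)
        time.sleep(poll)
```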
Users can view an enhanced layout that includes the numeric keypad, or a standard keyboard that does not. They may also be able to display the keys in the standard layout, or in a block layout in which the keys are arranged in rectangular blocks; the block layout is especially useful in scanning mode. An important feature that a virtual keyboard might support is displaying alphabets from several national languages, something a physical keyboard cannot do: a virtual keyboard can switch from US English to Greek and then to Cyrillic using hotkeys, and the user always sees the corresponding alphabet characters on the virtual keys. Besides text input in the AAC application itself, an on-screen keyboard can also be used to enter text into other applications, for example a word processor or an Internet browser; the "Always on Top" feature is handy in these cases to keep the keyboard displayed when switching programs or windows. This component can also play a click sound for audible feedback when a key is selected, as well as show a mechanical or graphical action of the key for visual feedback of the key press.

Communication boards

Communication boards, also called dashboards, grids, or selection sets, are the most typical feature of an AAC application. They are usually arrays of symbols or words, built from selectable interactive cells or buttons. Selection of a cell or button can result in:
speech output of a prerecorded or synthesized word or message,
the addition of the symbol or word of the cell to a message composing area,
display of another communication board (linked boards),
launch of another application (computer control), and
the switching on/off or control of an external device (environmental control); a data-structure sketch follows the list.
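A cell can be modelled as a small record whose optional fields correspond to the outcomes above; this is a minimal sketch with illustrative field and function names, not any product's actual data model:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Cell:
    # One selectable cell of a communication board.
    symbol: str
    speak_text: Optional[str] = None             # prerecorded/synthesized output
    append_to_message: bool = False              # add symbol to the message area
    linked_board: Optional["Board"] = None       # display another board
    action: Optional[Callable[[], None]] = None  # launch app / control a device

@dataclass
class Board:
    name: str
    cells: list = field(default_factory=list)

def select(cell: Cell, speak, editor_append, show_board):
    # Dispatch every behaviour configured on the selected cell.
    if cell.speak_text:
        speak(cell.speak_text)
    if cell.append_to_message:
        editor_append(cell.symbol)
    if cell.linked_board:
        show_board(cell.linked_board)
    if cell.action:
        cell.action()

drink = Cell("water", speak_text="I want water", append_to_message=True)
select(drink, speak=print, editor_append=lambda s: print("editor +", s),
       show_board=lambda b: print("showing", b.name))
```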
As with virtual keyboards, all types of scanning techniques may apply to communication boards. The arrangement of their vocabulary is usually very important. Some characteristics include the ability to change cell or button color, size, mechanical action and feedback. Labels are also usually provided in every cell or button, above or below the symbol. Even if the user cannot read these labels, they are sometimes necessary for educators or helpers to remember the meaning of the symbols; users or facilitators should be able to turn them on and off.
The tools provided for the facilitator or AAC professional to create new boards or modify existing ones are a principal criterion for the quality and value of a high-tech AAC system. Most systems come with symbolic systems included and allow new symbols, or whole symbolic languages, to be added to their inventory. Easy drag-and-drop techniques for populating boards with symbols, and accurate mechanisms for locating a symbol or meaning, are highly appreciated by AAC professionals and users.

Scanning

Switch access scanning is an indirect selection technique, or access method, used by an AAC user to choose items from the selection set or grid (Beukelman & Mirenda, 2005, p. 92). Unlike direct selection (e.g., typing on a keyboard, or using a finger, head pointer, mouth stick, or eye gaze to select and/or activate the desired item on the screen), with scanning the user can only make selections when the scanning indicator (highlighting or cursor) of the electronic device is on the desired choice. The scanning indicator moves through the items by highlighting each item on the screen (visual scanning), or by announcing each item via voice output (auditory scanning), and the user activates a switch at the right time to select the item (American Speech-Language-Hearing Association, 2004). The speed and pattern of scanning, as well as the way items are selected, are individualized to the physical, visual and cognitive capabilities of the user. The most common reason for using scanning is a physical disability resulting in reduced motor control for direct selection (Radomski & Trombly Latham, 2007). Communication during scanning is slower and less efficient than direct selection, and scanning requires more cognitive skill (e.g., attention) (Hedman, 1990). Nevertheless, scanning allows even those with only one voluntary movement to control assistive technology independently. Depending on the system, users can select scanning time intervals, scanning patterns, and control techniques.

Scanning Patterns

A scanning pattern refers to the way items in the selection set are presented to the user. It allows for easier item selection, as the scanning is systematic and predictable. Three primary scanning patterns exist:
In circular scanning, individual items are arranged in a circle (like the numbers on a clock face) and the scanning indicator moves in a circle to scan one item at a time. Although circular scanning is visually demanding, it is the simplest scanning pattern as it is cognitively easy to master.
In linear scanning, items are usually arranged in a grid and the scanning indicator moves through each item in each row systematically. Although linear scanning is cognitively more demanding than circular scanning, it is relatively straightforward and easy to learn. However, it may be inefficient if there are many items in the set. For example, in a grid consisting of 8 items per row, if the desired item were the 7th item in the 4th row, the scanning indicator would have to scan through 30 undesired items first before reaching the desired item.
In group-item scanning, items are grouped (e.g., by row, column, or other meaningful categories) and the scanning indicator first scans by group. Once the user selects the group that contains the desired item, the scanning indicator scans each item in the selected group. Among the variety of group-item scanning patterns, the most common is row-column scanning, where the items are grouped in rows or columns. Some systems add an extra level with block scanning: for example, initial activation of the switch highlights one-fourth of the items at a time, a second activation begins row-column scanning within the selected quadrant, and a third activation selects the highlighted item.
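The efficiency difference between linear and row-column scanning can be checked with a few lines of arithmetic, reusing the example above (row and column indices assumed 1-based):

```python
def linear_scan_steps(row, col, cols):
    # Items highlighted before reaching (row, col) under linear scanning.
    return (row - 1) * cols + (col - 1)

def row_column_scan_steps(row, col):
    # (row - 1) row highlights first, then (col - 1) items inside the row.
    return (row - 1) + (col - 1)

# The example from the text: 7th item in the 4th row of an 8-column grid.
print(linear_scan_steps(4, 7, cols=8))   # 30 undesired items scanned first
print(row_column_scan_steps(4, 7))       # only 9 with row-column scanning
```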
Scanning Control Techniques Scanning control and item selection occur through switch activation in three general ways:
In step scanning, the user controls each movement (or step) of the scanning indicator through its preset pattern by hitting a switch. To select an item, the AAC user hits a second switch once the indicator reaches the desired item. Because of the constant switch activation, this method might be too fatiguing for certain users, including those with ALS, but it is cognitively easier. In the one-switch version, the switch is activated to move the highlight from one item to the next, and the highlight is left on the desired item for a preset amount of time to select it automatically.
In automatic (regular or interrupted) scanning, the indicator moves through the preset pattern on its own, and item selection occurs when the user hits a switch. Alternatively, an initial switch activation begins a continuously moving highlight from one item to the next; when the desired item is highlighted, the switch is activated again to select it.
In directed (or inverse) scanning, the indicator will only scan in the preset pattern when the user holds down a switch. The highlight pauses at each item for a preset time. Selections are made when the switch is released.
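As an illustration of the second technique, here is a minimal sketch of automatic scanning; highlight and switch_pressed are hypothetical hooks into the display and switch hardware:

```python
import time

def automatic_scan(items, highlight, switch_pressed, interval=1.2):
    # The highlight advances on its own; one switch press selects the item.
    # highlight(item) updates the display (or speaks the item, for auditory
    # scanning); switch_pressed() polls the user's single switch.
    while True:                                # cycle until a selection is made
        for item in items:
            highlight(item)
            deadline = time.monotonic() + interval
            while time.monotonic() < deadline:
                if switch_pressed():
                    return item                # user hit the switch: select
                time.sleep(0.01)
```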
Several variations of scanning patterns and control techniques exist, making use of one to six switches. For example, the six-switch version dedicates four switches to choosing direction in linear step scanning (up, down, left, and right); a fifth switch can control block scanning in step scanning mode, and the last switch is used for selection (of the block first, the group next, and the item last).

Speech synthesis

Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer. A text-to-speech system converts digital text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, into speech (Allen, Hunnicutt, & Klatt, 1987). Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the largest output range, but may lack clarity; for specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output (Rubin, Baer, & Mermelstein, 1981). The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood.

A text-to-speech system (or "engine") is composed of two parts: a front-end and a back-end (van Santen, Sproat, Olive, & Hirschberg, 1997). The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent written-out words; this process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units such as phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that the front-end outputs. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations), which is then imposed on the output speech (Xydas, Spiliotopoulos, & Kouroupetroglou, 2005).

In AAC systems, a speech synthesizer can be built in or installed separately, and quality usually varies with price. Some systems work only with pre-recorded messages and do not support speech synthesis at all. Others give the freedom either to use an average-quality built-in synthesizer or to choose another through a suitable connection interface, such as the Microsoft Speech Application Programming Interface (SAPI).
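The front-end/back-end split can be illustrated with a toy pipeline; the three-word grapheme-to-phoneme table below is invented for the example and is not taken from any real engine:

```python
# Toy two-stage text-to-speech pipeline mirroring the split described above.
G2P = {"where": "W EH R", "is": "IH Z", "ben": "B EH N"}  # grapheme-to-phoneme

def front_end(text):
    # Normalize the text, then assign a phonetic transcription to each word.
    words = text.lower().strip("?!.").split()
    return [G2P.get(w, "<oov>") for w in words]

def back_end(transcriptions):
    # Stand-in synthesizer: a real back end would render audio samples, e.g.
    # by concatenating stored diphone recordings and imposing target prosody.
    return " | ".join(transcriptions)

print(back_end(front_end("Where is Ben?")))  # W EH R | IH Z | B EH N
```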
Word prediction

Word prediction, or completion, is a feature built into some virtual keyboards to speed up word or message selection (Gillette & Hoffman, 1995); it can also be a separate component or application. The user types a letter, and the program offers a choice of words beginning with that letter in a separate dialog. When another letter is selected, another choice of words is offered, beginning with the two letters typed so far, and so on. When the program zeros in on the word the user wants to enter in the text line, the user can select it by number, by scanning, or by direct selection. For example, the user may want to type "where". By typing "w" the user may see "want", "will", "word", "while", and "what". Then, by adding an "h" to the existing "w", the user may see "what", "where", "who", "why", and "whether". By typing the number 2, pressing the F2 function key, or directly selecting the word itself, the user places the word "where" on the line of type with three keystrokes or clicks, often followed by an automatic space.

The word prediction feature can save effort on the part of a person with a severe physical disability. In addition, it can provide cues for spelling words correctly for individuals who have not yet achieved good spelling skills; it could remind a user to place the "e" at the end of "where", for example. A word prediction program can prompt users in several ways. First, it may provide cues for words which it predicts may come next. When the word prediction program includes vocal output, some programs will speak the words in the prediction list if the user selects them, with a mouse click for example; in this way, the user can hear the word list to determine whether it contains the desired word. Second, it may provide a cue for incorrect spelling when no words appear in the list, or when none of the words which do appear match the user's needs. Also, when early writers need ideas, the word prediction list offers suggestions of words to use in writing.

Many researchers are currently investigating ways to improve the simplistic word prediction model (Li & Hirst, 2005; Trnka, Yarrington, McCoy, & Pennington, 2006). These methods include predictions based not just on the characters entered, but also on previous whole words entered and even on the topics a user is talking about. In some systems the user does not even have to input the first letter of the word, as the predictor guesses it based on the previous word entered and the history of the user's input, and this can be done for several consecutive words.
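The "previous word" idea in the last paragraph can be sketched with a toy bigram predictor; the class name and training text are illustrative, not any system's actual model:

```python
from collections import Counter, defaultdict

class BigramPredictor:
    # Completion by prefix, weighted by how often each word followed the
    # previous word in the user's history.
    def __init__(self, history: str):
        self.unigrams = Counter()
        self.bigrams = defaultdict(Counter)
        tokens = history.lower().split()
        for prev, word in zip(["<s>"] + tokens, tokens):
            self.unigrams[word] += 1
            self.bigrams[prev][word] += 1

    def predict(self, prefix, previous=None, n=5):
        # Prefer words seen after `previous`; fall back to overall frequency.
        pool = self.bigrams.get(previous, self.unigrams) if previous else self.unigrams
        ranked = [w for w, _ in pool.most_common() if w.startswith(prefix)]
        return ranked[:n]

p = BigramPredictor("where are you where is ben where are we going")
print(p.predict("w"))                   # ['where', 'we']
print(p.predict("", previous="where"))  # guesses before any letter is typed
```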
Message editor

A simple editor usually displays the messages that users compose. The user's input is accumulated in the editor's dialog box. It displays either symbol or text messages, and provides a simple button interface to correct the displayed message, deleting wrong characters or symbols, or completely clearing the display area for composing a new message. It also usually incorporates the "speak" button of the communication aid. It can either be a component of the AAC application, especially when symbolic communication is involved, or an external application, when, for example, text communication with a virtual keyboard and word prediction is realized. In the latter case it can be any program that accepts text input, such as an e-mail editor or a sophisticated word processor.

Symbol translation

Database/Translation. In ITHACA's example, the database system developed by Viglas and Kouroupetroglou (2002) took into account contemporary software building technologies, particularly regarding database connectivity and Web-enabled access. The implementation of the database was straightforward, using a contemporary Database Management System. Storing and manipulating data was ensured via software tools that add, delete, update and reorganize data in a transparent and secure way. Interlingua concepts can be modified locally in an AAC application by an integrator or the user; mappings can be changed locally without affecting the central online database. Whoever makes the changes or modifications is responsible for avoiding redundancy, and the system can be reset by simply downloading the original database again.
A variety of media, such as speech, text, icons and pictures, video, etc., are associated with languages and symbolic communication systems. Translation is facilitated by relating the corresponding words and symbols via Interlingua, in which all supported concepts are first defined, categorized, marked for synonyms, and indexed. The database can contain any required language and symbolic system, using the proper structures, and making use of ontologies, to best incorporate their properties and values. Concepts are also defined for each language or system, including any associated media representations needed. The Interlingua concepts (Antona, Stephanidis, & Kouroupetroglou, 1999) are mapped onto the elementary communicative blocks of the defined languages (i.e., their words) and symbolic systems (i.e., their graphic symbols) with which they share the same meaning. In this way, a simple first degree of concept-by-concept translation is ensured.

Syntactic parser. Most symbolic systems do not have syntax and grammar similar to natural languages. These systems rely on pre-stored words or messages and do not support generative language production, i.e., users are unable to construct novel well-formed sentences. This is a problem when the communication partner expects to hear typical and intelligible utterances. In the context of the ITHACA framework, we developed a novel technique for expanding spontaneous telegraphic input into well-formed sentences, by adopting a feature-based surface realization for natural language generation (Karberis & Kouroupetroglou, 2002). This implementation currently works only for Bliss-to-Greek, but the approach can be applied to other symbolic systems and natural languages.

Remote communication

Chat. This feature offers synchronous remote communication, using either symbols or natural language. Users who take part in these conversations may not even use the same language (for example, one user may use BLISS and the other English). This can be achieved through a database-driven real-time translation module connected to this component.

E-mail. This component realizes asynchronous remote communication. A mail server manages the entire remote communication functionality. The translation module described in the previous paragraph can also add translation features to the e-mail component.
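The concept-by-concept translation path described under "Database/Translation" amounts to two lookups through Interlingua; the symbol codes and mappings below are toy stand-ins for the ITHACA database relations, not actual Bliss encodings:

```python
# Source symbol -> Interlingua concept string -> target-language word.
BLISS_TO_INTERLINGUA = {"bliss:I": "I", "bliss:want": "WANT", "bliss:water": "WATER"}
INTERLINGUA_TO_GREEK = {"I": "εγώ", "WANT": "θέλω", "WATER": "νερό"}

def translate(symbols):
    # First-degree, concept-by-concept translation through Interlingua.
    concepts = [BLISS_TO_INTERLINGUA[s] for s in symbols]
    return [INTERLINGUA_TO_GREEK[c] for c in concepts]

# Telegraphic Bliss input rendered as Greek words (before syntactic expansion).
print(translate(["bliss:I", "bliss:want", "bliss:water"]))  # ['εγώ', 'θέλω', 'νερό']
```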
AAC SELECTION AND INFORMAL EVALUATION

Several issues need to be considered in the selection of an AAC device or system (Ballinger, 1999). Some of the answers to the questions below are inherent to the device or system itself; others have to come from the manufacturer of the device.
If a device uses batteries, how long will the batteries last before needing to be recharged? A dead battery without a replacement could leave the user without his or her primary means of communication.
How reliable is the device? Users are more likely to encounter breakdowns with technology-dependent devices than with technology-independent devices.
How easily is the device repaired? Sometimes high-tech devices like VOCAs and computers take weeks to be fixed.
How easy is it to make, program, add to, modify or update the device? High technology devices, for example, must be programmed. Many graphical devices, both high and low technology, require overlays or pages to be constructed. Given the number of activities and, therefore, messages in which the user may be involved at work, at home, at school and in the community, updating can require a substantial amount of work.
What is the quality of the speech output of a VOCA? Can it generate different voices, such as a female’s or a child’s? Not only is it crucial that partners are able to understand the speech output of the device, but users are often very sensitive to the quality and type of voice that is being used to represent them. The wrong voice could mean refusal to use the device.
How portable is the device or system? Can it mount on a wheelchair? Can a young child handle it independently? Keep in mind that users should be able to communicate at all times, even while walking or otherwise moving from place to place, playing, riding in the car, etc. Children are also small and may have difficulty handling a large or heavy device whether they have physical disabilities or not. This underscores the importance of developing a multimodal AAC program, in which different AAC devices or systems are used in different circumstances.
How sturdy is the device or system? Disabled users can be very hard on equipment, including AAC devices.
How expensive is the device? The expense must be looked at in terms of both money and time. High technology devices can cost thousands, while low technology ones can often be homemade. Yet, the time required to construct all the overlays or vocabulary pages necessary for many different activities in different environments may be so great that the high tech device becomes more cost effective. In addition, it is important to consider how difficult it would be to replace a device if it were lost, stolen or irreparably broken.
Can the device accommodate direct selection or scanning as needed? Direct selection is when the user is able to directly indicate a message, for example, by pushing a button, pointing or looking at a selection. When a user is unable to use direct selection, message items must be presented sequentially until the user indicates a choice, i.e., scanning.
Can the system be used independently, or does it require the assistance of a partner? A system is independently used if the user is able to produce messages without help. For example, if the user directly selects the message, or is able to activate and interrupt an electronic scanner then he or she does not require the help of another person. A system requires the assistance of a partner if the partner is in control of scanning and awaits a signal from the user indicating message choice. Examples of this are if the partner recites the available choices (auditory scanning), or points from one picture to another on a picture board (visual scanning).
Can the system be used over distances? Can it be used if the partner is not looking? A picture board or sign language necessitates the partner being close enough to see the picture or sign being indicated. An eye gaze system is even more demanding, requiring the partner to be positioned so that he or she can tell what the AAC user is looking at. On the other hand, a user who can vocalize, use speech, use a VOCA, clap loudly, etc., is able to get the attention of a partner some distance away who is not looking or paying attention.
Does the system provide feedback (i.e., does it let the user know whether the right selection was made or not)? Many VOCAs offer the user several kinds of feedback; for example, when a message button is pushed a light may go on, a beep may be heard, and, of course, the message is spoken aloud. All this informs the user that a button was pushed successfully, and what the message on the button was. On the other hand, if a user is pointing at a picture on a communication board, he or she may not know whether or not the correct picture is being indicated until the partner responds. Feedback allows the user to self-correct independently when necessary.
How rapidly can communication occur? The speed of communication plays a large role in conversational quality. Interactions between AAC users and non-users tend to be imbalanced, with non-users dominating conversations, and users primarily in the role of respondent. Studies
have shown that one of the main stumbling blocks to equalizing their standing is the speed with which the AAC user can converse (Calculator, 1997; Venkatagiri, 1995). One way a user may be able to increase conversational speed is by utilizing more than one mode of communication at the same time, for example using speech and gestures whenever possible, and relying on a VOCA only as necessary.
How difficult is it for the user to learn how to operate/use the device? How difficult is it to learn the symbol system? Often there is a trade-off between ease of learning and system flexibility for both devices and symbol systems. For example, a VOCA with only three buttons may be very easy to figure out, but it offers limited opportunity for growth. Among symbols, pointing to tangible objects may be easy to learn, but is less flexible and convenient than graphic symbols.
How much vocabulary can be made available at one time? For example, a computer-based system may be able to store thousands of messages, or a picture board can be made with many pages. On the other hand, a system based on tangible objects will be highly limited due to the size of the objects.
How comprehensible is the system to partners? VOCAs, for example, output regular speech, but sign language or other symbol systems can be like a foreign language and must be learned by partners.
How viable is the system in different kinds of weather and at various times of day? It is obvious that rain and darkness can be issues, but computer screens and LCD screens can be difficult to read in bright sunlight too.
Can the system accomplish other activities besides AAC? In particular, can the system accommodate writing as well? A computer, for example, is very flexible in this regard. For many users, a writing device is essential.
Is there capacity within the system for a user to develop, without it being too complicated or too expensive? In selecting a system, it is necessary to ensure that the user is capable of using it effectively right now, but also that there is room to accommodate advancement. The cost of the system comes into play, too, since systems, electronic ones in particular, are often not expected to last forever. Regarding VOCAs, one rule of thumb is to expect the device to last approximately three to five years.
Case study: qualitative evaluation of ITHACA AAC applications

To confirm the breadth of software that can be produced with the ITHACA framework, and as a proof of concept, we conducted a number of demonstrations with real users. Combining components, we assembled a range of customized AAC applications addressing various user needs and communication requirements. We briefly summarize our observations regarding the experiences of the users, their family members, and their teachers. Our goal is not to provide a formal evaluation of the resulting systems, but to give a sense of how the systems were received by the individuals for whom they were designed.

Four users (referred to as PD, JK, AT, and LH) were selected from a special education and rehabilitation center; all were children monitored in their special school environment, and the appropriate communication aids were selected following the main guidelines proposed in (Woltosz, 1988). All participants were diagnosed with cerebral palsy, with symptoms ranging from mild to very severe. All users had severe speech problems, and for PD and AT, initial intellectual ability was difficult to evaluate due to the lack of communication. None of the users had ever used computer-based AAC before. The aim was to help all four users communicate in their Greek-speaking environment, firstly at school and secondly at home. Some of the users' characteristics are summarized in Table 6.
Table 6 also summarizes the features of the personalized AAC systems assembled for each participant. All configurations were set up in cooperation with their therapists, facilitators and families; their comments and requests for improvements were incorporated into the systems. Screenshots of the final graphical user interfaces of the four applications are illustrated in Figure 5 and Figure 6.

User | PD | JK | AT | LH
Sex | Male | Male | Female | Female
Age | 6 | 13 | 11 | 8
Motor disability | Severe: can use only his left hand with great difficulty | Mild: can use both hands with difficulty | Severe: can only control head movement with great difficulty | Moderate: can use her right hand
Intellectual disability | Mild | Moderate | Severe | Mild
Speech production | Non-existent | Non-intelligible | Non-existent | Intelligible, but with very low volume
Initial communication | Gazing at objects, sounds for yes/no | Pointing to objects, use of symbol cards | Gazing at symbol cards for yes, no, pain | Speech
AAC input language | PCS | BLISS | MAKATON | Natural Greek
Input device | Five switches | Touchscreen | Puff switch | One switch
Input method | Directed scanning | Direct selection | Automatic scanning | Automatic scanning
Input components | 8x8 symbol board, Dialog bar, Emergency bar | 6x4 board, Dialog bar, Emergency bar | 3x2 symbol board, Dialog bar, Emergency bar | QWERTY Greek virtual keyboard, 8x8 word board, Emergency bar
Intermediate components | Symbol editor, Chat, E-mail, Syntactic parser | Symbol editor, Chat, E-mail, Syntactic parser | Symbol editor, Syntactic parser | Text editor
Output | Speech synthesis, e-mail message, chat message | Speech synthesis, e-mail message, chat message | Speech synthesis | Speech synthesis, printed text

Table 6: Overview of the four users of ITHACA AAC applications
Figure 5: PCS (user PD), and Bliss (user JK) communication aids.
Figure 6: MAKATON (user AT), and Natural Greek (user LH) communication aids.

The users' therapists declared that, in all four cases, little progress had been made in past years using traditional no-tech methods, and progress had more or less reached a dead end. After three months of using the ITHACA-based AAC aids, each user's therapist completed a questionnaire that focused on the following aspects: Communication (McNaughton & Lindsay, 1995); Cognitive abilities (Todman, Elder, & Alm, 1995); Behavior (Remington, 1994); Social integration (Bedrosian, 1997); and Installation, upgrade, maintenance and commercial value (Newell, 1987). The last part of the questionnaire aimed to investigate how the AAC system's life cycle was affecting the final users and their facilitators and families. Efforts were made to involve them in the selection, installation, debugging and modification procedures, and their comments were requested about all these phases.

Observations

Although the demonstration period was relatively short (three months), the study and analysis of the answers given by the facilitators led to the following observations:

Vocabulary: By the end of the testing period, the users' vocabularies had grown roughly tenfold for user PD, by half for user JK, and fourfold for user AT (Table 7).
Furthermore, users' messages were now intelligible because the system "spoke" for them. Therapists provided positive feedback, especially when comparing the outcomes with those of traditional methods (and the users' initial vocabularies). This progress shows that, for the cases of motor and speech impairment with mild to severe intellectual disability studied here, and for the specific ages (6-13), symbolic systems of interpersonal communication combined with synthetic speech are effective concept-learning tools.
User | PD | JK | AT | LH
Initial vocabulary | 10 PCS symbols | ~100 BLISS symbols | 3 MAKATON symbols | Rich (Greek)
Month 1 | +16 concepts | +9 concepts | +2 concepts | -
Month 2 | +31 concepts | +15 concepts | +4 concepts | -
Month 3 | +42 concepts | +26 concepts | +6 concepts | -
Total (after testing period) | 99 concepts | 150 concepts | 12 concepts | No apparent change
Table 7: Concepts/symbols attained during the demonstration period

Training time: For symbol users, training time was very important. All three months of the study were considered training time, and the progress made was directly related to the cumulative time each user spent with his or her communication aid. These users had previously attained the symbols of their initial vocabulary (using paper symbol cards), but were not familiar with computer interaction or with the synthetic speech output corresponding to the symbols/concepts they had previously used. In the first week of using the system they became accustomed to the new form of the symbols they had previously learned, and started learning new ones.

Device: The answers of educators and families to the open-ended questions showed that the use of the desktop computer as the communication device was welcomed and positive in all cases. The participants' educators rated the possibility of producing synthetic speech and the ability to correct errors as highly significant. Furthermore, the computer-based AAC system itself encouraged all users, facilitators and families to develop extra communication skills, and improved their psychological condition.

Feedback: The importance of the direct participation of therapists, facilitators and family in configuring the user interface was clear; in many cases they gave insightful directions and requests to the developers. This feedback helped the developers to better understand the needs of the users. During the demonstration period, modifications were made to the users' vocabulary, as well as to features like colors, sizes, labels on/off, positions of the components, etc. These frequent changes did not have any negative impact on the users; on the contrary, they renewed their interest in the AAC device, while maintaining their attention and the device's appeal.

Discussion

All observations are in agreement with similar studies (Salminen, 2000; Schlosser, 2003). However, a formal evaluation of the AAC applications was not within the scope of this work, as the focus is on the framework rather than the resulting applications. Although the everyday training sessions at school were quite amusing and pleasant for the users, they could not use the system the rest of the day: the drawback was that the communication aids were installed only on desktop computers at school and not at home. Nevertheless, as the therapists stated, the AAC system was a valuable learning aid. The anticipated cost of the computer and special input devices, given the free open source AAC software, was considered normal for what the
system was offering. Even if some cost were added for including commercial components in the system (such as a commercial Text-to-Speech system), it was considered acceptable by the users' families.

Not much literature is available on the formal evaluation of AAC applications. However, the generic ISO/IEC 25010:2011 international standard models can be used to evaluate AAC software in the same way software quality is evaluated in other domains (International Organization for Standardization, 2011). ISO/IEC 25010:2011, "Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models", defines:

1. A quality in use model composed of five characteristics (some of which are further subdivided into subcharacteristics) that relate to the outcome of interaction when a product is used in a particular context of use. This model is applicable to the complete human-computer system, including both computer systems in use and software products in use.

2. A product quality model composed of eight characteristics (which are further subdivided into subcharacteristics) that relate to static properties of software and dynamic properties of the computer system. The model is applicable to both computer systems and software products.

The characteristics defined by both models are relevant to all software products and computer systems. The characteristics and subcharacteristics provide consistent terminology for specifying, measuring and evaluating system and software product quality. They also provide a set of quality characteristics against which stated quality requirements can be compared for completeness. The scope of the models excludes purely functional properties, but it does include functional suitability. The scope of application of the quality models includes supporting the specification and evaluation of software from different perspectives by those associated with acquisition, requirements, development, use, evaluation, support, maintenance, quality assurance and control, and audit. The models can, for example, be used by developers, acquirers, quality assurance and control staff, and independent evaluators, particularly those responsible for specifying and evaluating software product quality.
REFERENCES
AbleNet, Inc. (2013). Speech Generation Device - SGD Products. (Ablenet, Inc.) Retrieved January 11, 2013, from Ablenet Technology Products and Special Ed Curriculum for Persons with Disabilities: http://www.ablenetinc.com/Assistive-Technology/Communication
Allen, J., Cockerill, H., Davies, E., Fuller, P., Jolleff, N., Larcher, J., . . . Winyard, S. (1992). Augmentative Communication: More Than Just Words. Oxford: ACE Centre.
Allen, J., Hunnicutt, M. S., & Klatt, D. (1987). From Text to Speech: The MITalk system. Cambridge University Press.
American Speech-Language-Hearing Association. (2004). Roles and Responsibilities of Speech-Language Pathologists With Respect to Augmentative and Alternative Communication: Technical Report. Retrieved January 30, 2013, from ASHA: http://www.asha.org/policy/TR2004-00262.htm
Antona, M., Stephanidis, C., & Kouroupetroglou, G. (1999, December). Access to lexical knowledge in interpersonal communication aids. Augmentative and Alternative Communication, 15(4), 269-279.
AppsForAAC. (2012). AAC Apps for Android. Retrieved from http://appsforaac.net/content/aac-appsandroid-0
AppsForAAC. (2013). Retrieved from http://www.appsforaac.net/
ARASAAC. (2013). Aragonese Portal of Augmentative and Alternative Communication. Retrieved from http://www.catedu.es/arasaac/
Attainment Company, Inc. (2013). Assistive Technology. (Attainment Company) Retrieved January 11, 2013, from Attainment Company: http://www.attainmentcompany.com/assistive-technology
Baker, B. R., & Lloyd, L. (2011). Clinical Implications of a Symbol Taxonomy for AAC – Electronic and Manual. Retrieved January 27, 2013, from http://www.letsgoexpo.com/utilities/File/viewfile.cfm?LCID=4575&eID=80000300
Ballinger, R. (1999). AAC Devices and Systems. Retrieved January 25, 2013, from AAC Connecting Young Kids (YAACK): http://aac.unl.edu/yaack/c3.html
Bedrosian, J. (1997, September). Language acquisition in young AAC system users: issues and directions for future research. Augmentative and Alternative Communication, 13(3), 179-185.
Bernstein, D. K., & Tiegerman-Farber, E. (2008). Language and Communication Disorders in Children (6th ed.). New York, NY, USA: Allyn & Bacon.
Beukelman, D. R., & Mirenda, P. (2005). Augmentative and Alternative Communication: Management of Severe Communication Disorders in Children and Adults (3rd ed.). Baltimore, MD, USA: Paul H. Brookes Publishing Co.
Beukelman, D., & Garrett, K. (1988). Augmentative and alternative communication for adults with acquired severe communication disorders. Augmentative and Alternative Communication, 2, 104-121.
Blackstone, S. (2004, August). Upfront. Augmentative Communication News, 16(2), 1-2.
Blackstone, S. W., Williams, M. B., & Wilkins, D. P. (2007, September). Key principles underlying research and practice in AAC. Augmentative and Alternative Communication, 23(3), 191-203.
Bliss, C. K. (1978). Semantography: Blissymbolics (3rd enlarged ed.). Sydney, Australia: Semantography-Blissymbolics Publications.
Bornman, J. (2004). The World Health Organisation's terminology and classification: application to severe disability. Disability and Rehabilitation, 26(3), 182-188.
Calculator, S. (1997). Fostering early language acquisition and AAC use: exploring reciprocal influences between children and their environments. Augmentative and Alternative Communication, 13(3), 149-157.
Castells, M. (2002). The Internet Galaxy: Reflections on the Internet, Business, and Society (Reprint ed.). New York, NY, USA: Oxford University Press.
Cook, A. M., Polgar, J. M., & Hussey, S. M. (2008). Cook & Hussey's Assistive Technologies: Principles and Practice (3rd revised ed.). St. Louis, MO, USA: Mosby Elsevier.
Cumley, G. D., & Swanson, S. (1999, June). Augmentative and alternative communication options for children with developmental apraxia of speech: Three case studies. Augmentative and Alternative Communication, 15(2), 110-125.
DeRuyter, F., McNaughton, D., Caves, K., Bryen, D. N., & Williams, M. B. (2007, January). Enhancing AAC connections with the world. Augmentative and Alternative Communication, 23(3), 258-270.
Dietz, A., & McKelvey, M. (2006, April). Visual Scene Displays (VSD): New AAC Interfaces for Persons With Aphasia. Perspectives on Augmentative and Alternative Communication, 15(1), 13-17.
Doyle, M., & Phillips, B. (2001). Trends in augmentative and alternative communication use by individuals with amyotrophic lateral sclerosis. Augmentative and Alternative Communication, 17(3), 167-178.
D'Souza, D. F., & Wills, A. C. (1999). Objects, components, and frameworks with UML: The catalysis approach. Boston, MA, USA: Addison-Wesley Professional.
DynaVox. (2013). AAC Devices | Augmentative Communication Devices - DynaVox. Retrieved January 13, 2013, from Communication Devices – Speech Devices | DynaVox: http://www.dynavoxtech.com/products/
Emiliani, P. L. (2006, March). Assistive Technology (AT) versus Mainstream Technology (MST): The research perspective. Technology and Disability, 18(1), 19-29.
Farall, J. (2012, November 28). iPhone/iPad apps for AAC. Retrieved from Spectronics - Inclusive Learning Technologies: http://www.spectronicsinoz.com/iphoneipad-apps-for-aac
Flores, M., Musgrove, K., Renner, S., Hinton, V., Strozier, S., Franklin, S., & Hill, D. (2012, June). A Comparison of Communication Using the Apple iPad and a Picture-based System. Augmentative and Alternative Communication, 28(2), 74-84.
Fossett, B., & Mirenda, P. (2009). Augmentative and Alternative Communication. In S. L. Odom, R. H. Horner, M. E. Snell, & J. Blacher, Handbook of Developmental Disabilities (pp. 330-366). New York: Guilford Press.
Free Software Foundation. (2007). The GNU General Public License, 3.0. Retrieved December 10, 2012, from http://www.gnu.org/copyleft/gpl.html
Freeman, S. (2007). The material and social dynamics of motivation: Contributions to open source language technology development. Science Studies, 20(2), 55-77.
Fuller, D., Lloyd, L., & Schlosser, R. (1992). Further development of an Augmentative and Alternative Communication symbol taxonomy. Augmentative and Alternative Communication, 8(1), 67-74.
Gillette, Y., & Hoffman, J. L. (1995). Getting to word prediction: Developmental literacy and AAC. Retrieved January 29, 2013, from Education Development Center, Inc.: http://www2.edc.org/ncip/library/wp/Gillette.htm
Gkemou, M., & Bekiaris, E. (2011). Overview of 1st AEGIS Pilot Phase Evaluation Results. Lecture Notes in Computer Science, 6765, 215-224.
Gross, J. (2010, September). Augmentative and alternative communication: a report on provision for children and young people in England. Retrieved January 19, 2013, from The Communication Trust: http://www.thecommunicationtrust.org.uk/media/12802/aac-page-aac-report-final-23-09-101.doc
Grove, N., & Walker, M. (1990). The Makaton Vocabulary: Using manual signs and graphic symbols to develop interpersonal communication. Augmentative and Alternative Communication, 6(1), 15-28.
Gus Communication Devices Inc. (2013). Speech Packages from Gus Communication Devices, Inc. Retrieved February 2, 2013, from http://www.gusinc.com/2012/Speech_Packages.html
Hawaii Department of Education. (2008). Augmentative and Alternative Communication Handbook. Retrieved January 23, 2013, from Hawaii Public Schools: http://doe.k12.hi.us/specialeducation/assistivetechnology/Resource%20Materials/AAC_Handbook.pdf
Hedman, G. (1990). Rehabilitation Technology. New York: Routledge.
Higginbotham, J. (2011, June). The Future of the Android Operating System for Augmentative and Alternative Communication. Perspectives on Augmentative and Alternative Communication, 20(2), 52-56.
Hopkins, J. (2000, October). Component primer: Laying the foundation. Communications of the ACM, 43(10), 27-30.
Hourcade, J., Pilotte, T. E., West, E., & Parette, P. (2004). A History of Augmentative and Alternative Communication for Individuals with Severe and Profound Disabilities. Focus on Autism and Other Developmental Disabilities, 19(4), 234-244.
Informa Healthcare. (2013). Augmentative and Alternative Communication. Retrieved March 25, 2013, from http://informahealthcare.com/loi/aac/
International Organization for Standardization. (2011, March 1). ISO/IEC 25010:2011 Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. Retrieved from http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35733
ISAAC. (2013). The International Society for Augmentative and Alternative Communication. Retrieved March 25, 2013, from https://www.isaac-online.org/english/home/
Judge, S., & Lysley, A. (2005, November). OATS - Open Source Assistive Technology - a way forward. Communication Matters, 19(3), 11-12.
Judson, A., Hine, N. A., Lundälv, M., & Farre, B. (2005). Empowering disabled users through the semantic web: The concept coding framework, an application of the semantic web. In J. Cordeiro, V. Pedrosa, B. Encarnação, & J. Filipe (Ed.), Proceedings of WEBIST: the 1st International Conference on Web Information Systems and Technologies, Miami, FL, USA, May 26-28, 2005 (pp. 162-167). Setubal, Portugal: INSTICC Press.
Karberis, G., & Kouroupetroglou, G. (2002). Transforming spontaneous telegraphic language to well-formed Greek sentences for Alternative and Augmentative Communication. In I. P. Vlahavas, & C. D. Spyropoulos (Ed.), Methods and Applications of Artificial Intelligence: Proceedings of the 2nd Hellenic Conference on AI, SETN 2002, April 11-12, 2002, Thessaloniki, Greece. Vol. 2308 of Lecture Notes in Computer Science series, pp. 155-166. London, United Kingdom: Springer-Verlag.
Kiernan, C. (1979). Alternatives to speech: A review of research on manual and other forms of communication with the mentally handicapped and other noncommunicating populations. Communication Disorders Quarterly, 3, 65-99.
Kiernan, C., Reid, B., & Jones, L. (1982). Signs and Symbols: Use of Non-vocal Communication Systems. London: Heinemann.
Kollock, P. (1998). The economies of online cooperation: Gifts and public goods in cyberspace. In M. Smith, & P. Kollock (Eds.), Communities in Cyberspace (pp. 220-239). London, United Kingdom: Routledge.
Koul, R., Corwin, M., & Hayes, S. (2005, January). Production of graphic symbol sentences by individuals with aphasia: Efficacy of a computer-based augmentative and alternative communication intervention. Brain and Language, 92(1), 58-77.
Kouroupetroglou, G., & Pino, A. (2001). ULYSSES: A framework for incorporating multi-vendor components in interpersonal communication applications. In Č. Marinček, C. Bühler, H. Knops, & R. Andrich (Ed.), Assistive Technology: Added value to the quality of life: Proceedings of the 6th European Conference for the Advancement of Assistive Technology, AAATE 2001, September 3-6, 2001, Ljubljana, Slovenia. Vol. 10 of Assistive Technology Research series, pp. 55-59. Amsterdam, Netherlands: IOS Press.
Kouroupetroglou, G., & Pino, A. (2002). A new generation of communication aids under the ULYSSES component-based framework. In V. L. Hanson, & J. A. Jacko (Ed.), Proceedings of the 5th International ACM Conference on Assistive Technologies, ASSETS 2002, July 8-10, 2002, Edinburgh, Scotland (pp. 218-225). New York, NY, USA: ACM Press.
Kouroupetroglou, G., Pino, A., & Viglas, C. (2001). Managing accessible user interfaces of multi-vendor components under the ULYSSES framework for interpersonal communication applications. In C. Stephanidis (Ed.), Universal access in HCI: Towards an information society for all, Proceedings of the 9th International Conference on Human-Computer Interaction, HCI International 2001, August 5-10, 2001, New Orleans, LA, USA. 3, pp. 185-189. Mahwah, NJ, USA: Lawrence Erlbaum Associates, Inc.
Kouroupetroglou, G., Viglas, C., Anagnostopoulos, A., Stamatis, C., & Pentaris, F. (1996). A novel software architecture for computer-based interpersonal communication aids. In J. Klaus, E. Auff, W. Kremser, & W. L. Zagler (Ed.), Interdisciplinary Aspects on Computers Helping People with Special Needs: Proceedings of the 5th International Conference on Computers Helping People with Special Needs, ICCHP 1996, Part II, July 17-19, 1996, Linz, Austria (pp. 715-720). Vienna, Austria, and Munich, Germany: Austrian Computer Society - R. Oldenbourg.
Kouroupetroglou, G., Viglas, C., Stamatis, C., & Pentaris, F. (1997). Towards the next generation of computer-based interpersonal communication aids. In G. Anogianakis, C. Bühler, & M. Soede (Ed.), Advancement of Assistive Technology: Proceedings of the 4th European Conference for the Advancement of Assistive Technology, AAATE 1997, September 29-October 2, 1997, Porto Carras, Greece. Vol. 3 of Assistive Technology Research series, pp. 110-114. Amsterdam, Netherlands: IOS Press.
Larsen, G. (2000, October). Component-based enterprise frameworks. Communications of the ACM, 43(10), 24-26.
Li, J., & Hirst, G. (2005). Semantic knowledge in word completion. Assets '05: Proceedings of the 7th international ACM SIGACCESS conference on Computers and accessibility (pp. 121-128). New York: ACM.
Light, J. (1988). Interaction involving individuals using augmentative and alternative communication systems: State of the art and future directions. Augmentative and Alternative Communication, 4(2), 66-82.
Light, J. (1989, June). Toward a definition of communicative competence for individuals using augmentative and alternative communication systems. Augmentative and Alternative Communication, 5(2), 137-144.
Lindsay, G., Dockrell, J., Desforges, M., Law, J., & Peacey, N. (2010, December). Meeting the needs of children and young people with speech, language and communication difficulties. International Journal of Language & Communication Disorders, 45(4), 448-460.
Lloyd, L. L., Fuller, D. R., & Arvidson, H. H. (1997). Augmentative and Alternative Communication: A Handbook of Principles and Practices. New York: Pearson.
Lloyd, L., & Fuller, D. (1986, November). Toward an augmentative and alternative communication symbol taxonomy: A proposed superordinate classification. Augmentative and Alternative Communication, 2(4), 165-171.
Lloyd, L., Quist, R., & Windsor, J. (1990). A proposed augmentative and alternative communication model. Augmentative and Alternative Communication, 6(3), 172-183.
Lundälv, M., & Derbring, S. (2012). AAC Vocabulary Standardisation and Harmonisation. Lecture Notes in Computer Science, 7383, 303-310.
Lundälv, M., & Judson, A. (2005). Concept coding. In S. von Tetzchner, & M. de Jesus Gonçalves (Eds.), Theoretical and methodological issues in research on Augmentative and Alternative Communication (pp. 198-205). Toronto, Canada: ISAAC Press.
Lundälv, M., Hekstra, D., & Stav, E. (1998). Comspec, a Java Based Development Environment for Communication Aids. In I. Placencia Porrero, & E. Ballabio (Ed.), Improving the Quality of Life for the European Citizen: Technology for Inclusive Design and Equality. Vol. 4 of Assistive Technology Research series, pp. 203-207. Amsterdam, Netherlands: IOS Press.
Lundälv, M., Lysley, A., Head, P., & Hekstra, D. (1999). ComLink, an open and component-based development environment for communication aids. In C. Bühler, & H. Knops (Ed.), Assistive Technology on the Threshold of the New Millennium: Proceedings of the 5th European Conference for the Advancement of Assistive Technology, AAATE 1999, November 1-4, 1999, Düsseldorf, Germany. Vol. 6 of Assistive Technology Research series, pp. 174-179. Amsterdam, Netherlands: IOS Press.
Mayer-Johnson. (2012). PCS™ Symbols - Picture Communication Symbols. Retrieved from http://www.mayer-johnson.com/category/symbols-and-photos
Mayer-Johnson. (2013). Low Tech AAC - AAC Devices. Retrieved January 13, 2013, from Special Needs Products - Special Education Software | Mayer-Johnson: http://www.mayer-johnson.com/category/assistive-technology/aac-low-tech
McDougall, S., & Curry, M. (2004). More than just a picture: Icon interpretation in context. Coping with Complexity Workshop (pp. 16-17). University of Bath.
McNaughton, D., Light, J., & Arnold, K. (2002). ‘Getting your wheel in the door’: successful full-time employment experiences of individuals with cerebral palsy who use Augmentative and Alternative Communication. Augmentative and Alternative Communication, 18(2), 59-76.
McNaughton, D., Light, J., & Groszyk, L. (2001). “Don't give up”: Employment experiences of individuals with amyotrophic lateral sclerosis who use augmentative and alternative communication. Augmentative and Alternative Communication, 17(3), 179-195.
McNaughton, S., & Lindsay, P. (1995, December). Approaching literacy with AAC graphics. Augmentative and Alternative Communication, 11(4), 212-228.
Millar, D. C., Light, J. C., & Schlosser, R. W. (2006, April). The Impact of Augmentative and Alternative Communication Intervention on the Speech Production of Individuals With Developmental Disabilities: A Research Review. Journal of Speech, Language, and Hearing Research, 49, 248-264.
Millar, S., & Scott, J. (1998). What is Augmentative and Alternative Communication? An Introduction. In A. Wilson (Ed.), Augmentative Communication in Practice: An Introduction (pp. 3-12). Edinburgh: University of Edinburgh CALL Centre.
Mirenda, P. (2003, July). Toward Functional Augmentative and Alternative Communication for Students With Autism: Manual Signs, Graphic Symbols, and Voice Output Communication Aids. Language, Speech, and Hearing Services in Schools, 34, 203-216.
Murphy, J., Markova, I., Collins, S., & Moodie, E. (1996). AAC systems: obstacles to effective use. International Journal of Language & Communication Disorders, 31(1), 31-44.
Murphy, J., Scott, J., Moodie, E., & McCall, F. (1994). The Role of Communication Support Networks in the Training and Use of AAC Systems by People with Cerebral Palsy. Communication Matters, 8(3), 25-26.
Newell, A. F. (1987). How can we develop better communication aids? Augmentative and Alternative Communication, 3(1), 36-40.
Pino, A. (2008). ITHACA Framework: Open Source Development of Augmentative and Alternative Communication Applications. (National and Kapodistrian University of Athens) Retrieved February 17, 2012, from http://speech.di.uoa.gr/ithaca/
Pino, A., & Kouroupetroglou, G. (2010, June). ITHACA: An Open Source Framework for Building Component-based Augmentative and Alternative Communication Applications. ACM Transactions on Accessible Computing (TACCESS), 2(4), 14.1-14.30.
Poulson, D., & Nicolle, C. (2004, March). Making the Internet accessible for people with cognitive and communication impairments. Universal Access in the Information Society, 3(1), 48-56.
Prentke Romich Company. (2013). AAC and Speech Devices from PRC. Retrieved January 13, 2013, from https://store.prentrom.com/
Rackensperger, T., Krezman, C., McNaughton, D., Williams, M., & D'Silva, K. (2005, September). When I first got it, I wanted to throw it over a cliff: The challenges and benefits of learning technology as described by individuals who use AAC. Augmentative and Alternative Communication, 21(3), 165-186.
Radomski, M. V., & Trombly Latham, C. (2007). Occupational therapy for physical dysfunction (6th ed.). Lippincott Williams & Wilkins.
Raghavendra, P., Bornman, J., Granlund, M., & Björck-Åkesson, E. (2007, December). The World Health Organization's international classification of functioning, disability and health: implications for clinical and research practice in the field of augmentative and alternative communication. Augmentative and Alternative Communication, 23(4), 349-361.
Reichle, J., York, J., Sigafoos, J., & York-Barr, J. (1991). Implementing Augmentative and Alternative Communication: Strategies for Learners with Severe Disabilities. Baltimore, MD, USA: Paul H. Brookes Publishing Co.
Remington, B. (1994, March). Augmentative and Alternative Communication and behavior analysis: A productive partnership? Augmentative and Alternative Communication, 10(1), 3-13.
Royal College of Speech and Language Therapists. (2006). Communicating Quality 3: RCSLT's guidance on best practice in service organisation and provision. Retrieved January 22, 2013, from RCSLT: http://www.rcslt.org/speech_and_language_therapy/standards/CQ3_pdf
Royal National Institute of Blind People. (2008, February). Ambient Intelligence - Paving the way... (J. Gill, Ed.) Retrieved January 25, 2012, from Tiresias: http://www.tiresias.org/cost219ter/ambient_intelligence/Ambient_Intelligence.pdf
Rubin, P., Baer, T., & Mermelstein, P. (1981). An articulatory synthesizer for perceptual research. Journal of the Acoustical Society of America, 70(2), 321-328.
Salminen, A. L. (2000). Daily life with Computer Augmented Communication. Research Report 119, STAKES, Helsinki, Finland.
Sanders, D. A. (1982). Aural Rehabilitation: A Management Model. New York: Prentice Hall.
Savidis, A., & Stephanidis, C. (2006, January). Inclusive development: Software engineering requirements for universally accessible interactions. Interacting with Computers, 18(1), 71-116.
Schlosser, R. W. (2003, March). Roles of speech output in augmentative and alternative communication: Narrative review. Augmentative and Alternative Communication, 19(1), 5-27.
Sclera NPO. (2013). Sclera picto's. Retrieved from http://www.sclera.be/index.php?taal=ENG
Sennott, S., & Bowker, A. (2009, December). Autism, AAC, and Proloquo2Go. Perspectives on Augmentative and Alternative Communication, 18(4), 137-145.
Shah, S. K. (2006, July). Motivation, governance, and the viability of hybrid forms of open source development. Management Science, 52(7), 1000-1014.
Shook, J., & Coker, W. (2006). Increasing the appeal of AAC technologies using VSD’s in preschool language intervention. Proceedings of the 22nd Annual International Technology and Persons with Disabilities Conference. Los Angeles. Retrieved January 26, 2013, from http://www.csun.edu/cod/conf/2006/proceedings/2963.htm
Speech and Accessibility Group, University of Athens. (2013). Mobile Athena. Retrieved from http://speech.di.uoa.gr/mobileATHENA
Spinellis, D., & Szyperski, C. (2004, January/February). How Is Open Source Affecting Software Development? IEEE Software, 21(1), 28-33.
Stephanidis, C. (Ed.). (2001). User Interfaces for All: Concepts, Methods and Tools. Mahwah, NJ, USA: Lawrence Erlbaum Associates.
Straight-Street. (2013). Mulberry Symbols. Retrieved from http://straight-street.org/
Tobii Technology. (2013). Cerebral Palsy and Augmentative and Alternative Communication with Tobii Assistive Technology. Retrieved January 13, 2013, from Tobii - AAC - Assistive Technology and AAC Devices: http://www.tobii.com/en/assistive-technology/global/disabilities/common-disabilities/cerebralpalsy/
Todman, J., Elder, L., & Alm, N. (1995, December). Evaluation of the content of computer-aided conversations. Augmentative and Alternative Communication, 11(4), 229-233.
Trnka, K., Yarrington, D., McCoy, K., & Pennington, C. (2006). Topic Modeling in Fringe Word Prediction for AAC. IUI '06: Proceedings of the 11th international conference on Intelligent user interfaces (pp. 276-278). New York: ACM.
van Santen, J. P., Sproat, R. W., Olive, J. P., & Hirschberg, J. (1997). Progress in Speech Synthesis. Springer.
Vanderheiden, G. C. (1984). High and Low Technology Approaches in the Development of Communication Systems for Severely Physically Handicapped Persons. Exceptional Education Quarterly, 4(4), 40-56.
Vanderheiden, G. C. (2002, November/December). A journey through early augmentative communication and computer access. Journal of Rehabilitation Research and Development, 39(6), 39-53.
Vanderheiden, G. C., & Lloyd, L. (1986). Communication systems and their components. In Blackstone (Ed.), Augmentative communication (pp. 49-161). Rockville, MD: American Speech-Language-Hearing Association.
Venkatagiri, H. S. (1995, November). Techniques for Enhancing Communication Productivity in AAC. American Journal of Speech-Language Pathology, 4, 36-45.
Viglas, C., & Kouroupetroglou, G. (2002). An open machine translation system for augmentative and alternative communication. In K. Miesenberger, J. Klaus, & W. L. Zagler (Ed.), Proceedings of the 8th International Conference on Computers Helping People with Special Needs, ICCHP 2002, July 15-20, 2002, Linz, Austria. Vol. 2398 of Lecture Notes in Computer Science series, pp. 699-706. London, United Kingdom: Springer-Verlag.
von Tetzchner, S., & Martinsen, H. (1992). Introduction to Sign Teaching and the Use of Communication Aids. London, United Kingdom: Whurr Publishers.
WHO. (2001). International Classification of Functioning, Disability and Health (ICF) (1st ed.). Geneva, Switzerland: World Health Organization.
Williams, M. B. (1994). AAC 101: A Crash Course for Beginners. Alternatively Speaking, 1(1).
Woltosz, W. (1988). A proposed model for Augmentative and Alternative Communication evaluation and system selection. Augmentative and Alternative Communication, 4(4), 233-235.
World Wide Web Consortium. (2008). Web Content Accessibility Guidelines, 2.0. Retrieved December 11, 2012, from http://www.w3.org/TR/2008/REC-WCAG20-20081211/
Xydas, G., Spiliotopoulos, D., & Kouroupetroglou, G. (2005, March). Modelling Improved Prosody Generation from High-Level Linguistically Annotated Corpora. IEICE Transactions on Information and Systems, Special Issue on Corpus-Based Speech Technologies, E88-D(3), 510-518.
ADDITIONAL READINGS
Alant, E., & Lloyd, L. L. (2005). Augmentative and alternative communication and severe disabilities: beyond poverty. London: Whurr Publishers Ltd.
Baumgart, D., Johnson, J., & Helmstetter, E. (1990). Augmentative and Alternative Communication Systems for Persons With Moderate and Severe Disabilities. Baltimore: P.H. Brookes Publishing Company.
Koul, R. (2011). Augmentative and Alternative Communication for Adults With Aphasia. Bingley, UK: Emerald Group Publishing Limited.
Schlosser, R. W. (2003). The Efficacy of Augmentative and Alternative Communication. Leiden, Boston and Tokyo: Brill Academic Publishers.
Wendt, O. (2011). Augmentative and Alternative Communications Perspectives: Assistive Technology: Principles and Applications for Communication Disorders and Special Education. Bingley, UK: Emerald Group Publishing Limited.
KEY TERMS AND DEFINITIONS
Disability: An umbrella term covering impairments, activity limitations, and participation restrictions. An impairment is a problem in body function or structure; an activity limitation is a difficulty encountered by an individual in executing a task or action; and a participation restriction is a problem experienced by an individual in involvement in life situations.
Motor disabilities: Disabilities that affect a person's ability to learn or perform motor tasks such as moving and manipulating objects, walking, running, skipping, tying shoes, crawling, sitting, handwriting, and others. To be considered a disability, the problem must cause a person to have motor coordination that is significantly below what would be expected for his or her age, and the problem must interfere with the activities of learning and daily living.
Assistive Technology: Technology used by individuals with disabilities to perform functions that might otherwise be difficult or impossible. Assistive technology can include mobility devices such as walkers and wheelchairs, as well as hardware, software, and peripherals that assist people with disabilities in accessing computers or other information technologies.
Alternative communication: Methods of communication used by persons with no voice capability.
Augmentative communication: The use of aids or techniques that enhance or complement a person's existing voice or verbal skills.
Augmentative and Alternative Communication: The methods of communication used to supplement or replace speech or writing for people with disabilities in the production or comprehension of spoken or written language.