Top Lang Disorders, Vol. 24, No. 1, pp. 18–30. © 2004 Lippincott Williams & Wilkins, Inc.
Augmentative and Alternative Communication and Language

Evidence-Based Practice and Language Activity Monitoring

Katya Hill, PhD

From the Department of Speech and Communication Studies, Edinboro University of Pennsylvania, Edinboro, Pennsylvania. Corresponding author: Katya Hill, PhD, Assistant Professor, Speech and Communication Studies, Edinboro University of Pennsylvania, 102 Compton Hall, Edinboro, PA 16444. E-mail: [email protected]. The author discloses that language activity monitoring was developed through two grants from the National Institutes of Health awarded to Prentke Romich Company, with initial collaboration on the research with the University of Pittsburgh and Edinboro University of Pennsylvania.

The goal of augmentative and alternative communication (AAC) is the most effective communication possible. Speech-language pathologists are obligated to collect data, measure communication, and apply the principles of evidence-based practice (EBP). This article presents a model for EBP that represents how collecting and evaluating performance data supports clinical decisions. The three components of EBP are discussed: (1) field evidence, (2) evidence at the personal level, and (3) clinician experience. Language activity monitoring, or data logging, is offered as one approach to collecting performance data. Software tools such as the AAC Performance Report Tool (PeRT) for reporting language skills and communication competence are described. A clinical example is incorporated to show the application of EBP and performance measurement.

Key words: augmentative and alternative communication, communication competence, data logging, evidence-based practice, language activity monitoring, performance report tool
For individuals with complex communication needs (CCN), their families, and those providing clinical services, the process of achieving communication success may be perceived as an insurmountable challenge. Yet, many people who rely on augmentative and alternative communication (AAC) are testimony to having achieved effective communication and success.
ASHA (American Speech-Language-Hearing Association) has recognized individuals who have achieved the goal of AAC for six consecutive years through the AAC Edwin and Esther Prentke Distinguished Lecturer Series at the annual ASHA convention. At the Pittsburgh Employment Conference for Augmented Communicators, the world's largest gathering of individuals who rely on AAC, conference participants can benefit from sessions presented by individuals relying on AAC systems to discuss their success. These individuals are shining examples of how AAC systems can enhance a person's ability to reach his or her maximum potential in life.

Clinicians are seeking knowledge and skills to apply a structured and scientific approach to AAC assessment and intervention to achieve the most effective communication possible for those being served. By applying the principles of evidence-based practice (EBP) and performance measurement, clinicians can have a dramatic impact on improving the quality of life for individuals with CCN and feel confident that they are providing exemplary AAC services and supports. This article familiarizes clinicians with the definition, components, and steps of EBP and performance measurement to support the AAC assessment and intervention processes.
The basic components of EBP are discussed in terms of building and measuring communication competence with individuals with CCN. Tools and methods to collect data to measure the parameters of communication are introduced. Clinical examples are presented to illustrate application of the principles of EBP and performance measurement in accordance with the ASHA Scope of Practice (2001). Arriving at a performance-based understanding of how language is acquired and generated using an AAC system is a valued outcome of applying these principles.

AAC EVIDENCE-BASED PRACTICE

Principles of EBP

AAC service delivery has been making a rapid shift toward applying the principles of EBP. ASHA has recognized the importance of EBP through the revised Scope of Practice (2001). The Special Interest Division on AAC (SID-12) recognized the importance of applying the principles of EBP in several of the knowledge and skill areas recommended for AAC service provision (2001). Both documents articulate the importance of systematic data collection and outcomes measurement. The expectation of the profession is that clinicians providing AAC clinical services are using instrumentation to collect data and measure outcomes in accordance with the principles of EBP.

The principles of AAC EBP are derived from evidence-based medicine. Evidence-based medicine requires conscientious and judicious use of current best evidence in making decisions about the care of individuals (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). EBP is an approach that promotes the collection, interpretation, and integration of valid, important, and applicable patient-reported, clinician-observed, and research-derived evidence (McKibbon, Wilczynski, Hayward, Walker-Dilks, & Haynes, 1995). EBP places the client's benefits first, as practitioners adopt a process of lifelong learning to apply evidence of direct practical importance to clients (Gibbs, 2003). EBP has three components: (1) evidence at the field (external) level; (2) evidence at the personal (individual) level; and (3) knowledge and skills of the clinician (clinical expertise). The basic steps for applying EBP include: (1) characterizing the person; (2) asking the most appropriate questions; (3) searching for and appraising the field evidence; (4) implementing the intervention; (5) monitoring (measuring) change; and (6) evaluating the results.

Figure 1 shows a model for AAC EBP that illustrates the application of the basic steps and the three EBP components for clinical service delivery (Hill & Romich, 2002a). The systems model starts with the expectation that clinicians thoroughly characterize the person and measure the problem to formulate the best questions to seek evidence.

Figure 1. AAC evidence-based practice model.

Field (external) evidence

AAC evidence at the field level is obtained from clinically relevant and systematic external research. The research is identified and appraised in terms of levels of evidence. Levels of evidence are categories that rank research studies according to hierarchies of strength. Levels of evidence provide a mechanism to help practicing clinicians sort through the volume of literature that may relate to a client's communication disorder.
The highest strength is given to peer-reviewed randomized controlled trials, whereas the lowest strength is given to authoritative writings. This component occurs in the systems model at the point of "Performance of Other Individuals and Research." The process involves the judicious use of current best evidence obtained from clinically relevant research and quantitative performance data from others. External evidence includes the following:
• Research based on individuals who rely on AAC
• Demonstrated communication performance of people who rely on AAC
• Research using able-bodied individuals using AAC
• Other research
• Descriptive studies, expert opinions, and authoritative literature

Evidence at the personal level

AAC evidence at the personal level is contributed by and recorded from the client. This component occurs in the systems model at the points of "Characterize the Individual" and "Measure Performance." Clinicians involve the client in the process by identifying and including the client's values and expectations when making decisions. However, well thought-out and careful clinical planning involves many types of quantitative data collected from the individual. Today, evidence at the personal level includes:
• Logged data from the activity recorded during an individual's use of an AAC system, such as with the language activity monitor
• Data collected using traditional methods of recording and observation, such as audio and video taping and transcription
• Results of surveys or other measurement instruments based on interview and observation
• Anecdotal and other qualitative observations
Formulating EBP questions

To apply EBP, a clinician must learn to formulate appropriate questions (Hooper, Watson, & Biddle, 2002). The process starts with the client and the clinical problem. The clinician must construct a well-built question derived from the information (data) collected from the client. Question formulation requires careful description of the client or problem, identification of the interventions under consideration, and outcomes measurement. Comparison among interventions may also be considered. All three components of EBP combine to evaluate the evidence for the most effective results. Table 1 shows the basic information gathered by the clinician to formulate a well-constructed EBP question.

Using Table 1 to formulate a question, the clinician may ask, "For my adult client with CCN secondary to cerebral palsy using an AAC system, what core vocabulary should I introduce to improve utterance generation?" After formulating the question, the clinician conducts a search through the evidence to answer the question. Several published research studies identify the concept of core and extended vocabulary and the frequency of use of vocabulary by different cohorts according to age and disability. The clinician may start a search by referring to sections in AAC textbooks that discuss core vocabulary or reviewing materials disseminated at a conference. However, original published sources of research are higher levels of evidence. The clinician may want to read the article by Beukelman, McGinnis, and Morrow (1991) on vocabulary selection in AAC. Once familiar with the concepts, research based on adults who rely on AAC may be more appropriate for her client.
Table 1. Components for formulating an evidence-based practice question

Component for Question | Details for Question
Patient/problem | Adult with complex communication needs using an augmentative and alternative communication voice-output device (additional information such as diagnosis, use of language representation methods, name of vocabulary application program, name of device).
Intervention | Use of core vocabulary based on word frequency of adults (additional information to narrow intervention such as speaking tasks, environments, topics).
Comparison, if any | Extended vocabulary (specific to tasks, topics, or environments).
Outcome | Increase number of core words used to generate utterances during language sampling and additional functional communication.
Consequently, the clinician may search and find frequency lists of core vocabulary reported in Beukelman, Yorkston, Poblete, and Naranjo (1984), Balandin and Iacono (1999), or Hill (2001). The expertise of the clinician will determine the appraisal of the evidence and application to clinical decision making for the particular client.

To summarize, in applying the principles of evidence-based practice with her adult client with CCN, the clinician starts by collecting data. Knowing that her client values building vocabulary skills with an AAC system, the clinician measures the client's language and vocabulary skills for baseline data. The clinician uses this evidence at the personal level to formulate a question about AAC vocabulary use and intervention. The search for external evidence leads to several studies on core and extended vocabulary along with vocabulary frequency lists to use during intervention. Based on the clinician's ability to appraise the evidence, she may further narrow the external evidence based on the individual profiles reported in the studies. The next steps will be to apply and monitor the intervention of improving her client's access to core vocabulary using a dedicated speech output AAC system.

The remainder of this article focuses on the tools and methods to support the automated collection of performance data from dedicated speech output AAC systems. Data collection is required for the second component of EBP.
The historic and technical description of the tools and methods is intended to help clinicians gain an appreciation for the recent advances in AAC performance measurement. The information on language activity monitoring can be used to guide clinical practice in routine performance measurement to report language skills and communication competence for the AAC assessment or intervention processes.
LANGUAGE ACTIVITY MONITORING

Development of language activity monitoring

Language activity monitoring (LAM) refers to recording the use of an AAC system for functional communication. More specifically, LAM is a feature in an AAC system that supports the automated recording of events representing how an AAC system is used by an individual to communicate. When the LAM or data logging feature is turned on in a toolbox or feature menu of an AAC device, a logfile is automatically recorded and saved internally. Clinicians easily appreciate this convenient, time-efficient, and cost-effective method of collecting a language sample by recording a client's activity on an AAC system.
This method of data collection may be used during assessment trials or for therapy.

Early efforts to automate logging of data were developed as research tools. The early research approaches vary in the degree of comprehensiveness of the logfile data. Higginbotham (1989) logged the text output of the Autocom and then segmented the text into utterances based on videotape recordings. Miller, Demasco, and Elkins (1990) proposed a software-based approach to track usage. Venkatagiri (1994) and Koester and Levine (1994) used logging in investigating word prediction performance. However, because they logged data primarily for research, data logging for monitoring language activity did not transfer initially to clinical practice.

Work on an LAM for clinical use started in 1998 at the School of Health and Rehabilitation Sciences at the University of Pittsburgh as a solution to provide for the transmission of performance data for AAC telerehabilitation services (Hill & Romich, 1998). Work continued, and a Small Business Innovation Research phase I grant was awarded to Prentke Romich Company by the National Institutes of Health in 1999 to study the feasibility of LAM. Simultaneous to the LAM research, the Rehabilitation Engineering and Research Center on Communication Enhancement, or AAC-RERC, was developing logging tools to support research efforts (Higginbotham, Lesher, & Moulton, 1998).
LAM procedures

The basic procedures for LAM are illustrated in Figure 2. LAM functions or tools are available for digitized and synthesized voice output AAC systems. LAM logfile data can be collected in three ways: (1) using a built-in feature in an AAC system; (2) using an external LAM device; or (3) using a software application that allows a PC to function as a LAM, such as the Universal LAM (U-LAM). Regardless of the LAM implementation, the logfile saved in the AAC system must be uploaded into a computer for analysis.

To successfully use LAM, the clinician must become familiar with basic procedures for setting the internal features of the specific AAC system, turning on the data logging function, and uploading the logfile to a computer. These procedures would be similar to learning how to change the voice in the device or backing up the memory of a client's customized vocabulary. Although the built-in data logging feature may not be available on all AAC systems, software tools are available that allow logfile recording from any digitized or synthesized speech output AAC device.
Figure 2. Language activity monitoring procedures.
Consequently, LAM procedures have universal application with all speech output AAC devices manufactured.

To facilitate widespread application of language activity monitoring, a standard logging format is required (Hill & Romich, 1999a). A universal logfile format was proposed by the AAC-RERC (Higginbotham & Lesher, 1999). Logfile data must be periodically uploaded into a computer for editing, coding, analysis, and report generation. The LAM logfile format is used here to illustrate the basic standard. Additional information can be added to this basic format to enhance the analysis process. The LAM format includes a mnemonic to identify how a word is generated. The universal logfile format has 22 fields that can be selected to analyze various human-machine interface factors. The LAM logfile for the built-in feature starts with a header that includes a privacy statement, the device sending the information, the current software version, and the date.

The standardization of the data-reporting format is necessary for processing or analysis programs to accept data from different sources. A standard protocol ensures data that are: (1) readily interpretable by clinicians and researchers using nontechnical comparison procedures and (2) suitable for use by standard language analysis programs. For a language event, the protocol is:

hh:mm:ss "Any continuous text or speech that is transmitted by the AAC device"

where hh:mm:ss represents the time of day in hours, minutes, and seconds using the 24-hour clock format. For example, consider the excerpt of a raw LAM logfile in Table 2.
Table 2. Example of language activity monitoring logfile without mnemonic codes after time stamp

### CAUTION ###
The following data represent personal communication
Please respect privacy accordingly
PC Language Activity Monitor
Version 1.00Beta
Prentke Romich Company
YY-MM-DD = 00-07-10

16:26:05 "It's"
16:26:08 "faster"
16:26:14 "than"
16:26:41 "sp"
16:26:42 "e"
16:26:45 "l"
16:26:45 "l"
16:26:46 "i"
16:26:47 "n"
16:26:47 "n"
16:26:48 "g"
16:26:49 " "
16:26:58 "everything"
16:27:02 "out"
16:27:05 "which"
16:27:08 "is"
16:27:11 "what"
16:27:14 "I"
16:27:19 "used"
16:27:22 "to do"
The sample logfile (Table 2) was recorded using a prototype of the U-LAM (Hill & Romich, 2003b) by an adult with cerebral palsy who used Unity on a Deltatalker (Prentke Romich Company, Wooster, OH). The one utterance is 13 words in length. The word "spelling" is the only word accessed using spelling as a language representation method (LRM); all other words in the utterance were selected using semantic compaction, or Minspeak (Baker, 1982). The time stamps could be used to calculate communication rate for the utterance. When available, the language events are recorded in text. This is generally the case when the AAC system supports alphabet-based language representation methods (spelling, word prediction, and orthographic word selection). Alternatively, the language events may be audio and will need to be transcribed into text before analysis can be done.
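To illustrate how such a logfile can be processed, the following sketch (in Python, for illustration only) parses events in the basic hh:mm:ss "text" protocol and computes an average communication rate in words per minute. The logfile excerpt is adapted from Table 2, the 13-word count comes from the analysis above, and the regular expression and function names are illustrative assumptions rather than part of any published LAM tool.

```python
import re

# Basic LAM protocol: a 24-hour time stamp followed by the quoted language
# event, for example:  16:26:05 "It's"
LAM_EVENT = re.compile(r'^(\d{2}):(\d{2}):(\d{2})\s+"(.*)"$')

# Excerpt adapted from the Table 2 logfile (illustrative only).
RAW_LOG = """\
16:26:05 "It's"
16:26:08 "faster"
16:26:14 "than"
16:26:41 "sp"
16:26:42 "e"
16:26:45 "l"
16:26:45 "l"
16:26:46 "i"
16:26:47 "n"
16:26:48 "g"
16:26:49 " "
16:26:58 "everything"
16:27:02 "out"
16:27:05 "which"
16:27:08 "is"
16:27:11 "what"
16:27:14 "I"
16:27:19 "used"
16:27:22 "to do"
"""


def parse_events(raw):
    """Return (seconds_since_midnight, text) pairs for each logged language event."""
    events = []
    for line in raw.splitlines():
        match = LAM_EVENT.match(line.strip())
        if match:
            hh, mm, ss, text = match.groups()
            events.append((int(hh) * 3600 + int(mm) * 60 + int(ss), text))
    return events


def average_rate_wpm(events, word_count):
    """Average communication rate in words per minute over the logged span."""
    elapsed_seconds = events[-1][0] - events[0][0]
    return word_count * 60.0 / elapsed_seconds if elapsed_seconds > 0 else float("nan")


events = parse_events(RAW_LOG)
# The article's analysis of this utterance counts 13 words.
print(f"{len(events)} language events spanning {events[-1][0] - events[0][0]} s")
print(f"average communication rate: {average_rate_wpm(events, word_count=13):.1f} words/min")
```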
Analysis of LAM data

Tools and methods to analyze and report logfile data are available. Logfile data provide information about the communication and language performance of the individual using an AAC system. To increase the clinical benefit from the language information contained in the logfiles, several tools and methods to analyze and report the data are available. Various researchers have proposed transcription methods and codes for use with language sampling conducted with individuals who rely on AAC systems (Soto, 2000; Higginbotham et al., 2001). Hill (2000) proposed specific codes for logfile data use. Higginbotham and Cornish (2002) developed transcription codes that reflect the multimodal aspects of AAC system communication. These methods of analysis mostly involve manual manipulation of the recorded data supplemented with some type of database or language analysis software to arrive at summary measures to report performance. These initial steps in reporting AAC performance are based on language sampling procedures and techniques and the reporting of summary measures of language skills and communication competence.

The clinical application of AAC performance measurement has its roots in the development of computerized language analysis tools. Miller and Chapman (1983) began to develop the Systematic Analysis of Language Transcripts (SALT) at the University of Wisconsin. SALT (Miller & Chapman, 2000) provides clinicians with numerous measures to report the linguistic skills of a child that can be compared with a research database. Manual segmentation of the LAM logfile events into utterances to create a SALT transcript allows the clinician to report any of the language-based performance measures possible with SALT. The ACQUA (Augmented Communication Quantitative Analysis) program was developed as a research tool for the AAC-RERC. ACQUA was designed because many statistical measures for establishing performance standards in augmentative communication cannot be easily derived using generic analysis programs (Lesher, Moulton, Rinkus, & Higginbotham, 2000). ACQUA supports the universal and LAM logfile formats and can be downloaded from a web site. Some manual manipulation of the logfile is required by the researcher to calculate the selected performance measure.
SALT and ACQUA do not provide a report summarizing the results automatically for the clinician. PeRT (Performance Report Tool) was developed by the AAC Institute (Romich et al., 2003; Hill & Romich, 2003a). PeRT provides the automated analysis of utterances based on LAM data to generate an AAC performance report (Figure 3) of 17 summary measures and vocabulary frequency lists (Hill & Romich, 2001; Romich & Hill, 1999).

The tools and methods available to support LAM or data logging offer clinicians a range of choices for collecting and reporting performance using an AAC system. The clinician may simply upload and save a logfile to get a better impression of the client's activity. Language samples may be transcribed using SALT, ACQUA, or PeRT to record specific performance measures considered relevant for charting progress. Finally, clinicians could generate an AAC Performance Report using PeRT to complement a funding request or support the requirements of the Individuals with Disabilities Education Act. The clinician interested in increasing core vocabulary with her client may have used a performance report initially to support a third-party funding request by comparing vocabulary performance among various trial AAC systems.
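Before summary measures can be computed, logged events must be grouped into utterances. SALT, ACQUA, and PeRT apply their own segmentation and coding rules; the Python sketch below uses only a simple pause-threshold heuristic to illustrate the idea, and the threshold value and function names are assumptions rather than features of those tools.

```python
from typing import List, Tuple

Event = Tuple[int, str]  # (seconds since midnight, language event text)


def segment_utterances(events: List[Event], pause_threshold: int = 10) -> List[List[Event]]:
    """Group language events into candidate utterances.

    Illustrative heuristic only: a new utterance starts whenever the gap
    between consecutive events exceeds `pause_threshold` seconds.
    """
    utterances: List[List[Event]] = []
    current: List[Event] = []
    previous_time = None
    for time_s, text in events:
        if previous_time is not None and time_s - previous_time > pause_threshold and current:
            utterances.append(current)
            current = []
        current.append((time_s, text))
        previous_time = time_s
    if current:
        utterances.append(current)
    return utterances


# Example: two events separated by a long pause fall into separate utterances.
print(len(segment_utterances([(100, "hello"), (160, "good"), (162, "morning")])))  # -> 2
```

Candidate utterances produced this way would still be reviewed and edited by the clinician before transcription or report generation.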
APPLYING EBP AND PERFORMANCE MEASUREMENT

Patient-oriented evidence that matters

EBP places an emphasis on valid, reliable evidence rather than intuition, anecdote, and authority. In addition, EBP emphasizes POEMs (Patient-Oriented Evidence that Matters) rather than just Disease-Oriented Evidence (Dollaghan, 2002). The emphasis on POEMs means that research and case studies report outcome measures that clinicians and clients care about and that the results have the potential to change the way clinicians practice.
Figure 3. Sample performance report.
Given the importance placed on authentic evidence for clinical decision making, language activity monitoring is a valuable tool to report performance and measure outcomes about AAC system competence. The tools and methods associated with automated performance measurement supplement and support traditional methods of observation needed to capture the multimodal aspects of communication. However, specific summary measures needed to understand how language is generated using an AAC system can only be captured using data logging. These summary measures have been identified, operationally defined, and are consistent and comparable across studies. Quantitative data on AAC performance are emerging in studies that show the application of these summary measures to EBP and outcomes measurement.

Language-based performance

Summary measures based on logfile data from language samples have their foundation in well-documented traditional measures of language performance. Many of the utterance-based and word-based summary measures found in the AAC performance report (Figure 3) should be familiar to clinicians routinely analyzing language samples or using SALT.
These summary measures may include the number of complete utterances, mean length of utterance, number of different word roots, or total number of words used in the sample.

To return to the EBP example of the clinician interested in core vocabulary, an AAC performance report could be used for comparison. Use of LAM allows the clinician to generate a word frequency list to compare the core vocabulary used by her client with the core vocabulary from one or more of the studies found as evidence. In addition, Hill (2001) reported performance indices on several SALT summary measures for the adults in the study. Consequently, an AAC performance report would provide several related statistics to chart progress related to vocabulary activity with the AAC system.
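The comparison described here can be sketched in a few lines of Python. The word lists below are placeholders only; in practice, the sample would come from the client's transcribed logfile and the reference core list from one of the studies cited as evidence, and none of the function names belong to PeRT or SALT.

```python
from collections import Counter


def word_frequency(sample_words):
    """Frequency list for a transcribed language sample, most frequent first."""
    return Counter(word.lower() for word in sample_words).most_common()


def core_vocabulary_percentage(sample_words, core_list):
    """Percentage of sample tokens that appear on a reference core-vocabulary list."""
    core = {word.lower() for word in core_list}
    tokens = [word.lower() for word in sample_words]
    if not tokens:
        return 0.0
    return 100.0 * sum(token in core for token in tokens) / len(tokens)


# Placeholder data for illustration only; real comparisons would use the
# client's logfile transcript and a published core-vocabulary list.
sample = "it's faster than spelling everything out which is what i used to do".split()
core_reference = ["i", "it", "is", "to", "do", "what", "out", "than", "which", "used"]

print(word_frequency(sample)[:5])
print(f"{core_vocabulary_percentage(sample, core_reference):.1f}% core vocabulary")
```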
Many clinicians are concerned with vocabulary use by preschoolers rather than adults, but the same principles of EBP apply. Vocabulary lists generated using PeRT could be compared with studies specific to preschoolers (Banajee, Dicarlo, & Stricklin, 2003; Beukelman, Jones, & Rowan, 1989). However, evidence based on young children who rely on AAC may be considered more relevant evidence by the clinician. Tullman and Hurtubise (2000) reported on the vocabulary frequency of a preschooler using an AAC system. The study relied on LAM tools to collect performance data on mean length of utterance, number of different words, and total number of words collected during intervention. The authors indicated that LAM data can be easily and objectively collected on actual word usage with a device. In addition to a list of the most frequently used words, Hill (2003) reported the frequency of spontaneous utterances using core vocabulary versus prestored messages by a preschooler during play therapy sessions. The clinician working on core vocabulary to increase utterance generation could use this evidence to monitor progress with techniques drawn from Cumley and Swanson (1999). Similar progress charting is possible for any clinician using LAM tools and methods.

Although many summary measures could be reported based on traditional methods of observation and reporting rather than relying on data logging, several measures require both the time stamp and either the LRM mnemonic codes or knowledge of how the AAC system is being used in order to be reported. These measures include how vocabulary is accessed based on the available LRMs of single meaning pictures, alphabet-based methods, and semantic compaction. In the example of the logfile provided earlier in this article, the clinician can easily spot that spelling was used to generate the word "spelling." Another AAC system may have allowed the individual to finish that word through word prediction, which would be indicated in the logfile. If the word occurred with a high frequency, the clinician and client may consider storing the word as a single meaning picture or an icon sequence. Several studies have reported the frequency of use for the various language representation methods (Hill, 2001; Hill & Romich, 1999b; Sturm et al., 2002).
For example, the performance indices for adults who rely on AAC systems that support the three language representation methods indicate that semantic compaction is used with significantly greater frequency for communication (Figure 4). A clinician working with an adult client learning to use an AAC system that supports semantic compaction can compare the results of intervention to the research evidence. If the profile of the client matches the profiles of the research participants, the clinician can expect the client to be using semantic compaction around 90% of the time after three months. For the clinician working on core vocabulary, a vocabulary list identifying the LRM of each word may be helpful to determine if an individual is making appropriate use of the language application program.

One of the most important outcomes expressed by individuals who rely on AAC is to speak as fast as possible. The time stamp of a logfile provides an accurate and reliable method to measure rate of communication or access. Average and peak communication rates have been defined and reported (Romich & Hill, 2000b). Evidence on communication rates achieved by various LRMs is available (Hill, Romich, & Holko, 2001). In addition, comparisons of communication rates among the LRMs and the variables influencing communication have been investigated (Hill, Romich, & Cook, 2002; Hill, Romich, & Holko, 2001). Selection rate (Romich, Hill, & Spaeth, 2001) and a rate index (Hill & Romich, 2002b) also have been defined and reported as clinically useful. Investigations on improving character prediction (Lesher & Rinkus, 2002) and comparing scanning arrays (Lesher, Moulton, Higginbotham, & Alsoform, 2002) provide evidence for clinical application.

Clinicians working with clients with amyotrophic lateral sclerosis can use the summary measures for communication and selection rates to monitor the changing needs of this client population. Cook and Hill (2003) reported that frequent system reconfiguration of selection technique features was needed to maintain client access to the AAC system using one-switch scanning through the course of the amyotrophic lateral sclerosis disease process.
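As a sketch of how logged time stamps can support this kind of longitudinal monitoring, the following Python fragment computes a simplified words-per-minute rate for each logged session and flags sessions whose rate has fallen well below the client's best observed rate. The session data, the 25% threshold, and the function names are hypothetical illustrations, not the published definitions of average or peak communication rate or selection rate.

```python
def session_rate_wpm(word_count, start_seconds, end_seconds):
    """Simplified average communication rate (words per minute) for one logged session."""
    elapsed = end_seconds - start_seconds
    return word_count * 60.0 / elapsed if elapsed > 0 else 0.0


def flag_rate_decline(session_rates, decline_fraction=0.25):
    """Return session labels whose rate has dropped more than `decline_fraction`
    below the best observed rate (an arbitrary illustrative rule, not a clinical standard)."""
    best = max(session_rates.values())
    return [label for label, rate in session_rates.items()
            if rate < best * (1.0 - decline_fraction)]


# Hypothetical per-session summaries: (label, words produced, first and last time stamps in seconds).
sessions = [
    ("session 1", 410, 32400, 34200),
    ("session 2", 360, 32400, 34200),
    ("session 3", 240, 32400, 34200),
]

rates = {label: session_rate_wpm(words, start, end) for label, words, start, end in sessions}
for label, rate in rates.items():
    print(f"{label}: {rate:.1f} words/min")
print("review selection settings for:", flag_rate_decline(rates))
```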
Figure 4. Language representation model comparisons.
In summary, the feasibility of reporting logfile data has been well established, the tools are available, the clinical value has been demonstrated, and a research base is building for clinicians to use LAM data routinely to support EBP.

AAC COMMUNICATION COMPETENCE

A performance-based understanding of communication competence has long been a basic aim of AAC clinicians. EBP provides the principles to move toward a more comprehensive, empirically based description of communication competence. An evidence-based description of AAC communication competence allows clinicians and clients to measure progress in achieving the most effective communication. In 1989, Light articulated a definition for domains of communicative competence to determine how individuals who rely on AAC best facilitate their daily interactions. These four domains describe communication competence associated with linguistic, operational, strategic, and social skills. However, until recently, clinicians did not have convenient automated tools and resources to support measuring AAC device activity in working toward building AAC communication competence.
In addition, the definition of specific performance measures based on information that is only provided by logfile time stamps helps to define the domains. Consequently, clinicians today can target more specific outcomes for services. Table 3 identifies eight domains of communication competence accompanied by performance measures to facilitate clinicians identifying specific outcomes (Hill, 2001). Table 3 shows that domain skills can be measured using either traditional methods of observation or LAM tools. By using traditional methods of recording or U-LAM keyboard entry with automatic time stamping, clinicians may record other modes of communication, such as use of gestures and verbalizations, in addition to partner assistance for communicating. LAM tools provide reporting and monitoring related to AAC device activity. Collecting and discussing the client's perceptions of success is also an important consideration for clinicians.

Now the reader should return to the clinician working on core vocabulary. The targeted domain of communication competence is linguistic content skills. In working on increasing use of core vocabulary, the clinician collects language samples to chart use of core vocabulary using an AAC system. Over the intervention period, the clinician will discuss results with her client. As part of the process, the clinician will record how the client evaluates progress toward improving communication.
Table 3. Domains of communication competence and performance measures

Domain | Measures Using Traditional Methods of Observation | Measures Using Language Activity Monitoring*
Language representation | Frequency of communication methods; modes (gestures, verbal) | Frequency of language representation methods using augmentative and alternative communication system
Linguistic: form | Mean length of utterance, type token ratio | Same
Linguistic: content | Number of different words; total number of words, core and extended vocabulary | Same
Access | Accuracy | Selection rate in bits/second
Operational | Use of non-language features | Rate index in words/bit, frequency of features
Strategic: rate | | Average communication rate, peak communication rate
Strategic: construction | Frequency of multimodal utterances | Frequency of spontaneous utterances and prestored messages
Social | Frequency of communication functions | |

*U-LAM allows for recording of other modes and comments by partner.
To achieve the most effective communication possible, clinicians need to build client skills based on the complex, dynamic nature of interactive communication. EBP always begins and ends with the client.
Collecting, analyzing, and reporting performance data begins with the client and ends with the client being an "adept user of language, able to adapt language style to different social contexts and different communication goals" (Shames, Wiig, & Secord, 1998). The most effective AAC clinicians evaluate services based on the routine use of performance measurement in accordance with the principles of EBP. To achieve the goal of AAC, the most effective communication possible, understanding and measuring language performance is essential, and it is not an undue burden for clinicians who hold the interests of their clients paramount.
REFERENCES

American Speech-Language-Hearing Association (ASHA). (2001). Scope of practice. Rockville, MD: Author.
American Speech-Language-Hearing Association (ASHA). (2001). Augmentative and alternative communication: Knowledge and skills for service delivery (III419). Rockville, MD: Author.
Baker, B. (1982). Minspeak: A semantic compaction system that makes self-expression easier for communicatively disabled individuals. Byte, 7, 186–202.
Balandin, S., & Iacono, T. (1999). Crews, wusses, and whoppas: Core and fringe vocabularies of Australian meal-break conversations in the workplace. Augmentative and Alternative Communication, 15, 95–109.
Banajee, M., Dicarlo, C., & Stricklin, S. B. (2003). Core vocabulary determination for toddlers. Augmentative and Alternative Communication, 19, 67–73.
Beukelman, D. R., Jones, R., & Rowan, M. (1989). Frequency of word usage by nondisabled peers in integrated preschool classrooms. Augmentative and Alternative Communication, 5, 243–248.
Beukelman, D. R., McGinnis, J., & Morrow, D. (1991). Vocabulary selection in augmentative and alternative communication. Augmentative and Alternative Communication, 7, 171–677.
Beukelman, D., Yorkston, K., Poblete, M., & Naranjo, C. (1984). Frequency of word occurrence in communication samples produced by adult communication aid users. Journal of Speech and Hearing Disorders, 49, 360–367.
Cook, S., & Hill, K. (2003, June). AAC performance data for an individual with amyotrophic lateral sclerosis. Proceedings of the RESNA 2003 Annual Conference [CD-ROM]. Atlanta, GA: RESNA.
Cumley, G. D., & Swanson, S. (1999). Augmentative and alternative communication options for children with developmental apraxia of speech: Three case studies. Augmentative and Alternative Communication, 15, 110–125.
Dollaghan, C. (2002, November). An evidence-based approach to clinical practice in communication disorders. Paper presented at the 2002 American Speech-Language-Hearing Association Annual Convention, Atlanta, GA.
Gibbs, L. B. (2003). Evidence-based practice for the helping professions: A practical guide with integrated multimedia. Pacific Grove, CA: Thompson Brooks/Cole.
Higginbotham, D. J. (1989). The interplay of communication device output mode and interaction style between nonspeaking persons and their speaking partners. Journal of Speech and Hearing Disorders, 54, 320–333.
Higginbotham, D. J., & Cornish, J. L. (2002, November). Transcription & coding methods for studying interactive discourse. Poster presented at the 2002 American Speech-Language-Hearing Association Annual Convention, Atlanta, GA.
Higginbotham, D. J., & Lesher, G. W. (1999, June). Development of a voluntary standard format for augmentative communication device logfiles. Proceedings of the RESNA 1999 Annual Conference (pp. 25–27). Arlington, VA: RESNA Press.
Higginbotham, D. J., Lesher, G., & Moulton, B. (1998). R4-Evaluating and enhancing communication rate, efficiency and effectiveness. Communication enhancement AAC-RERC: Engineering advances for communication enhancement in the new millennium. National Institute on Disability and Rehabilitation Research. CFDA: 84.133E.
Higginbotham, J., Soto, G., Muller, E., Mathy, P., Hill, K., Cornish, J., & Hunt-Berg, M. (2001, November). Transcription & analysis of augmentative communication: New visions & technologies. Paper presented at the 2001 American Speech-Language-Hearing Association Annual Convention, New Orleans, LA.
Hill, K. (2000, August). AAC performance monitoring: Issues of transcript preparation and analysis. Paper presented at the 2000 ISAAC Research Symposium, Washington, DC.
Hill, K. (2001). The development of a model for automated performance measurement and the establishment of performance indices for augmented communicators under two sampling conditions. Dissertation Abstracts International, 62(05), 2293 (UMI No. 3013368).
Hill, K. (2003). The use of AAC performance data to support evidence-based practice with a preschooler. Proceedings of the RESNA 2001 Annual Conference [CD-ROM]. Atlanta, GA: RESNA Press.
Hill, K., & Romich, B. (1998, October). Language research needs and tools in AAC. Paper presented at the Biomedical Engineering Society 1998 Annual Conference, Cleveland, OH.
Hill, K., & Romich, B. (1999a). A proposed standard for AAC and writing system data logging for clinical intervention, outcomes measurement, and research. Proceedings of the RESNA 1999 Annual Conference (pp. 22–24). Arlington, VA: RESNA Press.
Hill, K. J., & Romich, B. A. (1999b). Identifying AAC language representation methods used by persons with ALS. Poster presented at the 1999 American Speech-Language-Hearing Association Annual Convention, San Francisco, CA.
Hill, K. J., & Romich, B. A. (2001). A language activity monitor for supporting AAC evidence-based clinical practice. Assistive Technology, 13, 12–22.
Hill, K. J., & Romich, B. A. (2002a). AAC evidence-based clinical practice: A model for success. Edinboro, PA: AAC Institute Press.
Hill, K., & Romich, B. (2002b). The AAC rate index in clinical practice. Proceedings of the RESNA 2002 Annual Conference (pp. 81–83). Arlington, VA: RESNA Press.
Hill, K. J., & Romich, B. A. (2003a). PeRT (Performance Report Tool): A computer program for generating the AAC Performance Report [Computer software]. Edinboro, PA: AAC Institute.
Hill, K. J., & Romich, B. A. (2003b). U-LAM (Universal Language Activity Monitor): A computer program for collecting AAC language samples [Computer software]. Edinboro, PA: AAC Institute.
Hill, K., Romich, B., & Cook, S. M. (2002, November). AAC performance: The elements of average communication rate. Presentation at the 2002 American Speech-Language-Hearing Association Annual Convention, Atlanta, GA.
Hill, K. J., Romich, B. A., & Holko, R. (2001, November). AAC performance: The elements of communication rate. Presentation at the 2001 American Speech-Language-Hearing Association Annual Convention, New Orleans, LA.
Hooper, C., Watson, L., & Biddle, A. (2002, November). Using evidence-based research in speech-language pathology. Paper presented at the 2002 American Speech-Language-Hearing Association Annual Convention, Atlanta, GA.
Koester, H. H., & Levine, S. P. (1994). Modeling the speed of text entry with a word prediction interface. IEEE Transactions on Rehabilitation Engineering, 2(3), 177–187.
Lesher, G. W., Moulton, B. J., Higginbotham, D. J., & Alsoform, B. (2002, June). Acquisition of scanning skills: The use of an adaptive scanning delay algorithm across four scanning displays. Proceedings of the RESNA 2002 Annual Conference (pp. 75–77). Arlington, VA: RESNA Press.
Lesher, G., Moulton, B. J., Rinkus, G., & Higginbotham, D. J. (2000). A universal logging format for augmentative communication. Proceedings of the 2000 CSUN Conference, Los Angeles, CA: CSUN. Retrieved February 14, 2003, from http://www.csun.edu/cod/conf/2000/proceedings/0088Lesher.htm
Lesher, G. W., & Rinkus, G. J. (2002, June). Leveraging word prediction to improve character prediction in a scanning configuration. Proceedings of the RESNA 2002 Annual Conference (pp. 90–92). Arlington, VA: RESNA.
Light, J. (1989). Toward a definition of communicative competence for individuals using augmentative and alternative communication systems. Augmentative and Alternative Communication, 5, 137–144.
McKibbon, K. A., Wilczynski, N., Hayward, R. S., Walker-Dilks, C., & Haynes, R. B. (1995). The medical literature as a resource for evidence based care. Working paper from the Health Information Research Unit, McMaster University, Ontario, Canada.
Miller, J. F., & Chapman, R. S. (1983). Systematic Analysis of Language Transcripts (SALT). San Diego: College Hill Press.
Miller, J. F., & Chapman, R. S. (2000). SALT: A computer program for the Systematic Analysis of Language Transcripts [Computer software]. Madison, WI: University of Wisconsin.
Miller, L. J., Demasco, P. W., & Elkins, R. A. (1990, June). Automatic data collection and analysis in an augmentative communication system. Proceedings of the RESNA 1990 Annual Conference (pp. 99–100). Arlington, VA: RESNA Press.
Romich, B. A., & Hill, K. J. (1999, June). AAC core vocabulary analysis: Tools for clinical use. Proceedings of the RESNA 1999 Annual Conference (pp. 67–69). Arlington, VA: RESNA Press.
Romich, B. A., & Hill, K. J. (2000a). Language activity monitor feasibility study. National Institute on Deafness and Other Communication Disorders of the National Institutes of Health (NIH Grant No. 1 R43 DC 4246-01).
Romich, B. A., & Hill, K. J. (2000b). AAC communication rate measurement: Tools and methods for clinical use. Proceedings of the RESNA 1999 Annual Conference (pp. 59–60). Arlington, VA: RESNA Press.
Romich, B., Hill, K., Seagull, A., Ahmad, N., Strecker, J., & Gotla, K. (2003). AAC Performance Report Tool. Proceedings of the RESNA 2001 Annual Conference [CD-ROM]. Atlanta, GA: RESNA Press.
Romich, B. A., Hill, K. J., & Spaeth, D. M. (2001, June). AAC selection rate measurement: A method for clinical use based on spelling. Proceedings of the RESNA 2001 Annual Conference (pp. 52–54). Arlington, VA: RESNA Press.
Romski, M., & Sevcik, R. A. (1996). Breaking the speech barrier: Language development through augmented means. Baltimore: Paul H. Brookes Publishing Co., Inc.
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312, 71–72.
Shames, G. H., Wiig, E., & Secord, W. A. (1998). Human communication disorders: An introduction (5th ed.). Boston: Allyn and Bacon.
Soto, G. (2000, June). Multimodal transcription in AAC: Theoretical and methodological considerations. Paper presented at the 2000 ISAAC Research Symposium, Washington, DC.
Sturm, J. M., Miller, J. F., Foley, B., Erickson, K. A., Yoder, D. E., Finch, A. M., Hill, K. J., & Lytton, R. (2002, November). Language and literacy issues in AAC: Future directions. Paper presented at the 2002 American Speech-Language-Hearing Association Annual Convention, Atlanta, GA.
Tullman, J., & Hurtubise, C. (2000, June). Language activity monitoring on a young child using a VOCA. Proceedings of the Ninth Biennial ISAAC Conference. Washington, DC: ISAAC.
Venkatagiri, H. S. (1994). Effect of sentence length and exposure on the intelligibility of synthesized speech. Augmentative and Alternative Communication, 10, 96–104.