JSLHR
Research Article
Directional Microphone Hearing Aids in School Environments: Working Toward Optimization

Todd A. Ricketts,a Erin M. Picou,a and Jason Galsterb

aVanderbilt University Medical Center, Nashville, TN
bStarkey Hearing Technologies, Eden Prairie, MN

Correspondence to Todd A. Ricketts: [email protected]

Editor: Nancy Tye-Murray
Associate Editor: Kathleen Cienkowski
Received March 10, 2016; Revision received June 24, 2016; Accepted July 10, 2016
DOI: 10.1044/2016_JSLHR-H-16-0097

Disclosure: The authors have declared that no competing interests existed at the time of publication.

Journal of Speech, Language, and Hearing Research, Vol. 60, 263–275, January 2017. Copyright © 2017 American Speech-Language-Hearing Association
Purpose: The hearing aid microphone setting (omnidirectional or directional) can be selected manually or automatically. This study examined the percentage of time the microphone setting selected using each method was judged to provide the best signal-to-noise ratio (SNR) for the talkers of interest in school environments.

Method: A total of 26 children (aged 6–17 years) with hearing loss were fitted with study hearing aids and evaluated during 2 typical school days. Time-stamped hearing aid settings were compared with observer judgments of the microphone setting that provided the best SNR on the basis of the specific listening environment.

Results: Despite training for appropriate use, school-age children were unlikely to consistently manually switch to the microphone setting that optimized SNR. Furthermore, there was only fair agreement between the observer judgments and the hearing aid setting chosen by the automatic switching algorithm. Factors contributing to disagreement included the hearing aid algorithm choosing the directional setting when the talker was not in front of the listener or when noise arrived only from the front quadrant and choosing the omnidirectional setting when the noise level was low.

Conclusion: Consideration of listener preferences, talker position, sound level, and other factors in the classroom may be necessary to optimize microphone settings.
An ideal school environment fosters the learning and development of students and prepares them for the future. However, considerable data have shown that school environments are commonly acoustically disadvantaged for the types of listening and communicating that facilitate learning, particularly for children with hearing loss (e.g., Crandell & Smaldino, 2000; Crukley, Scollie, & Parsa, 2011; Valente, Plevinsky, Franco, Heinrichs-Graham, & Lewis, 2012). Some investigators have estimated that school-age children spend nearly 80% of their time listening to speech in noise during the school day (Crukley et al., 2011). In addition, ancillary rooms, such as music or computer rooms, are likely to have even less favorable signal-to-noise ratios (SNRs; Crukley et al., 2011). Poor SNRs significantly reduce speech recognition for children both with and without hearing loss (Bradley & Sato, 2008; Gravel, Fausel, Liskow, & Chobot, 1999; Neuman, Wroblewski, Hajicek, & Rubinstein, 2010; Valente et al., 2012), although children with hearing loss may be even more susceptible to the effects of poor SNRs than their peers with typical hearing (Boothroyd, Eran, & Hanin, 1996; Crandell & Smaldino, 2000).

In addition to background noise, reverberation and speaker distance can adversely affect listening conditions (Bistafa & Bradley, 2000; Crandell & Smaldino, 1994; Crandell, Smaldino, & Flexer, 1995; Crukley et al., 2011; Finitzo-Hieber & Tillman, 1978; Leavitt & Flexer, 1991; Nábělek & Pickett, 1974). These factors act to increase listening difficulties in the classroom, particularly in the presence of poor SNRs (Bradley & Sato, 2008; Crandell & Smaldino, 1994). Although appropriate hearing aid fittings can improve audibility for speech signals (e.g., Keidser, Dillon, Flax, Ching, & Brewer, 2011; Scollie, 2008; Scollie et al., 2005; Scollie, Seewald, Moodie, & Dekok, 2000), overcoming poor SNRs remains a significant problem (Bradley & Sato, 2008; Gravel et al., 1999; Neuman et al., 2010; Valente et al., 2012). However, microphone-based technologies that improve the SNR, such as directional microphones, have considerable potential to improve speech recognition in noise because they are more sensitive to sounds arriving from one direction (typically the front) compared with sounds arriving from other directions (typically the side or back).
Omnidirectional microphones are approximately equally sensitive to sounds originating from all directions. When the noise and speech are spatially separated, directional microphones have the potential to significantly improve speech recognition in noise for adults (Ricketts & Hornsby, 2003, 2006; Wu & Bentler, 2012) and children (Gravel et al., 1999; Hawkins, 1984; Kuk, Kollofski, Brown, Melum, & Rosenthal, 1999; McCreery, Venediktov, Coleman, & Leech, 2012; Ricketts, Galster, & Tharpe, 2007; Ricketts & Picou, 2013; Thibodeau, 2010).
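To illustrate why spatial separation matters, the sketch below compares an idealized cardioid (front-facing directional) pattern with an omnidirectional response. This is a simplification we supply for illustration; real hearing aid directional patterns are adaptive and vary with frequency, venting, and head placement, so the numbers are only indicative.

```python
import math

def cardioid_gain_db(theta_deg: float) -> float:
    """Relative sensitivity (dB) of an idealized cardioid pattern;
    0 deg = directly in front of the listener, 180 deg = directly behind."""
    amplitude = 0.5 * (1.0 + math.cos(math.radians(theta_deg)))
    return 20.0 * math.log10(max(amplitude, 1e-6))

# An omnidirectional microphone is ~0 dB at every angle; the cardioid keeps
# frontal speech at 0 dB but attenuates sounds from the sides and rear
# (about -6 dB at 90 deg and a deep null at 180 deg).
for angle in (0, 90, 135, 180):
    print(f"{angle:>3} deg: {cardioid_gain_db(angle):7.1f} dB re: front")
```

With speech at 0 degrees and noise concentrated behind the listener, such a pattern attenuates the noise but not the speech, improving the SNR; if the listener turns away from the talker, the same pattern attenuates the speech instead, the directional decrement discussed in the next section.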
Environmental Factors That Affect Directional Benefit

Despite this potential for benefit, considerable interactions between benefit with directional microphones and specific school listening environments are expected. In a previous study, we examined classroom listening environments to determine which microphone settings were most likely to result in the best SNR for children in school environments (Ricketts, Picou, Galster, Federman, & Sladen, 2011). The current investigation expands our earlier work to evaluate methods for switching between directional and omnidirectional microphone settings in typical school environments. For the purposes of this article, we define optimal as the microphone setting—directional or omnidirectional—that provides the best SNR.

To determine the optimal setting, it is necessary to consider how specific environmental factors may interact with directional benefit. Directional benefit, defined as the difference in speech recognition performance with a directional setting relative to an omnidirectional setting, is affected by a variety of environmental factors, including the orientation of the directional microphone relative to the talker and competing noise, presence of environmental noise, degree of reverberation, and distance between the talker and listener.

First, one of the most critical factors that can influence directional benefit is the orientation of the microphone relative to the target and competing signals. When noise surrounds the listener, the energy from the talker of interest must arrive from a direction of high microphone sensitivity—typically the front quadrant. With typical directional microphone sensitivity patterns, if the listener is not facing the primary talker, a directional setting will attenuate the target signal, impair speech recognition, and lead to a directional decrement (Ching et al., 2009; Henry & Ricketts, 2003; Ricketts et al., 2007; Ricketts & Picou, 2013). Data show that younger and older children are capable of accurately orienting to a primary talker, providing the potential for directional benefit in school environments (Ching et al., 2009; Ricketts & Galster, 2008). However, these same data demonstrate that children do not always face the speaker of interest and are accurately oriented no more than 30% to 50% of active listening time. This limitation was present when there was a single talker of interest and is expected to be even more prevalent in listening environments with multiple talkers of interest. In addition, these studies examined children fitted with omnidirectional hearing aids.
Aided localization accuracy in people who wear hearing aids is grossly similar for directional and omnidirectional settings, although it is somewhat poorer for the omnidirectional setting at some azimuths (Carette, Van den Bogaert, Laureyns, & Wouters, 2014). As a consequence, similar orientation findings are expected when using a directional setting, but this has yet to be confirmed.

Second, the level of environmental noise can influence directional benefit. Significant directional benefit has been measured for a wide range of noise levels (Picou, Aspell, & Ricketts, 2014; Ricketts & Henry, 2002; Wu & Bentler, 2012); however, in order to attenuate the competing noise, the energy from competing signals must not arrive only from the same angle as the talker. That is, either the competing noise must surround the listener or the primary energy must arrive from angles of minimal microphone sensitivity, typically to the side of or behind a listener (Ricketts, 2000b).

Reverberation is the third factor that has the potential to limit directional benefit. In reverberant environments, noise sources comprise energy that comes directly from the source but also reflected energy that arrives from a variety of directions (Beranek, 1954; Blauert, 1997). Therefore, in rooms with longer reverberation times, the reflected energy from competing noise sources may arrive from an angle at which the directional microphone is most sensitive. This can occur even if the noise originates from an angle at which the directional microphone provides the greatest attenuation, limiting the degree to which the total noise energy is attenuated. Indeed, previous investigations have shown that longer reverberation times decrease directional benefit (Hawkins & Yacullo, 1984; Madison & Hawkins, 1983; Ricketts, 2000b; Ricketts & Hornsby, 2003; Wu & Bentler, 2012).

Last, the distance between the talker and the listener has the potential to limit directional benefit, especially outside the critical distance. Critical distance is defined as the distance from the source at which the intensity levels of the direct and reflected signal energies are equal. When the critical distance is exceeded, reflected energy dominates, and overall intensity no longer decreases with increasing distance according to the inverse square law. If all else is constant, increased reverberation will result in an increase in the intensity of the reflected energy and a decrease in critical distance. Long reverberation times are therefore associated with relatively short critical distance values (Peutz, 1971). Once a listener is well beyond the critical distance, the angles of arrival for sources of interest and competing sources are increasingly dispersed, thereby limiting the magnitude of directional benefit (Ricketts, 2000b; Ricketts & Hornsby, 2003).

In summary, we can predict that a directional setting will provide a better SNR than an omnidirectional setting in a hearing aid when (a) the listener is surrounded by competing signals or the competing signal is concentrated in the rear hemisphere, (b) the listener is facing the target signal, (c) noise is present in the environment, and (d) the listener is near the sound source of interest relative to the critical distance (decreasing distance is required with increasing reverberation; see the worked example below).
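To make the critical-distance relationship in point (d) concrete, a standard diffuse-field approximation (not stated in this article, so treat the formula and the classroom values below as illustrative) relates critical distance to room volume and reverberation time:

```latex
% Diffuse-field approximation: critical distance d_c (m) for an
% omnidirectional source, with room volume V (m^3) and reverberation
% time T60 (s):
d_c \approx 0.057\sqrt{\frac{V}{T_{60}}}

% Hypothetical classroom: V = 200 m^3, T60 = 0.6 s:
d_c \approx 0.057\sqrt{\frac{200}{0.6}} \approx 1.0\ \text{m}
```

Under these assumed values, a student seated even a few meters from the talker is already well beyond the critical distance and listens mostly to reflected energy, consistent with the point that distance and reverberation jointly limit directional benefit.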
Previous reports suggest that adults are often in listening situations where directional benefit is likely. Walden, Surr, Cord, and Dyrlund (2004) reported that adults spend approximately 60% of their time listening in noise, and the most common listening experiences included those with the target signal near and in front of the listener (approximately 80% of listening scenarios). Furthermore, an interfering noise source located only in front of the listener was rare.

… (g) estimated reverberation time (… [> 1500 ms]), and (h) microphone setting providing the better SNR for the target signal (directional optimal [D], omnidirectional optimal [O], microphone setting would not affect SNR [D/O], or no talker present).

Observers were trained to report the microphone setting that would lead to the most positive SNR only on the basis of the acoustics of the listening environment and their best guess of the listener's intended target. It is important to note that observers were trained to focus on what the student appeared to actually be listening to rather than what might be judged to be the most important information. For example, if the student appeared to be listening to the instructor while at the same time whispering to a friend, the friend and the teacher were both deemed to be sound sources of interest. Because it is difficult to determine what a child may be paying attention to at any point in time, observers were trained to err on the side of overinclusion. Any talker who was judged to potentially be audible at the position of the listener was deemed a talker of interest. The only exception to this instruction was when it was clear that the child was intently focusing on a talker of interest and was appearing to strain to understand and ignore other talkers. This occasionally happened in the lunchroom environment when the sound level was very high.
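To make the coding scheme concrete, one way the event-based observer log might be represented for analysis is sketched below. The field names and category labels are illustrative stand-ins inferred from the factors described in this article, not the study's actual score sheet.

```python
from dataclasses import dataclass

@dataclass
class ObserverEntry:
    """One row of the observer log. A new timestamped row was written
    whenever any coded factor changed, as described in the text."""
    time_s: float          # time of day, in seconds since midnight
    environment: str       # e.g., "classroom", "special", "lunch", "hallway", "recess"
    talker_position: str   # e.g., "front", "side", "back", "multiple"
    noise_position: str    # e.g., "front quadrant", "surrounding", "none"
    noise_level: int       # observer rating on a 4-point scale
    reverberation: str     # e.g., "short" or "long" (> 1500 ms)
    optimal_setting: str   # "D", "O", "D/O", or "no talker present"

# Example: a hypothetical lunchroom entry at 11:32 a.m. with a talker
# behind the child and surrounding noise.
entry = ObserverEntry(41520.0, "lunch", "back", "surrounding", 4, "short", "O")
```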
Because the general directional sensitivity patterns of directional and omnidirectional hearing aid microphones on a listener's head were known, it was possible to confidently estimate the microphone setting that led to the best SNR on the basis of these environmental factors for most listening environments (Ricketts & Dittberner, 2002). Observers were encouraged to indicate that either microphone setting may be optimal (D/O) if they were unsure which would lead to the best SNR on the basis of the listening environment or if either setting was equally appropriate. As a consequence, when a D/O rating was indicated, the microphone setting was not expected to significantly affect speech recognition ability. This would include, for example, environments in which a single speech signal and a single competing noise both arrived from in front of the listener. Although the choice would not affect communication in such cases, it could be argued that the omnidirectional setting had the potential to better monitor other sounds arriving from all other angles and therefore may be the preferable microphone setting. Therefore, for analysis purposes, ratings of D/O were reclassified as omnidirectional optimal.

Each of the coding factors was accompanied by an indication of the time of day. Whenever there was a change in any of the factors, observers entered a new time of day and made notes about all the relevant factors. For example, if a child was in the lunchroom talking to a classmate in front of him but then started to listen to a child behind him instead, the observer would note the time of the change and make notes about all eight factors of interest listed previously. This procedure was applied each time any of the eight factors changed. All factors and times were logged with a pen or pencil on a coded paper score sheet to facilitate data collection. These data were then entered into an electronic spreadsheet for analyses after the day of observation concluded.

In order to ensure consistency and reliability, all observers were comprehensively trained. Training consisted of a brief lecture followed by hands-on training and practice. During the lecture, the observers were instructed about the purpose of the project, factors of interest, and tips for observing students in busy classrooms. In addition, observers were instructed about use of the score sheet for data collection and subsequent data entry. Following the lecture, observers accompanied the instructor(s) to practice making notes and observing. During this hands-on training, feedback was provided regarding observer judgments, and any trainee questions were addressed. Specific examples of signal and noise position, noise level, and reverberation, and of how they would interact with the microphone setting that optimized SNR, were provided. Observers completed at least one training session but often attended multiple sessions to refresh their training. The first time an observer went to complete an observation, he or she was always accompanied by an experienced observer.

In order to determine consistency of ratings, two trained observers completed ratings for the first five participants.
At the end of the school day, the ratings from these two observers were compared, and any discrepancies in ratings were resolved. These comparisons revealed only a single instance of disagreement, which was related to a judgment of whether a noise originating from the side of the participant should be coded as originating from the front or the rear quadrant. Because of the lack of discrepancies, all further observational data were collected by only a single observer during any one observational period.
Hearing Aid Data Logging and Analysis

In order to log the hearing aid setting throughout an observation day, the hearing aids were hardwired via programming cables to a wireless communicator (NoahLink; Himsa, Copenhagen, Denmark) worn by the child. Depending on the age of the participant, the NoahLink was secured to the child's body through one of several techniques, including a neck lanyard, an arm strap, a custom pocket sewn into a T-shirt, or an existing pocket on a shirt or blouse. The child and observer worked together to choose the most appropriate technique. Most children wore the communicator on a lanyard around the neck; the arm band was the second most popular technique.

The information from the hearing aid was wirelessly streamed via the NoahLink to a Windows smartphone held by the observer (QTek S200; Microsoft, Redmond, WA). An information packet containing data was sent every 0.25 s via custom software. These time-stamped hearing aid data included the microphone setting and hearing aid classifier information, including probability estimates for each of four classifications (speech in quiet, speech in noise, noise only, or music). For example, a noisy environment with many talkers might be classified at some instant in time as having probabilities reflecting 0% speech in quiet, 70% speech in noise, 10% noise only, and 20% music. Overall, these data would be used to classify the environment as speech in noise because that environment classification was assigned the highest probability.

The time-stamped hearing aid data were then processed so they could be compared to the observer data. This step was necessary because the hearing aids logged information every 0.25 s, whereas observers logged information every time the listening environment changed. To evaluate the accuracy of the two switching methods (automatic and manual), the observer ratings were compared to the hearing aid output at every moment in time throughout the observation period. Four outcomes regarding optimal microphone setting were possible: (a) observer rating and hearing aid setting were both omnidirectional (O/O), (b) observer rating and hearing aid setting were both directional (D/D), (c) the hearing aid used the omnidirectional setting but the observer chose directional (O/D), and (d) the hearing aid used the directional setting but the observer chose omnidirectional (D/O). The percentage of time each of these four outcomes occurred was calculated. Then, the environmental factors and the percentage of time the hearing aid classified the environment for each of the four categories (speech in quiet, speech in noise, noise only, and music) were analyzed to examine patterns in agreement and disagreement across the four outcomes.
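As an illustration of this alignment-and-tally step, the sketch below matches each 0.25-s hearing aid sample to the most recent observer entry and counts the four outcomes. The function and data layout are our own illustrative construction, not the study's analysis code; D/O observer ratings are folded into omnidirectional optimal, per the reclassification rule described earlier.

```python
import bisect

def tally_outcomes(obs_times, obs_ratings, aid_times, aid_settings):
    """Match each 0.25-s hearing aid sample ('O' or 'D') to the most
    recent observer entry at or before it, then count the four outcomes
    (hearing aid setting / observer rating): O/O, D/D, O/D, D/O."""
    counts = {"O/O": 0, "D/D": 0, "O/D": 0, "D/O": 0}
    for t, aid in zip(aid_times, aid_settings):
        i = bisect.bisect_right(obs_times, t) - 1
        if i < 0:
            continue  # sample precedes the first observer entry
        obs = obs_ratings[i]
        if obs == "D/O":
            obs = "O"  # either setting acceptable: reclassified as omnidirectional
        if obs not in ("O", "D"):
            continue  # "no talker present": not active communication time
        counts[f"{aid}/{obs}"] += 1
    total = sum(counts.values()) or 1
    return {k: 100.0 * v / total for k, v in counts.items()}

# Hypothetical logs: two observer entries one second apart, hearing aid
# samples every 0.25 s.
obs_t, obs_r = [32400.0, 32401.0], ["D", "O"]
aid_t = [32400.0 + 0.25 * k for k in range(8)]
aid_s = ["O", "O", "D", "D", "D", "D", "O", "O"]
print(tally_outcomes(obs_t, obs_r, aid_t, aid_s))  # 25% in each outcome
```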
Data from both hearing aids initially were independently compared to observer judgments. However, preliminary results revealed agreement outcomes within 1% across the right and left hearing aids. Therefore, data from the right hearing aid were used for all analyses reported here.
Results

Environmental Classification

It was not always possible to observe every child for the entire school day. On average, students were observed for 5.8 hr. This average reflects the fact that approximately one third of the students were observed for an entire school day (approximately 6.5 hr), whereas the remaining students were observed for only one half of the school day (approximately 3.5−4.0 hr). Of the total time observed during the day, observations were made in traditional classrooms (68% of the time), in special classrooms (consisting primarily of music or art classes; 14%), over lunch (9%), in hallways (7%), and during recess (2%). In addition to total time, it was of interest to examine active communication time, when there was at least one talker of interest. When considering only active communication time, listening in traditional classrooms still accounted for the highest percentage of listening time (73%), followed by listening in special classrooms (14%), lunch (7%), hallways (5%), and recess (1%). It is important to note that the amount of time observing during recess is disproportionately low relative to the amount of time children actually spent in recess because many children opted to remove the wireless communicator and/or the hearing aids during play time. It was not possible to collect data during time periods for which the wireless communicator was not active.

The proportion of the total time that the observers indicated omnidirectional optimal, directional optimal, or no talkers present is shown in Figure 2.

Figure 2. The proportion of the total time that the observers indicated omnidirectional optimal, directional optimal, or no talkers present for both total listening time and the subset of active communication time for which at least one talker was present.

When considering the total listening time (left side of Figure 2), active listening accounted for 70% of the recorded time (combined speech in quiet and speech in noise). The remaining 30% reflected listening situations with no talkers present, including studying, taking tests in classrooms, walking between classes, and playing alone. When considering only active communication time (right side of Figure 2), omnidirectional was deemed optimal 58% of the time and directional was optimal 42% of the time.

Although the aforementioned data were organized with regard to microphone setting or estimated optimal microphone setting, it was also of interest to examine acoustic factors across all listening environments. The hearing aid classifier environmental estimates consisted of 40% speech in quiet, 44% speech in noise, 13% noise only, and 3% music. These data are in marked contrast to the overall observer data. When considering the total listening time, observers indicated that noise was present 80.5% of the time (combined speech in noise and noise only). That is, observers indicated that noise was absent in only 19.5% of environments, whereas the hearing aid classifier estimated that noise was absent 40% of the time. In an attempt to determine why this discrepancy existed, speech and noise were played at a variety of levels and SNRs, and the output of the hearing aid classifier was examined in the laboratory. When the noise level was 60 dB SPL or less, the hearing aid classification was "speech in quiet" even for SNRs as poor as +5 dB. On the basis of this finding, it appears that the hearing aids used in this investigation did not classify an input as containing noise unless the overall level exceeded approximately 60 dB SPL.
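The following sketch restates that inferred behavior as a rule of thumb. It is our reading of the laboratory probe, not the manufacturer's actual (proprietary) classification algorithm, and the 60 dB SPL cutoff is approximate.

```python
def inferred_speech_classification(noise_level_db_spl: float) -> str:
    """Approximate rule inferred from the laboratory probe (not the
    manufacturer's documented algorithm). With speech present, inputs
    were labeled 'speech in quiet' whenever the background noise level
    was at or below roughly 60 dB SPL, even at SNRs as poor as +5 dB."""
    return "speech in quiet" if noise_level_db_spl <= 60.0 else "speech in noise"

# Example: a classroom with 58 dB SPL babble would log as "speech in quiet",
# so the automatic algorithm would be unlikely to select the directional setting.
print(inferred_speech_classification(58.0))  # -> "speech in quiet"
```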
Switching Accuracy

Because providing audible speech signals above background noise is the primary goal of amplification, the remainder of the results focuses only on the active communication time during which there was at least one talker of interest. As noted previously, agreement was calculated between the optimal microphone setting and the actual microphone setting chosen by the user (manual switching) or the hearing aid algorithm (automatic switching) for the active communication time. Recall that the optimal microphone setting was defined as the one judged to provide the best SNR.

Manual Switching

For the manual switching conditions, 17 participants did not switch and remained in the omnidirectional setting throughout the observation period. Of the remaining nine participants, three actively switched throughout the day and six switched only once. The three participants who actively switched did so three, six, and eight times during the day and were 14, 17, and 11 years old, respectively. The participant who switched three times was in the directional setting for 72% of their active communication time. The other two participants chose the directional setting for 12% and 20% of their active communication time. When in the directional setting, the observer data generally showed that the microphone setting was appropriate.
The directional setting was deemed inappropriate (i.e., the listener was not facing the talker) for 44%, 0%, and 13% of active communication time for the three participants. Of the six participants who switched only once, a 9-year-old switched into the directional setting midmorning, at the start of gym class. The remaining five participants (aged 11−16 years) switched into the directional setting at the beginning of the day. These six participants did not switch again. Due to the small number of switches and the inconsistent behavior as a group, meaningful statistical analyses were not possible for the manual switching conditions. In general, these results suggest that school-age children do not manually switch between microphone settings to optimize the SNR in all environments, even when training is provided.

Automatic Switching

The agreement between the observer- and classifier-selected microphone settings, as a function of listening environment, is shown in Figure 3.

Figure 3. The overall agreement between the observer rating of optimal microphone setting and the classifier-selected microphone setting as a function of the four microphone setting outcomes. The top panel reflects environments for which there was agreement between the observer and the classifier-selected microphone setting, whereas the bottom panel reflects environments for which there was not agreement. The two panels combined represent 100% of active listening time. These outcomes for the six subclassifications of school environments are also shown. Omni = omnidirectional; Dir = directional.

As specified previously, there were two microphone setting outcomes for which the hearing aid setting was in agreement with the observer judgment (O/O and D/D; top panel) and two for which
there was disagreement (O/D and D/O; bottom panel). These microphone setting outcomes, quantified in five different school environments, were the focus of the remaining data analyses.

The data demonstrated a considerable lack of agreement between the observer-rated and algorithm-selected microphone settings. Overall, the observer and the hearing aid classifier algorithm agreed on either the omnidirectional or directional setting for 62% of the active communication time (combined O/O and D/D). When examining specific environments, the average proportion of total agreement ranged from a low of 40% (special classrooms) to a high of 66% (typical classroom and lunch). Averaged across environments, observer and algorithm disagreements occurred 38% of the time (combined O/D and D/O). The majority of these were instances of the observer choosing the directional setting and the algorithm choosing the omnidirectional setting (O/D; 29% of the active listening time); the remaining were instances of the observer choosing the omnidirectional setting and the algorithm choosing the directional setting (D/O; 9% of active listening time).

One factor that could have led to disagreement was the information that the rater and the algorithm used to determine the microphone setting. The observers made their choices by watching and listening. The classifier algorithm used the acoustic properties of the environment to assign probability estimates for each of four possible environments (speech in quiet, speech in noise, noise only, and music). As a consequence, the specific classification pattern of the acoustic environment may have been slightly different for different agreement outcomes. For example, consider two environments for which the classifier identified the environment as directional optimal because the probability was highest for speech in noise. In the first environment, the observer and classifier both chose the directional setting as optimal, and the classifier indicated a 90% probability of speech in noise. In the second environment, the classifier chose the directional setting as optimal but the observer chose the omnidirectional setting as optimal, and the classifier indicated a 55% probability of speech in noise. To examine whether classification patterns differed in this way, the percentage likelihood values from the classifier were examined as a function of the four agreement outcomes.

The average classifier probabilities as a function of microphone setting outcome are shown in Figure 4. Visual comparison of the top and bottom panels of Figure 4 demonstrates a striking similarity. That is, the pattern of classifier probabilities was essentially identical when the observer agreed that the microphone setting was optimal (top panel) and when the observer-rated optimal setting did not agree with the classifier-selected microphone setting (bottom panel). These data suggest that the disagreement between the hearing aid setting and observer rating could not be attributed to differences in the acoustic environment that were recognized by the hearing aid classifier. The similarity of pattern noted in the top and bottom panels was confirmed by statistical analysis.
Figure 4. The average classifier probabilities for each of the four microphone setting outcomes. Because very short quiet pauses are present even when there is an active talker, the sum of probabilities was slightly less than 100% for speech in quiet, reflecting an absence of sound classification. Omni = omnidirectional; Dir = directional.
Analysis of these classifier probabilities was completed using a generalized linear model for the four microphone setting outcomes (O/O, D/D, O/D, and D/O) and the four classifier environments (speech in quiet, speech in noise, noise only, and music). Results revealed a significant main effect of classifier environment (F = 393.619, p < .001, ηp² = .737) as well as an interaction between microphone setting outcome and classifier environment (F = 76.639, p < .001, ηp² = .624). No other significant effects or interactions were present. To follow up the significant interaction, linear contrasts were completed by examining the microphone setting outcomes within each of the four classifier environments with Bonferroni adjustment. This analysis revealed a significant effect of microphone setting outcome for speech in quiet (F = 1169.582, p < .001, ηp² = .972) and speech in noise (F = 1526.623, p < .001, ηp² = .988). That is, the proportion of time the hearing aid classified the environment as speech in noise or speech in quiet was dependent on whether the classifier selected the directional or omnidirectional mode. However, the observer ratings of optimal microphone setting were not significantly related to either the classifier data or the hearing aid microphone setting.

As described previously, one difficulty in classifying sounds using only acoustic information relates to the listener's intended target. For example, classroom noise can comprise not only noise from the environment (e.g., air conditioning, outside traffic) but also competing talkers. Furthermore, there is likely considerable variability in the location and level of the competing talkers.
To further examine underlying factors, the potential contribution of the observers' ratings of environmental factors to microphone setting outcomes was explored. A generalized linear model was used to analyze the four microphone setting outcomes (O/O, D/D, O/D, and D/O) within each environmental factor (main source position, main source distance, estimated noise level, general noise position, and estimated reverberation time). It is important to note that, of these factors, only estimated noise level is an acoustic factor that may be predictable from the classifier data. However, the classifier estimate of noise level was based on the total signal level and the estimated SNR, assuming the noise is steady state. As a consequence, this estimate is expected to be imprecise, particularly when noise sources fluctuate in amplitude.

When considering only environments for which the hearing aid setting disagreed with the observer's estimate of the optimal microphone setting (O/D and D/O), there were no significant differences within observer ratings of main source distance or estimated reverberation time. A significant difference, F(1, 26) = 12.643, p < .001, ηp² = .363, was found in the estimated noise level across microphone setting outcomes. The average noise level for environments for which the observer judgment and the actual hearing aid setting were both directional (D/D) was significantly higher (3.2 on a 4-point scale) than for all other outcomes (O/O = 2.2; O/D = 2.5; D/O = 2.6). Combined with our measures of directional microphone activation, these data suggest that a lower overall noise level may have contributed to the algorithm choosing the omnidirectional setting in environments for which the observer nominated the directional setting.

As described previously, the relative location of talkers and noise sources also affects which microphone setting is optimal. The location of the signal(s) of interest and the competing noise(s) as a function of microphone setting outcomes is shown in Figure 5.
Figure 5. The location of the signals of interest and the competing noises as a function of the four microphone setting outcomes. O/O = observer rating and hearing aid setting were both omnidirectional; D/O = the hearing aid used the directional setting but the observer chose omnidirectional; D/D = observer rating and hearing aid setting were both directional; O/D = the hearing aid used the omnidirectional setting but the observer chose directional.
Across the active communication environments evaluated, the talker of interest was in the front quadrant 65% of the time, whereas there were multiple talkers of interest simultaneously in front and in other quadrants for 25% of the time. For the remaining 10% of the time, there was a single talker who was not in the front hemisphere.

Data analysis focused on the proportion of time the noise was in the front quadrant across the four microphone setting outcomes (O/O, D/D, D/O, and O/D). The results revealed a significant difference in the proportion of time the noise was in only the front quadrant (F = 6.089, p < .001, ηp² = .153) across the microphone setting outcomes. The noise was in the front quadrant a significantly greater proportion of the time (p < .003) when the observer rated the omnidirectional setting as optimal (O/O and D/O) than when the directional setting was judged as optimal (D/D and O/D). This suggests that the speech and noise both arriving from only the front quadrant may have contributed to the algorithm choosing the directional setting in environments for which the observer nominated the omnidirectional setting as optimal or indicated that both microphone settings were equally optimal.

Data analysis was also completed across the four microphone setting outcomes (O/O, D/D, D/O, and O/D) and the four talker positions (front, side, back, and multiple). A significant main effect of talker position (F = 225.237, p < .001, ηp² = .627) and an interaction between microphone setting outcome and talker location (F = 27.737, p < .001, ηp² = .387) were identified. Linear contrast follow-up evaluation of the significant interaction with Bonferroni adjustment indicated that talker position was not significantly different within the environments for which the observer rated the omnidirectional setting as optimal (O/O and D/O) or within the environments for which the directional setting was judged as optimal (D/D and O/D). However, environments for which the observer rated the omnidirectional setting as optimal (O/O and D/O) had a significantly lower proportion of talker front only (p < .001) and significantly higher proportions of talker side, talker back, and multiple talkers (p < .001) than those environments for which the directional setting was judged as optimal (D/D and O/D). This suggests that speech arriving from quadrants other than the front may have contributed to the algorithm choosing the omnidirectional setting in environments for which the observer nominated the directional setting as optimal.
Discussion

Manual Switching

Overall, the directional microphone setting was expected to yield the best SNR for approximately 42% of the active communication time in the observed school environments. These values are broadly consistent with the percentage of active listening environments for which directional microphones are preferred by adults who wear hearing aids, as reported in previous investigations
(Blamey et al., 2006; Cord, Surr, Walden, & Dyrlund, 2004; Walden et al., 2007), and they provide additional support that there is considerable potential for directional benefits in school environments.

Despite the potential for benefit, the results from the manual switching conditions demonstrate that school-age children are unlikely to consistently switch to the microphone setting that optimizes the SNR for talkers of interest. Even though all children in this study were provided training on manual switching behaviors that would lead to the best SNR, only three of the 26 participants switched programs more than once over the course of an observation period. There are several factors that may preclude higher manual switching accuracy, including forgetting to switch or not noticing enough benefit to warrant the inconvenience of switching. However, the fact that six children switched to the directional setting a single time and left the hearing aid in this program for the remainder of the day suggests another interesting possibility: Perhaps children did not switch "appropriately" because they had strong preferences for one program or the other.

Although preference data were not gathered as part of this study, anecdotal information supports this hypothesis. For example, the 14-year-old participant who switched three times during the day was in the directional setting for 72% of active communication time even though this setting did not optimize the SNR for nearly half (approximately 44%) of the time it was active. Another example is the 11-year-old participant who switched only once, at the start of the observation period. When we asked her why she switched only once, she indicated that she could not understand well enough in the other program. It also seems likely that at least some portion of the children who never switched out of the omnidirectional setting may prefer this program for full-time use. Indeed, preference for a microphone setting may not be strongly related to the setting that provides the best SNR. Instead, preference may be affected by the magnitude of benefit, the ability to overhear, performance in each setting, and a host of other factors. Future work is warranted to determine the extent to which the nonswitching behavior is related to microphone preference or to other factors, such as immaturity or improper training.
Automatic Switching

When considering automatic switching, the results of this study suggest that there was only fair agreement between the observers' judgments and the hearing aid algorithm's settings. One factor that appears to have influenced this mismatch is the hearing aid algorithm's environmental classification. The observers reported low to moderate noise levels in many listening environments that the hearing aid classified as speech in quiet. Consistent with previous data (Crukley et al., 2011), the observers in the current study reported that approximately 20% of the overall time evaluated was quiet, whereas the classifier placed this value at 40%.
Indeed, this discrepancy appeared to contribute heavily to the 29% of environments for which the observer rated the directional setting as optimal but the hearing aid was in the omnidirectional setting (O/D). As is evident in Figure 5, the listener was facing the talker and was surrounded by noise in 100% of these environments. On the basis of laboratory studies, directional benefit would be expected in these listening situations (e.g., Gravel et al., 1999; Ricketts & Picou, 2013). However, due to the lower noise levels present, the overall hearing aid classification for these environments was typically speech in quiet. We speculate that a design decision in this model of hearing aid prevented activation of the directional mode unless the environment contained noise and had an overall level of more than approximately 65 dB SPL. It is important to note that previous research has demonstrated that directional benefit in laboratory settings is not limited to moderate or higher noise levels; rather, directional benefit has been demonstrated with lower level speech and noise (Ricketts & Henry, 2002). Therefore, the ability of the hearing aid model used in this investigation to select the microphone mode leading to the best SNR may have been improved by lowering the overall level criterion necessary for classifying an environment as containing noise. However, activation of a directional microphone at lower overall input levels is sometimes prevented by manufacturers in order to limit the introduction of audible noise (Ricketts & Henry, 2002).

In addition, if the hearing aid had been designed to switch into the directional mode more often, it may also have increased disagreement between observer and algorithm in the opposite direction—that is, when the observer rated the omnidirectional setting as optimal and the algorithm chose the directional setting (D/O). In the present study, this discrepancy existed during 9% of active communication time (see Figure 3). The data support the idea that this discrepancy can be explained in large part by noise location. Figure 5 shows that for 13% of the environments in which the D/O disagreement occurred, the noise originated only from the front quadrant. Although having speech and noise both originating from the front quadrant precludes directional benefit, it could also be argued that the directional mode would not impair speech recognition performance in this scenario. However, the use of the directional setting is expected to impede overhearing and monitoring of off-axis sounds, so the omnidirectional setting may be optimal. Figure 5 also demonstrates that there was no talker of interest in the front quadrant in 45% of these environments. In listening environments for which the talker of interest is not in the front quadrant and the listener is surrounded by noise, the directional setting will decrease the SNR compared with the omnidirectional setting (Ricketts, 2000a). As a consequence, this misclassification was expected to decrease audibility and speech recognition performance in these specific situations (Ricketts, 2000b).

Although there was generally only fair agreement between the hearing aid and the observer, the pattern of microphone setting outcomes varied as a function of listening environment. In particular, the magnitude of agreement between the hearing aid classifier and the observers was not particularly high in the classroom environment (66% of the total time), and it was especially poor in the hallway and
special classrooms (47% and 40% of the total time, respectively). The data in Figure 3 reveal that the majority of this increased discrepancy for the special classrooms can be attributed to the hearing aid being in the omnidirectional setting when the observer indicated that the directional setting would provide the best SNR (O/D). This discrepancy occurred in 47% of the special classroom environments, suggesting that music classes and other special classrooms are quite likely to contain noise. However, the noise levels were commonly too low to automatically trigger the directional microphone setting of the hearing aids used in this study.

The data from Figure 3 also revealed that the majority of the increased discrepancy in hallways was attributed to the hearing aid being in the directional setting when the observer indicated that the omnidirectional setting would provide the best SNR (D/O). This discrepancy occurred in 20% of the hallway environments, compared with only 9% of the active communication time combined across all environments. This discrepancy also occurred in 20% of lunch environments. These findings were not particularly surprising because these listening environments are more likely to contain talkers whom the child is not facing. Given the social nature of these environments, the findings support some potential for concern related to social learning because audibility for these off-axis talkers will be limited by the directional setting. As a consequence, further work is warranted to determine potential trade-offs between microphone setting and learning for on- and off-axis talkers.

One limitation of this study is that it was not possible to quantify children's listening intent with 100% accuracy. In addition, the accuracy of the automatic switching algorithm was limited by the specific classification scheme used. More recent schemes include a greater range of estimated environments, and some are even able to predict the general direction of the talker with the highest input level to the hearing aid. These advancements are expected to allow for better switching to optimize the SNR of the highest level talker in the environment. In addition, some automatic algorithms now allow the clinician to adjust the activation threshold at which the directional setting may become active. The current data show that a lower threshold may be advantageous in some school environments. Despite these advances, current technologies are still based only on the acoustic input and are unable to identify the intended talker of interest or make changes on the basis of environmental factors that do not affect the acoustic input (e.g., direction of noise arrival). When also considering that the manual switching data suggest that some children may prefer a single microphone setting, further study related to microphone preference and speech recognition performance is warranted.
Conclusions

This study demonstrated that the directional microphone setting is expected to provide an improved SNR compared with the omnidirectional setting for 42% of the active communication time in the school environments evaluated.
However, the results also provide additional evidence that the microphone setting expected to provide the best SNR in noisy school environments is not always directional. There was only fair agreement between the observer ratings and the hearing aid's automatic switching algorithm. Further, the pattern of agreement varied considerably as a function of the specific category of listening environment (e.g., classroom, hallway). Possible misclassification by the hearing aid was attributed in part to (a) environments with speech and low-level noise being classified as speech in quiet and (b) the use of the directional setting when the talker was not in front of the listener. These findings suggest that consideration of specific environmental factors in the classroom may allow for better optimization of microphone settings for directional hearing aids used in school environments. Further, although acknowledged in the manufacturers' literature, these data highlight that environmental classification is based on carefully considered trade-offs and is not perfectly accurate. As a consequence, it is prudent clinically to view the classifier output as an estimate rather than a completely accurate representation of a patient's listening environments.

Despite the potential for directional benefit, the results from the manual switching trial demonstrated that school-age children are unlikely to consistently switch to the microphone setting that provides the best SNR for talkers of interest. Although preference was not directly evaluated in this study, the children who switched once (i.e., remaining in the directional mode) or left the hearing aid in the default omnidirectional mode present interesting potential implications. It is possible that some portion of the children tested may prefer either the omnidirectional or the directional setting for the majority of environments. If true, future investigations that examine whether individual differences and individual listener preferences can be used to improve and individualize microphone settings may be warranted.
Acknowledgments

This project was funded by National Institute on Disability and Rehabilitation Research Field Initiated Research Grant H133G060012. Technological and material support for the hearing aid streaming procedures was provided by Phonak Hearing Systems. The authors thank Christine Williams for her assistance with data collection and her tireless work with the monumental task of data processing. We also thank Jeff Crukley, Jeremy Federman, Hannah Kim, Travis Moore, Casey Artz Schneider, Douglas Sladen, Jennifer Ratner, and Hollea Ryan for their assistance in participant recruitment, test material development, and data collection. In addition, we are grateful to Vicki Powers with Metropolitan Nashville Public Schools for her assistance with participant recruitment. Last, we thank all of the children who participated in this study.

Preliminary data related to the classroom quantification portion of this project for 18 children with hearing impairment and 13 children with typical hearing were published as conference proceedings (Ricketts et al., 2011). Portions of this article were presented at the International Pediatric Audiology Conference "A Sound Foundation Through Early Amplification" (November 2010, Chicago, IL) and at the Annual Convention of the American Speech-Language-Hearing Association (November 2013, Chicago, IL).
References

Beranek, L. L. (1954). Acoustics. New York, NY: McGraw-Hill.
Bistafa, S. R., & Bradley, J. S. (2000). Reverberation time and maximum background-noise level for classrooms from a comparative study of speech intelligibility metrics. The Journal of the Acoustical Society of America, 107, 861–875.
Blamey, P. J., Fiket, H. J., & Steele, B. R. (2006). Improving speech intelligibility in background noise with an adaptive directional microphone. Journal of the American Academy of Audiology, 17, 519–530.
Blauert, J. (1997). Spatial hearing: The psychophysics of human sound localization (rev. ed.). Cambridge, MA: MIT Press.
Boothroyd, A., Eran, O., & Hanin, L. (1996). Speech perception and production in children with hearing impairment. In F. H. Bess, J. Gravel, & A. M. Tharpe (Eds.), Amplification for children with auditory deficits (pp. 55–74). Nashville, TN: Bill Wilkerson Center Press.
Bradley, J. S., & Sato, H. (2008). The intelligibility of speech in elementary school classrooms. The Journal of the Acoustical Society of America, 123, 2078–2086.
Carette, E., Van den Bogaert, T., Laureyns, M., & Wouters, J. (2014). Left-right and front-back spatial hearing with multiple directional microphone configurations in modern hearing aids. Journal of the American Academy of Audiology, 25, 791–803.
Ching, T. Y. C., O'Brien, A., Dillon, H., Chalupper, J., Hartley, L., Hartley, D., . . . Hain, J. (2009). Directional effects on infants and young children in real life: Implications for amplification. Journal of Speech, Language, and Hearing Research, 52, 1241–1254.
Chung, K. (2004). Challenges and recent developments in hearing aids. Part I. Speech understanding in noise, microphone technologies and noise reduction algorithms. Trends in Amplification, 8, 83–124.
Cord, M. T., Surr, R. K., Walden, B. E., & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology, 15, 353–364.
Cord, M. T., Surr, R. K., Walden, B. E., & Olson, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology, 13, 295–307.
Crandell, C. C., & Smaldino, J. J. (1994). An update of classroom acoustics for children with hearing impairment. The Volta Review, 96, 291–306.
Crandell, C. C., & Smaldino, J. J. (2000). Classroom acoustics for children with normal hearing and with hearing impairment. Language, Speech, and Hearing Services in Schools, 31, 362–370.
Crandell, C., Smaldino, J., & Flexer, C. (1995). Sound field FM amplification: Theory and practical applications. San Diego, CA: Singular.
Crukley, J., Scollie, S., & Parsa, V. (2011). An exploration of non-quiet listening at school. Journal of Educational Audiology, 17, 23–35.
Fabry, D., & Tchorz, J. (2005). Results from a new hearing aid using "acoustic scene analysis." The Hearing Journal, 58, 30–36.
Finitzo-Hieber, T., & Tillman, T. W. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech and Hearing Research, 21, 440–458.
Gravel, J. S., Fausel, N., Liskow, C., & Chobot, J. (1999). Children's speech recognition in noise using omni-directional and dual-microphone hearing aid technology. Ear and Hearing, 20(1), 1–11.
Hawkins, D. B. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders, 49, 409–418.
Hawkins, D. B., & Yacullo, W. S. (1984). Signal-to-noise ratio advantage of binaural hearing aids and directional microphones under different levels of reverberation. Journal of Speech and Hearing Disorders, 49, 278–286.
Henry, P., & Ricketts, T. (2003). The effect of head angle on auditory and visual input for directional and omnidirectional hearing aids. American Journal of Audiology, 12, 41–51.
Keidser, G., Dillon, H., Flax, M., Ching, T., & Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiology Research, 1, e24.
Kuk, F., Kollofski, C., Brown, S., Melum, A., & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10, 535–548.
Leavitt, R., & Flexer, C. (1991). Speech degradation as measured by the Rapid Speech Transmission Index (RASTI). Ear and Hearing, 12, 115–118.
Madison, T., & Hawkins, D. (1983). The signal-to-noise ratio advantage of directional microphones. Hearing Instruments, 34, 18.
McCreery, R. W., Venediktov, R. A., Coleman, J. J., & Leech, H. M. (2012). An evidence-based systematic review of frequency lowering in hearing aids for school-age children with hearing loss. American Journal of Audiology, 21, 313–328.
Nábělek, A. K., & Pickett, J. M. (1974). Monaural and binaural speech perception through hearing aids under noise and reverberation with normal and hearing-impaired listeners. Journal of Speech and Hearing Research, 17, 724–739.
Neuman, A. C., Wroblewski, M., Hajicek, J., & Rubinstein, A. (2010). Combined effects of noise and reverberation on speech recognition performance of normal-hearing children and adults. Ear and Hearing, 31, 336–344.
Olson, L., Ioannou, M., & Trine, T. D. (2004). Appraising an automatically switching directional system in the real world. The Hearing Journal, 57, 32–38.
Peutz, V. (1971). Articulation loss of consonants as a criterion for speech transmission in a room. Journal of the Audio Engineering Society, 19, 915–919.
Picou, E. M., Aspell, E., & Ricketts, T. A. (2014). Potential benefits and limitations of three types of directional processing in hearing aids. Ear and Hearing, 35, 339–352.
Ricketts, T. A. (2000a). Directivity quantification in hearing aids: Fitting and measurement effects. Ear and Hearing, 21, 45–58.
Ricketts, T. A. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing, 21, 194–205.
Ricketts, T. A., & Dittberner, A. B. (2002). Directional amplification for improved signal to noise ratio: Strategies, measurements and limitations. In M. Valente (Ed.), Hearing aids: Standards, options and limitations (2nd ed., pp. 274–346). New York, NY: Thieme Medical.
Ricketts, T. A., & Galster, J. (2008). Head angle and elevation in classroom environments: Implications for amplification. Journal of Speech, Language, and Hearing Research, 51, 516–525.
Ricketts, T. A., Galster, J., & Tharpe, A. M. (2007). Directional benefit in simulated classroom environments. American Journal of Audiology, 16, 130–144.
Ricketts, T. A., & Henry, P. (2002). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology, 11, 29–41.
Ricketts, T. A., & Hornsby, B. W. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing, 24, 472–484.
Ricketts, T. A., & Hornsby, B. W. Y. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology, 45, 190–197.
Ricketts, T. A., & Picou, E. (2013). Speech recognition for bilaterally asymmetric and symmetric hearing aid microphone modes in simulated classroom environments. Ear and Hearing, 34, 601–609.
Ricketts, T. A., Picou, E. M., Galster, J. A., Federman, M. S., & Sladen, D. P. (2011). Potential for directional hearing aid benefit in classrooms: Field data. In R. C. Seewald (Ed.), A sound foundation through early amplification 2010: Proceedings of the Fifth International Conference (pp. 143–152). Chicago, IL: Phonak.
Scollie, S. D. (2008). Children's speech recognition scores: The Speech Intelligibility Index and proficiency factors for age and hearing level. Ear and Hearing, 29, 543–556.
Scollie, S. D., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagaray, D., . . . Pumford, J. (2005). The desired sensation level multistage input/output algorithm. Trends in Amplification, 9, 159–197.
Scollie, S. D., Seewald, R. C., Moodie, K. S., & Dekok, K. (2000). Preferred listening levels of children who use hearing aids: Comparison to prescriptive targets. Journal of the American Academy of Audiology, 11, 230–238.
Summers, V., Grant, K. W., Walden, B. E., Cord, M. T., Surr, R. K., & Elhilali, M. (2008). Evaluation of a direct-comparison approach to automatic switching in omnidirectional/directional hearing aids. Journal of the American Academy of Audiology, 19, 708–720.
Surr, R. K., Walden, B. E., Cord, M. T., & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology, 13, 308–322.
Thibodeau, L. (2010). Benefits of adaptive FM systems on speech recognition in noise for listeners who use hearing aids. American Journal of Audiology, 19, 36–45.
Valente, D. L., Plevinsky, H. M., Franco, J. M., Heinrichs-Graham, E. C., & Lewis, D. E. (2012). Experimental investigation of the effects of the acoustical conditions in a simulated classroom on speech recognition and learning in children. The Journal of the Acoustical Society of America, 131, 232–246.
Walden, B. E., Surr, R. K., & Cord, M. T. (2003). Real-world performance of directional microphone hearing aids. The Hearing Journal, 56, 40–42.
Walden, B. E., Surr, R. K., Cord, M. T., & Dyrlund, O. (2004). Predicting hearing aid microphone preference in everyday listening. Journal of the American Academy of Audiology, 15, 365–396.
Walden, B. E., Surr, R. K., Cord, M. T., Grant, K. W., Summers, V., & Dittberner, A. B. (2007). The robustness of hearing aid microphone preferences in everyday listening environments. Journal of the American Academy of Audiology, 18, 358–379.
Wu, Y.-H., & Bentler, R. A. (2012). The influence of audiovisual ceiling performance on the relationship between reverberation and directional benefit: Perception and prediction. Ear and Hearing, 33, 604–614.