Training-Related Changes in the Brain: Evidence from Human Auditory-Evoked Potentials

Kelly L. Tremblay, Ph.D.
Associate Professor, Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington. Address for correspondence and reprint requests: Kelly L. Tremblay, Ph.D., Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St., Seattle, WA 98105. E-mail: [email protected].

Auditory Training; Guest Editor, Robert W. Sweetow, Ph.D. Semin Hear 2007;28:120–132. Copyright © 2007 by Thieme Medical Publishers, Inc., 333 Seventh Avenue, New York, NY 10001, USA. Tel: +1(212) 584-4662. DOI 10.1055/s-2007-973438. ISSN 0734-0451.

ABSTRACT

Auditory-evoked potentials are being used to examine training-related changes in the human central auditory system, and there is converging evidence that focused listening training, using various training methods and different types of stimuli, alters evoked neural activity. Such training-related changes are often described in terms of physiological plasticity, a process whereby the neural representation of the acoustic cue is modified with training. In this review, the concept of plasticity is discussed from a broader perspective. Specifically addressed is how electrophysiological methods are being used to study physiological modifications that occur with training, and how this information might contribute to the rehabilitation of people who wear hearing aids and cochlear implants.

KEYWORDS: Auditory plasticity, auditory training, perceptual learning, electrophysiology, event-related potentials, mismatch negativity, N100, P1-N1-P2 complex

Learning Outcomes: As a result of this activity, the participant will be able to (1) summarize three potential ways in which auditory training might be altering the neural representation of sound, and (2) identify two auditory-evoked potentials that have been used to examine training-related changes in the human auditory system.

Our understanding of the neural mechanisms underlying normal and disordered perception is improving. Not only did the "Decade of the Brain" (1990–2000) yield a wealth of knowledge, but rapid advances in technology also were seen. Along with faster computers and more sophisticated signal averaging techniques came improved methods for quantifying and modeling the neural representation of sound in humans. When combined with perceptual measures, it became possible to examine physiological correlates of normal and disordered perception and to propose new brain–behavior relationships. One such principle is the concept of brain plasticity. Plasticity is a term used to describe a variety of physiological changes in the central nervous system in response to sensory experiences. Simply stated, the brain changes; it is modified and shaped by sensory experiences (for a review, see Eggermont1). In this review, the concept of physiological plasticity is discussed from the perspective of rehabilitating individuals with hearing loss. Specifically, I discuss how electrophysiological methods are being used to study physiological modifications that occur with training, and how this information is (re)shaping our view of auditory rehabilitation.

PLASTICITY OF THE AUDITORY SYSTEM

Like all sensory systems, the auditory system is highly organized. Frequency maps, for instance, exist throughout the peripheral (cochlea) and central (brainstem–cortex) auditory systems. From the ear to the brain, spectral and temporal information contained in speech signals is represented using place and timing codes. As an example, neural response patterns in the auditory cortex have been shown to reflect perceptually relevant time-varying parameters that correlate with the perception of fundamental frequency, voice-onset-time (VOT), place of articulation, and formant transition duration.2–8 It was believed that these maps (and neural codes) were "hard-wired," and thus resistant to change. However, research has shown that the central auditory system (CAS) changes as a function of experience, reorganizing throughout the lifespan according to the auditory input that is available to the individual. Sometimes referred to as deprivation- or injury-related plasticity or use-related plasticity,9 modified sensory maps, synaptic alterations, and neurochemical changes have been shown to occur following periods of auditory deprivation and auditory stimulation.

How Does the Concept of "Plasticity" Relate to Auditory Perception?

Audiologists are trained to approach rehabilitation by improving audibility (by means of hearing aids, cochlear implants, or assistive listening devices). Obviously, this is a critical first step. Nevertheless, successful rehabilitation also depends on the central auditory system's ability to represent and integrate the spectral and temporal information contained in the speech signal and delivered by the hearing device. Take, for example, the typical individual fit with a hearing aid. This person has likely experienced significant deprivation-related changes in the CAS. When deprived of sound (e.g., cochlear hearing loss), the manner in which spectral and temporal information is coded along the auditory pathway changes. A commonly cited example is age-related high-frequency hearing loss. When deprived of high-frequency input, intact regions of the tonotopic map adjacent to the impaired high-frequency region often become responsive.10,11 Put simply, because the specific high-frequency region has not been stimulated for some time, the ability to use this area to encode spectral and temporal information is compromised. Instead, adjacent areas (representing other frequencies) respond when they would not ordinarily have done so.

When a person is fit with a hearing aid, two additional forms of plasticity are presumed to take place. First, when a hearing aid increases the intensity of a signal, aspects of the auditory system that were once deprived of sound now become stimulated. This change in auditory experience likely contributes to additional changes in the CAS. This assumption is based on evidence from multiple-unit studies in animals demonstrating that electric and acoustic stimulation of a deprived auditory system also modifies the CAS.12–15 Second, hearing aids and cochlear implants deliver a modified signal to an impaired and reorganized auditory system.16,17 Hearing aids alter the acoustics of a stimulus (e.g., stimulus rise characteristics, signal-to-noise ratio, and amplitude overshoot caused by circuitry activation). In a sense, this modified signal is a new signal that likely stimulates new neural response patterns in the CAS. Some of the perceptual difficulties and performance variability experienced by hearing-impaired individuals (who wear hearing aids or implants) may be related to their inability to make use of the modified cues.18–20 The degree of benefit a particular person receives from a hearing aid (or cochlear implant) may depend, in part, on the ability of that individual's CAS to adapt to the modified cues delivered by the device. Simply put, individuals who do not experience significant benefits from a cochlear implant (or hearing aid) might have auditory systems that are less plastic (i.e., less capable of neurally representing new acoustic cues), and poor speech perceivers may have more difficulty learning how to associate these new neural patterns with an existing memory of sounds. This rationale underlies some of the motivation for auditory training exercises, yet another form of auditory stimulation.

What Happens to the Auditory System during Training?

The physiological representation of sound can be modified by training exercises. Importantly, training-related changes in physiology can coincide with improved perception. Animal studies have demonstrated that auditory maps and timing codes can be altered with training, a phenomenon sometimes described as learning-related plasticity.21 Bakin and Weinberger,21 for example, used a classical conditioning paradigm to investigate changes in cortical receptive fields in guinea pigs. The animals that were exposed to the conditioning paradigm showed an increase in neural response magnitude to the conditioning stimulus frequency and a reduction in response magnitude for nontrained frequencies. They also found that the altered receptive fields were retained for as long as 8 weeks posttraining.

What happens to the brain during training? Training-related physiological changes have been attributed to several different processes including, but not limited to, (1) a greater number of neurons responding in the sensory field; (2) improved neural synchrony (or temporal coherence); and (3) neural decorrelative processes whereby training decorrelates activity between neurons, making each neuron as different as possible in its functional specificity relative to the other members of the population.22 In other words, with training, neurons within a given ensemble attempt to arrange themselves in a way that maximizes the representation of the unique characteristics of each stimulus. This process assumes that information common to two stimuli is disregarded, while responses to unique features of each stimulus are enhanced.

Whatever the actual neural mechanisms underlying training-related gains in perception might be, the ability to alter neural response patterns through training is provocative, and the translation of this basic animal research to human auditory rehabilitation is an obvious goal for those of us interested in rehabilitating people with auditory-based communication disorders. Because performance among hearing aid and cochlear implant users varies dramatically, the ability to reshape the auditory system through training exercises, in a way that improves perception, is an emerging focus of interest.23–29

There exists a substantial body of literature demonstrating that humans (with and without hearing loss) can improve their perception of spectral and temporal cues by engaging in auditory training exercises. Sometimes categorized as analytic (bottom-up), synthetic (top-down), or a combination of both,30,31 auditory training programs are designed to improve the ability to perceive auditory events through repetitive listening exercises. Analytic training emphasizes the acoustic content (spectral, temporal, and intensity cues) of the signal, and the task involves identifying or discriminating sounds that differ acoustically. Synthetic training is designed to improve perception by enhancing a person's ability to attend to, integrate, and use contextual information. To date, the majority of training programs designed to improve auditory perception in people with communication disorders are categorized as analytic. The task may involve training individuals to identify and discriminate acoustically similar consonant–vowel (CV) syllables (for example, voiceless "thee" and "fee"). One of the underlying assumptions of this approach is that the current neural representation of acoustic cues is compromised in some way; therefore, training should improve the physiological ability to detect and discriminate important acoustic cues. Moreover, if neural response patterns can be modified, and the person learns to differentiate acoustically similar sounds, then improved perception might generalize beyond the training environment. In other words, a person might be less likely to confuse the consonants "th" and "f" in the context of a word and/or sentence.

Measuring Training-Related Changes in the Brain

Functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) are noninvasive measures that have emerged as tools for assessing speech-evoked brain activity in humans. These techniques are useful for studying the neural processing of sound and have the capacity to measure changes in neural activity related to training. However, each method has its strengths and weaknesses. An advantage of fMRI is its spatial resolution: it provides exemplary information about where sound is being processed in the brain. Examining the neural representation of the subtle temporal cues that distinguish two speech sounds, however, would be difficult. In contrast, EEG has exquisite temporal resolution but limited spatial information. For these reasons, EEG and MEG are useful for investigating the neural representation of rapidly changing spectral and temporal cues contained in speech. Moreover, because an underlying assumption of many analytic training programs is that perceptual gains reflect improved neural representation of time-varying spectral and temporal cues, EEG and MEG have been used to examine the neural representation of these cues before and after training. Even though EEG and MEG have been widely used to characterize neural mechanisms underlying various types of learning in all sensory systems, this review highlights studies relating to the neural representation of acoustic cues.

ELECTROPHYSIOLOGY: PRINCIPLES AND NOMENCLATURE

Auditory-evoked potentials (AEPs) are electrophysiological measures (bioelectrical potentials) that are time-locked to an auditory event. They are typically described in terms of peak response polarity, positive (P) or negative (N) peaks, and in terms of peak amplitude and latency. Amplitude describes the strength of the response in microvolts (µV). Latency reflects the amount of time, in milliseconds (ms), that it takes for the sound to travel through the peripheral auditory system to the place of excitation in the CAS. Because a large portion of the training exercises has focused on improving the acoustic (analytic) representation of sound, physiological measures that are sensitive to acoustic processing have been used to quantify training-related changes in the brain. Two examples are the P1-N1-P2 complex and the mismatch negativity (MMN). Whereas the P1-N1-P2 complex signals the physiologic detection of sound,32 the MMN reflects physiological discrimination and sensory memory of two or more sounds.33 To learn more about the anatomical and physiological mechanisms that contribute to each specific evoked response, readers can consult one of the many review publications available.32–41
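To make these measures concrete, the following minimal Python/NumPy sketch illustrates how peak amplitude and latency are commonly quantified from an averaged waveform, and how an MMN-like difference wave can be derived by subtracting the response to a repeating (standard) sound from the response to an occasional (deviant) sound. The array names, epoch length, and latency windows are illustrative assumptions for this sketch; they are not the recording parameters used in the studies reviewed here.

import numpy as np

def peak_in_window(waveform_uv, times_ms, lo_ms, hi_ms, polarity):
    # Return (latency_ms, amplitude_uv) of the largest deflection of the
    # requested polarity inside a latency window of an averaged waveform.
    mask = (times_ms >= lo_ms) & (times_ms <= hi_ms)
    segment = waveform_uv[mask]
    idx = np.argmax(segment) if polarity == "positive" else np.argmin(segment)
    return times_ms[mask][idx], segment[idx]

# Illustrative averaged waveforms in microvolts: 500-ms epochs sampled at 1000 Hz,
# time zero at stimulus onset. In practice these come from averaging many EEG trials.
times_ms = np.arange(0, 500)
standard_avg = np.zeros(500)   # placeholder: average response to the repeating sound
deviant_avg = np.zeros(500)    # placeholder: average response to the rare (deviant) sound

# P1-N1-P2 peaks picked within conventional latency windows (window choices vary by study).
p1_lat, p1_amp = peak_in_window(standard_avg, times_ms, 30, 80, "positive")
n1_lat, n1_amp = peak_in_window(standard_avg, times_ms, 80, 150, "negative")
p2_lat, p2_amp = peak_in_window(standard_avg, times_ms, 150, 250, "positive")

# MMN: difference wave (deviant minus standard); its negative peak indexes the
# physiological discrimination of the two sounds.
mmn_wave = deviant_avg - standard_avg
mmn_lat, mmn_amp = peak_in_window(mmn_wave, times_ms, 100, 250, "negative")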

Electrophysiological Evidence of Training-Related Changes in the Brain

Before we can exploit brain plasticity in a way that benefits people with communication disorders, it is first important to define the physiological effects of auditory training in normally functioning auditory systems. For this reason, much of what we know about training-related changes in the human auditory system is limited to spectral and temporal training in populations without hearing loss. For example, Kraus et al42 used the MMN to determine whether the ability to perceive time-varying cues improved with training and whether training-related physiological changes were reflected in the MMN. Over the course of a week, individuals participated in six 1-hour training sessions. Participants were trained to discriminate two variants of the syllable /da/. Acoustically, the two stimuli differed in terms of their second and third formant onset frequencies. As the ability to discriminate the two /da/ sounds improved, the MMN (evoked by these same sounds) increased in amplitude. Kraus and colleagues attributed these findings to training-related changes in the temporal coherence (neural synchrony) of neurons representing the distinguishing cues.

In Tremblay et al,43 we extended the above-described experiment by asking the following questions: Are training-related perceptual and physiological changes specific to the trained cue? Or do the effects of training generalize to sounds that contain the same acoustic cue but were not used during training? Ultimately, the goal of any training program is to have training-related gains in performance generalize to stimuli and situations outside of the laboratory. Therefore, we used a training and transfer paradigm initially designed by McClaskey, Carrell, and Pisoni.44 In this experiment, young normal-hearing listeners were taught to identify an unfamiliar VOT cue that distinguished two prevoiced /ba/ syllables. This prevoiced cue is not used phonemically in the English language and therefore was a new temporal distinction to be learned by the native English speakers. Prior to training, both stimuli (-20-ms and -10-ms VOT sounds) were described by listeners as sounding like /ba/, and identification and discrimination scores approximated chance. Following 5 days of training (during which time individuals were trained to arbitrarily identify the -20-ms VOT as "mba" and the -10-ms VOT as "ba"), perception improved. Similar to the Kraus et al42 study, the MMN increased in amplitude as performance scores improved. In a control condition, where a separate group was tested and then retested without intervening training, there were no significant perceptual or physiological changes. Importantly, trained perceptual and neural distinctions generalized from one place of articulation (labial /ba/) to another (alveolar /da/). In other words, the perceptual and physiologic gains observed for the labial prevoiced cues generalized to spectrally different (alveolar) stimuli that shared the same VOT distinction. Because participants were not exposed to the alveolar stimuli during training, it is unlikely that the increased MMNs (and improved perception) were related to stimulus exposure.

In a follow-up study,45 we demonstrated that physiological changes occur quite rapidly and precede changes in perception. All individuals showed significant physiological changes in the MMN following one 45-minute training session. The magnitude of the MMN continued to increase with additional training, but the time course of perceptual changes was variable. Although some individuals showed significant improvements after one or two training sessions (fast learners), others required additional training sessions before significant perceptual gains became evident. These results suggest that the auditory system is quite responsive to training, but there is variability across individuals in the ability to make use of these physiological cues. Similar training-related changes in the MMN were later reported by Menning et al,46 who observed enhanced MEG-recorded MMNs following frequency discrimination training.47 Similarly, Atienza et al48 and Gottselig et al49 reported rapid increases in the MMN as participants learned to discriminate frequency patterns.

In short, the MMN studies described here demonstrate that training-related physiological changes do occur in humans and that these changes can be recorded noninvasively using the MMN. A common interpretation of these findings is that training-related changes in the MMN represent improved neural representation of the trained acoustic cue(s), which, in turn, contributes to improved perception. If the underlying assumption is that acoustic cues must first be detected before they can be discriminated, it also follows that improved physiological discrimination must have been preceded by improved physiological detection of the trained cue. For example, if changes in the MMN reflected improved discrimination of VOT, we might then assume that the physiological detection of VOT also changed. To address this issue, an AEP that is sensitive to physical properties contained in speech (such as VOT) was used to examine training-related changes in the neural representation of VOT. As noted earlier, the P1-N1-P2 sequence of responses consists of a small positive wave (P1) and a large negative component (N1), followed by a positive peak (P2) (Fig. 1). Neural generators of the N1 are said to be in the superior temporal gyrus, near the primary auditory cortex.50 The N1 is often described as an exogenous response, meaning that it is sensitive to the physical characteristics of the sound used to evoke the response.


Figure 1 Pre- (thin line) and posttraining (thick line) small positive wave (P1), large negative component (N1), and positive peak (P2) responses (n = 13). Significant increases in P2 amplitude are evident at most midline and frontal electrode sites (Cz, Fz, F3, F4) in response to both trained stimuli. "mba" = -20-ms voice onset time (VOT); "ba" = -10-ms VOT. (From Tremblay K, Kejriwal C, De Nisi J. Auditory training and the N1–P2 complex. Abstract presented at: International Evoked Audiometry Response Study Group Meeting; 2001; Vancouver, Canada.)

As an example, the N1 reflects the physical detection of acoustic changes, including the onset of sound (from silence to sound) and acoustic changes within a sound (such as CV transitions). It is for this reason that the P1-N1-P2 complex has been used to examine the neural representation of perceptually relevant temporal cues such as VOT. Results show that P1-N1-P2 responses reflect incremental changes in VOT.51–53

Monotonic increases in N1 latency correspond in time with incremental increases in VOT. Because P1-N1-P2 responses show distinctly different waveforms when evoked by stimuli that differ by 10-ms VOT, one could ask whether the VOT training effects reported earlier would also be reflected in the P1-N1-P2 complex. If training improves the synchronous representation of the cue being trained, or decorrelates neurons so that they represent each stimulus as being different from one another, one would expect P1-N1-P2 responses (representing each stimulus) to appear different from one another following training.


In other words, prior to training, when both VOT stimuli are perceived to be the same, would P1-N1-P2 responses for stimuli differing by 10 ms of prevoicing appear similar? Following training, when the ability to identify each sound improves, would P1-N1-P2 responses appear different from one another?

In a series of studies,27,54,55 we compared P1-N1-P2 responses evoked by the two prevoiced stimuli (-20-ms and -10-ms VOT) described earlier. Prior to training, when the two stimuli were perceived to be similar, N1 latencies evoked by each stimulus were similar. Following training, significant improvements in perception were seen; however, the expected N1 latency shift characterizing the 10-ms VOT difference between the two stimuli did not occur. Instead, we observed increases in P2 amplitude following training (Fig. 1), a finding that would later be replicated by others.47,48,57–59 In a control condition, where participants were tested and then retested without any intervening training, no significant physiological or perceptual changes were seen.55 These results suggest that training alters physiological activity in some way. Specifically how the brain was altered, and what training-related changes in P2 amplitude reflect, remain points for speculation. One possibility is that N1 latency does not reflect subtle differences in prevoicing, even though it has been shown to reflect short- and long-lag VOTs. If this is true, acoustic representations of prevoicing might have been altered with training, but the N1 was not sensitive to the physiological modifications. It also could be said that mechanisms underlying the N1 are resistant to training. However, there is converging evidence indicating that the N1 (as well as some of its subcomponents, for example, N1c) can be modified with training. For example, Menning et al46 found training-related changes in the MEG-recorded N1 following frequency discrimination training. Others, using various forms of training tasks, also report changes in the electrically recorded N1 and/or P2. Brattico et al60 reported increased N1 amplitudes following a brief period of discrimination training.

Bosnyak et al59 reported enhanced P2 responses and increases in the radially oriented N1c as people were trained to discriminate small changes in the carrier frequency of 40-Hz amplitude-modulated pure tones. Finally, Reinke et al57 reported decreased N1 and P2 latencies and enhanced P2 amplitudes as trained listeners improved their ability to complete a vowel segregation task (Fig. 2).

Even though most of the training experiments cited here used what could be described as analytic-type training paradigms, and the reported physiological changes might reflect training-related changes in the sensory representation of the specific trained cue(s), it is also possible that the AEP changes seen with training reflect modified neural activity that is not specific to spectral or temporal coding. For instance, it is interesting to note that consistent training-related changes, affecting the MMN and P1-N1-P2 responses, can be found despite the fact that different stimulus types (e.g., tones, speech), different training tasks (identification, discrimination, vowel segregation), and different measurement tools (EEG or MEG) were used. Moreover, physiological changes occur regardless of whether the AEPs are recorded while the subject is actively doing the training task or days later, after the final training session. This observation might suggest that the physiological changes that coincide with training are not related to sensory-specific acoustic refinements, but rather to some process that is common to all training paradigms. For example, training exercises involve stimulus exposure and various types of attention, memory, decision-making, and task execution. It is possible that neural mechanisms associated with these processes are activated during training and contribute to the posttraining AEP findings.

Sheehan et al61 suggest that training-related increases in P2 amplitude result from repeated stimulus exposure rather than from training itself. Although stimulus exposure is certainly a necessary component of training, there is evidence to suggest that exposure alone is insufficient to account for all of the training-related physiological changes reported in the literature.


Figure 2 Auditory-evoked potentials were collected while individuals participated in a vowel segregation task. As perception improved, group mean small positive wave (P1), large negative component (N1), and positive peak (P2) responses also showed training-related changes. N1 and P2 latencies decreased, and P2 amplitude increased (electrode Fz shown). Training-related differences can also be seen in the slow wave (SW). (From Reinke KS, He Y, Wang C, Alain C. Perceptual learning modulates sensory evoked response during vowel segregation. Brain Res Cogn Brain Res 2003;17(3):781–791. Adapted with permission.)

Numerous studies have shown good test-retest reliability for P1, N1, P2,55,62–64 and MMN responses,56,65,66 which suggests that prior exposure to stimuli during Test 1 (the before condition) does not automatically affect the physiological representation of sound during Test 2 (the after condition). Moreover, some research designs have shown that repeated exposure to sound can minimize, rather than enhance, MMN67,68 and P2 responses (Fig. 3).58 Finally, despite being exposed to the same number of stimuli, not all individuals show physiological changes.

Figure 3 Negative component (N1) and positive peak (P2) responses decrease in amplitude with repeated stimulus exposure. Group mean P1-N1-P2 responses as a function of blocks of trials while participants watched a subtitled video of their choice (electrode site Fcz). (From Alain C, Snyder JS, He Y, Reinke KS. Changes in auditory cortex parallel rapid perceptual learning. Cereb Cortex 2006; in press. Adapted with permission).


If the reported training results were solely the product of stimulus exposure, then one might expect the effects of exposure to be comparable for all individuals. However, the effects of stimulus exposure, combined with attention and/or a task, might tell a different story. Although P1-N1-P2 responses are described as exogenous, meaning they are highly dependent on the physical properties of the stimulus used to evoke them, they are also endogenous in that they can be modulated by attention in certain circumstances.39,69,70 Similarly, the MMN is often described as a preattentive, automatic response, but there is evidence suggesting that the MMN is not entirely free from attentional effects.71,72 As described in Tremblay et al,55 it is possible that some of the training-related physiological changes reported here might not reflect the sensory-specific fine-tuning processes described in the animal literature, but instead may reflect other top-down modulatory influences that are activated during focused listening tasks, processes similar to those emphasized in synthetic training programs.

Because EEG and MEG measures are far-field recordings, it is difficult to know exactly where and how training is altering the neural representation of sound in the human brain. However, by using animal and human source models, as well as by manipulating experimental designs, it is possible to disentangle some of the presumed sources and learn more about brain–behavior relationships. Whether the underlying mechanism is acoustic fine-tuning, stimulus exposure, or some form of attention-related process, it is clear that auditory training exercises of different types and forms induce perceptual and physiological changes. These changes can occur quite rapidly, and the effects generalize to stimuli not used during training. Using noninvasive AEP recordings, it is possible to study brain and behavior changes and thereby learn more about experience-related changes in the human auditory system.
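As a rough illustration of how the pre/post amplitude comparisons discussed above are often summarized, the sketch below applies a paired comparison to hypothetical P2 amplitudes measured before and after training. The values and the choice of a paired t-test are illustrative assumptions; they are not taken from the studies cited in this review.

import numpy as np
from scipy import stats

# Hypothetical P2 amplitudes (in microvolts) for the same eight listeners
# measured before and after a block of training sessions (values are made up).
p2_pre = np.array([2.1, 1.8, 2.5, 1.6, 2.9, 2.2, 1.9, 2.4])
p2_post = np.array([2.8, 2.4, 2.9, 2.1, 3.5, 2.6, 2.3, 3.0])

# Paired comparison of pre- versus posttraining amplitudes across participants.
t_stat, p_value = stats.ttest_rel(p2_post, p2_pre)
mean_change = np.mean(p2_post - p2_pre)
print(f"Mean P2 amplitude change: {mean_change:.2f} uV "
      f"(t = {t_stat:.2f}, p = {p_value:.3f})")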

Electrophysiological Evidence of Training-Related Changes in the Hearing-Impaired

Little is known about the physiological effects of auditory training in populations with hearing loss or in people who wear hearing aids and/or cochlear implants. Eventually, as we learn more about the underlying neural mechanisms contributing to training-related AEP changes in normal hearing systems, this information will help us interpret training-related changes in people with hearing disorders. It should be noted, however, that recording AEPs in hearing aid and cochlear implant users comes with some caveats and concerns. Even though the test-retest reliability of AEPs in individuals wearing hearing aids62 or cochlear implants64 is good, the devices themselves are a source of variability.29 To date, we know little about the interaction between device settings and evoked neural activity. For example, when stimuli are presented in sound field, signal processors alter the signal. Hearing aids, for instance, alter stimulus rise time characteristics and signal-to-noise ratios, and introduce amplitude overshoot caused by circuitry activation. These signal modifications likely contribute to aided neural response patterns in ways that are different from unaided recordings. This point might explain the inconsistent findings among the limited number of studies that have recorded cortical evoked potentials in individuals wearing hearing aids.73–77 In most cases, the functional status of the hearing aid and circuitry information were not reported. When the output of a hearing aid is taken into consideration, both expected and unexpected AEP results can be found.62,78–80 For example, when sound is amplified by a hearing aid, neural response patterns should be larger in amplitude (strength) and shorter in latency (neural conduction travel time) when compared with unaided neural responses. This assumption is based on decades of research demonstrating the effects of incremental increases (as small as 2 dB) in stimulus intensity on electrophysiological recordings.81–87 Increasing the intensity of a signal by 20 dB (using hearing aid amplification) should therefore also result in significant changes in the magnitude and timing of synchronous neural activity. However, recent experiments conducted in our laboratory indicate that this is not the case.


In two separate experiments, when 20 dB of gain was provided by a hearing aid, there were no significant differences between unaided and aided electrophysiological response patterns.62,79 These results demonstrate that longstanding principles underlying electrophysiological recordings, which are based on unaided recordings, do not necessarily apply when sound is processed by a hearing aid and then delivered to the auditory system. Presumably, there is some sort of interaction between the way sound is processed through the hearing aid and the way it is subsequently encoded in the auditory system. These concerns likely apply to cochlear implants as well, when sound is presented in sound field and then modified by the speech processor. Therefore, before conclusions about the effects of auditory training on the brain can be made in people with hearing aids or cochlear implants, it is necessary to understand the basic effects of signal processing on the central auditory representation of sound.

CONCLUSION

In 1960, Carhart88 outlined what could be considered a traditional approach to auditory training for adults. At that time, little was known about plasticity of the auditory system, yet researchers and clinicians recognized that perception could be improved with auditory training. Because of recent technologic advances, we are now able to examine the neural mechanisms underlying training-related changes in perception. Converging evidence from animal and human studies indicates that auditory training alters neural activity. However, precisely what, where, and how the brain is altered remain questions for debate. AEPs are currently being used to identify training-related changes in the human auditory system, and there is interest in using these brain measures to guide the rehabilitation of hearing aid and cochlear implant users. AEPs might be helpful in identifying what specific aspects of auditory training are effective, and who might benefit from auditory training. That being said, when sound is processed through a hearing aid (or cochlear implant), the content of the signal is modified, and to date, we know very little about the effect of these modifications on the physiologic encoding of sound. Thus, before conclusions about the effects of auditory training on the brain can be made in people with hearing aids or cochlear implants, it will be necessary to define the basic effects of signal processing on the neural representation of sound.

ACKNOWLEDGMENT

The author thanks Candace Kukino and Katherine Faulkner for their editorial assistance. Also acknowledged is funding from the National Institutes of Health, NIDCD R01DC007705.

ABBREVIATIONS

AEPs: auditory-evoked potentials
CAS: central auditory system
CV: consonant–vowel
EEG: electroencephalography
EP: evoked potential
ERP: event-related potential
fMRI: functional magnetic resonance imaging
Hz: hertz
MEG: magnetoencephalography
MMN: mismatch negativity
VOT: voice onset time

REFERENCES

1. Eggermont JJ. The correlative brain: theory and experiment. In: Friston KJ, ed. Neutral Interaction (Studies of Brain Function). New York: Springer-Verlag; 1990:1–36
2. Steinschneider M, Schroeder CE, Arezzo JC, Vaughan HG. Physiologic correlates of the voice onset time boundary in primary auditory cortex (A1) of the awake monkey: temporal response patterns. Brain Lang 1995;48(3):326–340
3. Steinschneider M, Reser D, Schroeder CE, Arezzo JC. Tonotopic organization of responses reflecting stop consonant place of articulation in primary auditory cortex (A1) of the monkey. Brain Res 1995;674(1):147–152
4. Steinschneider M, Arezzo J, Vaughan JHG. Speech evoked activity in the auditory radiations and cortex of the awake monkey. Brain Res 1982;252(2):353–365
5. Steinschneider M, Volkov IO, Noh MD, Garell PC, Howard MA. Temporal encoding of the voice-onset-time phonetic parameter by field potentials recorded directly from human auditory cortex. J Neurophysiol 1999;82(5):2346–2357


6. Phillips DP, Hall SE. Response timing constraints on the cortical representation of sound time structure. J Acoust Soc Am 1990;88(3):1403–1411
7. Ahissar E, Nagarajan S, Ahissar M, et al. Speech comprehension is correlated with temporal response patterns recorded from auditory cortex. Proc Natl Acad Sci USA 2001;98(23):13367–13372
8. Schreiner CE, Mendelson J, Raggio MW, Brosch M, Krueger K. Temporal processing in cat primary auditory cortex. Acta Otolaryngol Suppl 1997;532:54–60
9. Irvine DRF, Rajan R. Injury- and use-related plasticity in the primary sensory cortex of adult mammals: possible relationship to perceptual learning. Clin Exp Pharmacol Physiol 1996;23(10–11):939–947
10. Willott JF. Changes in frequency representation in the auditory system of mice with age-related hearing impairment. Brain Res 1984;309(1):159–162
11. Willott JF. Aging and the Auditory System. San Diego: Singular; 1991
12. Javel E, Shepherd RK. Electrical stimulation of the auditory nerve. III. Response initiation sites and temporal fine structure. Hear Res 2000;140(1–2):45–76
13. Shepherd RK, Baxi JH, Hardie NA. Response of inferior colliculus neurons to electrical stimulation of the auditory nerve in neonatally deafened cats. J Neurophysiol 1999;82(3):1363–1380
14. Kral A, Hartmann R, Tillein J, Heid S, Klinke R. Hearing after congenital deafness: central auditory plasticity and sensory deprivation. Cereb Cortex 2002;12(8):797–807
15. Chang EF, Merzenich MM. Environmental noise retards auditory cortical development. Science 2003;300(5618):498–502
16. Stelmachowicz PG, Kopun J, Mace A, Lewis DE. The perception of amplified speech by listeners with hearing loss: acoustic correlates. J Acoust Soc Am 1995;98(3):1388–1399
17. Tyler RS, Summerfield AQ. Cochlear implantation: relationships with research on auditory deprivation and acclimatization. Ear Hear 1996;17(3 Suppl):38S–50S
18. Watson CS. Auditory perceptual learning and the cochlear implant. Am J Otol 1991;12(Suppl):73–79
19. Robinson K, Summerfield AQ. Adult auditory learning and training. Ear Hear 1996;17(3 Suppl):51S–65S
20. Robinson K, Gatehouse S. The time course of effects on intensity discrimination following monaural fitting of hearing aids. J Acoust Soc Am 1996;99(2):1255–1258
21. Bakin JS, Weinberger NM. Classical conditioning induces CS specific receptive field plasticity in the auditory cortex of the guinea pig. Brain Res 1990;536(1–2):271–286
22. Barlow HB, Foldiak P. Adaptation and decorrelation in the cortex. In: Miall RMD, Mitchison GJ, eds. The Computing Neuron. New York: Addison-Wesley; 1989:54–72
23. Tremblay K. Beyond the ear: physiological perspectives on auditory rehabilitation. Semin Hear 2005;26(3):127–136
24. Tremblay K. Page ten. Hearing aids and the brain: what's the connection? Hearing J 2006;59:10–16
25. Neuman AC. Central auditory system plasticity and aural rehabilitation of adults. J Rehabil Res Dev 2005;42(4, Suppl 2):169–186
26. Tremblay KL. Central auditory plasticity: implications for auditory rehabilitation. Hearing J 2003;56(1):10–17
27. Tremblay KL, Kraus N. Auditory training induces asymmetrical changes in cortical neural activity. J Speech Lang Hear Res 2002;45(3):564–572
28. Willott JF. Physiological plasticity in the auditory system and its possible relevance to hearing aid use, deprivation effects, and acclimatization. Ear Hear 1996;17(3 Suppl):S66–S77
29. Souza PE, Tremblay KL. New perspectives on assessing amplification effects. Trends Amplif 2006;10:119–143
30. Alcantara JI, Dooley GJ, Blamey PJ, Seligman PM. Preliminary evaluation of a formant enhancement algorithm on the perception of speech in noise for normally hearing listeners. Audiology 1994;33(1):15–27
31. Sweetow R, Palmer CV. Efficacy of individual auditory training in adults: a systematic review of the evidence. J Am Acad Audiol 2005;16(7):494–504
32. Naatanen R, Picton T. The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology 1987;24(4):375–425
33. Naatanen R. The role of attention in auditory information processing as revealed by event-related potentials and other brain measures of cognitive function. Behav Brain Sci 1990;13(2):201–288
34. Hall J. Handbook of Auditory Evoked Responses. Boston, MA: Allyn & Bacon; 1992
35. Stapells DR. Threshold estimation by the tone-evoked auditory brainstem response: a literature meta-analysis. J Speech Lang Path Audiol 2000;24(2):74–83
36. Hyde M. The N1 response and its applications. Audiol Neurootol 1997;2(5):281–307
37. Naatanen R. The mismatch negativity (MMN). In: Attention and Brain Function. Hillsdale, NJ: Lawrence Erlbaum; 1992:136–200


38. Steinschneider M, Dunn M. Electrophysiology in Developmental Neuropsychology. 2nd ed. Amsterdam, The Netherlands: Elsevier Science; 2002:91–146
39. Woods DL. The component structure of the N1 wave of the human auditory evoked potential. Electroencephalogr Clin Neurophysiol Suppl 1995;44:102–109
40. Burkard R, Don M, Eggermont J. Auditory Evoked Potentials: Basic Principles and Clinical Application. 1st ed. Philadelphia: Lippincott Williams & Wilkins; 2006
41. Martin BA, Tremblay K, Stapells DR. Principles and applications of cortical evoked potentials. In: Burkard R, Don M, Eggermont J, eds. Auditory Evoked Potentials: Basic Principles and Clinical Application. 1st ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2006:482–507
42. Kraus N, McGee T, Carrell T, King C, Tremblay K, Nicol N. Central auditory system plasticity associated with speech discrimination training. J Cogn Neurosci 1995;7:27–32
43. Tremblay K, Kraus N, Carrell TD, McGee T. Central auditory system plasticity: generalization to novel stimuli following listening training. J Acoust Soc Am 1997;102(6):3762–3773
44. McClaskey CL, Pisoni DB, Carrell TD. Transfer of training of a new linguistic contrast in voicing. Percept Psychophys 1983;34(4):323–330
45. Tremblay K, Kraus N, McGee T. The time course of auditory perceptual learning: neurophysiological changes during speech-sound training. Neuroreport 1998;9(16):3557–3560
46. Menning H, Roberts L, Pantev C. Plastic changes in the auditory cortex induced by intensive frequency discrimination training. Neuroreport 2000;11(4):817–822
47. Menning H, Imaizumi S, Zwitserlood P, Pantev C. Plasticity of the human auditory cortex induced by discrimination learning of non-native, mora-timed contrasts of the Japanese language. Learn Mem 2002;9(5):253–267
48. Atienza M, Cantero JL, Dominguez-Marin E. The time course of neural changes underlying auditory perceptual learning. Learn Mem 2002;9(3):138–150
49. Gottselig JM, Brandeis D, Hofer-Tinguely G, Borbely AA, Achermann P. Human central auditory plasticity associated with tone sequence learning. Learn Mem 2004;11(2):162–171
50. Picton TW, Alain C, Woods DL, et al. Intracerebral sources of human auditory-evoked potentials. Audiol Neurootol 1999;4(2):64–79

52. Tremblay KL, Piskosz M, Souza P. Aging alters the neural representation of speech-cues. Neuroreport 2002;13(15):1865–1870 53. Tremblay KL, Piskosz M, Souza P. Effects of age and age-related hearing loss on the neural representation of speech cues. Clin Neurophysiol 2003; 114(7):1332–1343 54. Tremblay K, Kejriwal C, De Nisi J. Auditory training and the N1–P2 complex. Abstract presented at: International Evoked Audiometry Response Study Group Meeting; July 22–27, 2001; Vancouver, Canada 55. Tremblay K, Kraus N, McGee T, Ponton C, Otis B. Central auditory plasticity: changes in the N1– P2 complex after speech-sound training. Ear Hear 2001;22(2):79–90 56. Tremblay K, Kraus N, Carrell TD, McGee T. Central auditory system plasticity: generalization to novel stimuli following listening training. J Acoust Soc Am 1997;102(6):3762–3773 57. Reinke KS, He Y, Wang C, Alain C. Perceptual learning modulates sensory evoked response during vowel segregation. Brain Res Cogn Brain Res 2003;17(3):781–791 58. Alain C, Snyder JS, He Y, Reinke KS. Changes in auditory cortex parallel rapid perceptual learning. Cereb Cortex 2006; in press 59. Bosnyak DJ, Eaton RA, Roberts LE. Distributed auditory cortical representations are modified when non-musicians are trained at pitch discrimination with 40 Hz amplitude modulated tones. Cereb Cortex 2004;14(10):1088–1099 60. Brattico E, Tervaniemi M, Picton TW. Effects of brief discrimination-training on the auditory N1 wave. Neuroreport 2003;14(18):2489–2492 61. Sheehan KA, McArthur GM, Bishop DV. Is discrimination training necessary to cause changes in the P2 auditory event-related brain potential to speech sounds? Brain Res Cogn Brain Res 2005; 25(2):547–553 62. Tremblay KL, Billings CJ, Friesen LM, Souza PE. Neural representation of amplified speech sounds. Ear Hear 2006;27(2):93–103 63. Tremblay KL, Friesen L, Martin BA, Wright R. Test-retest reliability of cortical evoked potentials using naturally produced speech sounds. Ear Hear 2003;24(3):225–232 64. Friesen LM, Tremblay KL. Acoustic change complexes (ACC) recorded in adult cochlear implant listeners. Ear Hear 2006;27:678–685 65. Escera C. New clinical applications of brain evoked potentials: mismatch negativity. Med Clin (Barc) 1997;108(8):701–708 66. Pekkonen E, Rinne T, Naatanen R. Variability and replicability of the mismatch negativity. Electroencephalogr Clin Neurophysiol 1995;96(6):546– 554


67. Paavilainen P, Cammann R, Alho K, et al. Event-related potentials to pitch change in an auditory stimulus sequence during sleep. Electroencephalogr Clin Neurophysiol Suppl 1987;40:246–255
68. Teismann IK, Soros P, Manemann E, et al. Responsiveness to repeated speech stimuli persists in left but not right auditory cortex. Neuroreport 2004;15(8):1267–1270
69. Hillyard SA, Hink RF, Schwent VL, Picton TW. Electrical signs of selective attention in the human brain. Science 1973;182(108):177–180
70. Woldorff MG, Hackley SA, Hillyard SA. The effects of channel-selective attention on the mismatch negativity wave elicited by deviant tones. Psychophysiology 1991;28(1):30–42
71. Alain C, Woods DL. Attention modulates auditory pattern memory as indexed by event-related brain potentials. Psychophysiology 1997;34(5):534–546
72. Alain C, Izenberg A. Effects of attentional load on auditory scene analysis. J Cogn Neurosci 2003;15(7):1063–1073
73. Gravel J, Kurtzberg D, Stapells DR, Vaughan HG, Wallace IF. Case studies. Semin Hear 1989;10(3):272–287
74. Kurtzberg D. Cortical event-related potentials assessment of auditory system function. Semin Hear 1989;10:252–261
75. Kraus N, McGee T. Auditory event-related potentials. In: Katz J, ed. Handbook of Clinical Audiology. 4th ed. Baltimore, MD: Williams and Wilkins; 1994:403–406
76. Rapin I. Evoked responses to clicks in a group of children with communication disorders. Ann NY Acad Sci 1964;112:182–203
77. Sharma A, Martin K, Roland P, et al. P1 latency as a biomarker for central auditory development in children with hearing impairment. J Am Acad Audiol 2005;16(8):564–573
78. Korczak PA, Kurtzberg D, Stapells DR. Effects of sensorineural hearing loss and personal hearing aids on cortical event-related potential and behavioral measures of speech-sound processing. Ear Hear 2005;26(2):165–185
79. Billings C, Tremblay KL, Souza PE. Effects of amplification and stimulus intensity on cortical auditory evoked potentials. Audiol Neurootol 2007; in press
80. Tremblay K, Kalstein L, Billings C, Souza PE. The neural representation of consonant-vowel transitions in adults who wear hearing aids. Trends Amplif 2006;10:155–162
81. Adler G, Adler J. Influence of stimulus intensity on AEP components in the 80- to 200-millisecond latency range. Audiology 1989;28(6):316–324
82. Adler G, Adler J. Auditory stimulus processing at different stimulus intensities as reflected by auditory evoked potentials. Biol Psychiatry 1991;29(4):347–356
83. Beagley HA, Knight JJ. Changes in auditory evoked response with intensity. J Laryngol Otol 1967;81(8):861–873
84. Beagley HA, Knight JJ. The auditory evoked cortical response as an index of hearing in practical audiometry. J Laryngol Otol 1967;81(3):347–351
85. Martin BA, Boothroyd A. Cortical, auditory, evoked potentials in response to changes of spectrum and amplitude. J Acoust Soc Am 2000;107(4):2155–2161
86. Picton TW, Goodman W, Bryce D. Amplitude of evoked responses to tones of high intensity. Acta Otolaryngol 1970;70(2):77–82
87. Rapin I, Schimmel H, Tourk LM, Krasnegor NA, Pollak C. Evoked responses to clicks and tones of varying intensity in waking adults. Electroencephalogr Clin Neurophysiol 1966;21(4):335–344
88. Carhart R. Auditory Training. New York: Holt, Rinehart and Winston; 1960

78. Korczak PA, Kurtzberg D, Stapells DR. Effects of sensorineural hearing loss and personal hearing aids on cortical event-related potential and behavioral measures of speech-sound processing. Ear Hear 2005;26(2):165–185 79. Billings C, Tremblay KL, Souza PE. Effects of amplification and stimulus intensity on cortical auditory evoked potentials. Audiology NeuroOtology 2007: in press 80. Tremblay K, Kalstein L, Billings C, Souza PE. The neural representation of consonant-vowel transitions in adults who wear hearing aids. Trends Amplif 2006;10:155–162 81. Adler G, Adler J. Influence of stimulus intensity on AEP components in the 80- to 200- millisecond latency range. Audiology 1989;28(6):316–324 82. Adler G, Adler J. Auditory stimulus processing at different stimulus intensities as reflected by auditory evoked potentials. Biol Psychiatry 1991;29(4): 347–356 83. Beagley HA, Knight JJ. Changes in auditory evoked response with intensity. J Laryngol Otol 1967;81(8):861–873 84. Beagley HA, Knight JJ. The auditory evoked cortical response as an index of hearing in practical audiometry. J Laryngol Otol 1967;81(3):347–351 85. Martin BA, Boothroyd A. Cortical, auditory, evoked potentials in response to changes of spectrum and amplitude. J Acoust Soc Am 2000; 107(4):2155–2161 86. Picton TW, Goodman W, Bryce D. Amplitude of evoked responses to tones of high intensity. Acta Otolaryngol 1970;70(2):77–82 87. Rapin I, Schimmel H, Tourk LM, Krasnegor NA, Pollak C. Evoked responses to clicks and tones of varying intensity in waking adults. Electroencephalogr Clin Neurophysiol 1966;21(4):335–344 88. Carhart R. Auditory Training. New York: Holt, Rinehart and Winston; 1960