Unless otherwise noted, the publisher, which is the American Speech-Language-Hearing Association (ASHA), holds the copyright on all materials published in Perspectives on Speech Science and Orofacial Disorders, both as a compilation and as individual articles. Please see Rights and Permissions for terms and conditions of use of Perspectives content: http://journals.asha.org/perspectives/terms.dtl

Lessons From the Neural Bases of Speech and Voice

Christy L. Ludlow
Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA

Abstract

The premise of this article is that increased understanding of the brain bases for normal speech and voice behavior will provide a sound foundation for developing therapeutic approaches to establish or re-establish these functions. The neural substrates involved in speech/voice behaviors, the types of muscle patterning for speech and voice, the brain networks involved and their regulation, and how they can be externally modulated for improving function will be addressed.

Functional Neural Networks

Our understanding of brain functioning for complex behaviors such as speech/voice has changed over the last 2 decades with the increased use of non-invasive functional neuroimaging methods such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), along with anatomical imaging techniques such as diffusion tensor imaging (DTI). Structural and functional interconnections among a variety of brain regions characterize functions such as speech/voice and breathing in both healthy adults and patients with idiopathic speech/voice disorders (Chang, Kenney, Loucks, & Ludlow, 2009; Simonyan & Ludlow, 2010; Simonyan, Ostuni, Ludlow, & Horwitz, 2009). Complex neural networks involving positive and negative interactions among multiple brain regions supporting highly skilled motor behaviors such as speech/voice have been defined in normal adults (Riecker et al., 2005). This construct represents a major change in our thinking about the brain bases for these functions. From the 1960s through the 1990s, investigators examined patients' aphasic or dysarthric symptoms and lesion locations in an attempt to identify areas in the brain that control specific language and speech functions; unfortunately, their efforts met with limited success (Dronkers, 1996; Kertesz, Lesk, & McCabe, 1977). We now understand that these functions rely heavily on complex, interlinked brain networks. Thus, functional neuroimaging in healthy volunteers has changed our conception of the brain bases for these behaviors: they involve the integration of functional systems that require efficient relationships among regions of the brain.

Corticobulbar Interactions

For many orofacial and upper airway behaviors, cortical control of voluntary production must be integrated with brainstem circuits for the successful activation of these automatized behaviors.
Brainstem circuits can be triggered either by peripheral sensory inputs or corticobulbar pathways as a result of cortical networks developed for learned behaviors such as volitional voice production (Jurgens, 2009) and swallowing (Ertekin & Aydogdu, 2003; Ertekin et al., 2001; Figure 1).


Figure 1. A schematic showing both sensory and excitatory corticobulbar influences on a brainstem containing both excitatory and inhibitory interneurons, leading to activation or suppression of activity in a motor neuron pool.

Mammalian vocalization is an innate behavior dependent upon circuits in the pons that can produce characteristic sound patterns (Hage & Jurgens, 2006). In the squirrel monkey, vocalization pattern generators can be triggered by inputs from the periaqueductal gray in the midbrain in response to pain and anger and in response to descending modulation from the anterior cingulate cortex. Unlike non-human primates (Simonyan & Jurgens, 2003), humans have direct cortical inputs to midbrain and brainstem vocalization systems and associated motor neurons (Kuypers, 1958). The capacity to directly modulate vocalization-related systems is deemed necessary for the execution of rapid changes in voicing to differentiate voiced vs. voiceless sound contrasts.

The automatized functioning of brainstem mechanisms is striking in that it involves regular patterning of alternating agonistic and antagonistic muscle systems. This was clearly demonstrated for the vocalization system in a recent study of spontaneous laughter in humans (Luschei, Ramig, Finnegan, Baker, & Smith, 2006). Laryngeal electromyography showed well-defined repeated patterns of alternating bursts between adducting and abducting laryngeal muscles in cycles that were relatively invariant. The type of patterning seen in human laughter (an emotional vocal expression) is similar to the pattern of vocalization that can be evoked through electrical stimulation of the periaqueductal gray in cats (Zhang, Bandler, & Davis, 1995). In the feline model, similar alternating bursts of adductor and abductor laryngeal muscles over a series of cries can be observed. As such, laughter may be triggered from the periaqueductal gray in humans (Luschei et al., 2006), thus activating the vocal pattern generators in the pons to produce the laughter pattern of expression. As laughter in this case was spontaneous, there may not have been cortical involvement. It is entirely possible to ask a human participant to produce laughter volitionally, indicating that cortical input to this system is indeed possible.

The regular and well-defined alternating patterns between agonist and antagonist laryngeal muscles for vocal fold adduction and abduction noted during laughter are very different from the patterning seen during speech production. During speech production, all laryngeal muscles are active and continuously modulated at low levels in a variable fashion for vocal fold adduction and abduction in association with changes in subglottal air pressure (Finnegan, Luschei, & Hoffman, 2000; Poletto, Verdun, Strominger, & Ludlow, 2004). Given the individual variation in muscle patterning for speech production (Poletto et al., 2004), laryngeal muscle control for speech is likely a learned process, with individuals converging on different motor control solutions.


Emotional vocal expressions such as laughter, cries, and shouts are relatively unaffected in some voice disorders such as the spasmodic dysphonias (Bloch, Hirano, & Gould, 1985). Neuroimaging has been used to examine differences in cortical and subcortical networks when syllables are produced in contrast with whimper, a non-speech task (Simonyan & Ludlow, 2010). During speech syllable production, a skill learned from infancy, the pattern of cortical somatosensory activation was overly active in patients with spasmodic dysphonia and tended to relate to the severity of symptom production, while whimper vocalizations were unaffected and did not involve abnormal cortical activation. However, in normal voice production, the speech and vocalization systems may interrelate. When brain activation during voiced and whispered narrations of past events was studied using PET, the brain activation network during voiced production correlated with the level of activation in the periaqueductal gray (PAG; Schulz, Varga, Jeffires, Ludlow, & Braun, 2005). No such interactions of cortical activity with the PAG were found during the whisper condition. Thus, the emotional vocalization system, including PAG activity, appears to be integrated to some degree with the cortically based system in normal adults during speech expression. However, in some disorders such as spasmodic dysphonia, the cortically based system of learned speech gestures may be selectively affected while the vocalization system for laughter and cry is unaffected.

By adulthood, speech production has become automatic and speech execution is implicitly controlled as patterned movement trajectories (Smith, Johnson, McGillem, & Goffman, 2000). Speech execution skill takes relatively long to develop (Smith & Goffman, 1998), with the timing of patterns continuing to be variable until adolescence (Smith & Zelaznik, 2004).
Thus, by adulthood speech is an automatic skilled motor behavior involving a highly efficient neural network across both cortical and subcortical brain regions. It is likely extremely difficult to explicitly relearn speech in adulthood. Consequently, changing speech or voice behaviors in adults may require the induction of changes in the neural network through intense and prolonged practice (Ludlow et al., 2008).

Neural Substrates

Some have discussed how speech production may differ from other oral and vocal behaviors (Weismer, 2006). The topic is controversial and affects determinations about how to alter speech motor control after speech has developed and whether other oral or vocal behaviors might be used to modify speech or re-establish speech after brain injury. As reviewed earlier, functional neuroimaging has changed our conceptual framework of the brain systems involved in speech production. The motor execution system for speech was first addressed by Penfield and Roberts (1959) during their studies of the ability to induce vocalization with electrical stimulation in the left and right hemispheres in patients with intractable epilepsy. Disruption of voice/speech production was possible with electrical stimulation over a large area in either the right or left hemisphere (Penfield & Roberts, 1959), similar to the effects of electrical stimulation disrupting oral naming over a large area of the cortex (Ojemann, 1979; Ojemann & Mateer, 1979). Recent findings of a large distributed network for speech/voice behaviors might indicate that speech can be disrupted if electrical stimulation or injury interferes with efficient functional interrelationships among the various neural substrates involved in these behaviors.
The first indication of the neural substrates essential to speech/voice also came from Penfield and Roberts (1959); they found that stimulation only in a primary motor cortex region above the representation of lip, jaw, and tongue movements could induce voice, and they called it the vocalization area. A meta-analysis of brain imaging studies by Brown and colleagues (Brown, Ngan, & Liotti, 2008) also identified this area as common to studies of voice production. Again, this region was not the region of primary representation of the laryngeal muscles, but rather an integrative area for voice production above the orofacial representation.


Interactions between the vocalization area of primary motor cortex and the inferior frontal gyrus (IFG), a premotor area long associated with speech, were tested during surgery for intractable epilepsy and found to have close cortico-cortical functional coupling. Stimulation in either region induced evoked potentials in the other within both the right and left hemispheres consistently across 14 patients (Greenlee et al., 2004). Transcranial magnetic stimulation using short bursts of 10 Hz stimulation for 1 second was used to determine if and where stimulation disrupted speech on the cortex (Stewart, Walsh, Frith, & Rothwell, 2001). Although stimulation over the primary motor cortex disrupted speech equally on the right and left sides, stimulation of a region anterior to the primary motor cortex (including the IFG) disrupted speech only on the left side (Stewart et al., 2001). A temporal relationship between the premotor and primary motor regions for speech was also shown using magnetoencephalography (Salmelin et al., 1998). Changes in activation in the left premotor region preceded activation in the primary motor cortex by 400 ms during oral reading in healthy volunteers. Surprisingly, the sequence was reversed in adults who stuttered, demonstrating that changes in activation sequencing between the primary motor and the premotor areas might involve a network connectivity abnormality. Besides the tight connectivity between the IFG and the primary motor area, close functional linkage has also been demonstrated between the posterior superior temporal gyrus and the IFG. 
Active online processing of auditory feedback during voice production was demonstrated by Larson and colleagues using a pitch perturbation method; when changes in the voice feedback modified the fundamental frequency by a small pitch shift, speakers automatically adjusted their fundamental frequency in the opposite direction within 100 ms (Burnett, Freedland, Larson, & Hain, 1998; Burnett & Larson, 2002; Larson, Burnett, Kiran, & Hain, 2000). The brain bases for these types of rapid responses were found bilaterally in the superior temporal cortex when speech feedback involved shifted formants (Tourville, Reilly, & Guenther, 2007). Increased activation with auditory perturbation also involved the right prefrontal and primary motor regions. The authors concluded that projections from the posterior superior temporal gyrus to the right frontal cortex were involved in the auditory feedback control of speech. Thus, a large bilaterally distributed network was proposed for online processing of speech production.

Speech vs. Non-Speech Networks

It is important to determine the degree to which the large distributed network of functional modulations between brain regions involving both hemispheres inter-relates with speech processes, such as the controlled exhalation required for speech production. Two studies have examined this issue, both comparing volitionally controlled breathing with speech production. The first study compared syllable repetition using a glottal stop and vowel repetition with prolonged exhalation (a voiceless sigh) over a constant time period of 3 seconds (Loucks, Poletto, Simonyan, Reynolds, & Ludlow, 2007). When each condition was compared with rest, both the glottal stop and vowel conditions showed greater activation in the left hemisphere within the IFG and the primary motor cortex vocalization area. However, the posterior superior temporal gyrus (pSTG) was active only on the left side during speech, but not during prolonged exhalation. This result might have been due to the greater sound pressure level of the voiced speech signal over the sigh. Therefore, Loucks et al. conducted a second experiment whereby subjects produced speech-like phonatory and whispered syllable repetitions and nonspeech vocalizations (whimper) during one session with normal auditory feedback and another with auditory masking. The same repeated syllables were used for both the phonatory and whisper tasks. In the normal feedback condition, the comparison between phonated and whispered speech (when there was a difference in sound pressure level) showed no differences in activation within the pSTG on either side of the brain. However, during the contrast between the phonated syllables and whimper (when the sound pressure levels of the two stimuli were equivalent), there was significantly greater activation in the left pSTG in the phonated


syllables condition. Thus, the greater activation occurred in the pSTG during speech, not because of greater sound pressure levels in the feedback signal, but rather because the task involved actual speech (greater activation) compared to a non-speech whimper (Loucks et al., 2007). This issue was further examined in a study characterizing the functional and structural networks for speech syllable repetition compared to voluntary breathing (Simonyan et al., 2009). Functional connectivity networks were computed from psychophysiological interactions with the cytoarchitectonic locations of the maximum peak activations for the speech and breathing tasks. The center of peak activation was used as the seed to extract time series data during each task (contrasted against a silent fixation task). The functional maps showed left laterality during the speech syllable production tasks, but not during the voluntary breathing task. When diffusion tensor imaging was used to examine differences in probabilistic tractography using the seeds from the speech and breathing tasks within the left and right hemispheres, researchers found no laterality effects for either the speech or breathing seeds. This suggested that the anatomical substrates did not differ across the two hemispheres, but that speech motor execution involved a larger network of neural regions in the left hemisphere.

Speech perception is critical to the emergence of speech production, and theoretical models have proposed rapid modulation between speech perception and speech production regions in the brain (Guenther, 2006; Guenther, Ghosh, & Tourville, 2006; Hickok & Poeppel, 2000). Several studies have demonstrated production-perception interactions. Numminen and colleagues showed a delay in auditory cortical responses to tones presented during speech reading that was not present during listening (Numminen, Salmelin, & Hari, 1999).
Other studies have demonstrated reduced activation in the speech perception regions during speech production compared to listening (Houde, Nagarajan, Sekihara, & Merzenich, 2002; Ventura, Nagarajan, & Houde, 2009). However, these relationships between speech production and perception may not be limited to speech and may be present for other oral motor behaviors (Aliu, Houde, & Nagarajan, 2009). Structural anatomical asymmetries have been shown in the arcuate fasciculus on the right and left sides and are proposed to be the brain basis for speech perception and production linkages (Catani, Jones, & ffytche, 2005; Nucifora, Verma, Melhem, Gur, & Gur, 2005). However, if there are neural substrates supporting speech production and perception that are greater in the left hemisphere, such structural asymmetries might also support greater linkages in the left hemisphere for perception-production relationships between oral production and perception of non-speech sounds. This was tested on a functional basis for non-speech sound production in healthy adults (Chang, Kenney, Loucks, Poletto, & Ludlow, 2009). Non-speech sound productions included cough, sigh, singing a tone, raspberry noises, kiss, snort, laugh, tongue click, whistle, and cry. All productions were familiar vocal sounds to the subjects and were easily recognized and imitated, although they constitute a set of behaviors far less practiced than typical speech syllables. In 34 adults, the brain activation patterns for meaningless syllables and non-speech noises showed that the same neural substrates were activated during perception and production, with left hemisphere dominance for both. Activation in the right and left pSTG was the same during perception of speech and non-speech sounds, as was activation in the opercular regions during production of speech and non-speech sounds (Chang et al., 2009).
The same neural substrates with left hemisphere predominance were used for both speech and non-speech vocal tract sound perception and production. Thus, neural substrates within the speech system could be activated by nonspeech vocal tract productions and might be a way of re-activating neural substrates in patients with speech disorders.

Alternate Methods To Re-activate Speech/Voice Systems

Little attention has been given to translating current understanding of the neural basis of speech/voice into the development of treatment strategies to enhance recovery after brain injury or during neurological disease. In contrast, such knowledge is being used to inform


rehabilitation practices with non-fluent aphasia. Studies of post-stroke recovery of naming skills (with therapy) have demonstrated that the degree of recovery depends upon the extent of preservation of neural tissue in the left hemisphere and the degree to which these tissues were activated by the therapy (Fridriksson, 2010). Focusing training on behaviors that will reactivate the original brain substrates involved in the target behavior may be most effective. Constraint-induced movement therapy forces the patient to use the arm contralateral to the brain injury by constraining the limb contralateral to the unaffected hemisphere. This principle was applied to aphasia using intensive therapy in everyday communication settings (Pulvermuller et al., 2001) and was later shown to be effective in activating previously dysfunctional perilesional areas in the left hemisphere (Meinzer et al., 2008). If re-activating the original neural substrates can enhance recovery of function, one approach might be to suppress compensatory regions in the right hemisphere. Applying transcranial magnetic stimulation (TMS) to the right hemisphere at a slow rate of 1 Hz at 90% of motor threshold may suppress brain function on the right. In non-fluent aphasia, repetitive TMS applied over the right homologue of Broca's area was used to de-recruit right hemisphere brain areas in a few cases (Naeser et al., 2005). In one example, a patient with severe non-fluent aphasia more than 5 years post-onset was treated with right hemisphere TMS and showed improvements, not just in naming, but in overall speech fluency (Hamilton et al., 2010). When two patients were contrasted, one responsive and one not responsive to right hemisphere TMS therapy, the responsive patient showed transfer of speech function back to the left hemisphere, while the non-responsive patient did not (Martin et al., 2009).
A recent study found that right hemisphere TMS at 1 Hz had a concomitant effect of increasing excitability in the left hemisphere (Barwood et al., 2011). Alternatively, brain function may be enhanced in a region during task performance using repetitive TMS in short bursts (theta stimulation) to enhance excitability during movement preparation (Stinear et al., 2009).

Another avenue to the recovery of speech function in the left hemisphere may be the use of alternate behaviors to re-activate brain mechanisms for speech in the left hemisphere. Melodic intonation therapy uses singing to re-activate residual speech mechanisms in the left hemisphere. Using PET, investigators showed that, in 7 patients without recovery prior to therapy, attempts to speak activated right hemisphere regions; after melodic intonation therapy, behavioral improvement was associated with recovery of activation in the left prefrontal areas (Belin et al., 1996). The mechanisms underlying this re-activation of left hemisphere speech mechanisms with melodic intonation therapy remain unclear. In six selected cases of non-fluent aphasia subsequent to large left hemisphere lesions, DTI was used to measure the organization of the arcuate fasciculus on the right side before and after 75 sessions of intensive melodic intonation therapy (Schlaug, Marchina, & Norton, 2009). The number of white matter fibers in the right arcuate fasciculus increased and was positively related to the degree of speech recovery across the 6 patients studied. Perhaps other non-speech vocal behaviors could also be used to re-activate speech production. As pointed out previously, the brain networks involved in speech and voice are complex, and different therapeutic approaches may facilitate recovery of function by accessing this network in a variety of ways (Hillis, 2007).
Perhaps changes in speech and voice behaviors can be enhanced using other audiovocal non-speech behaviors previously shown to activate similar brain mechanisms to speech in the left hemisphere (Chang et al., 2009). In combination with using TMS to suppress right hemisphere compensation and/or enhancing excitability in a region during preparation for speech with rapid theta stimulation, these approaches need to be explored for speech and voice intervention (Figure 2).


Figure 2. A schematic of potential mechanisms for re-activating speech/voice circuits after brain injury or neurological disease. Transcranial magnetic stimulation (TMS) may be used to increase the excitability of the left hemisphere speech mechanisms, which include the vocalization system in the primary motor cortex and pre-motor areas. Both somatosensory and auditory targets can interact to modulate the speech motor output. Nonspeech vocal behaviors shown to co-activate the same system might be used as behavioral targets, as these have been shown to activate the same neural substrates, while 1 Hz TMS can be used to suppress homologue regions in the right hemisphere.

Conclusions

Speech/voice brain systems are complex and are widely distributed in the cortex with tight coupling with brainstem patterns. In contrast to the innate vocalizations of most mammals that are localized in the PAG, pons, and brainstem, speech motor control takes a long time to be learned, depends upon auditory feedback, and becomes a highly skilled automatic function by adulthood. Therefore, once established, speech behaviors in the adult may be resistant to change under either normal conditions or following a disturbance by neurological disease or brain injury. New avenues of neurorehabilitation using other audiovocal behaviors and brain stimulation to reactivate the speech network while suppressing compensatory brain mechanisms need to be explored.

References

Aliu, S. O., Houde, J. F., & Nagarajan, S. S. (2009). Motor-induced suppression of the auditory cortex. Journal of Cognitive Neuroscience, 21(4), 791-802.
Barwood, C. H., Murdoch, B. E., Whelan, B. M., Lloyd, D., Riek, S., O'Sullivan, J. D., Coulthard, A., & Wong, A. (2011). Modulation of N400 in chronic non-fluent aphasia using low frequency repetitive transcranial magnetic stimulation (rTMS). Brain and Language, 116(3), 125-135.
Belin, P., Van Eeckhout, P., Zilbovicius, M., Remy, P., Francois, C., Guillaume, S., Chain, F., Rancurel, G., & Samson, Y. (1996). Recovery from nonfluent aphasia after melodic intonation therapy: A PET study. Neurology, 47(6), 1504-1511.
Bloch, C. S., Hirano, M., & Gould, W. J. (1985). Symptom improvement of spastic dysphonia in response to phonatory tasks. Annals of Otology, Rhinology and Laryngology, 94, 51-54.


Brown, S., Ngan, E., & Liotti, M. (2008). A larynx area in the human motor cortex. Cerebral Cortex, 18(4), 837-845.
Burnett, T. A., Freedland, M. B., Larson, C. R., & Hain, T. C. (1998). Voice F0 responses to manipulations in pitch feedback. Journal of the Acoustical Society of America, 103(6), 3153-3161.
Burnett, T. A., & Larson, C. R. (2002). Early pitch-shift response is active in both steady and dynamic voice pitch control. Journal of the Acoustical Society of America, 112(3 Pt. 1), 1058-1063.
Catani, M., Jones, D. K., & ffytche, D. H. (2005). Perisylvian language networks of the human brain. Annals of Neurology, 57(1), 8-16.
Chang, S. E., Kenney, M. K., Loucks, T. M., & Ludlow, C. L. (2009). Brain activation abnormalities during speech and non-speech in stuttering speakers. NeuroImage, 46(1), 201-212.
Chang, S. E., Kenney, M. K., Loucks, T. M., Poletto, C. J., & Ludlow, C. L. (2009). Common neural substrates support speech and non-speech vocal tract gestures. NeuroImage, 47(1), 314-325.
Dronkers, N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384, 159-161.
Ertekin, C., & Aydogdu, I. (2003). Neurophysiology of swallowing. Clinical Neurophysiology, 114(12), 2226-2244.
Ertekin, C., Kiylioglu, N., Tarlaci, S., Turman, A. B., Secil, Y., & Aydogdu, I. (2001). Voluntary and reflex influences on the initiation of swallowing reflex in man. Dysphagia, 16(1), 40-47.
Finnegan, E. M., Luschei, E. S., & Hoffman, H. T. (2000). Modulations in respiratory and laryngeal activity associated with changes in vocal intensity during speech. Journal of Speech, Language and Hearing Research, 43(4), 934-950.
Fridriksson, J. (2010). Preservation and modulation of specific left hemisphere regions is vital for treated recovery from anomia in stroke. Journal of Neuroscience, 30(35), 11558-11564.
Greenlee, J. D., Oya, H., Kawasaki, H., Volkov, I. O., Kaufman, O. P., Kovach, C., Howard, M. A., & Brugge, J. F. (2004). A functional connection between inferior frontal gyrus and orofacial motor cortex in human. Journal of Neurophysiology, 92(2), 1153-1164.
Guenther, F. H. (2006). Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39(5), 350-365.
Guenther, F. H., Ghosh, S. S., & Tourville, J. A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96, 280-301.
Hage, S. R., & Jurgens, U. (2006). Localization of a vocal pattern generator in the pontine brainstem of the squirrel monkey. European Journal of Neuroscience, 23(3), 840-844.
Hamilton, R. H., Sanders, L., Benson, J., Faseyitan, O., Norise, C., Naeser, M., Martin, P., & Coslett, H. B. (2010). Stimulating conversation: Enhancement of elicited propositional speech in a patient with chronic non-fluent aphasia following transcranial magnetic stimulation. Brain and Language, 113(1), 45-50.
Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138.
Hillis, A. E. (2007). Aphasia: Progress in the last quarter of a century. Neurology, 69(2), 200-213.
Houde, J. F., Nagarajan, S. S., Sekihara, K., & Merzenich, M. M. (2002). Modulation of the auditory cortex during speech: An MEG study. Journal of Cognitive Neuroscience, 14(8), 1125-1138.
Jurgens, U. (2009). The neural control of vocalization in mammals: A review. Journal of Voice, 23(1), 1-10.
Kertesz, A., Lesk, D., & McCabe, P. (1977). Isotope localization of infarcts in aphasia. Archives of Neurology, 34(10), 590-601.
Kuypers, H. G. J. M. (1958). Cortico-bulbar connexions to the pons and lower brainstem in man: An anatomical study. Brain, 81, 364-388.
Larson, C. R., Burnett, T. A., Kiran, S., & Hain, T. C. (2000). Effects of pitch-shift velocity on voice F0 responses. Journal of the Acoustical Society of America, 107(1), 559-564.
Loucks, T. M., Poletto, C. J., Simonyan, K., Reynolds, C. L., & Ludlow, C. L. (2007). Human brain activation during phonation and exhalation: Common volitional control for two upper airway functions. NeuroImage, 36(1), 131-143.


Ludlow, C. L., Hoit, J., Kent, R., Ramig, L. O., Shrivastav, R., Strand, E., Yorkston, K., & Sapienza, C. M. (2008). Translating principles of neural plasticity into research on speech motor control recovery and rehabilitation. Journal of Speech, Language and Hearing Research, 51(1), S240-258.
Luschei, E. S., Ramig, L. O., Finnegan, E. M., Baker, K. K., & Smith, M. E. (2006). Patterns of laryngeal electromyography and the activity of the respiratory system during spontaneous laughter. Journal of Neurophysiology, 96(1), 442-450.
Martin, P. I., Naeser, M. A., Ho, M., Doron, K. W., Kurland, J., Kaplan, J., Wang, Y., Nicholas, M., Baker, E. H., Fregni, F., & Pascual-Leone, A. (2009). Overt naming fMRI pre- and post-TMS: Two nonfluent aphasia patients, with and without improved naming post-TMS. Brain and Language, 111(1), 20-35.
Meinzer, M., Flaisch, T., Breitenstein, C., Wienbruch, C., Elbert, T., & Rockstroh, B. (2008). Functional re-recruitment of dysfunctional brain areas predicts language recovery in chronic aphasia. NeuroImage, 39(4), 2038-2046.
Naeser, M. A., Martin, P. I., Nicholas, M., Baker, E. H., Seekins, H., Kobayashi, M., Theoret, H., Fregni, F., Maria-Tormos, J., Kurland, J., Doron, K. W., & Pascual-Leone, A. (2005). Improved picture naming in chronic aphasia after TMS to part of right Broca's area: An open-protocol study. Brain and Language, 93(1), 95-105.
Nucifora, P. G., Verma, R., Melhem, E. R., Gur, R. E., & Gur, R. C. (2005). Leftward asymmetry in relative fiber density of the arcuate fasciculus. Neuroreport, 16(8), 791-794.
Numminen, J., Salmelin, R., & Hari, R. (1999). Subject's own speech reduces reactivity of the human auditory cortex. Neuroscience Letters, 265(2), 119-122.
Ojemann, G. A. (1979). Individual variability in cortical localization of language. Journal of Neurosurgery, 50, 154.
Ojemann, G. A., & Mateer, C. (1979). Cortical and subcortical organization of human communication: Evidence from stimulation studies. In H. D. Steklis & M. J. Raleigh (Eds.), Neurobiology of social communication in primates: An evolutionary perspective. New York, NY: Academic Press.
Penfield, W., & Roberts, L. (1959). Speech and brain mechanisms. Princeton, NJ: Princeton University Press.
Poletto, C. J., Verdun, L. P., Strominger, R., & Ludlow, C. L. (2004). Correspondence between laryngeal vocal fold movement and muscle activity during speech and nonspeech gestures. Journal of Applied Physiology, 97(3), 858-866.
Pulvermuller, F., Neininger, B., Elbert, T., Mohr, B., Rockstroh, B., Koebbel, P., & Taub, E. (2001). Constraint-induced therapy of chronic aphasia after stroke. Stroke, 32(7), 1621-1626.
Riecker, A., Mathiak, K., Wildgruber, D., Erb, M., Hertrich, I., Grodd, W., & Ackermann, H. (2005). fMRI reveals two distinct cerebral networks subserving speech motor control. Neurology, 64(4), 700-706.
Salmelin, R., Schnitzler, A., Schmitz, F., Jancke, L., Witte, O. W., & Freund, H. J. (1998). Functional organization of the auditory cortex is different in stutterers and fluent speakers. Neuroreport, 9(10), 2225-2229.
Schlaug, G., Marchina, S., & Norton, A. (2009). Evidence for plasticity in white-matter tracts of patients with chronic Broca's aphasia undergoing intense intonation-based speech therapy. Annals of the New York Academy of Sciences, 1169, 385-394.
Schulz, G. M., Varga, M., Jeffires, K., Ludlow, C. L., & Braun, A. R. (2005). Functional neuroanatomy of human vocalization: An H215O PET study. Cerebral Cortex, 15(12), 1835-1847.
Simonyan, K., & Jurgens, U. (2003). Efferent subcortical projections of the laryngeal motorcortex in the rhesus monkey. Brain Research, 974(1-2), 43-59.
Simonyan, K., & Ludlow, C. L. (2010). Abnormal activation of the primary somatosensory cortex in spasmodic dysphonia: An fMRI study. Cerebral Cortex, 20(11), 2749-2759.
Simonyan, K., Ostuni, J., Ludlow, C. L., & Horwitz, B. (2009). Functional but not structural networks of the human laryngeal motor cortex show left hemispheric lateralization during syllable but not breathing production. Journal of Neuroscience, 29(47), 14912-14923.
Smith, A., & Goffman, L. (1998). Stability and patterning of speech movement sequences in children and adults. Journal of Speech, Language and Hearing Research, 41(1), 18-30.


Smith, A., Johnson, M., McGillem, C., & Goffman, L. (2000). On the assessment of stability and patterning of speech movements. Journal of Speech, Language and Hearing Research, 43(1), 277-286.
Smith, A., & Zelaznik, H. N. (2004). Development of functional synergies for speech motor coordination in childhood and adolescence. Developmental Psychobiology, 45(1), 22-33.
Stewart, L., Walsh, V., Frith, U., & Rothwell, J. C. (2001). TMS produces two dissociable types of speech disruption. NeuroImage, 13(3), 472-478.
Stinear, C. M., Barber, P. A., Coxon, J. P., Verryt, T. S., Acharya, P. P., & Byblow, W. D. (2009). Repetitive stimulation of premotor cortex affects primary motor cortex excitability and movement preparation. Brain Stimulation, 2(3), 152-162.
Tourville, J. A., Reilly, K. J., & Guenther, F. H. (2008). Neural mechanisms underlying auditory feedback control of speech. NeuroImage, 39(3), 1429-1443.
Ventura, M. I., Nagarajan, S. S., & Houde, J. F. (2009). Speech target modulates speaking induced suppression in auditory cortex. BMC Neuroscience, 10, 58.
Weismer, G. (2006). Philosophy of research in motor speech disorders. Clinical Linguistics and Phonetics, 20(5), 315-349.
Zhang, S. P., Bandler, R., & Davis, P. J. (1995). Brain stem integration of vocalization: Role of the nucleus retroambigualis. Journal of Neurophysiology, 74(6), 2500-2512.

