Schedule

Friday, May 24th
8:45–9:00 – Welcome
9:00–11:00 – Pedagogy
11:00–12:00 – Poster Session I
12:00–13:00 – Lunch
13:00–14:30 – Listening
14:30–15:00 – Break/Poster Session I
15:00–16:30 – Brain
16:30–17:00 – Break/Poster Session I
17:00–18:00 – Keynote

Saturday, May 25th
9:00–10:30 – Rhythm
10:30–11:30 – Poster Session II
11:30–12:30 – Lunch
12:30–14:30 – Musicology I
14:30–15:00 – Break/Poster Session II
15:00–16:30 – Musicology II
16:30–17:00 – Break/Poster Session II
17:00–18:00 – Keynote

Sunday, May 26th
9:00–10:30 – Cues
Further Sunday sessions include Lyrics, Timing and Dynamics, a Keynote, and Closing.
Friday, May 24th 8:45–9:00 – Welcome 9:00–11:00 – Pedagogy Stephen C. Hedger, Serena Klos, Howard C. Nusbaum “Individual differences in the short- and long-term training of absolute pitch” University of Chicago, USA One view of the acquisition of absolute pitch (AP) assumes that early learning plays a critical role. On this view, non-AP adults cannot learn musical note names and become “true” AP possessors. In the current studies, we measure how pitch
acuity correlates to explicit learning of musical notes. Adult (post-sensitive period) participants first engaged in a pitch reproduction task, similar to that used by Ross, Olson, and Gore (2003) to measure auditory sensitivity. Following this task, participants engaged in a musical note learning task, in which they were given a training task on a single octave of piano tones and subsequently tested on these tones as well as on notes from different timbres and octaves (generalization). We found a significant correlation between performance on the pitch reproduction task and generalized learning, suggesting that individual differences in auditory memory might influence the degree to which one can attain AP. Importantly, previous musical experience was not a significant predictor of generalized note learning. Future research will use electrophysiological measures (such as auditory brainstem responses) to track the neural changes that result from both short and long-term training of musical pitches. Ivan Jimenez “Emphasizing Salience: Promoting the Role of Secondary Musical Parameters in Undergraduate Music Theory” University of Pittsburgh, USA Empirical evidence suggests that when most listeners are asked to identify similarity between musical passages or degree of tension, they tend to notice secondary musical parameters (timbre, dynamics, tempo, etc.) more than primary parameters (harmonic, rhythmic-metric structures, etc.) (Cupchik et al., 1982; Lamont & Dibben, 2001; Novello et al., 2006, 2011; Eitan & Granot, 2009; Farbood, 2012). This prioritization of secondary parameters appears to be the result of a perceptual salience inherent to the parameters themselves, perhaps because of their important role in extra-musical auditory perception (processing of speech and identification of environmental sonic cues). Nonetheless, undergraduate music theory textbooks clearly prioritize the study of primary parameters, most particularly harmonic structures. This focus, however, runs the risk of neglecting the role of secondary parameters in the experience of harmony itself and in music in general, thus distancing the study of music theory from the non-specialized listening experience. In this presentation, I lay out empirical evidence supporting the notion that harmony is not as perceptually salient as other musical parameters in non-specialized listening experiences; in addition, I suggest ways in which these findings of empirical research can be used by theory instructors and shared with theory students without threatening the central role of harmony in the theory curriculum. I argue that such openness to the potential problems of over-focusing on harmony encourages students and instructors to identify the most relevant aspects of harmony for non-specialized listening, which in turn can facilitate discussion of the interaction between harmony and other musical parameters. Catherine Massie-Laberge and Isabelle Cossette “Between traditional teaching strategies and modernity” McGill University, Canada Despite the fact that teachers and students consider expression as one of the most crucial aspect of a performer’s skills, it is still often seen as a notion that cannot be taught. Juslin and Karlsson (2008) have demonstrated that traditional teaching strategies of expression often do not provide adequate cognitive feedback to students. 
The recently developed software Feedback-Learning of Musical Expressivity (Feel-ME) enables performers to better understand the relationships between the performer’s intentions and the listener’s perceptions, thus providing informative feedback to students. Such a technological application
has been shown to be effective in improving emotional communication. This project aimed at identifying the different types of strategies used by instrumental instructors to teach expression, but also the limitations of the actual strategies preventing them to systematize their teaching of expression. Musicians tend to have a negative attitude toward using technology for the learning of expressive skills. Therefore, this project also consisted in accumulating more evidence about what the prospects of using computer-based instruction in teaching expressivity are. As teachers’ perception and understanding of expressivity may affect how they teach it, interview questionnaires were conducted to assess both how instrumental music teachers and students define expressivity and how they appraise technological strategies for expressivity teaching. Observations of lessons were analyzed by using the pre-determined codebook by Rostvall and West (2001). Results from this research should permit the eventual development of instrumental teaching strategies of expression that will provide appropriate feedback to facilitate the achievement of more specific goals adapted to current needs. Rita Di Ghent “Expert and Novice Learning Strategies in Music” York University, Canada A professional concert pianist (the “expert”) and two distinguished university music program students (the “novices”) were given the task of preparing a transcribed gender wayang piece for public performance. The novices, only superficially familiar with gender wayang music, were given a written description and recorded musical examples to study two days prior to the observation session. A cognitive science technique called think aloud was used to track the problem-solving strategies of all three participants as they prepared the piece. A protocol analysis of the data was performed revealing compelling results. Although both piano students were considered to be highly skilled, one resembled the concert pianist's high-level problem-solving skills; the other student exhibited low-level problem-solving skills (as defined in the literature on expertise). Recordings of the students’ final performances of the piece were given to a musicologist for assessment, which revealed that the high- versus lowlevel skills set was correlative to the success of the respective performances: the student whose strategies resembled those of the expert clearly “got it”, as evidenced by his performance, whereas the other student did not. The high- and low-level skills tracked in this model study will be discussed as well as its implications for further research and pedagogical application. Note: This study was done in cooperation with Dr. Carl Bereiter and Dr. Marlene Scardamalia at The Centre For Applied Cognitive Science, University of Toronto 11:00–12:00 – Poster Session I Artur C. Jaschke and Erik J.A. Scherder “Music intervention as system: reversing Hyper Systemizing in Autism Spectrum Disorders” VU University Amsterdam, The Netherlands This paper seeks to combine the notion of the empathising-systemising (E-S) theory and the resulting twist from the executive dysfunction theory in autism spectrum conditions in light of music intervention as system. To achieve these points it will be important to re-visit, nonetheless briefly, the above mentioned theories and re-define music intervention in view of these. 
Our understanding of autism in terms of impaired neuropsychological functions calls for alternative ways of improving quality of life for these individuals, moving away
from still more disabling pharmaceutical intrusions. Music intervention offers this alternative through its multifaceted cognitive stimulation; however, it lacks a unified and applicable model. Against this backdrop, there is a need to adjust the executive dysfunction theory into a theory of dysfunctioning executive functions (dEF) in order to understand music intervention in relation to E-S and dEF. This in turn offers insight into the cognitive workings of music on the autistic spectrum. These notions – based on case reports of both clients and therapists as well as on systematic literature reviews – will create a different understanding of music intervention in this framework, laying the groundwork for the development of future and existing music intervention programs applied clinically. These applications will revolve around a structured approach to music intervention as system, proposing five consecutive systems. The paper therefore argues for expanding existing theories in ASD, together with a call for generalised interventions to better assess autism. Theories have to be updated in a time of fast and ever-changing development; a notion that this paper seeks to argue from a clinical, therapeutic, interventional, cognitive and theoretical point of view. Ralph Lorenz “Aural Tracking of Modulations: Does It Matter? Is It Possible?” Kent State University, USA Is the ability to aurally track modulations an important skill for musicians? If so, is it a trainable skill, especially for musicians who do not possess absolute pitch (AP)? Charles Rosen (1927–2012) believed that all musicians should be able to hear long-range key relationships, but Rosen was a possessor of AP. Studies have shown that non-possessors of AP struggle with tracking long-range tonal relationships, despite the finding that musicians exhibit enhanced auditory working memory. In the music theory courses at our institution, we ask students to aurally identify keys in various locations of sonata-form excerpts, but we use passages exemplifying standard key relationships and placements. In this paper, I will expand upon the following approaches and scenarios for aurally tracking modulations: 1) Examples from the early Common Practice Era, in which listeners can assume the use of standard closely related keys. (Example from our classes: Mozart, Symphony No. 19/I) 2) Examples in which the composer makes it obvious that a modulation has been a “trick.” (Example: Beethoven Op. 110/I) 3) For non-possessors of AP, tracking modulations using the technique of continually humming the initial tonic and, with the aid of relative pitch skills, comparing the new key to that tonic. 4) Recognizing keys through absolute pitch and key characteristics. Here I will address my ongoing research (including a self-study with promising results) regarding adult acquisition of absolute pitch vs. quasi-absolute pitch (QAP), and whether this could be a useful pedagogical approach in pursuing the ability to aurally track modulations. Steven Parker “The Effect of Timbre Differences on Musical Intonation Judgment: A Signal Detection Approach” University of Colorado, Boulder, USA Musical intonation judgments for pairs of harmonically complex tones are influenced by spectral characteristics and mediated by musical expertise. Previous psychophysical studies on timbre-intonation interactions have employed sequential stimulus presentations, and the effect of superposition of tones on musical intonation judgment has been largely unexplored.
In this study, a signal detection paradigm was used to assess the relative
contribution of timbre characteristics, heterodyne components, and music expertise in musically germane microintonation judgments. Musicians (n = 19) and nonmusicians (n = 19) performed a categorical intonation judgment task. Stimulus timbre and fundamental frequency (F0) were varied. Pre-recorded instrumental samples (oboe, flute, clarinet, French horn, and a generated sinusoidal wave) were paired to juxtapose stimulus timbres. F0 for judged tones varied between 434.95 and 445.11 Hz (±20 cents), superposed onto 440.00 Hz referent tones; subjects judged the intonation of test tones as “Flat,” “In Tune,” or “Sharp.” The results of this model are shown to improve upon previous predictions of timbre-affected musical intonation judgment. This study extends the application of Signal Detection Theory (SDT) in research on musical expertise. Pedagogical implications for mixed-instrument ensembles are discussed. Ana Luisa Santo “Singing in Tune: Insights from Music Educators and Psychological Researchers” York University, Canada The term “tone-deaf” is used colloquially to describe out-of-tune singers. While this term suggests that these individuals have faulty pitch perception, it is typically used to mean that they have faulty pitch production. Amusia, on the other hand, is a term proposed to refer to a developmental disorder that is characterized primarily by a deficit in pitch perception. An interdisciplinary study was conducted to explore these two conditions and possible reasons for out-of-tune singing. The first component consisted of interviewing four singing teachers and asking them about their experiences with out-of-tune students. An analysis of the interviews revealed common themes, including observations that out-of-tune singers seem to have difficulty hearing differences between pitches and a strong belief that everyone can be taught to sing in tune. The second component reviewed the literature on “tone deafness” and amusia and made a case for visuospatial (VS) deficits as one possible cause of out-of-tune singing. A research experiment was conducted to examine this hypothesis. Results showed that while those with amusia did not have general deficits in VS ability, they do appear to have a significant deficit in VS working memory. The out-of-tune singers did not have this VS deficit, highlighting the separability of the two conditions. The third component examined ideas for bridging the gap between music researchers and music educators and how each discipline can best inform the other in order to achieve the mutual goal of aiding those with music learning challenges. Silvia Velardi “Music training and simultaneous interpreting” This study is part of an ongoing interpreting PhD research project. The approach adopted is interdisciplinary, drawing from two fields never before combined: music and simultaneous interpreting. Interestingly, the link between music and language has intrigued scholars, since it may cast new light on their nature, their evolution, their neural underpinnings, and their dynamics (Besson & Schön 2001; Deutsch 1991; Huron 2001; Koelsch et al. 2004; Levitin 2003; Patel 2003). Music and simultaneous interpreting share features and show similarities; musicians and interpreters respond simultaneously to multiple tasks during their performance (Gile, 1998). Interpreting music and interpreting languages are both highly demanding cognitive tasks.
In light of scientifically based studies affirming the influence of music on both cognition (Moreno 2009) and developmental functions (Stansell 2005), this study aims to investigate cognitive differences and similarities between the two profiles and to explore whether music training methodologies – related to rhythm
mechanisms (Grahn 2009) – improve interpreters’ performance. Musicians divide their attention, practice first-sight playing and memorize music. The underlying question is: can music training serve as preparation for the propaedeutic process of interpreting? An experimental study will be carried out on trainee conference interpreters to observe their interpreting performance, namely in terms of prosody. Speeches will be orally translated during interpreting training sessions, considering directionality and effort-theory parameters (Gile 1995). The aim is to evaluate the trainees’ performance after the music training methodology, showing the results emerging from the data. Yung-Ching Yu “The Effects of Melodic Intonation Therapy on Nonfluent Aphasia” Over the past few decades, an increasing number of studies have clearly shown that music-based interventions can improve mood and contribute to recovery from mental illness and neurological disorders. Recently, striking evidence has come from the recovery of former congresswoman Gabrielle Giffords, whose treatment demonstrates how music helped her pave the road back to language after she suffered aphasia. Furthermore, past experience shows that music making can improve the plasticity of the brain and provide an alternative method for addressing patients’ problems. In 1932, a new therapy, Melodic Intonation Therapy, was developed to treat disorders such as stuttering and aphasia. This therapy can improve patients’ language recovery by using melody and rhythm. How this music-based treatment helps such patients is a popular topic in recent clinical research, and the therapy is also widely applied in current clinical practice. In clinical tests, this treatment has in part proven better than drugs alone. Such research may become mainstream in the future and give patients better medical care. This paper will discuss how this therapeutic method is used to treat persons with nonfluent aphasia and improve their expressive language by engaging undamaged right-hemisphere regions capable of supporting speech. In addition, part of this paper will be devoted to the study of two clinical cases from previous research to support the efficacy of Melodic Intonation Therapy. 12:00–13:00 – Lunch 13:00–14:30 – Listening Thomas Schäfer1, Peter Sedlmeier1, Christine Städtler1, David Huron2 “The psychological functions of music listening” 1 Chemnitz University of Technology, Germany; 2 Ohio State University, USA Why do people listen to music? Over the past several decades, scholars have proposed numerous functions that music might fulfill. However, different theoretical approaches, different methods, and different samples have left a heterogeneous picture regarding the number and nature of the functions of music. And there is still no agreement about the underlying dimensions of these functions. Part one of the paper reviews the contributions that have explicitly referred to the functions of music. It is concluded that a comprehensive investigation of the basic dimensions underlying the plethora of functions of music is still outstanding. Part two of the paper presents an empirical investigation of the hundreds of functions that could be extracted from the reviewed contributions. They could be narrowed down to 129 non-redundant functions and were presented to a large sample of 834 respondents.
Factor analysis suggested three distinct underlying dimensions: People listen to music to regulate arousal and mood, to achieve self-awareness, and as an expression of social
relatedness. The first and second dimensions were judged to be much more important than the third, which contrasts with the idea that music has primarily evolved as a means for social cohesion and communication. The implications of these results for the application of musical stimuli in all areas of psychology and for research in music cognition are discussed. Marina Korsakova-Kreyn “Emotion in music: affective response to reorientation in tonal space” Tonal modulation is the reorientation of a scale on a different tonal center in the same musical composition. Modulation is one of the main structural and expressive aspects of music in the European musical tradition. Although it is known a priori that different degrees of modulation produce characteristic emotional effects, these effects have not yet been thoroughly explored. We conducted two experiments to investigate affective responses to tonal modulation by using semantic differential scales related to valence, synesthesia, potency, and tension. Experiment 1 examined affective responses to modulation to all 12 major and minor keys using 48 brief harmonic progressions. The results indicated that affective response depends on degree of modulation and on the use of the major and minor modes. Experiment 2 examined responses to modulations to the subdominant, the dominant, and the descending major third using a set of 24 controlled harmonic progressions and a balanced set of 24 excerpts from piano compositions belonging to the First Viennese. School and the Romantics; all stimuli were in the major mode to maintain the ecological validity of modulation to the dominant. In addition, Experiment 2 investigated the affective influence of melodic direction in soprano and bass melodic lines. The results agreed with the theoretical model of pitch proximity based on the circle of fifths and demonstrated the influence of melodic direction and musical style on emotional response to reorientation in tonal space. Examining the affective influence of motion along different tonal distances can help deepen our understanding of aesthetic emotion. Gary Yim, “Implicit measures for dynamic musical affect” Ohio State University, USA In assessing listeners' emotional responses to music, implicit measures of affect may be used to avoid demand characteristics associated with self-reported measures. Sollberger, Reber & Eckstein (2002) primed listeners with brief chords in a word categorization task. Reaction times for congruent conditions (i.e.: positive words primed with consonant chords, and negative words with dissonant chords) were faster than for mismatched conditions. Thus, consonant and dissonant chords were implicated with positively and negatively valenced affect, respectively. However, using implicit measures seems ineffective for musical stimuli of longer duration, because priming effects are strong only for brief primes (Rotteveel, de Groot, Geutskens & Phaf 2001). This study implements an experimental design where implicit measures of affect are obtained for lengthier musical stimuli, such as melodies. Participants completed a word categorization task with concurrent musical stimuli, which is framed as background music. They categorized words as either positive or negative, while listening to a) major/minor chord progressions; b) major/minor melodies; or c) happy/sad music. Reaction times were recorded continuously. Data is still being collected. 
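A minimal sketch of how the congruency analysis anticipated below could be run once data collection is complete; pandas/SciPy, the file name, and the column names are illustrative assumptions, not the author's actual pipeline.

```python
# Sketch: compare reaction times for congruent vs. incongruent trials.
# Assumes a trial table with columns: participant, word_valence
# ("positive"/"negative"), music_mode ("major"/"minor"), and rt (seconds).
import pandas as pd
from scipy import stats

trials = pd.read_csv("word_categorization_trials.csv")  # hypothetical file

# A trial is congruent when positive words co-occur with major-mode music
# or negative words with minor-mode music.
trials["congruent"] = (
    ((trials["word_valence"] == "positive") & (trials["music_mode"] == "major"))
    | ((trials["word_valence"] == "negative") & (trials["music_mode"] == "minor"))
)

# Mean RT per participant and condition, then a paired t-test across participants.
per_subject = trials.groupby(["participant", "congruent"])["rt"].mean().unstack()
t, p = stats.ttest_rel(per_subject[True], per_subject[False])
print(f"congruent mean RT = {per_subject[True].mean():.3f} s, "
      f"incongruent = {per_subject[False].mean():.3f} s, t = {t:.2f}, p = {p:.4f}")
```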
It is anticipated that task responses will be faster and more accurate in congruent conditions (i.e.: positive, major, and happy; or negative, minor, and sad), consistent with well-corroborated major-positive and minor-negative associations (e.g.: Hevner 1936). The method implemented here allows for an analysis of affective responses over time for lengthy musical
stimuli. Moreover, the method can be readily adapted to obtain implicit measures for various other affective responses. 14:30–15:00 – Break/Poster Session I 15:00–16:30 – Brain Amy M. Belfi, A.E. Rhone, B. McMurray, H. Oya, H. Kawasaki, and M.A. Howard “The effect of expectancy on musical chord perception: behavioral and intracranial responses” University of Iowa, USA Music holds aesthetic power, in part, because it creates and violates listeners’ expectancies. However, it is not yet clear how this plays out in the brain. Neuroscience research has demonstrated musical expectancy effects in frontal lobe regions (Koelsch et al., 2000), suggesting a cognitive component to evaluating expectancy. However, it is unclear how musical expectancy shapes low-level auditory representations. We tested this using electrocorticography to record from electrodes placed directly on the auditory cortex, in individuals awaiting surgery for epilepsy. Participants heard chord progressions that created expectation for a final chord (see McMurray et al., 2008). Chord progressions were I-IV-V (e.g., C-F-G) followed by either the expected tonic chord (C) or its unexpected minor (Cm), or by the expected relative minor (Am) or its major counterpart (A). Four progressions were used, such that a given final chord (C) could participate as tonic (after C-F-G) or relative minor (after D#-G#-A#). Thus, we observed cortical responses to the same auditory stimulus (the final chord) when it was expected or unexpected, and whether it was the tonic or relative minor. High-gamma responses (rapidly oscillating activity which indicates neural firing) were observed at the onset of the final chord in the superior temporal gyrus. Expected chords and unexpected chords produced high gamma responses in different regions of the temporal lobe, suggesting different loci for processing expected versus unexpected chords. Thus, musical expectancy not only affects higher-order cognitive processes (as would be reflected by responses in the frontal lobes), but even low-level auditory representations. Artur C. Jaschke and Erik J.A. Scherder “Thalamic multisensory integration: creating a neural network map of involved brain areas in music perception, processing and execution” VU University Amsterdam, The Netherlands Music activates a wide array of neural areas involved in different functions besides the perception, processing and execution of music itself. Understanding musical processes in the brain has had multiple implications in the neuro- and health sciences. Engaging the brain with a multisensory stimulus such as music, activates responses beyond the temporal lobe. Brain networks involve the frontal lobes, parietal lobes, the limbic system such as the Amygdala, Hippocampus and thalamus, the cerebellum and the brainstem. Nonetheless, there has been no attempt to summarise all involved brain areas in music into one overall encompassing map. This may well be, as there has been no thorough theory introduced, which would allow an initial point of departure in creating such an atlas. A thorough systematic review has been therefore conducted to identify all mentioned neural connection involved in the perception, processing and execution of music. Tracing the direct responses in the involved brain regions back to its origin (the incoming stimulus through the cochlea),
neural tracts lead nearly exclusively via the thalamic nuclei. Communication between the thalamic nuclei is the initial step in multisensory integration, which lies at the base of the neural networks proposed in this paper. Against this backdrop, this manuscript introduces what is, to our knowledge, the first thorough map of all brain regions involved in the perception, processing and execution of music, motivated by the general need for such a map and the knowledge that can be gained from it. Consequently, placing thalamic multisensory integration at the core of this atlas allowed us to create a preliminary theory to explain the complexity of music-induced brain activation: a single consecutive network encompassing and explaining the connections between all areas, rather than only the areas of interest to particular strands of music-related research. Margaret Moore “The Neuroaesthetics of Musical Beauty: A Philosophical Evaluation” University of Tennessee, USA This paper provides a philosophical evaluation of recent neuroscientific studies claiming to explain our judgments of musical beauty. I argue that while this research is interesting and potentially relevant to aestheticians, at present it suffers from two related difficulties. First, neuroscientific studies of aesthetic judgments of beauty implicitly or explicitly appeal to competing models of aesthetic judgment. These models sometimes seem to follow the 18th-century philosopher Francis Hutcheson (in that aesthetic evaluation is identified with the affective response that results from automatic perceptual processing) and sometimes follow Immanuel Kant (in that a properly aesthetic judgment requires cognitive as well as perceptual and affective processing). Second, these studies suffer from an ambiguity in the musical object of appreciation, failing to distinguish between a response to musical form (requiring higher cognition) and a response to musical sounds (producing an affective response from perceptual processing alone). These two problems are alike in that it is unclear just what the relation between affective and cognitive processes is supposed to be in a properly aesthetic judgment. Further, these tensions in the scientific treatment of musical beauty are reminiscent of Eduard Hanslick; indeed, they repeat debates in music aesthetics regarding form versus feeling as the proper object of musical appreciation. 16:30–17:00 – Break/Poster Session I 17:00–18:00 – Keynote Glenn Schellenberg “Music training and nonmusical abilities” University of Toronto at Mississauga, Canada Music listening can lead to improvements in nonmusical abilities because of improved moods and arousal levels. Music training is a more complicated story. In childhood, music lessons are associated with listening skills, visuospatial abilities, language skills, memory, general intelligence, and academic achievement. Because most research is correlational, however, the direction of causation is unclear. Moreover, associations with professional musicians are inconsistent. Music training does not appear to be linked to social or emotional abilities except when listening is involved. Musically trained individuals tend to have different personalities than untrained individuals, and personality may be a better predictor of music training than cognitive abilities. Because individuals who take music lessons differ from other individuals in terms of demographics, cognitive abilities, and personality, a simple explanation of the available data is that pre-existing differences
influence who takes music lessons and for how long. The burden of proof lies with those who claim that music lessons improve nonmusical abilities in systematic ways. Saturday, May 25th 9:00–10:30 – Rhythm Niall Klyn, Erin T. Allen, YongJeon Cheong, and Udo Will “Short term memorization of vocal and instrumental rhythms and effects of concurrent rhythm tasks” Ohio State University, USA Neurophysiological and imaging studies have shown that the human brain processes speech and non-speech sounds differently (e.g. Belin et al., 2000; Levy et al., 2003), and that vocal and instrumental melodic contours have differential effects on speech processing (Poss et al., 2008; Poss, 2012). A recent imaging study (Hung, 2011) found significant differences in processing of vocal and instrumental rhythms in early auditory pathways (temporal lobe areas). The current study extends this research and asks whether differences between vocal and instrumental rhythms also exist in short term memorization. In experiment one, musicians and non-musicians listened to stimulus pairs containing both vocal and clapstick rhythm patterns. Stimulus pairs were presented in immediate and delayed succession and participants had to make same-different judgments on either the vocal or the instrumental rhythm of each pair while error rates and reaction times were recorded. Results show: musicians perform better than non-musicians on clapstick but not voice rhythm judgments; for delayed presentations both groups exhibit decreased memory performance for voice but not clapstick rhythms, and musicians are better on voice rhythms in the same than in the different condition. Experiment two tests a possible involvement of the phonological loop in the rehearsal strategies for vocal and instrumental rhythm memorization. Participants completed a control run (same as experiment 1), a run with a concurrent sub-vocalization task and a run with a simultaneous finger-tapping task. Results for the musicians show that the concurrent tasks differentially affect vocal and instrumental rhythms as well as same and different decisions (data for non-musicians are currently being collected) and are interpreted as indicating different forms of representation in memory. The fact that vocal and instrumental sounds, melodic contours, and rhythms are processed differently strongly supports ideas about different phylogenetic origins of instrumental and vocal music. Colin Raffel “Quantifying Rhythmic Synchrony” Columbia University While the topics of automatic beat and onset detection have seen a good deal of recent research focus, relatively little work has been done on the computational analysis of rhythmic structure and synchronization. The relative timing of notes played by individual performers in a piece of music can indicate the piece's genre and style as well as the performer's skill and intent. We propose a novel technique for measuring the rhythmic synchronization of a piece of music and test its application in a variety of settings. Our approach first calculates a perceptual strength function using a first order difference of successive Mel-scaled magnitude spectra. The entropy of the cross-correlation of different perceptual strength functions is used to measure their rhythmic similarity. To evaluate this technique, we first show that our measure correlates closely with certain human-annotated tags applied to a variety of music. 
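A rough illustration of the measure described above: the sketch computes a spectral-flux-style perceptual strength function from the first-order difference of a Mel-scaled magnitude spectrogram, then the entropy of the cross-correlation of two such functions. librosa/NumPy, the parameter values, and the file names are assumptions for illustration, not the author's implementation.

```python
# Sketch: entropy of the cross-correlation of two onset-strength functions.
import numpy as np
import librosa

def perceptual_strength(path, sr=22050, n_mels=128, hop_length=512):
    """Half-wave-rectified first-order difference of a Mel magnitude spectrogram,
    summed across bands (a simple spectral-flux-style strength function)."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, hop_length=hop_length)
    flux = np.maximum(0.0, np.diff(mel, axis=1))  # keep frame-to-frame increases only
    return flux.sum(axis=0)

def synchrony(strength_a, strength_b):
    """Entropy (bits) of the normalized cross-correlation: lower entropy means the
    correlation is concentrated at a few lags, i.e. tighter rhythmic synchrony."""
    xcorr = np.maximum(np.correlate(strength_a, strength_b, mode="full"), 0)
    p = xcorr / xcorr.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

score = synchrony(perceptual_strength("performer1.wav"), perceptual_strength("performer2.wav"))
print(f"cross-correlation entropy: {score:.2f} bits")
```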
We also compare our metric with “unity of ensemble sound” and “coherence” scores assigned by expert judges to different performances of a piece of jazz
music. Finally, we show that using the relative entropy in each frequency band can improve beat detection algorithms. Fiona Manning and Michael Schutz “Timing perception and production in expert musicians and nonmusicians” McMaster University, Canada Recent studies have demonstrated that musicians exhibit lower thresholds for perceiving timing deviations in simple rhythmic sequences (Ehrlé & Samson, 2005; Madison & Merker, 2002) in addition to lower tapping variability (Repp, 2010; Franek, et al., 1991) than do nonmusicians. We have previously shown that movement can improve perceived timing, but had not yet explicitly examined the role of musical experience. Here we investigated how musical expertise can impact the relationship between movement and timing perception. We tested a group of highly trained percussionists and a group of nonmusicians on their ability to detect temporal deviations in a sequence. Participants listened to an isochronous sequence and identified whether a final beat was on time or not. On half of the trials the final beat occurred on time and on half of the trials the final beat occurred slightly late. Participants either tapped along with the sequence or listened without tapping. Overall, both groups performed significantly better when moving than when listening alone. As expected, percussionists’ taps were significantly less variable than the taps of nonmusicians. Although the percussionists uniformly outperformed the nonmusicians when moving, when forced to remain still their performance was occasionally worse than that of some non-musicians. This raises interesting questions about the degree to which percussionist expertise in timekeeping is dependent upon motion. These findings complement musical expertise and timing perception literature by demonstrating that movement may differentially affect the timing perception of individuals with high levels of musical experience. 10:30–11:30 – Poster Session II Joshua Albrecht “Empirical Approaches to Defining Affective Expression Terms for the Beethoven Piano Sonatas” University of Mary Hardin-Baylor Studies measuring perceived musical affect have examined different groups of affects. However, it is difficult to compare results between studies using different terms. Several recent projects have addressed this difficulty by building a general-purpose taxonomy of affective musical terms (Zentner, et al 2008, Schubert 2003). However, these studies assume that the same labels can be effectively used for many repertoires. This study seeks to expand this work by developing a taxonomy of terms to use for studying Beethoven’s piano sonatas. The current report compares lists of affective terms gathered from three participantgenerated approaches. In the first study, 28 participants each chose 20 terms deemed most relevant for “early Romantic piano music” from a list of 91 terms taken from three studies of music and emotions. In the second study, 21 participants listened to 15 excerpts from Beethoven’s piano sonatas and chose 5 terms for each excerpt from the same list. Finally, 43 participants provided free-response descriptions of the affect of the same excerpts, subjected to content analysis for derivation of affective terms. Data collection is ongoing, and will result in repertoire-specific participant-generated lists of affective terms. These three lists will then be compared, to determine a) how experimental paradigms influence participantgenerated affective terms and b) what the most appropriate affective terms are for the
Beethoven piano sonatas. The results from this study will provide a taxonomy of affective terms that are repertoire-specific for the Beethoven piano sonatas. Jenine Brown “Hearing Anton Webern’s Concerto for Nine Instruments, Op. 24/iii” Eastman School of Music, USA This study investigates whether listeners implicitly attune to the repetitive adjacent interval patterns found in Webern’s Concerto for Nine Instruments, Op. 24/iii. The composer presents trichords on the musical surface that are a version of set-type [014]. Webern often presents this trichord as -8+11, +8-11, +11-8, or -11+8 (in semitones), which I call the “motive.” Webern also presents [014] with other adjacent intervals, such as -13+4 or +11+4; I call these variants “near-motives.” Participants (n=12) were freshmen or sophomore music majors at the Eastman School of Music and heard a familiarization phase consisting of Webern’s Op. 24/iii, repeated 14 times for a total of 19 minutes of familiarization. Before and after familiarization, listeners rated 34 randomized melodic dyads and trichords on a 1-7 scale. Before familiarization, listeners were asked to rate how often these short melodies occurred in most music. After familiarization, they were asked how often they occurred during the composition. Difference scores for trichordal motives (M=2.08, SE=.25) were significantly higher than the d-score means for other trichords, such as small tonal lures, large tonal lures, and large atonal lures that mimicked the size and dissonance of the motive; thus, the motive was learned. D-scores for near-motives (M=1.07, SE=.31) were also significantly higher than d-score means for small tonal lures and large tonal lures, but were not significantly higher than large atonal lures. D-score means for all trichords correlated with the Trichord Profile at r=0.618 (26 trichords), which illustrates the number of times each trichord occurred during familiarization. Nathaniel Condit-Schultz “A Music Theory of Flow: The Musicality of Rap Delivery” Ohio State University, USA The stylized delivery of the words in relation to the beat, the 'flow,' makes rapping a musical experience even when the words are not understood (Kyle Adams, 2008/2009). This study analyzes the musical attributes of rap delivery. A theory is proposed that rhyme functions like motivic parallelism to embed a separate “higher-level” perceptual stream in the monophonic delivery. Thus rappers use rhyme and other prosodic features to create a multileveled rhythmic structure, akin to the 'implied polyphony' found in some monophonic melodic works (Stacy Davis, 2006/2011). It is argued that rap flow is structured so as to create expectations on multiple levels, which are alternatively fulfilled or thwarted to create a balance of tension and release analogous to that described in traditional music theories of melody and harmony. An empirical study is conducted which tests the validity of the theory by measuring the predictability of rap delivery using techniques from information theory. The entropy of the metric placement of the “higher” stream of rhyming syllables is shown to be modulated by rappers in order to give flow a balance of predictability and surprise. A corpus analysis of popular rap works reveals that the overall entropy in popular rap has increased over the genre's history, from the relatively predictable rhymes of “old-school” rap into the more complex flows of the 1990s new-school. 
This increase in rhythmic entropy reinforces the previously established increase in the complexity and volume of rhyme over the same period (Hussein Hirjee & Daniel Brown, 2010).
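As a worked illustration of the entropy measure referred to above, the sketch below computes the Shannon entropy of the metric slots on which rhyming syllables fall; the slot data are invented for illustration and are not drawn from the corpus.

```python
# Sketch: Shannon entropy of the metric placement of rhyming syllables.
from collections import Counter
from math import log2

def placement_entropy(positions):
    """Entropy (bits) of a distribution over metric slots; higher = less predictable."""
    counts = Counter(positions)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical data: metric slot (0-15, sixteenths within a 4/4 bar) of each rhyme.
old_school = [12, 12, 14, 12, 12, 14, 12, 12]  # rhymes land mostly on the same slots
new_school = [3, 7, 10, 12, 1, 14, 6, 12]      # rhymes spread across the bar

print(f"old-school entropy: {placement_entropy(old_school):.2f} bits")
print(f"new-school entropy: {placement_entropy(new_school):.2f} bits")
```

With these invented placements the second sequence yields a higher entropy, mirroring the historical increase described in the abstract.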
Lincoln G. Craton, C.R. Poirier, D.S. Juergens, H.R. Michalak, K. Ackerman, E. Hackney, S. Hill, M. Tardiff, and S. Waller “Explicit Knowledge of Rock Harmony: The Effect of Musical Training” Stonehill College, USA Rock music employs chords and chord sequences that violate Common Practice (de Clercq & Temperley, 2011; Stephenson, 2002). Previously, we reported evidence that listeners with minimal musical training possess some explicit knowledge of Rock harmony (Craton et al., 2011). Here we attempted to replicate and extend these findings with a larger sample, in order to be able to explore the effect of musical training. Fifty-one participants provided liking and surprise ratings for 31 (major, minor, or dominant 7) target chords played after a brief musical context (major scale or major pentatonic scale + tonic major triad) or a nonmusical context (white noise + silence). The manipulation of greatest interest was harmonic system: 1) traditional diatonic chords were those that would be expected in both Common Practice and Rock music; 2) rock-only diatonic chords, would be unexpected in Common Practice music but expected in Rock; and 3) nondiatonic chords were those lying outside either harmonic system. To the extent that listeners possess explicit harmonic expectations based on Rock harmony, we hypothesized that ratings in the traditional diatonic and the rock-only diatonic conditions will be similar to each other to and more positive than ratings in the nondiatonic condition. We will also be comparing our findings to the frequencies of root motions reported in the corpus analysis by de Clercq & Temperley (2011) and intend to use our ratings data to provide new predictions for corpus analyses of Rock music, specifically for frequencies of root/chord quality combinations. Larissa Padula Ribeiro da Fonseca “The child musical memory: an exploratory study about audiation of timbristic sequences by children between 4 and 12 years” The aim of this research was to check the working memory of 36 Brazilian children for sequences of different timbres as well as their familiarity and preference for small percussion instruments. The participating children were from the city of Salvador, Bahia. Some of them had attended musical classes earlier, and some had not. The examined children were between 4 and 12 years old. Data analysis revealed that the majority of the children, regardless of age, recalled the 8 musical items they had been hearing. A comparison of this result with the studies which assessed the extent of memory for verbal items, shows that the memory for musical items may have a higher capacity. Children which attended music lessons had a better index of familiarity with all instruments, but even those whom did not attend the same classes showed a high level of familiarity with the pandeiro, the triangle and the caxixi, which was probably caused by the cultural context of the city of Salvador. There was a significant index of indifference among the children aged 10 - 12 years. It could be observed that it is necessary to carry out further research and expand them in order to know more about the idiosyncrasies of musical memory in relation to its different perspectives. Olivier Gagnon “Investigation of the influence of harmony on the perception of emotion” This ongoing project of study aims to investigate the influence of various harmonizations on the perception of emotions in music. 
For this purpose, we have composed short musical pieces that evoke five basic emotions -- happiness, tenderness, sadness, fear and anger. The composition of these miniatures was based on the summary of cue utilization in performer's communication of emotion in music (Juslin, 2001, p.315) and on the shared acoustic cues for
emotions in speech and music (Juslin & Laukka, 2003, p.802). These pieces were then set in three different harmonizations, made by superimposing intervals of the same kind: (1) thirds/sixths (tonal/modal harmonies), (2) fourths/fifths, and (3) seconds/sevenths. The last two harmonizations correspond to more modern harmonic systems, which are not often investigated in the field of music perception. A pilot study, including perceptual testing, has been conducted, and the preliminary results showed that the harmony influences the valence of the perceived emotions. More precisely, when the harmonization is based on intervals that are smaller than a major third, the perceived emotions tend to have a more negative valence. In addition, a more dissonant harmonization also tends to make the participants perceive an emotion with a more negative valence. These results confirm our hypothesis, but for the results to be conclusive, more people will have to participate in the experiment. Lúcia de Fátima Ramos Vasconcelos and Adriana Giarola Kayama “Translating song: meter, rhythm and rhymes as structural and expressive factors” UNICAMP, Brazil The main objective of this study is to defend the study of meter, rhythm, syllabic division, and rhymes as structural elements in the translation process of vocal works. The case study is the translation of Arnold Schoenberg's Pierrot Lunaire into Brazilian Portuguese by the poet Augusto de Campos. Based on specific literature along with interviews with Augusto de Campos, the article defends the importance of metric schemes as well as the perception of poetic rhythm, musical rhythm and rhymes, and their significance as an indissoluble unity in the song translating process. The study ends with an analysis presenting a series of examples of solutions found by the translator in various situations and musical languages. We also discuss aspects that facilitate the work of the performer in regard to translated texts in general. 11:30–12:30 – Lunch 12:30–14:30 – Musicology I Brian A. Miller “Coding Schenker: Case Studies in Automated Cadence Detection” University of Kansas Any attempt at computerized musical analysis faces the challenge of translating a musician’s intuition into algorithmic form. Computer languages with musical toolkits and extensive digital corpora provide a powerful platform for such analysis, but tonal methodologies like Schenkerian analysis resist straightforward computerization. The algorithm considered here is designed to detect significant cadential figures based on Schenkerian criteria, particularly including dominant-tonic bass progression and melodic motion with scale degree one as target. Factors ranging from availability and quality of digitized scores to instrumentation-specific analytical considerations complicate such an approach, but it is nonetheless capable of generating useful data much more quickly than a human theorist working by hand. In the first case study, the cadence detection algorithm facilitates corpus-wide analysis and confirmation of some basic assumptions about cadences in Schenkerian theory. Next, the algorithm is adjusted to detect instances of the rare ascending Urlinie as described by David Neumeyer. The second study produces promising results but also highlights and leaves unresolved some of the aforementioned difficulties involved in computerized tonal analysis, suggesting the need for further study in this area.
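The cadence criteria described above can be illustrated schematically. The sketch below flags dominant-to-tonic bass motion coinciding with melodic arrival on scale degree 1, over a simplified event list; the data format is invented for illustration and this is not the author's actual toolchain.

```python
# Sketch: flag candidate cadences as V-I bass motion with the melody landing
# on scale degree 1, over a simplified list of (bass_degree, soprano_degree) events.
from typing import List, Tuple

Event = Tuple[int, int]  # (bass scale degree, soprano scale degree), 1-7

def find_cadences(events: List[Event]) -> List[int]:
    """Return indices where a dominant-to-tonic bass step (5 -> 1) coincides
    with melodic arrival on scale degree 1."""
    hits = []
    for i in range(1, len(events)):
        prev_bass, _ = events[i - 1]
        bass, soprano = events[i]
        if prev_bass == 5 and bass == 1 and soprano == 1:
            hits.append(i)
    return hits

# Hypothetical phrase: bass 1-4-5-1 with the soprano descending 3-2-1 at the end.
phrase = [(1, 3), (4, 4), (5, 2), (1, 1)]
print(find_cadences(phrase))  # -> [3]
```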
Jenine Brown “The Psychological Representation of Trichords in a Twelve-Tone Context” Eastman School of Music, USA This study investigates whether listeners implicitly attune to the repetitive adjacent interval patterns found in twelve-tone rows. Specifically, it addresses whether listeners can learn the repetitive trichords found in derived rows, which contain four versions of one trichordal settype. Listeners were freshmen music majors at the Eastman School of Music and first heard a ~20 minute familiarization phase consisting of the 48 versions of a twelve-tone row. The melody heard during the familiarization phase was presented as a stream of tones similar to that heard in statistical learning studies (e.g. Saffran, Johnson, Aslin, and Newport, 1999). In Experiment 1, listeners (n=10) heard a row created with four trichords, each containing adjacent intervals 1 and 3 (in semitones). In Experiment 2, another group of listeners (n=11) heard a row created with four trichords, each containing adjacent intervals 2 and 5. During the trial phase of both experiments, listeners heard randomly ordered probemelodic-trichords. Listeners rated trichords on a 1-7 scale, where higher ratings were more idiomatic of the melody heard during the familiarization phase. Listeners rated many types of trichords, including members of all trichordal set-classes. In both experiments, listener ratings significantly correlated with the “Trichord Distribution,” which illustrates the number of times each trichord occurred during the familiarization phase. Listeners also rated within-row trichords significantly higher than not-in-row trichords in both Experiments 1 and 2, and more notably, rated common within-row trichords higher than less common within-row trichords. We can conclude that listeners attuned to the intervals occurring between the pitches, rather than the notes themselves. Thus, uninformed listeners were able to implicitly create a hierarchy of more common and less common trichords occurring in a twelve-tone melody, an initial step in aurally understanding a musical genre notoriously difficult to hear. Claire Arthur “Caution: Octave Ahead! A Perceptual Account of the Direct Octaves Rule” Ohio State University, USA What goal is served by following the traditional Western voice-leading prohibition against Direct Octaves (a.k.a. Hidden Octaves or Exposed Octaves)? There are many guidelines and heuristics put forth in music theory textbooks, some of which seem unclear in their goal or purpose. Over the past two decades, research has shown that many voice-leading rules can be understood as consistent with perceptual principles that facilitate unconscious auditory organization of multipart textures (e.g., Huron, 2001). The current investigation follows in this tradition by testing possible perceptual or cognitive interpretations of traditional musical practices. In this study, a series of experiments are conducted in order to determine the perceptual effects of stepwise approach to an octave. In the first part of this study, musically trained listeners were exposed to a series of isolated sonorities (both familiar and unfamiliar) and asked how many distinct pitches were present. Based on previous research we expect that listeners will underestimate the number of pitches present in chords that contain octaves due to tonal fusion, where the constituent tones combine to form a single auditory image (Stumpf, 1890; van Noorden, 1971; Rasch, 1978; Bregman, 1990). 
In the remaining experiments, octave-bearing sonorities were primed by step or by leap, and listeners were again asked to make numerosity judgments. We anticipate that the step-wise priming of the octave will help to overcome tonal fusion and lead to more accurate numerosity judgments.
Joshua Albrecht “A New Key-Finding Algorithm Using Euclidean Distance: An Improved Treatment of the Minor Mode” University of Mary Hardin-Baylor, USA For many years, automatic key finding has posed unique problems for computational music theorists and those interested in working with large corpora. The current report proposes a new key-finding algorithm, utilizing Euclidean distance rather than correlation and examining only the first and last eight measures of each piece. A model was trained on a data set of 490 pieces encoded in the Humdrum “kern” format and tested on a reserve data set of 492 pieces. The proposed model was found to have a significantly higher overall accuracy (93.1%) than many previous models, such as the Temperley Kostka-Payne (2007; 88.6%), Krumhansl-Schmuckler (1990; 74.2%), Bellman-Budge (2005; 91.1%), and Aarden-Essen (2003; 91.7%) models. In addition, separate accuracy ratings for major-mode and minor-mode works were determined for each of the existing key-finding models, and it was found that most other models are significantly more accurate for major-mode than for minor-mode works. The proposed key-finding algorithm performed more accurately on minor-mode works than all of the other algorithms tested, although it did not perform significantly better than the models created by Aarden or Bellman. The proposed model and the Aarden-Essen model were found to have complementary results. Consequently, a post-hoc algorithm that combines the two models is suggested, and it results in significantly more accurate key assessments (95.1%) than all the other extant models. 14:30–15:00 – Break/Poster Session II 15:00–16:30 – Musicology II Trevor de Clercq “How Melody Engenders Cadence in the Chorales of J. S. Bach: A Corpus Study” Ithaca College, USA This paper reports a corpus study of the 371 chorales harmonized by J. S. Bach. The focus of this study is to investigate what kinds of events are typical at phrase endings (as demarcated by fermatas) given various melodic conditions, i.e., how well melodic structure predicts cadence choice. Each fermata event was analyzed by ear and encoded with regard to the local key area and the cadence type, using a modified version of traditional cadence classifications. The frequency of each cadence type was then tabulated with respect to categorizations – as determined by the intervallic pattern and scale degree content – of the melodic structure prior to the fermata. It is shown that most fermata events can be categorized by a small collection of event types. As a result, a simplified conceptual model of cadence choice is posited. This model essentially proposes that a basic harmonization default is to interpret the soprano note at the fermata as scale-degree 1, 2, or 3 in some closely related key area. The efficacy of this model is found to be very good, especially given certain conditions. Moreover, only a few extensions to the model are required to achieve an overall prediction success rate above 90%. The implications of these findings are discussed in terms of projects both inside and outside of the classroom. Kelly J. Maynard “‘The Auricular Sense of Space’: Medicine and Music in Fin-de-Siècle France” Grinnell College, USA
This paper stems from an ongoing historical project on the reception of Wagner in France. Working methodologically at the intersection of music reception and the history of medicine, I examine the research of Pierre Bonnier (1861-1918), an avid Wagnerian who took from the vertiginous experiences of hearing performances in Paris and Bayreuth the inspiration to pursue a career in medicine. Trained under the auspices of the famed Parisian neurologist Jean-Martin Charcot, Bonnier dedicated his career to explaining through physiological and neurological mechanisms the overwhelming experience of hearing Wagner's works. Early in his career Bonnier undertook “scientific” analyses of Wagner's works, notably Meistersinger and Parsifal, using thematic development as an analogue for evolutionary theory and presenting musical themes on the page like cells seen under a microscope. Beginning with his thesis for the Paris Medical School in 1890, entitled “The Auricular Sense of Space,” time and time again his medical theories reflected back upon musical experience, alluding for example to the physiological sense of vertigo brought on by hearing the Tarnhelm motif emerge during a performance of Götterdämmerung. Bonnier published nearly a dozen volumes of research relating to hearing and ear dysfunctions during the 1890s and 1900s. He also explored the neurological relationship between vision and hearing, a juxtaposition prompted by the novel theatrical practices employed at Bayreuth. In the larger project I map this exploration onto an analysis of the work that music can do in shaping intellectual habits of mind at specific moments and in particular national contexts. Mark Yeary “Stravinsky’s Passport: The Design and Use of Memorable Chords” Indiana University, USA Relatively little attention has been given to one of Stravinsky’s most enduring musical accomplishments: the creation of chords so distinctive that they take on the name of the movement or piece in which they occur. Such significance applied to a single sonority is a rarity in Western art music; the roster of recognizable “named chords” is not long, and Stravinsky is the only composer to which we may attribute more than a single named chord. In this presentation, I draw upon studies of perceptual learning, categorization, and psychoacoustics to analyze Stravinsky’s most recognizable chords, including the “Augurs chord” (Rite of Spring), the “Psalms chord” (Symphony of Psalms), and the initial “passport” chord of his Violin Concerto in D. Whereas novelty has been shown to greatly facilitate learning and memory (Tulving and Neal 1995), typical musical listening rewards familiarity at the expense of novelty (Huron 2006). Accordingly, I examine how Stravinsky’s most memorable chords may be heard as both novel and broadly familiar within their respective musical contexts. I frame the musical techniques of intensity, repetition, and isolation as devices intended to attract the listener’s attention and promote recognition, and I examine a chord’s potential for novelty based on how its holistic features—pitch and chroma profile in particular—compare with those of common cultural exemplars of chords. I offer support for the anecdotal observation that Stravinsky “regarded every chord as an individual sonority” (Piston 1947), and I offer a analytic approach toward the study of other memorable chords. 
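As a simple illustration of the kind of chroma-profile comparison mentioned above, the sketch below compares a pitch-class profile of the “Augurs” chord with that of a familiar exemplar chord; the chord spelling used here is one common description and an assumption, and this is not the author's method.

```python
# Sketch: compare a chord's chroma (pitch-class) profile with a common exemplar.
import numpy as np

def chroma_vector(pitch_classes):
    """12-bin chroma profile: 1 where a pitch class is present, 0 elsewhere."""
    v = np.zeros(12)
    v[list(pitch_classes)] = 1.0
    return v

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Augurs" chord under one common description (Fb major triad over an Eb7 chord):
# Eb, G, Bb, Db, Fb(=E), Ab, Cb(=B) -> pitch classes 3, 7, 10, 1, 4, 8, 11
augurs = chroma_vector({1, 3, 4, 7, 8, 10, 11})
c_major = chroma_vector({0, 4, 7})  # a familiar cultural exemplar

print(f"similarity to C major triad: {cosine_similarity(augurs, c_major):.2f}")
```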
16:30–17:00 – Break/Poster Session II

17:00–18:00 – Keynote

Robert Gjerdingen “Counting on Corpora in Music: What Should We Count?” Northwestern University, USA
Corpora of speech and written texts have, over recent decades, had a profound effect on studies of language. The promise of computational corpus studies in music is great, but realizing that promise may depend on improved ideas of what should get counted. The talk will focus on three areas where research in music cognition and music history can help: 1) the single-stream hypothesis, 2) adjacency vs. non-adjacency, and 3) the scope of reference.

Sunday, May 26th

9:00–10:30 – Cues

Michael Schutz “Exploring the evolution of cues for emotion in 24-piece sets of piano preludes” McMaster University, Canada

Recent corpus analyses illustrate that major-key (nominally “happy”) excerpts are higher (Huron, 2008), faster (Post & Huron, 2009), and louder (Turner & Huron, 2008) than their minor-key (nominally “sad”) counterparts. Intrigued by this documentation of parallels with the communication of emotion in speech, my lab has begun a large-scale project analyzing sets of piano preludes containing 12 major- and 12 minor-key pieces matched according to chroma. We have shown that major-key compositions by J. S. Bach (Well-Tempered Clavier, Book 1) and Chopin (Preludes) are a major second higher in pitch height and 28 percent faster in timing than their minor-key counterparts (Poon & Schutz, 2011). Additionally, we found intriguing differences between the two composers with respect to their use of tempo. In order to determine whether this was indicative of a broader trend, we analyzed the tempos of 10 new sets of preludes from the Classical and Romantic eras, finding consistent differences (Poon & Schutz, 2012) complementing previous reports of changes between eras (Post & Huron, 2009; Ladinig & Huron, 2010). However, noting an inherent tension between written and performed tempos, we are now looking at individual differences in the tempos used within several recordings of Bach’s WTC, as compiled by Palmer (1981). Our analysis revealed that performers played major-key preludes faster than their minor-key counterparts in each of the thirteen recordings. The consistency of this difference is striking given the idiosyncratic nature of some of the performers surveyed (e.g., Gould).
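A schematic of this kind of matched-pair comparison is sketched below; it simply pairs each major-key prelude with its chroma-matched minor-key counterpart and summarizes pitch-height and tempo differences. The field layout and numbers are invented placeholders, not measurements from the prelude corpora described above.

from statistics import mean

# Toy records keyed by chroma: (mean pitch in semitones above C4, notes per second).
major = {"C": (7.2, 5.1), "C#": (6.8, 4.9), "D": (7.5, 5.4)}
minor = {"C": (5.1, 4.0), "C#": (4.6, 3.8), "D": (5.9, 4.2)}

pitch_diffs  = [major[k][0] - minor[k][0] for k in major]   # in semitones
tempo_ratios = [major[k][1] / minor[k][1] for k in major]   # > 1 means major is faster

print(f"mean pitch-height difference: {mean(pitch_diffs):.2f} semitones")
print(f"mean tempo ratio (major/minor): {mean(tempo_ratios):.2f}")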
Kirsten Nisula “Distinguishing Sad from Sleepy and Relaxed Musical Expressions: Speech Prosody vs. Animal Signaling Interpretations” Ohio State University, USA

It has been common to regard speech prosody as a template for affective expression in music (e.g., Juslin & Laukka, 2003). For instance, Kraepelin (1899/1921) identified a number of features characteristic of sad speech prosody: slow speaking rate, low overall pitch, quiet dynamic, monotone pitch movement, mumbled articulation, and dark timbre. These same features have been observed in nominally sad musical compositions (e.g., Schutz et al., 2008; Turner & Huron, 2008; Post & Huron, 2009; Ladinig & Huron, 2010). However, all of these features are plausible artifacts of low physiological arousal. Moreover, other states that involve the same low physiological arousal, such as sleepiness and relaxation, may be associated with the same prosodic features. In this study, we use the method of adjustment to test whether participants are able to create music-like sequences that distinguish sadness from sleepiness or relaxation and several additional lure affects. The participants manipulate seven musical parameters associated with speech prosody: note rate (tempo), overall pitch height, intensity, pitch range, interval size, sustain (articulation), and timbre. In addition, participants can select the modality of the sequence. We predict that participants will have difficulty distinguishing between the affects involving low physiological arousal. If this is the case, we suggest that ethological signaling theory provides a parsimonious account for this apparent confusion.

Laura K. Cirelli, Kathleen M. Einarson, and Laurel J. Trainor “When in infancy does interpersonal motor synchrony become a social cue: Do babies prefer others after bouncing to music with them?” McMaster University, Canada

Because humans can entrain their movements to the rhythmic pulse in music, behaviours such as dancing, singing, and music production encourage high levels of motor synchrony among group members. Such interpersonal synchrony is associated with increased group cohesion and social bonding between those involved. The developmental trajectory of this social effect of interpersonal synchrony has been little studied. By their first birthday, however, infants are active social agents and have some of the requisite abilities for moving to music; although infants tend to spontaneously produce rhythmic movement when listening to music, these movements tend to be asynchronous until preschool age. The current research investigates when, in infancy, interpersonal synchrony begins to influence social behaviour. Results from Experiment 1 show that 14-month-old infants are significantly more likely to help an adult by handing them accidentally dropped objects after being moved synchronously with that adult’s movements compared to being moved asynchronously. This influence on social behaviour is present in 14-month-olds regardless of whether the movements are to an isochronous or a random beat pattern. Results from Experiment 2, however, show that 10-month-old infants do not yet seem to display strong dispositional preferences for individuals with whom they previously experienced interpersonal motor synchrony. Together, these results suggest that interpersonal motor synchrony becomes a cue for social bonding by the end of the first year of life, when the understanding of the intentions of others and the first displays of altruism emerge.

10:30–11:00 – Break

11:00–12:00 – Keynote

Elizabeth West Marvin “Building Bridges: Music Cognition and Music Theory Instruction” Eastman School of Music, USA

Twenty years have now passed since Butler and Lochstampfor (1993) issued a call for a closer alliance between music-cognitive research and music-theory pedagogy, characterizing the relationship between the two with the metaphor “Bridges Unbuilt.” Since that time, numerous studies with implications for music teaching and learning have been published by cognitive scientists, music theorists, and collaborating teams, yet ties to those teaching in the field remain fuzzy. This presentation updates two previous essays (Marvin 1995, 2007) in which I surveyed recent empirical research—my own research and that of others—to suggest teaching approaches based on sound cognitive principles. Among the areas to be explored are: implicit learning and its effect on scale-degree identification, meter induction, key finding, and formation of musical schemas; contextual pitch memory and the phenomenon of incipient absolute pitch; theories of dynamic attending and expectation and their impact on musical recognition and prediction; the challenge of students with apparent tone deafness; and, finally, emotion in music as intrinsic motivation for study.
12:00–13:00 – Lunch

13:00–14:30 – Lyrics

Janet Bourne “Listeners Reconcile Music and Lyrics Mismatch in Song Interpretation” Northwestern University, USA

Previous experiments on song (music with lyrics) focus on how lyrics and music elicit emotion (e.g., Sousou, 1997), yet they do not address song’s narrative. This project investigates how listeners integrate mismatched (contradictory) music and lyrics when interpreting a song’s communicative implications. Our hypotheses: (1) in song, music does not simply elicit emotion but also plays a part in a listener’s communicative interpretation; a listener uses both. (2) If music and lyrics mismatch, listeners will reconcile the contradictory sources to create coherence. (3) When the music and lyrics conflict in a song sung by a character, a listener may infer the character as being ironic, lying, sarcastic, or humorous (as seen in speech–gesture mismatches; McNeill, 2005). Participants listened to song clips from Broadway musicals and provided free-response and Likert-scale ratings. The study used a 2x2 within-subjects design in which the factors were the affect of the music and the affect of the lyrics: 1) Positive Music/Positive Lyrics, 2) Positive Music/Negative Lyrics, 3) Negative Music/Negative Lyrics, 4) Negative Music/Positive Lyrics. A two-way repeated-measures ANOVA was run on the Likert ratings (1–6). Among our significant results, for “How does the character feel about…”: a significant main effect of musical affect (F=92.32, p