
Nonperformance, As Well As Performance, Based Music Engagement Predicts Verbal Recall

Tan Chyuan Chin & Nikki S. Rickard
Monash University, Melbourne, Australia

Music Perception, Volume 27, Issue 3, pp. 197–208. DOI: 10.1525/MP.2010.27.3.197

Musicians have been reported to demonstrate significantly better verbal memory abilities than do nonmusicians. In this study, we examined whether forms of music engagement other than formal music training might also predict verbal memory performance. Gender, socioeconomic status, and music performance variables were controlled in the main study; IQ was also assessed for a subset of participants. While performance musicianship remained a stronger predictor of verbal learning and memory, convincing evidence is presented that nonperformance music engagement (listening activity) also predicted verbal memory measures. The role of music engagement was independent of control factors both in the main study results and in the subset. The findings highlight the need for a more extensive conceptualization of musicianship in research that examines the impact of music on cognitive performance.

Received May 26, 2008, accepted November 5, 2009.

Key words: music training, cognitive functioning, nonmusician, affective engagement, music listening

Musicians demonstrate superior functioning relative to nonmusicians on a range of nonmusical cognitive tasks (Brandler & Rammsayer, 2003; Chan, Ho, & Cheung, 1998; Ho, Cheung, & Chan, 2003; Schellenberg, 2006). A causal link is suggested by experimental studies in which music training interventions enhance performance above that of controls (Bilhartz, Bruhn, & Olson, 2000; Costa-Giomi, 2004; Gardiner, Fox, Knowles, & Jeffrey, 1996; Rauscher, Shaw, & Ky, 1993). For instance, 6-year-olds randomly allocated to music lessons for 32 weeks scored significantly higher on a general intelligence test than did students allocated to control groups, which included a group allocated to drama lessons, while holding constant potential confounding variables such as family income and parents'
education (Schellenberg, 2004). Both children and adults exposed to several years of music training in Hong Kong were shown to develop significantly better verbal learning and memory (Chan et al., 1998; Ho et al., 2003). The explanation for this effect, however, remains unclear. The musician’s advantage for verbal memory may be explained by enhanced synaptic plasticity, networking, or efficacy that results from years of advanced sensorimotor coordination and auditory processing. For instance, the anterior corpus callosum tends to be thicker in musicians, implying greater communication between the hemispheres (Lee, Chen, & Schlaug, 2003; Schlaug, Jancke, Huang, Staiger, & Steinmetz, 1995). Auditory neurons also have been found to function more efficiently in musicians (Fujioka, Ross, Kakigi, Pantev, & Trainor, 2006; Pantev, Roberts, Schulz, Engelien, & Ross, 2001). Auditory temporal order processing skills appear to be superior in musicians (Jakobson, Cuddy, & Kilgour, 2003; Rammsayer & Altenmüller, 2006), and this skill may mediate the relationship between duration of music training and verbal recall (Jakobson et al., 2003; Jakobson, Lewycky, Kilgour, & Stoesz, 2008). Recent evidence has demonstrated that children with better second language production skills also process music sounds more accurately than do children with poor linguistic skills (Milovanov, Huotilainen, Välimäki, Esquef, & Tervaniemi, 2008). Evoked potential measurements revealed larger fronto-central region activation using the mismatch negativity paradigm in response to a C major triad in the linguistically superior group. These data suggest that verbal and music ability may share neural networks in auditory brain regions (Patel, 2008, but see also Jackendoff, 2009). Plasticity also has been reported in a number of other brain regions of musicians, including the subcortical auditory areas (Musacchia, Sams, Skoe, & Kraus, 2007) and motor cortex (Elbert, Pantev, Wienbruch, Rockstroh, & Taub, 1995), although these effects may be stronger in males than in females (Gaab, Gaser, Zaehle, Jancke, & Schlaug, 2003; Hutchinson, Lee, Gaab, & Schlaug, 2003; Lee et al., 2003). Additional gender effects in verbal


memory (Herlitz, Airaksinen, & Nordstroem, 1999; Kramer, Delis, & Daniel, 1988) and musicianship (e.g., Lee et al., 2003) emphasize the need to control for gender in this research area. In addition, as forms of music training and performance are diverse, there is likely to be more than one potential means by which music training could yield cognitive benefits. Additional speculations for cognitive benefits of music training have included enhanced synaptic plasticity resulting from environmental enrichment, facilitation of academic habits that arise from being involved in a school-like activity, improved capacity for processing abstract stimuli (for instance, recognizing the same melody across different timbres, keys, tempi, or octaves) and refinement of perceptual and cognitive processing skills, including focused attention, memorization, and emotional expression (Schellenberg, 2001, 2005). One important caveat of this research is that the categorization of musicianship has been inconsistent across studies. Criteria for musicianship often include current active status as a performer, without taking into account previous participation in performance or training or music aptitude. The music skill of active listeners and people who seek out and engage (for instance, emotionally, socially, or intellectually) with music has been largely overlooked. This oversight not only has limited understanding of this phenomenon, but has also potentially confounded the dichotomous categorization of musician/nonmusicianship with the unmeasured variable of ‘music engagement.’ Nonperformance based music engagement may involve advanced or frequent listening experiences, a commitment to music activities, or a high level of participation in a particular type of music activity (e.g., social or emotional). Previous research on verbal memory for instance, has been limited to a comparison of those with or without formal music training or music performance (Hall & Blasko, 2005; Wallace, 1994). We argue here that a range of other types of music engagement might confer at least some of the cognitive benefits yielded by formal music training. An individual can be highly engaged with music activities without having undergone any form of music training. On the other hand, a proficient musician, who spends hours practicing and performing for her profession, may have little intrinsic motivation for performing, and may not be actively engaged with music activities out of her domain of expertise. Gjerdingen (2003) illustrates this point with the comparison of a classical music fan who listens to symphonies for 8 hours a day for 50 years with a teenage fan of Britney Spears who has had 5 years of piano lessons; the former would be classified

in most research as a nonmusician, while the latter is classified as ‘music trained.’ Many nonmusicians are not, however, music novices, achieving considerable music knowledge and pleasure from extended periods of ‘experienced’ listening. While the level of auditory skill may not be as developed in music listeners as that of current performers, it would be reasonable to assume that advanced music listeners also would develop refined auditory processing abilities from their extensive exposure to, and analysis of, complex music (Lerdahl & Jackendoff, 1983). In fact, individuals without formal music training have been shown to listen to music (in a music excerpts discrimination task) and perceive music structures (tensions and relaxations in melodies and harmonic sequences) in a similar manner as do individuals at the end of their studies in music conservatories (Bigand, 2004; Bigand & Poulin-Charronnat, 2006). Surprisingly, Bigand and Poulin-Charronnat (2006) reported that an intensive 15-year long training program caused only small differences in a music excerpt recognition task between musicians and nonmusicians. Implicit learning of complex music sounds through simple, passive exposure to environmental stimuli may therefore be sufficient to develop an advanced sensitivity to music (Bigand, 2003). Further, given the range of mechanisms offered to explain cognitive advantages in musicians (Schellenberg, 2001; Schellenberg, 2005), it is possible that other aspects of music engagement apart from music training might also be beneficial for some forms of cognitive processing. For instance, if cognitive benefits arise from refinement of emotional expression or recognition of emotional nuances (see Damasio, 1994), then a strongly affective style of engaging with music (Gabrielsson, 2001; Sloboda, O’Neill, & Ivaldi, 2001) might also be expected to enhance cognitive function. In the current study, we explored the role of performance and nonperformance based forms of music engagement in predicting various aspects of verbal memory performance after potential confounds were controlled. Potential confounds included gender and socioeconomic status (SES, indicated by highest level of education attained). As recent research has shown that music training also improves general intelligence (Schellenberg, 2004; see also Schellenberg, 2008), verbal and nonverbal IQ also were controlled in a subset of the sample. Performance based music engagement was operationalized by the highest level of formal music training achieved, instrumental use, and self-reported performance and improvisation ability. Nonperformance music engagement was operationalized by listening activity and level of participation in various forms of music activities.


Method

Participants

One hundred participants (66 females and 34 males) were recruited via posters and convenience sampling. An attempt was made to source a broad range of music abilities, from musically naïve to professional musicians. Data from two participants were incomplete, resulting in a final sample of 98 participants. The main study was conducted on this sample (32 males and 66 females; mean age = 24.87 years, SD = 6.86). A subset of the sample (N = 38) also completed a second testing session 14 months later, and this sub-sample was generally representative of the full sample (8 males and 30 females; mean age = 25.34 years, SD = 4.08). Demographics for each are presented in Table 1. Participants were naïve to the hypotheses of the study, and all procedures were approved by Monash University's Standing Committee on Ethics in Research in Humans.

Materials

Demographics (age, gender, highest level of education attained, and employment status) were obtained via a

self-report questionnaire. Educational attainment is regarded as a fundamental indicator of SES (Saegert et al., 2006), and is particularly relevant in the current study as it is the most likely of the various SES indicators to confound prediction of verbal memory abilities. The Kaufman Brief Intelligence Test (KBIT-2; Kaufman & Kaufman, 2004) was used to measure IQ of a subset of the sample. The performance-based music variables, highest level of formal music training achieved (grade level attained in Associated Board of the Royal School of Music or Australian Music Examinations Board) and instrumental use (an index of duration, currency, and frequency of instrumental use) also were obtained via the self-report questionnaire, while self-reported performance and improvisation ability was assessed using the ‘Innovative’ subscale from the Brief Music Experience Questionnaire (BMEQ; see below). Nonperformance music engagement variables consisted of listening activity (an index of daily duration, weekly duration, and weekly frequency; assessed via the demographic questionnaire), and the remaining five subscales of the BMEQ. The BMEQ is a brief version of the Music Experience Questionnaire (MEQ; Werner, Swope, & Heide, 2006) that measures the same primary subscales yielded by the MEQ, and was obtained directly from the

TABLE 1. Demographics of Full and Subset of the Sample.

                                          Full Sample            Subsample
Males: Females                            32:66                  8:30
Age—Mean (SD)                             24.87 (6.86)           25.34 (4.08)
Employment status                         47 employed            26 employed
                                          53 tertiary students   12 tertiary students
Highest Educational level
  Postgraduate University                 6                      3
  Undergraduate University                38                     26
  Technical Institute                     14                     2
  Secondary School                        39                     7
  Not completed Secondary                 1                      0
Music Training
  ABRSM or AMEB (Grade 5 and below)       83                     25
  Completed Grade 6                       4                      2
  Completed Grade 7                       2                      2
  Completed Grade 8                       9                      9
  Less than 5 years of music training     84                     26
  More than 5 years of music training     14                     12
IQ—Mean (SD)                              NA                     Composite: 119.74 (5.01)
                                                                 Verbal IQ: 106.61 (6.47)
                                                                 Nonverbal IQ: 127.39 (4.50)

Note: ABRSM: Associated Board of the Royal School of Music; AMEB: Australian Music Examinations Board.


authors.1 The BMEQ assesses aspects of self-reported music experience via 53 items, rated on a 5-point scale ranging from "1" ("very untrue") to "5" ("very true"). The instrument comprises six subscales: 'Innovative Musical Aptitude' is a self-reported measure of music performance ability and the individual's ability to generate or create music themes (e.g., "I can easily improvise on an instrument without having music in front of me"); 'Commitment to Music' relates to the pursuit of music experiences in the individual's life (e.g., "Music is the most important thing in my life"); 'Social Uplift' relates to the experience of being stirred and uplifted in a group setting by music (e.g., "There's nothing more powerful than singing a beloved song with other people"); 'Affective Reactions' relates to an individual's affective and spiritual reactions to music (e.g., "A song has never made me feel joyous" [reverse scored]); 'Positive Psychotropic Effects' relates to an individual's mental state reactions to music (e.g., "I easily get 'lost' in the depth of my concentration on music"); and 'Reactive Musical Behaviour' relates to an individual's physical reactions to music (e.g., "I often find myself swaying in tune with music to which I'm listening"). This instrument has good internal consistency (alpha ranged from .74 to .89) for all subscales except 'Social Uplift,' which was reported by Werner et al. (2006) as .62, and was confirmed as poor in the current study (alpha = .54).

The California Verbal Learning Test (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000) was used to provide a comprehensive assessment of verbal learning and memory. Immediate recall, short- and long-term free and cued recall, verbal learning, semantic organizational learning strategy, and the effect of interference can all be assessed via a set of recall trials. The task consists of two 16-word lists containing an embedded semantic structure. List A comprises four words from each of the four categories of animals, furniture, transportation, and vegetables. Words from the same category are not presented consecutively, which affords an assessment of semantic clustering, the most effective strategy for learning unstructured verbal information. An interference list (List B) includes 16 words from the four categories of animals, music instruments, vegetables, and parts of a house. In the first five trials, the participant is asked to recall words from List A immediately after each presentation of the list. In the current study, written

1 The Brief Music Experience Questionnaire (BMEQ) is available directly from the authors: http://sites.google.com/site/musicexperiencequestionnaire/Home

responses were requested rather than spoken responses to enable participants to be tested in small groups. Trial 1 measured immediate free recall of List A, and is indicative of an individual's auditory attention span. Trials 2-5 also measured immediate free recall of List A, and when summed, reflect an individual's core verbal learning ability. List B was then presented for one trial. Trial 6 measured immediate free recall of List B, and indicated an individual's degree of proactive interference (the detrimental effect of prior learning on the retention of subsequently learned material). The interference trial was followed by short-term free recall (Trial 7) and short-term cued recall trials (Trials 8-11) of List A. A 20-minute delay followed, during which nonverbal distractor tasks were performed. Trial 12 measured long-term free recall of List A and Trials 13-16 measured long-term cued recall of List A. Trial 17 consisted of a 2-option forced-choice (yes/no) recognition task of List A, measuring an individual's ability to recognize words from List A. Delis et al. report an overall reliability of .82 for this test, and it has demonstrated strong validity in both clinical and normal populations.
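To make the mapping from trials to the derived measures concrete, the short sketch below shows one way such scores could be computed from per-trial written responses. It is an illustration only: the function and score names are ours, and the study itself scored the test according to the CVLT-II manual (Delis et al., 2000), including the proactive interference and semantic clustering indices omitted here.

# Illustrative sketch only (not the CVLT-II scoring rules): maps per-trial
# written responses to the derived verbal memory measures described above.
from typing import Dict, List

def derive_memory_scores(recalled: Dict[int, List[str]],
                         list_a: List[str],
                         list_b: List[str]) -> Dict[str, int]:
    """recalled maps trial number (1-17) to the words a participant wrote down."""
    # Number of correct List A words per trial (Trial 6 is the List B trial).
    correct_a = {t: len(set(words) & set(list_a))
                 for t, words in recalled.items() if t != 6}
    return {
        "immediate_recall": correct_a.get(1, 0),                                   # Trial 1
        "overall_learning": sum(correct_a.get(t, 0) for t in range(1, 6)),         # Trials 1-5 summed
        "interference_recall": len(set(recalled.get(6, [])) & set(list_b)),        # Trial 6 (List B)
        "short_term_free_recall": correct_a.get(7, 0),                             # Trial 7
        "short_term_cued_recall": sum(correct_a.get(t, 0) for t in range(8, 12)),  # Trials 8-11
        "long_term_free_recall": correct_a.get(12, 0),                             # Trial 12, after the 20-min delay
        "long_term_cued_recall": sum(correct_a.get(t, 0) for t in range(13, 17)),  # Trials 13-16
        # Proactive interference and semantic clustering are derived indices
        # computed per the CVLT-II manual and are not reproduced here.
    }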

Procedure

Participants were tested in small groups (up to four participants) in a quiet room. The immediate recall, short-term free and cued recall test trials of the CVLT-II (Delis et al., 2000) were administered first, followed by the demographics questionnaire and the BMEQ (Werner et al., 2006). A 20-minute interval was mandated between the completion of the short-term test trials and the long-term test trials, and questionnaires (unrelated to verbal learning or memory) were completed during this interval. The remaining verbal memory test trials were administered after the interval. A subset of the sample completed the KBIT-2 (Kaufman & Kaufman, 2004) IQ test in a separate testing session 14 months later.

Results

Prediction of Verbal Recall—Full Sample

The relative importance of music training and engagement was explored by subjecting the data to eight hierarchical regressions to determine whether music engagement improved prediction of verbal memory scores beyond that predicted by music training. Correlations for each predictor are presented in Table 2. (Because high correlations existed between overall learning score and total free recall score, r(96) = .99,


TABLE 2. Correlation Matrix Between Predictors and Verbal Memory Measures (Full Sample, N = 98).

                                  Immediate   Overall     Interference  Proactive     Short-term   Short-term   Long-term    Semantic
                                  recall      learning    recall        interference  free recall  cued recall  free recall  clustering
                                              score
Gender#^                          .34**       .34**       .33**         −.07          .45**        .42**        .38**        .37**
SES#                              .34**       .39**       .20           .13           .38**        .39**        .32**        .45**
Frequency of instrument playing   .24*        .27**       .22*          −.01          .18          .22*         .27**        .14
Music training#                   .47**       .44**       .37**         −.01          .48**        .49**        .42**        .46**
Music Listening Activity          .33**       .45**       .43**         −.17          .40**        .37**        .49**        .31**
Commitment to music               .28**       .24*        .13           .14           .15          .18*         .30**        .11
Innovative musical aptitude       .36**       .36**       .28**         .04           .33**        .40**        .43**        .32**
Social uplift                     .28**       .29**       .07           .20           .25*         .28**        .34**        .13
Affective reactions               .38**       .50**       .47**         −.17          .46**        .47**        .55**        .43**
Positive psychotropic effects     .28**       .31**       .23*          .01           .27**        .37**        .38**        .25*
Reactive musical behavior         .18         .24*        .20*          −.05          .15          .30**        .36**        .09

Note: #Spearman's Rho was employed for categorical data. ^Gender: males = 0, females = 1. *p < .05. **p < .01.

p < .01, as well as between long-term free recall and long-term cued recall, r(96) = .94, p < .01, analyses for the total free recall score and long-term cued recall measures are not reported.) All assumptions of hierarchical regression were satisfied. No multicollinearity was observed in the regression models when compared against benchmarks for Tolerance (> .10) or VIF (< 10) (Pallant, 2005). After identification of predictors that significantly correlated with the criterion variables, the potential confounds, gender and SES, were entered in the first block of the regression, followed by the performance music engagement predictors (music training, frequency of instrument playing, and Innovative Musical Aptitude) in the second block. Nonperformance music engagement predictors (listening activity, Commitment to Music, Social Uplift, Affective Reactions, Positive Psychotropic Effects, and Reactive Musical Behaviour) were entered in the last block to evaluate the unique contribution that the music engagement predictors made to each regression model when other predictors, such as gender, SES, and music performance variables, were statistically controlled. Up to 50% of the variance in verbal memory measures was predicted by the combination of these predictor variables (see Table 3). Performance musicianship variables improved prediction of six measures, while nonperformance music engagement variables improved prediction of all eight measures. Once gender and SES

were controlled, analysis of individual music performance predictors revealed that Innovative Musical Aptitude predicted immediate recall, short-term cued and free recall, long-term free recall, and semantic clustering (.26 < β < .39), while Music Training alone predicted short-term free and cued recall and semantic clustering (.27 < β < .29). Analysis of individual nonperformance music engagement variables (gender and SES were controlled) revealed that Listening activity predicted Overall learning score, interference recall, and short-term free recall (.22 < β < .24), while Commitment to music predicted immediate recall (β = .28), and Affective reactions to music predicted Overall Learning Score, Interference recall, short-term free recall, long-term free recall, and Semantic Clustering (.25 < β < .34). (Semantic Clustering was also predicted by Social Uplift, although due to poor psychometric properties, the data for this subscale of the BMEQ are unreliable.)
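To make the block-entry logic concrete, the sketch below shows a minimal version of the kind of hierarchical regression summarized in Table 3. It is illustrative rather than a reproduction of the original analysis: the data frame and column names are hypothetical, and the original analyses appear to have been conducted in SPSS (cf. Coakes & Steed, 2007; Pallant, 2005).

# Illustrative sketch of blockwise (hierarchical) regression with change in R2 per block.
import pandas as pd
import statsmodels.api as sm

COVARIATES = ["gender", "ses"]
PERFORMANCE = ["instrument_frequency", "music_training", "innovative_aptitude"]
NONPERFORMANCE = ["listening", "commitment", "social_uplift",
                  "affective", "psychotropic", "reactive"]

def hierarchical_r2(df: pd.DataFrame, outcome: str) -> dict:
    """Fit three cumulative blocks and report R-squared and its change per block."""
    blocks = [COVARIATES,
              COVARIATES + PERFORMANCE,
              COVARIATES + PERFORMANCE + NONPERFORMANCE]
    results, previous_r2 = {}, 0.0
    for label, predictors in zip(["covariates", "performance", "nonperformance"], blocks):
        model = sm.OLS(df[outcome], sm.add_constant(df[predictors])).fit()
        results[label] = {"R2": model.rsquared,
                          "adjusted_R2": model.rsquared_adj,
                          "delta_R2": model.rsquared - previous_r2}
        previous_r2 = model.rsquared
    return results

# Example with a hypothetical data frame: hierarchical_r2(data, "short_term_free_recall")
# returns the R2 and change in R2 for each block; the significance of each change in R2
# is typically assessed with an F test on the block of added predictors.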

Control of IQ as a Potential Confound—Subsample

The hierarchical regressions were replicated (using the same set of verbal memory scores) on the retested subsample with verbal and nonverbal IQ also included in the first block. Due to the smaller sample size, there were fewer cases per predictor variable than typically advised, although this was still within minimum recommendations (Coakes & Steed, 2007). To avoid loss


TABLE 3. Improvement of Verbal Memory Model Prediction by Performance and Nonperformance Music Variables, Controlling for Gender and SES (Full Sample, N = 98).

Immediate Recall: total model R2 = .43** (adjusted R2 = .35**)
  Covariates ΔR2 = .25** (b: Gender .35**, SES .36**)
  Performance engagement ΔR2 = .09** (b: Instrument −.09, Training .21, Aptitude .27*)
  Nonperformance engagement ΔR2 = .09* (b: Listening .11, Commit .28*, Social −.02, Affective .14, Psychotropic −.12, Reactive .08)

Overall Learning: total model R2 = .55** (adjusted R2 = .50**)
  Covariates ΔR2 = .32** (b: Gender .43**, SES .38**)
  Performance engagement ΔR2 = .07* (b: Instrument −.01, Training .16, Aptitude .20)
  Nonperformance engagement ΔR2 = .16** (b: Listening .24*, Commit .13, Social .01, Affective .26*, Psychotropic −.08, Reactive .02)

Interference Recall: total model R2 = .39** (adjusted R2 = .31**)
  Covariates ΔR2 = .15** (b: Gender .33**, SES .21*)
  Performance engagement ΔR2 = .05 (b: Instrument −.01, Training .16, Aptitude .15)
  Nonperformance engagement ΔR2 = .19** (b: Listening .24*, Commit −.00, Social −.16, Affective .34*, Psychotropic −.06, Reactive −.03)

Proactive Interference: total model R2 = .16* (adjusted R2 = .06*)
  Covariates ΔR2 = .02 (b: Gender −.04, SES .12)
  Performance engagement ΔR2 = .00 (b: Instrument −.06, Training −.01, Aptitude .08)
  Nonperformance engagement ΔR2 = .14* (b: Listening −.16, Commit .27, Social .18, Affective −.27, Psychotropic −.05, Reactive .11)

Short-term Free Recall: total model R2 = .52** (adjusted R2 = .46**)
  Covariates ΔR2 = .33** (b: Gender .44**, SES .36**)
  Performance engagement ΔR2 = .08** (b: Instrument −.19, Training .27*, Aptitude .26*)
  Nonperformance engagement ΔR2 = .12** (b: Listening .22*, Commit .03, Social −.01, Affective .26*, Psychotropic .01, Reactive −.10)

Short-term Cued Recall: total model R2 = .52** (adjusted R2 = .46**)
  Covariates ΔR2 = .29** (b: Gender .40**, SES .37**)
  Performance engagement ΔR2 = .13** (b: Instrument −.23, Training .27*, Aptitude .37**)
  Nonperformance engagement ΔR2 = .11** (b: Listening .12, Commit .04, Social −.05, Affective .14, Psychotropic .07, Reactive .17)

Long-term Free Recall: total model R2 = .52** (adjusted R2 = .46**)
  Covariates ΔR2 = .19** (b: Gender .34**, SES .28**)
  Performance engagement ΔR2 = .19** (b: Instrument −.14, Training .21, Aptitude .39**)
  Nonperformance engagement ΔR2 = .14** (b: Listening .20, Commit .18, Social .04, Affective .25*, Psychotropic −.13, Reactive .18)

Semantic Clustering: total model R2 = .55** (adjusted R2 = .50**)
  Covariates ΔR2 = .36** (b: Gender .40**, SES .44**)
  Performance engagement ΔR2 = .09** (b: Instrument −.26*, Training .29**, Aptitude .28*)
  Nonperformance engagement ΔR2 = .11** (b: Listening .11, Commit .04, Social −.18*, Affective .27*, Psychotropic .08, Reactive −.11)

Notes: Covariates: Gender, SES. Performance engagement variables: Instrument frequency, Music training, Innovative Musical Aptitude. Nonperformance engagement variables: Listening activity and remaining BMEQ subscales. Adjusted R2 is included as an estimate of the effect in the population. *p < .05. **p < .01.

of power resulting from the smaller sample size, the number of predictor variables was reduced according to the strength of the correlations. Among the nonperformance music engagement variables, listening activity was retained, together with each of the three music performance variables. Correlations for the retained predictors are presented in Table 4. All assumptions of hierarchical regression again were fulfilled. The potential confounds, gender, SES, verbal and nonverbal IQ were entered in the first block, followed by the music performance predictors in the second block, and finally the nonperformance music engagement predictor. Table 5 reveals that inclusion of IQ in the model substantially improved the prediction of the majority of verbal learning and memory scores (typically accounting

for more than 85% of the variance). Music performance and nonperformance music engagement variables also accounted for a greater percentage of variance in the retested subsample than in the full sample, but the pattern of results was very similar. Again, both sets of variables significantly predicted all measures except proactive interference, although with IQ controlled, music performance tended to account for a greater amount of variance than did the nonperformance music engagement variable, listening activity.

Discussion

While the relationship between music training and verbal memory is well documented, the current data provide convincing evidence that extended listening or


TABLE 4. Intercorrelation Matrix Between IQ, Performance and Nonperformance Music Variables (Subsample, N = 38).

                           Verbal   Nonverbal   Music       Music listening   Instrument   Innovative
                           IQ       IQ          training#   activity          frequency    musical aptitude
IQ Composite               .86**    .75**       −.13        .47**             −.45**       .18
Verbal IQ                           .32*        .06         .53**             −.20         .36*
Nonverbal IQ                                    −.30        .16               −.59**       −.13
Music training#                                             .31               .76**        .60**
Music listening activity                                                      .24          .72**
Instrument frequency                                                                       .69**

Note: #Spearman's Rho was employed for categorical data. *p < .05. **p < .01.

participation in nonperformance music activities has a similarly strong association with verbal memory performance. This relationship persisted once SES, gender, IQ, and performance music variables were controlled, indicating that measures of nonperformance music engagement offer unique contributions to the prediction of verbal learning and recall that are not captured through other variables conventionally used for studying the benefits of music. In addition, nonperformance based variables predicted a reasonable amount of variance in verbal recall, accounting for a further 10-17% (with IQ, gender, and SES controlled, or 9-19% with just SES and gender controlled) of variance above that predicted by performance variables (which themselves predicted up to 38% of the variance). The finding that performance musicianship significantly predicted long-term free recall, short-term free, and cued recall is consistent with previous research regarding the effects of formal music training on verbal memory (Jakobson et al., 2003; Jakobson et al., 2008). The strongest individual predictors amongst the performance based music variables were self-reported performance and improvisation ability (as measured by the innovative musical aptitude scale of the BMEQ) and music training. While the highest level or years of music training achieved has routinely been used in research of this type, the current study indicates that additional insight into the benefits of performancebased musicianship may be achieved if measures of innovative aptitude are also obtained. In this context, more extensive measures of musicianship—such as the Musical Sophistication Index (Ollen, 2006) and the music training questionnaire used by Cuddy, Balkwill, Peretz, and Holden (2005)—are commended. Nonperformance music engagement significantly predicted all components of verbal memory scores tested, including proactive interference (which was not predicted by performance-based music variables). In the

main sample, when gender and SES were controlled, the most consistent predictors among the nonperformance music variables were listening activity and affective engagement style. The finding that affective engagement might predict verbal recall is consistent with the finding that enjoyment or liking of music accounted for improved spatial-temporal performance observed following exposure to pieces of classical (Husain, Thompson, & Schellenberg, 2002; Schellenberg, 2005) or popular (Schellenberg & Hallam, 2005) music. The most powerful and robust individual predictor of verbal memory was listening activity. This variable was an index of daily and weekly listening durations and weekly frequency of listening, and therefore provides quite a sensitive measure of listening activity. While listening is clearly not independent of performance musicianship, the current data demonstrate it predicts verbal memory even once the contribution of music training and other performance music variables has been controlled. The finding that, in terms of unique prediction, music listening predicts verbal memory more strongly than either music training or innovative music aptitude is surprising. There is undisputedly an enormous range of skills and abilities developed by music training that is absent in nonmusicians. For instance, formal music training requires coordination of perceptual, cognitive and motor skills to sight-read or play a music instrument. Notably, performers are likely to be more sensitive than listeners to the small changes in music structures. The current data suggest, however, that 'advanced' listening might hone at least some of the capacities that are associated with verbal memory, independent of music training. Knowledge about the long-term effects of everyday music listening on cognitive functions, particularly verbal memory, is limited. However, recent brain imaging studies have shown that neural activity associated with music listening extends well beyond


TABLE 5. Improvement of Verbal Memory Model Prediction by Performance and Nonperformance Music Variables, Controlling for Gender, SES, Verbal IQ and Nonverbal IQ (Subsample, N = 38).

R2 (adjusted R2) Memory measure Immediate Recall

Change in R2

Total Model

Covariates

Performance Engagement

Nonperformance Engagement

.89** (.86**)

.54**

.19**

.15**

Gender SES NVIQ VIQ Overall Learning

.88** (.84**)

.54**

Gender SES NVIQ VIQ Interference Recall

.73** (.66**)

.32* (.13*)

Short-term Free Recall

.88** (.85**)

.86** (.82**)

b .25 −.08 −.01 .52**

Instrument Training Aptitude

Instrument Training Aptitude

b −.05 .26 .12 −.30

Instrument Training Aptitude

b −.31 .39* .50*

Instrument Training Aptitude

b −.15 .24 .49

Instrument Training Aptitude

Listening

b .69**

Listening

b .69**

.10* b −.10 −.07 −.23

Listening

b −.57*

.17** b −.08 .31 .41

.24** b .40** .17 .04 .39*

b .70**

.15**

.22** b .43** .18 .05 .38*

Listening

.15**

.08

.47**

Gender SES NVIQ VIQ

b −.31 .40* .49*

.19**

.50**

Gender SES NVIQ VIQ Short-term Cued Recall

b .41** .17 .10 .41**

.13

Gender SES NVIQ VIQ

Instrument Training Aptitude .19**

.39**

Gender SES NVIQ VIQ Proactive Interference

b .43** .19 .10 .39**

Listening

b .74**

.14** b −.00 .30 .40

Listening

b .68**

(continued)


TABLE 5. Improvement of Verbal Memory Model Prediction by Performance and Nonperformance Music Variables, Controlling for Gender, SES, Verbal IQ and Nonverbal IQ (Subsample, N = 38) (Continued).

R2 (adjusted R2) Memory measure Long-term Free Recall

Total Model .86** (.82**)

Change in R2

.37**

Gender SES NVIQ VIQ Semantic Clustering

.79** (.73**)

Performance Engagement

Covariates

.38** b .34* .07 .05 .38*

.44**

Gender SES NVIQ VIQ

Nonperformance Engagement

Instrument Training Aptitude

.11** b −.30 .38* .74**

.21** b .35* .18 .12 .36*

Instrument Training Aptitude

Listening

b .59**

.14** b .28 .01 .32

Listening

b .68**

Notes: Covariates: Gender, SES, Nonverbal IQ, Verbal IQ. Performance engagement variables: Instrument frequency, Music training, Innovative Musical Aptitude. Nonperformance engagement variables: Music listening activity. Adjusted R2 is included as an estimate of the effect in the population. *p < .05. **p < .01.

the auditory cortex, involving a wide-spread bilateral network of frontal, temporal, and parietal areas underlying multiple forms of attention, semantic processing, and working memory (Janata, Tillmann, & Bharucha, 2002). In this context, it is not surprising then that individuals who often engage in ‘advanced’ listening may develop an enhanced capacity to pay attention to, maintain, and retrieve verbal information, despite not having any form of music training. This is supported by findings of our study, in which listening is the most comprehensive predictor of short and long-term recall. Interestingly, music listening also was associated with superior semantic clustering. Past researchers have argued that the ability to segregate or group sounds appears to be innate and automatic (Bregman, 1990; McAdams & Bertoncini, 1997). As music requires listeners to attend to multiple dimensions of the music structure, regular exposure could promote the ability to recognize pattern regularities (Schellenberg, 2006). In support, Jakobson et al. (2008) recently demonstrated that musicians utilized a semantic clustering strategy more effectively than did nonmusicians. An advanced ability to use Gestalt cues such as similarity, proximity, and good continuation obtained from music listening (Aiello, 1994; Lipscomb, 1996) might therefore transfer to grouping of other auditory stimuli, such as verbal information.
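As a simplified illustration of this clustering idea (ours, not the CVLT-II scoring formula), adjacent recalls drawn from the same semantic category can simply be counted; the word-to-category mapping below is a placeholder rather than the actual test items.

from typing import List

# Hypothetical words standing in for List A items; only the category labels
# (animals, furniture, transportation, vegetables) come from the test description.
CATEGORY = {
    "cow": "animals", "horse": "animals",
    "chair": "furniture", "table": "furniture",
    "bus": "transportation", "train": "transportation",
    "carrot": "vegetables", "onion": "vegetables",
}

def adjacent_semantic_clusters(recall_order: List[str]) -> int:
    """Count consecutive pairs of recalled words that share a category."""
    pairs = zip(recall_order, recall_order[1:])
    return sum(1 for a, b in pairs if CATEGORY.get(a) == CATEGORY.get(b))

# A participant who groups recall by category scores higher:
# adjacent_semantic_clusters(["cow", "horse", "chair", "table"]) -> 2
# adjacent_semantic_clusters(["cow", "chair", "horse", "table"]) -> 0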

The finding that listening to music is a powerful predictor of verbal recall scores has exciting implications for the development of early intervention programs and incorporation of music engagement activities into school curricula. For instance, it suggests that encouraging students to participate in a variety of music listening activities, even if they do not excel in performance-based musicianship, may be beneficial for their verbal skills. Experimental research using nonperformance music engagement programs as an intervention for populations with delays in verbal learning or literacy is therefore an urgent priority to test this possibility. The current study was nevertheless limited by its correlational nature. It is, for instance, possible that superior verbal memory could cause one to perform or listen to music more frequently, although theoretically, this direction of causality would seem less plausible (see Schellenberg, 2001, for convincing arguments regarding music training). In addition, while the potential impact of educational attainment and gender were controlled, and IQ was controlled in a subset of the sample, there remain alternative interpretations of the current data that require further research. For instance, it is possible that individuals who report high listening activity generally were also engaged more in other leisure or educational activities, and it is this general engagement style


that is, in fact, responsible for the superior verbal skills. Future research could include measures of participation in a range of activities to explore this possibility further. In addition, it is possible that educational attainment may not have fully captured the SES of participants. In the current largely well-educated sample (half of whom were tertiary students), it is likely that any variability in SES beyond that reflected by educational attainment (for instance, occupation type or income) would be very small. Nevertheless, future research could employ additional SES indicators to exclude this possibility. In conclusion, our findings provide what is, to the best of our knowledge, the first demonstration that the association between music listening and verbal memory is significant in its own right. A more comprehensive examination of the benefits of engaging in music activities other than


those arising from music training is therefore warranted, and offers an exciting pathway for future research.

Author Note

The authors would like to thank Professor Lola Cuddy and three anonymous reviewers for their valuable advice on an earlier version of this manuscript. We would also like to acknowledge the contribution of Christina Morris towards the collection of data, and Paul Werner, Alan Swope, and Frederick Heide for permission to use the BMEQ. Correspondence concerning this article should be addressed to Nikki S. Rickard, School of Psychology, Psychiatry & Psychological Medicine, Monash University, PO Box 197, Caulfield East, VIC, 3145, Australia. E-MAIL: [email protected]

References

Aiello, R. (1994). Music and language: Parallels and contrasts. In R. Aiello & J. A. Sloboda (Eds.), Musical perceptions (pp. 40-63). New York: Oxford University Press.
Bigand, E. (2003). More about the musical expertise of musically untrained listeners. Annals of the New York Academy of Sciences, 999, 304-312.
Bigand, E. (2004). L'oreille musicale experte peut-elle se développer par l'écoute passive de la musique? [The expert musical ear: Can it develop by passive listening to music?] Revue de Neuropsychologie, 14, 191-221.
Bigand, E., & Poulin-Charronnat, B. (2006). Are we "experienced listeners"? A review of the musical capacities that do not depend on formal musical training. Cognition, 100, 100-130.
Bilhartz, T. D., Bruhn, R. A., & Olson, J. E. (2000). The effect of music training on child cognition development. Journal of Applied Developmental Psychology, 20, 615-636.
Brandler, S., & Rammsayer, T. H. (2003). Differences in mental abilities between musicians and nonmusicians. Psychology of Music, 31, 123-138.
Bregman, A. S. (1990). Auditory scene analysis. Cambridge, MA: MIT Press.
Chan, A. S., Ho, Y. C., & Cheung, M. C. (1998). Music training improves verbal memory. Nature, 396, 128.
Coakes, S. J., & Steed, L. G. (2007). SPSS: Analysis without anguish. Queensland, Australia: John Wiley & Sons.
Costa-Giomi, E. (2004). Effects of three years of piano instruction on children's academic achievement, school performance and self-esteem. Psychology of Music, 32, 139-152.
Cuddy, L. L., Balkwill, L.-L., Peretz, I., & Holden, R. R. (2005). Musical difficulties are rare: A study of "tone deafness" among university students. Annals of the New York Academy of Sciences, 1060, 311.
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Avon Books.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test II. New York: Psychological Corporation.
Elbert, T., Pantev, C., Wienbruch, C., Rockstroh, B., & Taub, E. (1995). Increased cortical representation of the fingers of the left hand in string players. Science, 270, 305-307.
Fujioka, T., Ross, B., Kakigi, R., Pantev, C., & Trainor, L. J. (2006). One year of musical training affects development of auditory cortical-evoked fields in young children. Brain, 129, 2593-2607.
Gaab, N., Gaser, C., Zaehle, T., Jancke, L., & Schlaug, G. (2003). Functional anatomy of pitch memory—An fMRI study with sparse temporal sampling. NeuroImage, 19, 1417-1426.
Gabrielsson, A. (2001). Emotions in strong experiences with music. In J. A. Sloboda & P. N. Juslin (Eds.), Music and emotion: Theory and research (pp. 431-449). Oxford, UK: Oxford University Press.
Gardiner, M. F., Fox, A., Knowles, F., & Jeffrey, D. (1996). Learning improved by arts training. Nature, 381, 284.
Gjerdingen, R. (2003). Review of Ken Stephenson, What to listen for in rock: A stylistic analysis. Music Perception, 20, 491-497.
Hall, M. D., & Blasko, D. G. (2005). Attentional interference in judgments of musical timbre: Individual differences in working memory. Journal of General Psychology, 132, 94-112.
Herlitz, A., Airaksinen, E., & Nordstroem, E. (1999). Sex differences in episodic memory: The impact of verbal and visuospatial ability. Neuropsychology, 13, 590-597.
Ho, Y. C., Cheung, M. C., & Chan, A. S. (2003). Music training improves verbal but not visual memory: Cross-sectional and longitudinal explorations in children. Neuropsychology, 17, 439-450.
Husain, G., Thompson, W. F., & Schellenberg, E. G. (2002). Effects of musical tempo and mode on arousal, mood, and spatial abilities. Music Perception, 20, 151-171.
Hutchinson, S., Lee, L., Gaab, N., & Schlaug, G. (2003). Cerebellar volume of musicians. Cerebral Cortex, 13, 943-949.
Jackendoff, R. (2009). Parallels and nonparallels between language and music. Music Perception, 26, 195-204.
Jakobson, L. S., Cuddy, L. L., & Kilgour, A. R. (2003). Time tagging: A key to musicians' superior memory. Music Perception, 20, 307-313.
Jakobson, L. S., Lewycky, S. T., Kilgour, A. R., & Stoesz, B. M. (2008). Memory for verbal and visual material in highly trained musicians. Music Perception, 26, 41-55.
Janata, P., Tillmann, B., & Bharucha, J. J. (2002). Listening to polyphonic music recruits domain-general attention and working memory circuits. Cognitive, Affective and Behavioral Neuroscience, 2, 121-140.
Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Brief Intelligence Test (2nd ed.). Circle Pines, MN: AGS Publishing.
Kramer, J. H., Delis, D. C., & Daniel, M. H. (1988). Sex differences in verbal learning. Journal of Clinical Psychology, 44, 907-915.
Lee, D. J., Chen, Y., & Schlaug, G. (2003). Corpus callosum: Musician and gender effects. NeuroReport, 14, 205-209.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
Lipscomb, S. D. (1996). The cognitive organization of musical sound. In D. A. Hodges (Ed.), Handbook of music psychology (pp. 133-175). San Antonio: IMR Press.
McAdams, S., & Bertoncini, J. (1997). Organization and discrimination of repeating sound sequences by newborn infants. Journal of the Acoustical Society of America, 102, 2945-2953.
Milovanov, R., Huotilainen, M., Välimäki, V., Esquef, P. A., & Tervaniemi, M. (2008). Musical aptitude and second language pronunciation skills in school-aged children: Neural and behavioral evidence. Brain Research, 1194, 81-89.
Musacchia, G., Sams, M., Skoe, E., & Kraus, N. (2007). Musicians have enhanced subcortical auditory and audiovisual processing of speech and music. Proceedings of the National Academy of Sciences USA, 104, 15894-15898.
Ollen, J. (2006). A criterion-related validity test of selected indicators of musical sophistication using expert ratings. In M. Baroni, A. R. Addessi, R. Caterina, & M. Costa (Eds.), Proceedings of the 9th International Conference on Music Perception and Cognition (ICMPC9), Bologna, Italy.
Pallant, J. F. (2005). SPSS survival manual. NSW, Australia: Allen & Unwin.
Pantev, C., Roberts, L. E., Schulz, M., Engelien, A., & Ross, B. (2001). Timbre-specific enhancement of auditory cortical representations in musicians. NeuroReport, 12, 169-174.
Patel, A. D. (2008). Music, language, and the brain. Oxford, UK: Oxford University Press.
Rammsayer, T., & Altenmüller, E. (2006). Temporal information processing in musicians and nonmusicians. Music Perception, 24, 37-48.
Rauscher, F. H., Shaw, G. L., & Ky, K. N. (1993). Music and spatial task performance. Nature, 365, 611.
Saegert, S. C., Adler, S. C., Bullock, H. E., Cauce, A. M., Liu, W. M., & Wyche, K. F. (2006). APA task force on socioeconomic status (SES). Retrieved February 19, 2009, from http://www.apa.org/governance/CPM/SES.pdf
Schellenberg, E. G. (2001). Music and nonmusical abilities. Annals of the New York Academy of Sciences, 930, 355-371.
Schellenberg, E. G. (2004). Music lessons enhance IQ. Psychological Science, 15, 511-514.
Schellenberg, E. G. (2005). Music and cognitive abilities. Current Directions in Psychological Science, 14, 317-320.
Schellenberg, E. G. (2006). Long-term positive associations between music lessons and IQ. Journal of Educational Psychology, 98, 457-468.
Schellenberg, E. G. (2008). Commentary on "Effects of early musical experience on auditory sequence memory" by Adam Tierney, Tonya Bergeson, and David Pisoni. Empirical Musicology Review, 3, 205-207.
Schellenberg, E. G., & Hallam, S. (2005). Music listening and cognitive abilities in 10- and 11-year olds: The blur effect. Annals of the New York Academy of Sciences, 1060, 202-209.
Schlaug, G., Jancke, L., Huang, Y., Staiger, J. F., & Steinmetz, H. (1995). Increased corpus callosum size in musicians. Neuropsychologia, 33, 1047-1055.
Sloboda, J. A., O'Neill, S. A., & Ivaldi, A. (2001). Functions of music in everyday life: An exploratory study using the Experience Sampling Method. Musicae Scientiae, 5, 9-32.
Wallace, W. T. (1994). Memory for music: Effect of melody on recall of text. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1471-1485.
Werner, P. D., Swope, A. J., & Heide, F. J. (2006). The Music Experience Questionnaire: Development and correlates. The Journal of Psychology, 140, 329-345.
