CreateWorld 2009
Draft Papers of the CreateWorld09 Conference

30 November - 02 December 2009
Griffith University, Brisbane, Queensland, Australia

Editors
Michael Docherty :: Queensland University of Technology
Darryl Rosin :: Griffith University

© Copyright 2009, Apple University Consortium Australia, and individual authors. Apart from any use as permitted under the Copyright Act 1968, no part may be reproduced by any process without the written permission of the copyright holders.

Contents

Andrew Blackburn

Computers, the pipe organ and realtime dsp

Kim Cunio

The War on the Critical Edition Volume 1

Andrew Dekker, Stephen Viller, Aaron Tan

Waving creatively: An examination of Google Wave to facilitate collaboration in creative processes

Paul Draper

Authority 3.0: Toward a digital press for university-based musicians, and its role in validating ERA outputs

Matt Hitchcock

Vertical Integration through Blended Learning: a whole-of-program case study

Kat Hope, Malcolm Riddoch

The Vanishing Bass: Possible implications of Internet-centric listening on bass perception

Jean Penny

Inside-Out Flutes: The Emergence of the Transformative Meta-flautist

Jason Zagami

iPrac - twittering to survive practicum

Jason Zagami

iPrac - use of spontaneous recording devices to enhance digital portfolios


Andrew Blackburn
M.Mus (Melb), Dip Ed (Melb), DMA Candidate, Queensland Conservatorium Griffith University
CreateWorld09, November 2009

Computer use in music for the pipe organ and real time dsp - or the music of Janus

Abstract

Ever since the Bremen Radio Broadcast Performance of 20 May 1962, a broadcast that included Gyorgy Ligeti's 'Volumina', Mauricio Kagel's 'Improvisation Ajoutée' and Bengt Hambraeus' 'Interference' (all compositions that exposed a whole new world of texture, timbre and musical possibility), the pipe organ has been reclaiming a position of prominence in contemporary art music. The timbral, technical and musical possibilities exhibited in these compositions, and the more recent advent of accessible and portable real time DSP (Digital Signal Processing), have encouraged an ever widening range of composers and performers to write for the instrument, extending both its timbral potential and its inherent spatial possibilities. These developments have changed our expectations and perceptions of what a pipe organ can musically be and do. In this paper I shall provide a brief background to this development, focussing on four significant and recently composed works for pipe organ and electronics. I shall explore how the instrument's apparently static timbral world is dynamically altered, and the means by which this alteration is achieved.


Table of Contents

Definitions and Background 4
Live Digital Signal Processing and organ 4
Four Compositions for pipe organ and real time dsp 4
Dialogo Sopra i Duo Sistemi (2003 revised 2007) - René Uijlenhoet 4
Vanitas - Steve Everett 5
Symmetrie Integrante (2007) - Andrian Pertout 7
Eight Panels (2007), a structured improvisation - Lawrence Harvey and Andrew Blackburn 8
Conclusions 9
REFERENCES 10


Introduction

The function and control of timbre by the player of a traditional acoustic pipe organ can be equated, in contemporary electronic musical terms, to the control and building of sound through additive synthesis. In the last few years a number of composers have also taken advantage of increased portable computing power and the availability of software and VST plug-ins to create a new timbral world for the pipe organ. The extraordinary sonic richness of the acoustic organ provides a wonderful source for digital signal processing (DSP), creating new sounds, not just imitating organ sounds, but opening a whole new range of timbres and aural spatial relationships, changing our expectations and perceptions of what a pipe organ can musically be and do. Pipe organs have many different styles that may be defined by continent, nationality, region, era and individual builder. Today the pipe organ is often associated with Christian religious institutions, often of a highly conservative bent, and accordingly music associated with the pipe organ is assumed to be similarly styled. This attitude, however, ignores the significant music composed from the 1950s until today, now often performed on organs in large civic buildings. Before proceeding, it must be emphasized that compositions for acoustic instruments with digital signal processing are not new, for there are compositions for flute and electronics from as early as 1952 (Penny 2009). Bruno Maderna's Musica su due dimensioni (1952), for flute, percussion and electronic sounds, is certainly one of the earliest examples. Within a couple of years, composers such as Edgard Varèse, Otto Luening and Vladimir Ussachevsky were composing works for acoustic instruments, accompanied or expanded by recordings and processed sounds. In 1958 (six years after the first work combining acoustic instrument and manipulated sounds) the Swedish composer Bengt Hambraeus composed two significant works for organ and manipulated organ sounds on tape, Doppelrohr II and Konstellationer (1958). The works are highly significant in the lineal development of the organ as an avant-garde instrument, bridging the span between the so-called 'experimental' works of Olivier Messiaen and Gyorgy Ligeti's Volumina (1962). The quasi-electronic clusters of Volumina that so shocked many listeners did not arrive from a vacuum: timbral explorations from composers such as Tournemire, Messiaen, and Hambraeus may be directly traced forward to Ligeti, and so, for all its radicalism, Volumina represents Ligeti '...searching for what the next step is in any field. What next step is implied…' (John Cage, quoted in Duckworth 1995, p 28). This lineal quality in avant-garde music can be traced as strongly in organ music as in any instrumental genre, though this trail is beyond the scope of this paper.1

1 The topic is explored in Blackburn (2008), The organ as an avant garde instrument, unpublished paper.


Between the late 1960s and late 1980s there was parallel experimentation in avant-garde organ music: composers including the Australians Warren Burt, Stephen Ingham and Ron Nagorka looked at combinations of taped sounds and organ, while those like Gyorgy Ligeti, Iannis Xenakis or Stephen Montague extended performance techniques acoustically.

Definitions and Background

Live Digital Signal Processing and organ

A brief and simple description of what we mean by music composed for pipe organ and real time Digital Signal Processing (DSP): a pipe organ in which, in addition to the acoustic qualities and potential of the instrument that are already established, microphones placed in and around the organ feed a signal to a processing unit. This unit then sends processed sounds from the organ to a speaker system in the same physical space, blending the acoustic and electronic sounds into a coherent whole for the audience. The configuration of all these elements is variable and dependent upon the room in which the organ is situated, the layout and disposition of the organ, and the musical requirements established by the composer. Wolfgang Mitterer (Linz) and Morgan Fisher (Tokyo) are both improvisers who use the organ and live dsp of the organ sound. Some other recent examples of improviser/keyboardists include Chris Abrahams and Charlemagne Palestine, both using pipe organ with other (non organ based) realtime dsp in their improvisations.2 So far the earliest example of organ with live digital signal processing appears to be Hans W Koch's orgel/topographie (1998), for one performer inside a church organ with live electronics and a second performer at the keyboard; it uses a hand held microphone to amplify certain sounds from deep within the organ. His aus "sechzig" teil IV: paradiso infernale (1997), composed for an exhibition of Salvador Dali's xylographies after Dante's "divina commedia", is scored for two speakers (male/female), tenor saxophone, organ, synthesizer and live electronics. In an email to the author, Koch writes of the live dsp:

I´d say it was a crude mix of ring modulation, some very cheap echo, filtering and some oscillators. i used a lot of feedback also fed trough the echo. the truth is, that i didn´t have much money at that time (not that i would now...), so started building my own circuits. they mostly worked, but had a peculiar sound. and then, failure sometimes has its own beauties :-) (Koch, H. (2006) Personal email to Andrew Blackburn)
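Koch's description translates readily into a few lines of DSP. The sketch below is a rough numpy illustration of that kind of chain (ring modulation feeding a cheap feedback echo); the carrier frequency, delay time and gain are placeholders of my own, not values from Koch's circuits.

```python
import numpy as np

def crude_ringmod_echo(x, sr, carrier_hz=220.0, echo_s=0.25, feedback=0.6):
    """Rough sketch of the chain Koch describes: ring modulation of the
    organ signal by an oscillator, fed through a simple feedback echo.
    All parameter values are illustrative placeholders."""
    t = np.arange(len(x)) / sr
    ringmod = x * np.sin(2 * np.pi * carrier_hz * t)   # ring modulation
    d = int(echo_s * sr)                                # echo delay in samples
    y = ringmod.copy()
    for i in range(d, len(y)):
        y[i] += feedback * y[i - d]                     # feedback echo
    return y
```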

Four Compositions for pipe organ and real time dsp

Dialogo Sopra i Duo Sistemi (2003 revised 2007) - René Uijlenhoet

2 In this paper, the ability to reproduce music without the input from the original creator is a selective factor, and so the work of these practitioners is not considered.


For organ and quadraphonic live electronics, duration c. 18'30". The title of this work also suggests its intent: a dialogue between two systems, one old and the other new. It is derived from Galileo Galilei's Dialogo di Galileo Galilei ... : doue ne i congressi di quattro giornate si discorre sopra i due massimi sistemi del mondo tolemaico e copernicano proponendo indeterminatamente le ragioni filosofiche e naturali tanto per l'vna, quanto per l'altra parte (1632), in which the author compares the prevailing Ptolemaic (earth-centred) model of the cosmos with the Copernican (sun-centred) system. Dialogo has received a number of performances3 since its composition in 2003, and the most recent iteration of the score (2009) includes many warnings and notes that suggest they are born of performance experience. From the composer's introductory notes: '...Due to the musician operating the computer using both hands and eyes to operate the computer as well as the mixing desk, it is recommended to ask an assistant to help turning the pages of the score in order to stay synchronous with the organist...', or, regarding the setup of the microphones in the organ case, '...be prepared to spend a lot of time realising the installation and be careful not to damage pipes...during this installation process! Also make sure – in advance of the concert date – that the owner of the organ allows this harmless treatment to be executed.' The electronics for this piece are programmed in SuperCollider 3 (http://supercollider.sourceforge.net) and a runtime and patch file are provided with the score. The setup requirements are as follows: OS X (10.4.8 to 10.4.11) running SuperCollider 3 up to version 3.3.1 (rev 9267) with a (minimum 8 I/O) audio interface attached. Eight microphones go to the inputs; four outputs (1-4 on the interface) go to the mixer and to four amp/loudspeaker combinations set up around the audience. Mic 1 is inserted in the Swell, left side; Mic 2 in the Swell, right side; Mics 3 and 4 in the left and right of the Great; Mics 5 and 6 in the left and right pedal towers; Mics 7 and 8 in the left and right sides of the positive organ. This arrangement, with fewer or more microphones, has turned out to be most successful for most organs. It is the maximum which has been used by this author, and allows for the true spatial expansion of the instrument within the performance space. The software is set with 35 performance 'scenes', each descriptively titled, within a single user interface page.
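For planning an installation, the routing described above can be restated as plain data. The listing below is only a restatement of the setup in the score's description; it is not part of the SuperCollider patch supplied with the piece.

```python
# Descriptive restatement of the Dialogo microphone and output routing.
DIALOGO_MIC_POSITIONS = {
    1: "Swell, left side",
    2: "Swell, right side",
    3: "Great, left side",
    4: "Great, right side",
    5: "Pedal tower, left",
    6: "Pedal tower, right",
    7: "Positive, left side",
    8: "Positive, right side",
}

DIALOGO_OUTPUTS = {
    n: f"interface output {n} -> amp/loudspeaker {n}, placed around the audience"
    for n in range(1, 5)
}

NUM_SCENES = 35  # preset performance 'scenes' on a single interface page
```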

Vanitas - Steve Everett

3 e.g. Randall Harlow, Eastman Organ Festival; at the Laurenskerk, Rotterdam; and recordings.


Vanitas refers to a type of still life painting consisting of a collection of objects that symbolize the brevity of human life and the transience of earthly pleasures and achievements (e.g., a human skull, books, musical instruments, decaying fruit and flowers, a mirror, and broken pottery) – a reminder that worldly riches cannot stop man's inevitable decay. Such paintings were particularly popular in the sixteenth and seventeenth centuries, especially in the Netherlands. (Harlow, R (2007) Contemporary Organ Music Festival April 11-14, 2007, programme notes, http://ecmc.rochester.edu/ecmc25/concert8.pdf last accessed 1 November 2009)

[Audio: Everett, S., Vanitas (2005), performed by organist Randall Harlow, mp3, 2007.]

The aim of the electronic processing in this instance is to create an impression of the decay and ephemeral nature of life, as depicted in the Vanitas paintings, through effects including timbral shift, spatial re-location, and tuning and detuning of the organ sound. Vanitas is composed for organ with live electronic processing using the Kyma4 Sound Processing System.5 Everett explains his technical requirements in the performance instructions as follows:

Four to eight microphones [are] placed as close as possible to the organ case in a vertical array on both sides of the performer. If possible it is desirable to place the microphones inside the case to avoid feedback issues related with microphones placed in acoustically rich halls and churches. This audio is then processed through eleven computer Sound Objects in Kyma created by the composer. Each Sound Object consists of three or more spectral filters, delays, and diffusion effects... scheduled with the Kyma Timeline and notated in the score as "*Kyma 1-11". Ideally a four channel sound system with a fifth sub-bass channel, all hidden from audience view, is preferred for playback. The goal of the live electronic processing is to subtly enhance timbral shifts, spatial location and tuning of the organ sounds. (Everett, S 2005 p 2)

One of the important aspects of this piece is the relationship which is established between the acoustic organ and the processed sounds. In requesting that the speakers be concealed, Everett seeks to create a blend of acoustic and electronic sounds, blurring the boundaries between them. Which sound source is responsible for each timbre or effect is rendered indistinguishable by concealing the speakers in and around the organ, and mixing the combination of sounds in the performance room. It is a compositional device that makes Vanitas distinct from the others under discussion in this paper.
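Everett's 'Sound Objects' are built in Kyma, which is not reproduced here, but their general shape (a few spectral filters, each feeding a delay, mixed back with the dry signal) can be sketched in a few lines. Everything below, band edges, delay times and wet level, is an assumed illustration and is not taken from the score or the Kyma Timeline.

```python
import numpy as np
from scipy.signal import butter, lfilter

def sound_object_like(x, sr, bands=((200, 400), (800, 1600), (3000, 6000)),
                      delays=(0.12, 0.31, 0.47), wet=0.3):
    """Loose stand-in for one 'Sound Object': three band-pass (spectral)
    filters, each followed by a simple delay, summed with the dry organ
    signal. Band edges, delay times and wet level are assumptions."""
    out = x.astype(float).copy()
    for (lo, hi), d in zip(bands, delays):
        b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        band = lfilter(b, a, x)
        delayed = np.zeros_like(band)
        n = int(d * sr)
        delayed[n:] = band[:-n]        # crude delay line
        out += wet * delayed
    return out
```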

4 www.symbolicsound.com
5 A Max/MSP version of the electronics is available at http://music.emory.edu/COMPUTER/SE/VanitasMaxMSP.pdf
6 Andrian Pertout 2007
7 Lawrence Harvey (in conjunction with Andrew Blackburn) 2007


Where Dialogo, Symmétrie Intégrante6 and Eight Panels7 exploit the potential for re-spatializing the organ and its relationship to the performance space through speakers placed around, amongst, above and below the audience, Everett arranges the electronic sounds so that they all appear to emanate from the organ itself. It is an idea currently being explored further by Christophe d'Alessandro et al. in a recent (unpublished) paper delivered at the 2009 ICMC in Montreal: The Ora Project: Audio-Visual Live Electronics and the Pipe Organ.

Symmetrie Integrante (2007) - Andrian Pertout

A work for organ, flutes (3) and live electronics (video examples of excerpts of this piece are available online8). It was commissioned for performance at the Melbourne Town Hall. It is a startling piece that contrasts instruments of different dimensions, from the largest to the shortest organ pipes (10 metres to 4 or 5 mm), as well as flutes from alto to piccolo. The electronics comprise a minimum of four microphone inputs connected to a computer through a digital audio I/O, then to a mixer incorporating a 4 channel sound diffusion system, ideally configured similarly to that of the Uijlenhoet piece described earlier. Again, the music is arranged into preset 'scenes' - nine for the flutes and four for the organ. They were operated by the composer in the original performance, and the composer is currently creating a more portable version in Max/MSP for another performance in the Organs of the Ballarat Goldfields festival scheduled for next January. Programme 2 uses the 'Waves MetaFlanger 5.0 VST plug-in (a vintage tape flanging and phaser emulation audio plug-in that generates gentle choruses and dual delay flanging sounds to sharp phasing and extreme jet sweeps)' (Pertout A, Symmétrie Intégrante 2007, composer's performance notes). The actual settings of this scene are: mix: 100%; feedback: 80.0%; phase enable: on; filter type: low pass; cut-off frequency: 1.2 kHz; filtering: on; delay: 9.0 ms; tape: on; rate/oscillation speed: 0.10 Hz; sync: manual; depth: 12.0%; link: off; waveform: triangle; stereo: 180.00°; gain: +0.0 dB.9 The effect of this is startling in impact, particularly contrasting the almost pure sine wave structure of the organ (registration specified as rohrflute 8' and 4' principal) with the triangle wave form inserted by the plug-in, projected with the extreme 'jet sweeps' created by the wide stereo setting of 180.00°. Pertout found that DSP for the organ is most musical when manipulating quieter input levels, a facet that is common in all the works under discussion. The opening and closing sections of the work are loud (registration - principal chorus to mixtures), and his dsp here is used to create waves of sound that project into the hall, bathing the audience in an audio wash which exaggerates the wash of sound from the organ itself.
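To give a sense of what those settings do, here is a very loose numpy approximation of a feedback flanger driven by a slow triangle LFO, using the quoted 9 ms delay, 0.10 Hz rate, 12% depth and 80% feedback. It is a sketch only: the parameter names and the mapping of 'depth' are my own reading, and it makes no attempt to model the plug-in's tape, phase or stereo behaviour.

```python
import numpy as np

def flanger_sketch(x, sr, delay_ms=9.0, rate_hz=0.10, depth=0.12,
                   feedback=0.8, mix=1.0):
    """Mono feedback flanger: a short delay line whose length is swept by a
    triangle LFO. Parameters follow the quoted scene settings; how 'depth'
    maps onto delay modulation is an assumption."""
    n = len(x)
    t = np.arange(n) / sr
    tri = 2.0 * np.abs((t * rate_hz) % 1.0 - 0.5)          # triangle LFO in [0, 1]
    delay = (delay_ms / 1000.0) * (1.0 - depth * tri)       # swept delay time
    buf = np.zeros(n)
    y = np.zeros(n)
    for i in range(n):
        d = int(delay[i] * sr)
        delayed = buf[i - d] if i >= d else 0.0
        buf[i] = x[i] + feedback * delayed                   # feedback into the line
        y[i] = (1.0 - mix) * x[i] + mix * delayed
    return y
```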

8 http://www.flutes.com.au/flute_1/Andrew_Blackburn_Video_Samples.html

9 The full list of settings is provided in the composer's performance notes. Andrian Pertout has adopted the approach to performance that, given the relative transience of many audio applications, a technical list of settings is most desirable, as this can be re-created in whatever software is available to the performers.


Eight Panels (2007), a structured improvisation - Lawrence Harvey and Andrew Blackburn

[Video: 8 Panels excerpt, Panels 5-6 - organist Andrew Blackburn.]

Eight Panels was also commissioned by the City of Melbourne and first performed in the Melbourne Town Hall in October 2007. It is a structured improvisation conceived by Lawrence Harvey in conjunction with Andrew Blackburn. The audio input arrangement from the organ is similar to that of Symmétrie Intégrante given above, but the output arrangement is considerably developed. Again, the intention behind the work is to draw the organ out from its location in the Town Hall (in a very large chamber across the full width of the stage) into the body of the hall. The work is built around eight major sections, each one exploring different sets of dsp effects through the audio input from a carefully structured though improvised organ part.

[Score excerpt: Harvey, L (2007), 8 Panels, Panel 5 (section).]

The output for the work is 16 channel surround sound: two concentric circles of speakers placed around the audience and, within the audience space, four speaker 'trees'. These provided a highly specific sound source which was also positioned vertically in the aural space. The players required to perform this work are an organist and two sound technologists: one technologist controls the signal processing of the organ sounds, whilst the second controls the location of the sounds within the large space of the hall. The score is divided accordingly and precisely indicates this process (see the score example above).

[Photograph: Melbourne Town Hall organ; the larger pipes on the facade are in excess of 5 metres in length.]
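The division of labour between the two technologists can be pictured as two independent controls over the same signal: one produces the processed organ sound, the other only decides how much of it reaches each of the 16 speakers. The sketch below assumes a particular channel ordering (rings first, then the four trees); the paper does not specify the actual allocation.

```python
import numpy as np

N_CHANNELS = 16  # two concentric rings of speakers plus four speaker 'trees'

def spatialise(processed, gains):
    """Second operator's job in caricature: route one processed organ signal
    (a mono block produced by the first operator) to the 16-channel rig by
    per-speaker gains."""
    gains = np.asarray(gains, dtype=float).reshape(N_CHANNELS, 1)
    return gains * processed  # shape (16, num_samples)

# Example: favour the four speaker trees (assumed here to be channels 12-15).
gains = np.zeros(N_CHANNELS)
gains[12:] = 1.0
```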


Conclusions

When used to manipulate the acoustic sound of a pipe organ, computers offer extraordinary possibilities for transforming the timbral and spatial world of the instrument. There is an ever growing body of work for the organ with realtime DSP, and an expanding number of composers around the world are interested in writing for this combination. The works discussed all come from composers of very different backgrounds and even cultures, yet in this frontier sound world there is a commonality of purpose as well as a (perhaps surprising) unanimity about what works when manipulating acoustic organ sounds. As far as I have been able to ascertain, none of the composers were referencing earlier works at the compositional stage, yet all have used similar processing techniques: flanging, delay, reverberation, granular synthesis, ring modulation and more. Although there are some practical difficulties in positioning microphones and speakers in and around large pipe organs, the musical result is well worth overcoming these potential pitfalls.


REFERENCES

Blackburn, A (2008) The Organ as an instrument of choice for avant garde composers. Unpublished paper.

d'Alessandro, C., Noisternig, M. et al (2009) The ORA Project: Audio-Visual Live Electronics and the Pipe Organ. Unpublished paper presented at ICMC 2009, http://www.icmc2009.org/ last accessed 6 November 2009.

Duckworth, W. (1995) Talking Music – Conversations with John Cage, Philip Glass, Laurie Anderson and five generations of American experimental composers. New York: Da Capo Press.

Everett, S. (2005) Vanitas (musical score), self published and available from the composer.

Everett, S. (2005) Vanitas (mp3) http://music.emory.edu/COMPUTER/Media/Vanitas_Harlow_organ.mp3 last accessed 4 November 2009.

Harlow, R (2007) Contemporary Organ Music Festival April 11-14, 2007, programme notes, http://ecmc.rochester.edu/ecmc25/concert8.pdf last accessed 1 November 2009.

Koch, H. (2006) Personal email to Andrew Blackburn.

O'Connell, M (2005) Flexible Learning and Educational Design, http://eddesign.blogspot.com/2005/07/designing-learning-experiences-what-is.html web page last accessed 6 November 2009.

Pertout, A. (2007) Symmétrie Intégrante (2007) for organ, flutes and electronics, Op 394 (musical score), self published and available through the composer's website www.pertout.com or the Australian Music Centre.

Snyder, K. J. (ed) (2002) The Organ as a Mirror of its Time: North European Reflections, 1610-2000. Oxford: Oxford University Press.


Uijlenhoet, R. (2003 rev 2009) Dialogo sopra i due sistemi for organ and quadraphonic live electronics (musical score). Amsterdam, Netherlands: Muziek Centrum Nederland.

[Photograph: Melbourne Town Hall organ case. The larger pipes visible are in excess of 5 metres in length.]


THE WAR ON THE CRITICAL EDITION VOLUME 1

Dr Kim Cunio
Queensland Conservatorium Griffith University

ABSTRACT

In my so-called 'serious' research (into best practice realisation of ancient and medieval music), a major theme has been the preparation of multiple realisations of a text or musical work, in response to music that has no critical or singular edition. This has applied to both scores and recorded works, and this premise has had a profound effect on both my realised early music and my new art music composition. This paper documents two methods of consciously working against the notion of a critical edition. The first is three recorded realisations of the prologue to Hildegard of Bingen's 12th Century music drama Ordo Virtutum (ABC Classics 2007). Each realisation becomes a work in itself and sets out to prove that early music notation allows the space for significant new composition. The second case study, Namu Amida Butsu, a new piece of honkyoku1 for solo shakuhachi, is the genesis of another process. An existing scored and recorded work is currently being deconstructed with the purpose of being recomposed on Garageband or a comparable music sequencing program. The ramifications of this method are significant because the technique of 'comping'2, from which this is derived, is common in popular and image based music, where it is used to produce a critical edition similar to that of a score. However, in this case new technology is not used to reinforce an existing structure, but to find multiple new structures from the source material.

1. INTRODUCTION

This paper serves two functions: to document part of the practice based research contained in two composition and recording projects between 2004 and 2006, and to begin a future research project into the processes and ramifications of recomposing around existing material. Both projects were commissioned works that had to work within the confines of agreements, budgets and players, and a process of artistic self-examination was undertaken concurrently, particularly as I was completing my Doctorate in composition at the time.

1 A traditional Japanese Zen music and meditation form.

2 The arrangement of a final recorded version of a recording from multiple takes or versions on a music sequencer.

Two of the findings of my research were that multiple editions of a work do not inherently endanger a musical tradition as long as the contributors to it are fully aware of the process of artistic investigation (Cunio, 2008), and that intercultural and early music can inherently benefit from not being defined by a critical or singular edition (Cunio, 2009). One of the conclusions was the need for a practical investigation of this premise, a process that this paper begins. Two works that I wrote in this period are therefore unpacked and reworked within the notion of resisting the singular critical edition. Three pieces from The Sacred Fire (Cunio, Lee, 2007), derived from Hildegard of Bingen's Ordo Virtutum Prologue, demonstrate my most common practice in breaking the concept of the critical edition, which is multiple realisations of the same textual and melodic source material. Namu Amida Butsu, the second work, is much more radical, as it experiments with postmodern representations of traditional music and will allow the listener to recompose the music itself. Namu Amida Butsu is presently being cut into multiple loops. When finished, the loops will be imported into the Apple Garageband loop library. The loops will be sent to colleagues and students, offering them the chance to recompose the work. No reference copy of the existing critical edition will be supplied.
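The preparation described here, cutting the recording into fragments, numbering them randomly and exporting them for a loop browser, can be prototyped very simply. The sketch below, using only the Python standard library, is my own illustration of that kind of workflow: the fixed fragment length and the file-naming scheme are assumptions, not Cunio's actual procedure (which targets the Garageband loop library).

```python
import random
import wave

def slice_into_loops(path, loop_seconds=8.0, prefix="namu_fragment"):
    """Cut a recording into fixed-length fragments and write them out with
    randomly assigned numbers, ready to drag into a loop browser. Fragment
    length and naming are illustrative assumptions."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_loop = int(loop_seconds * params.framerate)
        chunks = []
        while True:
            data = src.readframes(frames_per_loop)
            if not data:
                break
            chunks.append(data)
    order = list(range(len(chunks)))
    random.shuffle(order)  # 'randomly numbered motivic fragments'
    for number, idx in enumerate(order, start=1):
        with wave.open(f"{prefix}_{number:02d}.wav", "wb") as dst:
            dst.setparams(params)
            dst.writeframes(chunks[idx])
```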

2. THE CRITICAL EDITION

The critical edition is at the heart of western art music. When someone asks to hear Ave Maria at their wedding there is an inherent cultural assumption that they will hear a particular version of the Ave Maria, usually composed by Schubert or Bach/Gounod. Though the Bach/Gounod is an arrangement of an earlier critical edition, it has been absorbed into the canon of western art music alongside the Schubert, and as such either composition can be attributed to a composer(s). Musicology can document the works of western composers thanks to the invention of notated music by Guido of Arezzo in the 11th Century, and the extant works of the thousands of composers who have worked in both sacred and secular music. These thousand-year-old notations give us many of the best and worst aspects of our current musical life: they allow copyright and patents to flourish and intellectual capital to be recognised, but they also allow singular editions to push their multiple counterparts to the side. For example, an artist may play a hit song many different ways, but the critical edition is always the recorded and disseminated version of the work. Technology is changing how we perceive both music and tradition.

It is no longer necessary to write a definitive score when working in many music styles. Indeed, when notating and working with traditional music, full scoring can be a burden, making future renditions unnecessarily complex or rigid in nature. The journal Recent Researches in the Oral Traditions of Music defines this point of change:

Recent Researches in the Oral Traditions of Music encourages scholars to rethink the critical edition as a crucial component in the current rapprochement between ethnomusicology, historical musicology, and cultural studies. As new media make it possible to experience musics from throughout the world, as oral traditions have become essential to the globalization of local musical practices, and as popular musics give postmodern meaning to historical diasporas, so too does Recent Researches in the Oral Traditions of Music invite music scholars to conceive of editions that will contribute fundamentally to some of the most critical debates of our day. (Bohlman, 2005)

The computer has revolutionised music, and art music composers and institutions are only now coming to terms with the ramifications. The recording of music offers a potentially perfect copy of a performance that can then be transcribed or learnt orally, making it a meeting place between oral and written forms. It can be argued that notation, as we historically understand it, is now only one of a number of processes to preserve and record music. Innovations such as the Musical Instrument Digital Interface (MIDI), the Digital Audio Workstation (DAW), and wave file composition (whereby the composition takes place after the recording of the individual parts) have replaced traditional scoring for many composers. In addition to this, we now extend the terms composition and composer well beyond the historical Western definitions. The composer of a work does not necessarily have to know the craft of notation, nor be able to perform a work the same way twice. Reid states that a written score can range from a chord chart to a Pro Tools file (Reid, 2007), yet a standard composition degree at a tertiary institution is still mainly concerned with the authorship of singular critical written works.

3. EXPERIMENT 1: THE SACRED FIRE

I have worked with soprano Heather Lee for the last 10 years on a large variety of projects from Western classical music to traditional music. Consequently, Lee was an obvious collaborator for this project. Lee has had a strong interest in the music of Hildegard of Bingen, and her background in medieval and Baroque music was ideal for this project. Additional collaboration was with Cantillation, a vocal ensemble based at the ABC, and a newly formed intercultural ensemble sourced for the project.3 The music was recorded in February 2006 at the Eugene Goossens Hall, Sydney, and released in May 2007 by the ABC. This project involved taking the music of Hildegard of Bingen (1098-1179), the visionary composer, author and mystic, and recomposing around existing scores. The brief for the project was to create a CD recording of Hildegard of Bingen's music unlike any other to date. It was made clear by Lee that the vocal line would be performed as written, though there would be room for ornamentation and harmony (organum4) in the vocal lines. Despite these constraints there was enormous room for innovation in instrumentation, texture, and accompaniment. A series of new pieces were written for the assembled intercultural ensemble, and one work was selected to be part of this larger experiment. Lee and I argued in the liner notes of the disc that the craft of medieval music notation is unable to provide anything like the nuance that we expect from a contemporary score, and as such the original editions of Hildegard's scores (the Riesenkodex and Dendermonde collections) can enable a process of imagination and recomposition (Lee, 2007). Many things simply do not exist in the original scores: most prominently absent is any rhythmic duration or emphasis, and the interpretation of the neumatic5 notation in which this is expressed is far from standard. The argument comes down to one essential point: a purely authentic rendition is not actually possible, and from this premise multiple realisations of a single text are valid. The Prologue to Hildegard's play Ordo Virtutum was selected for an in-depth exploration in this manner. Because Ordo Virtutum is such a long piece, it was impossible within the confines of the commission to work with the whole piece (it is both significant and long enough to constitute a whole recording). The Prologue was therefore a perfect choice. Three main parameters investigated in this realisation of the Ordo Virtutum Prologue were the use of multiple realisations of a single text, the use of free and metric time with the same text, and the use of harmony.

Three new pieces

The pieces written from this realisation process are: Who are these?, The Sacred Fire (TSF) disc 2 track 6; Patriarchs, Prophets and Virtues, TSF disc 2, tracks 7-8; and Ordo Virtutum – Instrumental Prologue, TSF disc 2 track 10. Who are these? is a recitation of a translation of the prologue text into English by Rebecca Frith and an intercultural ensemble.

3 Kim Cunio, reed organ; Jamal Rekabi, kemanche; Llew Kiek, plucked strings; Paul Jarman, winds; Tunji Beier, percussion.

4 Parallel harmony, most commonly up a 5th or down a 4th from the original melody.

5 Sign based - a precursor to modern notation.

The music is constructed around a very simple descending Dorian fragment, G – F – E – D, in which all instruments have the opportunity to improvise as Frith speaks. The traditional wind instrument, the tarogato, is the featured melodic instrument and plays a long phrase with circular breathing at the end of the piece. Patriarchs, Prophets and Virtues is a significant setting of the text and music of Hildegard. A series of Burmese gongs stress the D Dorian scale (with a Bb available to augment it). A massive, slightly detuned low E gong thunders the feeling of the piece into newly constructed cadence points before and after the singing, and the Cantillation male ensemble sing the primary text of the prologue very slowly. They then accompany Lee, who sings the text of the Virtues over a I-V vocal drone. The men then respond to close the section, before all is repeated with variations and organum harmonies (Lee, 2007). Ordo Virtutum – Instrumental Prologue is a setting of the same melody for an instrumental ensemble. In this version additive rhythm is used to provide a pulse of which there is no record in the neumatic notation. Material that was sombre and austere is now infused with energy. This piece retains the odd lengths of Hildegard's phrases, as opposed to O beatissime Ruperte (TSF disc 1, track 13), which fits the melody into a constant time signature. Rhythm in this case is crucial, and an underlying quaver pulse ties the music together.
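Organum, as defined in footnote 4, is simple parallel motion. A toy illustration of it, my own example rather than a transcription of the recording, is to transpose the descending Dorian fragment mentioned above by a fixed interval in MIDI note numbers:

```python
# Parallel organum in caricature: double a melody at a fixed interval.
# The fragment G-F-E-D is written as MIDI note numbers; +7 semitones is a
# perfect 5th above, -5 semitones would be a 4th below (see footnote 4).
FRAGMENT = [67, 65, 64, 62]          # G4, F4, E4, D4

def organum(melody, semitones=7):
    return [note + semitones for note in melody]

print(organum(FRAGMENT))             # [74, 72, 71, 69] -> D5, C5, B4, A4
```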

Time and harmony

The Ordo Virtutum – Instrumental Prologue is in metric time. The following score excerpt shows the first two phrases of Hildegard's original melodic line followed by its instrumental adaptation. This use of metre responds to Hildegard's music: it is definitely pulsed in a manner similar to the original, yet it is not historically judged as being capable of being played 'in time'. The bars in this example are notated as 14/8 + 14/8 + 10/8 + 9/8 + 10/8 in the scores of the realisation (Cunio, 2008).

Patriarchs, Prophets and Virtues explores differing uses of harmony. The music itself is repeated and developed in a modified strophic repeat. The first time there is a little harmony, principally a I-V drone from the male choir; the second time this radically changes. In the repeat (TSF disc 2, track 8) the gongs play through the text instead of only at the beginning and end. They provide a functional harmonic progression of I – bVI – IV – II – I (D – Bb – G – E – D). The tarogato enters in a melismatic and dissonant manner, pushing towards suspended intervals such as the 2nd, commenting on phrases in the scale, and providing melodic emphasis completely different to the music of Hildegard. The voices are also different in the repeat: they are faster and more urgent, and the male choir sings in organum harmony, with the tenors a 5th higher than the original. The reed organ enters with a drone and the texture grows to a tutti, culminating with the final refrain of the male choir in organum, with obbligato lines for both the female voice and tarogato. The piece ends with the tarogato and gongs, with the tarogato not resolving its final phrase, finishing on the 2nd.

In summary, all three pieces are completely new pieces of music derived from the same source text and score. They are not arrangements or variations, as the character of each work is completely different, derived from compositional intention, parameters and forces. They are newly composed, utilising the score, improvisation and a limited amount of post production editing. I felt that the research proved a particular point and would raise debate, but to my surprise no-one was even mildly upset; reviewers either accepted the suppositions of the research or simply did not feel it necessary to mention them. The ABC was more concerned about the marketing and packaging of the disc than any threat to the music of Hildegard, and academics responded to both the production values and sense of adventure in the project without addressing the implications of the new composition at all. However, something became apparent over the following year: I realised that, far from finishing this process, I had only just begun. A number of ideas were opening up for me as I began to teach composition increasingly with technology. In 2008 I asked students to write new compositions from motivic fragments that I prepared for them, and the majority of the submitted works sounded like new compositions, despite coming from the same source material.6

[Score excerpt: the first two phrases of Hildegard's melody and their instrumental adaptation (Cunio, 2006).]

6 There were three discrete assignments. The first involved composing a rhythm track from a combination of rhythm recordings. They were in different music styles, primarily Egyptian and South African, and were deliberately given to the students in different beats per minute so that they could not be merely placed one under the other as tracks in a DAW. The second was to recompose from the actual audio files from a 2005 television commission of mine in which new instruments and tracks were encouraged, while the third was to compose from the supplied parts of an instrumental Irish tune. The experiments were done by students at the Sydney Institute.

The logical next step was (as it still is currently) to treat my own work in a similar manner. This part of the project is in its genesis and I hope to expand it to include the work of a number of new and old composers, and to ask students, professional composers and interested third parties to undertake this process of recomposition with me.

4. EXPERIMENT 2: NAMU AMIDA BUTSU

The war has begun and, like the city of Darwin in 1942, the war is now close to home. It is all very well to play games with music that is held in the public domain, or to write a piece of music on a royalty free loop, where no-one really suffers directly as a result of the experiment. But what happens when I disown my own music, more so a piece that I am personally proud of, that has strong aesthetic and cultural values independent of its mere score? Am I being simply naive? And how does this disassociation take place: via the score or the recording? For me the answer is obvious: anyone can listen to a recording but few people can read a score. Further, the practical steps in manipulating a recording are very simple to learn. The selection of the piece was also important. I decided in 2007 that the first selection must be largely in one key and either in free time or strict metric time to allow for quick recomposition. There is also little point for me to undertake this process with popular music (though other researchers might want to); Garageband and other loop based programs already offer a large selection of popular and cinematic music styles to recompose with. Finally, there are a few implicit cultural presumptions I hope to test during the project.

Positive
• That new composition can be undertaken from motivic fragments on a program such as Garageband.
• That it is possible to write inherently new music from a static collection of source material.
• That any person with a computer has the ability to cut up and change the compositional structure of an existing work, and that these techniques, which are more common in popular and screen music, can be applied to art music.
• That art musicians have generally distanced themselves from the loop revolution of Apple's Garageband and other software which allows composition from motivic fragments, and that this process of composition is potentially as valid as notation.
• That this process can be applied both to audio and to scores exported as MIDI data.

Negative
• A perceived or actual reduction in the need for high level skills or training to realise a new composition.
• The loss of notation as a primary medium for western composition, though this can also be argued as a positive outcome.
• A singularisation not of composition but of psychoacoustic and other schematic data, as music is increasingly composed from a limited subset of recordings that all have the same sonic signature. A guitar track made entirely out of Apple audio loops might have numerous compositional possibilities embedded in its scale, pitch, harmony etc, but it only has one recorded sound-world, instead of the almost limitless number of instruments, recording spaces, microphones and preamplifiers that would otherwise be available.
• A loss of respect for the traditions of music as everything becomes equal on the page of the DAW.

My critical edition

Namu Amida Butsu was commissioned by Bronwyn Kirkpatrick. It was premiered in her Masters recital in shakuhachi7 at the Carrington Ballroom, Katoomba, on September 12, 2004. Kirkpatrick had been a student of Grand Master Riley Lee in Australia for the previous six years and was about to embark on a course of study in Japan. Her Masters recital was a milestone in her career, the only available qualification in Australia, as the shakuhachi does not currently run as a performance major in the university system. Kirkpatrick requested a piece of 'new music' that would relate to the body of traditional work for the shakuhachi, in particular honkyoku. It was decided to write a piece for solo shakuhachi that would interpret music written for the instrument, and the tradition from which it has come, Zen Buddhism. The piece is fifteen minutes long.

Traditional honkyoku is a dialogue of sound and silence. The piece begins with silence and then the first breath, which is consciously experienced as it enters the whole body by means of the skin surface, coming into the "hara" and then slowly up into the whole of the lungs. There is a slight holding of the breath and then the sound. The sound is entered into, developed, colored and exited, and then with just as much attention the silence is entered into. A seamless connection, unbroken. Silence of breathing leaving the music unbroken sound. The silence then becomes part of the sound as the sound becomes silence. Words only, if not experienced in minute detail in the body; this is the rhythm of the traditional Honkyoku (Brandwein, 1999).

7 A cylindrical bamboo flute extant from medieval Japan.

Musical excerpts of Namu Amida Butsu

[Score excerpt: Namu Amida Butsu, opening (Cunio, 2004).]

The piece develops from this motivic fragment and the intervals become increasingly jagged as time progresses, alongside subtle additive rhythmic variations.

This piece is designed to be played relatively 'in time', a response to the strict physical disciplines of Zen which give the illusion of formlessness through great attention to form itself. The lead up to and the opening of figure B illustrate this. The piece starts to move, and a melodic flow begins to take shape within the tonality of the opening. At figure B the alternation between 4/4 and 6/4 gives the piece a rhythmic flow that is subtle, yet still regulated.

[Score excerpt: Namu Amida Butsu (Cunio, 2004).]

The piece then moves to the extremes of the instrument with many jumps, using either octave displacement or the intervals of a major and minor 9th. This section represents the yearning desire for enlightenment, and the stage of actively seeking that often comes before surrender. It is introduced towards the end of the first page at the figure D animato. The grace note leaps of a 7th (bar 46), followed by a 9th, are evocative of much of the piece. This outward focused section peaks on the high G# (the highest note of Kirkpatrick’s instrument) at bar 53, before retreating at bar 56. The repeated section at bar 56 gives the player the opportunity to explore the subtleties of repetition.

[Score excerpt: Namu Amida Butsu (Cunio, 2004).]

Figures E and F represent the transition towards enlightenment and an increasingly introverted state. The peak of this section is bar 70, the end of figure E, where the words 'Namu Amida Butsu' are written. They can be whispered, spoken or thought in the accompanying General Pause. The music is sparse. Long notes are punctuated by recurring grace notes, in the manner of much honkyoku. Fermatas are used at the end of every phrase to allow length in the playing.

[Score excerpt: Namu Amida Butsu (Cunio, 2004).]

The piece ends with one last flourish at G, a representation of the Zen Buddhist quote and parable 'Before enlightenment chop wood carry water, after enlightenment chop wood carry water'. Though everything is outwardly the same after this musical representation of enlightenment, bar 97 is marked 'with delicacy'. A final point of stillness is achieved at H. The markings are all soft, and the note to play 'breathy' in bar 102 sets the tone for the final phrase, which is a merging with the cosmos. A ppp morendo at bar 103 makes the final bars as soft as possible.

[Score excerpt: Namu Amida Butsu (Cunio, 2004).]

5. LET THE WAR BEGIN

Namu Amida Butsu is currently being prepared for the upcoming collaboration. The following questionnaire will accompany the composition task.

THE WAR ON THE CRITICAL EDITION VOLUME 1: NAMU AMIDA BUTSU

I am making war on the critical or singular edition of music. Will you participate? Namu Amida Butsu was written for shakuhachi Master Bronwyn Kirkpatrick in 2004. I am hoping to find out whether it can be used as a basis for new composition in the manner of the Apple loop library or similar software looping programs.

The original piece has been cut up into randomly numbered motivic fragments that can easily be imported into your loop browser. From there simply drag, drop and compose; everything is up to you. You can form any structure, use or not use any part of the source material, combine multiple tracks, process the audio in any manner you wish, change the tempo, duration, amplitude, formant or anything else you can think of. Or if you are really stuck you can try to recreate what I did! When finished please burn the track to CD and post it to:
Dr Kim Cunio, Lecturer, Music Sound and the Moving Image
Queensland Conservatorium
PO Box 3428, South Bank Qld 4101, Australia
Or email via http://www.yousendit.com/ to me: [email protected]

I hope to report the results of this research at Create World 2010, and release a disc of the results in 2010.

While this process is in its infancy, I feel strongly that there are two legitimate means by which to challenge the critical edition: the mode of multiple source-based composition (Hildegard of Bingen) and the recomposition of an existing composition (Namu Amida Butsu), utilising the technologies of popular music. Both require substantive further investigation.

6. REFERENCES

1. Bohlman, P. (2005). General Editor, Recent Researches in the Oral Traditions of Music, A-R Editions, 2005. ISSN 1066-8209. Introduction: http://www.areditions.com/rr/rrotm.html.
2. Brandwein, M. (1999). Playing Honkyoku; Praying Honkyoku, USA, 1999. www.shakuhachi.com, accessed 2007.
3. Cunio, K. (2008). Intercultural composition and the realisation of ancient and medieval music. University of Western Sydney, 2008.
4. Cunio, K. (2004). Namu Amida Butsu score, Lotus Foot Music, 2004.
5. Cunio, K. (2006). The Sacred Fire scores, Lotus Foot Music, 2007.
6. Cunio, K. (2009). The Thread of Life: Intercultural music making and the process of defining cultural connection. In proceedings of MSA 2009, 25-28 October 2009, Newcastle, Australia.
7. Lee, H. & Cunio, K. (2007). Hildegard Von Bingen, The Sacred Fire, ABC Classics, 4765705, 2007.
8. Lee, H. (2007). The Sacred Fire liner notes, ABC Classics, 4765705, 2007.
9. Reid, B. (2007). Composition, Personal communication, 23 November – 15 December 2007.

Waving creatively: An examination of Google Wave to facilitate collaboration in creative processes

Andrew Dekker, Stephen Viller, Aaron Tan
{dekker | viller | aaron}@itee.uq.edu.au
School of Information Technology and Electrical Engineering
University of Queensland
St Lucia, Queensland

Abstract

Current technologies to support collaboration rely on either synchronous or asynchronous communication at any given time and a combination of multiple tools is often used, which in turn can get in the way of the creative activity itself. This paper explores the barriers and challenges in using digital collaborative tools in the creative process. We examine the role of Google Wave in creative collaboration and its potential to become an environment to support conceptual phases of design, and document the creative process. Finally we explore the potential of Wave, and how it can be extended to integrate with current creative workflows and design tools through the development of CocoaWave: a Mac OS X Wave client.

Themes: Ubiquitous computing, Online collaboration, Social networks.

Introduction

Computers have played many different roles in supporting creativity, particularly as the content being created has moved into the digital realm. Computers are increasingly used in the creation of new content with suites of tools dedicated to the creation and manipulation of online multimedia artefacts such as digital video, audio, and images, as well as print-based texts and images. These tools are typically targeted at individuals rather than groups or teams, so that any collaboration around the artefacts produced must either take place in a meeting where the team members gather in the same space, or if the team are distributed and co-presence is not possible, then some other tool or technology must be employed to support the collaboration. The tools used to support collaboration are often very mundane and everyday in their nature, for example discussion can be supported via a telephone call or teleconference facilities, and richer conversations can take place via videoconference, particularly with the increasingly standard inclusion of video cameras in mainstream computers such as the built-in iSight cameras in Apple iMacs and MacBooks. Perhaps the most ubiquitous tool for electronic communication is email, which since the advent of multimedia mail standards has become widely used (many would say overused) for the distribution of digital content such as embedded images and file attachments. This brings us to the problem being explored here, which is how digital tools are used to support creativity through collaboration, and how the design of such tools can be improved based on an understanding of the nature of collaborative activities in creative processes. In the remainder of this paper we examine first how the creative process has been characterised, and in particular the role that collaboration plays in creativity. We then discuss how computer-based technologies have been designed to support collaboration, and present a number of problems with current approaches to supporting collaboration in creativity. We examine the role of Google Wave in creative collaboration and its potential to become an environment to support conceptual phases of design, and document the creative process. Finally we explore the potential of Wave, and how it can be extended to integrate with current creative workflows and design tools through the proof of concept development of CocoaWave, a Mac OS X-based Wave client.

Collaboration in creativity

In this paper, we are looking at collaboration within creative processes (by which we mean a series of actions that generate, iterate and evolve ideas over time). There already exist several definitions of the creative process resulting from a large amount of research conducted into the process itself and its influencing factors (Warr et al., 2005; Fischer, 2005; Sternberg et al., 1999; Maslow, 1963), but we are not concerned here with debates around the definition or with contrasting specific understandings of the process between different creative disciplines. Instead, we are interested in the social process that emerges around creativity, and how we can support this collaboration with digital tools. Farooq et al. (2007) break down these collaborative processes within creativity into three core activities: creative conceptualisation, realisation (implementation), and evaluation. We feel it is clear that throughout the lifecycle of the creative process, the requirements and needs of a support system will change, and there may also arise specific needs for some specific communities. Nevertheless, we wish to explore the design of a general platform for creative collaboration which can subsequently be tailored for more specific purposes.

Based on numerous studies within this area (Perry-Smith et al., 2003; Warr et al., 2005; Mamykina et al., 2002), we can highlight key aspects of creative collaboration that need to be considered when determining the appropriateness of digital support tools (generically rather than for one specific field or creative activity). Tools must be able to support diverse and evolving collaboration. They have to be useful not only at the conceptualisation stage, but also the realisation and evaluation stages. They have to support multiple methods of interaction, from real-time to asynchronous, and be able to move between them. They need rich media support, as once media is split between multiple collaboration tools, content is no longer managed in a structured way. From this, we argue that it is fundamental that the tool provides a means for documenting the process over time, for the evaluation stage of the creative process, and to support iterative workflows. The collaborative tool must also support awareness between participants, and put emphasis on the collaboration, rather than the creative outputs. As soon as ideas or artefacts are required to be packaged for distribution before they can be collaborated on, the efficiency and flow of the process is dramatically reduced (Warr et al., 2005).

Computer-mediated communication

A number of terms have been used to denote the research into, and development of, technologies to support cooperative activities. Computer Supported Cooperative Work (CSCW), a research community which grew out of Human-Computer Interaction (HCI), has evolved from an early focus on the design of collaborative tools and studies of cooperation in various enterprises, into a broad multidisciplinary exploration of how computer-based technologies can support social interaction of all types. Around the same stage in the mid 1980s the term Groupware became popular to denote software designed to support groups. Early CSCW research characterised the nature of collaboration, and the tools to support it, in terms of how it is distributed over space and time. The now classic "four-square map" of groupware (Johansen, 1988) characterises groupware in terms of where it sits in a 2 x 2 table mapping the activities taking place in same and different place and time. More recently, this rather simplistic view was revised to instead consider how activity can be placed in a 2-dimensional space mapping space and time, in order to account for how some technologies can be used across the space/time division in the original classification. For example, email to a colleague on a local area network is delivered with so little delay that virtually synchronous communication is possible with what is essentially an asynchronous communication technology, and many people not only use email in this synchronous manner, but are surprised when instant communication does not happen. Conversely, asynchronous message passing can take place when using synchronous instant messaging (IM) tools such as ICQ and iChat. As CSCW became more established and more studies of such tools in action were conducted, design principles and guidelines started to emerge which provided designers with guidance for common problems and pitfalls such as understanding the cost/benefit equation for deploying CSCW systems, how a 'critical mass' of users are required for collaborative systems to work, and the challenges raised by technology that by its very nature must be used in multiple locations by many people at the same and also at different times before it can be evaluated effectively (Grudin, 1994). In more recent times, with the explosion of collaborative applications on web-based platforms, now broadly characterised as Web 2.0, the term Social Software has become adopted to describe essentially the same area. In contrast with the CSCW community, which had its roots in academia, Social Software designers tend to be in internet-focused companies and start-ups, releasing their products online, often in long-running 'public beta' mode while the critical mass user-base is built and the stability of the software is improved. Similar to Grudin's challenges for designers of groupware, there have been prominent authors in the field writing for designers of social software about what must be designed for and taken account of (Shirky, 2003).

In recent times, there has been an explosion of social software applications that are designed to support online collaboration in various ways. The basic building blocks of these systems are Blogs, which allow quick and easy publishing of content online; Wikis, which allow collaborative editing of online content using simple mark-up; and Forums, which implement well-established note + comment discussions (which have their origins in the bulletin boards which existed in the earliest days of the internet). In the area of creativity, several prominent tools have evolved around the online publication and discussion of multimedia content. For example, flickr1 supports the online publication of photos and videos, allowing discussion, annotation, and critique through the web site. In addition to the large community of users sharing their holiday snaps, flickr also supports an active community of professional and semi-professional photographers who use the site to publicise and receive critique on their work. YouTube2, while it has a strong association with the posting of amusing clips, music videos,
and
copyright
material
 from
TV
shows,
and
its
comments
tend
to
not
be
as
thoughtful
or
constructive
as
they
are
on
 flickr,
is
also
widely
used
to
publish
creative
content
from
video
makers.




Problems
with
current
tools

 In
studying
research
into
collaborative
creativity,
and
reviewing
the
numerous
options
for
 collaborative
tools
to
support
this
social
process,
we
have
found
a
number
of
areas
where
 current
tools
fall
short
in
terms
of
their
support
for
the
creative
aspects
of
this
collaboration.

 “Group
members
need
an
integrated
view
that
networks
and
combines
their
contributions
 in
a
meaningful
way
and
provides
a
social
and
temporal
index
of
who
is
doing
what
and
 when”
‐
Farooq
et
al.
(2007)



The
need
to
use
multiple
tools.
The
expense
of
developing
“collaboration‐aware”
desktop
 applications
has
always
been
a
hurdle
in
CSCW
and
Groupware,
where
alternative
approaches
 have
previously
been
explored
whereby
single‐user
applications
can
be
'fooled'
into
behaving
as
 if
they
are
collaborative
applications.
The
proliferation
of
social
software
tools
available
over
the
 internet
is
leading
to
a
situation
now
where
there
is
an
increasing
number
of
tools
available
 which
target
specific
niche
markets.
This
approach
offers
the
promise
of
an
integrated
online
 environment
where
a
single
application
can
support
not
only
the
creation
of
digital
content,
but
 also
the
creative
collaboration
that
leads
to
it.
This
is
still
far
from
the
norm
at
the
moment,
 however,
where
it
is
more
likely
that
specific
tools
will
be
used
for
content
creation,
and
others
 will
be
used
for
the
creative
process
around
this.
For
example,
using
Adobe
Creative
Suite
tools
to
 generate
illustrations,
and
email/IM/flickr
for
initial
idea
generation,
discussion,
and
critique.


 “In
addition
to
summarizing
interaction
history,
group
members
need
a
workspace
for
 reflection
where
they
can
discuss
pros
and
cons
of
novel
ideas,
provide
an
exegesis,
and
 decide
how
a
particular
idea
would
be
implemented”
‐
Farooq
et
al.
(2007)



Content
vs.
Process:
the
overheads
of
using
a
specific
technology.
One
of
the
issues
to
 contend
with
in
any
collaboration
is
the
effort
required
to
maintain
the
collaboration
itself.
In
 small
group
work
this
is
often
referred
to
as
the
'group
maintenance'
problem,
and
refers
to
the
 fact
that
the
benefits
of
working
in
a
team
have
to
be
offset
against
the
overheads
of
keeping
the
 team
functioning
effectively.
When
collaboration
takes
place
in
a
creative
context,
the
line
 between
content
and
process
can
become
blurred
as
a
large
amount
of
the
process
is
focused
on
 generating,
recording,
critiquing,
and
reflecting
on
the
content.
However,
because
of
the
need
 to
use
separate
tools
to
support
the
different
media
being
used,
the
collaborators
are
forced
to
 spend
more
time
and
effort
focusing
on
the
tools
at
hand,
rather
than
on
the
content
or
the
 communication
about
it.

1 http://flickr.com/
2 http://youtube.com/


“An
attractive
solution
is
to
give
users
control
over
a
flexible
space
for
composition,
able
to
 impose
or
remove
constraints
at
will,
making
use
of
them
as
an
aid
to
understanding
 practicalities
rather
than
having
to
work
around
them
when
developing
ideas.
This
also
 allows
composers
to
design
a
space
suited
to
their
own
working
method
and
current
 project”
‐
Coughlan
et
al.
(2006)



Fitting
into
a
linear
(time-based)
process
model.
Creative
processes
are
often
non‐linear,
 especially
when
collaboration
is
concerned.
Early
generative
stages
of
a
creative
process
where
 multiple
options
are
being
entertained
and
evaluated
can
lead
to
parallel
operation
on
multiple
 fronts
until
a
final
solution
is
arrived
at/agreed
upon.
Many
tools
are
limited
by
either
a
 hierarchical
model
of
how
points
in
the
process
are
related
to
each
other,
or
alternatively
a
linear
 timeline‐based
representation.
When
reviewing
the
process
and
moving
backwards
from
the
 current
solution
to
review
all
decisions
made
along
the
way,
these
solutions
work
reasonably
 well,
but
as
soon
as
one
needs
to
reevaluate
past
decisions
due
to
arriving
at
a
creative
'dead-end'
in
the
process,
if
the
tool
hasn't
successfully
captured
the
previously
dismissed
alternatives,
 then
reviewing
and
changing
earlier
decisions
can
be
very
difficult.

 “Our
analysis
suggests
that
a
recap
of
interaction
history,
specifically
for
novel
ideas,
is
 important
for
group
members
to
have
access
to”
‐
Farooq
et
al.
(2007)


 “[in
music
composition,
a
non‐linear
process
is
essential,]
with
composers
modifying
 elements
of
a
composition
or
adding
completely
new
ideas
at
late
stages”.
‐
Coughlan
et
al.
 (2006)




Lack
of
replay
and
revisiting
decisions.

Further
to
the
above,
it
is
often
useful
to
return
to
the
 beginning
of
a
creative
process
and
be
able
to
replay
it
from
start
to
finish.
If
the
supporting
tools
 do
not
allow
for
this
playback
functionality, a problem arises
which
is
compounded
by
 how
the
creative
process
is
represented
and
stored.

 “When
creativity
is
taken
as
a
long‐term,
complex
activity,
support
for
awareness
is
also
 required
for
group
members
to
monitor
the
development
of
ideas,
track
how
these
ideas
 got
narrowed
down
to
a
few
alternatives,
and
to
stay
cognizant
of
how
the
alternatives
are
 being
implemented
and
integrated
by
other
group
members”
‐
Farooq
et
al.
(2007)



Lack
of
awareness
of
others'
activities.

Awareness
is
one
of
the
key
research
areas
within
 CSCW
(Rittenbruch
&
McEwan,
2009),
and
is
concerned
with
a
variety
of
issues
around
how
 collaborative
technologies
support
their
users'
awareness
of
the
activity
of
others,
and
of
the
 development
of
the
content.
Awareness
may
be
something
as
simple
as
providing
a
'telepointer'
 for
each
user
of
the
system
so
that
others
are
able
to
see
where
they
are
pointing
or
gesturing
to
 in
the
shared
workspace.
Alternatively,
it
may
be
a
feature
directly
related
to
the
content,
where
 users
are
able
to
see
the
content
being
added
to
the
shared
workspace
as
it
is
being
created.

 “Creative
contributions,
even
smaller
ones,
are
necessary
to
help
the
creative
actor
simply
 maintain
his
or
her
status
as
well
as
enhance
it.”
‐
Perry‐Smith
et
al.,
2003

 “Production
blocking
is
common
when
ideas
are
expressed
verbally
within
a
group.
 Verbally
expressing
ideas
is
a
form
of
synchronous
interaction,
i.e.
only
one
person
in
a
 group
can
express
her
ideas
at
one
time.
The
problem
with
synchronous
forms
of
 interaction
with
respect
to
group
creativity
is
that
if
one
member
of
the
group
is
expressing
 her
ideas,
other
members
of
the
group
are
simultaneously
prohibited
from
expressing
their
 ideas.
They
may
subsequently
forget
their
ideas
or
suppress
them
because
they
may
feel
 their
ideas
are
less
relevant
as
time
passes”.
‐
Warr
et
al.,
2005



Granularity
of
interaction.
Finally,
and
related
to
the
above
point
about
awareness,
the
level
of
 granularity
of
interaction
that
is
communicated
to
collaborators
using
the
tool
also
has
an
impact


on
how
well
they
are
engaged
in
the
process
with
their
team
mates.
When
thought
of
in
terms
of
 shared
text
editors,
it
is
the
difference
between
sending
updates
on
a
character‐by‐character
 basis,
allowing
others
to
see
keyboard
activity
in
near
real
time
(including
mistakes
and
 deletions),
to
only
updating
the
content
when
a
user
hits
return.
In‐between,
it
is
also
possible
to
 think
of
word‐by‐word,
or
line‐by‐line
updates
as
well.
The
decision
for
tool
developers
here
is
 often
a
trade‐off
between
better
activity
awareness
and
privacy,
as
well
as
the
cost
of
sending
 more
data
over
the
network
to
update
all
collaborators
with
different
levels
of
detail
as
the
 content
is
updated.
It
is
also
worth
considering
the
challenge
of
implementing
a
multi‐user
undo
 function
and
how
this
might
change
as
the
granularity
of
updates
shifts
from
an
atomic
level
up
 to
larger,
possibly
more
meaningful,
chunks
of
content.
Add
to
this
the
further
complexity
of
 updates
happening
simultaneously
in
order
to
reduce
production
blocking
issues
(allowing
 simultaneous
inputs
with
idea
generation
activities,
for
example),
and
the
enormous
challenge
of
 developing
tools
to
support
collaborative
design
becomes
all
too
apparent.



Google
Wave

 Google
Wave
is
a
web
browser
based
collaboration
and
communications
tool
developed
by
 Google
which
investigates
ways
that
distributed
collaboration
can
be
enhanced.
The
concept
 presented
by
Google
for
Wave
is
“what
would
email
look
like
if
we
set
out
to
invent
it
today?”,
 however
this
idea
can
be
easily
misunderstood
and
is unrepresentative
of
the
technology.
Rather
 than
treating
Google
Wave
as
similar
to
email
from
an
interaction
perspective,
the comparison is better drawn with the role of email:
a
ubiquitous
platform
that
facilitates
communication
and
 collaboration,
and
is
generic
enough
that
it
can
be
adapted
and
extended
for
unexpected
uses.


 The
use
of
a
generic
collaborative
tool
such
as
Wave
has
the
benefit
of
being
a
single
contact
 point
for
all
digital
collaboration.
Tools
such
as
Sharepoint
and
Alfresco
have
been
used
to
 facilitate
collaboration
within
enterprises.
The
flaw
with
these
tools
for
creative
collaboration,
 however,
is
that
they
focus
on
a
collection
of
documents
and
conversations
in
a
hierarchical
 organisational
structure,
and
do
not
aid
in
facilitating
continuous
collaboration;
rather
they
act
as
 a
repository
for
documents
produced
throughout
the
creative
process.
Additionally
these
tools
 do
not
directly
provide
creative
support
internally,
and
instead
act
as
a
hub
from
the
output
of
 other
applications.

 To
further
explore
Google
Wave
as
a
collaborative
tool
to
support
the
creative
process,
we
 outline
the
current
implementation
of
Wave
as
well
as
the
potential
of
Wave
as
a
collaborative
 tool
for
supporting
creativity.
The
Wave
interface
(see
Figure
1)
is
laid out similarly to
an
 email
application,
primarily
consisting
of
a
list
of
users,
a
list
of
conversations,
a
list
of
folders
 (which
contain
conversations),
and
a
wave
viewing
and
authoring
interface.
We
can
break
these
 elements
down
into
two
main
categories,
organisation
(users
list,
conversations
list
and
folders
 list),
and
the
wave
authoring
interface.
While
the
organisation
aspects
of
the
Wave
interface
are
 similar
to
existing
collaborative
tools
(albeit
with
an
intuitive
drag
and
drop
user
interface),
the
 authoring
interface
is
unique
and
should
be
explored
in
detail.

 A
wave
is
a
conversation
where
one
or
more
users
can
create,
modify
and
remove
content.
 Rather
than
the
content
within
a
wave
being
temporally ordered,
users
are
allowed
to
insert
content
 at
any
point
in
a
wave,
as
well
as
nest
content
as
a
response
to
existing
content.
Each
piece
of
 content
is
termed
a
wavelet,
and
is
the
atomic
unit
of
a
wave.
Google
Wave
makes
it
easy
to
 automate
and
connect
up
content
and
interweaving
ideas
with
each
other
without
enforcing
a
 specific
structure.
The
way
of
interacting
with
objects
here
is
intuitive: the conversation is free-flowing
and
it
does
not
interfere
with
the
creative
workflow.



Figure
1:
The
Google
Wave
interface





 A
wavelet
can
contain
not
only
textual
information,
but
images
and
other
forms
of
media
(both
 interactive
and
non‐interactive).
The
media
type
that
a
wave
supports
is
not
limited
to
commonly
 used
media
(such
as
images
and
video),
but
can
contain
any
form
of
media
by
using
plug‐ins
 called
Gadgets.
Examples
of
Wave
Gadgets
include
the
ability
to
include
and
annotate
a
Google
 Map,
construct
a
Concept
Map
or
flowchart,
multiplayer
Sudoku,
a
virtual
whiteboard
for
 drawing
primitives
and
free‐form
objects,
and
even
a
fully
automated
organisation
and
initiation
 of
a
teleconference
meeting.
Media
within
these
Gadgets
that
a
user
adds
into
a
wavelet
are
not
 only
shown
to
all
participants,
but
allow
full
interactivity
and
can
be
edited
by
any
user
within
 the
wave,
in
real‐time.
This
allows
Wave
to
not
be
limited
in
what
kinds
of
collaboration
can
be
 supported,
compared
with
other
collaboration
tools
which
often
fall back
on
email
for
the
 distribution
of
unique
media
types.
Wave
also
supports
Robots,
which
are
automated
users
that
 can
be
programmed
to
either
generate
or
react
to
content
within
a
Wave.

 The
wave
itself
is
a
constantly
evolving
collection
of
wavelets,
organised
by
the
users,
and
 synchronised
to
all
users
in
near
real‐time.
The
underlying
Wave
infrastructure
provides
all
 users
within
a
wave
character
by
character,
line
by
line
updates
of
any
changes
or
additions
to
a
 wave.
Communication
is
synchronous,
allowing
multiple
users
to
edit
a
piece
of
content
 simultaneously,
while
maintaining
information
integrity.

 Waves
are
persistent
and
not
lost
during
breaks
in
conversation,
allowing
for
asynchronous
 collaboration
where
it
is
preferred
or
required.
A
big
strength
of
Wave
over
other
asynchronous
 collaborative
tools
is
that
wave
constantly
keeps
track
of
updates,
and
has
the
ability
to
go
back
 in
time,
playing
through
the
evolution
of
the
wave
one
interaction
at
a
time.
As
Farooq
et
al.
 (2007)
point
out,
“When
creativity
is
taken
as
a
long‐term,
complex
activity,
support
for
 awareness
is
also
required
for
group
members
to
monitor
the
development
of
ideas,
track
how
 these
ideas
got
narrowed
down
to
a
few
alternatives,
and
to
stay
cognizant
of
how
the
 alternatives
are
being
implemented
and
integrated
by
other
group
members”.
 The
current
Google
Wave
interface
is
similar
to
other
Google
applications,
such
as
Gmail
and
 Google
Calendar.
As
Wave
is
a
web
application,
any
user
with
internet
access
and
a
web
browser
 is
able
to
access
and
collaborate
on
a
Wave,
without
requiring
a
deep
technical
background
or
 powerful
computer
(one
of
the
best
ways
to
interact
with
Wave
is
via
the
web
browser
on
the
 iPhone).
Additionally,
as
no
information
is
stored
locally,
a
user
can
move
between
locations
or
 devices
and
still
have
the
full
history
and
ability
to
collaborate
inside
the
wave.



The
potential
of
Google
Wave
‐
Addressing
the
problem
of
 current
technologies

 The
potential
of
Wave
is
less
in
its
individual
parts,
but
more
in
its
consolidation
of
different
 media
types,
and
its
adaptability.
The
scientific
community
is
already
embracing
Wave,
and
 utilizing
it
in
ways
for which
it
was
not
originally
designed
(BBC,
2009).
Wave
does
not
force
a
single
 method
of
collaboration,
instead
it
provides
the
tools
that
allow
its
users
to
develop
a
process
 that
works
for
them.
A
wave
is
simply
a
wrapper
for
media
and
information,
providing
users
 ways
to
manipulate
and
iterate
the
information
collaboratively.
Due
to
its
real-time nature,
 Wave
also
promotes
the
idea
of
a
“stream
of
consciousness”,
rather
than
allowing
users
to
reflect
 on
their
work
before
publishing.
This
drives
the
collaborative
aspect
of
the
work,
and
reinforces
 collaboration
on
the
process
itself,
rather
than
the
outputs.
Current
collaborative
tools
focus
on
a
 single
form
of
media
(for
example
Google
Docs
focuses
on
textual
content),
and
therefore
 backchannel
collaboration
gets
pushed
into
the
creation
itself,
and
is
often
hard
to
separate.
Due
 to
the
flexible
and
generic
nature
of
Wave,
users
can
develop
a
way
to
easily
differentiate
 between
backchannel
conversation,
collaboration,
and
the
creation
itself
(Figure
2).

 Figure
2.
Google
Wave
Gadgets
can
support
collaboration
via
alternate
means
of
media,
 rather
than
simply
text




 While
Wave
already
offers
many
advantages
over
traditional
collaboration
platforms,
the
real
 benefit
is
yet
to
be
fully
realised.
Wave
is
not
simply
a
unique
user
interface
that
promotes
 collaboration,
instead
it
is
built
upon
the
latest
technology
to
support
both
synchronous
and
 asynchronous
collaboration.
The
standard
user
interface
for
Wave
is
an
example
of
how
the
 technology
can
be
utilised,
and
is
being
updated
daily
to
improve
the
user
experience
as
well
as
 add
new
features
to
improve
collaboration.
Due
to
the
open
source
nature
of
Wave,
the
 technology
allows
a
tremendous
amount
of
interoperability
with
existing
technologies,
and
has
 the
potential
to
pull
in
information
from
other
mediums
such
as
email
and
instant
messaging
 collaboratively.

 The
core
strength
of
Wave
is
the
technological
concepts
it
is
founded
upon:
Operational
 Transformation
(OT)
and
Federation.
OT
allows
each
collaborator
within
a
wave
to
safely
 collaborate
in
real‐time.
While
other
technologies
(such
as
Google
Docs
and
SubEthaEdit)
have
 previously
implemented
this,
there
are
limitations
on
both
the
update
speed,
and
also


consistency/reliability
of
the
information
being
presented.
This
is
important
for
creative
 collaboration,
where
idea
generation,
as
well
as
quick
and
agile
feedback
can
be
made
difficult
by
 struggling
with
the
technology
and
consistency.
OT
provides
the
seamless
character
by
character
 synchronisation
seen
in
the
current
interface,
as
well
as
the
ability
for
history
and
revision
 information
to
be
recorded
effectively.
As
Wave
is
built
upon
OT,
users
can
be
assured
that
all
 functionality
of
Wave
(even
those
created
by
third
parties)
will
record
history
and
maintain
 consistency
between
collaborators.

 Within
a
creative
context,
the
main
benefit
of
OT
is
that
people
can
be
brought
in
and
taken
out
 throughout
the
creative
process,
rather
than
merely
providing
thoughts
on
creative
output.
OT
 allows
people
brought
into
the
process
later
an
opportunity
to
view
the
entire
creative
process
 from
its
inception,
giving
them
real
insight
into
the
creation
rather
than
just
its
final
delivered
 state.
While
there
exist
other
collaborative
tools
that
provide
this
ability,
they
are
limited
to
a
 single
type
of
media,
or
require
a
conscious
effort
by
the
user
to
ensure
that
the
process
is
 documented.
The
ability for external users to peruse the entire creative process can also support the reinvigoration
of
ideas
that
were
considered
during
the
generative
part
of
the
process,
but
 discarded
later
on.

 The
other
foundation
technology
which
makes
Wave
such
a
powerful
tool
for
supporting
 creativity
is
that
Wave
servers
are
federated:
a
team
of
artists
or
designers
can
run
their
own
 Wave
server
locally,
and
have
the
option
of
connecting
to
other
Wave
servers.
Within
creative
 industries,
this
has
large
implications,
as
it
allows
artists
to
work
either
in
privacy
or
in
public,
 without
consequence.
This
also
allows
information
to
be
stored
on
a
privately
owned
machine,
 rather
than
giving
information
to
an
external
source
(such
as
Google).
Federation
also
allows
 multiple
Wave
servers
to
be
connected,
but
still
distributed,
so
that
working
with
external
 collaborators
is
possible
without
giving
them
direct
access
to
the
team's
Wave
server.
Due
to
this
 federated
nature,
it
also
dramatically
reduces
the
potential
downtime
which
other
collaboration
 services
suffer
from.

 From
this,
we
can
see
that
with
regards
to
the
initial
problems
identified
with
current
 collaboration
tools
and
their
support
for
the
creative
process,
the
Wave
infrastructure
addresses
 each
of
these
to
an
extent:

• media support - While Wave can be considered primarily text-based interaction, the infrastructure allows for Gadget Extensions and Robots which can be developed by the Wave community to support specific media types and automated actions.

• content vs process - The dynamic structure of a wave changes over time, based on the direction and intention of the collaboration, and this puts the emphasis on the evolving process rather than on specific pieces of content. Wave does not enforce a specific set of media types; rather, it treats all media the same: as a piece in the overall conversation.

• linear model - The structure of a wave is not determined by temporal factors (unlike most other collaborative tools); rather, it is based on a tree structure, where ideas can be expanded upon and grouped together.

• lack of replay / traceability - The infrastructure of Wave allows participants to play back the entire evolution of a wave.

• lack of awareness - Wave promotes awareness by offering near real-time collaboration, as well as playback functionality.

• granularity - The Operational Transformation functionality that Wave is built upon synchronises individual interactions (character by character, line by line), while still supporting asynchronous collaboration, in that temporal differences between users are not only allowed but fully supported thanks to the playback ability of a wave.



While
Wave
does
address
these
initial
concerns,
the
concept
of
Wave
creates
others
which
may
 hinder
the
collaborative
process.
Due
to
a
wave
structure
being
dramatically
different
to
more
 traditional
means
of
collaboration,
it
can
become
confusing
and
frustrating
when
trying
to
 observe
and
collaborate.
New
information
added
to
a
wave
may
not
necessarily
be
found
at
the
 top
or
bottom
of
a
wave,
but
nested
deep
within
existing
content:
“Google
Wave's
inline
reply
 capability
turns
a
conversation
into
a
tree
that
can
grow
any
number
of
branches.
When
wave


participants
add
new
information
to
a
wave
on
different
branches
at
different
times,
the
non-linear
nature
of
the
discussion
can
be
overwhelming
and
feel
unnatural.”,
(Trapani
et
al.,
2009).
 While
the
OT
functionality
of
Wave
mitigates
the
current
awareness
and
granularity
issues,
the
 inherent
speed
at
which
a
wave
evolves
can
lead
to
a
feeling
of
anarchy,
especially
when
multiple
 users
are
adding
content
at
the
same
time
in
different
areas
of
the
wave.
OT
also
enforces
the
 need
to
summarise,
refactor
or
garden
a
wave
conversation
throughout
the
collaboration,
to
 ensure
legibility,
maintain
organisation
and
effectively
group
ideas
and
conversations.

 Waves
are
conceptually
a
single
entity,
with
users
either
actively
collaborating,
or
having
no
 accessibility
to
a
wave.
Once
a
user
becomes
part
of
a
wave,
they
inherently
own
the
Wave,
and
 cannot
be
removed.
This
may
change
as
the
product
evolves,
but
fits
with
the
current
conceptual
 model
of
Wave.
Another
issue
with
Wave
is
that
although
parts
of
a
wave
can
be
extracted
to
 form
a
new
wave,
there
is
no
support
for
a
hierarchical
structure
of
a
wave
(rather
waves
may
be
 connected
via
hyperlinks
similar
to
websites,
which
forces
the
user
to
browse
between
waves
to
 gain
an
overall
understanding).
This
structure
limitation
constrains
how
a
wave
can
shape
over
 time,
and
does
not
allow
the
ability
of
information
in
a
wave
to
be
efficiently
moved
between
 existing
waves.
 While
Wave
has
many
advantages
over
traditional
collaborative
methods,
currently
we
are
not
 able
to
get
streaming
updates
and
there
does
not
seem
to
be
any
way
to
know
if
a
wave
has
been
 updated
without
going
to
the
site
itself.
Unlike
other
social
media
tools,
there
is
no
incentive
to
go
 back
to
a
wave
and
continue
the
conversation.
It
is
still
a
very
reactive
tool
and
most
of
the
time
 there
is
no
one
else
around. These issues and more will need to be addressed if Google Wave is to attract newcomers.
Email
may
not
be
ideal
as
a
collaboration
tool,
but
it
is
going
to
take
 more
than
a
good
user
interface
and
a
handful
of
content
widgets
to
make
Google
Wave
the
 collaboration
tool
of
choice
for
creative
collaboration.



CocoaWave
‐
Extending
Wave

 From
our
initial
investigation,
we
found
that
while
Wave
was
an
excellent
collaboration
tool,
the
 nature
of
the
application
(being
itself
a
website)
limited
how
we
could
use
it.
Wave
does
support
 the
ability
to
script
bots,
which
can
act
and
create
content
within
a
wave
automatically,
however
 these
bots
are
based
on
the
web,
rather
than
locally,
which
leads
to
the
same
limitations.
From
 this
finding
we
examined
potential
technologies
that
we
could
use
to
further
extend
Wave
that
 would
not
be
limited
by
the
web
sandbox.
Due
to
the
current
lack
of
an
official
external
 Application
Programming
Interface
(API)
to
Wave,
it
was
important
to
choose
a
tool
which
could
 easily
hook
into
the
web
interface
to
allow
the
sending
and
receiving
of
information.
The
 technology
directions
that
we
investigated
were
Cocoa,
Fluid,
Adobe
Air
and
Python
(Table
A).

Table A: Comparison of development platforms

                           Cocoa                 Fluid                 Adobe Air / Flex        Python
Platform                   OS-X                  OS-X                  Windows, OS-X, Linux    Windows, OS-X, Linux
Application Type           Native                Native                Flash Player VM         VM with C extensions
Integration with system    Full                  None                  Limited*                Full
Development                Easy/Medium           Very Easy             Easy/Medium             Medium/Difficult
Website Integration        Built-in WebKit       Built-in WebKit       Requires programming    Some (Greasemonkey)
                           (web browser) APIs    (web browser) APIs    (HTTP requests)

* Adobe Air is run within a sandbox, limiting its connectivity with external applications and hardware.



Fluid
allows
users
to
wrap
a
website
in
a
native
application,
and
allows
us
to
hook
into
the
 website
to
manage
notifications.
The
main
problem
with
Fluid
was
the
inability
to
extend
the
 application
to
hook
into
other
applications/hardware.
While
languages
such
as
Python
give
us
a
 powerful
way
to
integrate
with
the
operating
system,
they
do
not
give
us
ready‐to‐use
 functionality
to
hook
directly
into
the
Wave
interface,
and
instead
require
a
lot
of
programming
 time
to
manually
connect
to
the
interface.
Alternatively,
Air/Flex
and
Cocoa
both
have
the
ability
 to
run
websites
within
the
application,
which
has
two
benefits:
we
can
build
upon
the
current
 Wave
interface
without
having
to
rebuild
it,
and
we
have
the
ability
to
send
and
receive
 information
to/from
Wave
through
the
provided
APIs.
We
chose
to
develop
prototypes
in
both
 Cocoa
and
Flex/Air
to
further
examine
the
appropriateness
of
the
platform.
However
we
 discovered
that
the
native
speed
(or
lack
thereof)
of
the
Flash
Player,
combined
with
the
 complexity
of
the
Wave
interface,
led
to
an
unstable
system
and
issues
with
sending
information
 to
Wave.
In
contrast,
Cocoa
suffered
none
of
these
issues,
and
while
it
is
a
more
complex
 language,
we
were
able
to
develop
the
application
(with
examples
of
sending/receiving
 information
from
Wave
through
the
Webkit
Javascript
API)
in
less
than
100
lines
of
Objective
C
 (64
lines
to
be
exact).
In
under
12
hours
we
were
able
to
create
a
native
OS‐X
application
which
 allowed
us
to
extend
Wave
and
provided
us
with
a
custom
interface
that
we
could
modify
(Figure
 3).
We
have
now
distributed
CocoaWave
under
an
open
source
license
in
the
hope
of
seeing
how
 other
designers
can
utilise
it
for
integrating
with
external
applications
and
hardware
devices.
 The
full
source
code
is
available
at
http://code.google.com/p/cocoawave/.

 Figure
3.
The
current
CocoaWave
interface





 In
essence,
CocoaWave
is
simply
a
wrapper
application
for
Google
Wave,
that
enables
us
to
 customise
both
the
interface
as
well
as
interaction
with
Wave
beyond
what
is
possible
in
its
 current
web
based
implementation.
Our
interest
in
Wave
is
how
it
could
be
adapted
(apart
from
 the
official
Gadget,
Extensions
and
Bot
support
which
is
built
into
the
client)
to
better
support
 both
integration
with
other
applications,
as
well
as
external
devices.
From
our
exploration
of
 Wave
as
a
collaborative
tool,
we
found
that
the
web
browser
based
interface
limited
how
Wave
 could
be
adapted
into
existing
workflows,
especially
compared
with
legacy
platforms.
Other
 collaborative
technologies
such
as
Email
and
Instant
Messaging
desktop
applications
have
the


ability
to
notify
users
of
new
communication
via
the
client
applications,
and
already
have
the
 ability
to
be
integrated
with
other
applications
and
hardware
devices.

 While
Wave
can
already
support
a
large
number of forms
of
interaction,
the
use
of
the
Wave
 interface
as
a
creative
collaboration
tool
has
the
potential
to
hinder
the
creative
process,
and
we
 anticipate
that
the
current
interaction
is
more
likely
to
be
used
as
a
creative
output
tool,
rather
 than
integrated
into
the
existing
creative
process.
The
ability
to
connect
with
other
applications
 and
hardware
devices
would
enable
us
to
collaborate
around
information
directly
and
 automatically
(for
example
sending
MIDI
information
between
collaborators).
For
example,
a
musician
would
be
able
to
integrate
their
electric
guitar
directly
to
a
Wave
application,
which
 could
record
the
ongoing
process.
External
collaborators
could
view
and
collaborate
in
this
 process
in
realtime.
Rather
than
forcing
collaborators
to
exist
and
interact
within
the
Wave
 sandbox,
this
reinforces
Wave
as
an
underlying
framework
to
support
collaboration
(Figure
4).

 Figure
4.
CocoaWave
has
the
potential
to
allow
external
devices
to
communicate
through
 Wave




 In
the
current
implementation
of
CocoaWave,
we
have
successfully
been
able
to
send
information
 between
CocoaWave
clients
without
manual
interaction
with
the
Wave
interface.
This
 implementation
has
shown
that
with
extra
work
we
can
support
standard
protocol
 communication
through
the
Wave
infrastructure.
There
are
considerable
benefits
to
this
method
 of
communication
compared
with
existing
methods.
Firstly,
this
external
communication
 happens
in
tandem
with
the
existing
Wave
collaboration,
keeping
interaction
within
a
single
 application,
reducing
the
need
for
multitasking
between
collaborative
applications.
Secondly,
we
 have
the
ability
to
not
only
transfer
information
across
the
Wave,
but
augment
it
with
Gadgets
 that
exist
within
the
Wave.
For instance, a musician could play a guitar, other participants within the Wave could listen to the music, and a graphic of a guitar embedded within the Wave could visualise the notes being played.
This,
combined
with
the
ability
 to
communicate
within
the
Wave
itself,
as
well
as
the
underlying
features
of
Wave
(OT
and
 Federation),
leads
to
a
potentially
rich
and
powerful
collaborative
tool.
It
must
be
noted,


however,
that
although
Wave
provides
near
real‐time
synchronisation,
there
is
a
noticeable
lag
 with
regards
to
data.



Conclusions
and
Future
Work

 In
our
current
work,
there
are
two
directions
we
wish
to
explore
further.
While
we
have
explored
 Wave
and
determined
it
to
be
a
step
in
the
right
direction,
we
are
yet
to
conduct
a
formal
 evaluation
on
the
technology
with
regards
to
collaboration
in
the
creative
process.
To
formally
 evaluate
this,
we
are
planning
to
conduct
a
case
study
and
evaluate
Wave
in
an
undergraduate
 Studio
Design
course
in
the first
half
of
2010
throughout
the
entire
design
process.
This
physical
 computing
course
focuses
on
groups
of
multidisciplinary
design
students
working
together
over
 a
4
month
period
to
produce
a
tangible
physical
computing
concept.
We
hypothesise
that
by
 utilising
and
formalising
Google
Wave
in
the
team
collaboration
process,
we
can
qualitatively
 analyse
the
effectiveness
of
Wave
in
creative
design
processes.

 Figure
5.
A
proof
of
concept
CocoaWave
implementation
which
demonstrates
the
 communication
of
external
applications/devices
through
standard
protocols
such
as
MIDI
 and
XML.




 We
are
also
planning
continued
development
and
extension
of
CocoaWave,
to
examine
the
 underlying
infrastructure
capabilities
of
Google
Wave
(including
Federation
and
Operational
 Transformations).
The
next
stage
of
the
CocoaWave
development
is
to
create
a
high
fidelity
 prototype
which
has
the
ability
to
connect
input
devices
(such
as
an
electric
guitar
or
sensor
 array)
to
output
devices
(audible
and
visual
displays)
via
the
Wave
protocols
and
interface
(Figure
5).
Through
this
we
hope
to
better
support rich
collaboration
in
various
creative
fields
by


enabling
distributed
connection
of
devices,
as
well
as
effectively
integrate
with
the
existing
 collaborative
capabilities
of
Wave.

 In
this
paper,
we
have
examined
current
technologies
which
can
facilitate
collaboration
in
the
 creative
process,
and
proposed
that
Google
Wave
can
provide
collaborators
with
a
central
way
to
 collaborate
without
the
main
limitations
or
disadvantages
of
current
technologies.
We
have
also
 examined
the
issues
of
Wave,
and
developed
a
prototype
application
(CocoaWave)
to
see
how
we
 can
extend
Wave
to
provide
features
which
help
bring
Wave
into
existing
creative
processes.
 From
our
initial
investigation
of
Wave,
we
have
found
that
while
it
is
not
currently
the
ultimate
 collaborative
tool
to
support
the
creative
process,
the
foundation
of
Wave
allows
it
to
be
a
much
 more
powerful
tool
than
existing
systems,
and
is
certainly
a
step
in
the
right
direction.



References

 BBC,
(2009),
Strength
in
Science
Collaboration,

available
at
 http://news.bbc.co.uk/2/hi/technology/8342851.stm
Retrieved
5/11/2009
 Coughlan,
T.
and
Johnson,
P.
2006.
Interaction
in
creative
tasks.
In
Proceedings
of
the
SIGCHI
 Conference
on
Human
Factors
in
Computing
Systems
(Montréal,
Québec,
Canada,
April
 22
‐
27,
2006)
CHI
'06.
New
York,
NY:
ACM
Press,
pp.
531‐540.


 Farooq,
U.,
Carroll,
J.
M.,
and
Ganoe,
C.
H.
(2007).
Supporting
creativity
with
awareness
in
 distributed
collaboration.
In
Proceedings
of
the
2007
international
ACM
Conference
on
 Supporting
Group
Work
(Sanibel
Island,
Florida,
USA,
November
04
‐
07,
2007).
GROUP
 '07.
New
York,
NY:
ACM
Press,
pp.
31‐40.

 Fischer,
G.
2005.
Distances
and
diversity:
sources
for
social
creativity.
In
Proceedings
of
the
5th
 Conference
on
Creativity
&
Cognition
(London,
United
Kingdom,
April
12
‐
15,
2005).
 C&C
'05.
New
York,
NY:
ACM
Press,
pp.
128‐136.



 Grudin,
J.
(1994).
Groupware
and
social
dynamics:
Eight
challenges
for
 developers.
Communications
of
the
ACM,
Vol.
37,
No.
1,
pp.
92‐105.

 Johansen,
R.
(Ed.),
(1988)
Groupware
:
Computer
support
for
business
teams.
New
York:
Free
 Press.

 Mamykina,
L.,
Candy,
L.,
and
Edmonds,
E.
(2002).
Collaborative
creativity.
Communications
of
the
 ACM
Vol.
45,
No.
10,
pp.
96‐99.
 Maslow,
A.H.,
1963.
The
creative
attitude.
Structuralist,
Vol.
3,
pp.
4‐10.
 Perry‐Smith,
J.E.
&
Shalley,
C.E.
(2003)
The
Social
Side
of
Creativity:
A
Static
and
Dynamic
Social
 Network
Perspective,
The
Academy
of
Management
Review,
Vol.
28,
No.
1,
pp.
89‐106
 Rittenbruch,
M.,
&
McEwan,
G.
(2009)
An
historical
reflection
on
awareness
in
collaboration,
In
 Markopoulos,
P.,
De
Ruyter,
B.,
&
Mackay,
W.
(Eds.)
Awareness
Systems:
Advances
in
 Theory,
Methodology
and
Design,
London:
Springer,
pp.
3‐48.

 Shirky,
C.
(2003).
A
group
is
its
own
worst
enemy
[Electronic
Version].
ETech
keynote.
available
 at
http://www.shirky.com/writings/group_enemy.html
Retrieved
5/11/2009
 Sternberg,
R.J.
and
Lubart,
T.I.,
1999.
The
Concept
of
Creativity:
Prospects
and
Paradigms.
In
 Sternberg,
R.J.
(Ed.)
Handbook
of
Creativity.
Cambridge,
UK:
Cambridge
University
Press,
 pp.
3‐15.

 Trapani,
G.
and
Pash,
A.
2009.
The
complete
guide
to
Google
Wave.
Available
at
 http://completewaveguide.com/

 Warr,
A.
and
O'Neill,
E.
2005.
Understanding
design
as
a
social
creative
process.
In
Proceedings
of
 the
5th
Conference
on
Creativity
&
Cognition
(London,
United
Kingdom,
April
12
‐
15,
 2005).
C&C
'05.
New
York,
NY:
ACM
Press,
pp.
118‐127.





Authority 3.0: Toward a digital press for university-based musicians, and its role in validating ERA outputs

Paul Draper
Queensland Conservatorium Griffith University
South Bank, QLD, 4101 Australia
+61 7 3735 6263
[email protected]

ABSTRACT

This paper examines dilemmas for Australian music academics in terms of quantifying their research equivalence in the recent Federal government ERA preparations. To do so, short written statements and limited digital assets were offered in a trial evaluation framework somewhat disconnected from the original musical contexts and their meanings, yet this assessment model will increasingly impact upon career progression, esteem, and research funding in future ERA rounds. Consequently, this paper reviews the salient features of recent web 1.0 and web 2.0 activity to argue the case for a scholarly digital resource peer-review system as ‘authority 3.0’. Keywords

Digital resources, music research, peer review. INTRODUCTION While current computing practice abounds with innovations like online auctions, blogs, wikis, twitter, social networks and online social games, few if any genuinely new theories have taken root in the corresponding “top” academic journals . . Excess rigor supports the demands of appointment, grant and promotion committees, but is drying up the wells of academic inspiration . . the inevitable limits of what can only be called a feudal academic knowledge exchange system, with trends like exclusivity, slowness, narrowness, conservatism, self– involvement and inaccessibility. We predict an upcoming social upheaval in academic publishing as it shifts from a feudal to democratic form, from knowledge managed by the few to knowledge managed by the many . . The drive will be that only democratic knowledge exchange can scale up to support the breadth, speed and flexibility modern cross–disciplinary research needs. (Whitworth & Friedman, 2009, para. 1)

While these ideas encompass a wide-ranging agenda for academic research and peer-review, their thrust highlights some of the issues confronting university-based artists whose work falls in the ‘non-traditional’ research domain. This paper considers musicians in the academy, and the barriers and opportunities arising from the recent Excellence for Research in Australia (ERA) trial collection of Humanities and Creative Arts (HCA)


digital recordings, scores and written value claims. It questions the next steps for the ERA peer review processes in consideration of the confounding relationships between end-users (general public consumers of music), commercial approaches to online music endeavours, and a place for authentic peer-review of scholarly creative works. A BRIEF HISTORY OF ARTISTS IN THE UNIVERSITY

Australian Colleges of Advanced Education (CAEs) were amalgamated with universities following the radical restructuring of tertiary education begun by the Hawke government under education minister John Dawkins in the late 1980s. This required more than just educational convergence, but also an evaluation of what might constitute ‘research-equivalent’ outputs by creative and performing artists, now as university academics. Subsequently, a significant policy initiative was the release of Creative Nation: Commonwealth Cultural Policy (Keating, 1994), marking the first occasion of an Australian federal government enunciating a clearly articulated cultural policy and “vision of a culture-led future in a globalised society” (Craik, Davis & Sunderland, 2000, p. 1). This was soon followed by the Strand Report (1998) and the introduction of Categories H and J, mechanisms to report on academic creative outputs in an attempt to parallel traditional research quality and esteem indicators. Perhaps a little early for its time, ‘Cat.H/J’ proved conceptually flawed and administratively unwieldy, and was subsequently abandoned. A decade later the Howard government introduced the Research Quality Framework (RQF, 2006), modelled on the UK’s Research Assessment Exercise (RAE, 2008) and which once again offered a suite of indicators by which to measure and value creative arts outputs (now more widely available as digital proxies). This too never came to fruition and was quickly replaced by the Rudd government as the Excellence for Research in Australia Initiative (ERA, 2009). For the first time however, ERA does promise additional university research block funding on the basis of the quality and peer esteem of creative works. Now after some 20 years of speculation and false starts, a nationwide ERA trial evaluation of the Humanities and Creative Arts (HCA) research cluster took place during July-September of 2009.


ERA AND THE MUSIC EXPERIENCE

In recent times therefore, university academics and administrators have repeatedly navigated the RQF and ERA processes to consider, test, re-examine and recommend appropriate discipline-specific indicators in order to better define notions of research income, publications, quality and impact using citations against relevant Australian and world benchmarks, where relevant and applicable.



the separation of the reviewer from the abstraction of music making and/or its context;



the potential for various personal tastes to be applied to the ranking of any art-form; and,



the relative success of a digital proxy (variously limited by file size, platform, media or audience).

THE EXTERNAL ENVIRONMENT

For many university musicians however, this has been a blunt, force-fit approach with much guesswork involved in using the wrong instruments to evaluate digital proxies of the right activity – ie, traditional publishing conceptions and indicators used to measure artistic activity (often site-specific and/or real-time) via written commentary on digital representations restricted by the 2009 ERA trial requirements for a maximum 15MB file size (ERA, 2009). Nonetheless, the procedures have been useful in refining academic thinking around this, as per the following exercise at the author’s institution.

To the lay-person, when the question is asked ‘how do musicians publish?’, the answer is often ‘by records’ or perhaps ‘music scores’. While the contemporary field of activity is certainly much broader than this, recording and score publishing is useful as a starting point to examine the current environment. Similarly to books, the music field is broken up into many niche markets and publishing houses: from jazz, to classical, world, and popular music etc., although it would be fair to say that commercial interests are generally very narrow. Other academic limitations also include:

Within the Queensland Conservatorium Research Centre (QCRC) ERA data collection processes, the most common representation of practice-based music research outputs included:



finding and managing such interactions is time consuming and often unsuitable for the outputs of experimental or niche work so often common to the scholarly context;



music recordings;





video recordings;



music scores;

equivalents of ‘print and distribution’ (P&D) arrangements require cash up-front but do not guarantee distribution or review mechanisms suitable for academic publishing;



web-based creative and/or curated materials;





DVD-ROM interactive pieces.

difficulty in aggregating the metrics and/or esteem rankings on an institutional /sector basis as is required for academic research evaluation.

These outputs presented a non-linear mix of commercial, independent or un-published works, where the latter may have often attempted to verify impact, esteem or significant public recognition – there appeared to be no direct correlation with scholarly conventions for academic publishing. Therefore a major component of the ERA undertaking has been in gathering so-called ‘research statements’ from each of the relevant academics which include best-as-can information re. research background /contribution /significance. While this has clearly been a useful reflective exercise both for staff and institution, it would be fair to say that most academic musicians do not personally monitor, gather and rank these kinds of metrics on a regular basis – unlike in the traditional publishing domain where citations indices are provided in an arguably rigorous but certainly systematised way. In sum, this method of selfreview is neither suitable nor equitable. ERA next steps will invite external peer review of secure digital archives of such material. While representing a welcome step toward due recognition of the work of artists in the academy (and one presumes, resulting research block funding flow-on) the on-the-ground processes may prove problematic in terms of notions of ‘excellence’, for example:


Notwithstanding these restrictions, there remains the possibility that developmental or experimental works may be evolved over time and leveraged to wider distribution in much the same way that conference publications and journal articles may seed a significant book output. At least this could be the case if such a developmental mechanism existed, and if the relevant IP and copyright arrangements allowed.

THE PROBLEM WITH MUSIC

In the 21st century, the Internet has impacted on the dissemination, consumption and ranking of digital works, but arguably none terraformed more significantly than the international music recording industry (ie, digital downloads, iPods, Apple iTunes, .pdf scores etc). Also in the recent past there have been a range of initiatives by Australian universities and peak bodies which have progressed the idea of sector-wide peer-review of arts endeavours (eg, ANAT, 1998; Ausdance, 2007; Inter-arts, 2009). With music however, this has been somewhat problematic in that the online publication of works is subjected to a range of limitations. These include: sub-disciplinary or genre boundaries (eg, new music, vs. sonic art vs. popular music), competing


university branding obsessions; Australian collection society licensing laws where the material may not be 100% original (for example, jazz improvisation on set pieces, original performances of classical repertoire); or where authorship is often complex given the varying contributions of composer, performer, recordist and/or executive producer (Draper, 2008). Many Australian universities have developed varied approaches to measuring or profiling artistic works, but to date these are largely segregated along thematic or administrative lines, that is, digital works are utilised as: •

promotional materials (external relations and student recruitment departments);



commercial products (enterprise units);



research-equivalent outputs offices for research reporting);



live online digital activity and data sets (eresearch and computing clusters);



faculty/school level independent endeavours (podcasting, webcasting, internet radio, etc.).

(administrative

Clearly none of these viewpoints speak clearly to each other, highlighting a multitude of doxa-like assumptions about music-making, its quality, its value and its impact. More pertinently – there are no consistent peer-review criteria which as yet may be confidently applied to digital resources (Bates, Nelson, Roueché, Winters, & Wright, 2006). PEER REVIEW OF DIGITAL RESOURCES

Digital resources cannot tell the whole story about music, but in the ERA context, recordings, web-sites and scores are being put forward as research proxies for evaluation. While mechanisms for the evaluation of the print outputs of traditional scholarly research are well established, no equivalent exists for assessing the value of digital arts by-products and/or ‘born digital’ outputs. Therefore if digital resources are genuinely to contribute to the research profile of Australian higher education institutions and form part of the ERA processes, it is essential that an authentic framework for evaluating these resources be established. A consistently-applied system of peer review of the artistic quality, intellectual content, and/or technical architecture should serve to: •

establish resource types which are of most use/interest to practice-led music research;



contribute to the development of common standards for accessibility and usability;



reassure academics and their host institutions of the worth of time spent in the creation of music and its representative digital resources;



inform proposals to ensure the sustainability, preservation or wider leverage of high-quality scholarly practices and material outputs (ibid).


With the next ERA sector-wide collection and evaluation imminent in 2010, this then prompts a re-examination of just how digital repositories, peer thinking and open publishing for music might be best conceptualised and leveraged. To do so, I will now turn to briefly examine a number of publishing scenarios, then draw upon these to discuss and argue some possible ways forward. AUTHORITY ONLINE Online scholarly publishing in Web 1.0 mimicked fundamental conceptions . . Most content was closed to nonsubscribers; exceedingly high subscription costs for specialty journals were retained; libraries continued to be the primary market; and the "authoritative" version was untouched by comments from the uninitiated. Authority was measured in the same way it was in the scarcity world of paper: by number of citations to or quotations from a book or article, the quality of journals in which an article was published, the institutional affiliation of the author, etc. (Jensen, 2007, p. 3)

‘Format shifting’ was the first wave. For example, in e-learning platforms such as Blackboard, lecture notes and content have simply been moved from hard-copy face-to-face formats to online delivery of digital materials. While podcasting, video lectures and the like are welcome for asynchronous engagement and review, delivery represents a traditional approach to the one-way transmission of authorised knowledge to learners as receptors. Similarly with scholarly e-journals. For example, First Monday (2009) is now a mature and widely read publication using the Open Journal Systems (OJS, 2002) platform to semi-automate peer review administration and publication processes. This allows for streamlined workflow and brings research articles to market much more rapidly than conventional print-press, yet, the framework is still that of old – authority blind reviews and edits, while readership consumes (and hopefully cites). In music, the recording industry is moving progressively from shop-front sales of records and CDs to digital downloads stores such as Amazon and iTunes. More recently, music buyers have witnessed the resurrection of the LP format online, seen as some commentators as simply a way for the record industry to perpetuate old bundled approaches to maximising copyright and IP returns. CMX (Wikipedia, 2009a) is just such a file format proposed by the record label majors and intended to be a successor to MP3. CMX's premise is similar to that of Apple's iTunes LP (Mortensen, 2009), with data such as audio, lyrics, and album art being contained in a single file. Such approaches represent convenience or novelty, but nonetheless conventional transmission. In parallel, formerly blind consumption is increasingly awakened by ‘long tail’ pattern intelligence (eg, Google, iTunes Genius), web 2.0 rankings and social networking sites (eg, LastFM, BeBo, etc) (Draper, 2009).


Following this, it is perhaps unsurprising that many university programs have borrowed from these developments: firstly through web 1.0 content delivery, but more recently incorporating web 2.0 technologies for social networking, wikis, blogs and tagging to value-add. To unpack this further, the following presents a brief overview of online university music publication systems.

UNIVERSITY RECORD LABELS

Internationally, many universities have developed their own music dissemination platforms, often modelled on commercial online music stores. In most cases it would appear a modest return is channelled back into the running of the enterprise, while the primary benefits appear to be the promotion and viral marketing of the university and its programs. Indicative examples are as follows.

The UK – Royal Academy of Music Record Label

Working in the recording industry is increasingly central to the careers of many performers. The Academy's excellent recording facilities are available for producing demo tapes, and the Business for Musicians module of the BMus programme includes training on making and promoting a CD. (RAM, 2009)

RAM's raison d'être for making CDs is threefold:

• to provide studio experience for students;
• to record music which reflects the disciplinary range and quality of RAM's musical activities;
• to produce committed and discerning interpretations of interesting repertoire, “something which young, talented people often respond to spectacularly well” (ibid).

RAM recordings are regularly broadcast by BBC Radio 3, Classic FM and the BBC World Service, and selected discs are distributed world-wide. Most discs are available for sale in Academy Chimes campus stores, with all proceeds used to fund future recordings. Digital works are distributed through the Naxos Music Library.

USA – Intercollegiate Record Label Association

IRLA membership is comprised of representatives from student-run record labels and other student-run music-related organizations. The IRLA exists to facilitate the sharing of information between student-run record labels and to establish a mechanism for music industry entities to communicate with all student-run record labels easily. (IRLA, 2009)

The IRLA links up record labels across a very large number of US universities. It promotes the names of the member organizations, their logos, web stores and social networking sites, and provides opportunities for artists to promote their music via the ESPNU intra-university entertainment and sports television channel (Wikipedia, 2009b). The IRLA primarily services the networking of student-run, university music recording organisations.

The next two examples are from Australia. Both highlight variations in the student-led and commercial shop-front models through their inclusion of open publishing platforms, social networks and/or collaborative student-staff undertakings.

Queensland – Radio IMERSD

Radio IMERSD invites digital contributions and collaborative ideas from academic staff, practitioners, visitors, alumni and students in a range of areas including: public speeches, viva voce and workshop presentations; musical compositions, performances and sound recordings; commentary and review intended to stimulate critical discussion. (Radio IMERSD, 2006)

The open publishing component features podcasts which are disseminated under Creative Commons Australia licenses and via Apple iTunes and Griffith iTunes-U. All material comprises original works produced by staff, students and visitors as composers, arrangers, performers, or sound recordists/producers, as shown in Table 1 below.

Table 1. Radio IMERSD Themes and Clusters (theme – content – URL)

• Concerts – Staff, students, alumni and guests
  http://www29.griffith.edu.au/radioimersd/index.php?option=com_content&task=blogcategory&id=17&Itemid=27
• Public Lectures – Academics, VIPs & research partners
  http://www29.griffith.edu.au/radioimersd/index.php?option=com_content&task=blogcategory&id=16&Itemid=28
• Creative Sparks – Original student work
  http://www29.griffith.edu.au/radioimersd/index.php?option=com_content&task=blogcategory&id=15&Itemid=29

Radio IMERSD also provides a 24/7 streaming NetRadio station which cycles and broadcasts recording studio and live concert productions. While external commercial recordings are not distributed, certain material can be made available only on the university intranet due to complex music collection society licensing laws, thus limiting the distribution of those recorded performances which utilise compositions, scores or arrangements in copyright with publishing houses (Draper, 2008). In sum, this model provides an open source platform to freely disseminate and promote the work of the Conservatorium's diverse population. Google Analytics is employed to provide hard metrics about access numbers, downloads and international distribution. The value-adding is not only in terms of promotion, but also in terms of local exemplars, internal knowledge, popularity, and the sharing of ideas, opinions and cross-postings via other networks including discussion boards, blogs, and podcasting sites. A good deal of informal traffic returns to authors and creators, but to date this cannot be measured in terms of its impact.

Western Australia – Slow Release Music

Welcome to Slow Release. Here we will be releasing music from the various streams of the Western Australian Academy of Performing Arts. A student run enterprise, the label will be produced, managed and promoted by a student team from the Bachelor of Music course. (WAAPA, 2009)

A new initiative with little content as yet. However, the site thoughtfully profiles and promotes WAAPA's music departments through a shop-front organised by album genre: Classical, Jazz, New Music and Contemporary. Slow Release provides its own online store: preview and purchase releases are available from the Slow Release label catalogue in both CD and digital format, all digital releases sold through the Slow Release Store are packaged as high-quality, non-protected MP3s, and all albums and EPs come with printable cover art. More notably than other examples perhaps, the Slow Release Music design includes cross-promotion and maintenance of a range of external social networking websites, as shown in Table 2 below.

Table 2. Slow Release Music Social Networks (web 2.0 service – site)

• Facebook – WAAPA Music Label
  http://www.facebook.com/pages/WaapaMusic-Label/80616819424?ref=mf
• LastFM – waapamusiclabel
  http://www.last.fm/user/waapamusiclabel
• MySpace – WAAPA Music Label
  http://www.myspace.com/waapamusiclabel
• Twitter – waapamusic
  http://twitter.com/waapamusic
• YouTube – WAAPA Music Videos
  http://www.youtube.com/profile?user=waapamusic

WAAPA's platform makes a useful effort to link web 1.0 and 2.0 authority models: it provides a front-end which borrows on a conventional web-store sales approach, but also links content to a range of well maintained and highly active social networks as a way to promote and, to a degree, review content.

DISCUSSION: AUTHORITY 3.0

Throughout the related e-publishing literature (eg, Whitworth & Friedman, 2009; Jensen, 2007; Bates et al., 2006), an overarching question is repeatedly raised: what are the implications for the future of scholarly communications and scholarly authority? Jensen (2007) proposes this as ‘Authority 3.0’, that is: the digital availability of a work for indexing, referencing, quoting, linking, and tagging; the existence of metadata that identifies the work, categorizes it, contextualizes it, and summarises it; and the capacity for others to enrich it with their own comments, tags, and contextualizing elements. This last point was picked up in a significant report (Bates et al., 2006) by the UK's Arts and Humanities Research Council (AHRC), following many RAE rounds which proved unsatisfactory in relation to the peer review and evaluation of digital resources for the arts and humanities. The document summarises its key recommendations based on the input of many universities across Britain, with particular reference to the interface between blind peer review, authors and audiences. Firstly, it was apparent that an entirely new system of peer review and evaluation was not required, and indeed that it “would be damaging to replace wholesale the established methods of evaluating publicly-funded research ... Rather, the existing review structures should be developed to meet the specific challenges of the digital environment. Above all, peer review should retain its character as a measure of esteem” (p. 18).

However, it was urged that this model should then be expanded in an open process, to be debated in public and made available to the community via conferences, wikis, discussion lists, and web 2.0 technologies. The report's final recommendations (pp. 31–32) convincingly argue that:

• Peer review of digital resources should be conducted with the evaluation report published on the respective project website. The process should be open, with all comments attributable.
• Resource creators should be offered a public right of reply.
• Post-completion review should be conducted in a spirit of openness, so that resource creators are encouraged to discuss freely any problems which they have encountered and any innovative solutions that they have adopted, for the benefit of the research community as a whole.
• Scholarly presses are encouraged to commission reviews of significant digital resources, and to publish them routinely alongside other reviews.
• Common and widely-publicised citation standards for digital resources should be established; resource creators should be encouraged to include citation instructions on their project websites and to maintain persistent URLs.

Synopsis

Across the wider breadth of university-based record labels that were examined, activity tends to remain focussed on student content, led by a decidedly commercial mimicry. While Radio IMERSD does provide for professional and/or academic content, what is missing is the capacity for review by expert professionals, academic authorities, or other leaders in the relevant fields, as would be necessary as a basis for developing appropriate discipline-specific indicators against relevant Australian and world benchmarks, where applicable (see above, ERA discussion).


Michael Jensen (2007) takes this further in ‘The New Metrics of Scholarly Authority’ (The Chronicle Review, US), arguing that in an Internet-connected world of pervasive and powerful computer processing, Authority 3.0 will likely include the following, as summarised in Table 3 below.

Table 3. Authority 3.0 Framework (authority – attributes)

Scholarly Peer – Prestige of the publisher. Prestige of peer reviewers. Quality of the context: what else is on the site that holds the document, and what is its authority status?

Author – Quality of the author's institutional affiliation(s). Significance of the author's other work. The author's participation in other valued projects, as commenter, editor, etc. Reference network: the significance rating of all the texts the author has touched, viewed, read.

Social Network – Prestige of commentators and other participants. Valued links, in which the values of the linker and all his or her other links are also considered. Nature of the language in comments: positive, negative, interconnected, expanded, clarified, reinterpreted. Percentage of phrases that are valued by a disciplinary community. Obvious attention: discussions in blogspace, comments in posts, reclarification, and continued discussion.

Digital Asset – Percentage of a document quoted in other documents. Raw links to the document. Length of time a document has existed. Inclusion of a document in lists of ‘best of’, in syllabi, indexes, and other human-selected distillations. Types of tags assigned to it, the terms used, the authority of the taggers, the authority of the tagging system.
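Purely as an illustration of how attributes like these might be operationalised computationally – this sketch is not part of Jensen's article or of the present paper, and the numeric signals and weights are invented for the example – the framework can be read as a set of per-category scores combined into a composite measure:

```python
from dataclasses import dataclass

@dataclass
class AuthoritySignals:
    """Illustrative signals loosely following Table 3, each normalised to 0.0-1.0."""
    scholarly_peer: float   # e.g. prestige of publisher, reviewers, hosting site
    author: float           # e.g. affiliation, significance of other work
    social_network: float   # e.g. prestige of commentators, valued links, comments
    digital_asset: float    # e.g. quotation rate, inbound links, tags, longevity

def authority_score(s: AuthoritySignals,
                    weights=(0.4, 0.25, 0.2, 0.15)) -> float:
    """Weighted combination of the four categories; the weights are arbitrary."""
    parts = (s.scholarly_peer, s.author, s.social_network, s.digital_asset)
    return sum(w * p for w, p in zip(weights, parts))

# Example: a work with strong peer standing but, as yet, little social uptake.
print(round(authority_score(AuthoritySignals(0.9, 0.7, 0.2, 0.3)), 2))  # 0.62
```

The particular weights matter less than the underlying point: unlike conventional blind review, most of these signals are machine-harvestable and can be recomputed continuously as a work is read, linked, tagged and discussed.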

IMPLICATIONS AND CONCLUSIONS

ERA aims to identify excellence across the full spectrum of Australian research activity and emerging research areas. Meanwhile, a recent CHASS report (Haseman & Jaaniste, 2008) reiterates the imperative to strengthen the evidence base for arts in Australia: Current measures and benchmarking of the contributions of the humanities and creative arts to national innovation are inadequate . . better measures are critical for future models for public support of national innovation. (p. 32)


In Australia there is currently no academic press able to offer a publishing and peer review platform for the digital assets of academic musicians (one that could be used as part of ERA submissions). Given the preceding discussion, such a hybridised 3.0 vehicle must be established to support the following:

• The online dissemination of academic music publications, especially early-stage works, experimental or scholarly material unsuited to the commercial domain but highly relevant in areas of pure research and/or research-based learning, eg: Radio IMERSD (2006).
• The aggregation of music academic peer-reviewers, with editorial board and equivalent academic structures, eg: National Council of Tertiary Music Schools (NACTMUS) members.
• Both secure (peer review) and public access (end-user) web 2.0 frameworks for dissemination of materials, eg: Open Journal Systems (OJS) in concert with a suitable web 2.0 Content Management System (CMS).
• Initial Creative Commons licensing, to allow freedom for authors to leverage and further develop their creative works within a wider commercial or academic arena, as required.
• A sideline enterprise label to leverage and promote material via commercially accepted production standards, distribution (P&D) and shop-front arrangements for online music, eg: Slow Release Music (WAAPA, 2009).
• Recognition by peak bodies, press, industry and government, eg: Music Council of Australia (MCA), Australian Music Centre (AMC), Australasian Performing Right Association (APRA), Australian Research Council (ARC).
• Knowledge transfer and the enhancement of both peer review and academic capacity, vocabulary, and navigational understanding to utilise such a framework to best advantage.
• The realisation of authentic ‘peer review of digital resources’ criteria as part of a National Competitive Grant (NCG) and/or a theme within a Centre of Excellence in Music proposal.

Finally, it should be carefully considered how a pilot platform might be branded and operated to best serve and attract intellectual investment across the Australian academic music sector. Authority 3.0 seeks to combine the most promising features of online evolution to date: from web 1.0, utilising established peer review and e-journal mechanisms to monitor the responsibilities associated with publicly-funded research; and from web 2.0, to integrate end-user engagement, folksonomies and the viral dissemination of valued works. In such a landscape, then, musicians may engage with peer review at multiple levels, while audiences can only benefit from greater access to the meanings behind the creation of music itself.


REFERENCES

ANAT (1998). Australian network for art and technology. http://www.anat.org.au
Ausdance (2007). http://www.ausdance.org.au
Bates, D., Nelson, J., Roueché, C., Winters, J. & Wright, C. (2006). Peer review and evaluation of digital resources for the arts and humanities. Final report, ICT Strategy Projects, Arts and Humanities Research Council, UK. http://www.history.ac.uk/resources/digitisation/peerreview
Craik, J., Davis, G. & Sunderland, N. (2000). Cultural policy and national identity. In G. Davis & M. Keating (Eds.), The future of governance (pp. 158–181). Melbourne: Macmillan.
Draper, P. (2009). How online social networks are redefining knowledge, power, 21st century music-making and higher education. Journal of Music Research Online. Adelaide: Music Council of Australia. http://www29.griffith.edu.au/imersd/draper/publications/research/online_social_networks.pdf
Draper, P. (2008). Who's really doing the stealing? How the music industry's pathological pursuit of profit and power robs us of innovation. In proceedings of the Creating Value: Between Commerce and Commons conference, 25–27 June 2008, Brisbane, Australia. http://www29.griffith.edu.au/imersd/draper/publications/research/draper_whos_really_doing_the_stealing.pdf
ERA (2009). The excellence for research in Australia initiative. Canberra: Australian Government and Australian Research Council. http://www.arc.gov.au/era
First Monday (2009). Online e-journal. University of Illinois at Chicago. http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/about
Haseman, B. & Jaaniste, L. (2008). The arts and Australia's national innovation system 1994–2008: Arguments, recommendations, challenges. CHASS Occasional Paper #7. http://www.chass.org.au/papers/pdf/PAP20081101BH.pdf
Inter-arts (2009). Australia Council, Inter-arts office. http://www.australiacouncil.gov.au/about_us/artform_boards/inter-arts_office
IRLA (2009). Intercollegiate record label association. USA. http://www.studentrecordlabels.com
Jensen, M. (2007). The new metrics of scholarly authority. Chronicle Review, 53(41), 1–9. http://chronicle.com/article/The-New-Metrics-ofScholarly/5449
Keating, P. (1994). Creative nation: Commonwealth cultural policy. Canberra: Australian Government. http://www.nla.gov.au/creative.nation/contents.html
Mortensen, P. (2009). iTunes LP: The first digital album good enough to criticize. Cult of Mac (10 Sept., 2009). http://www.cultofmac.com/itunes-lp-the-first-digitalalbum-good-enough-to-criticize/16132
OJS (2002). Open journal systems. Public Knowledge Project (PKP), University of British Columbia, Simon Fraser University, Stanford University and Arizona State University. http://pkp.sfu.ca/?q=ojs
Radio IMERSD (2006). Queensland Conservatorium Griffith University podcasting platform. http://www29.griffith.edu.au/radioimersd
RAE (2008). Research assessment exercise. Higher Education Funding Council for England, the Scottish Funding Council, the Higher Education Funding Council for Wales, and the Department for Employment and Learning, Northern Ireland. http://www.rae.ac.uk
RAM (2009). The Royal Academy of Music record label, UK. http://www.ram.ac.uk/audioandvideo/Pages
RQF (2006). Research quality framework (RQF) retrospective. UniSA. http://www.unisa.edu.au/rqie/rqfhistory
Strand, D. (1998). Research in the creative arts. Canberra: Department of Employment, Education, Training and Youth Affairs. http://www.dest.gov.au/archive/highered/eippubs/eip986/execsum.htm
WAAPA (2009). Slow release music. Western Australian Academy of Performing Arts, Edith Cowan University. http://slowrelease.waapamusic.com
Whitworth, B. & Friedman, R. (2009). Reinventing academic publishing online. First Monday, 14(8). http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2609/2248
Wikipedia (2009a). CMX (file format). http://en.wikipedia.org/wiki/CMX_(file_format)
Wikipedia (2009b). ESPNU. US university entertainment and sports programming network. http://en.wikipedia.org/wiki/ESPNU



Vertical Integration through Blended Learning: a whole-of-program case study

Matt Hitchcock
Queensland Conservatorium Griffith University
Brisbane, Queensland, Australia
[email protected]

Abstract
The systematic segregation of students into class and year-level groupings does not naturally support collaboration and project-based learning. At the same time, the Internet has enabled global social networking, which has proven to be a source of engagement and an effective enabler of revised professional practices and artistic collaborations. From 2004 to 2008 a research project developed and interrogated a pedagogically embedded, online, whole-of-program community designed to reflect the sorts of knowledge-sharing structures occurring in professional workplaces through the vertical integration of knowledge, skills, awareness and professional attributes in students. Vertical integration in the context of Music Technology curricula is herein defined as the coordinated, purposeful and planned system of sharing teaching and learning roles, linkages and activities in the delivery of education and training across all learner stages. The focus of this paper is on some of the outcomes of the five-year study rather than on the processes of its development. It is shown that a whole-of-program discussion board, when blended into a face-to-face curriculum, can greatly assist the vertical integration of students in the program, fostering engagement in intellectual and practical pursuits that may be unfamiliar to them, but which they are likely to encounter in their professional careers.


Keywords
 Blended learning, vertical integration, discussion board, professional competencies.

Background
This paper focuses on one program cohort (Music Technology) within an element (the Queensland Conservatorium) of Griffith University in Brisbane, Australia. The structure of the program is typical of universities, in that program delivery comprises a suite of modularised courses (in our case spanning three years) with a single academic to many students. These circumstances are the same for all courses, whether lecture or practical, and have applied since the inclusion of music technology at the Conservatorium in the 1970s. I came to the Conservatorium in 2001 after a long history as a professional practitioner, and over several
years as an academic became aware of how unprepared many graduates were for life as creative professionals. This was despite the music technology department housing a forward-thinking and experienced faculty and an exemplary learning environment. When I started at the Conservatorium I entered an environment where there already existed an emphasis on problem-based learning (Sweller, 1988; Hmelo-Silver et al., 2007), scaffolding (Brown et al., 1989; Rogoff et al., 1996), and collaboration and mentoring from industry-aware and capable staff. Since the mid-to-late 1990s the area had self-developed extensive online and electronic resources and used many web-based mechanisms for delivery to students. This was not an environment in crisis; however, there still existed a considerable gap between graduates and the demands of higher-end professional practice. The gaps, however, were not generally in the “know-what” or “know-how”, but more in relation to “being” someone, where “mastering a field of knowledge involves not only ‘learning about’ the subject matter but also ‘learning to be’ a full participant in the field” (Brown & Adler, 2008, p. 4). In this context, learning to “be” is about knowing how to learn, negotiate and appropriate the “ways” of different professions (Wenger et al., 2002). My early research and observations indicated common traits in the student cohorts:

1. students remained separated into small cliques within year groupings;
2. networking was viewed as unimportant; in other words, learning was perceived by students as being about what each individual was able to achieve;
3. learning transfer was poor across classes and year levels;
4. some students retained resilient ‘high-school’ perspectives;
5. there was little cross-year communication or interaction.

Additionally, student and staff conversations were mostly held in isolation from the rest of the cohort, resulting in valuable insights being lost to other students and indeed different contexts for the same students. This was especially noticeable when discussions were covering areas that should have crossed course boundaries and, importantly, year-level boundaries. Broad and inclusive discussion around learning and professional cultures did not seem to be occurring with depth or consistency. Discussions were predominantly being limited, segmented and compartmentalised by the artificial boundaries of tertiary education structures. With this as a provocation there was a need for the
replication, more than a simulation, of a professionally oriented community within the music technology area, in an effort to integrate professional traits into all aspects of the learning landscape. Accordingly, the need was to create a vertically integrated whole-of-program community that more aptly reflected the sorts of transformative knowledge-sharing structures occurring in professional workplaces. Vertical integration in the context of Music Technology curricula is herein defined as the coordinated, purposeful and planned system of sharing teaching and learning roles, linkages and activities in the delivery of education and training across all learner stages. John Seely Brown, in discussing this transformation from student to someone who has insight into being a practitioner, proposes that:

We need to find ways that our students can learn more about learning-to-be much earlier in their education. Today's students want to create and learn at the same time. They want to pull content into use immediately. They want it situated and actionable – all aspects of learning-to-be, which is also an identity-forming activity. This path bridges the gap between knowledge and knowing. (Brown, 2006, p. 11)

The first step then was to establish a learning community with the ability to open the membership base up to as many people as possible. The intention was to include students, academics, alumni and industry professionals. In light of prior experience with creating online communities, the most viable immediate solution not requiring complete upheaval of the tertiary education environment was to create an online whole-of-program community by pedagogically embedding a web-based discussion board that was not tied to any university-based learning management system, structure or courses. Significantly herein, the tensions between the concept of ‘replication’ and the action being ‘virtual’ were acknowledged, but not allowed to deter the action. The ensuing environment is therefore one that is situated as a blended learning (Bersin, 2004) environment.

Defining blended learning

The term ‘blended learning’ has many interpretations. Oliver and Trigwell note the subsequent difficulties for researchers arising from this variance and ensuing lack of clarity:

The term ‘blended learning’ is ill-defined and inconsistently used. Whilst its popularity is increasing, its clarity is not. Under any current definition, it is either incoherent or redundant as a concept. Building a tradition of research around the term becomes an impossible project, since without a common conception of its meaning, there can be no coherent way of synthesising the findings of studies, let alone developing a consistent theoretical framework with which to interpret data. (Oliver & Trigwell, 2005, p. 24)

Across the literature, ‘blended learning’ is a term that can refer to combining or mixing:

• online and face-to-face forms of learning;
• different web-based technologies in an e-learning context;
• pedagogical approaches (e.g. constructivism, behaviorism, and cognitivism);
• different foci for learning, or intended learning (Valiathan, 2002)[1];
• any form of instructional technology (online or not) with any form of face-to-face learning.

The most pertinent definitions in this context relate to the blending of online and face-to-face contexts for learning and learning experiences. This is not to diminish the importance of the blending of pedagogical approaches, nor Valiathan's different foci for learning. There is however recognition that both occur naturally in the music technology context as a by-product of the pedagogy, and that neither forms the primary focus of this research. Subsequently, the following definition is used to create a common understanding of the term blended learning in this context. Blended learning herein is when there is integration of “online with traditional face-to-face class activities in a planned, pedagogically valuable manner; and where a portion (institutionally defined) of face-to-face time is replaced by online activity” (Picciano, 2006, p. 3).

[1] Valiathan (2002) describes blends in terms of the focus for learning, or ‘intended’ learning. Skill-driven learning combines self-paced learning with instructor or facilitator support to develop specific knowledge and skills; attitude-driven learning mixes various events and delivery media to develop specific behaviours; and competency-driven learning blends performance support tools with knowledge management resources and mentoring to develop workplace competencies.

Method
A range of methods was used for the data gathering, consisting of three longitudinal perspectives and three vertical snapshots. Four forms of data-gathering evidence across the program (n=60) were then employed:

1. discussion board interactions;
2. interviews (n=8) – purposive sampling (Berg, 2004);
3. surveys (2006 n=21, 2007 n=35) – purposive sampling;
4. participant observations (continual/longitudinal).

The longitudinal perspectives come from the participant observations and discussion board interactions, as well as a three-year span between the phase 2 interviews and the phase 3–4 surveys. The vertical snapshots come from the interviews and the two phases of surveys. Recruitment was voluntary and included low to high achievers, students with post counts from the lowest to the highest for the year, and graduates as well as local, interstate and international students, in an age range as diverse as was available to me (17–28). The interviews were undertaken after yearly assessments had concluded. A multi-method approach using only qualitative methodologies was used to reduce any methodological artefacts and to ensure that variances in validation reflected variances in traits rather than method
(Bouchard, 1976). To this end, the qualitative methodologies utilised were:

1. Action research – Since 2002 all students have been surveyed each semester with a focus on course and program experience. Since 2004, academics also share their observational data at regular intervals. Action research in the area includes: work-integrated learning pathways on campus and in the field (Draper & Hitchcock, 2006; Hitchcock, 2007); perspectives on curriculum in changing contexts (Burt et al., 2007); and program-wide blended learning strategies (Hitchcock & Draper, 2008; Hitchcock, 2008).

2. Participant observation – a completely natural outcome of the circumstances and environment. Given that I was both immersed in the group and a key participant within the setting, the immediately observable details (such as online interactions) and the more hidden details (such as changes in relational dynamics between students over time) were more easily observed and understood. Subsequently, a richness of initial data was suggestive of directions and emergent themes for deeper investigation via interview and survey.

3. Grounded theory – an extension of both action research and participant observation, where the data generates theory rather than the other way around. The aim was to discover the theory implicit in the data (Dick, 2005). Herein, some theories have been developed inductively from a corpus of knowledge acquired by myself as both a participant and an observer.

4. Case study – I have taken a case-oriented perspective where suspicion exists of simple additive models. As is common with perspectives influenced by grounded theory, I take the view that I am theorising about the world as the respondents see it rather than how the world could ostensibly be (Strauss & Corbin, 1990), understanding that there is no one single reality, and that there can be more than one set of basic beliefs, or 'paradigms', about what constitutes reality and counts as knowledge (Hanson, 1958; Myers, 2000).

The discussion board is a further form of data, with currently over 30,000 posts. Methodological triangulation has occurred in two forms. The first is the ‘between methods’ approach and takes the form of participant observation, interview/survey, and discussion board posts. The second is reflected in the ‘multiple comparison groups’ and comes as a result of the longitudinal nature of the study. Responses in all surveys and interviews were analysed, with themes arising from the student responses. Emergent themes were categorised, sorted and meta-tagged, and then further streamlined and sorted due to thematic overlaps. The stages included data collection, note-taking, coding, sorting, meta-tagging, categorising, comparing/merging and then write-up. Finally, the themes, data summary and subsequent interpretations of the data were circulated back to students as a member check or respondent validation (Creswell, 1998), using the principle of face validity (Anastasi, 1954) to review and clarify the content if necessary, and to establish greater trustworthiness in the research process. This strategy was used to establish trustworthiness by giving authority to the participants' perspectives, thereby managing the threat of bias (Padgett, 1998).

Findings

Online is different
There is an unfortunate and over-simplistic self-evidence to the statement “online is different”. The importance of the statement, however, is really to be found in the underlying resonance and meaning that students were reflecting on when making this claim. For many students it was a realisation that online personas are not only different, but that a depth of understanding of their peers was made possible in ways not otherwise achievable. Therefore, while the concept may seem self-evident, this in no way detracts from the value or indeed outcomes that students attributed to the concept of “online is different”. This finding contradicts research commonly depicting online social networks and knowledge communities as simple extensions of student lives offline (O’Reilly & Newton, 2002; Picciano, 2006; Driscoll, 2003; Allen et al., 2007). Contrary to previous research, however, this study: occurred at a program rather than course level; invited a more extensive membership base than usual; and remained active for a considerably longer period of time than the typical course-based boundaries allow. In music technology, students across year levels and across time consistently reflected on their new-found awareness of how peers displayed different and sometimes quite contrary personas between the online and face-to-face, with this growing awareness contributing to a deeper understanding of their peers. Students reported that they got to know their peers more “fully” or more “completely” via the online space, some even going so far as to say that they “only really got to know some of the students because of the discussion board”. This resulted in the development of relationships and networks that may otherwise not have occurred, certainly contributing to a feeling of greater depth and diversity, and therefore quality, amongst the cohort. There is also a new-found general recognition amongst students that diversity is as pertinent to their future directions as it is to their backgrounds. The recognition of this brought with it a perception of value in community involvement. The importance of the online space was therefore in affording a deeper understanding of the entire cohort beyond the quickly formed, small and resilient niche groups that were the typical formations found within the program. This is not to suggest that the discussion board was a stand-alone solution to creating learning networks; however, it was certainly the online exposure of cohort depth that provoked face-to-face interactions. Apparently, therefore, the whole-of-program discussion board provides dimensions of depth and breadth for students that are not being achieved through the face-to-face community. Students not only perceive and use the online space differently to their offline lives, but establish deeper and more extensive learning and social networks via the online interactions.

Parity of esteem

The second emergent theme is that of parity of esteem. In this paper, ‘parity of esteem’ is used in two ways.
The first is in relation to a sense of equality between members, where the concept of ‘parity of esteem’ presumes that there should exist an awareness of others on which to establish the foundations of esteem (Richardson & CCAE, 1979). Secondly, the need for a parity of esteem can exist where there are perceived inequalities around benchmarks or standards between groups (Banks, 1998). This inequality is apparent where assumptions are made about benchmarks that trivialise or depreciate a particular group while favouring another, for example where students are separated by year level. One example of this can be seen in blanket statements such as “second year students are more advanced than first year students”. Equally misguided is the ensuing assumption that first year students therefore have little to offer to later year students. Within many of the online interactions we see these assumptions being overturned, with later year students becoming increasingly comfortable with the tenets of parity and first year students feeling empowered by later year students' willingness to interact as equals rather than as superiors. Through the normal everyday online interactions, then, students come to “understand the MT community's hunger for knowledge and generosity in spreading their wisdoms”. This outcome is a vital contribution to their developing understanding of networks with respect to both value and formation. Students are often able to make more deeply informed decisions in relation to their networks, reporting that the discussion board has broken many barriers for them. The second aspect of parity of esteem pertains to benchmarking. Many students brought this forward as a realisation with reference to how they saw themselves within the community. In this light, parity of esteem can be understood as arising from the capacity to benchmark themselves using standards from across the program rather than limiting themselves to the best in their particular year level. This is not to suggest that the act of benchmarking created parity of esteem, more that positive outcomes from the process of benchmarking led to greater opportunities for parity of esteem to develop than would otherwise have happened. Certainly, this was seen as the case for most students across time and across year levels. This supports literature that suggests parity can significantly enhance the intellectual quality of the learning environment (Swan & Shea, 2005; Picciano, 2006; Garrison et al., 2001).

Social ownership and critical mass
 One important aspect in the pedagogical use of the discussion board was to not limit students’ social online engagement in any way. The underlying intent was that students should feel ownership and responsibility for the space, and as such would have the freedoms to use the discussion board as they saw fit. Throughout the interviews and surveys, students consistently commented on the personal freedoms associated with the discussion board, stating that the social design and use of the discussion board held a strongly positive significance for

them. What can also be seen in the student responses is a sense of connectedness between students and program, nourished through a sense of personalisation and ownership of the space and their contributions. This connectedness was subsequently perceived by students as arising from the blending of the personal and the academic, where the two came to be recognised as important in combination rather than separation, or where the “inside-of-class and outside-of-class” were blended. It can also be said that the members were taking responsibility for these interactions because of the social aspect, rather than leaving it up to their teachers. Consequently, the volume of activity generated by the students resulted in a critical mass which produced a rich diversity of consistently forthcoming new content. Again, according to the students, this came as a result of the discussion board not being academically driven to the exclusion of the social:

The social freedoms you allowed us on the discussion board were so important. It meant we got to relate to each other on real terms as complete human beings.

Additionally, many attributed the achievement of a critical mass to the learning and discovery focus underpinning the social interactions. Across the interviews and the surveys it was apparent that students were becoming increasingly comfortable with the idea that learning was an integral part of who they were becoming, not just something they ‘did’ when they were on campus. This underlines the importance of aspects such as shared passion and interest outside of the core mission; transmission of personal views about these passions; and social commentary around the areas of interest. This also supports literature that suggests communities must be allowed some latitude to shake themselves free of received wisdom (Brown & Duguid, 1991) to foster a community where students are encouraged to view the social as inseparable from the intellectual development of the self and identity (Erikson, 1959; Sheehy, 1976; Chickering & Reisser, 1993).

Enculturation

The theme of enculturation has two aspects. The first aspect of enculturation is the most tangible and is seen in the relationship between new community members and the pre-existing community. This is where new students learn the accepted norms, vocabulary and value emphases of the established community culture. Students commonly stated that the discussion board “massively” helped the speed and depth to which they were not only able to see that a community existed, but also to further understand the ways of the community toward assimilating its practices and values. The second and subtler aspect is one of subconscious socialisation or appropriation, where appropriation is understood to mean the interpretative process of constructing knowledge from social practice (Rogoff, 1994). This is a slower and more amorphous process whereby cultural shifts within the community over time,
either via injection of new people or changes in thinking, filter throughout the community. One example can be seen in the shift in students' thinking about the realities of the commercial post-graduation world, and how their media-driven beliefs[2] could be rationalised by more experienced students and graduates. In this aspect of enculturation, students are developing a shared collection of experiences, best practices and ways of solving problems in such a way as to create a common knowledge base on which they can draw. Participant roles and relationships are refashioned over time, so this more subtle aspect of enculturation refers to the ease with which members can view, and therefore better understand, the changing relationship of the community and their place within it. This suggests that engagement with the ingrained cultural, social, linguistic and contextual nature of thought and action (Lave & Wenger, 1991) frames social learning through observing and modelling the behaviours, attitudes and emotional reactions of others (Bandura, 1977).

[2] For example, some students mistakenly believed that major record labels hire recording engineers and recording producers as ‘staff’ members and that the recording industry will supply jobs at the end of their degree.

Incongruity and multiple perspectives
Real-world problems are often complex and ‘messy’ (Ackoff, 1974), with multiple sub-problems that demand a process of rationalising a series of smaller decisions leading toward a final solution for an overall ‘situation’. One of the inherent difficulties for educators is simply in describing or depicting aspects of a situation or ‘problem’ that are realistically beyond a student's current comprehension, often contain many ambiguities, but are undoubtedly significant aspects to be critically considered toward a cogent solution. There are certainly associations between a student's tolerance for ambiguity and critical thinking (Furnham, 1995; DeRoma et al., 2003; Johnson et al., 1995), with the need therefore to embrace the teaching of ambiguity and incongruity toward the development of integrative, independent thinking (Johnson et al., 1995) and of professional attributes. It was common on the discussion board to see a single idea evoking a number of simultaneous but different ideas in others. These interactions were then characterised by three features: groups of students across years and across time were involved; multiple ideas branched out from single points of view; and earlier views were re-visited and reshaped as discussions unfolded. Additionally, the 24/7 nature of the discussion board allowed time to think and respond, which in turn enabled incongruities to be played with in a manner that led to more sophisticated solutions. Importantly, this involved students across time and across years, with some conversations being continued for 12 to 18 months after they had been initiated. Further, throughout this process, decisions that were generally agreed upon by the group came under further
scrutiny by others who were enquiring more deeply in an effort to understand intricacies. This process caused temporary disagreement, again leading to the generation of new explanations and a richer understanding of ambiguities within the problem and sub-problems. For many students, this new awareness and subsequent acceptance of incongruity led to a deeper investigation of how and why these differences existed and how they could be reconciled. Furthermore, these discursive processes elicited problem-solving skills that were transferrable to multiple contexts – fundamentally a sophisticated process of synthesising in a world of complexity. Not only can this process provide richness and depth not normally attributed to sole effort, but it can also be seen as a challenge to students' individual assumptions and beliefs as they work to create new consensual ‘truths’ (Berger & Luckmann, 1966), thereby learning new frameworks from which to view their world.

Conclusions

Through the display of unfolding online learning journeys, students can provide insights into their ways and means of learning, forming and presenting questions, investigating and researching. The discussion board is seen as a unique intellectual space within which to ‘work’ and ‘play’ at furthering transferrable skills in questioning and reasoning. These become patterns that all members can reflect on, follow and adapt to their individual learning styles (Smith et al., 2004). Additionally, small steps taken by many individuals coalesce into learning of consequence in the entire community (Vaughan & Garrison, 2005). These models of peer-to-peer, academic-to-academic and peer-to-academic interactions are underpinned by parity of esteem. This creates a shared ability to shape a new social and community structure that more closely resembles the sorts of passion-based and intrinsically motivated interactions found in professional communities (Brown, 2006). Students have a growing sense of relevance and the ability to relate to others from across the program. Many students are observed to gain confidence in making networks outside their comfort zone as the program-wide discussion board exposes the fact that diversity is welcomed and valued across the program. Unlike course-based discussion forums, engaging in a program-wide community makes it easier for newcomers and the more experienced alike to blend into and comprehend a broad and varied community and to participate in multi-various and complex practices, where the complexity more closely represents professional creative workplaces. This living historical record of program activity further blends fluidly with the face-to-face community activities, contributing to enhanced active participation in knowledge sharing, networking, movement between expert groups and professional socialisation. This may then afford students the ability to acquire both deep knowledge about a subject (“learning about”) as well as the ability to participate in the practices of a profession
through productive enquiry and peer-based learning (“learning to be”) (Brown & Adler, 2008).

Developing professional competencies

Vertical integration via the discussion board has assisted in developing professional competencies in several ways. These include: concepts, theory, interpersonal relationships, social agility, active participation in knowledge sharing and mentoring, identity formation, managing incongruity and the development of professionally aware networks. There now exists in the program an extensive recorded history of community interactions provided by the discussion board, including interactions between current students, alumni, industry professionals and academics, past and present. The discussion board provides an entrée into their cultural norms, vocabulary, and form and function as a “community of learners” (Short & Burke, 1991) where learning is an integral part of practice (Wenger et al., 2002). Through the use of a whole-of-program discussion board, students are exposed to an online environment where socio-cultural interactions between them influence, and are influenced by, the environment with which they are interacting. Students are intellectually engaged through cognitive apprenticeship, developing cognitive presence, deeper understanding, and engagement with meaningful learning. Even before they have met their peers face-to-face, participants have the opportunity to decipher existing community patterns and to experience a form of situated cognition (Lave & Wenger, 1991) where the focus of literacy is shifted from one of individual expression to community involvement (Jenkins, 2007).

Limitations of the study
It is recognised and acknowledged that there are several limitations to this study and the underlying research methodologies. One limitation is that the conclusions drawn are limited to the data examined and, although the emergent data has potential relevance to a wider field, the context for the research lies within a single program, in a single discipline, in one university. This has resulted not only in a small sample size, but also in a lack of the generalisability assessments (Hammersley, 1992) that could have been constructed if the research project had been run across a number of organisations or contexts. This may mean that findings can only be applied to the situation in which this research was undertaken. It should be noted, however, that this paper makes no effort to argue for the generalisability of the findings to other contexts. Rather, the results and conclusions are presented as the basis for further study that extends the methodology used here.

Reference List

Ackoff, R. (1974). Redesigning the Future: A Systems Approach to Societal Problems. New York: John Wiley & Sons.
Anastasi, A. (1954). Psychological testing. New York: Macmillan.
Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.
Banks, O. (1998). Parity and Prestige in English Secondary Education. London: Routledge & Kegan Paul.
Berg, B. L. (2004). Qualitative Research Methods for the Social Sciences. Boston: Pearson.
Berger, P. L. & Luckmann, T. (1966). The Social Construction of Reality. New York: Doubleday.
Bersin, J. (2004). The blended learning handbook: Best practices, proven methodologies, and lessons learned. San Francisco, CA: Pfeiffer Wiley.
Bouchard, T. J. (1976). Unobtrusive measures: An inventory of uses. Sociological Methods and Research, 4, 267–300.
Brown, J. S., Collins, A. & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
Brown, J. S. & Duguid, P. (1991). Organizational learning and communities of practice: Toward a unifying view of working, learning, and innovation. In M. D. Cohen & L. S. Sproull (Eds.), Organizational Learning (pp. 59–82). London: SAGE Publications.
Brown, J. S. & Duguid, P. (2000). The social life of information. Cambridge, MA: Harvard Business School.
Brown, J. S. (2006). New learning environments for the 21st century: Exploring the edge. Change: The Magazine of Higher Learning, 38(5), 18–24.
Brown, J. S. & Adler, R. P. (2008). Minds on fire: Open education, the long tail, and learning 2.0. Educause Review, 43(1).
Burt, R., Lancaster, H., Lebler, D., Carey, G. & Hitchcock, M. (2007). Shaping the tertiary music curriculum: What can we learn from different contexts? NACTMUS National Conference, Music in Australian Tertiary Institutions – Issues for the 21st Century, Brisbane, 29 June – 1 July 2007.
Chickering, A. W. & Reisser, L. (1993). Education and Identity. San Francisco: Jossey-Bass.
Creswell, J. W. (1998). Qualitative inquiry and research design: choosing among five traditions. Thousand Oaks, CA: Sage.
DeRoma, V. M., Martin, K. M. & Kessler, M. L. (2003). The relationship between tolerance for ambiguity and need for course structure. Journal of Instructional Psychology, 30(2), 104–109.
Dick, B. (2005). Grounded theory: a thumbnail sketch. Retrieved March 14, 2006, from http://www.scu.edu.au/schools/gcm/ar/arp/grounded.html
Draper, P. & Hitchcock, M. (2006). Work-integrated learning in music technology: Lessons learned in the creative industries. Asia-Pacific Journal of Cooperative Education, 7(2), 24–31.
Erikson, E. H. (1959). Identity and the Life Cycle. New York: International Universities Press.
Furnham, A. (1995). The relationship of personality and intelligence to cognitive learning style and achievement. In D. H. Saklofske & M. Zeidner (Eds.), International Handbook of Personality and Intelligence. New York: Plenum Press.
Garrison, D. R., Anderson, T. & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. Retrieved August 20, 2006, from http://communitiesofinquiry.com/files/CogPres_Final.pdf
Hammersley, M. (1992). What's Wrong With Ethnography? London: Routledge.
Hanson, N. (1958). Patterns of Discovery. Cambridge: Cambridge University Press.
Hitchcock, M. & Draper, P. (2008). The hidden music curriculum: Utilising blended learning to enable a participatory culture. In proceedings of the 28th International Society for Music Education (ISME) World Conference, Bologna, Italy.
Hitchcock, M. (2008). Making music together: The blending of an on-line learning environment for music artistic practice. In proceedings of the Creative Industries and Innovation conference, Creating Value: Between Commerce and Commons, 25–27 June 2008, Brisbane, Australia.
Hmelo-Silver, C. E., Duncan, R. G. & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99.
Jenkins, H. (2007). Confronting the challenges of participatory culture: Media education for the 21st century. An occasional paper on digital media and learning. Chicago, IL: The MacArthur Foundation.
Johnson, H. L., Court, K. L., Roersma, M. H. & Kinnaman, D. T. (1995). Integration as integration: Tolerance of ambiguity and the integrative process at the undergraduate level. Journal of Psychology and Theology, 23(4), 271–276.
Lave, J. & Wenger, E. (1991). Situated learning: legitimate peripheral participation. Cambridge; New York: Cambridge University Press.
Myers, M. (2000). Qualitative research and the generalizability question: Standing firm with Proteus. The Qualitative Report, 4(3/4).
Oliver, M. & Trigwell, K. (2005). Can 'Blended Learning' Be Redeemed? E-Learning, 2(1), 17–26.
Padgett, D. (1998). Qualitative methods in social work research: challenges and rewards. Thousand Oaks, CA: Sage.
Picciano, A. G. (2006). Blended learning: Implications for growth and access. Journal of Asynchronous Learning Networks, 10(3), 95–102.
Richardson, S. S. & Canberra College of Advanced Education (1979). Parity of esteem. Belconnen, ACT: Canberra College of Advanced Education.
Rogoff, B. (1994). Developing understanding of the idea of communities of learners. Mind, Culture, and Activity, 4, 209–229.
Rogoff, B., Matusov, E. & White, C. (1996). Models of teaching and learning: Participation in a community of learners. In D. R. Olson & N. Torrance (Eds.), The Handbook of Education and Human Development. Oxford: Blackwell.
Sheehy, G. (1976). Passages: Predictable Crises of Adult Life. New York: E. P. Dutton.
Short, K. G. & Burke, C. L. (1991). Creating Curriculum: Teachers and Students as a Community of Learners. Portsmouth, NH: Heinemann.
Smith, B. L., MacGregor, J., Matthews, R. S. & Gabelnick, F. (2004). Learning communities: reforming undergraduate education (1st ed.). San Francisco: Jossey-Bass.
Strauss, A. & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: SAGE.
Swan, K. & Shea, P. (2005). The development of virtual learning communities. In S. R. Hiltz & R. Goldman (Eds.), Asynchronous Learning Networks: The Research Frontier (pp. 239–260). New York: Hampton Press.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
Valiathan, P. (2002). Blended learning models. Retrieved August 20, 2006, from http://www.learningcircuits.com/2002/aug2002/valiathan.html
Vaughan, N. & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. Internet and Higher Education, 8, 1–12.
Wenger, E., McDermott, R. A. & Snyder, W. (2002). Cultivating communities of practice: a guide to managing knowledge. Boston, MA: Harvard Business School Press.


The Vanishing Bass - Possible implications of Internet centric listening on bass perception

Cat Hope, Dr Malcolm Riddoch
Western Australian Academy of Performing Arts, Edith Cowan University
2 Bradford St, Mt Lawley, WA, 6050, Australia
[email protected], [email protected]

Abstract

Internet streaming and downloads, combined with the mass consumption of music, have brought with them new compression formats and budget consumer playback hardware. A victim of this trend has been the lower end of the sound spectrum, as listeners become less accustomed to bass response through lossy compression formats played back via portable devices such as the ubiquitous iPod. Are listeners losing their ability to distinguish bass quality and the physicality of sound? This paper summarises some of the effects of digital compression for streaming and playback formats on bass frequencies, and the impact these have on a new generation of music listeners.

Introduction
Whilst important technological developments are being made in sound reproduction and compression, a large segment of the music-consuming market downloads music in compressed formats to play on mp3 players and/or listens to streaming content online. As a consequence of this online delivery path, music is being listened to through the pod-style headphones and computer headsets that come with this hardware, creating a situation where listeners may well lose their ability to perceive the full spectrum of bass frequencies through ongoing aural training on devices that cannot possibly provide the full range of the bass listening experience. The mainstream adoption and development of Internet-based technologies since the mid-1990s has seen a revolution in the way music is delivered to its listeners. Access to mp3 encoding software, a modem and a social network led to the rapid expansion of copyright infringement as legions of music enthusiasts made their ripped audio CDs available online. From the beginning it has been the consumers of music who have boldly led the way where traditional music companies have feared to tread, following and pushing to the limits the technological potential of a new digital musical world where almost any style of music is immediately accessible from the comfort of one's own home. This online marketplace was at first constrained, by limited modem bandwidth, to the then ubiquitous 128kbps mp3 format and its relatively immature codecs, not to mention limited hard drive space, the expense of CD burning, and the skills threshold needed to cover the rip > encode > connect > share > download > listen delivery path. The 'quality' of the listening experience was obviously good enough that 'accessibility' overcame all obstacles and drove the expansion of the sharing networks (Blesser and Pilkington, 2000). This listening pathway has developed over the last decade from poorly encoded music, replete with audio artefacts, shared over fragile networks vulnerable to legal action and played through computer-based multimedia speakers by hobbyist enthusiasts, into a streamlined and mainstream cultural activity belatedly supported by a still largely fearful corporate music community. Not only has codec development kept pace with the maturation of our online social networks and ever-increasing bandwidth, but we now have an entire generation of Internet-skilled, music-listening youth emerging into an online music experience exemplified by Apple's automated commercial delivery system: from an iTunes MPEG 4 AAC rip to a mature commercial or sharing network and back to iTunes, iPod and earphones. Portability, accessibility, ease of use, mass consumption and storage – any and all musics, 24/7, played from any device anywhere, is apparently what the marketplace demands. The 'quality' of the listening experience is here a captive of its 'accessibility', and it is our contention that the musical experience of especially bass frequencies is being lost in the mix.
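As a rough, illustrative check of that bandwidth constraint (the figures are indicative only and not taken from the paper), a four-minute track encoded at the then-common 128kbps amounts to roughly

\[
128~\text{kbit/s} \times 240~\text{s} = 30\,720~\text{kbit} \approx 3.8~\text{MB},
\qquad
\frac{30\,720~\text{kbit}}{56~\text{kbit/s}} \approx 550~\text{s} \approx 9~\text{minutes over a dial-up modem},
\]

which helps explain why 128kbps became the de facto ceiling of the modem-era sharing networks, and why 'accessibility' so easily trumped 'quality'.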

Full frequency range in music An important part of the music listening experience is held in mastering; the final stage in the creation of a music recording, which involves crafting a full frequency range to shape

a warm sound spectrum enabling important elements of the music to come to the fore, and others to lie in the background. Different instrumental timbres are finely tuned and balanced. The role of bass ranges is important in these balances in a variety of music styles – from defined beats and drones, to the very quality of a singer’s voice. To date, releases are not specifically mastered for online distribution, rather, music is compressed after having been mastered for CD or DVD releases (Eide, 2001). Mastering provides subtle compression, equalisation and gain adjustment to make the recording suitable for commercial release, and many of these subtleties are lost once the final ‘master’ is both subject to compression and then played through consumer headphones.

Lossy compression formats MPEG compression as an industry standardized delivery mechanism is one of the fundamental technologies driving the new Internet based way of listening to music. As a lossy compression format MPEG 1 Layer 3 (mp3) and the newer MPEG 4 codecs allow for a balance between ‘musical quality’ and ease of delivery over bandwidth constrained networks to the music consumer. This ‘accessibility’ does come at the price of a degree of infidelity to the original audio waveform with noise artefacts introduced even at the highest bit rates and audible to those with an audiophile ear trained on high end audio equipment (Atkinson, 2008). This was especially the case early on with immature codecs and the common 128kbps mp3 bitrate. The development of the mp3 format outside of the Fraunhofer framework in projects such as the ongoing open source LAME encoder, along with the widespread adoption of 192kbps bit rates and higher for downloads, continue to improve the audio quality towards that of the uncompressed master. In fact in most cases bit rates above 200kbps are generally indistinguishable from the CD original, at least for the ‘average’ non-audiophile listener on consumer level receivers and speakers (Hydrogenaudio Listening Tests, n.d.). It remains however that encoding practices and skill sets are varied, many users may not even understand what “bit rate” means and many more may be somewhat reticent installing LAME 3.96 and invoking -V2 --vbr-new let alone refining their MPEG compression to suit the music. Apart from some codecs using a high pass filter at lower bit rates the bass frequencies generally fare no worse than mids and highs. Artefacts due to noise and distortion from the encoding process are introduced and resolution and dynamic range will be lost regardless, with some musics suffering more

than others. This problem is only partly addressed in the MPEG 4 AAC standard although it is generally a better compromise than the still dominant mp3. The audiophile debate over quality does seem to miss the point however (Atkinson, 1999) as mp3 and AAC are much more than just lossy digital audio compression formats, they are also a technological medium for a cultural practice (Sterne, 2006) and a way of listening far removed from either the rigours of the mixing desk or high end listening room. Portability and accessibility are the attractions and the destination is budget consumer playback via computer multimedia speakers, home stereo systems, headphones and earphones and it is here that the lack of decent bass resolution is especially apparent.
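To make the encoding-practice point concrete, the following is a minimal sketch (ours, not the authors') of driving the LAME command-line encoder from Python, contrasting a fixed 128kbps encode with the -V2 --vbr-new setting mentioned above. It assumes LAME is installed and on the system path; the filenames are hypothetical.

import subprocess

def encode_cbr_128(wav_path: str, mp3_path: str) -> None:
    # Constant 128kbps encode, typical of early file-sharing rips.
    subprocess.run(["lame", "-b", "128", wav_path, mp3_path], check=True)

def encode_vbr_v2(wav_path: str, mp3_path: str) -> None:
    # Variable-bit-rate encode using -V2 --vbr-new, which averages roughly
    # 190kbps and is generally much closer to the uncompressed master.
    subprocess.run(["lame", "-V2", "--vbr-new", wav_path, mp3_path], check=True)

if __name__ == "__main__":
    encode_cbr_128("input.wav", "track_128.mp3")  # hypothetical source and output files
    encode_vbr_v2("input.wav", "track_v2.mp3")

The point is not the particular flags but the skills threshold: each step presumes knowledge that many casual users encoding their collections simply do not have.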

Internet streaming While one possible future of online music distribution, at least for some commercial music models (Last.FM, 2008), lies in the subscription to streaming delivery of content direct to the music consumer’s computer and multimedia playback system, this online radio/TV format is even more constrained by bandwidth problems than mp3 downloads. Earlier modem delivery of streams via streaming mp3 or proprietary formats such as RealAudio required rather severe compression down to around the 56kbps modem limit often involving high pass filters to cut off the lower bass frequencies (the cut off frequency was set at 300Hz on early LAME formats). The resulting often radical audio degradation was clearly audible even on the budget PC based playback systems of the modem era. At today’s broadband bandwidths however, along with the maturation of today’s computer based audio playback, this compression problem is being overcome and the listening experience provided by streaming sites such as Last.FM is nearing the ‘close enough is good enough’ threshold. Shockwave Flash players along with 128kbps and higher mp3 streams have become an Internet standard opening a new publishing avenue for both the larger music labels and independent musicians. It seems unlikely however that this way of organizing a personal music collection will ever displace actually downloading and ‘owning’ a music recording irrespective of whether the radio subscription and its ‘pay to listen’ business model is targeted at today’s PC or tomorrow’s increasingly sophisticated mobile telephony/Internet device. Internet streaming will most probably remain a form of radio playback like AM/FM broadcasting rather than replace the personal music collection once physically embodied in the vinyl/tape/CD and now virtually in gigabytes of MPEG audio

files that can be endlessly copied and transported beyond the online graphic user interface. Either way, whether downloading files to an iPod or subscribing to a personalised streaming music collection on tomorrow’s iPhone, the problem of the vanishing bass in our portable earphone centric music world remains the same.

The hardware problem
While mid-to-late 1990s Internet-based audio playback was constrained by the hardware specifications of the average PC sound card's amplification stages, with poor dynamic range and even worse bass response, the industry has responded to the market's requirements for better sound. The post-2000 introduction of Class D true digital audio amplification helped pave the way for today's plethora of Internet audio consumer devices. Low-voltage, low-current digital amplification on a budget is no longer a serious impediment to the mass consumer online listening experience. The software and hardware technologies at the heart of the new online music listening experience now offer a degree of music fidelity that is 'close enough' for the mass of music consumers, if not for the audiophile. It is this maturing technology, along with the increasing accessibility and ease of use of the online social networks, that is driving both the popular demand for online music and the corporate scramble to formulate a business plan to commercialize that demand. The problem here is not the technology itself but rather the music experience it enables. The problem concerning the vanishing bass revolves around the shift in listening practices from a music collection based on vinyl, tape or CD – played back on a variety of home stereo Hi Fi systems or portable 'boom boxes' – towards massive collections organized on the PC and played back on iPod-style mp3 players and the ubiquitous earphone. While earlier portable devices such as the Sony Walkman allowed for cassette tape, CD and MiniDisc playback, these listening devices were peripheral add-ons to the mainstream music experience. It is only now, with the emergence of the so-called iPod generation, that headphones and earphones have become central to our digitally enabled ears. The main problem therefore lies in the difference between a full-spectrum, embodied bass experience in an acoustic space, and the bass-impoverished listening experience of MPEG compressed headphone/earphone audio on today's Internet-centric audio devices. Bass is perhaps the most difficult part of the frequency spectrum to faithfully reproduce in consumer playback systems simply due to
the physical limitations of small speaker cones and especially where MPEG compression has already limited the dynamic range and detail of the bass response. In the case of consumer headphones and earphones the listener is even further removed from an optimal bass listening experience. While today’s headphones are manufactured with a similar bass response range to consumer Hi Fi speakers, down to as low as 18Hz in many consumer models with various sophisticated techniques to compensate for roll off below 30Hz, it remains that headphones by their very nature fail to replicate one important feature of our perception of low frequency sound waves. Even if you can ‘hear’ as much bass, you cannot feel it in or on your body with headphones the way you can with external speakers resonating in an acoustic space. This physical ‘feeling’ of bass is an essential part of music listening.

Low frequency and the body
What Philip Sherburne calls the 'micromaterialization of music' results in loss of both high and low frequency extremes (Sherburne, 2003). Compression algorithms generally remove frequencies considered inaudible to the listener in an attempt to mimic the psychoacoustic masking phenomena apparent in human auditory perception. These psychoacoustic models, however, do not consider the way the body absorbs sound. Low frequency sound is audibly perceptible to the ear down to around 16Hz, below which tonal recognition quickly degrades. Lower frequency bass 'noise' is perceptible down to 1Hz or lower, although the sensitivity thresholds are extremely high, typically climbing from around 46dB at 50Hz to 107dB at 4Hz. Interestingly, older listeners are generally found to be 6-7dB less sensitive to low frequency sound than their younger counterparts (Watanabe and Møller, 1990). Now while consumer playback hardware is often rated to 20Hz ± 3-6dB, with bass reflex porting used on most speakers to extend the bass response, the acoustic listening space itself can have a bass resonance from around 35Hz for dimensions around 5 metres, and as low as 5-10Hz if acting as a Helmholtz resonator (Leventhall, 2003). Although these acoustic bass resonances are a problem for the acoustic engineer designing a sound studio, they are a natural part of any musical listening experience sans headphones (Platz and Wharton, 1995). Skilled conductors, for example, will adjust the performance of an orchestral ensemble to the acoustic properties of the concert space. Furthermore, we do not perceive all bass frequencies merely with our ears. Low frequency perception is a whole-body phenomenon. Research into infrasonic and low frequency noise has discovered that a certain amount of bass is absorbed through the body, and at a certain volume the body cannot differentiate between hearing through the ears and through other organs. The largest organ of our body, our skin, has a number of receptors, such as the Merkel cells, Meissner's corpuscles and Pacinian corpuscles, that respond to sonic vibrations. This sonic sensitivity may be particularly important at very low frequencies, where the brain cannot discriminate the tonality of the auditory sensation (generally below 16-18Hz as noted above), with sensitivity decreasing as frequency increases and auditory perception via the ear takes over (Leventhall, 2003). It should be noted that this whole-body low frequency skin reception is not the same as that provided by vibration devices such as 'bass shakers' that can be added to a listening or gaming chair to simulate the 'bass kick' of a subwoofer using infrasonic signals. Lastly, internal body resonance, probably due to the structure of the rib cage, also plays an important part in bass perception. Depending on body build, these resonances peak at around 30-80Hz and are perceptible at high amplitudes, typically 100+ dB (Takahashi and Maeda, 2002), but may also play a part in lower amplitude bass perception. Whilst the low frequency effects of 5.1 surround sound systems often employed in home theatres, as well as the addition of subwoofers to home stereo systems, will allow for body resonance and skin reception, they use a technique that separates the bass from a full, 'heard', audio mix. Although bass frequencies are generally omnidirectional, their source in this case can often be felt as separate from other layers of the music. Bass shakers used in modified car boots are another example of this 'separated' listening and, whilst catering to the rhythmic and structural role of bass in music, do not necessarily enhance the timbre of the bass components within instruments. Given this spatialized, embodied nature of our bass perception, the bass listening experience is severely limited by the physical limitations of headphone and small speaker design. Effective bass response requires a resonating cabinet and a live acoustic space that headphones – whether closed, noise cancelling, canalphone or ported – simply cannot provide.
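As a rough check of the room-resonance figure quoted above (our calculation, not the authors'), the lowest axial room mode of a space with a 5 metre dimension is approximately

\[
f_1 = \frac{c}{2L} \approx \frac{343~\text{m/s}}{2 \times 5~\text{m}} \approx 34~\text{Hz},
\]

which is consistent with the 'around 35Hz' cited; the lower 5-10Hz Helmholtz behaviour depends additionally on the room's openings and enclosed volume, so no simple single-formula check is offered for it here.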

Aural training and music perception The psycho-physiological effects of repetitive listening training are well documented. Aural training leads to frequency and pattern discrimination, due to brain plasticity, with improved sensitivity to the trained frequencies and measurable cortical changes in the brain, an effect often used in the treatment of speech

disorders and noise sensitivity (Pantev et al, 1998). The human brain is especially responsive to musical tones and this sensitivity is already apparent in new born infants but is also a dynamic feature of our fully matured brains which exhibit both “sensitive tuning” in response to different musical tones as well as “contour or pattern sensitivity” to different musical sequences (Weinberger, 2004). Neural retuning of these musical pathways is ongoing throughout our young and adult lives where axonal growth is stimulated by the strong, repeated signals generated by attentively listening to certain forms of music over others. This ‘cellular learning’ or retuning is persistent as loss of axonal connections is much slower than growing them and prolonged retuning caused by the prolonged stimulation of a neuronal pathway, such as occurs with musicians constantly returning their attention to their music, results in measurable physical changes to the brain as axonal branching increases. These changes can last a lifetime and whilst particularly pronounced in musicians are also a natural effect of the aural training the music consumer undergoes when listening to their favourite tunes. It is this psychoacoustic retuning effect that is of concern in regards to the technologically driven shift in music listening towards an iPod/earphone centric world of musical experience. Does regular listening to a particular limited bandwidth on earphone playback significantly retune the listener’s auditory perception? Just as musicians are trained to recognise pitch relationships through solfege and other methods, does the shift to a particular mode of bass impoverished listening affect the frequency discrimination and bass perception of the so-called iPod generation via the psychoacoustics of aural training? If so, this aural training could result in a decreased sensitivity to low frequencies and thus an inability to fully resolve the rich detail of bass in the musical listening experience. Important music from the past, where a great deal of attention has been given to shaping and mixing the bass sound, may no longer be fully appreciated as its composers intended as a new generation of listeners tunes out to the vanishing bass. Certain music genres and productions have a dedication to frequency range detail – the bass of Bootsy Collins on Motown recordings for example, or the famous Telarc Label recordings of classical masterworks. The care given to the mastering and indeed the craft involved in replicating the live instrument sound faithfully can be lost if it is not able to be studied or heard in mainstream music distribution. Once aurally trained to the vanishing bass will listeners be able to discern what makes a Bootsy Collins bass sound different?

The impact of reduced formats
Perhaps the music industry is moving away from the recording being the central focus for a musical work, and we are heading to a time where live performance, with suitable acoustics and sound reinforcement systems, will be the place to experience music fully. The omnipresence of file sharing supports this trend: in many countries the majority of music is downloaded using the aforementioned compression formats, and the full-quality listening experience is not an expectation or demand. However, data mining services allow music creators to track where and when their music is streamed on the Internet. This allows targeting of markets that could also promote music in other formats. But are young music consumers being trained to accept music without bass detail? Does this mean that the crafted sounds of the past will be lost, leaving fundamental musical concepts such as harmony and rhythm to function without a full range of timbre, or without the careful craft of mastering that has become a very important element of the audio engineering craft? What are the implications for music students, for example? Will their training be affected when they spend up to three times longer listening to MPEG compressed recordings in earphones to and from study or work, compared to the amount of time they spend listening to true, acoustic sounds during their practice or rehearsal? Of more concern, perhaps, is the trend for archival activity online to use MPEG compression. Sites such as archive.org allow contributors to decide on the format in which they contribute, whilst claiming that "Libraries exist to preserve society's cultural artefacts" (Internet Archive: About IA, n.d.). The audio playback of materials on these public archives will almost always be on portable devices or computers themselves. If not enough care is taken with the aforementioned details, only certain elements of these audio artefacts will be retained, and our perception will have shifted so as to render us unable to know what to look for. Without the mechanism to accurately memorise, store or listen to the variety and detail inherent in the many elements of music and its recording, our time may indeed be robbed of some of its most important cultural artefacts – like floppy discs without reading machines.

References
Atkinson, J. 1999. "Mp3 and the Marginalization of High End Audio", Stereophile, February: 22.
Atkinson, J. 2008. "MP3 vs AAC vs FLAC vs CD", Stereophile, March: 31.
Blesser, B. and Pilkington, D. 2000. "Global Paradigm Shifts in the Audio Industry", Journal of the Audio Engineering Society, Vol. 48, Nos. 9 and 10.
Brand, S. 2003. "Escaping the Digital Dark Age", Library Journal, June, Vol. 124, Issue 2, pp. 46-49.
Eide, Ø. 2001. "Bob Ludwig", Mix, 1 December.
Goblin, P. 1999. "Sound Material: A New Reception", Leonardo, Vol. 32, No. 4, pp. 317-323.
Hydrogenaudio Listening Tests. n.d. Retrieved on 23/4/08 from http://wiki.hydrogenaudio.org/index.php?title=Hydrogenaudio_Listening_Tests#MP3
Internet Archive: About IA. n.d. Retrieved on 22/4/08 from http://www.archive.org/about/about.php
Last.FM. 2008. "Free the Music", Last.FM – The Blog. Retrieved on 22/4/08 from http://blog.last.fm/2008/01/23/free-the-music
Leventhall, G. 2003. A Review of Published Research on Low Frequency Noise and its Effects. DEFRA, London.
Pantev, C., Oostenveld, R. A. E., Ross, B., Roberts, L. E. and Hoke, M. 1998. "Increased auditory cortical response in musicians", Nature, 23 April, 392 (6678), pp. 811-814.
Platz, R. and Wharton, F. 1995. "More Than Just Notes: Psychoacoustics and Composition", Leonardo Music Journal, Vol. 5, pp. 23-28.
Rumsey, F. and McCormick, T. 2006. Sound and Recording: An Introduction. Focal Press, London.
Sherburne, P. 2003. "Digital DJing App that Pulls You In", Grooves, Vol. 10, pp. 46-47.
Sterne, J. 2006. "The mp3 as cultural artifact", New Media Society, Vol. 8, pp. 825-842.
Takahashi, Y. and Maeda, S. 2002. "Measurement of human body surface vibration induced by complex low frequency noise", 10th International Meeting on Low Frequency Noise and Vibration and its Control, pp. 135-142. York, UK.
Watanabe, T. and Møller, H. 1990. "Low frequency hearing thresholds in pressure field and free field", Journal of Low Frequency Noise and Vibration, 9, pp. 106-115.
Weinberger, N. M. 2004. "Music and the brain", Scientific American, 291(5), pp. 88-95.

AUC Createworld09 Conference

Inside-Out Flutes: The Emergence of the Transformative Meta-flautist
Jean Penny
Queensland Conservatorium Griffith University
Brisbane, Queensland, Australia
[email protected]

ABSTRACT As musical and performance practices have evolved over the last half-century, the realm of the solo flautist has expanded to encompass an extensive array of emerging techniques and technologies. This paper discusses the performance of music for flute and electronics from the perspective of the flautist, the impact of technological interventions and the new performative elements introduced by this genre. The focus is on elements of performance that generate change, such as transformations of sonority, re-activated spaces, altered physicality and perceptions of performer identity, that occur in a digitally interactive environment. The “meta-flautist” emerges through a series of encounters – of performer, instrument, equipment, computer, technologist, space, and composer - and the construction of the instrumentalist-computer relationship. The aim is to reveal elements of the inside story of performance, where practices are constructed, revealed and re-defined.

INTRODUCTION The interface of performance and electronics generates changes in perceptions, understandings, connections and musical interactivity. This paper draws together flutes, technology and performance in a discussion exploring the impact of digitally activated environments on instrumentalists, and performative responses to a variety of platforms. The impact of technological interventions on performance is primarily grounded in the techniques demanded of the genre, the sonic, physical and implied repositioning of the soloist and sound technician in the performance space, an expansion of musical sonority and newly defined relationships. Extended flute techniques, electronic techniques and expanded performance elements create a new sonic world, open doors to change, transform perceptions of oneself as an instrumentalist, and engender a reappraisal and new understanding of performance practice. The traditional idea of the flute has evolved to become a meta-instrument, an interface between human and computer technology where precisely specified virtual spaces can create new perspectives and shifts in emphasis, where new partnerships form and expanded performance practices are fostered. As well as the new equipment and new physical and mental demands, flute music can now include the smallest perceivable sounds, the interior of sounds, as well as huge expansions of sound and stunning diffusion. There is a radical shift in playing an instrument in a technologically expanded environment as the parameters of the instrument itself change, as the sounds move around in new spatial dynamics, and the power and positions of instrument and player change. The performance space becomes a place for interaction with digital technologies, and an integral element in interpretation – its edges, density, and activity change. Much is invisible and intangible;

at times the source of sounds is not apparent, or is generated from outside the body, altering the perception of body-instrument-sound expectations, or the intimacy of the musician-instrument can become revealed as an expanded projection of self. This transformed meta-flautist consists of an array of elements that come together in an integrated ensemble, where the computer is the cohort, and exchanges between body, instrument and electronics meet in a new entity. Early works for flute and electronics largely involved working with fixed media, such as tape or CD accompaniment. In works such as Richard Karpen’s Exchange for flute and CD (1987) the tempi of the computer generated material force the flautist along at break-neck speed, imbuing the performance with a feeling of captivity, of a wish to escape by getting to the end - and a longing for the accompanying euphoria that a successful arrival brings. Technologies began to move away from this format to some extent from 1987, when score following was developed and used by such composers as Phillipe Manoury 1 and others. Tracing the musician’s responses to these shifting emphases in real-time performance, and transforming them into articulated documentation was a goal of my recently completed doctoral research, The Extended Flautist: Techniques, technologies and performer perceptions in music for flute and electronics (Penny, 2009). In this practice-led project, I used the experience of performance to investigate performer responses, specifically flautist responses, to playing in an electroacoustic environment. Two recitals provided the material and real-time experience of performance involving spatialisation and interactive live electronics for this study. This project was impelled by an intrigue with the changes I found in my own performance practice, and a desire to explore the rapidly expanding flute and electronics genre from a personal perspective. Many elements emerged as highly important in this investigation, and here I will relate a selection of these to specific works and technological platforms. AMPLIFICATION: THE POWER OF PROJECTION The first, and most obvious electroacoustic effect is amplification. Amplification is positively ubiquitous in this era to such an extent that acoustic music is starting to hold a re-newed attraction in some post-electronic performance circles. Amplification in the classically based flute music of western art music, however, remains something of a novelty. Its effect in changing power balances, generating an ability to use sounds that would be otherwise inaudible, to manipulate the flute tone in minute or massive ways, to create a startling sense of intimacy, repositioning the soloist in the performance space and providing the base for digital transformations generates an exciting and immediate change of scene – an array of new expectations, concepts and performative responses. Vivid effects can be created through amplifying “impossible” sounds such as very soft intimate tongue or key clicks, breath tones or fragile multiphonics. These sounds are often unstable and swinging, evoking uncertainty in player and listener. Amplified whistle tones suggest distance and may be used to depict a distant character or thought. Changes in vibrato intensity and speed can give a shimmering colour variation with amplification, especially in combination with reverberation. 
Other techniques may include combined flute tone and voice or breath, which can introduce a grainy, indistinct tone that is quite malleable with amplification. New dramatic elements join the performance, with implied, veiled or commanding characters mixed with the flute. 1

Manoury’s Jupiter for flute and electronics was the first work to use score following technologies, developed by researchers including Barry Vercoe, Miller Puckette and Manoury.

Amplified microtones and overtones can be mixed together effectively to distort pitch; and magnified percussive sounds bring completely contrasting sonic worlds into play with, for example, sharp, metallic key slaps or muffled articulations. Breath tones are an important and surprisingly intense element in this soundscape. The transformation of the breath tone, from its lowly position of extraneous noise needing eradication (as expected in conventional flute music) to a highly suggestive expressive element is a short but momentous journey. A variety of techniques may meld together, such as embouchure based techniques, tubular breath tones, and various quantities of normal resonant tone, or voice, multiphonics, or articulations, creating a profusion of sound choices. The breath tone becomes a new expressive tool, creating a new layer of colour and meaning, an illusion of proximity, of extending the inner self into the music and out into the hall, and a connector of inner and outer identities. The amplification may not even be immensely apparent, as in Mario Lavista’s Canto del alba for amplified flute (1979/2003), where microsounds are gently reinforced. Multiphonics, microtones, breath tone, altered timbre fingerings, whistle tones, harmonics, glissandi, voice and flute tone, varied vibrato, and the occasional resonant passage all become balanced and effective in this work through light amplification. A startling sense of place, of sitting in a beautiful forest, is evoked in an atmosphere of mindful meditation. The feeling generated by engagement with the music and the impact of the various techniques has a transforming effect, here illustrated from the performer’s perspective: An astonishing sense of anticipation is felt, the imminent connection to notes, the opening multiphonic, the breath that imbues this work with meditative and seductive style; quarter tones, altered tone fingerings, strands of tones linking and separating, spaces in sounds, spaces of sounds. Here in performance, the spinning threads construct the evocative soundscape, transporting us into the music. The breathing reflects a stylized gesture, locating the performer at once in that sound, and coming from that sound. An immersion is created, we glide together: the audience’s attention is palpable. The flute becomes a body extension, part flesh, part conduit, simultaneously directing and following the performance. Dreams again, of perfection and ease. This performance is difficult, there is so much at stake, what if, what if… My thoughts become the notes, travelers across the page, the room…. What a sweet sound, what a beautiful song of yearning, of wafting and dreaming. The music appears to move, to build invisible textures and structures, to draw us in. The amplification is amazing, it projects strength and colours, it provokes confidence and definition, it is in love with whistle tones so hard to pitch, it throws a resonant phrase to the back wall, it creates amazing whirls of harmonics and glissandi. An unimagined delicacy of tone arrives, a shimmering vibrato slides through the room, a sharp accent penetrates the fragile timbres. We are in a forest? What a place this is! I will blow on my flute forever here. (Penny, Journal entry,

2007)

REVERBERATION: ILLUSIONS AND DELUSIONS Reverberation and delay used in flute compositions are primarily textural effects that expand the soundscape to create a sense of place, multiple voices and specific atmospheres. Polyphonic build up and enlarged tonal spaces create a sense of immersion for the player, giving an imagined sense of support, and dialogue with the musical material or implied hidden persona. Katharine Norman suggests that reverberation puts a sound in a place, the echo confirming its existence. (Norman,

2004) This effect is a feature in much of Kaija Saariaho’s flute music, where reverberation has a strong influence on perception of space and placement. In NoaNoa (1999), for example, reverberation is used throughout to create sustained tones, introduce multiple voices and to add structural richness. A sense of the music existing as your whole world occurs in the performance of this work, through an immersive sound environment and the fully entrained attitude demanded of the techniques and interpretation. One’s whole body is participating here: chest, abdomen, vocal cords, arms, hands, legs, feet, throat, mouth, ears, eyes and mind. The surrounding virtual spaces appear to confirm the sense of involvement. In Thea Musgrave’s Narcissus (1988), the delay effect is used to invoke the character of the reflection. The mental chaos of Narcissus is depicted with a build up of the echoes and resultant harmonisation effects. For the performer, this creates an engagement with self-immersion, spatialisation of self through the reflective layering, dialogue, distortion and harmonisation of the tone. The delay sits at times at an easy distance, at others extremely close, and at others in harmonization and altered pitch. In performance, the impression of one’s own sound coming straight back as a new voice can be astonishing at first, but here serves as a brilliant underpinning of the dramatic character of the piece. This revelation has an empowering impact on interpretative understanding, intensifying the depiction of deluded megalomania evoked by the personality of Narcissus. Reverberation in Georg Hajdu’s Sleeplessness (1987, revised 1997) creates an atmosphere of anxiety, of aloneness and of the dimensions of the rooms of a house. Written for 3 flutes and Max/ MSP, Sleeplessness is episodic in structure, moving through sections for low flute (bass or contrabass), high flute (piccolo or concert flute), and finally the middle range alto flute. The electronics add undercurrents, especially through reverberation, capturing feelings of shakiness and nervous twitches. Shadows and confusion are also conjured up, through the harmonization effects, introducing new characters and voices. In this work the listener becomes situated within the house, the drama unfolding first in the living room, then bedroom and bathroom. This spatial metaphor for the psyche of the self, moving agitatedly through states (or rooms) of unease, is created through a repertoire of sound effects melded with the flute line. The reverberated staccatos and the harmonizations imply spaces through echoes and shadowed musical lines. These echoes confirm and surround the sound in the space, indicating dimensions of the place as well as emotional responses. Low, indistinct pitches imply intimacy; high screeching pitches signify panic. Unfocussed flute sounds, or rustle sounds, blur the edges of these spaces, and rushed sequences represent an agitated, restlessness within this confined area. EMBODIED GESTURE A focus on the microcosm of performance techniques inevitably leads to the highlighting of body use in performance. Technological influences increase bodily awareness in the performer, and change the meaning of movements in numerous ways. Most apparent is a striking increase in selfawareness, a magnification that correlates to the microphone effect. 
An intense focus on micro techniques highlights muscular awareness of embouchure flexibility and fluidity, throat relaxation, internal mouth shaping, and air pressure control, all of which require a relaxed and alert condition. Specific body movements open up a new world of cause and effect in combination with electronics, new ways to construct meaning in performance, and ways to establish transparency or obscuration.

The physical requirements of movements normally associated with playing the flute may confirm meaning through gesture, but additional actions such as pedaling or computer interaction extend this into a wider spectrum. Minute shifts of embouchure or breath, throat or fingers; large arm, leg or full body movements; all alter balance, muscular and visceral responses. Using the foot for pedaling creates a new set of balancing requirements, which in turn vigorously influences posture, breath and playing position. Whole body gestures may also be used to trigger events, creating intense focus and movement. The effects for both the Saariaho and Hajdu works mentioned here are activated through a MIDI pedal and Max/MSP. The pedaling techniques bring a strong sense of physicality to the performance, as leg and whole body muscle use rebalance the flautist, and engage with the propulsion and drama of the music. Warren Burt’s Mantrae For Flute and Live Electronics (2007) is an interactive work that explores the connections between the individual and the world. Stillness and movement, inner calm and chaotic change are juxtaposed in a setting that transforms physical movement into sonic forms. The shifting relations of the self to a digital other, the responses and controls, are activated through motion capture and sound modification, using Plogue Bidule as the host program for the processing and Cycling 74’s Hipno sound processing modules. In this work the flautist is instructed to intensely focus on each of the three mantrae and to also move randomly from one to another, and hence stand to stand. The flute begins solo, with the electronics appearing after the first thirty seconds. Then the flute sound is processed through the plug-in effects, which are programmed to change through Modulator / V Motion as the flautist sets off the camera / motion sensor by moving from stand to stand. The conceptual layers of meaning are revealed through the technological processes in this work, the intense chanter (the flautist) being surrounded by the sonic material of plug-in effects. The most dominant performative imperative of Mantrae is physicality, as full body gesture activates a whole sonic world, converting gesture into digital sound. From the outside, the body becomes the visual prompt, the revealer of process, the audience informer. The meaning of the piece, the trajectory of flute chanting, the (dis)connections to the outer world become focused through the image of performer. Intensity of purpose, sensed through musculature and postures of concentration, discloses the conceptual basis of the work, the centrality of the individual within the disarray of life. The invisible presence of the transformative technologies, the motion tracking and effect triggering, are representations of perceived connection, a linking fabric between gestures of exchange. The flute sounds are traditional at the source, resonant, articulate, and impelled. The altered sounds emanating from the loud speakers bear little relationship to this focused flautist; they are nebulous connections, the sounds of otherness. The technology creates the communication, and the dichotomy. The overlap, the linking performance gesture is changed into the technological gesture, the sonic changes incurred by the computer plug-in software, connect movement with sound texture and timbre, the ‘extended flautist’ here becomes a representation of individual and global relationships. 
The experience of performing Mantrae is an unfolding focus on motion and location. New balances and sensations evolve, challenging acquired performance knowledge, and merging with the desire to be completely free within this circle, to attempt to move with abandon, swiftness and grace, to

dissolve into the music, to become the meditation through a semi-immersion. The movements are sharp and fast, turning with seeming unpredictability from one stand to another. There is some awkwardness, some difficulty sustaining vision of the scores, moving without unravelling basic flute playing techniques and postures. Modulating this potential for instability begins a search for greater fluidity and cohesion, a shaping of the matrix of patterns.
INVISIBLE ALTERITIES
In contrast to Burt's piece is the connective environment activated by Max/MSP in Russell Pinkston's Lizamander (2003). In this work pitch and threshold triggering occurs as driving rhythms are built in real time from the live flute sound. There is no visible triggering at all, as the live electronic processing occurs as a result of sonic triggering – when the flute plays certain notes, or moves above a certain pitch threshold, sound effects are triggered and the work progresses. This technique appears on the surface to be simple, but it demands care in presentation. The acoustic space, for example, has a critical effect on the functioning. A pure, clear pitch is needed: if the reverberation in the space is too great, for instance, the computer has difficulty picking up the correct sound, the piece does not progress, and sound effects do not occur. Working as an invisible partner, the electronics encourage a sense of mystery and uncertainty. The interactivity may be imperceptible to an audience, and sound events may lose their definition and source. Unawareness of processes and misunderstanding of intent can occur, the sense of the open or closed environment of the electroacoustic concert can alienate, the visual scene can be informing or confusing, and the changed physical manners can surprise and provoke questions. Alternatively, the audience may respond with pleasure to a repositioning of the performer, to illusions of intimacy, to complex sound configurations and challenged expectations.
CONCLUSION: EMERGING META-INSTRUMENT ENTITIES
The performing flautist entity has thus transformed into a meta-instrument – an integrated ensemble that reflects the expanded identity, the new capacities and relationship dynamics, the ecology of sounds, processes, machines and people. This symbiosis generates changes of attitude that give permissions, controls and scope for expanded roles, expanded space and integrated narratives. Relationships of flautist to instrument, flautist to technology, and flautist to technologist are altered, adjusted and expanded. The computer has become the cohort, the accomplice in musical adventures, and knowledge of technological processes greatly enhances this development. New performative patterns are added to the body, new cognitive responses and a new sense of a multiplicitous identity are generated. This construct assimilates and extends the concept of 'flautist' through exterior, visible means and interior awareness and expression of the self. Simon Emmerson (2007) describes the instrument as an extension of the performer's identity. If this is accepted, the instrument in combination with electronics can be considered a further expanded identity that takes excursions into unusual sound and performance parameters, and altered spatial perceptions. The flautist can revel in the increased ease of projection, the empowering scale of refinement, and the capacities for enriched and engaging encounters provided by this environment.
Dancer and academic, Susan Kozel, states that the computer “is not just an instrument . . . or the interval between clicking and getting somewhere else” (2008, p. 186). This comment provokes investigation of the meaning and realization of interactive technologies and instrumental performance. There is more than a duality here – the meta-instrument is a force greater than its

individual components, with transformative potentials worthy of deep attention. These technologically based encounters, the interconnections of machine and body, of machine and sound have the potential to profoundly inform us by questioning the nature of performance activities and relationships, through exploring human senses and extended lived experience.

REFERENCES
Burt, W. (2007). Mantrae for flute and live electronics [Score]. Self published.
Emmerson, S. (Ed.) (2000). Music, Electronic Media and Culture. Aldershot, UK: Ashgate.
Hajdu, G. (1997). Sleeplessness for flutes, live electronics and narrator [Score]. Hamburg, GbH: Peer Music Classical.
Kozel, S. (2008). Closer: Performance, Technologies, Phenomenology. Cambridge, MA: MIT Press.
Lavista, M. (2003). Canto del alba para flauta amplificada (3rd ed.) [Score]. Mexico: Ediciones Mexicanas de Musica.
Musgrave, T. (1988). Narcissus for Solo Flute and Digital Delay [Score]. London, UK: Novello.
Norman, K. (2004). Sounding Art: Eight Literary Excursions Through Electronic Music. Aldershot, UK: Ashgate.
Penny, J. (2008). Amplified breath: (dis)embodied habitat. In Computer Music Modeling and Retrieval: Genesis of Meaning in Sound and Music, 5th International Symposium, CMMR 2008, Copenhagen, Denmark, May 2008. Berlin, Heidelberg, Germany: Springer.
Penny, J. (2009). The Extended Flautist: techniques, technologies and performer perceptions in music for flute and electronics. DMA dissertation, Queensland Conservatorium Griffith University.
Pinkston, R. (2003). Lizamander for flute and Max/MSP [Score]. Self published.
Saariaho, K. (1999). NoaNoa for flute and electronics [Score]. London, UK: Chester Music.

iPrac - twittering to survive practicum


Dr Jason Zagami
School of Education and Professional Studies, Gold Coast Campus
Griffith University
[email protected]



Abstract This study reports on student use of iPhones to maintain a strong social network during work integrated learning placements. Work integrated learning placements can be particularly intense and emotional experiences, particularly for education students, for whom they represent a pass-fail barrier to entry into the field. Using the Twitter micro-blogging service and iPhone mobile devices, students were encouraged to share the 'trivia' of their placement experience and, through this sharing of seemingly mundane experiences, establish a supportive learning environment. The play associated with the use of mobile devices for social networking reduced inhibitions in students' sharing of their work integrated learning experiences and promoted shared learning experiences that reduced individual perceptions of isolation and uncertainty.

Most pre-service education students experience anxiety about work integrated learning placement or teaching practicum (Murray-Harvey, Slee, Lawson, Silins, Banfield & Russel, 2000), and this stress is based on student perceptions of the demands placed on them (expressed concerns) associated with the practicum, and their resources for coping (reported strategies). MacDonald (1993), Cambell-Evans & Maloney (1995), Capel (1997), D'Rozario & Wong (1996), Elkerton (1994), and Moreton, Vesco, Williams & Awender (1997) have described teaching practicum as the most stressful component of teacher preparation courses, which generally focus "more on methodology and less on preparing preservice teachers to cope with the inevitable anxieties and stress associated with students' roles, relationships, and responsibilities of teaching" (Murray-Harvey, et al, 2000). The attrition of preservice, novice, and experienced teachers is a widespread problem (Chaplain, 2008). In England, about 40% of those who embark on a training course (on all routes) never become teachers, and of those who do become teachers, about 40% are not teaching 5 years later (Kyriacou & Kunc, 2007). In the USA, as few as 50% of pre-service teachers enter and remain in the US school system for longer than three years, with many leaving to find less-stressful careers (Black-Branch & Lamont, 1998). Teaching has consistently been ranked as a high stress occupation (Beer & Beer, 1992; Borg, Riding, & Falzon, 1991); stress among pre-service teachers is less well researched, perhaps in part because (Murray-Harvey et al., 2000) it is viewed as a normal part of teacher development and therefore accepted as a natural element of the transition from novice to qualified teacher. Many pre-service student teachers discontinue with their studies due to excessive anxiety (Sanderson, 2003), with the most powerful predictor of retention among pre-service teachers being how much pleasure they anticipate they will get from the job, but the reality of teaching during practicum often results in their optimism being dampened (Veenman, 1984). While it is recognised that the most stressful component of teacher education is the practicum (Kyriacou & Stephens, 1999; Macdonald, 1993), it is generally expected by teacher preparation courses that the 'wit and experience' (Biggs, 1990) of students will ameliorate their concerns. While some universities offer units that touch on how to handle anxiety during the teaching practicum (Campbell & Uusimaki, 2006), it is argued (Black-Branch & Lamont, 1998) that teacher education programs have at least an ethical, if not a legal and professional responsibility, to provide support for pre-service teachers who are under high levels of stress during their teaching practicum.

Pre-service teachers have a range of concerns: balancing practicum and personal commitments, coping with the teaching workload, managing time, and concerns about others' expectations of their competence. Students develop a range of strategies to help cope with practicum stresses; of particular importance is the use of social support networks to develop and maintain coping strategies while on teaching practicum. Such networks may be newly established (in the case of supervising teachers) or existing (such as family and friends). Where a pre-service teacher is isolated from their existing social networks and has difficulty in developing new networks, a significant component of their coping strategy may be unavailable. While it is strongly suggested that pre-service teachers should do their practicum in pairs or clusters rather than isolated from their peers (Samaras & Gismondi, 1998; Tom, 1997; Winitzky, Stoddart, & O'Keefe, 1992), where this is not possible, social networking technologies may be able to assist in reducing practicum stress by sustaining social networks at a distance. In this study, 25 Primary School Graduate Diploma pre-service teachers on their first six-week practicum experience were loaned Apple iPhones to enable them to sustain a social network using mobile devices and the social networking micro-blogging service, Twitter. The 25 students were randomly selected from 49 volunteers drawn from a cohort of approximately 200 primary school pre-service teachers completing a one-year graduate diploma of education. All volunteers were required not to have used social networking software in the last 3 months. From the remaining pool of 24 students, a control group of 20 pre-service teachers was also studied; this group was not provided with a mobile device and did not use social networking services during the period of the study. In the week following their six-week practicum placement, students were surveyed on their experiences during their practicum placement and their micro-blog conversations analysed. Students in the mobile social network group were loaned a second-generation Apple iPhone with Twitter applets installed to provide mobile connectivity to a social network comprising the 25 members of the mobile social network group. Students were provided with prepaid SIM cards with sufficient credit to facilitate use over the 6-week period. Pre-service teachers in the mobile device group were also provided with approximately 30 minutes of instruction on using the devices and the social networking software service Twitter. Over the six-week period, students in the mobile social network group used the personal device on a daily basis to record and share their thoughts and experiences during their teaching. No specific expectations or instructions were provided to the mobile social networking group on what to include in their micro-blog posts. From analysis of micro-blog messages during the study, students used the social network for five main purposes: Activity sharing, Achievement sharing, Attitude sharing, Resource sharing, and Event sharing. The following micro-blog postings are representative of the five categories. Activity sharing - this provided members of the network with general information on the types of activities that other members were engaged in during their practicum placement. "Day of marking and writing reflections" "First full day oh the joys" "Teaching all day.
A bunch of projects to mark should be a fun and exciting night of marking" "Now for a half day/stand up comedy. Should be a good afternoon" "Teaching all day. A bunch of projects to mark should be a fun and exciting night of marking" "Just finished teaching decimals to grade 7's. The class turned into long division. Now to teach technology" Marking then the movie Holes and then outside time. The movies has Shai leboueff so it must be good" "Slowly learning german while creating a work sheet for math" "Making poo today! (helps the students understand the digestive system)....haha"

"Just recorded my first rap track. It had some phat beats yo" "The high cheese hand shake is catching on. All the cool kids are doing it" "My students are chanting duck, duck, goose.....its causing a riot and I am laughing too hard to stop it!!!" "about to teach a math lesson based on grilled cheese consumption" "fun state of origin game this morning to teach good sportsmanship...GO THE BLUES!!!" "The nervous system is a hard concept...... Let's try teaching it to 10 year olds!!!!" "About to teach my first math class on percents and fractions going to be 150% awesome" "Lots of behavioural issues on this sunny Tuesday!" "The students are loving the iphone in their class!" "Getting observed by the liaison today,wish me luck" "Lesson planning and rubric making all day." "Going to be a great day. Half day of teaching and then a debrief session with classmates at the S club" "Friday and there is a substitute, lesson planning for this cat" "Sub today and my unit starts after little tea. Should be a good day" "Mr XXXXX needs to work on his cursive writing now and yes you can go to the toilet" "Taught measurement today. Teaching Ned Kelly tomorrow...or at least Sydney Nolan's interpretation. Should be covered in paint by day end!"

Achievement sharing - this provided an opportunity for members to publicly express their successes or failures to a supportive community. "Full day going well" "Full day of teaching went well. Loved the fact that it was photo day and I am in the class photo and grad photo." "Another successful lesson" "A teacher of many years thought my artwork was fabulous, ya I'm that good" "Testing my behavioural management...witnessed a complete melt down..throwing chairs, hitting, biting, spitting, yelling...wow what a day!" "All day to do work and I am up late doing it. I feel like I have acquired the attention span of my grade 7's." "Math lesson was awful but grammar made up for it" "complex sentences you will be the death of me" " I got in trouble for using I'd too much :P" "Worst day"

Attitude sharing - this provided members with an opportunity to express how they were coping with the practicum experience. "Liking sports day today! Sitting under the teachers lounge all day. Gotta love the teaching lifestyle!!!!" "Blown away by the projects so far for the technology going to be fun to mark. Also some of them wrote raps and can't wait to lay down the tracks" "well getting there, preps can be little rat bags but ya gotta luv em!! tomorrow will be a fun day!! getting hooked on teaching,yay!!" "Did my first observation report today. Just felt like a good time to start" "what a week...taking a special ed class to gymnastics is more tiring for the teachers then the students i think!" "A kid just said her grandpa is dying this Christmas so she could get coloured paper to draw on. wow! :P" "'Miss XXXXX, you look beautiful today, you look like a princess' awww i love prep" "Easy Friday. Currently skipping a staff meeting. Beginning to get a little thirsty" "Last week if prac time to do some work!" "Easy morning of lesson making and relaxing then doing warm fuzzies with the class. Making a difference one lesson at a time." "Hey guys, what a blast. 32 little faces staring up at me as I indoctrinate them with utter crap!" "One week down, 5 to go. I *heart* prac =)" "WE had no closure today, a full days teaching and my bed sounds like a holiday destination I am not going to enjoy for a while!" "week two done! preparing for my full day on monday... what a weekend" "Another day to tick off at 3, only 3 more to go" "I am going to cry tomorrow, Miss XXXXX. Don't leave me!!" "Friday is my favourite day of the week!" "Looking forward to a big taste of home this weekend...." "Now...drinking and recovering."

Resource sharing - this provided a forum for members to directly support each other in lesson preparation. "Any of your schools still doing outcomes based learning? Help required, back in the dark ages here." "Have you guys found any really useful interactive websites for maths?" "Thank you for the websites. Anyone know a good site to find outcomes like the KLAs" "interactive maths websites: http://resources.oswego.org/ not a bad one!" "http://bit.ly/IgOJ1 not bad for area/perimeter" "XXXXXX try enchantedlearning.com.au"

Event sharing - this provided an opportunity for members to relate their thoughts on a shared event. "The weekend went by way to fast. Still have so much work this week. The strike will come in handy to get things done" "needs more sleep.. thank goodness for the strike!!" "Enjoying the strike!" "I love a good strike. A sleep in some relaxing and now working on my technology unit." "good day off... now mentally preparing for my half day of teaching, and its raining! Not sure how this rain will affect the kids attention spans" "No power? No problem... Nap time!!!! Stay dry Gold Coast! =)" "Nothing worse than a wet hump day. This weather should make for an interesting Wednesday. Meeting with XXXXX today as well" "I thought we were in a drought...,"
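The five categories above were identified through qualitative coding of the posts rather than by any automated process. Purely as an illustrative sketch of how already-coded posts could be tallied afterwards, assuming a hypothetical export file "coded_posts.csv" with a "category" column (both names invented for illustration, not taken from the study):

    import csv
    from collections import Counter

    # Tally hand-coded posts per category (hypothetical data layout).
    with open("coded_posts.csv", newline="") as f:
        counts = Counter(row["category"] for row in csv.DictReader(f))

    for category, n in counts.most_common():
        print(f"{category}: {n} posts")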

Analysis of the times at which micro-blogs were produced revealed that approximately 70% of posts were made during school hours, which supports the argument for use of a mobile device, as it allows participants to produce micro-blogs at convenient times and close to the event or activity related to the post. This was reinforced in the student surveys, in which 85% of students reported that the immediate availability of the mobile device was significant or very significant to their regular use of the micro-blogging service. On a scale of 1 to 5, with 5 being very significant and 1 being very insignificant, students also responded to the question of how stressful they found their practicum experience (Table 1).

Table 1: Practicum Stress ("How stressful did you find your practicum?")

                                                             Mobile Social Network Group (n=25)   Control Group (n=20)   Difference
Pre-service teachers with access to other social networks    3.46 (n=19)                          3.80 (n=16)            +0.34
Pre-service teachers placed in isolated practicums           3.68 (n=6)                           4.00 (n=4)             +0.32
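As an illustration only (this is not the study's analysis code), the following Python sketch shows how the school-hours proportion and the group means in Table 1 might be computed, assuming hypothetical CSV exports "posts.csv" (with an ISO 8601 "timestamp" column) and "survey.csv" (with "group" and "stress" columns). All file names, column names and school-hour boundaries are assumptions made for the example:

    import csv
    from datetime import datetime

    SCHOOL_START, SCHOOL_END = 9, 15  # assumed school day, 9 am to 3 pm

    # Proportion of micro-blog posts made during school hours.
    with open("posts.csv", newline="") as f:
        hours = [datetime.fromisoformat(row["timestamp"]).hour
                 for row in csv.DictReader(f)]
    in_school = sum(SCHOOL_START <= h < SCHOOL_END for h in hours)
    print(f"Posts during school hours: {in_school / len(hours):.0%}")

    # Mean practicum stress rating (1-5 scale) per group.
    with open("survey.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    for group in ("mobile", "control"):
        ratings = [int(r["stress"]) for r in rows if r["group"] == group]
        print(f"{group}: {sum(ratings) / len(ratings):.2f} (n={len(ratings)})")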


The difference between pre-service teachers using mobile devices and those without access to such devices suggests that access to mobile social networking devices provides some benefit in reducing stress during teaching practicum. A similar difference was recorded for students in isolated practicum placements, but the sample size was insufficient to establish that a mobile social network can act as the equivalent of a traditional social network in reducing stress during teaching practicum.

Of the 14 students in the mobile social network group who reported significantly stressful events during their practicum, 75% indicated that the ability to share the experience with their online social network was a significant or very significant component of their coping strategy, and 10% of the mobile social networking group indicated that they would not have successfully completed the practicum experience without the support of their social networks. Of the 12 students in the control group who reported significantly stressful events during their practicum, 70% indicated that the ability to share the experience with their social network was a significant or very significant component of their coping strategy, and 15% of the control group indicated that they would not have successfully completed the practicum experience without the support of their social networks.

The number of participants in both the mobile social network and control groups did not provide sufficient numbers of isolated practicum experiences, in which students were separated from their existing social networks, to effectively differentiate between the two groups. Both groups strongly supported the importance of social networks in reducing stress during practicum placements, but the effectiveness of mobile devices in providing access to a social network at a distance could not be substantiated, as most participants were able to draw upon existing social networks and were not reliant on the mobile device during stressful events.

This study has shown that pre-service teachers can use a mobile social network for five main purposes during their teaching practicum: Activity sharing, Achievement sharing, Attitude sharing, Resource sharing, and Event sharing. It supports the use of mobile devices to provide the capacity to share experiences as close to the event as possible, and it reinforces the importance of social networks in reducing stress during teaching practicum. The sample size was insufficient to demonstrate clearly whether mobile social networking is more or less effective than traditional social networks, or whether a mobile social network can act as the equivalent of a traditional social network in reducing stress during teaching practicum.

References Beer, J. & Beer, J. (1992). Burnout and stress, depression and self-esteem of teachers. Psychological Reports, 71, 1331–1336. Biggs, J. (1990). Teaching design for learning. Paper presented at the annual conference of the Higher Education Research and Development Society of Australasia, Brisbane, Australia, 6-9 July, 1990. Black-Branch, J. & Lamont, W. (1998). Duty of care and teacher wellness: A rationale for providing support services in colleges of education. Journal of Collective Negotiations, 27 (3), 175-193. Borg, M., Riding, R. & Falzon, J. (1991). Stress in teaching: A study of occupational stress and its determinants, job satisfaction and career commitment among primary school teachers. Educational Psychology, 11, 59–75. Cambell-Evans, G. & Maloney, C. (1995). Trying to make a difference: Re-thinking the practicum. Paper presented at the annual conference of the Australian Association for research in Education, Hobart, Australia, 26-30 November, 1995. Campbell, M. & Uusimaki, L. (2006) A pilot study in challenging pre-service education students’

iPrac - twittering to survive practicum

ANONYMOUS For Peer Review

anxieties about their practical experiences in professional education. In Haigh, Mavis and Beddoe, Elizabeth and Rose, Dennis, Eds. Proceedings Practical Experiences in Professional Education Conference, pages pp. 60-67, Auckland, New Zealand. Capel, S. (1997). Changes in students' anxieties and concerns after their first and second teaching practices. Educational Research. 39, 211-228. Chaplain, R. (2008). Stress and psychological distress among trainee secondary teachers in England. Educational Psychology. 28(2), 195-209. D'Rozario, V. & Wong, A. (1996). A study of practicum-related stresses in a sample of first year student teachers in Singapore. Paper presented at the annual conference of the Singapore Educational Research Association and Australian Association for Research in Education, Singapore, 25-29 November 1996. Elkerton, C. (1994). Stress and the pre-service teacher. The Teacher Educator. 22(1), 2-8. Kyriacou, C. & Kunc, R. (2007). Beginning teachers' expectations of teaching, Teaching and Teacher Education, 23(8), 1246-1257. Kyriacou, C. & Stephens, P. (1999). Student teachers’ concerns during teaching practice. Evaluation and Research in Education, 13, 18–31. MacDonald, C. (1993). Coping with stress during the teaching practicum: The student teacher's perspective. The Alberta Journal of Educational Research. 39, 407-418. Murray-Harvey, R., Slee, P., Lawson, M., Silins, H., Banfield, G. & Russell, A. (2000). Under stress: The concerns and coping strategies of teacher education students. European Journal of Teacher Education, 23(1), 19-35. Moreton, L., Vesco, R., Williams, N. & Awender, M. (1997). Student teacher anxieties related to class management, pedagogy, evaluation, and staff relations. British Journal of Educational Psychology. 67, 69-89. Samaras, A. & Gismondi, S. (1998). Scaffolds in the field: Vygotskian interpretation in a teacher education program. Teaching and Teacher Education, 14(7), 715-733 Sanderson, D. (2003). Maximizing the student teaching experience: Cooperating teachers share strategies for success. Retrieved July 5, 2005, from http://www.usca.edu/essays/vol72003/sanderson.pdf Tom, A. (1997). Redesigning teacher education. Albany, NY: State University of New York Press. Veenman, S. (1984). Perceived problems of beginning teachers. Review of Educational Research, 54, 143–178. Winitzky, N., Stoddard, T. & O’Keefe, P. (1992). Great expectations: Emergent professional development schools. Journal of Teacher Education, 43(1), 3-18.

iPrac - use of spontaneous recording devices to enhance digital portfolios

Dr Jason Zagami
School of Education and Professional Studies, Gold Coast Campus
Griffith University
[email protected]

Abstract

25 pre-service teachers were provided with iPhones for use on practicum placement to capture artifacts for digital portfolios, and their portfolios were compared with the digital portfolios generated by 20 pre-service students who did not have access to mobile recording devices. The ease and immediacy of access to mobile devices to record spontaneous events increased the number and scope of multimedia artifacts created. This supported the development of narratives in student digital portfolios and improved reflection on the practicum learning experience.

The use of digital portfolios in pre-service education courses (Topp, Goeman & Clark, 2003) as a means of documenting student learning and encouraging reflection on the student learning process (Sung et al., 2009) is increasing. Some pre-service education courses have developed digital portfolios into a single comprehensive assessment instrument that covers the entire pre-service teacher program (Fapojuwo, 2005), while many others are at the stage of exploring the effectiveness of digital portfolios in contributing to the overall pre-service teacher learning experience. Pre-service teachers who have been required to develop digital portfolios as part of their pre-service teacher education program have generally perceived portfolio assessment as a more authentic method of assessing their learning (Wilson, Wright, & Stallworth, 2003).

Digital portfolios are increasingly seen as a form of digital storytelling (Kearney, 2009) in which the author, the student, weaves a narrative of their learning experiences. Multimedia in a digital portfolio extends the ability of students to present their stories through images, video, and audio recordings. The framing of collected artifacts as a rich, informative story places increased responsibility on students to introduce a structure to their narratives. With this structure comes a planning process and the development of an explicit process of artifact collection. Multimedia-rich digital portfolios also enable enhanced synthesis and analysis of the learning experiences associated with portfolio artifacts (Kearney, 2009), which in turn supports improvements in reflective metacognition.

Digital portfolios involve the collection of artifacts and the presentation of these artifacts in a digital medium. While it is possible to collect analogue artifacts and convert these to digital artifacts for inclusion in a portfolio, for example by scanning paper documents, the process is cumbersome and often laborious. To facilitate this process, some universities have provided students on practicum placements with laptops (Kariuki & Turner, 2001), but students continue to report difficulties in capturing digital artifacts during classroom activities due to the disruptive nature of recording tools (Kearney, 2009). The digitisation of analogue artifacts also lacks immediacy, often occurring days or weeks after the artifact was used, and this dislocation makes the contextualisation of the artifact in the digital portfolio difficult, reducing the effectiveness of the artifact for deep reflection on the learning processes occurring when the artifact was captured. With limited improvement in pre-service teacher reflection being reported in large-scale implementations of digital portfolios (Wilson, Wright, & Stallworth, 2003), this study investigated the impact of mobile recording devices on digital portfolio development.


In the study, 25 primary school Graduate Diploma pre-service teachers on their first six-week practicum experience were loaned Apple iPhones to use in artifact collection. The group was randomly selected from 49 volunteers drawn from a cohort of approximately 200 primary school pre-service teachers completing a one-year graduate diploma of education. From the remaining pool of 24 students, a control group of 20 pre-service teachers was also studied; this group also developed a digital portfolio but was not provided with a mobile device or instruction in its use. In the week following their six-week practicum placement, students were assessed by means of a reflective essay on their ability to reflect on their learning during the practicum experience. This assessment, which was conducted across the entire practicum cohort (n>200), was used as a measure of comparison between the mobile device group and the control group with respect to the influence of mobile devices on student capacity to reflect on their practicum experience. Students in both groups were interviewed and their digital portfolios examined.

Students in the mobile device group were loaned a second-generation Apple iPhone with the capacity to record digital images and sound and to take notes. Pre-service students in the mobile device group were provided with approximately 30 minutes of instruction on using the devices to record still images, sound/voice recordings and text-based notes. Pre-service students in both the mobile device group and the control group were provided with approximately two hours of instruction on the usefulness of narrative development in the presentation of digital portfolios as part of their scheduled coursework.

Over the six-week period, students in the mobile device group used their personal device on a daily basis to record artifacts generated during their teaching, including images from whiteboards or lesson activities, conversations with students, images and recordings of student presentations, and pre-service teacher debriefs with supervisors. No specific expectations or instructions were provided to either group on what to include in their digital portfolios beyond standard expectations set by the university practicum office.

Students in the mobile device group reported that the convenience and ready availability of the mobile device was a very significant factor in their ability to generate multimedia artifacts of their practicum experiences. They also reported that in many instances the opportunity to capture an artifact would have been lost if the device had not been immediately to hand, as many artifacts in a primary classroom existed for very short time periods and occurred spontaneously, making prior planning to have a recording device available difficult. Several students reported missed opportunities when the devices were left in a bag or desk, even when only a few metres distant.

Early in the process, more than 70% of students using mobile devices began to incorporate the collection of digital portfolio artifacts into their lesson planning. This was not evidenced at all in the control group. Pre-service teachers reported increased consideration of how each individual day or learning activity would contribute to their digital portfolio and, as a result, their overall learning during the practicum experience. As can be seen from Tables 1, 2 and 3, this planning of artifact collection and a continuous process of artifact collection resulted in improvements to the reflective process.
Pre-service teachers using the mobile devices took much more interest in the creation of a narrative for their digital portfolios: 72%, compared to 10% in the control group, developed their digital portfolio into a structured narrative of their learning experience.

Table 1: Narrative Development

Group                                            Reflective Essay (Mean)
Mobile devices, with narrative development       88% (n=18)
Mobile devices, no narrative in portfolio        69% (n=7)
Control, with narrative development              80% (n=2)
Control, no narrative in portfolio               64% (n=18)
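As a hedged sketch only (not the study's actual analysis code), the Table 1 figures could be tabulated along the following lines, assuming a hypothetical "portfolios.csv" file with "group", "narrative" and "essay" columns; all names are invented for illustration:

    import csv
    from collections import defaultdict

    # Collect reflective essay scores, grouped by (group, narrative) status.
    scores = defaultdict(list)
    with open("portfolios.csv", newline="") as f:
        for row in csv.DictReader(f):
            scores[(row["group"], row["narrative"])].append(float(row["essay"]))

    for (group, narrative), vals in sorted(scores.items()):
        mean = sum(vals) / len(vals)
        print(f"{group}, narrative={narrative}: {mean:.0f}% (n={len(vals)})")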


Another unexpected aspect of the use of mobile devices for the collection of artifacts was a propensity for students to want to share the artifacts collected with their peers. 65% of pre-service students in the mobile device group shared at least one artifact with another student on practicum placement, with 10% sharing over 50 artifacts. Within the control group, no pre-service student shared an artifact with another student on practicum placement. The ease with which mobile devices facilitate a sharing of learning experiences represents an additional positive influence on the reflective process.

Table 2: Artifact Sharing

Sharing                      Reflective Essay (Mean)
Did not share artifacts      81% (n=9)
Shared some artifacts        87% (n=10)
Shared > 20 artifacts        92% (n=6)

The use of multimedia in pre-service digital portfolios was not confined to the mobile device group: 30% of the control group included at least one multimedia artifact, in all cases images. Table 3 provides a comparison between the inclusion of multimedia artifacts and demonstrated reflection on the learning experience.

Table 3: Artifact Generation

Artifacts                     Mobile Device group - Reflective Essay (Mean)   Control group - Reflective Essay (Mean)
No multimedia artifacts       NA (n=0)                                         62% (n=14)
1-10 multimedia artifacts     72% (n=2)                                        75% (n=4)
11-50 multimedia artifacts    83% (n=12)                                       82% (n=2)
>50 multimedia artifacts      89% (n=11)                                       NA (n=0)
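The banding used in Table 3 can be reproduced mechanically once artifact counts are known. The sketch below assumes the same hypothetical "portfolios.csv" layout as in the earlier example, with an additional "artifacts" column (again, an assumption made for illustration, not a description of the study's data):

    import csv
    from collections import defaultdict

    def band(n):
        """Map an artifact count onto the bands used in Table 3."""
        if n == 0:
            return "No multimedia artifacts"
        if n <= 10:
            return "1-10 multimedia artifacts"
        if n <= 50:
            return "11-50 multimedia artifacts"
        return ">50 multimedia artifacts"

    # Mean reflective essay score per (group, artifact band).
    scores = defaultdict(list)
    with open("portfolios.csv", newline="") as f:
        for row in csv.DictReader(f):
            key = (row["group"], band(int(row["artifacts"])))
            scores[key].append(float(row["essay"]))

    for (group, label), vals in sorted(scores.items()):
        print(f"{group} / {label}: {sum(vals) / len(vals):.0f}% (n={len(vals)})")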

Findings

Pre-service teacher use of mobile devices in this study resulted in a number of findings that inform future use of mobile devices for the development of digital portfolios. The immediacy of access to mobile devices to record spontaneous events as they occurred was of paramount importance in a rapidly changing primary classroom environment. The ease with which multimedia artifacts could be created and included in a collection of artifacts for use in pre-service student digital portfolios greatly increased the quantity and scope of multimedia artifacts included in pre-service teacher digital portfolios. The generation of multimedia artifacts increased the likelihood of students developing a narrative for their digital portfolios, as opposed to treating the task of digital portfolio creation as a post-practicum document archival exercise. Sharing of digital artifacts through the mobile devices increased the scope of student practicum experience and provided increased opportunities to examine that experience from different perspectives. Finally, the development of digital portfolios of pre-service teaching practicum experiences in which a range of multimedia elements are incorporated to provide a rich narrative improved student reflections on their learning experience.

References Wilson, K., Wright, V. & Stallworth, J. (2003). Secondary Preservice Teachers' Development of Electronic Portfolios: An Examination of Perceptions. Journal of Technology and Teacher Education, 11: 45-49. Fapojuwo, M. (2005). Multimedia Capstone Digital Portfolio Assessment: A Comprehensive Assessment Tool for Pre-Service "Product of Learning" That Meets Current Accreditation Standards. Society for Information Technology and Teacher Education International Conference 2005, Phoenix, AZ, USA, AACE. Sung, Y., Chang, K., Yu, W. & Chang, T. (2009). "Supporting teachers' reflection and learning through structured digital teaching portfolios." Journal of Computer Assisted Learning 25(4): 375385. Kariuki, M. & Turner, S. (2001). Creating Electronic Portfolios Using Laptops: A Learning Experience for Preservice Teachers, Elementary School Pupils, and Elementary School Teachers. Journal of Technology and Teacher Education, 9: 22-28. Strickland, J., Moulton, S. & Strickland, A. (2005). One Large Technology Leap for Pre-Service Teachers; One Small Step for Humankind. In P. Kommers & G. Richards (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2005 (pp. 2350-2351). Chesapeake, VA: AACE. Retrieved from http://www.editlib.org/p/20424. Kearney, M. (2009). Investigating Digital Storytelling and Portfolios in Teacher Education. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009 (pp. 1987-1996). Chesapeake, VA: AACE. Retrieved from http://www.editlib.org/p/31749. Topp, N., Goeman, B. & Clark, P. (2003). Extending Pre-service Teacher Learning with a Digital Portfolio. In C. Crawford et al. (Eds.), Proceedings of Society for Information Technology and Teacher Education International Conference 2003 (pp. 39-41). Chesapeake, VA: AACE. Retrieved from http://www.editlib.org/p/17822.
