Amplifying Reflective Thinking in Musical Performance Andrew Johnston, Shigeki Amitani and Ernest Edmonds Creativity & Cognition Studios, Faculty of IT, University of Technology, Sydney PO Box 123 Broadway, NSW 2007, Australia {aj, shigeki}@it.uts.edu.au,
[email protected] ABSTRACT
In this paper we report on the development of tools that encourage both a creative and a reflective approach to music-making and musical skill development. A theoretical approach to musical skill development is outlined and previous work in the area of music visualisation is discussed. In addition, the characterisation of music performance as a type of design problem is discussed, and the implications of this position for the design of tools for musicians are outlined. Prototype tools, the design of which is informed by the theories and previous work, are described, and some preliminary evaluation of their effectiveness is discussed. Future directions are outlined.
Pedagogical Approaches to Instrumental Music
Instrumental musicians require skills in two key areas. Firstly, they must develop the ability to physically manipulate their body and instrument to produce musical sounds; secondly, they require creative skills so that the music they produce is interesting to others. While various styles of music may emphasise the craft of instrumental playing or its artistic side to greater or lesser degrees, musicians generally require a high level of ability in both areas.
Categories & Subject Descriptors: H.5.5 [Sound and Music Computing].
General Terms: Design; Performance.
Keywords: Reflection-in/on-action; musical performance; design problem; visualisation.
INTRODUCTION
In this paper we explore the possibilities of computers through the implementation of systems for supporting musical performance. Our hypothesis is that systems which provide musicians with visual feedback promote reflective thinking about their performance. The principal research question is how computers can act as a catalyst, or stimulant [7], to amplify reflective thinking in musical performance.
Broadly speaking, pedagogical approaches to the development of the necessary physical skills to play an instrument can be divided into two camps. The first emphasises the need to understand the physiology of instrumental technique in order to facilitate conscious control of the various muscles involved so that “correct” technique can be used. Perhaps due to a desire to approach instrumental music in a more “scientific” way, there is often a tendency for musicians to attempt to take a reductionist approach to improving their playing. For example, singers may attempt to support their sounds by attempting to consciously control their diaphragm in some way in order to improve their range, volume or tone.
A THEORETICAL BASIS FOR INSTRUMENTAL MUSIC TOOLS
In order that tools for instrumental musicians are genuinely useful, it is important for software designers to consider carefully the theoretical position upon which their tools are based. An approach which fails to explicitly consider the pedagogical implications of tool use may lead to an implicit, simplistic theory of music learning being embedded in the system, leading to unsatisfactory outcomes for all concerned. Because of rapid technical advances there can be a tendency for tools to be developed based on technical feasibility rather than genuine utility, and for this reason we feel it is necessary to outline the conceptual framework underpinning our work. In this section we first outline pedagogical approaches to musical skill development and then techniques for encouraging a reflective approach to creative work. Previous work on using visualisation techniques to facilitate different perspectives on complex or ambiguous data is discussed, as are some key attempts at creating useful musical visualisations.
Kohut describes this approach as the “physiological-analysis-conscious-control” method [18], in which musicians attempt to understand in detail the physical actions involved in music making and to exert conscious control over them while playing. The key assumptions here are that:
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. C&C’05, April 12-15, 2005, London, United Kingdom. Copyright 2005 ACM 1-59593-025-6/05/0004…$5.00.
- Complex involuntary muscle manoeuvres can be controlled consciously;
- Conscious control will lead to improved performance;
- Conscious control is best achieved by attempting to control the individual muscles/organs involved;
- A detailed intellectual understanding of the physical actions involved in playing is the key to improving performance.
In this approach, then, the musician gradually develops physical skills and an individual style as they build up a mental library of “target sounds” which trigger the appropriate physical responses. This mental library is reflected upon and refined through experience and exposure to new ideas, both musical and extra-musical. Thus, regardless of the style of music performed, creativity is a fundamental part of musical skill development, as musicians constantly work towards an ideal sound which is itself being constantly refined. Even when musicians play music composed by someone else, creativity is required in the interpretation of the notation chosen by the composer. The subtleties of musical phrasing, including tempo, articulation and dynamics, cannot be adequately represented by musical notation, and thus the interpreting musician has a significant influence on the final musical result [11]. It could be said that, in effect, the performance is a collaboration between composer and musician.
The second approach, dubbed the “imitation method” [18], rejects these assumptions, arguing that:
- The muscle manoeuvres involved are too complex and subtle to be meaningfully controlled by the conscious mind, and many muscles (such as the diaphragm) cannot be controlled directly anyway. Recent studies of the physiology of memory indicate that different regions of the brain are responsible for what are termed implicit and explicit memory, and that motor skills are a form of non-conscious implicit memory. Experiments with amnesiac patients, for example, show that patients who have no conscious recollection of learning a skill (i.e. they forget the training sessions during which the skill was learned) still demonstrate normal motor-skill learning ability [28];
- Attempting to consciously control the minutiae of physical actions that take place while playing a musical instrument is at best likely to lead to a tense, mechanical-sounding musical outcome and at worst to “paralysis by analysis” [10, 18, 30], where the musician becomes overwhelmed by the complexity of detailed muscle control and loses the ability to play even simple tunes on their instrument;
Information Visualisation Research
The research field of information visualisation has developed as computational power has grown. In the field of Human-Computer Interaction especially, the design of visualisations and interactions has been a central issue [5]. In HCI research, the aim of information visualisation is to help people understand large quantities of information and the tendencies within it.
- Improved performance results from setting and refining specific musical goals rather than consciously attempting to control physical actions at a low level;
There have been several approaches to using the capabilities of computers to visualise aspects of musical performances to help musicians develop their skills. These range from what might be called the “scientific” approach, where sounds are analysed and displayed in the form of graphs, to more “artistic” visualisations where the correlation between musical input and computer display is more abstract.
- A detailed intellectual understanding of the physical actions involved in playing is not necessary.
For these reasons, the trend in music pedagogy has been towards an approach based on the “natural learning process” (NLP) [12], which emphasises the importance of leaving the complexities of muscle control to the subconscious so that the conscious mind remains free to set high-level musical goals. The technique for developing new skills is based on imitation, with a strong emphasis on mental musical goal-setting and on excellence of role models. Teachers taking this approach would, for example, tend to spend more time playing for students during lessons, encouraging them to copy aspects of the teacher’s sound, instrumental technique or musical phrasing, and would discourage discussion of the physical aspects of instrumental technique.
The “Sing-and-See” program [34], taking the former approach, is designed for singers and singing teachers. The user sings into a microphone and the connected computer displays visualisations of various aspects of the sound. The available displays are: a piano keyboard and a traditional music notation staff, which highlight the note currently being sung; a pitch-time graph showing the pitch of the sung note in comparison to the standard equal-temperament keyboard; and a spectrogram and real-time spectrum display showing the power in the voice at different frequencies.
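The core of such a “scientific” display is the mapping from a detected frequency onto the equal-temperament grid. A minimal sketch of that mapping follows; Sing-and-See’s internals are not published, so the helper below is purely illustrative:

```python
import math

A4_HZ = 440.0
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Map a detected frequency to the nearest equal-temperament note.

    Returns (note_name, octave, cents_offset), where cents_offset is the
    deviation from the note centre that a pitch-time graph would plot.
    """
    # MIDI note 69 is A4 = 440 Hz; there are 12 semitones per octave.
    midi = 69 + 12 * math.log2(freq_hz / A4_HZ)
    nearest = round(midi)
    cents = (midi - nearest) * 100          # deviation in cents
    name = NOTE_NAMES[nearest % 12]
    octave = nearest // 12 - 1              # MIDI convention: note 60 is C4
    return name, octave, cents
```

A pitch-time display can then plot the cents offset over time to show how far the sung note deviates from the nearest keyboard pitch.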
Of course, the use of imitation to develop physical and musical skills does not preclude the development of an individual style. An individual musician’s mental image of musical ideals for given situations develops as they learn of different approaches through listening to and watching performances by musicians on their own and other instruments. Thus the musician may in effect choose a sound for a particular section of music which has characteristics of several other musicians but which is nevertheless unique. That is to say, the musical approach is influenced by many role models but retains qualities unique to the individual musician.
Another program which produces similar output, VoceVista (http://www.vocevista.com), has been used to analyse various aspects of vocal performances [21]. Successfully incorporating these tools into everyday practice requires careful thought, as the complicated nature of the spectrogram display is open to interpretation and may encourage an overly analytical approach.
Musical Performance as a Design Problem
A project which illustrates a more abstract approach to the mapping between sound and computer response is the “Singing Tree” [23]. The Singing Tree provides both audio and visual feedback to singers in real time. The singer sings a pitch into a microphone, and if the pitch is steady the computer provides an accompaniment of consonant harmonies from strings, woodwind and vocal chorus. If the pitch deviates, the computer introduces gradually increasing dissonance into the accompaniment, including brassier and more percussive instruments and more chaotic rhythms. In addition to the audio feedback, when the singer’s note is steady the computer generates video designed to give the impression of moving forward towards an identifiable goal (an opening flower, for example). If a steady pitch is not maintained, the video reverses. Such an approach emphasises playfulness, and the link between audio input and the audio/visual output is intended to be less deterministic than the more “scientific” mapping between sound and vision used in tools such as “Sing and See”.
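The Singing Tree’s described behaviour amounts to mapping pitch steadiness onto a dissonance level and a video direction. A toy sketch of that idea follows; the tolerance and scaling values are invented for illustration, as the installation’s actual mapping is not specified here:

```python
import math
from statistics import pstdev

def steadiness_response(recent_pitches_hz, cents_tolerance=25.0):
    """Toy mapping from pitch steadiness to accompaniment/video behaviour.

    Returns (dissonance, video_direction): dissonance in [0, 1], and
    'forward' while the pitch is steady, 'reverse' otherwise.
    """
    # Express the recent pitch track in cents relative to its first sample,
    # so spread is measured on a perceptual (logarithmic) scale.
    ref = recent_pitches_hz[0]
    cents = [1200 * math.log2(p / ref) for p in recent_pitches_hz]
    spread = pstdev(cents)
    # Saturate the dissonance level at four times the steadiness tolerance.
    dissonance = min(1.0, spread / (4 * cents_tolerance))
    direction = "forward" if spread <= cents_tolerance else "reverse"
    return dissonance, direction
```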
Design problems are characterised by ill-definedness and open-endedness: one can neither specify a design problem completely before starting to solve it, nor articulate the knowledge necessary for a design a priori. Designers have to gradually refine both the problem specification and the design solution at the same time. Musical performance can be characterised as a design problem because no single "ideal" performance can be defined a priori. Musicians change their performances along with the context [6]: the history of interactions among the musicians, audiences and produced sound. Schön [29] has described the design process as a "seeing-drawing-seeing" cycle: the process starts from seeing, proceeds to drawing (generating a partial solution to the design problem), returns to seeing, and repeats. This process is regarded as a "dialogue" between designers and their material. In musical performance, a performer listens to the existing sound (the context), conceptualises and plays a phrase as a partial solution to the musical “problem” posed by the context, and listens to the result. The cyclic process is "listening-playing-listening". As a way of reducing the chance that the performer will become distracted by self-talk and possibly trigger “paralysis by analysis”, Jacobs [30] suggests the performer should picture in their mind what should happen and leave it to the subconscious to take care of the details. There are personal accounts of the musical flow of Mozart's composition [1]:
A prototype tool which analyses performances from another perspective has been created by Nishimoto and Oshima [22]. Their “Music-Aide” program takes standard MIDI (Musical Instrument Digital Interface) input and represents the musical phrases produced by the musician graphically. Different components of the phrases, such as consonant or dissonant notes, are displayed in such a way that the relations between these components can be seen. For example, it might be that when an improvising musician plays phrases containing predominantly firsts and fifths (i.e. ‘monochrome chord tones’), she never includes the tritone. Music-Aide would show this characteristic graphically. It is hoped that musicians will thereby be able to notice aspects of their improvising that they might otherwise be unaware of, helping them to overcome bad habits or discover new ways of playing.
“My subject enlarges itself, becomes methodised and defined, and the whole, though it be long, stands almost complete and finished in my mind, so that I can survey it, like a fine picture or a beautiful statue, at a glance.” “Nor do I hear in my imagination the parts successively; I hear them all at once. What a delight this is! All this inventing, this producing, takes place in a pleasing, lively dream.”
Another way that Music-Aide might be able to help is in finding new directions and styles. For example, if a musician noticed that the representation of their improvisations always had an empty space in a particular part of the display, they could work out phrases to play that would move the representation into that space. In this way, perhaps tools such as Music-Aide could stimulate the performer to try a style of playing that they had previously not considered.
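One simple analysis in this spirit is an interval-class profile of a phrase, which is enough to detect, say, a habitual avoidance of the tritone. This is only a sketch of the idea; Music-Aide’s actual analysis is more elaborate than counting consecutive intervals:

```python
from collections import Counter

def interval_class_profile(midi_notes):
    """Count interval classes (0-6 semitones, octave-folded) between
    consecutive notes of a phrase."""
    classes = Counter()
    for a, b in zip(midi_notes, midi_notes[1:]):
        ic = abs(a - b) % 12
        if ic > 6:            # fold intervals 7..11 down to 5..1
            ic = 12 - ic
        classes[ic] += 1
    return classes

def avoids_tritone(midi_notes):
    """True if the phrase never moves by a tritone (interval class 6)."""
    return interval_class_profile(midi_notes)[6] == 0
```

A display built on such a profile could highlight the empty interval classes, pointing the improviser towards the “empty space” mentioned above.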
Though this anecdote may describe a desirable goal for musicians, it is not how performance works in reality. Musicians reflect on their performance and adjust it appropriately. This is borne out by the fact that concert halls are designed to sound good not only for audiences but also for players, and that monitor speakers are set up in front of musicians on stage at live performances. Musicians need to hear their own performance.
A tool which displays a view of the relationship between tempo and loudness rather than harmonic aspects is described by Langner and Goebl [19]. This tool shows a graph with the x-axis representing tempo and the y-axis the dynamic level. As the music is played, a dot moves around within the graph. As the tempo and loudness vary, the dot moves around the screen, leaving a kind of three-dimensional “trail” behind it that very effectively illustrates the high-level shaping of phrases by the performer. For example, using such a tool the performer might identify patterns in their playing – always slowing down when playing at lower dynamic levels perhaps – that they did not detect aurally.
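The display described here reduces to plotting a sequence of (tempo, loudness) points. A rough sketch of how such points could be computed from note onsets and MIDI velocities follows; Langner and Goebl derive loudness from audio and apply smoothing, so this code is only illustrative of the idea:

```python
def tempo_loudness_trail(onsets_sec, velocities, window=4):
    """Compute (tempo_bpm, mean_velocity) points over a sliding window
    of note onsets, as a tempo-loudness display might plot them.

    Assumes one onset per beat, so tempo is beats per minute.
    """
    trail = []
    for i in range(len(onsets_sec) - window):
        span = onsets_sec[i + window] - onsets_sec[i]  # time for `window` beats
        tempo = 60.0 * window / span                   # beats per minute
        loud = sum(velocities[i:i + window]) / window  # mean MIDI velocity
        trail.append((tempo, loud))
    return trail
```

Plotting the successive points, with older points fading, yields the kind of “trail” the tool uses to reveal habitual couplings of tempo and dynamics.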
Providing appropriate means of externalisation for amplifying reflective thinking has been one of the most important challenges [14]. Traditionally, the process of sketching with pen and paper gives an architect, for example, a place and a period of time in which to reflect on his/her thinking process and its results. Musicians, however, have no practical external representation for reflection apart from the produced sound, because sound is invisible and changes over time.
Sketching with pen and paper is an essential process for architects because the information artefact they design on paper develops directly into part of their output; sketching is not an extra load for architects. While paper and pens are available to musicians as well, writing a score is additional work on top of their performance. In order to externalise their mental space, they have to stop performing.
We chose to use the Max/MSP programming environment because of its inbuilt support for audio/visual processing and its visual approach to development, which facilitated the rapid development and trial of prototypes. Also, members of our research group had previously constructed an application which used floor pads to control the display of a type of audio/visual montage, and it was felt this could be adapted reasonably easily to be controlled musically instead.
One of the aims of the projects described in this paper is to provide musicians with practical visual and external representations and interactions that amplify the reflection-on-action process in their performance.
It was interesting to note that the process of developing the application was not significantly different to developing an interactive performance piece. The process involved two people taking one of two roles: Artist and Technologist. In simple terms the Artist would evaluate the prototype and make requests and suggestions to the Technologist who would evaluate their feasibility and where possible implement them. The process was highly collaborative and roles were not rigidly defined, meaning the Technologist would make artistic contributions and the Artist technical suggestions where appropriate.
CURRENT PROJECTS
This section describes two projects currently at different stages of development. One, the Virtual Musical Environment (VME), aims to stimulate a creative, playful approach to improvisation by providing an external representation of musical elements of the musician’s performance in real time. The other, the “mirroring drum” project, also aims to support improvisational performance by providing visualisations of not only the sound features but the entire performance.
Using the virtual musical environment
The VME has both audio and visual components. The visual component is dynamic and is affected by audio input. The visual display is broken up into a matrix of 36 squares. A cursor is constantly displayed in one of the 36 squares and moves from square to square in response to the musical input. A screenshot of the VME playing an abstract video, with the cursor visible towards the top left of the screen, can be seen in Figure 1.
Virtual Musical Environment (VME)
Our approach in this project has been to tackle musical tool development from an ‘artistic’ perspective. That is, we aim to produce tools which encourage a creative, playful approach to music and skill development. Because we wished to align our tools with the Natural Learning Process, it was decided that the only variables affecting the graphical display would be musical (pitch and volume) rather than physical. That is, we are interested in the end result, not the actions required to produce it.
This approach aligns with a theoretical framework for supporting the development of musical skills described by Johnston and Edmonds [16], which places strong emphasis on avoiding simplistic judgements about the music input to the system, focusing instead on providing stimuli to encourage the development of interesting and novel ideas by the musician. Our concept involved the creation of a ‘Virtual Musical Environment’ (VME) for musicians to explore using their instrument. This can be seen as a kind of ‘virtual world’ with which the musician interacts using music to control navigation, rather than a keyboard, mouse or joystick. It was felt that once the method of interaction and the basic mechanisms of control were decided, the actual environment itself could be changed with little difficulty, allowing for maximum flexibility. Thus, while our prototype environment uses video footage and still photos of the surroundings of our lab, together with audio material contributed by other members of our research group, this could all be changed without difficulty to give the virtual world a completely different feel.
The performer can move the cursor horizontally by varying the volume of the sound, with louder notes moving the cursor to the right of the screen and softer notes to the left. Different pitches move the cursor vertically, with higher pitches moving it up the screen and lower pitches moving it down. Thus the musician may move the cursor into desired positions on the screen by carefully choosing pitches and volumes. The program analyses MIDI (Musical Instrument Digital Interface) data describing the pitch and onset velocity (roughly equivalent to loudness) of notes input to the computer. The program may be configured to respond to MIDI data created directly by a keyboard instrument or, in the case of acoustic instruments, input from a microphone can be converted to MIDI data via the Max object ‘fiddle~’ [25]. This cursor control system provides the basic interface for human interaction with the virtual environment, achieved by mapping the cursor’s location on the screen to particular behaviours (see Figure 2). For example, when the cursor occupies the square in the top left corner of the screen, the system records the performed audio for one second. When the cursor is moved to column 1, row 3, this recorded audio is played back in a loop. As can be seen from the figure, different cursor locations trigger four basic behaviours. All behaviours, and the particular squares which trigger them, may be changed relatively easily, allowing the environment to be customised for particular situations without changing the underlying method of controlling the cursor.
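The cursor-control mapping just described can be sketched as follows. The pitch range and threshold values here are illustrative only; in the VME they are set interactively:

```python
def cursor_cell(midi_pitch, midi_velocity,
                pitch_range=(36, 96),
                velocity_thresholds=(20, 40, 60, 80, 100)):
    """Map a note to a (row, column) cell of the 6x6 VME grid.

    Louder notes move the cursor towards the right-hand columns; higher
    pitches move it towards the top rows. All numbers are illustrative.
    """
    lo, hi = pitch_range
    p = min(max(midi_pitch, lo), hi - 1)   # clamp to the configured range
    # Row 0 is the top of the screen, so higher pitches give lower indices.
    row = 5 - (p - lo) * 6 // (hi - lo)
    # Column = number of loudness thresholds the velocity reaches.
    col = sum(midi_velocity >= t for t in velocity_thresholds)
    return row, col
```

A behaviour table keyed by (row, col) could then dispatch the recording and playback actions described above.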
Playback of audio previously recorded
Once real-time recording had occurred, other on-screen cells triggered playback of the recorded audio files.
Audio configuration
As mentioned above, the horizontal position of the on-screen cursor is determined by the volume of the instrumental performance. In order that the system would be versatile enough to work in performance environments containing various levels of ambient background noise, it was necessary to allow the sensitivity of the input microphone to be adjusted when configuring the program. This was achieved by providing four slide controllers which set the levels at which the cursor moves to particular positions (see Figure 3).
Figure 1: VME screenshot showing the cursor in one of the 36 possible screen positions. Here the VME is playing an abstract video.
Figure 2: ‘Virtual Musical Environment’ (VME) visual display, indicating the behaviours triggered when the cursor moves to particular squares: recording live audio (for 1, 3 or 10 seconds), playing or stopping the recorded live-audio loops 1-3, playing prerecorded audio 1-3 (looped), and playing prerecorded video 1-3.
Figure 3: Audio calibration of the VME.
The cursor has six possible horizontal positions, and the slider at the top of Figure 3 sets the volume at which the cursor is placed in the leftmost position. The slider below sets the volume for the square second from the left of screen, and so on. Thus, even though for our initial performance we set the sliders so that increasing volume moved the cursor across the screen from left to right, it would be possible to reconfigure the environment so that quite different cursor behaviours were triggered.
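The slider settings amount to a set of loudness thresholds between adjacent columns. As an illustration of the calibration idea, thresholds could also be derived automatically from a measured ambient-noise floor and peak playing level; this automatic derivation is our sketch, not a feature of the VME:

```python
import math

def calibrate_thresholds(ambient_rms, peak_rms, n_columns=6):
    """Derive loudness thresholds for the cursor's horizontal positions.

    Thresholds are spaced evenly in decibels between the ambient noise
    floor and the loudest expected playing level, one per boundary
    between adjacent columns. Returned as linear RMS values.
    """
    lo_db = 20 * math.log10(ambient_rms)
    hi_db = 20 * math.log10(peak_rms)
    step_db = (hi_db - lo_db) / n_columns
    return [10 ** ((lo_db + step_db * (k + 1)) / 20)
            for k in range(n_columns - 1)]
```

Spacing the thresholds in decibels rather than linear amplitude matches how loudness is perceived, so each column covers a comparable perceptual step.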
For an initial evaluation of the VME, four different basic behaviours were triggered.
Video playback of pre-recorded video files
In the demonstration performance, three squares triggered playback of pre-recorded video. Any video files may be chosen, and some appropriately moody video snippets were selected in this case.
Audio playback of pre-recorded audio files
As in the video playback, three screen icon positions triggered playback of pre-recorded audio. In this instance recordings of generative music by Dave Burraston were selected.
Audio recording of the performer in real-time
When the cursor was positioned on particular cells, recording was triggered. The computer recorded audio directly from the microphone input to a file for later playback. For our performance there were three squares which triggered recording of either one, three or ten seconds. Depending on circumstances and desired effect, longer or shorter durations may be selected.
Preliminary Evaluation
It is intended that prototype tools such as those described in this paper be refined using an approach modelled on action research [17]. In this approach, theory is developed in cycles of theoretically informed action and reflection, as illustrated in Figure 4. Thus this section reports on the development of a prototype based on previously identified requirements, and the next phase will involve evaluation of the prototype. A thorough evaluation will involve more comprehensive interviews and observations; at this stage, however, some preliminary feedback was sought in order to refine aspects of the prototype before evaluation by larger numbers of musicians.
Figure 5: Configuration of the virtual musical environment for preliminary evaluation: the subject, with a microphone, faces a rear-projection video wall, with speakers placed around the space.
Issues which emerged in the interview were grouped into three categories: performance quality, control and practical application.
Figure 4: Phases of investigation and development: 1. theory development; 2. identification of opportunities and requirements; 3. prototype development; 4. evaluation and reflection.
A preliminary evaluation of the prototype VME was therefore provided by a professional musician with a Masters qualification in music performance who has extensive performing credits with major symphony orchestras, professional recording sessions and jazz ensembles in Australia and overseas. In addition to her performance experience, the subject currently teaches or has taught instrumental music at all levels from primary to tertiary, and is therefore well placed to provide expert opinion on the capacity of the prototype to encourage a creative approach to music performance, as well as on its potential use in music teaching. Finally, her command of her instrument is such that she was able to provide good feedback on the quality and consistency of the user interface for controlling the on-screen cursor; her instrumental technique is strong, so any issues of poor responsiveness or user-interface difficulty can be confidently ascribed to the prototype tool rather than to a lack of control of the instrument by the user.
Performance Quality
The subject indicated that improvising while using the VME was significantly easier than improvising unaccompanied, and that it was easier to sustain interest in the improvisation. Largely this was attributed to the fact that working with the VME was somewhat similar to working with a human accompanist, in that various ideas (visual and aural) presented by the environment during the improvisation provided the subject with ideas for new musical directions. The subject felt that the second improvisation (using the VME) would have been more interesting to listen to because of the more diverse range of sounds and ideas it contained. An interesting point was that the subject was unclear whether the creative ideas for the improvisation were coming from within her or from the environment. This could indicate that the balance of the initial ‘seeding content’ of the environment (the pre-recorded audio and video in this case) is very important: there should be enough to stimulate interest and new ideas, but an overabundance of material could lead the musician to feel a loss of ‘ownership’ of the resulting improvisations.
The evaluation was conducted as follows. Firstly, the subject was asked to perform a ‘free’ unaccompanied improvisation; that is, there was no specified harmonic or rhythmic structure, and the subject simply improvised a solo for about two and a half minutes. Secondly, the prototype virtual musical environment was started, with video projected onto a video wall and the audio played through a five-speaker surround sound system. The input microphone was placed close to the subject to reduce interference from audio playing through the speakers (see Figure 5). After the subject was made aware of the basic operations of the environment and had a brief opportunity to familiarise herself with it, a second free improvisation was performed, this time lasting just over six minutes. A semi-structured interview was then conducted to identify issues arising from the use of the prototype, and the interview was analysed using techniques from grounded theory [13].
Control
An important point which arose in the interview was a problem with the responsiveness of the prototype VME to musical input. As mentioned, the pitch and volume of the notes produced by the musician control the position of an on-screen cursor and hence the behaviour of the VME. In performance, the audio/visual components of the environment placed significant demands on the computer’s processors, and this seemed to slow down the response of the pitch analysis object in Max/MSP. This meant that the cursor at times seemed to ‘stick’ in one part of the screen despite a change in pitch or volume. While the subject indicated that this did not adversely impact the quality of the improvisation to a major degree, it was nonetheless a frustration at times and needs to be addressed. A quick solution implemented after the interview was to run the pitch analysis on another computer and send the results via a network to the main machine. This seemed to improve responsiveness, but further work is needed to evaluate the viability of this solution in performance situations.
When it was suggested to the subject that a greater degree of control could be given to the performer, so that certain designated pitches (middle C, for example) could be set to always trigger particular behaviours regardless of volume, she indicated a preference for less rigid control. In fact, she found that the occasional random glitches actually enhanced the performance by injecting an element of uncertainty that she needed to respond to. It may be that even if the technical issues of pitch-change responsiveness are solved, some element of chance should be deliberately incorporated into the VME to keep this characteristic to a desirable extent.
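The networked pitch-analysis workaround can be sketched as a small sender/receiver pair. The transport and message format below (UDP with JSON payloads) are our illustration only; the paper does not specify the mechanism used, and any host/port values would be deployment-specific:

```python
import json
import socket

def send_pitch_estimate(sock, addr, midi_pitch, velocity):
    """Send one pitch/velocity estimate as a small JSON datagram."""
    msg = json.dumps({"pitch": midi_pitch, "vel": velocity})
    sock.sendto(msg.encode("utf-8"), addr)

def recv_pitch_estimate(sock):
    """Block until one estimate arrives; returns (midi_pitch, velocity)."""
    data, _sender = sock.recvfrom(1024)
    payload = json.loads(data.decode("utf-8"))
    return payload["pitch"], payload["vel"]
```

UDP suits this use because a lost or late estimate is better dropped than queued: the cursor should always reflect the most recent note.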
The possibility of using the VME in performance (e.g. in a concert program) appealed to the subject, largely because it would introduce variety and a less traditional approach into concert programs that might otherwise contain music with a narrow range of styles. The novelty of presenting a piece involving audio/visual elements mediated by a computer was seen as something that would interest audiences. “I suppose part of whether I’d use it or not depends on how portable it is and the access to that. For me that’s a big thing when I catch public transport everywhere and I work at on average three venues every day… Is it portable? Is there a smaller version?” (Interview subject)
Practical Application
The subject was asked about potential uses of the VME in her instrumental practice, teaching and performance. Her responses indicated some potential use of this type of tool in all three areas, although she felt that the VME would be an extra tool to be used in particular situations rather than something that would be used constantly across a wide spectrum of her work.
Another important issue which would need to be considered if VMEs were to be used by professional musicians/teachers is that any necessary equipment would probably need to be highly portable, as musicians tend to move from venue to venue. The subject indicated that on an average day she would be practicing, teaching and performing at three different venues and therefore any computer-based tool would need to be small, light and easily transported. At this stage, the VME will run on a high-end laptop without additional microphones, etc but the user-experience would obviously be less immersive than with the set-up used for this evaluation. A possible area for future investigation might be the possible use of mobile devices to provide some of the benefits of the full VME without the need for full-size computers and screens.
“This could be a tool that I would use with those kind of students [who aren’t into traditional forms] because if you just say to them, ‘Right, let’s have a go at improvising- GO!’ it’s just too hard but if they’ve got something else there to help them to get into that and to help set some limits on what they’re doing I suppose they actually find that a lot easier.” (Interview subject) In teaching, it was observed that the VME could be quite useful in encouraging students who are struggling with motivation because they are not interested in the more traditional forms of music on typical instrumental music syllabi. For example, she suggested that a student who was not interested in practicing a classical piece may respond well to some structured use of the VME in lessons and/or at home. This is not to say that it would replace more conventional pieces altogether, but rather it would contribute to maintaining a student’s interest by providing an alternative approach to music making. The subject also indicated that the VME could be useful in introducing students to improvised music in a less-threatening setting.
“Mirroring Drum” project
In this project, we are going to visualise a drummer’s performance including visual actions in his/her performance because this project takes audiences’ viewpoints at a live concert into account. In order to develop musical skills, paying attention to physical features overly might not be useful or even harmful, as the pedagogical arguments have been against it. We agree with it from the viewpoint of developing musical skills. However, in a live performance, not only perfect play that musicians produce but also such elements as deviations, their movements, actions, interactions with other players and audiences play an important role. For musicians on a live stage, their visual feature is a part of their artistic expressions (eg. [3]).
“I don’t feel so inspired to practice [free improvisation] and so maybe in that instance having something like (the VME) might help me to just feel more comfortable just going with the flow and changing my ideas.” The subject was less enthusiastic about using the VME in her own instrumental practice, saying that she would possibly use it in preparation for performances that specifically needed free-improvisation skills but would be unlikely to use it regularly because she rarely performs in such an unstructured, freely-improvised style. A possibility arising from this observation is that it might be useful if the VME
"Mirroring" is considered one of the important issues in the performing arts. Rizzolatti et al. [26] discovered "mirror neurons", which respond to specific motor actions both when they are performed and when they are observed; they found that there was always a link between the effective observed movement and the effective executed movement. Pachet et al. [24] have discussed the importance of being confronted with an evolving mirror of oneself in enhancing musical experience, and developed "the Continuator", an interactive system that imitates a user's musical personality. Fels [9] has implemented several systems that enhance participants' intimacy with his interactive art systems by providing a mirror of the participants.

The importance of observation to human activities has been noted in various fields [35, 2, 18]. A framework called "After-Action Review" (AAR) has been developed and used by the U.S. Army [4]. The original aim of the AAR method was to provide accountability for actions by examining what was supposed to happen in a mission or action, what actually happened, why there was a difference between the two, and what can be learned from the disparities. Applied to music, this kind of review enables musicians to grasp and understand their own situated actions [31], i.e. how they appear to audiences. In the following sections, the design rationale of the system, its implementation and the expected interactions are described.
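Before turning to the implementation, the basic mirroring mechanism can be sketched in code: performance events are buffered while recording is active, recording is toggled by a loud "trigger" hit, and the buffer can be replayed at different speeds or in reverse. The following Python is a minimal illustration only, not the actual system (which is implemented as a Max/MSP patch operating on live audio and video); the event representation, class name and threshold value are our own illustrative assumptions.

```python
class DelayedMirror:
    """Illustrative sketch of a 'mirroring' replay buffer.

    Events recorded while recording is active can be replayed later,
    faster, slower or in reverse. All values here are hypothetical."""

    def __init__(self, trigger_threshold=0.8):
        self.trigger_threshold = trigger_threshold  # amplitude that toggles recording
        self.recording = False
        self.buffer = []  # recorded (timestamp, event) pairs

    def feed(self, timestamp, amplitude, event):
        """Process one incoming performance event.

        A hit louder than the threshold stands in for striking the
        designated tom-tom: it toggles recording rather than being recorded."""
        if amplitude >= self.trigger_threshold:
            self.recording = not self.recording
            return
        if self.recording:
            self.buffer.append((timestamp, event))

    def replay(self, speed=1.0, backward=False):
        """Return buffered events with timestamps rescaled for the replay rate."""
        events = list(reversed(self.buffer)) if backward else list(self.buffer)
        if not events:
            return []
        start = events[0][0]
        return [(abs(t - start) / speed, e) for t, e in events]


mirror = DelayedMirror(trigger_threshold=0.8)
mirror.feed(0.0, 0.9, "trigger")   # loud hit: start recording
mirror.feed(0.5, 0.4, "snare")
mirror.feed(1.0, 0.3, "tom")
mirror.feed(1.5, 0.9, "trigger")   # loud hit: stop recording
print(mirror.replay(speed=2.0))    # → [(0.0, 'snare'), (0.25, 'tom')]
```

In the real system the equivalent decisions (which drum acts as the control surface, how replay speed is set) are made inside the Max/MSP patch, as described in the Implementation section below.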
Implementation

Figure 6 shows an image of the interactive art system with a drum. In order to maximise the practicality of the tool, we adopt readily available devices: a digital video camera, microphones, a notebook PC and a projector. The system is being developed in Max/MSP and is implemented as a stand-alone application (Figure 7).

Figure 7 A Max/MSP patch for the system

A performance is shot with a DV camera. The drummer sends start/stop-recording signals by hitting a particular tom-tom equipped with a small microphone, and the replay speed is also adjusted by hitting a designated tom-tom. As the system is intended for practical use, its components have to be kept as practical as possible: for a drummer, the most natural interface is hitting a tom-tom with a stick, not clicking a virtual button on a screen with a mouse.

Under the current implementation, Max/MSP handles the recorded data as follows:
- The recorded performance is transmitted to a computer and stored.
- The performance is displayed in real time.
- The recorded data is replayed at different speed rates (with a delay, or at double speed over a certain time period, both forward and backward).

We are going to develop the following functions along with analysis of the performance:
- The system replays the last phrase, a phrase selected on demand immediately after the recording-stop signal has been sent, or a randomly selected phrase.
- Replays highlight the trajectory of the drummer's performance, including body and mallet/stick movements.

Expected Interactions
The potential contribution of this work is to explore how mirroring a performance works in improvised playing and what features emerge from it. We also aim to contribute to design theories for tools that amplify reflective thinking in musical performance. There are three expected interactions. First, it is expected that musicians will notice aspects of their playing they were previously unaware of by gaining the audience's viewpoint from the real-time replay. Secondly, replaying the musicians' performance with effects will enhance their understanding of their own performance, as the effects adopted are those which facilitate the analysis of temporal data [33]. Although an analytical approach may not be strictly necessary, it helps musicians grasp what they have done in their performance.
Figure 6 The implementation of the system
Thirdly, the system is also expected to play the role of a stimulant [7]. It can be regarded as a type of "duet" with oneself. In jazz performances, a musician is inspired by other musicians' playing and plays back reactively; in a similar way, the replayed phrase can be a stimulant for the musician's next musical statement. We are going to conduct user studies in order to see whether these expected interactions, and other unexpected interactions that serve to amplify reflective thinking in musical performance, emerge. Issues concerning investigation methods are discussed in the next section.

DISCUSSION

The research reported here grew out of a desire to explore the possibilities for technology to support musicians and music teachers in their work. The applications/environments we describe were designed to encourage an exploratory approach to music making, and other applications in development allow musicians to visualise aspects of their playing and technique in novel ways. Possible enhancements to the prototype systems described here include building a more sophisticated interaction model, which could include more complex analysis by the computer of the music being played. For example, the computer might be programmed to recognise certain musical phrases rather than individual notes. The application of artificial intelligence (AI) techniques such as those described by Rowe [27] could enable a far richer interaction with the virtual environment.

The other issue is evaluation, a difficult problem that HCI researchers must always confront. Hook et al. [15] have argued that what can be evaluated is the process of interaction, not the outcome of the interaction. So the questions to be asked are:
- In what way do users understand the feedback?
- Does the system lead to the desired effects on the overall interaction? If not, why?

The interviewing method and the retrospective report method of protocol analysis [8] may be suitable for analysing the process of interaction. Suchman [31] has claimed that neither questionnaires nor interviews can capture situated actions. She has also questioned the validity of observations of human practices made using recording devices.

Suwa et al. [32] have analysed the process of architectural design. They devised a scheme for coding designers' cognitive actions from video/audio design protocols, and the relations among cognitive processes were systematically described and analysed. Through this analysis, Suwa concluded that the interaction between designers and their sketches provides them with effects such as the reinterpretation of design problems and unexpected discoveries. This method is useful for coding and analysing human cognitive processes.

We are not rejecting the interview and questionnaire methods; however, we consider that what is obtainable through interviews and questionnaires is different from situated actions. Interviewing and the retrospective report method will complement each other, and both will be used in the evaluation of the systems.

CONCLUSION

In this paper we have described on-going projects exploring the possibilities of visualisation for supporting musical performance. The aim is to provide practical tools which enhance players' perception of the music they produce rather than encourage an overly analytical approach. We are currently developing and improving prototype systems based on theoretical frameworks previously applied to music pedagogy, HCI and design. Following the Action Research approach, we will next evaluate the prototypes and reflect upon the validity of the theoretical approach that led to their development. It is expected that this will lead to further refinement of both the theory and the prototypes.

REFERENCES

1. Abell, A. Talks with Great Composers. Replica Books, (2000).
2. Amitani, S. A Method and a System for Supporting the Process of Knowledge Creation. PhD thesis, University of Tokyo, (2004).
3. Blast: http://www.blasttheshow.com/
4. CALL. A Leader's Guide to After-Action Reviews. Technical report, Headquarters Department of the Army, (1993).
5. Card, S. Information Visualization. In Jacko, J.A. and Sears, A. (eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, Lawrence Erlbaum Associates, (2003), 544-582.
6. Dourish, P. What we talk about when we talk about context. Personal and Ubiquitous Computing, 8, 1, (2004), 19-30.
7. Edmonds, E. Artists augmented by agents. Proc. of the 5th International Conference on Intelligent User Interfaces, (2000), 68-73.
8. Ericsson, K.A. and Simon, H.A. Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge, MA, (1993).
9. Fels, S. Designing for intimacy: Creating new interfaces for musical expression. Proc. of the IEEE, 92, 4, (2004), 672-685.
10. Frederiksen, B. Arnold Jacobs: Song and Wind. Windsong Press Limited, Gurnee, Illinois, (1996).
11. Friberg, A. and Battel, G.U. Structural Communication. In Parncutt, R. and McPherson, G.E. (eds.), The Science and Psychology of Music Performance, Oxford University Press, New York, (2002), 199-218.
12. Gallwey, T. The Inner Game of Tennis. Random House, New York, (1974).
13. Glaser, B.G. and Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine, Chicago, (1967).
14. Hori, K. A System for Aiding Creative Concept Formation. IEEE Trans. Systems, Man, and Cybernetics, 24, 6, (1994), 882-894.
15. Hook, K., Sengers, P. and Andersson, G. Sense and sensibility: evaluation and interactive art. Proc. of the Conference on Human Factors in Computing Systems, (2003), 241-248.
16. Johnston, A. and Edmonds, E. Creativity, Music and Computers: Guidelines for Computer-Based Instrumental Music Support Tools. Australasian Conference on Information Systems, Hobart, Tasmania, (2004).
17. Kemmis, S. and McTaggart, R. The Action Research Planner. Deakin University Press, Geelong, (1988).
18. Kohut, D.L. Musical Performance: Learning Theory and Pedagogy. Stipes Publishing L.L.C., Champaign, Illinois, (1992).
19. Langner, J. and Goebl, W. Visualising Expressive Performance in Tempo-Loudness Space. Computer Music Journal, 27, 4, (2003), 69-83.
20. Mamykina, L., Candy, L. and Edmonds, E. Collaborative Creativity. Communications of the ACM, 45, (2002), 96-99.
21. Miller, D.G. and Schutte, H.K. Documentation of the Elite Singing Voice: http://www.vocevista.com/contents.html, (2002).
22. Nishimoto, K. and Oshima, C. Computer Facilitated Creating in Musical Performance. Proc. Scuola Superiore G. Reiss Romoli, (2001).
23. Oliver, W., Yu, J. and Metois, E. The Singing Tree: Design of an Interactive Musical Interface. Proc. Designing Interactive Systems, ACM Press, (1997), 261-264.
24. Pachet, F. and Addessi, A.-R. When Children Reflect on Their Playing Style: The Continuator. ACM Computers in Entertainment, 1, 2, (2004).
25. Puckette, M.S., Apel, T. and Zicarelli, D.D. Real-time audio analysis tools for Pd and MSP. International Computer Music Conference, San Francisco, International Computer Music Association, (1998), 109-112.
26. Rizzolatti, G., Fadiga, L., Gallese, V. and Fogassi, L. Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 2, (1996), 131-141.
27. Rowe, R. Machine Musicianship. MIT Press, London, (2001).
28. Schacter, D.L. Implicit vs. Explicit Memory. In Wilson, R.A. and Keil, F.C. (eds.), The MIT Encyclopedia of the Cognitive Sciences, MIT Press, Cambridge, Massachusetts, (1999), 394-395.
29. Schoen, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, (1983).
30. Stewart, M.D. (ed.). Arnold Jacobs: The Legacy of a Master. The Instrumentalist Publishing Company, Northfield, Illinois, (1987).
31. Suchman, L. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, (1987).
32. Suwa, M., Purcell, T. and Gero, J. Macroscopic analysis of design processes based on a scheme for coding designers' cognitive actions. Design Studies, 19, 4, (1998), 455-483.
33. Takashima, A., Yamamoto, Y. and Nakakoji, K. A Model and a Tool for Active Watching: Knowledge Construction through Interacting with Video. Proc. of INTERACTION: Systems, Practice and Theory, Sydney, Australia, (2004), 331-358.
34. Thorpe, W. Visual feedback of acoustic voice features in singing training. International Conference on Physiology and Acoustics of Singing, Groningen, the Netherlands, (2002).
35. Underhill, P. Why We Buy: The Science of Shopping. Thomson Learning College, (2000).