A Toolkit for Explorations in Sonic Interaction Design

Stefano Delle Monache
IUAV - University of Venice, Venezia, Italy
stefano.dellemonache@gmail.com

Pietro Polotti
University of Verona, Verona, Italy
[email protected]

Davide Rocchesso
IUAV - University of Venice, Venezia, Italy
[email protected]
ABSTRACT
Physics-based sound synthesis represents a promising paradigm for the design of veridical and effective continuous feedback in augmented everyday contexts. In this paper, we introduce the Sound Design Toolkit (SDT), a software package available as a complete front-end application, which provides a palette of virtual lutheries and Foley pits that can be exploited in sonic interaction design research and education. In particular, the package includes polyphonic features and connectivity to multiple external devices and sensors, in order to facilitate the embedding of sonic attributes in interactive artifacts. The present release represents an initial step towards an effective and usable tool for sonic interaction designers.
Keywords: Sonic interaction design, physics-based sound synthesis, user interfaces.

Categories and Subject Descriptors: H.5.2 [Information Interfaces and Presentation]: User Interfaces—Auditory (non-speech) feedback.
1. INTRODUCTION
One might assert that the core of interaction design research is the relationship between action, on one side, and function and meaning, on the other. In recent decades, research in interaction design has developed a multiplicity of theoretical approaches, methods and tools to steer interactive artifacts towards the naturalness of interaction we are used to in everyday environments. Recently, major discussions arose in the research community with respect to the reliability of research methodologies for interaction design. In particular, the debate concerns the relationship between design and science, questioning the points of convergence and divergence of design practices as compared to scientific approaches, in order to highlight fruitful implications for interaction design research [5]. The research community largely agrees that HCI research aimed at supporting design should be grounded in design practice, theory and philosophy. That is, design activities should be carried out as research, and not for research "pursued primarily in service of a theoretical concern" [18]. In the same spirit, we take up Stolterman's assertion, according to which "it is possible to predict the potential success of new approaches, methods, and tools based on how designerly they are" [34].

Sonic interaction design (SID) is positioned at the intersection of auditory display, interaction design, ubiquitous computing and interactive arts. In SID, the role of sound as a natural carrier of information is exploited as an effective means to establish a continuous negotiation with interactive artifacts [21, 30]. Clear examples from everyday life are the vacuum cleaner, wherein sound conveys a strong sense of power and control over the suction, and the use of rhythm in activities such as hammering, rubbing with sandpaper, or cutting vegetables on the chopping board. Research in SID therefore relies on knowledge in ecological acoustics, everyday sound perception and categorization, and sound and music computing, and 1) investigates the iterative process of design for sound creation [6, 35]; 2) considers evaluation as a fundamental stage of sound design [22]; 3) applies different approaches and methods to the prototyping of ideas [17, 29, 27, 20]; 4) aims at developing design-oriented sound synthesis tools for complex interactive applications. In fact, there is a strong need to provide designers with a set of skills and tools that allow them to consciously design the acoustic behaviour of future artifacts.

In this paper, we introduce the Sound Design Toolkit (SDT), a software package consisting of a set of physics-based sound synthesis models, specifically addressed to SID research and education. As a work in progress, the development of the sound synthesis engines, graphical user interfaces (GUIs) and control layers advances together with the realization of interactive workbenches and the evaluation of the system in workshop settings. These activities serve as tools for reflection, in order to collect a useful repertoire of design ideas and concepts [31]. In this way, SID as research advances together with research in sound and music computing.

The paper has the following structure: in Section 2, we consider why a physics-based approach can be suitable in interaction design; Section 3 describes the philosophy underlying the SDT, its features, the GUI design and its implementation; Section 4 briefly treats some interactive applications in which the SDT is employed to generate consistent continuous sound feedback; in Section 5, we draw our conclusions.
2. PHYSICS-BASED SOUND SYNTHESIS: FROM VOLATILITY TO PHYSICALITY
In sound and music computing research, physics-based sound synthesis is not a novelty anymore. In recent years, physical models have been applied to non-musical, everyday sounds as well [7, 10, 28]. According to this approach, sound generation can be described in terms of physical events, as they occur in everyday life. Most experiences take inspiration from the musical field, mainly responding to the needs of composers and musicians who want to approach sound synthesis in terms of the simulation of the physics of acoustic instruments [36]. The idea is that if a virtual instrument is designed in such a way that it behaves much like the actual acoustic instrument, the synthesized sound will be natural and expressive in a performance context. Given the physical consistency of the models, it is straightforward to map their control parameters to continuous physical interactions, and to describe resonators and their interaction modalities by means of physical and geometric properties. Therefore, the intrinsically natural behaviour of physics-based sound models can potentially provide an embodied experience, since the computer-generated sound feedback is energetically consistent with the action [4, 11, 14].

Nonetheless, the creation of functional, continuous sound feedback requires different approaches and tools, which have to be grounded in design practice. The question is how to make a product sensitive to manipulative actions, not as an "intelligent" or knowledgeable subject, but as an object capable of feeding the stimuli back in a dialogic form. In everyday environments, this is especially important for those objects that are designed to provide multiple functions, thus changing their identity over time. In particular, this is the case of objects integrating electronic technologies and computational elements, as in home automation, automotive applications, consumer electronics and entertainment. In these contexts, sound plays a crucial role in characterizing the identity of an object and the experience of use.

Since designers are mainly conversant with vision and visual thinking, current sketching approaches largely involve forms of visual storytelling [6], and the sonic dimension is mainly approached by intuition and creativity. Designers are often unaware of the auditory domain, of its complexity and potential, and they largely ignore sound processing and synthesis methods. Sounds are time-bounded, volatile, and difficult to outline without an appropriate vocabulary. A designer is hardly able to describe sound attributes such as spectrum, transients, and so on. It would be even more complicated to ask a designer to "guess" how to produce the sound he or she imagines, even with the most advanced off-the-shelf software. Indeed, it is quite difficult to sketch expressive sound feedback from scratch: sample-based processing requires a sort of sculptural attitude, while abstract sound synthesis techniques need quite complex control layers, though some interesting tools for the generation of environmental sounds have been recently released1.

A physics-based approach turns out to be closer to designers' thinking: sounds can be directly described in terms of configurations, geometries, materials and their properties, and dynamics of gestures, to the point of allowing an immediate "visualization" and association of the sketched sound with its interactive context, not to mention an intuitive sensory connection with touch. We argue that a physical approach to sound can potentially encourage a certain operational physicality that is much needed in design practice. This approach strongly complements the established method of Foley, a common design practice for audiovisual media [7, 37, 28, 2]. Like Foley artists, sound designers are provided with a palette of virtual sounding objects that can be explored and combined to create compound, elaborate sound events. One may object that designers do not have a deep understanding of mechanics and fluid dynamics, which seems to be needed to make sense of exotic parameters such as the Stribeck velocity or the Reynolds number. That is precisely why an interpretative layer is strongly needed that maps physical descriptions onto a more naïve physics [32]; most of this paper is dedicated to proposing one such layer. Finally, the increasing availability of sensors, actuators and low-cost embeddable CPUs makes physics-based sound synthesis a promising paradigm for the design of interactive continuous sound feedback in everyday contexts.
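To make the idea of such an interpretative layer concrete, here is a deliberately naive sketch in Python: everyday descriptions (material, size) are mapped to low-level modal parameters (frequencies, decay times, gains). The material table, the inverse size-to-pitch law and all numeric values are invented for the example and are not the SDT's actual mapping.

# Invented illustration of an interpretative layer: from "naive physics"
# descriptions to low-level modal parameters (all constants are guesses).
MATERIALS = {
    # material: (frequency factor, base decay time in s, brightness 0..1)
    "wood":    (1.0, 0.08, 0.5),
    "metal":   (1.4, 1.20, 0.9),
    "glass":   (1.8, 0.60, 0.8),
    "plastic": (0.9, 0.05, 0.3),
}

def naive_to_modal(material, size_cm, n_modes=3):
    """Map everyday descriptions to modal frequencies, decays and gains:
    larger objects sound lower; harder materials ring longer and brighter."""
    f_factor, decay, bright = MATERIALS[material]
    f0 = f_factor * 8000.0 / size_cm        # crude inverse size-to-pitch law
    freqs  = [f0 * (1.0 + 0.83 * i) for i in range(n_modes)]  # inharmonic series
    decays = [decay / (1.0 + i) for i in range(n_modes)]      # upper modes die faster
    gains  = [bright ** i for i in range(n_modes)]
    return freqs, decays, gains

print(naive_to_modal("metal", size_cm=12))  # e.g. a small metal bowl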
3. IMPLEMENTATION AND DESCRIPTION
The SDT2 complies with the multi-platform requirements of the European project CLOSED [35], and currently runs in the Max/MSP environment3, though the provided source code can be compiled for Pure Data (Pd)4 as well. The provided physics-based sound models, developed and implemented as externals and patches, can be easily coupled with physical objects and are computationally affordable for real-time applications on ordinary hardware. The sound model algorithms are developed according to three main criteria: 1) auditory perceptual relevance; 2) cartoonification, i.e., simplification of the underlying physics and emphasis on its most relevant aspects, in order to increase both computational efficiency and perceptual sharpness; 3) parametric temporal control, ensuring appropriate, natural and expressive articulations of sound event sequences [28]. As shown in figure 1, most of the provided sound algorithms follow the modular structure "resonator–interactor–resonator", representing the interaction between two resonating objects. The resonators are modeled through modal [1] or digital waveguide [33] techniques. The resulting timbral richness, even when using very basic resonators, is due to the choice of non-linear interaction models. Models of impact [24] and friction [13] have been implemented and exploited as basic sonic events underlying complex sound phenomena. Rolling, bouncing, breaking and crumpling sounds are modeled through complex temporal patterns on top of the impact model, while the friction model is used to simulate rubbing, squeaking and braking sounds. Furthermore, various liquid-related sounds, such as burbling, dripping, pouring and frying, have been implemented using a bubble model [37]. Hence, by means of the SDT, sonically augmented (sounding) objects can be created with an expressive acoustic behaviour, in the sense of the ecological hearing introduced by Gaver [19].
Figure 1: The modular structure implements a feedback communication between interaction and resonator models.
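The following is not SDT code (the toolkit is implemented as Max/MSP externals and patches); it is a minimal numerical sketch, in Python, of the structure in figure 1: a single modal resonator excited by a hammer through a nonlinear contact force in the spirit of [24]. All constants are illustrative.

import numpy as np

def impact_on_mode(sr=44100, dur=0.5,
                   m_h=0.01, v0=1.0,          # hammer mass (kg), strike velocity (m/s)
                   f_res=440.0, t_decay=0.4,  # mode frequency (Hz) and decay time (s)
                   m_r=0.001,                 # modal mass (kg)
                   k=1e6, lam=10.0, alpha=1.5):
    """One modal resonator hit by a hammer; contact force f = k*x^a + lam*x^a*v
    (nonlinear spring plus nonlinear damping, in the spirit of [24])."""
    n, dt = int(sr * dur), 1.0 / sr
    w = 2.0 * np.pi * f_res
    k_r, c_r = m_r * w * w, 2.0 * m_r / t_decay   # modal stiffness and damping
    x_h, v_h = 1e-4, -v0                          # hammer starts just above the surface
    x_r, v_r = 0.0, 0.0                           # resonator displacement and velocity
    out = np.zeros(n)
    for i in range(n):
        comp = x_r - x_h                          # compression depth during contact
        if comp > 0.0:
            f = k * comp**alpha + lam * comp**alpha * (v_r - v_h)
            f = max(f, 0.0)                       # contact can only push, never pull
        else:
            f = 0.0
        v_h += (f / m_h) * dt                     # semi-implicit Euler step
        v_r += ((-f - k_r * x_r - c_r * v_r) / m_r) * dt
        x_h += v_h * dt
        x_r += v_r * dt
        out[i] = x_r
    return out / (np.abs(out).max() + 1e-12)      # normalized mono signal

signal = impact_on_mode()                          # 0.5 s of a 440 Hz "struck" mode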
1 See for example the "Sounds of nature" VST plugins developed by xoxos (www.xoxos.net), "Tapestrea" developed by the Sound Lab of Princeton University (http://taps.cs.princeton.edu/), the "Tassman" software by Applied Acoustics Systems (http://www.applied-acoustics.com/tassman/overview/), or Andy Farnell's Pd patches for everyday-sound synthesis (www.obiwannabe.co.uk).
2 The SDT-0.5.0 can be freely downloaded at http://www.soundobject.org/BasicSID/Gamelunch/Resources.html.
3 www.cycling74.com.
4 www.puredata.info.
Figure 2: The proposed taxonomy of everyday sounds, organized in four levels: low-level models (impact, friction and fracture for solids; bubble and fluid flow for liquids; turbulence and explosion for gasses), basic events and textures, derived processes, and simulation examples (e.g., footsteps on gravel, can crushing, paper crumpling, squeaking door, vacuum cleaner).
Our main objective is to provide a tool that can progressively 1) grow away from the musical metaphor; 2) respond to the requirements of design thinking; 3) be sufficiently immediate and easy to use in the sketching and prototyping stages of the design process; 4) represent a valuable platform in SID activities. Given these guidelines, the main features of the current release of the SDT are:

• a high-level, general template of the available sound models, organized according to a taxonomy of everyday sounds;
• the clustering of the control parameters into high- and low-level GUIs, according to how meaningful and immediate their effects are, thus providing clear monitoring of the sound models;
• an intuitive understanding of the control parameters, facilitated by a common-sense, perceptually based naming and ranging of the physical parameters;
• the possibility to allocate multiple instances (polyphony) of the sound models, in order to facilitate the design of compound sound events;
• a comfortable integration of external devices, allowing interactive control of the sound models.

3.1 Navigating the SDT package

Figure 2 shows the proposed taxonomy: the hierarchy is established from low-level sound events to more complex, patterned or compound processes. The corresponding low-level sound models are presented at the bottom of the graph, while the second level shows basic events and sound textures directly derivable from them. Processes that can be related to temporal patterns of basic events and textures are presented in the third level. Lastly, the top level contains several examples of the implemented simulations, while dashed connections represent expected dependencies for simulations yet to be developed. When the SDT application is started, a front-end shows up displaying the available sound models among low-level basic events and sound textures, while the derived processes are obtained through their combinations. Furthermore, such a taxonomy provides the designer with an initial palette of sound events to browse in the early stage of sound sketching. A possible encoding of this taxonomy as a data structure is sketched below.
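The taxonomy lends itself to a simple hierarchical data structure. The sketch below encodes part of figure 2 in Python; the event labels come from the figure and from the text of Section 3, while the exact nesting and the browse helper are our own illustrative guesses.

# The event labels come from figure 2 and Section 3; the nesting is a guess.
TAXONOMY = {
    "solids": {
        "impact":   {"basic events": ["hitting", "dropping"],
                     "derived processes": ["rolling", "bouncing", "breaking", "crumpling"],
                     "simulations": ["footsteps on gravel", "can crushing",
                                     "paper crumpling", "falling coin"]},
        "friction": {"basic events": ["rubbing", "sliding"],
                     "derived processes": ["squeaking", "braking"],
                     "simulations": ["squeaking door", "bowed string", "rubbed glass"]},
    },
    "liquids": {
        "bubble":   {"basic events": ["dripping", "splashing"],
                     "derived processes": ["burbling", "pouring", "filling"],
                     "simulations": ["frying"]},
    },
    "gasses": {
        "turbulence": {"basic events": ["whooshing", "sucking"],
                       "simulations": ["vacuum cleaner"]},
        "explosion":  {"basic events": ["popping", "exploding"]},
    },
}

def browse(category):
    """Print the low-level models of one category with their sound events."""
    for model, levels in TAXONOMY[category].items():
        for level, events in levels.items():
            print(f"{model} / {level}: {', '.join(events)}")

browse("solids")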
3.2 User-centered features

Often, even the sketching stage of a sound design process requires the preparation and collection of a large amount of code, with consequent issues of redundancy, version control, cumbersome tweaks, latency, communication with sensing devices, exponential proliferation of control maps, and so forth.
The aim of the SDT is to provide interaction designers with a sound design application that runs in common real-time environments, such as Max/MSP and Pd, and that can be easily exploited in their activities. For this purpose, the overall framework of the SDT is designed to automatically rebuild the audio and control connections where needed. In practice, it is possible to manage multiple instances of the same sound model within a single working session, without recourse to any editing of Max/MSP patches. In figure 3, three independent impact sound models are loaded.
Figure 3: Three independent impact models.

In the model manager patch, the sound model instances are added indexically and can be browsed, removed or restored at the user's choice. Furthermore, volume controls and audio recording tools are automatically generated and grouped in a separate window, providing a global mixer.

Programming effective control maps is indeed a hard task. Control maps give shape to the interaction, and represent the core of the coupling between action and function [25]. Thus, major efforts were undertaken to supply a sufficiently flexible environment, wherein various combinations of sensor-data-to-physical-parameter mappings can be easily accessed and tested. Preliminary and extensive work on the sound models led to a selection of those physical parameters that are most effective in shaping sonic behaviours. In general, these parameters have been chosen as interactively controllable dimensions, and are displayed in the high-level interface of the sound models. Independent MIDI and OSC communication patches for each instance of a sound model can be loaded in the manager patch. As shown in figure 4, each MIDI/OSC control patch is provided with its own mapping setup section. A sensor data input/output matrix allows the user to choose a primary mapping configuration, either one-to-one or one-to-many (a code sketch of such routing is given after figure 5). Each specific interactive parameter map can be edited and saved as a Max/MSP patch. New maps are automatically listed and can be recalled from the drop-down menu. Such a feature makes it possible to collect a set of different solutions that can be easily combined at the user's choice. In practice, the user has an auditory analogue of the board used by designers to rapidly compare a large number of drawn sketches.

Figure 4: OSC control patch of instance 1 of the impact model. The red bulleted matrix manages the routing of incoming OSC signals, while the colored boxes give access to the single parameter mapping patches and allow the user to edit, save and recall them. In this figure, the hammer mass control mapping is shown as an example.

Finally, figure 5 shows the modular organization of the software package in folders and subfolders, providing a general and clear framework of sub-patches that can be accessed for further development and updates. Far from being exhaustive, this high-level organization of the SDT helps solve a set of bottlenecks encountered when making massive use of the sound models: arrangement of patches, control through a variety of sensors, sound mixing, and so forth. Certainly, this framework does not exempt the user from having a basic knowledge of software programming, but it facilitates direct access to, and use of, the physics-based sound models and their capabilities in an interactive context. This, too, is a work in progress: experiences of use are being collected in order to improve the "plug & play" characteristics of the SDT environment, and to investigate and exploit the sonic aesthetics of the physics-based sound synthesis paradigm.

Figure 5: The SDT folder tree: images, sub-patches, general preferences, presets and sound banks are located in the pertinent subfolders. The Library folder is further split into as many sub-folders as there are available sound models, and contains all patches and abstractions. In the Presets folder, sub-folders referring to each sound model contain separate lower-level folders where Temporal control and Timbral palette presets are saved as XML files. Finally, in the Sound folder, samples can be recorded as AIFF files.
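By way of illustration, the sketch below drives two parameters of two impact-model instances from a single sensor stream over OSC, i.e., a one-to-many mapping. It assumes the python-osc package on the sensing side and a udpreceive object in the Max/MSP patch; the /sdt/... address patterns and the scalings are invented, not the SDT's actual namespace.

# Assumes: pip install python-osc; Max/MSP patch listening via [udpreceive 7400].
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)

def on_sensor(force):
    """One pressure reading drives two parameters of two model instances
    (one-to-many). Addresses and scalings are hypothetical."""
    client.send_message("/sdt/impact/1/hammer_mass", 0.001 + 0.05 * force)
    client.send_message("/sdt/impact/2/stiffness", 1e4 + 1e6 * force)

for reading in (0.0, 0.3, 0.8):   # stand-in for a real sensor callback
    on_sensor(reading)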
3.3 Clarity and intuitiveness

Physics-based sound synthesis generally requires the user to tune large sets of variables, whose mutual relationships are often difficult to perceive at a glance, especially in real-time situations where immediacy of intervention is needed. In order to ease access to the sound models, the available control parameters are assigned either to low- or high-level interfaces. The current GUI thus provides a functional hierarchy of parameters: the most effective control parameters are made intelligible in the high-level interface, though they remain accessible in the low-level interface too. As shown in figure 6, the high-level interfaces are organized in three main sections: 1) physical parameters; 2) temporal control; 3) timbral palette.

Figure 6: High-level interface of the impact model: physical parameters, temporal control and timbral palette sections.

The physical parameters section encompasses the most direct and effective parameters: for instance, in the case of the impact model, it includes the contact stiffness and the shape of the contact surface (parameters controlling the interaction in figure 1), the hammer mass (parameter controlling resonator 1 in figure 1), and global factors of frequency, decay time and gain (parameters controlling resonator 2 in figure 1). The temporal control section provides an "off-line" sequencing tool that allows the simulation of gestures or patterns. Indeed, creating interesting and effective timbres is important but not sufficient: in continuous interaction, the efficacy of the sound feedback largely depends on how the articulation of the sound properties is coupled with the human action over time. As an example, in the friction model this section allows control of the temporal patterns of the rubbing velocity and exerted pressure. The coupling of the two forces strongly affects the expressiveness of the resulting gesture; typical examples are rubbing a glass harmonica, or drawing a bow across a violin string. The timbral palette section allows the user to manage, store, recall, delete or interpolate configurations of sound parameters. The interpolation of reference sonic states (i.e., of parameter values) is itself one of the interactively controllable dimensions; in previous works, the exploration of this feature proved to be an effective and expressive means to convey information [12, 29]. Configurations are saved as presets in XML files, thus allowing them to be read, modified or generated from other XML-compatible software (a minimal sketch of such preset handling is given after figure 7). Finally, the low-level interface gives access to all the available controls of the model, for further tuning and refining of the sound (see figure 7). For example, it is possible to activate, deactivate or mute each resonant mode and singly modify its frequency, decay time and gain.

Figure 7: Impact model: the low-level interface gives access to all the available parameters.
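As an illustration of the timbral palette mechanics, the following Python sketch interpolates between two parameter configurations and saves the result as an XML preset. The parameter names and the XML layout are invented for the example; the SDT's actual preset schema may differ.

import xml.etree.ElementTree as ET

def interpolate(a, b, t):
    """Linear interpolation between two timbral configurations; the weight t
    (0..1) can itself be an interactively controlled dimension, as in the SDT."""
    return {name: (1.0 - t) * a[name] + t * b[name] for name in a}

def save_preset(params, path):
    """Write a configuration as a small XML preset (layout is invented)."""
    root = ET.Element("preset")
    for name, value in params.items():
        ET.SubElement(root, "param", name=name, value=str(value))
    ET.ElementTree(root).write(path)

soft = {"stiffness": 1e4, "decay_s": 0.9, "gain": 1.0}   # hypothetical parameters
hard = {"stiffness": 1e7, "decay_s": 0.2, "gain": 0.8}
save_preset(interpolate(soft, hard, 0.5), "impact_mid.xml")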
4. EMBODYING THE SDT
The SDT has been extensively used in various interactive installations, workshops and research activities [29]. In the Gamelunch installation [9, 26], an interactive dining table, mock-ups and design experiences have been sketched around well-defined themes and constraints, with the aim of exploring the interplay between the sensory channels, testing and improving the sound models, and finally developing a logic and a vocabulary for the SDT. The interactive environment includes a) a table with embedded sensors, acquisition boards and loudspeakers; b) bowls and dishes; c) graspable sensor-augmented objects such as a water jug, a tray, cutlery and bottles. Continuous interactions, such as cutting, piercing, pouring and stirring, are captured in order to drive the control parameters of some SDT sound models and elaborate feedback with immediate, non-symbolic and pre-attentive meaning. Following a basic design approach [3, 15, 16, 23], all the realizations have been systematized as exercises around specific themes. In particular, three themes have been investigated: a) continuous sound feedback for mechanical connections; b) supportive and expressive feedback for cyclic continuous actions; c) contradictory feedback for continuous action [29]. In general, the provided sound feedback stressed the richness of the physics-based sound models, to the point that some users who experienced the Gamelunch tried to override the design space, attempting expressive misuses of the augmented objects. The experiences and activities carried out around the sonically augmented dining scenario have been collected and published on the website http://www.soundobject.org/BasicSID/. Recently, the SDT was successfully employed in a new project on gesture sonification, outside the context for which it was originally conceived5. Finally, a recent study explored the connections between narrative discourse, aesthetic attributes and dynamic sonic properties of interactive commodities. The investigation focused on how specific expressive qualities, distilled from a discussion of film sound cases, could be transferred to the possibilities afforded by the physics-based approach in the SDT. The chosen narratives transferred well to the SDT implementation, making use of only two sound models and very simple control maps, and without sound processing. Results showed the effectiveness of the SDT in founding an appropriate, meaningful gesture-sound relationship, and in constructing semantics that related strongly to the designer's intention [8].
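As a purely illustrative example of such a continuous mapping, the sketch below converts a jug's tilt angle into control parameters of a hypothetical bubble-based pouring model; the parameter names and ranges are invented and do not reproduce the Gamelunch's actual maps.

import math

def tilt_to_pouring(tilt_deg):
    """More tilt -> stronger flow: denser, larger bubbles and a higher level."""
    flow = max(0.0, math.sin(math.radians(min(tilt_deg, 90.0))))
    return {
        "bubble_rate_hz": 5.0 + 120.0 * flow,   # bubbles per second
        "mean_radius_mm": 1.0 + 4.0 * flow,     # larger bubbles sound lower
        "gain": flow,
    }

for angle in (0.0, 15.0, 60.0):                 # e.g. readings from an accelerometer
    print(angle, tilt_to_pouring(angle))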
5 http://www.visualsonic.eu.
5. CONCLUSION AND FUTURE WORK
The Sound Design Toolkit (SDT) is a set of perceptually oriented and physically consistent tools for sound synthesis, aimed at the next generation of sound designers. The SDT consists of several sound synthesis algorithms, implemented as Max/MSP externals and patches and organized according to a taxonomy of everyday sounds. The possibility to map the control parameters of the models to continuous physical interactions, and their correspondence to the physical and geometric properties of the simulated objects, make the SDT an effective tool for designing expressive sound feedback in real-time applications. The SDT has recently been revised and refined in an iterative process aimed at improving its effectiveness and usability in interactive contexts. To this end, the development proceeded in parallel with the realization of several sonically augmented everyday objects, which were used for testing the sound models while investigating the expressiveness of continuous sound feedback in the user-object interaction loop. Future work will include publishing the SDT in various web repositories (e.g., www.maxobjects.com), thus collecting feedback and suggestions from a larger user base, with a view to improvements and additions. Further realizations of sonic interactive everyday objects will help broaden the scope of the SDT. Moreover, all these experiences are being progressively systematized in order to collect a set of basic tutorials and exercises in sonic interaction design.
6. ACKNOWLEDGEMENTS
This work is part of the research carried out within two EU-funded projects: CLOSED (Closing the Loop of Sound Evaluation and Design), FP6-NEST-PATH no. 29085, and NIW (Natural Interactive Walking), FP7-ICT-2007 FET Open no. 222107.
7. REFERENCES
[1] J.-M. Adrien. The missing link: modal synthesis. In G. De Poli, A. Piccialli, and C. Roads, editors, Representations of Musical Signals, pages 269–298. MIT Press, Cambridge, MA, USA, 1991.
[2] V. T. Ament. The Foley Grail. Focal Press, 2009.
[3] G. Anceschi. Basic Design, fondamenta del design. In G. Anceschi, M. Botta, and M. A. Garito, editors, L'ambiente dell'apprendimento – Web design e processi cognitivi, pages 57–67. McGraw Hill, Milano, Italia, 2006.
[4] N. Armstrong. An Enactive Approach to Digital Musical Instrument Design. PhD thesis, Princeton University, 2006.
[5] C. Bartneck. Notes on design and science in the HCI community. Design Issues, 25(2):46–61, 2009.
[6] B. Buxton. Sketching User Experiences: Getting the Design Right and the Right Design. Morgan Kaufmann, 2007.
[7] P. R. Cook. Real Sound Synthesis for Interactive Applications. A. K. Peters, Ltd., Natick, MA, USA, 2002.
[8] S. Delle Monache, D. Hug, and C. Erkut. Basic exploration of narration and performativity for sounding interactive commodities. In Proceedings of the 5th International Workshop on Haptic and Audio Interaction Design (HAID), Copenhagen, Denmark, 2010.
[9] S. Delle Monache, P. Polotti, S. Papetti, and D. Rocchesso. Sonically augmented found objects. In Proc. New Interfaces for Musical Expression, pages 154–157, Genova, Italy, 2008.
[10] K. van den Doel. Physically based models for liquid sounds. ACM Trans. Appl. Percept., 2(4):534–546, 2005.
[11] P. Dourish. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge, MA, 2001.
[12] C. Drioli, P. Polotti, D. Rocchesso, S. Delle Monache, K. Adiloglu, R. Anniés, and K. Obermayer. Auditory representations as landmarks in the sound design space. In Proc. Conference on Sound and Music Computing, pages 315–320, Porto, Portugal, July 2009.
[13] P. Dupont, V. Hayward, B. Armstrong, and F. Altpeter. Single state elasto-plastic friction models. IEEE Trans. Automat. Contr., 47(5):787–792, 2002.
[14] G. Essl and S. O'Modhrain. An enactive approach to the design of new tangible musical instruments. Organised Sound, 11(3):285–296, 2006.
[15] A. Findeli. Rethinking design education for the 21st century: Theoretical, methodological, and ethical discussion. Design Issues, 17(1):5–17, 2001.
[16] K. Franinovic. Toward basic interaction design. Elisava Temes de Disseny Journal, 2009.
[17] K. Franinovic and Y. Visell. Strategies for sonic interaction design: from context to basic design. In Proceedings of the 14th International Conference on Auditory Display, Paris, France, 2008.
[18] W. Gaver, J. Bowers, T. Kerridge, A. Boucher, and N. Jarvis. Anatomy of a failure: how we knew when our design went wrong, and what we learned from it. In CHI '09: Proceedings of the 27th International Conference on Human Factors in Computing Systems, pages 2213–2222, New York, NY, USA, 2009. ACM.
[19] W. W. Gaver. What in the world do we hear? An ecological approach to auditory event perception. Ecological Psychology, 5:1–29, 1993.
[20] D. Hug. Investigating narrative and performative sound design strategies for interactive commodities. In S. Ystad, M. Aramaki, R. Kronland-Martinet, and K. Jensen, editors, Auditory Display – 6th International Symposium, CMMR/ICAD 2009, Copenhagen, Denmark, May 18–22, 2009, Revised Papers, volume 5954 of Lecture Notes in Computer Science. Springer, 2010.
[21] A. Jylhä and C. Erkut. A hand clap interface for sonic interaction with the computer. In CHI EA '09: Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems, pages 3175–3180, New York, NY, USA, 2009. ACM.
[22] G. Lemaitre, O. Houix, Y. Visell, K. Franinovic, N. Misdariis, and P. Susini. Toward the design and evaluation of continuous sound in tangible interfaces: The Spinotron. International Journal of Human-Computer Studies, 67, 2009. Special issue on Sonic Interaction Design, to appear.
[23] E. Lupton and J. C. Phillips. Graphic Design: The New Basics. Princeton Architectural Press, 2008.
[24] D. W. Marhefka and D. E. Orin. A compliant contact model with nonlinear damping for simulation of robotic systems. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 29(6):566–572, 1999.
[25] E. R. Miranda and M. Wanderley. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. A-R Editions, Inc., 2006.
[26] P. Polotti, S. Delle Monache, S. Papetti, and D. Rocchesso. Gamelunch: forging a dining experience through sound. In CHI '08: Extended Abstracts on Human Factors in Computing Systems, pages 2281–2286, New York, NY, USA, 2008. ACM.
[27] M. Rinott and I. Ekman. Using vocal sketching for designing sonic interactions. In DIS 2010: Proceedings of the Designing Interactive Systems Conference, 2010.
[28] D. Rocchesso, R. Bresin, and M. Fernström. Sounding objects. IEEE Multimedia, 10(2):42–52, April 2003.
[29] D. Rocchesso, P. Polotti, and S. Delle Monache. Designing continuous sonic interaction. International Journal of Design, 3(3), December 2009.
[30] D. Rocchesso, S. Serafin, F. Behrendt, N. Bernardini, R. Bresin, G. Eckel, K. Franinovic, T. Hermann, S. Pauletto, P. Susini, and Y. Visell. Sonic interaction design: sound, information and experience. In CHI '08: Extended Abstracts on Human Factors in Computing Systems, pages 3969–3972, New York, NY, USA, 2008. ACM.
[31] D. A. Schön. The Reflective Practitioner. Basic Books, London, UK, 1983.
[32] B. Smith and R. Casati. Naive physics: An essay in ontology. Philosophical Psychology, 7(2):225–244, 1994.
[33] J. O. Smith. Principles of digital waveguide models of musical instruments. In M. Kahrs and K. Brandenburg, editors, Applications of Digital Signal Processing to Audio and Acoustics, pages 417–466. Kluwer Academic Publishers, 1998.
[34] E. Stolterman. The nature of design practice and implications for interaction design research. International Journal of Design, 2(1):55–65, April 2008.
[35] P. Susini, N. Misdariis, G. Lemaitre, D. Rocchesso, P. Polotti, K. Franinovic, Y. Visell, and K. Obermayer. Closing the loop of sound evaluation and design. In Proceedings of the 2nd ISCA/DEGA Tutorial and Research Workshop on Perceptual Quality of Systems, Berlin, 2006.
[36] V. Välimäki, J. Pakarinen, C. Erkut, and M. Karjalainen. Discrete-time modelling of musical instruments. Reports on Progress in Physics, 69(1):1–78, January 2006.
[37] K. van den Doel, P. G. Kry, and D. K. Pai. FoleyAutomatic: physically-based sound effects for interactive simulation and animation. In SIGGRAPH '01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 537–544, New York, NY, USA, 2001. ACM.