WaveCloud: an open source room acoustics simulator using the finite difference time domain method

Jonathan Sheaffer
Dept. of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel.

Bruno Fazenda
School of Computing, Science and Engineering, University of Salford, UK.

Summary
Among the various wave-based simulation methods, the finite difference time domain (FDTD) method provides a good trade-off between applicability, accuracy and computational efficiency. With the growing availability of computing power and recent advances in parallel architectures, FDTD has become a feasible choice for room acoustics simulation. Operating in the discrete time domain, it is advantageous for obtaining wideband solutions within a single simulation, and is also able to tackle problems involving transient behavior and time variability, for example with moving sources, moving receivers and a moving medium. This paper portrays the WaveCloud project, an open-source modeling framework designed with room acoustics simulation in mind. Practical concerns such as computation time, handling arbitrary room geometries, and modeling directional sources and receivers are addressed, and their current implementations in WaveCloud are discussed. Consideration is also given to three-dimensional sound field visualization as well as auralization strategies. Future work and remaining challenges, which now form part of the collaborative open-source WaveCloud project, are also described.
PACS no. 43.55.Ka, 43.58.Ta

1. Introduction
The finite difference time domain (FDTD) method is a time-stepping wave-based method for solving differential equations, in acoustics most notably the wave equation or Euler's linearized equations. Whilst the roots of the FDTD method date back several decades [1], in acoustics modeling it is not considered as mature as geometrical methods, or frequency domain wave methods such as the Finite Element [2] and Boundary Element [3] methods. In room acoustics research there are several documented cases where the FDTD method was employed, e.g. see [4, 5, 6, 7, 8]; however, the method is not widely used in industry, most likely due to the unavailability of a commercial or community-supported software package. Ideally, such a software package would allow its user to:
1. Model wave propagation in a domain to which frequency dependent boundary conditions can be applied, with minimal numerical errors, and at a reasonable computation time.

(c) European Acoustics Association

2. Model arbitrary room geometries.
3. Model acoustic sources as directional, frequency dependent entities.
4. Capture acoustic quantities, such as pressure and particle velocity, at single or multiple receiving positions.
5. Capture binaural room impulse responses (BRIRs) that may be directly used for auralization.
6. Visually inspect the time-varying soundfield in a clear and efficient way.
Recently, a freely available FDTD toolbox for Matlab was released [9], aimed at modeling loudspeaker-boundary interaction. Other software packages reported in the literature [10, 11, 12] potentially include some advanced features but, unfortunately, have not been made available to the general research community, at least to the knowledge of the authors. An exception is a computer program reported in [13, 14] which is available for download; however, its source code was not made open. Implementing all of the aforementioned aspects requires a great deal of work, and is possible in an open-source project only with support from the general research community. Accordingly, the long term goal of the WaveCloud project is to establish

FORUM ACUSTICUM 2014 7-12 September, Krakow

an open-source software framework for scientists and engineers wishing to utilize the FDTD method, with emphasis on enclosed spaces. With the aim of attracting users as well as new developers, this paper describes the current state of WaveCloud with regard to the aforementioned requirements. Section 2 explains the general structure of the WaveCloud framework, and an implementation on general-purpose graphics hardware (GPGPU) is discussed in Section 3. Approaches for handling boundaries are then reviewed in Section 4, followed by a discussion on source and receiver modeling in Sections 5 and 6. Finally, example simulation results are presented in Section 7. A code sample for using the WaveCloud toolbox is also provided in Appendix A.

2. The modeling framework
The WaveCloud project, whose general structure is shown in Figure 1, is a multi-environment software package intended to integrate with the typical workflow used in scientific research. The WaveCloud engine provides a programming interface which can be used directly inside Matlab (a Python interface is currently work in progress), thus allowing the user to easily write scripts for FDTD simulation. A sample WaveCloud script is shown in Appendix A. The interface also provides functions to load, voxelize and embed arbitrary geometries in the FDTD grid, based on pre-designed CAD drawings saved in the STL file format (cross-conversion from other popular vector formats is straightforward). Inclusion of numerous sources¹ is optional, making it possible to model more composite sound reproduction paradigms such as line arrays and wavefield synthesis. Accordingly, the user defines all required simulation parameters inside a Matlab struct and then passes it on to the computational engine. Depending on the available computer architecture, the engine then executes the simulation either on the hosting computer's CPU or, if possible, on a dedicated graphics processor (GPGPU). Once the simulation has completed, the engine outputs variables containing the recorded pressure and particle velocity at the defined receiving positions. Because computation does not happen in real time, WaveCloud saves the soundfield data as a library of Visualization Toolkit (VTK) files, which can then be opened in an external visualization application for further analysis.
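Since the Python interface is still work in progress, the following sketch shows how the parameter struct described above might look on the Python side. All field names are purely illustrative, mirroring the Matlab struct layout used in Appendix A; the eventual Python API may differ.

```python
# Hypothetical sketch of a Python-side parameter set mirroring the Matlab
# struct passed to the WaveCloud engine. Field names are illustrative only.

def new_model():
    """Return a parameter set with sensible defaults (hypothetical layout)."""
    return {
        "scheme": "SLF",          # numerical scheme code
        "samplingFreq": 40000,    # temporal sample rate in Hz
        "runTime": 1.0,           # simulated duration in seconds
        "room": {"length": 4.0, "width": 4.0, "height": 3.0},
        "sources": [],            # list of (x, y, z, amplitude, type)
        "receivers": [],          # list of (x, y, z)
    }

def validate(model):
    """Basic sanity checks before handing the parameters to the engine."""
    assert model["samplingFreq"] > 0 and model["runTime"] > 0
    for key in ("length", "width", "height"):
        assert model["room"][key] > 0
    # every source must lie inside the room
    for x, y, z, *_ in model["sources"]:
        assert 0 <= x <= model["room"]["length"]
        assert 0 <= y <= model["room"]["width"]
        assert 0 <= z <= model["room"]["height"]
    return True

model = new_model()
model["sources"].append((1.0, 1.0, 1.5, 1.0, "phys"))
model["receivers"].append((2.0, 2.0, 1.5))
validate(model)
```

Keeping all parameters in one plain mapping, as the Matlab struct does, makes a simulation fully described by a single serializable object.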

3. GPGPU acceleration
Even with highly optimized numerical methods, the bandwidth of the FDTD method is constrained, to

¹ The total number of possible sources is limited only by the physical size of the modeled domain.

Sheaffer and Fazenda: WaveCloud FDTD

Figure 1: General structure of the WaveCloud open software package: preprocessing in Matlab/Python* feeds, via the API and MEX interface, the WaveCloud engine (C++ CPU kernel and C/CUDA GPU kernel); data are output to ParaView for postprocessing and visualisation. *Python API is work in progress.

some extent, by numerical dispersion. For example, with the interpolated wideband scheme [15], the 2% dispersion error limit lies at a frequency of 0.186fs, both in terms of accuracy and isotropy. Thus, to simulate the entire audible spectrum, one would need a sample rate of over 100 kHz. With the more widely known standard rectilinear scheme, achieving this would require a sample rate of over 250 kHz, as the dispersion limit is at 0.075fs. It thus appears that even with modern FDTD methods, oversampling is required in order to yield accurate results over a wide band of frequencies. Naturally, oversampling results in a significant increase in problem size, which requires more computational resources. Being grid-based and time-stepping, the explicit-scheme FDTD method is highly suitable for parallelization on multicore computer architectures. Harnessing the power of graphics processors for general purposes (GPGPU) is gaining popularity in many computational sciences, with recent evidence of successful finite difference applications in electrodynamics [16] and seismic modeling [17]. The introduction of the CUDA programming language [18] provides an attractive solution for implementing algorithms on GPGPUs with a comfortable learning curve, allowing scientists and engineers to quickly adapt their codes to run on graphics hardware. In acoustics, an implementation for 2D structures has been proposed by Southern and colleagues [19], and more recently 3D problems have been addressed in [20, 21, 22]. Some authors have also suggested novel numerical methods which are related to FDTD but exploit the strengths of GPU computing more efficiently [23]. Currently, only a simple element-wise GPU algorithm is employed in WaveCloud, as described in [20, 24]. Nonetheless, much more efficient realizations [25, 26] have been proposed since the first coding of the WaveCloud engine, and these could be implemented in the future.
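The sample rates quoted above follow directly from the dispersion limits. A quick check, assuming c = 343 m/s and the schemes' usual 3D Courant limits (λ = 1 for IWB, λ = 1/√3 for SRL) — standard values for these schemes, though not restated in this paper:

```python
import math

# Dispersion limits quoted in the text: the 2% error limit lies at 0.186*fs
# for the interpolated wideband (IWB) scheme and 0.075*fs for the standard
# rectilinear (SRL) scheme. To cover the audible band up to 20 kHz:
f_max = 20e3
fs_iwb = f_max / 0.186          # "over 100 kHz"
fs_srl = f_max / 0.075          # "over 250 kHz"

# Corresponding spatial steps at each scheme's 3D Courant limit,
# assuming c = 343 m/s, X = c*T for IWB and X = sqrt(3)*c*T for SRL.
c = 343.0
X_iwb = c / fs_iwb
X_srl = math.sqrt(3.0) * c / fs_srl
print(round(fs_iwb), round(fs_srl))                    # 107527 266667
print(round(X_iwb * 1e3, 2), round(X_srl * 1e3, 2))    # 3.19 2.23 (mm)
```

The grid then holds roughly (room dimension / X)³ nodes, which is why the memory and throughput of GPGPUs become attractive at these sample rates.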


4. Boundaries
Computer-aided architectural designs are usually stored in vector graphics formats, which normally contain vertex locations defining the geometry of different objects. Thus, solid objects are not represented directly, but are rendered from the vector definitions by the graphics software being used. In order to include such a data format in an FDTD simulation, the model geometry must be discretized according to the spatial parameters of the grid [13]. The process of converting a vector definition into such a discrete volumetric representation is known in the field of computer graphics as voxelization, and is typically performed by means of ray tracing; see [27] for a description of the voxelization method used in WaveCloud.
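The even-odd parity idea behind ray-based voxelization can be sketched in a few lines. This toy version is not the algorithm of [27]; it assumes a closed, watertight triangle mesh and casts one axis-aligned ray per grid column:

```python
import numpy as np

# Toy even-odd voxelizer: for every (x, y) column of grid nodes, collect the
# z values where a z-aligned ray crosses the surface triangles, then mark a
# node as solid when an odd number of crossings lies below it.

def ray_z_hits(tris, x, y):
    """z values where the vertical line through (x, y) crosses the triangles."""
    zs = []
    for a, b, c in tris:
        # Solve for barycentric (u, v) in the xy-plane (the ray is z-aligned).
        m = np.array([[b[0] - a[0], c[0] - a[0]],
                      [b[1] - a[1], c[1] - a[1]]])
        if abs(np.linalg.det(m)) < 1e-12:
            continue                        # triangle is edge-on to the ray
        u, v = np.linalg.solve(m, np.array([x - a[0], y - a[1]]))
        if u >= 0.0 and v >= 0.0 and u + v <= 1.0:
            zs.append(a[2] + u * (b[2] - a[2]) + v * (c[2] - a[2]))
    return zs

def voxelize(tris, n, dx):
    """Boolean occupancy grid of n**3 nodes with spacing dx (cell centres)."""
    solid = np.zeros((n, n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            zs = ray_z_hits(tris, (i + 0.5) * dx, (j + 0.5) * dx)
            for k in range(n):
                below = sum(1 for z in zs if z < (k + 0.5) * dx)
                solid[i, j, k] = (below % 2) == 1   # odd crossings => inside
    return solid

# Closed tetrahedron: origin plus the three unit points.
A, B, C, D = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(A, B, C), (A, B, D), (A, C, D), (B, C, D)]
grid = voxelize(tetra, n=4, dx=0.25)
print(grid[0, 0, 0], grid[3, 3, 0])   # True False
```

Production voxelizers additionally handle rays grazing edges and vertices, non-watertight STL exports, and the much larger grids implied by realistic sample rates.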

Figure 2: Computer model of the Elmia concert hall, which is typically used in acoustics simulation software tests [28]. Left: original vector model. Right: voxelized model with a spatial resolution of X = 29.75 mm, corresponding to a sample rate of fs = 20 kHz in an SRL scheme.

An inherent problem of voxelized models, or more accurately of non-conformal boundary models, is that oblique surfaces become staircase-represented, as can be seen in Figure 2. If the spatial sample period is small enough in comparison to the considered wavelength, then a good approximation is possible. A more accurate approach is to employ conformal boundary models [29, 30] or finite volume techniques [31], which in the context of WaveCloud remain a good possibility for future work. In non-conformal methods, once the model has been voxelized, each boundary node is classified according to the specific boundary condition which is applied to it, and a boundary impedance value (or values, in the case of frequency dependent boundaries) is assigned to it. This classification process is essential, as each type of boundary node (e.g. front face, right-back edge, front-back-left corner) involves solving a different explicit boundary update equation. This approach, which is a physically correct way to solve the problem, is easily applied to most model geometries. However, if parts of the model involve geometrical shapes of high complexity, then one may


encounter cases where it is difficult to uniquely assign a boundary node to an update equation. If such an ambiguity is not resolved, the stability of the boundary model may be compromised. This is especially evident in interpolated schemes, as for the standard rectilinear method the update equation for any re-entrant (outer) corners or edges reduces to the update equation for air [32]. WaveCloud currently addresses this issue by means of an automatic node type classification algorithm which is executed in a preprocessing stage. To optimize the boundary update process for GPU execution, Webb and Bilbao [22] suggested a simpler boundary formulation which requires classification into only three types of boundaries. This has the potential to resolve some issues arising from node classification ambiguities, and is planned for a future release of WaveCloud. Most realistic cases of room acoustics involve elements which need to be modeled as frequency dependent quantities, typically the radiation patterns of sound sources, boundary impedances, or receiver directivity patterns. As described in [15], an effective way to handle frequency dependent boundaries is to employ digital impedance filters, which can be designed according to analytical principles or empirical data. This approach was taken in [21], yet it was suggested that when modeling rooms of high geometrical complexity on a GPGPU, it might be simpler to run multiple frequency independent simulations with different boundary impedance values, as shown in Figure 3. In [33], this approach was applied to modeling frequency dependent sources as well as boundaries. Its main advantage was shown to be simplicity and generalizability to different types of frequency dependent elements. Its main shortcoming is that only real impedance boundaries can be included, and that it presents an inherent trade-off between computational efficiency and frequency resolution.
Accordingly, further development in WaveCloud should include the digital impedance filter method as originally proposed by Kowalczyk and van Walstijn [15].
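The node classification step described above can be illustrated with a toy version that types each air node next to the boundary by counting its solid axial neighbours. This is a deliberate simplification: WaveCloud's actual classifier must also record *which* neighbours are solid, since e.g. "front face" and "right face" nodes solve differently oriented update equations.

```python
import numpy as np

# Toy boundary-node classification: an air node is typed by how many of its
# six axial neighbours are solid (1 -> face, 2 -> edge, 3 -> corner).
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def classify(solid):
    """Return an int grid: 0 interior air, 1 face, 2 edge, 3 corner."""
    nx, ny, nz = solid.shape
    node_type = np.zeros(solid.shape, dtype=int)
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                if solid[i, j, k]:
                    continue                 # only air nodes get an update type
                n_solid = sum(solid[i + di, j + dj, k + dk]
                              for di, dj, dk in NEIGHBOURS)
                node_type[i, j, k] = min(n_solid, 3)
    return node_type

# A solid block with an air cavity: cavity nodes touching one wall are faces,
# two walls are edges, three walls are corners.
solid = np.ones((6, 6, 6), dtype=bool)
solid[1:5, 1:5, 1:5] = False
types = classify(solid)
print(types[2, 2, 1], types[2, 1, 1], types[1, 1, 1])   # 1 2 3
```

The ambiguity discussed in the text arises when complex geometry produces neighbour configurations (e.g. diagonally touching solids) that this simple count cannot map to a unique update equation.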

Figure 3: General structure of the multiband method.
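The recombination stage of the multiband method in Figure 3 can be sketched as follows: each of the K frequency independent runs contributes only its own band to the final response. Ideal brickwall FFT filters are used here for simplicity; [24] describes the actual filter design.

```python
import numpy as np

# Sketch of the multiband post-combination step: the room is simulated K
# times, each run using a real-valued boundary impedance representative of
# one frequency band; the K output responses are then band-limited with
# complementary (here: brickwall FFT) filters and summed.

def combine_multiband(responses, band_edges, fs):
    """responses: K equal-length arrays; band_edges: K+1 ascending Hz values."""
    n = len(responses[0])
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for h, lo, hi in zip(responses, band_edges[:-1], band_edges[1:]):
        H = np.fft.rfft(h)
        H[(freqs < lo) | (freqs >= hi)] = 0.0   # keep this run's band only
        out += np.fft.irfft(H, n)
    return out

# Demo: two "runs" that agree inside their own bands reassemble exactly.
fs, n = 8000, 1024
t = np.arange(n) / fs
target = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 1500 * t)
runs = [target.copy(), target.copy()]       # pretend both bands were simulated
y = combine_multiband(runs, [0.0, 1000.0, 4001.0], fs)
print(np.allclose(y, target, atol=1e-8))    # True
```

The trade-off noted in the text is visible here: finer frequency resolution means more bands, and each band costs one full simulation run.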




5. Sources
General considerations regarding the physics and realization of FDTD sources were recently discussed by a number of authors [34, 35]. Drawing from their findings, hard sources, soft sources and physically constrained sources are included in WaveCloud, as well as a tool for shaping the excitation signal itself. Directional sources have been studied in [36] for the related Digital Waveguide Mesh method; however, this approach has not yet been adapted for use in FDTD. Escolano and colleagues [37] proposed a method to model directional sources in FDTD methods by employing an array of sinusoidal monopoles in the near field of a virtual source. The method has been extended to broadband sources in [38], and requires pre-processing in the frequency domain in order to calculate the weights for the near field array. In [39], weights for first order differential sources were derived analytically, effectively providing a simple means of modelling sources with first order directivity patterns. This method was also shown to operate well under frequency dependent modeling conditions [24]. Currently, directional sources are not explicitly included in the WaveCloud toolbox; however, first order directional sources can be trivially implemented using a dual-node array.
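Why a dual-node array yields first order directivity can be seen in free field: two closely spaced monopoles driven in anti-phase behave as a dipole (pattern ∝ cos θ), and mixing in an in-phase component steers between omni and dipole. This is a free-field sketch of the principle, not WaveCloud code:

```python
import numpy as np

# Far-field magnitude of two point sources at +/- d/2 on the z axis. With
# w_minus = -w_plus the pair is a dipole; for kd << 1 its pattern approaches
# |cos(theta)|, the first-order figure-of-eight.

def dual_node_pattern(k, d, w_plus=1.0, w_minus=-1.0):
    """|far-field pressure| versus angle theta from the array axis."""
    theta = np.linspace(0.0, np.pi, 181)
    phase = k * (d / 2.0) * np.cos(theta)       # path-length difference
    p = w_plus * np.exp(1j * phase) + w_minus * np.exp(-1j * phase)
    return theta, np.abs(p)

k = 2 * np.pi * 100 / 343.0                     # 100 Hz, c = 343 m/s assumed
d = 0.01                                        # 10 mm spacing, so kd << 1
theta, dip = dual_node_pattern(k, d)            # anti-phase pair: dipole
# Normalized pattern matches |cos(theta)| to within the kd << 1 error:
print(np.allclose(dip / dip[0], np.abs(np.cos(theta)), atol=1e-3))  # True
```

On the FDTD grid, d is the grid spacing and the two "nodes" are the two excitation points, so the approximation holds only well below the dispersion limit where kd remains small.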

6. Binaural receivers
Another challenge in FDTD simulation of room acoustics is modeling binaural receivers. Unfortunately, this is not as straightforward as in geometrical methods, because of the wave nature of the FDTD method. A versatile yet indirect method for auralization is to capture and spatially encode the output of an FDTD model [40]. Using this approach, nth-order ambisonic signals are obtained, which can be further post-processed to generate auralizations. This is most commonly achieved using a loudspeaker array, although methods of transcoding ambisonics into binaural signals have been suggested in the literature, e.g. see [41, 42]. A simpler way to approximate a binaural response is to record sound pressure at two spaced receivers on the grid. However, in the absence of an efficient shadowing object between the two receivers, interaural level differences (ILDs) are not correctly reproduced. Murphy and Beeson [43, 44] have shown that in a 2D Digital Waveguide Mesh, approximation of correct ITD values is possible by embedding a circular object between two spaced receivers, and Sheaffer et al. [45] extended this approach to 3D and showed that ILD cues are also correctly reproduced. A more complete way of modeling HRIRs, including all binaural and monaural cues, was successfully achieved in [46, 47] using specialized FDTD schemes for the propagation of sound in inhomogeneous media.
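The limitation of two bare spaced receivers can be made concrete in free field: a plane wave from azimuth θ arrives at the two points with a time difference of d·sin θ/c, so ITD cues exist, but the level at each unshadowed point is identical, so ILD cues do not. Illustrative arithmetic only (d and c are assumed values, not from the paper):

```python
import math

# Free-field interaural time difference between two bare points d metres
# apart, for a plane wave from azimuth theta. Without a shadowing object
# between them, the level difference is zero at all angles, which is why
# [43]-[45] embed an obstructing body to restore ILD.

def itd_spaced(theta_deg, d=0.18, c=343.0):
    """ITD in seconds for two omnidirectional points d metres apart."""
    return d * math.sin(math.radians(theta_deg)) / c

for az in (0, 30, 90):
    print(az, round(itd_spaced(az) * 1e6))   # microseconds: 0, 262, 525
```

The ~0.5 ms maximum here is of the order of human ITDs, which is why the spaced-receiver shortcut is usable for lateralization even before head shadowing is modeled.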


Figure 4: Geometry of the KEMAR manikin used in this study. A: polygon-reduced laser scans. B: voxelized for 176.4 kHz (X = 3.37 mm). C: voxelized for 65 kHz (X = 9.15 mm). D: voxelized for 44.1 kHz (X = 13.5 mm). E: voxelized for 32 kHz (X = 18.59 mm).

Whilst their results are in good agreement with measured responses, employing such an approach in room acoustics FDTD is challenging. First, the spatial period required to achieve such accuracy is of the order of a single millimeter, which is computationally expensive given a uniformly sampled room. Furthermore, it is not clear whether sufficient accuracy can be achieved using the efficient numerical schemes more commonly used for room acoustics simulation, as they are principally different and normally employed in a coarser grid setup, as was discussed in [45]. Very recently, Sheaffer et al. [48] suggested directly modeling a binaural listener in an FDTD simulation by means of plane-wave decomposition of the soundfield and reconstruction of a binaural response in the spherical harmonics domain. Currently, it is possible to model binaural receivers in WaveCloud only by directly embedding an actual model of a human head in the grid. In WaveCloud, a laser scan of a KEMAR manikin² was polygon-reduced and voxelized as shown in Figure 4. The amount of spatial detail which is preserved after discretizing the model depends on the temporal sample rate, and as such, the shape of the head approaches that of a sphere as the sampling frequency is reduced. Due to copyright concerns, the KEMAR model is not distributed with WaveCloud, and a rigid sphere model is provided as an alternative. Nonetheless, the code for embedding the KEMAR model is provided for the benefit of users who wish to obtain the KEMAR geometry directly from [49]. To gain a better understanding of the merits and shortcomings of using a laser-scanned head against a rigid sphere model in FDTD, readers are referred to

² Laser scan courtesy of Yuvi Kahana and the ISVR, University of Southampton.



[45]. In the near future, the binaural receiver model recently proposed by Sheaffer et al. [48] is planned to be coded into WaveCloud.
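The spatial steps listed in Figure 4 are consistent with running an SRL-type scheme at its 3D Courant limit, X = √3·c/fs. A quick check, assuming c ≈ 343 m/s (the paper does not state its exact value of c, so results match the caption only to within rounding):

```python
import math

# Spatial step of a standard rectilinear scheme at the 3D Courant limit,
# X = sqrt(3) * c / fs, with an assumed c = 343 m/s. This is why the head
# geometry in Figure 4 loses detail as the sample rate drops.

def spatial_step_mm(fs, c=343.0):
    return math.sqrt(3.0) * c / fs * 1e3

for fs in (176400, 65000, 44100, 32000):
    print(fs, round(spatial_step_mm(fs), 2))
```

At 176.4 kHz the step is about 3.4 mm (fine pinna detail survives), while at 32 kHz it grows to roughly 18.6 mm, at which point the voxelized head is close to a sphere.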

7. Output and visualization
In the context of numerical modelling, the importance of visualization is twofold. First and foremost, it provides visual feedback allowing the user to efficiently inspect time-varying processes across the entire soundfield. Secondly, it is instrumental in tracking down modeling mistakes, such as erroneous boundary geometries and source and receiver placements. In WaveCloud, only limited visualization can be achieved in real time; more complex visualization is performed offline using ParaView [50], an open-source application aimed at scientific visualization. Figure 5 shows a complete volume rendering of a modelled soundfield captured 20 ms after simulation onset, as generated by WaveCloud. As the 3D soundfield is visually complex, a more informative means of visualisation is to slice the volume along one of its principal axes and plot the 2D pressure on that plane. Figures 6 and 7 depict the soundfield on the planes normal to the z and y directions, respectively.
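The per-frame VTK export that feeds ParaView can be sketched with the legacy ASCII flavour of the format. WaveCloud's actual export may use a different (e.g. binary or XML) VTK variant, so this is illustrative only:

```python
import numpy as np

# Minimal sketch: write one time step of pressure as a legacy ASCII VTK
# structured-points file; a numbered series of such files loads in ParaView
# as an animation.

def write_vtk_pressure(path, p, dx):
    """p: 3D pressure array (nx, ny, nz); dx: grid spacing in metres."""
    nx, ny, nz = p.shape
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("WaveCloud pressure frame\n")
        f.write("ASCII\n")
        f.write("DATASET STRUCTURED_POINTS\n")
        f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
        f.write("ORIGIN 0 0 0\n")
        f.write(f"SPACING {dx} {dx} {dx}\n")
        f.write(f"POINT_DATA {nx * ny * nz}\n")
        f.write("SCALARS pressure float 1\n")
        f.write("LOOKUP_TABLE default\n")
        # VTK expects x varying fastest, i.e. Fortran (column-major) order.
        np.savetxt(f, p.flatten(order="F"), fmt="%.6e")

p = np.zeros((8, 8, 4))
p[4, 4, 2] = 1.0                      # a single excited node, for illustration
write_vtk_pressure("frame_0001.vtk", p, dx=0.02975)
```

ASCII is convenient for inspection but large at realistic grid sizes, which is one reason a tailored export/visualization pipeline (as discussed for future work) is attractive.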

Figure 6: Top-slice (Z-normal) visualisation of the FDTD model of the Elmia concert hall, at 20 ms from simulation onset.

Figure 7: Side-slice (Y-normal) visualisation of the FDTD model of the Elmia concert hall, at 20 ms from simulation onset.

Figure 5: Volume rendering of the FDTD model of the Elmia concert hall, at 20 ms from simulation onset.

At this time, communication between WaveCloud and ParaView is performed manually, meaning that the user needs to define the appropriate object pipeline inside ParaView. Since WaveCloud requires only specific visualization features, a customized ParaView derivative is planned for future work. This would allow WaveCloud to communicate directly with a tailor-made engine, allowing for a simpler visualization process.

8. Concluding Remarks
This paper has provided a brief introduction to the WaveCloud modeling framework. The current state of the project was reviewed with regard to the framework structure, computational engine, boundaries, sources, receivers and options for soundfield visualization. Most importantly, missing features and ideas for future development were outlined. WaveCloud is available online for download at http://wavecloud.jonsh.net, and readers wishing to participate in future development are invited to contact the authors.

Acknowledgement
Presentation of this work at Forum Acusticum was made possible with the financial support of the ARUP Room Acoustics Bursary for 2014.

References
[1] K. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Transactions on Antennas and Propagation, vol. 14, no. 3, pp. 302–307, 1966.
[2] F. Ihlenburg, Finite Element Analysis of Acoustic Scattering, vol. 132. Springer, 1998.
[3] R. Ciskowski and C. Brebbia, Boundary Element Methods in Acoustics. Computational Mechanics Publications, Southampton, 1991.

[4] J. LoVetri, D. Mardare, and G. Soulodre, "Modeling of the seat dip effect using the finite-difference time-domain method," The Journal of the Acoustical Society of America, vol. 100, p. 2204, 1996.
[5] I. Drumm, J. Hirst, and R. Oldfield, "A finite difference time domain approach to analysing room effects on wave field synthesis reproduction," in Audio Engineering Society Convention 124, May 2008.
[6] S. Sakamoto, H. Nagatomo, A. Ushiyama, and H. Tachibana, "Calculation of impulse responses and acoustic parameters in a hall by the finite-difference time-domain method," Acoustical Science and Technology, vol. 29, no. 4, pp. 256–265, 2008.
[7] R. Collecchia, M. Kolar, and J. Abel, "A computational acoustic model of the coupled interior architecture of ancient Chavín," in Audio Engineering Society Convention 133, 2012.
[8] T. Lokki, A. Southern, S. Siltanen, and L. Savioja, "Acoustics of Epidaurus - studies with room acoustics modelling methods," Acta Acustica united with Acustica, vol. 99, no. 1, pp. 40–47, 2013.
[9] A. Hill and M. Hawksford, "Visualization and analysis tools for low-frequency propagation in a generalized 3D acoustic space," Journal of the Audio Engineering Society, vol. 59, no. 5, pp. 321–337, 2011.
[10] M. Beeson and D. Murphy, "RoomWeaver: A digital waveguide mesh based room acoustics research tool," in Int. Conf. on Digital Audio Effects (DAFx'04), pp. 268–273, 2004.
[11] J. J. Lopez, J. Escolano, and B. Pueo, "Simulation of complex and large rooms using a digital waveguide mesh," in Audio Engineering Society Convention 123, October 2007.
[12] T. Lokki, A. Southern, and L. Savioja, "Studies on seat-dip effect with 3D FDTD modeling," in Proc. Forum Acusticum, Aalborg, Denmark, June 27 - July 1, pp. 1517–1522, 2011.
[13] I. Drumm and Y. Lam, "Development and assessment of a finite difference time domain room acoustic prediction model that uses hall data in popular formats," in INCE Conference Proceedings, vol. 209, pp. 211–220, 2007.
[14] I. Drumm, "A hybrid finite element/finite difference time domain technique for modelling the acoustics of surfaces within a medium," Acta Acustica united with Acustica, vol. 93, no. 5, pp. 804–809, 2007.
[15] K. Kowalczyk and M. van Walstijn, "Room acoustics simulation using 3-D compact explicit FDTD schemes," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 1, pp. 34–46, 2011.
[16] P. Sypek, A. Dziekonski, and M. Mrozowski, "How to render FDTD computations more effective using a graphics accelerator," IEEE Transactions on Magnetics, vol. 45, no. 3, pp. 1324–1327, 2009.
[17] D. Michéa and D. Komatitsch, "Accelerating a three-dimensional finite-difference wave propagation code using GPU graphics cards," Geophysical Journal International, vol. 182, no. 1, pp. 389–402, 2010.
[18] Nvidia, "CUDA programming guide." Online, 2008.
[19] A. Southern, D. Murphy, G. Campos, and P. Dias, "Finite difference room acoustic modelling on a general purpose graphics processing unit," in Audio Engineering Society Convention 128, May 2010.
[20] J. Sheaffer and B. Fazenda, "FDTD/K-DWM simulation of 3D room acoustics on general purpose graphics hardware using compute unified device architecture (CUDA)," in Proceedings of the Institute of Acoustics, vol. 32, Institute of Acoustics, 2010.
[21] L. Savioja, "Real-time 3D finite-difference time-domain simulation of low- and mid-frequency room acoustics," in 13th Int. Conf. on Digital Audio Effects, vol. 1, p. 75, 2010.
[22] C. Webb and S. Bilbao, "Computing room acoustics with CUDA - 3D FDTD schemes with boundary losses and viscosity," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 317–320, IEEE, 2011.
[23] R. Mehra, N. Raghuvanshi, L. Savioja, M. Lin, and D. Manocha, "An efficient GPU-based time domain solver for the acoustic wave equation," Applied Acoustics, vol. 73, no. 2, pp. 83–94, 2012.
[24] J. Sheaffer, B. Fazenda, D. Murphy, and J. Angus, "A simple multiband approach for solving frequency dependent problems in numerical time domain methods," in Proceedings of Forum Acusticum, pp. 269–274, S. Hirzel, 2011.
[25] C. J. Webb, "Computing virtual acoustics using the 3D finite difference time domain method and Kepler architecture GPUs," in Proc. Stockholm Musical Acoustics Conf. (SMAC), Stockholm, Sweden, 2013.
[26] B. Hamilton and C. J. Webb, "Room acoustics modelling using GPU-accelerated finite difference and finite volume methods on a face-centered cubic grid," in Proc. Digital Audio Effects (DAFx), Maynooth, Ireland, 2013.
[27] S. Patil and B. Ravi, "Voxel-based representation, display and thickness analysis of intricate shapes," in Ninth International Conference on Computer Aided Design and Computer Graphics, IEEE, 2005.
[28] I. Bork, "A comparison of room simulation software - the 2nd round robin on room acoustical computer simulation," Acta Acustica united with Acustica, vol. 86, no. 6, pp. 943–956, 2000.
[29] S. Dey and R. Mittra, "A locally conformal finite-difference time-domain (FDTD) algorithm for modeling three-dimensional perfectly conducting objects," IEEE Microwave and Guided Wave Letters, vol. 7, no. 9, pp. 273–275, 1997.
[30] J. Tolan and J. Schneider, "Locally conformal method for acoustic finite-difference time-domain modeling of rigid surfaces," The Journal of the Acoustical Society of America, vol. 114, p. 2575, 2003.
[31] S. Bilbao, "Modeling of complex geometries and boundary conditions in finite difference/finite volume time domain room acoustics simulation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 7, pp. 1524–1533, 2013.
[32] K. Kowalczyk and M. van Walstijn, "Formulation of locally reacting surfaces in FDTD/K-DWM modelling of acoustic spaces," Acta Acustica united with Acustica, vol. 94, no. 6, pp. 891–906, 2008.
[33] J. Sheaffer, B. Fazenda, and J. Angus, "Computational modelling techniques for small room acoustics (A)," in Proceedings of the 1st CSE Doctoral School Research Conference, University of Salford, November 2010.
[34] D. T. Murphy, A. Southern, and L. Savioja, "Source excitation strategies for obtaining impulse responses in finite difference time domain room acoustics simulation," Applied Acoustics, vol. 82, pp. 6–14, 2014.
[35] J. Sheaffer, M. van Walstijn, and B. Fazenda, "Physical and numerical constraints in source modeling for finite difference simulation of room acoustics," The Journal of the Acoustical Society of America, vol. 135, no. 1, pp. 251–261, 2014.
[36] H. Hacihabiboglu, B. Gunel, and A. Kondoz, "Time-domain simulation of directive sources in 3-D digital waveguide mesh-based acoustical models," IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 5, pp. 934–946, 2008.
[37] J. Escolano, J. Lopez, and B. Pueo, "Directive sources in acoustic discrete-time domain simulations based on directivity diagrams," The Journal of the Acoustical Society of America, vol. 121, no. 6, pp. EL256–EL262, 2007.
[38] J. Escolano, J. Lopez, and B. Pueo, "Broadband directive sources for acoustic discrete-time simulations," The Journal of the Acoustical Society of America, vol. 126, p. 2856, 2009.
[39] A. Southern and D. Murphy, "Low complexity directional sound sources for finite difference time domain room acoustic models," in Audio Engineering Society Convention 126, May 2009.
[40] A. Southern, D. Murphy, and L. Savioja, "Spatial encoding of finite difference time domain acoustic models for auralization," IEEE Transactions on Audio, Speech, and Language Processing, vol. PP, no. 99, p. 1, 2012.
[41] J. Daniel, J.-B. Rault, and J.-D. Polack, "Ambisonics encoding of other audio formats for multiple listening conditions," in Audio Engineering Society Convention 105, Audio Engineering Society, 1998.
[42] R. Nishimura and K. Sonoda, "B-format for binaural listening of higher order ambisonics," in Proceedings of Meetings on Acoustics, vol. 19, p. 055025, Acoustical Society of America, 2013.
[43] D. Murphy and M. Beeson, "Modelling spatial sound occlusion and diffraction effects using the digital waveguide mesh," in Proc. AES 24th International Conference on Multichannel Audio, pp. 26–28, 2003.
[44] D. Murphy and M. Beeson, "The KW-boundary hybrid digital waveguide mesh for room acoustics applications," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 2, pp. 552–564, 2007.
[45] J. Sheaffer, C. Webb, and B. Fazenda, "Modelling binaural receivers in finite difference simulation of room acoustics," in Proceedings of the 21st International Congress on Acoustics (ICA), 2013.
[46] T. Xiao and Q. Liu, "Finite difference computation of head-related transfer function for human hearing," The Journal of the Acoustical Society of America, vol. 113, p. 2434, 2003.
[47] P. Mokhtari, H. Takemoto, R. Nishimura, and H. Kato, "Computer simulation of HRTFs for personalization of 3D audio," in 2nd Intl. Symp. on Universal Communication, pp. 435–440, IEEE, 2008.
[48] J. Sheaffer, M. van Walstijn, B. Rafaely, and K. Kowalczyk, "A spherical array approach for simulation of binaural impulse responses using the finite difference time domain method," in Proceedings of Forum Acusticum 2014, 2014.
[49] Y. Kahana, "Numerical modelling (1997-2001)," available at http://resource.isvr.soton.ac.uk/FDAG/VAP/html/nmh_hrtf.html, 2001.
[50] J. Ahrens, B. Geveci, and C. Law, "ParaView: An end user tool for large data visualization," in The Visualization Handbook (C. D. Hansen and C. R. Johnson, eds.), Elsevier, 2005.


A. Appendix: sample WaveCloud script In this appendix, a sample MATLAB script for solving a room using WaveCloud is given. Its purpose is to demonstrate how a WaveCloud model can be programmed, and to provide a quickstart guide which could be used until more thorough documentation is published. First, we initialize a new MATLAB struct myModel: Listing 1: Define new model myModel=newModel ;

% C o n s t r u c t new model

Next, we define some global modeling parameters: Listing 2: Global parameters myModel=setSLF ( myModel ) ; myModel . sampli ngFreq =40000; myModel . runTime =1;

% S e t n u m e r i c a l scheme % S e t sample r a t e % Simulation length ( s )

Note that one can replace setSLF with any other scheme code, e.g. setIWB, to change the numerical scheme being used. The next step is to define calculation parameters:

Listing 3: Calculation parameters

  myModel.gpuOptions.isSP = true;          % true if single precision
  myModel.onGPU = true;                    % true for GPU kernel
  myModel.compOptions.standalone = true;
  myModel.showAnimate = false;             % true for visualisation

The standalone parameter is set to false only for debugging purposes which require that a larger portion of the code runs from within MATLAB. The showAnimate parameter enables coarse visualization inside MATLAB, whereas generation of VTK files for offline visualization is accomplished by using the saveAnimate parameter. We define the dimensions of the model by setting:

Listing 4: Model extents (meters)

  myModel.room.length = 4;
  myModel.room.width = 4;
  myModel.room.height = 3;

The next step is to define the type and position of the receiver:

Listing 5: Receiver settings

  myModel.elements.recX = 2;
  myModel.elements.recY = 2;
  myModel.elements.recZ = 1.5;
  myModel = setRecBinaural(myModel, 0);
  myModel.elements.headImpedance = 90;

The setRecBinaural function places two spaced pressure probes at the positions of the ears (a single pressure/velocity receiver can be defined instead of a binaural receiver by using setRecMono instead), and headImpedance sets the specific boundary impedance of the head object. Next, the source parameters are defined:

Listing 6: Source settings

  myModel.elements.srcShape = 'maxflat';
  myModel.elements.sourceAmplitude = 250e-6;
  myModel.elements.srcFc = 0.186;

The srcShape flag tells WaveCloud how to design the excitation function (e.g. 'Gaussian' could also be used), sourceAmplitude is the peak amplitude, and srcFc is the normalized high cutoff frequency of the function. To design a physically-constrained source, one may use the dedicated function designPCS instead. The next step is to tell WaveCloud to calculate some intermediate variables which are necessary for subsequent functions:
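To make the role of srcShape and srcFc concrete, the sketch below designs a Gaussian excitation pulse of the kind commonly used in FDTD simulation. It is an illustration only: WaveCloud's internal 'maxflat' and 'Gaussian' designs may differ, the -60 dB cutoff criterion is an assumption, and srcFc is assumed here to be normalized to the Nyquist frequency (so srcFc = 0.186 at fs = 40 kHz gives fc = 3720 Hz).

```python
import numpy as np

def gaussian_pulse(fc, fs, atten_db=60.0):
    """Gaussian pulse whose magnitude spectrum is atten_db down at fc.
    |G(f)| ~ exp(-(2*pi*f*sigma)^2 / 2); solve that for sigma at fc."""
    sigma = np.sqrt(atten_db * np.log(10.0) / 10.0) / (2.0 * np.pi * fc)
    t0 = 5.0 * sigma                        # delay so the pulse starts near zero
    n = np.arange(int(np.ceil(2.0 * t0 * fs)))
    return np.exp(-((n / fs - t0) ** 2) / (2.0 * sigma ** 2))

# Assumed normalization: fc = srcFc * fs / 2 = 0.186 * 40000 / 2 = 3720 Hz
pulse = gaussian_pulse(0.186 * 40000 / 2.0, 40000)
```

A pulse designed this way is smooth and band-limited, which avoids exciting poorly resolved high-frequency grid modes where dispersion error is largest.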

FORUM ACUSTICUM 2014 7-12 September, Krakow

Sheaffer and Fazenda: WaveCloud FDTD

Listing 7: Intermediate calculation

  myModel = calcVars(myModel);

Next, the mesh is created for the domain:

Listing 8: Create the grid

  myModel = meshRect(myModel, 415.6);

The second parameter represents the characteristic impedance of the boundary truncating the grid, which here is set to 415.6 to imitate absorbing boundaries at the edges of the domain. To overlay an arbitrary room geometry on the grid, we call the function:

Listing 9: Load geometry file

  myModel = loadGeom_stl(myModel, 'geometry.stl', 8000);

where geometry.stl is the 3D geometry file in STL format, and the last parameter denotes the characteristic acoustic impedance of the geometry's boundaries, in this case 8000. Thus, to import a composite model which includes multiple materials, one would export each material as a different layer in a corresponding STL file, and multiple instances of loadGeom would be used. In a similar manner, we import the geometry of the head by calling the function:

Listing 10: Load head geometry

  myModel = addHead(myModel);

The head geometry is automatically decimated according to the chosen sample rate. Next, we add sources:

Listing 11: Add sources

  % addSource(myModel, x, y, z, amplitude, type, timelimit)
  myModel = addSource(myModel, 1, 1, 1.5, 1, 'phys', false);
  myModel = addSource(myModel, 0.5, 1.2, 1.5, 1, 'phys', false);

The values corresponding to x, y, z denote the source position, amplitude is the relative amplitude of the source (e.g., 1 = unchanged, 0.5 = half of the defined amplitude), type defines the injection filter ('phys' is for PCS injection, 'soft' and 'hard' for soft and hard injection, respectively), and timelimit is used for debugging purposes only. It is possible to add as many sources as needed.
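The impedance values passed to meshRect and loadGeom_stl can be related to the more familiar absorption coefficient. For a locally reacting boundary at normal incidence, the standard relation is Z = rho*c*(1 + sqrt(1 - alpha)) / (1 - sqrt(1 - alpha)). A sketch (this helper is not part of WaveCloud; the normal-incidence model is an assumption):

```python
from math import sqrt

def wall_impedance(alpha, rho_c=415.6):
    """Boundary impedance (Pa*s/m) of a locally reacting wall from its
    normal-incidence absorption coefficient alpha (0 < alpha <= 1)."""
    r = sqrt(1.0 - alpha)              # reflection coefficient magnitude
    return rho_c * (1.0 + r) / (1.0 - r)

# alpha = 1 (fully absorbing) gives rho*c = 415.6, the value used in
# Listing 8; alpha of roughly 0.19 gives about the 8000 used in Listing 9.
```

Under this model, the perfectly absorbing grid truncation of Listing 8 corresponds to alpha = 1, and the 8000 Pa·s/m walls of Listing 9 to a fairly reflective surface.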
Finally, the simulation is executed by calling the function:

Listing 12: Execute simulation

  myModel = simulateModel(myModel);

For a binaural receiver, the resulting impulse responses for the left and right ears are stored in the variables:

Listing 13: Results

  m.results.irl
  m.results.irr

For a monophonic receiver, the resulting impulse responses for pressure and particle velocity are stored in the variables:

Listing 14: Results

  m.results.irl
  m.results.ux
  m.results.uy
  m.results.uz
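The binaural impulse responses of Listing 13 can be auralized by convolving them with a dry (anechoic) mono recording. A minimal sketch, assuming both impulse responses have the same length (as they do when produced by a single simulation); the names ir_l/ir_r simply mirror m.results.irl and m.results.irr:

```python
import numpy as np

def auralize(dry, ir_l, ir_r):
    """Binaural auralization: convolve a dry mono signal with the
    left/right impulse responses and peak-normalize the result."""
    left = np.convolve(dry, ir_l)
    right = np.convolve(dry, ir_r)
    out = np.stack([left, right], axis=1)  # (samples, 2) stereo array
    return out / np.max(np.abs(out))       # normalize to avoid clipping
```

The normalized stereo array can then be written to a sound file and listened to over headphones.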
