Over fifty years, researchers around the globe have developed countless artificial reverberators, most prominently the Feedback Delay Network (FDN) and its many variants. Although explicit system architectures are rarely published, a set of notable examples does exist. In this work, we collect, implement, and analyze delay feedback structures from the vast literature in a unified framework.
Related Work Välimäki, V., Parker, J., Savioja, L., Smith III, J. O.
The aim of Mixed Reality (MR) technology is to overlay physical reality with virtual stimuli indistinguishable from real stimuli. In the visual domain, the main technical paradigms of MR are see-through cameras or transparent glasses. Analogously, in the auditory domain, the two competing models are hear-through and transparent headphones. In this work, we evaluate the fidelity and transparency of multiple commercial and experimental headphones. Further research directions include the perceptual influence of headphones when listening to real as well as virtual sound sources.
Feedback Delay Networks (FDNs) are among the most efficient methods to synthesize artificial reverberation. Although high-order FDNs can achieve high-quality results, they tend to be overly dense and computationally expensive. In contrast, low-order FDNs, if not carefully tuned, exhibit metallic and unpleasant resonances. This work investigates the mathematical properties necessary for FDNs to achieve lush and colorless reverberation. In previous work, we presented a method for identifying all resonating modes (up to a million simultaneously) and the residual energy.
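As a minimal illustration of the structure under study, the following Python sketch implements a four-channel lossy FDN. The Hadamard feedback matrix, prime delay lengths, broadband attenuation gain, and unity output taps are illustrative assumptions, not parameters taken from the work described above.

```python
import numpy as np

def fdn_reverb(x, delays=(1031, 1327, 1523, 1871), gain=0.97):
    """Minimal 4-channel Feedback Delay Network (illustrative sketch).

    Assumptions: orthonormal 4x4 Hadamard feedback matrix, prime-length
    delay lines, a single broadband attenuation `gain` per loop, the
    input fed equally to all lines, and unity output taps.
    """
    # Orthonormal Hadamard matrix: energy-preserving mixing before attenuation.
    A = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]]) / 2.0
    N = len(delays)
    bufs = [np.zeros(d) for d in delays]   # circular delay-line buffers
    idx = [0] * N                          # read/write positions
    y = np.zeros(len(x))
    for n in range(len(x)):
        # Read the current delay-line outputs.
        s = np.array([bufs[i][idx[i]] for i in range(N)])
        y[n] = s.sum()                     # sum of unity output taps
        # Mix, attenuate, and feed back together with the input sample.
        fb = gain * (A @ s) + x[n]
        for i in range(N):
            bufs[i][idx[i]] = fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y
```

Feeding a unit impulse through this sketch yields a response whose first echoes appear at the delay lengths and whose energy decays at a rate set by `gain`; with a lossless matrix and `gain = 1`, the network would ring indefinitely.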
Virtual Acoustics (VA) systems, also known as Reverberation Enhancement Systems, are electro-acoustic setups of multiple loudspeakers and microphones that alter the sonic environment in situ. While VA systems are typically employed to adjust room acoustic parameters for musical performances, in this project, we explore the artistic potential of such systems for immersive storytelling. The acoustic environment can be changed in a physically plausible way, as well as in a non-physical manner. This interactive sound piece places the audience in an ever-changing sonic atmosphere and alters their sense of space.
Augmented Reality (AR) techniques aim to place virtual entities into the physical realm. While mainstream spatial computing techniques are developed for visual embedding, realistic sound embeddings are equally required to produce a convincing mixed reality. In this project, the students develop a novel method for real-time generation of artificial reverberation, which is adjusted to the surrounding environment.
Related Work ScatAR: a mobile augmented reality application that uses scattering delay networks for room acoustic synthesis
Spatial sound is often concerned with the reproduction of a spherical sound field towards a central listener position. Conversely, spherical loudspeaker arrays project sound fields outward into the surrounding space. In a recent project, we built a mixed-order loudspeaker array with 15 transducers. In this project, we conduct an artistic exploration of spatial sound design by juxtaposing compact and distributed spherical loudspeaker arrays.
Related Work Riedel, S.
The goal of this project is to explore state-of-the-art techniques to create a vibrant and convincing sound environment in virtual reality. The project takes into account physical sound synthesis techniques as well as spatial audio methods. The technical setup includes virtual reality goggles, hand tracking, and object tracking techniques.
Related Work Serafin, S., Geronazzo, M., Erkut, C., Nilsson, N., Nordahl, R. (2018). Sonic Interactions in Virtual Reality: State of the Art, Current Challenges, and Future Directions. IEEE Computer Graphics and Applications, 38(2), 31-43.
The ideal goal of mixed reality (MR) is to interleave virtual and physical reality such that the seam is imperceptible to a human observer. While MR is still challenging in the visual domain, recent developments in spatial audio bring MR within reach. In this study, we investigate the perceptual cues for successful auditory MR scenarios. The projects involve spatial audio signal processing techniques, virtual acoustics, and psychoacoustic testing.
Avatars are a central component of the virtual reality experience. In this project, we explore sonic avatars and their perceptual implications. The experiments include recording and reproducing various self-generated sounds (speech, footsteps, clothing, etc.). Further, we study real-time alteration of such sounds by audio effects such as voice transformation, artificial reverberation, and other means. As a result, we assess the sense of self and immersion induced by the person's sonic interactions.
The goal of this project is to establish an immersive audio infrastructure in game engines to facilitate innovative research in VR. The project revolves around porting the SPARTA spatial audio plugins to the Unity and Wwise audio engines. Good knowledge of C++ and a basic understanding of spatial audio techniques are required.
Related Work McCormack, L., Politis, A. (2019). SPARTA & COMPASS: Real-Time Implementations of Linear and Parametric Spatial Audio Reproduction and Processing Methods. AES International Conference on Immersive and Interactive Audio.