Over the past fifty years, researchers around the globe have developed countless artificial reverberators, prominently among them the Feedback Delay Network (FDN) and its many variants. Although explicit system architectures are rarely published, a set of notable examples nonetheless exists.
The aim of Mixed Reality (MR) technology is to overlay physical reality with virtual stimuli indistinguishable from real stimuli. In the visual domain, the main technical paradigms of MR are see-through cameras and transparent glasses.
Feedback Delay Networks (FDNs) are one of the most efficient methods for synthesizing artificial reverberation. Although high-order FDNs can achieve high-quality results, they tend to be dense and come with increased computational complexity.
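The basic FDN structure consists of parallel delay lines coupled through an energy-preserving feedback matrix. A minimal time-domain sketch is given below; the delay lengths, the broadband feedback gain, and the choice of a Hadamard feedback matrix are illustrative assumptions, not parameters taken from any of the projects above:

```python
import numpy as np

def fdn_reverb(x, delays=(149, 211, 263, 293), g=0.85):
    """Minimal 4-channel Feedback Delay Network sketch.

    x      : 1-D input signal
    delays : delay-line lengths in samples (mutually prime lengths
             help build up echo density)
    g      : broadband feedback gain, < 1 for a stable decay
    """
    N = len(delays)
    # Orthogonal feedback matrix: normalized 4x4 Hadamard, scaled by g
    A = g * np.array([[1,  1,  1,  1],
                      [1, -1,  1, -1],
                      [1,  1, -1, -1],
                      [1, -1, -1,  1]]) / 2.0
    b = np.ones(N)       # input gains
    c = np.ones(N) / N   # output gains
    bufs = [np.zeros(d) for d in delays]  # circular delay buffers
    idx = [0] * N
    y = np.zeros(len(x))
    for n in range(len(x)):
        # read the current output of each delay line
        s = np.array([bufs[i][idx[i]] for i in range(N)])
        y[n] = c @ s
        # mix through the feedback matrix and inject the input
        v = A @ s + b * x[n]
        for i in range(N):
            bufs[i][idx[i]] = v[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y
```

Feeding an impulse through this sketch yields a decaying series of echoes; lowering `g` shortens the reverberation time, and frequency-dependent decay would require replacing the scalar gain with per-line attenuation filters.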
Virtual Acoustics (VA) systems, also known as Reverberation Enhancement Systems, are electro-acoustic setups of multiple loudspeakers and microphones that alter the sonic environment in situ. While VA systems are typically employed to adjust room acoustic parameters for musical performances, in this project we explore the artistic potential of such systems for immersive storytelling.
Augmented Reality (AR) techniques aim to place virtual entities into the physical realm. While mainstream spatial computing techniques are developed for visual embedding, realistic sound embedding is equally necessary to produce a convincing mixed reality.
Spatial sound is often concerned with the reproduction of a spherical sound field toward a central listener position. In contrast, spherical loudspeaker arrays project sound fields outward into the surrounding space.
The goal of this project is to explore state-of-the-art techniques for creating a vibrant and convincing sound environment in virtual reality. The project considers physical sound synthesis techniques as well as spatial audio methods.
The ideal goal of mixed reality (MR) is to interleave virtual and physical reality such that the seam is imperceptible to a human observer. While MR remains challenging in the visual domain, recent developments in spatial audio bring MR within reach.
Avatars are a central component of virtual reality experiences. In this project, we explore sonic avatars and their perceptual implications. The experiments include recording and reproduction of various self-generated sounds (speech, footsteps, clothing, etc.).
The goal of this project is to establish an immersive audio infrastructure in game engines to facilitate innovative research in VR. The project revolves around the translation of the SPARTA spatial audio plugins to the Unity and Wwise audio engines.