Avatars are a central component of the virtual reality experience. In this project, we explore the spatial capture of avatars, both visually and sonically, and integrate them into a VR environment. The visual volumetric capture uses the new Azure Kinect depth-sensing cameras and Depthkit.
Hide and Seek is an audio-based VR game strongly inspired by the minigames of Half+Half. The project is meant to be a technical proof-of-concept establishing a Unity VR multiplayer game with voice and sound communication.
Goal: Create photogrammetric scans of the AcousticsLab facilities (10-15 rooms) in Otaniemi, Espoo.

Work packages:
- Make scans of the rooms with the iPad
- Clean up scans with 3D modeling software, e.g., Blender
- Create a VR-ready version: smoothing, decimation, re-texturing

Hardware: iPad Pro 2020 with LiDAR scanner
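The decimation step of the work packages above could be prototyped with a simple vertex-clustering pass, sketched below. This is a toy stand-in for what a tool like Blender's Decimate modifier does; the function name, grid size, and mesh format (vertex list plus triangle index list) are our own illustrative choices.

```python
def vertex_cluster_decimate(vertices, faces, cell=1.0):
    """Toy vertex-clustering decimation: snap vertices to a coarse grid,
    merge vertices that land in the same cell, and drop faces that
    collapse to a line or point. Illustrative only."""
    cluster_of = {}   # grid cell -> index of the merged vertex
    new_verts = []
    remap = []        # old vertex index -> new vertex index
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_faces = []
    for (a, b, c) in faces:
        fa, fb, fc = remap[a], remap[b], remap[c]
        if len({fa, fb, fc}) == 3:   # keep only non-degenerate triangles
            new_faces.append((fa, fb, fc))
    return new_verts, new_faces
```

A coarser `cell` merges more vertices and removes more triangles, trading geometric detail for the lower polygon counts that real-time VR rendering needs.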
Over the past fifty years, researchers around the globe have developed countless artificial reverberators, prominently among them the Feedback Delay Network (FDN) and its many cousins. Although explicit system architectures are rarely published, a set of notable examples exists.
The aim of Mixed Reality (MR) technology is to overlay physical reality with virtual stimuli indistinguishable from real stimuli. In the visual domain, the main technical paradigms of MR are see-through cameras and transparent glasses.
Feedback Delay Networks (FDNs) are among the most efficient methods for synthesizing artificial reverberation. Although high-order FDNs can achieve high-quality results, their dense feedback matrices also increase the computational complexity.
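To make the FDN structure concrete, here is a minimal sketch in Python. The delay lengths, the scaled Hadamard feedback matrix, and the T60-based per-line attenuation are illustrative choices for a small 4x4 network, not a reference to any particular published design.

```python
import numpy as np

def fdn_reverb(x, fs=48000, delays=(1031, 1327, 1523, 1871), t60=1.5):
    """Minimal 4x4 Feedback Delay Network (illustrative sketch).

    x      : mono input signal (1-D array)
    delays : delay-line lengths in samples (mutually prime lengths work well)
    t60    : target reverberation time in seconds
    """
    N = len(delays)
    # Orthogonal feedback matrix: scaled 4x4 Hadamard (lossless mixing).
    A = 0.5 * np.array([[1,  1,  1,  1],
                        [1, -1,  1, -1],
                        [1,  1, -1, -1],
                        [1, -1, -1,  1]])
    # Per-line gain so energy decays by 60 dB after t60 seconds.
    g = np.array([10 ** (-3.0 * m / (t60 * fs)) for m in delays])
    bufs = [np.zeros(m) for m in delays]   # circular delay buffers
    idx = [0] * N                          # read/write positions
    y = np.zeros(len(x))
    for n in range(len(x)):
        # Read the delayed, attenuated outputs of all lines.
        s = np.array([g[i] * bufs[i][idx[i]] for i in range(N)])
        y[n] = s.sum()                     # simple output tap
        fb = A @ s                         # mix through the feedback matrix
        for i in range(N):
            bufs[i][idx[i]] = x[n] + fb[i] # write input plus feedback
            idx[i] = (idx[i] + 1) % len(bufs[i])
    return y
```

Feeding an impulse through `fdn_reverb` yields a decaying, increasingly dense echo pattern; the first echo arrives after the shortest delay (1031 samples here).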
Virtual Acoustics (VA) systems, also known as Reverberation Enhancement Systems, are electro-acoustic setups of multiple loudspeakers and microphones that alter the sonic environment in situ. While VA systems are typically employed to adjust room acoustic parameters for musical performances, in this project we explore the artistic potential of such systems for immersive storytelling.
Augmented Reality (AR) techniques aim to place virtual entities into the physical realm. While mainstream spatial computing techniques are developed for visual embedding, realistic sound embeddings are equally required to produce a convincing mixed reality.
Spatial sound is often concerned with the reproduction of a spherical sound field towards a central listener position. In contrast, spherical loudspeaker arrays project spherical sound fields outward into the surrounding space.
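As a concrete example of the listener-centric parameterization, the sketch below computes first-order Ambisonic encoding gains for a source direction (SN3D normalization, ACN channel order). The function name is our own; this is a minimal illustration, not a full spherical-array rendering chain.

```python
import math

def foa_encode(azimuth_deg, elevation_deg):
    """First-order Ambisonic (B-format) encoding gains for one source
    direction, SN3D-normalized, returned in ACN order (W, Y, Z, X)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = 1.0                            # omnidirectional component
    y = math.sin(az) * math.cos(el)    # left-right dipole
    z = math.sin(el)                   # up-down dipole
    x = math.cos(az) * math.cos(el)    # front-back dipole
    return [w, y, z, x]
```

Multiplying a mono signal by these four gains yields a first-order representation of the spherical sound field that a decoder can then map onto a loudspeaker layout.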
The goal of this project is to explore state-of-the-art techniques to create a vibrant and convincing sound environment in virtual reality. The project takes into account physical sound synthesis techniques as well as spatial audio methods.
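As one example of the physical sound synthesis techniques mentioned above, here is a minimal Karplus-Strong plucked-string sketch: a noise burst circulates through a delay line whose length sets the pitch, with a two-point averaging filter providing the string damping. All parameter values are illustrative.

```python
import random

def karplus_strong(freq=220.0, fs=48000, dur=1.0, seed=0):
    """Karplus-Strong plucked-string synthesis (illustrative sketch).

    freq : fundamental frequency in Hz (sets the delay-line length)
    fs   : sample rate in Hz
    dur  : output duration in seconds
    """
    random.seed(seed)
    N = int(fs / freq)                  # delay length sets the pitch
    buf = [random.uniform(-1, 1) for _ in range(N)]  # noise excitation
    out = []
    for _ in range(int(fs * dur)):
        out.append(buf[0])
        # Average the two oldest samples: a lowpass that damps high
        # partials faster than the fundamental, like a real string.
        avg = 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [avg]
    return out
```

The raw noise attack decays into a pitched, mellowing tone; in a VR sound environment such per-event synthesis can replace static samples for interactive objects.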