Convincing spatial audio systems must render room acoustics to support speech intelligibility, externalisation and spaciousness. Evaluation of such systems typically uses the same source signal for every condition in multiple-stimulus comparison tests (such as MUSHRA). In an augmented reality scenario, however, the exact same source signal is unlikely to occur at the exact same position in space, both real and virtual: instead, a real source would be at one position in the room and a virtual source at a different position, each with a different source signal. This paper presents a perceptual study on the effect of source signal similarity when distinguishing different positions in a room. Three source signal types (all speech) are investigated in a multiple-stimulus paradigm: the same source signal for all conditions; the same speaker but a different sentence for each condition; and a different speaker and different sentence for each condition. Results show that the source signal significantly affects the similarity rating between different receiver positions in the same room, and that, depending on the source signals used in the target application, the findings imply different spatial audio system fidelity requirements.
The BRIRs are composed of two measurement positions, as seen in the figure. The BRIRs on this website are the measurements in magenta: S2→R5 and S2→R2.

Figure 1: Illustration of the source (S) and receiver (R) positions used to make up the listening test conditions. Microphones and loudspeakers were oriented facing north and south, respectively, according to the illustration orientation.

Figure 2: Conditions showing the transformation from BRIR1 to BRIR2 (S2→R5 in blue, S2→R2 in red, as colored magenta in Fig. 1), left signal.
BRIRs with different cut-on times.
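A cut-on time of this kind can be imposed by zeroing the BRIR before the chosen onset and applying a short fade-in to avoid audible clicks. A minimal sketch in Python with NumPy (the function name, default sample rate and fade length are illustrative assumptions, not taken from the study):

```python
import numpy as np

def apply_cut_on(brir, cut_on_ms, fs=48000, fade_ms=1.0):
    """Zero a BRIR before the given cut-on time (ms, relative to the
    start of the impulse response), with a short linear fade-in at the
    cut-on point to avoid clicks."""
    out = brir.astype(float).copy()
    cut = int(round(cut_on_ms * 1e-3 * fs))     # first sample to keep
    fade = int(round(fade_ms * 1e-3 * fs))      # fade-in length in samples
    out[:cut] = 0.0
    ramp = np.linspace(0.0, 1.0, fade, endpoint=False)
    out[cut:cut + fade] *= ramp[:max(0, len(out) - cut)]
    return out
```

Applying this with a sweep of cut-on times to the same BRIR yields a family of impulse responses that differ only in how much of the early response is removed.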
Here, the listening test stimuli are presented. The key point is that the BRIR selection is constant across tests; only the similarity of the source signals changes between tests.
Listening test with same speaker, same sentence.
Listening test with same speaker, different sentence.
Listening test with different speaker, different sentence.
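Stimuli of this kind are obtained by convolving each (mono) source signal with the two channels of the selected BRIR. A minimal sketch, assuming a NumPy array layout of `(samples,)` for the source and `(samples, 2)` for the BRIR (function name and normalisation choice are illustrative, not taken from the study):

```python
import numpy as np

def render_stimulus(source, brir):
    """Convolve a mono source signal with a two-channel BRIR to produce
    a binaural stimulus, peak-normalised to avoid clipping.
    source: shape (N,); brir: shape (M, 2); returns shape (N+M-1, 2)."""
    left = np.convolve(source, brir[:, 0])
    right = np.convolve(source, brir[:, 1])
    out = np.stack([left, right], axis=1)
    return out / np.max(np.abs(out))
```

Rendering the same source through BRIR1 and BRIR2 (or different sentences through each, depending on the test) produces the condition pairs compared above.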
Trackswitch.js was developed by Nils Werner, Stefan Balke, Fabian-Robert Stöter, Meinard Müller and Bernd Edler.