Magnitude-Corrected and Time-Aligned HRTF Interpolation: Effect of Interpolation and Alignment Method
Abstract:
Virtual acoustic realities require head-related transfer functions (HRTFs) on a spatially dense sampling grid with many directions. A common method for obtaining high-quality HRTFs is acoustic measurement. To reduce measurement time, cost, and system complexity, a promising approach is to capture only a few HRTFs on a sparse sampling grid and then upsample them to a dense HRTF set by interpolation. However, interpolating sparsely sampled HRTFs is challenging because small directional changes cause large changes in the HRTF phase and magnitude response. Previous studies considerably improved interpolation results by time-aligning the HRTFs in preprocessing, but magnitude interpolation errors, especially in contralateral regions, remain a problem. We propose an additional post-interpolation magnitude correction derived from a frequency-smoothed HRTF representation. Employing all 96 individual simulated HRTF sets of the HUTUBS database, we show that the additional magnitude correction significantly reduces interpolation errors compared to state-of-the-art methods applying only time-alignment. In particular, we show that the magnitude correction works with all recent time-alignment and interpolation approaches and consistently improves interpolation results, highlighting the generic nature of the algorithm. Thus, the proposed method can further reduce the minimum number of HRTFs required for perceptually transparent interpolation.
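The core idea of a post-interpolation magnitude correction can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes time-aligned complex HRTF spectra for two neighboring directions, uses a plain moving-average filter as a stand-in for the frequency smoothing, and simple linear weights in place of a spherical interpolation scheme. The corrected spectrum keeps the interpolated phase but rescales the magnitude toward the interpolated smoothed magnitudes of the original HRTFs.

```python
import numpy as np

def smooth_magnitude(mag, width=5):
    """Crude frequency smoothing via moving average.

    A stand-in for the (e.g., fractional-octave) smoothing implied by the
    abstract's 'frequency-smoothed HRTF representation'.
    """
    kernel = np.ones(width) / width
    return np.convolve(mag, kernel, mode="same")

def interpolate_with_magnitude_correction(h_a, h_b, w):
    """Interpolate two time-aligned HRTF spectra with magnitude correction.

    h_a, h_b : complex one-sided spectra of neighboring directions
    w        : interpolation weight in [0, 1] (hypothetical linear scheme)
    """
    # Step 1: plain complex interpolation of the time-aligned HRTFs.
    h_interp = (1 - w) * h_a + w * h_b

    # Step 2: target magnitude, interpolated in the smoothed-magnitude domain,
    # where directional variation is better behaved than in the complex domain.
    mag_target = (1 - w) * smooth_magnitude(np.abs(h_a)) \
                 + w * smooth_magnitude(np.abs(h_b))

    # Step 3: rescale so the smoothed magnitude of the result matches the
    # target while the interpolated phase is left untouched.
    eps = 1e-12  # avoid division by zero in spectral nulls
    return h_interp * (mag_target / (smooth_magnitude(np.abs(h_interp)) + eps))
```

The sketch makes the failure mode concrete: when the two spectra differ in phase, complex interpolation partially cancels and the interpolated magnitude dips below both originals; the correction in step 3 restores it.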