Abstract: Individual Head-Related Transfer Functions (HRTFs), crucial for realistic virtual audio rendering, can be efficiently computed numerically from precise three-dimensional head and ear scans. While photogrammetry scanning is promising, it generally lacks accuracy, yielding HRTFs that deviate perceptibly from reference data, since the scanning error mainly affects the most occluded pinna structures. This paper analyses the use of Deep Neural Networks (DNNs) for denoising photogrammetric ear scans. Various DNNs, fine-tuned on pinna samples corrupted with modelled synthetic error mimicking that observed in photogrammetric dummy-head ear scans, are tested and benchmarked against a classical denoising approach. One DNN is further modified and retrained to improve its denoising performance. The HRTFs computed on the original and denoised scans are compared with those of a reference scan, showing that the best-performing DNN can generally reduce the deviation of photogrammetric dummy-head HRTFs to the levels obtained with accurately measured individual data. Correlation analysis between geometrical metrics computed on the scanned point clouds and the related HRTFs is used to identify the metrics most relevant for assessing the geometrical deviation between target and reference scans in terms of the similarity of the HRTFs computed on them.