Abstract: Speech separation approaches for single-channel, dry speech mixtures have improved significantly. However, real-world spatial and reverberant acoustic environments remain challenging, limiting the effectiveness of these approaches for assistive hearing devices such as cochlear implants (CIs). To address this, we quantify the impact of real-world acoustic scenes on speech separation and explore how spatial cues can be used to enhance separation quality efficiently. We analyze performance based on implicit spatial cues (inherent in the acoustic input and learned by the model) and explicit spatial cues (manually calculated spatial features added as auxiliary inputs). Our findings show that both implicit and explicit spatial cues improve separation for mixtures of spatially separated as well as nearby talkers. Spatial cues also enhance separation when spectral cues are ambiguous, such as when the talkers' voices are similar. Explicit spatial cues are particularly beneficial when implicit spatial cues are weak: for instance, recordings from a single CI microphone provide weaker implicit spatial cues than bilateral CIs, yet even single CIs benefit from explicit cues. These results emphasize the importance of training models on real-world data to improve generalizability to everyday listening scenarios. Additionally, our statistical analyses offer insights into how data properties influence model performance, supporting the development of efficient speech separation approaches for CIs and other assistive devices in real-world settings.
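The abstract does not specify which explicit spatial features were computed; purely as a hedged illustration, the sketch below derives two features that are commonly added as auxiliary inputs in binaural separation work, interaural phase difference (IPD) and interaural level difference (ILD), per time-frequency bin. All function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: explicit spatial cues (IPD/ILD) as auxiliary model inputs.
# Assumes a two-microphone (e.g., bilateral CI) recording; feature choice is an
# assumption, since the abstract does not name the exact features used.
import numpy as np
from scipy.signal import stft

def explicit_spatial_features(left, right, fs=16000, nperseg=512):
    """Compute per-bin IPD/ILD features from a two-microphone mixture."""
    _, _, L = stft(left, fs=fs, nperseg=nperseg)   # complex spectrogram, left mic
    _, _, R = stft(right, fs=fs, nperseg=nperseg)  # complex spectrogram, right mic
    eps = 1e-8
    ipd = np.angle(L * np.conj(R))                 # interaural phase difference
    ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))  # level diff in dB
    # Stack as extra input channels alongside the mixture spectrogram
    return np.stack([np.cos(ipd), np.sin(ipd), ild], axis=0)

# Usage example with a crudely simulated stereo mixture (delay + noise)
rng = np.random.default_rng(0)
left = rng.standard_normal(32000)                  # 2 s at 16 kHz
right = np.roll(left, 8) + 0.1 * rng.standard_normal(32000)
features = explicit_spatial_features(left, right)
print(features.shape)                              # (3, n_freq_bins, n_frames)
```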
Abstract: Performing simulations with a realistic biophysical auditory nerve fiber model can be very time consuming, due to the complex nature of the calculations involved. Here, a surrogate (approximate) model of such an auditory nerve fiber model was developed using machine learning methods, to perform simulations more efficiently. Several machine learning models were compared, of which a Convolutional Neural Network showed the best performance. In fact, the Convolutional Neural Network was able to emulate the behavior of the auditory nerve fiber model with extremely high similarity ($R^2 > 0.99$) under a wide range of experimental conditions, whilst reducing the simulation time by five orders of magnitude. In addition, we introduce a method for randomly generating charge-balanced waveforms using hyperplane projection. In the second part of this paper, the Convolutional Neural Network surrogate model was used by an Evolutionary Algorithm to optimize the shape of the stimulus waveform in terms of energy efficiency. The resulting waveforms resemble a positive Gaussian-like peak preceded by an elongated negative phase. When comparing the energy of the waveforms generated by the Evolutionary Algorithm with that of the commonly used square wave, energy decreases of 8% to 45% were observed for different pulse durations. These results were validated with the original auditory nerve fiber model, demonstrating that our proposed surrogate model can be used as its accurate and efficient replacement.
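The hyperplane-projection method is only named, not detailed, in the abstract. A minimal sketch of the underlying idea follows, under the assumption that charge balance means the discretized waveform sums to zero (uniform sample spacing): zero-net-charge waveforms then form the hyperplane $\{w : \mathbf{1}^\top w = 0\}$, and orthogonal projection onto it amounts to subtracting the mean. Names and parameters are illustrative.

```python
# Minimal sketch (assumed interpretation): randomly generate charge-balanced
# waveforms by projecting an unconstrained random vector onto the hyperplane
# of zero-net-charge waveforms, {w : sum(w) = 0}.
import numpy as np

def random_charge_balanced_waveform(n_samples, rng):
    """Draw a random waveform and project it onto the zero-net-charge hyperplane."""
    w = rng.standard_normal(n_samples)   # unconstrained random waveform
    w -= w.mean()                        # projection: remove component along the 1-vector
    return w

rng = np.random.default_rng(42)
w = random_charge_balanced_waveform(100, rng)
print(abs(w.sum()) < 1e-10)              # True: net charge is numerically zero
```

One appeal of this construction is that it maps any candidate vector to the nearest charge-balanced waveform in a single step, which would make it convenient for sampling and mutating stimuli inside an Evolutionary Algorithm.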