Abstract: Objective: In cochlear implant (CI) users with residual acoustic hearing, compound action potentials (CAPs) can be evoked by acoustic or electric stimulation and recorded through the electrodes of the CI. We propose a novel computational model to simulate electrically and acoustically evoked CAPs in humans, taking into account the interaction between electric and acoustic stimulation that occurs at the level of the auditory nerve. Methods: The model consists of three components: a 3D finite element method model of an implanted cochlea, a phenomenological single-neuron spiking model for electric-acoustic stimulation, and a physiological multi-compartment neuron model to simulate the individual nerve fiber contributions to the CAP. Results: The CAP morphologies predicted for electric pulses and for acoustic clicks, chirps, and tone bursts closely resembled those known from humans. The spread of excitation derived from electrically evoked CAPs by varying the recording electrode along the CI electrode array was consistent with published human data. The predicted CAP amplitude growth functions for both electric and acoustic stimulation largely resembled human data, with deviations in absolute CAP amplitudes for acoustic stimulation. The model reproduced the suppression of electrically evoked CAPs by simultaneously presented acoustic tone bursts for different masker frequencies and probe stimulation electrodes. Conclusion: The proposed model can simulate CAP responses to electric, acoustic, or combined electric-acoustic stimulation. It takes into account the dependence on stimulation and recording sites in the cochlea, as well as the interaction between electric and acoustic stimulation. Significance: The model can be used in the future to investigate objective methods, such as hearing threshold assessment or estimation of neural health through electrically or acoustically evoked CAPs.
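The third model component sums individual nerve fiber contributions into a CAP. A common simplified formulation of this idea, sketched below under assumed parameters (the damped-sinusoid unit response, its frequency and decay constant, and the sampling rate are illustrative choices, not values from the paper), convolves each fiber's spike times with an elementary unit response and sums across fibers:

```python
import numpy as np

def unit_response(t, amp=1.0, f=1000.0, decay=1e-3):
    # Hypothetical elementary unit response of a single fiber:
    # a damped sinusoid that is zero before the spike (t < 0).
    return amp * np.sin(2 * np.pi * f * t) * np.exp(-t / decay) * (t >= 0)

def compound_action_potential(spike_times_per_fiber, fs=100_000, dur=0.005):
    # CAP as the superposition of shifted unit responses,
    # one per spike, summed over all contributing fibers.
    t = np.arange(int(dur * fs)) / fs
    cap = np.zeros_like(t)
    for spikes in spike_times_per_fiber:
        for ts in spikes:
            cap += unit_response(t - ts)
    return t, cap
```

In the paper's model the spike times would come from the phenomenological electric-acoustic spiking model and the unit responses from the multi-compartment neuron model propagated through the 3D volume-conduction model; here both are replaced by toy stand-ins.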
Abstract: Cochlear implants (CIs) allow individuals with severe sensorineural hearing loss to regain hearing. Patients with this form of hearing impairment in both ears may be fitted with two separate CI devices, which typically further improves the benefit of implantation. The resulting spatial hearing is particularly crucial for understanding speech in noisy environments, a common challenge for CI users. Currently, extensive research is dedicated to developing algorithms that can autonomously separate undesired background noise from the desired speech signal. Some of this research focuses on end-to-end denoising, either as an integral component of the initial CI signal processing or by fully integrating the denoising process into the CI sound coding strategy. This work is presented in the context of bilateral CI (BiCI) systems, for which we propose a deep-learning-based bilateral speech enhancement model that shares information between both hearing sides. Specifically, we connect two monaural end-to-end deep denoising sound coding techniques through intermediary latent fusion layers. These layers combine the latent representations generated by the two sides by multiplying them together, improving noise reduction and learning generalization. Objective instrumental results demonstrate that the proposed fused BiCI sound coding strategy achieves higher interaural coherence, superior noise reduction, and higher predicted speech intelligibility scores than the baseline methods. Furthermore, our speech-in-noise intelligibility results in BiCI users show that the deep denoising sound coding strategy can attain scores similar to those achieved in quiet conditions.
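The core fusion operation, multiplying the two monaural latent representations together, can be sketched as follows. This is a minimal NumPy stand-in, assuming elementwise multiplication over latents of shape (channels, frames); the actual fusion layers, latent shapes, and any learned parameters in the paper's networks are not specified here:

```python
import numpy as np

def fuse_latents(z_left, z_right):
    # Elementwise product of the left- and right-side latent
    # representations; each monaural branch would then decode
    # its own enhanced signal from this shared fused latent.
    return z_left * z_right

# Hypothetical latents: 64 channels x 100 time frames per side.
rng = np.random.default_rng(0)
z_l = rng.standard_normal((64, 100))
z_r = rng.standard_normal((64, 100))
fused = fuse_latents(z_l, z_r)
```

One property of product fusion is that components weak on either side suppress the fused activation, which is one plausible route to the improved noise reduction reported.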
Abstract: Cochlear implants (CIs) are implantable medical devices that can restore the hearing sense of people suffering from profound hearing loss. The CI uses a set of electrode contacts placed inside the cochlea to stimulate the auditory nerve with current pulses. The exact location of these electrodes may be an important parameter for improving and predicting performance with these devices. Currently, the methods used in clinics to characterize the geometry of the cochlea and to estimate the electrode positions are manual, error-prone, and time-consuming. We propose a Markov random field (MRF) model for CI electrode localization in cone beam computed tomography (CBCT) datasets. Intensity and shape of the electrodes are included as prior knowledge, as are the distances and angles between contacts. MRF inference is based on slice sampling particle belief propagation and is guided by several heuristics. A stochastic search finds the best maximum a posteriori estimate among the sampled MRF realizations. We evaluate our algorithm on synthetic and real CBCT datasets and compare its performance with two state-of-the-art algorithms. On real CBCT datasets, we show an increase in localization precision of up to 31.5% (mean) and 48.6% (median), respectively.
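The final stochastic-search step, keeping the best maximum a posteriori estimate among sampled MRF realizations, can be illustrated with a heavily simplified sketch. The potentials below are illustrative stand-ins, not the paper's: a unary term scoring image intensity at each contact and a pairwise term favoring a nominal inter-contact spacing; the actual model also encodes electrode shape and inter-contact angles, and samples realizations via slice sampling particle belief propagation rather than from a fixed candidate list:

```python
import numpy as np

def log_posterior(positions, intensity, spacing=2.0, sigma=0.5):
    # Unnormalized log-posterior of one sampled electrode configuration:
    # unary image-intensity evidence at each contact position, plus a
    # pairwise prior penalizing deviation from the nominal contact spacing.
    unary = sum(intensity(p) for p in positions)
    gaps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    pairwise = -np.sum((gaps - spacing) ** 2) / (2 * sigma**2)
    return unary + pairwise

def map_estimate(candidates, intensity):
    # Stochastic-search selection: among sampled MRF realizations,
    # keep the one with the highest (unnormalized) posterior score.
    scores = [log_posterior(c, intensity) for c in candidates]
    return candidates[int(np.argmax(scores))]
```

In practice each candidate would be a full set of contact coordinates drawn during inference; the selection rule itself is just an argmax over posterior scores.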