Abstract: Tens of millions of people live blind, and their number is ever increasing. Visual-to-auditory sensory substitution (SS) encompasses a family of cheap, generic solutions that assist the visually impaired by conveying visual information through sound. The required SS training is lengthy: months of effort are needed to reach a practical level of adaptation. There are two reasons for the tedious training process: the length of the substituting audio signal, and the disregard for the compressive characteristics of the human hearing system. To overcome these obstacles, we developed a novel class of SS methods by training deep recurrent autoencoders for image-to-sound conversion. We successfully trained deep learning models on different datasets to perform visual-to-auditory stimulus conversion. By constraining the visual space, we demonstrated the viability of shortened substituting audio signals, and we proposed mechanisms, such as the integration of computational hearing models, to optimally convey visual features in the substituting stimulus as perceptually discernible auditory components. We tested our approach in two separate cases. In the first experiment, the author remained blindfolded for 5 days while performing SS training on hand posture discrimination. The second experiment assessed the accuracy of reaching movements towards objects on a table. In both test cases, above-chance accuracy was attained after a few hours of training. Our novel SS architecture broadens the range of rehabilitation methods engineered for the visually impaired. Further improvements to the proposed model should yield faster rehabilitation of the blind and, as a consequence, wider adoption of SS devices.
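A minimal sketch of how such an image-to-sound conversion could be wired up is given below (PyTorch): a convolutional image encoder produces a latent code, and a recurrent decoder unrolls that code into a short waveform. The 64x64 input resolution, layer sizes, and 1024-sample output length are illustrative assumptions and do not reflect the authors' actual architecture, loss, or training procedure.

```python
# Minimal sketch of an image-to-sound recurrent autoencoder (assumed shapes).
import torch
import torch.nn as nn

class ImageToSoundAutoencoder(nn.Module):
    def __init__(self, latent_dim=128, audio_len=1024, hidden_dim=256):
        super().__init__()
        # Convolutional encoder: 64x64 grayscale image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Recurrent decoder: unroll the latent code into a short waveform
        self.audio_len = audio_len
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.to_sample = nn.Linear(hidden_dim, 1)

    def forward(self, image):
        z = self.encoder(image)                              # (B, latent_dim)
        z_seq = z.unsqueeze(1).repeat(1, self.audio_len, 1)  # (B, T, latent_dim)
        h, _ = self.rnn(z_seq)                               # (B, T, hidden_dim)
        audio = torch.tanh(self.to_sample(h)).squeeze(-1)    # (B, T), in [-1, 1]
        return audio

model = ImageToSoundAutoencoder()
dummy_image = torch.randn(8, 1, 64, 64)
print(model(dummy_image).shape)  # torch.Size([8, 1024])
```

In a sketch like this, a computational hearing model could in principle be applied to the decoder output as a perceptual constraint during training, but how the authors integrate it is not specified in the abstract.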
Abstract: Deep neural networks have been applied very successfully to image recognition and natural language processing. Recently, these powerful methods have also received attention in the brain-computer interface (BCI) community. Here, we introduce a convolutional neural network (CNN) architecture optimized for classifying brain states from non-invasive magnetoencephalographic (MEG) measurements. The model structure is motivated by a state-of-the-art generative model of the MEG signal and is thus readily interpretable in neurophysiological terms. We demonstrate that the proposed model is highly accurate in decoding event-related responses as well as modulations of oscillatory brain activity, and that it is robust to inter-individual differences. Importantly, the model generalizes well across users: when trained on data pooled from previous users, it performs successfully on new users. Thus, the time-consuming BCI calibration can be omitted. Moreover, the model can be updated incrementally, yielding an average accuracy improvement of 8.9% in offline experiments and 17.0% in a real-time BCI. We argue that this model can be used in practical BCIs and in basic neuroscience research.
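The abstract does not specify the layers, but a compact decoder in this spirit might combine a learned spatial de-mixing of MEG channels (echoing the linear source mixing assumed by generative models of MEG) with temporal convolutions over each component, followed by a classifier. The sketch below (PyTorch) illustrates this; the channel count, epoch length, number of spatial components, and number of classes are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch of a CNN for classifying MEG epochs (assumed shapes).
import torch
import torch.nn as nn

class MEGDecoder(nn.Module):
    def __init__(self, n_channels=204, n_times=250, n_components=32, n_classes=4):
        super().__init__()
        # Spatial de-mixing: one learned linear combination of sensors per
        # component, applied identically at every time point.
        self.spatial = nn.Conv1d(n_channels, n_components, kernel_size=1)
        # Temporal convolution: a separate temporal filter per component.
        self.temporal = nn.Conv1d(n_components, n_components, kernel_size=7,
                                  padding=3, groups=n_components)
        self.pool = nn.MaxPool1d(kernel_size=5, stride=5)
        self.classifier = nn.Linear(n_components * (n_times // 5), n_classes)

    def forward(self, x):                 # x: (B, n_channels, n_times)
        x = torch.relu(self.spatial(x))   # (B, n_components, n_times)
        x = torch.relu(self.temporal(x))  # (B, n_components, n_times)
        x = self.pool(x)                  # (B, n_components, n_times // 5)
        return self.classifier(x.flatten(1))

model = MEGDecoder()
epochs = torch.randn(16, 204, 250)  # batch of 16 MEG epochs
print(model(epochs).shape)          # torch.Size([16, 4])
```

Because the spatial and temporal stages are linear operations followed by simple nonlinearities, their learned weights can be inspected as spatial patterns and temporal filters, which is one way such a model can remain interpretable in neurophysiological terms.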