Abstract: Concurrent Speaker Detection (CSD), the task of identifying the presence and overlap of active speakers in an audio signal, is crucial for many audio applications such as meeting transcription, speaker diarization, and speech separation. This study introduces a multimodal deep learning approach that leverages both audio and visual information. The proposed model employs an early fusion strategy, combining audio and visual features through cross-modal attention mechanisms, with a learnable [CLS] token that captures the relevant audio-visual relationships. The model is extensively evaluated on two real-world datasets: AMI and the recently introduced EasyCom dataset. Experiments validate the effectiveness of the multimodal fusion strategy, and ablation studies further support the design choices and the training procedure. As this is the first work to report CSD results on the challenging EasyCom dataset, the findings demonstrate the potential of the proposed multimodal approach for CSD in real-world scenarios.
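The fusion mechanism described in this abstract can be made concrete with a short sketch. The PyTorch module below is a minimal illustration of early audio-visual fusion with a learnable [CLS] token, not the authors' implementation; the class name `AVFusionCSD`, the feature dimensions, the layer counts, and the three-class output head are all assumptions chosen for clarity.

```python
# Minimal sketch of early audio-visual fusion with a learnable [CLS] token.
# All names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class AVFusionCSD(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2, n_classes=3):
        super().__init__()
        # Learnable [CLS] token that aggregates audio-visual context.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        # Modality embeddings let attention distinguish audio from video tokens.
        self.modality_emb = nn.Parameter(torch.zeros(2, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Assumed 3-way head: noise only / single speaker / concurrent speakers.
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, Ta, d_model), visual_feats: (B, Tv, d_model)
        a = audio_feats + self.modality_emb[0]
        v = visual_feats + self.modality_emb[1]
        cls = self.cls_token.expand(a.size(0), -1, -1)
        # Early fusion: one joint sequence, so self-attention is cross-modal.
        x = torch.cat([cls, a, v], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])  # classify from the [CLS] position

# Usage with dummy features:
model = AVFusionCSD()
logits = model(torch.randn(2, 50, 256), torch.randn(2, 25, 256))
print(logits.shape)  # torch.Size([2, 3])
```

Concatenating the two modalities into one token sequence lets standard self-attention act as cross-modal attention, and the [CLS] position provides a single vector summarizing the joint audio-visual evidence for classification.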
Abstract: In this paper, we propose a model that can generate a singing voice from a normal speech utterance by harnessing zero-shot, many-to-many style-transfer learning. Our goal is to give anyone the opportunity to sing any song in a timely manner. We present a system comprising several readily available blocks, together with a modified auto-encoder, and show how this highly complex challenge can be met by stitching rather simple solutions together. We demonstrate the applicability of the proposed system with a group of 25 non-expert listeners. Samples of the data generated by our model are provided.
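The abstract does not detail the modified auto-encoder, so the sketch below shows only one common way to structure a zero-shot, many-to-many style-transfer auto-encoder: a narrow content bottleneck whose decoder is conditioned on a target-speaker embedding, as in AutoVC-style models. The class name `StyleTransferAE`, all dimensions, and the GRU-based layers are assumptions for illustration, not the paper's architecture.

```python
# Illustrative sketch of a speaker-conditioned auto-encoder for zero-shot,
# many-to-many voice style transfer. Names and dimensions are assumptions.
import torch
import torch.nn as nn

class StyleTransferAE(nn.Module):
    def __init__(self, n_mels=80, d_content=32, d_speaker=256, d_hidden=512):
        super().__init__()
        # A narrow bottleneck encourages the encoder to keep content only.
        self.encoder = nn.GRU(n_mels, d_content, batch_first=True)
        # The decoder reconstructs mels from content plus a speaker embedding.
        self.decoder = nn.GRU(d_content + d_speaker, d_hidden, batch_first=True)
        self.proj = nn.Linear(d_hidden, n_mels)

    def forward(self, mels, speaker_emb):
        # mels: (B, T, n_mels); speaker_emb: (B, d_speaker), e.g. from a
        # pretrained speaker encoder (this is what enables zero-shot transfer).
        content, _ = self.encoder(mels)
        spk = speaker_emb.unsqueeze(1).expand(-1, mels.size(1), -1)
        out, _ = self.decoder(torch.cat([content, spk], dim=-1))
        return self.proj(out)

# At inference, pair source-utterance content with a *target* speaker
# embedding to convert any speaker to any other (many-to-many):
model = StyleTransferAE()
converted = model(torch.randn(2, 100, 80), torch.randn(2, 256))  # (2, 100, 80)
```

The narrow bottleneck is the key design choice in this family of models: it forces the encoder to discard speaker identity, so the decoder must recover it from the conditioning embedding.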
Abstract: We present a deep-learning approach to the task of Concurrent Speaker Detection (CSD) using a modified transformer model. Our model is designed to handle multi-microphone data but can also operate in the single-microphone case. The method classifies audio segments into one of three classes: 1) no speech activity (noise only), 2) only a single speaker is active, and 3) more than one speaker is active. We incorporate a Cost-Sensitive (CS) loss and confidence calibration into the training procedure. The approach is evaluated on three real-world databases, AMI, AliMeeting, and CHiME-5, demonstrating an improvement over existing approaches.
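To illustrate the cost-sensitive training idea, the snippet below implements one standard form of CS loss: the expected misclassification cost under the model's predicted class distribution. The cost-matrix values are placeholders, not those used in the paper.

```python
# Minimal sketch of a cost-sensitive loss for the three CSD classes.
# The cost-matrix values below are illustrative placeholders.
import torch
import torch.nn.functional as F

# cost[i, j] = penalty for predicting class j when the true class is i
# (classes: 0 = noise only, 1 = single speaker, 2 = concurrent speakers).
COST = torch.tensor([
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 2.0],   # e.g., confusing single vs. concurrent costs more
    [1.0, 2.0, 0.0],
])

def cost_sensitive_loss(logits, targets):
    """Expected misclassification cost under the predicted distribution."""
    probs = F.softmax(logits, dim=-1)          # (B, 3)
    costs = COST.to(logits.device)[targets]    # (B, 3): cost row per true class
    return (probs * costs).sum(dim=-1).mean()

# Usage with dummy predictions:
logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 1, 2, 1])
loss = cost_sensitive_loss(logits, targets)
loss.backward()
```

Confidence calibration is commonly applied post hoc, for example by temperature-scaling the logits on a held-out validation set so that predicted probabilities better reflect empirical accuracy.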