
William Hartmann

TOGGL: Transcribing Overlapping Speech with Staggered Labeling

Aug 12, 2024

Cross-Lingual Conversational Speech Summarization with Large Language Models

Aug 12, 2024

Using i-vectors for subject-independent cross-session EEG transfer learning

Jan 16, 2024

Training Autoregressive Speech Recognition Models with Limited in-domain Supervision

Oct 27, 2022

Combining Unsupervised and Text Augmented Semi-Supervised Learning for Low Resourced Autoregressive Speech Recognition

Oct 29, 2021

Overcoming Domain Mismatch in Low Resource Sequence-to-Sequence ASR Models using Hybrid Generated Pseudotranscripts

Jun 14, 2021

Using heterogeneity in semi-supervised transcription hypotheses to improve code-switched speech recognition

Jun 14, 2021

Learning from Noisy Labels with Noise Modeling Network

May 01, 2020

Cross-lingual Information Retrieval with BERT

Apr 24, 2020

Towards a New Understanding of the Training of Neural Networks with Mislabeled Training Data

Sep 18, 2019