
Eugen Beck

Dynamic Acoustic Model Architecture Optimization in Training for ASR

Jun 16, 2025

Efficient Supernet Training with Orthogonal Softmax for Scalable ASR Model Compression

Jan 31, 2025

RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition

May 28, 2023

Improving Factored Hybrid HMM Acoustic Modeling without State Tying

Jan 24, 2022

Towards Consistent Hybrid HMM Acoustic Modeling

Apr 28, 2021

Context-Dependent Acoustic Modeling without Explicit Phone Clustering

May 15, 2020

LSTM Language Models for LVCSR in First-Pass Decoding and Lattice-Rescoring

Jul 01, 2019

RWTH ASR Systems for LibriSpeech: Hybrid vs Attention - w/o Data Augmentation

May 08, 2019