Albert Zeyer

The Conformer Encoder May Reverse the Time Dimension

Oct 01, 2024

Chunked Attention-based Encoder-Decoder Model for Streaming Speech Recognition

Sep 15, 2023

Monotonic segmental attention for automatic speech recognition

Oct 26, 2022

Why does CTC result in peaky behavior?

Jun 03, 2021

Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept

Apr 13, 2021

Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models

Apr 12, 2021

Librispeech Transducer Model with Internal Language Model Prior Correction

Apr 07, 2021

A study of latent monotonic attention variants

Mar 30, 2021

Investigations on Phoneme-Based End-To-End Speech Recognition

May 19, 2020

A New Training Pipeline for an Improved Neural Transducer

May 19, 2020