
Timo Lohrenz

Relaxed Attention for Transformer Models

Sep 20, 2022
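The "relaxed attention" named in this and the following paper title refers to smoothing the (cross-)attention weights toward a uniform distribution during training, which regularizes overconfident attention in encoder-decoder models. A minimal NumPy sketch of that smoothing step (the function name and the default relaxation coefficient `gamma` are illustrative assumptions, not taken from the papers):

```python
import numpy as np

def relaxed_attention(scores, gamma=0.1):
    """Smooth attention weights toward a uniform distribution.

    scores : array of attention logits, shape (..., num_keys)
    gamma  : relaxation coefficient in [0, 1]; gamma=0 recovers
             standard softmax attention, gamma=1 yields uniform weights.
    """
    # numerically stable softmax over the key dimension
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    # interpolate with the uniform distribution over key positions
    num_keys = scores.shape[-1]
    return (1.0 - gamma) * weights + gamma / num_keys
```

The interpolation keeps each row a valid probability distribution (rows still sum to one), so it can drop in wherever the softmax weights would otherwise be used during training.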

Relaxed Attention: A Simple Method to Boost Performance of End-to-End Automatic Speech Recognition

Jul 02, 2021

Multi-Encoder Learning and Stream Fusion for Transformer-Based End-to-End Automatic Speech Recognition

Mar 31, 2021