Siegfried Kunzmann

CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization

May 23, 2024

Quantization-Aware and Tensor-Compressed Training of Transformers for Natural Language Understanding

Jun 01, 2023

Dual-Attention Neural Transducers for Efficient Wake Word Spotting in Speech Recognition

Apr 05, 2023

Contextual Adapters for Personalized Speech Recognition in Neural Transducers

May 26, 2022

Context-Aware Transformer Transducer for Speech Recognition

Nov 05, 2021

FANS: Fusing ASR and NLU for on-device SLU

Oct 31, 2021

Exploiting Large-scale Teacher-Student Training for On-device Acoustic Models

Jun 11, 2021

End-to-End Multi-Channel Transformer for Speech Recognition

Feb 08, 2021

Tie Your Embeddings Down: Cross-Modal Latent Spaces for End-to-end Spoken Language Understanding

Nov 18, 2020

End-to-End Neural Transformer Based Spoken Language Understanding

Aug 12, 2020