
Khe Chai Sim

TransformerFAM: Feedback attention is working memory

Apr 14, 2024

Hierarchical Recurrent Adapters for Efficient Multi-Task Adaptation of Large Speech Models

Mar 25, 2024

Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning

Oct 06, 2023

Contextual Biasing with the Knuth-Morris-Pratt Matching Algorithm

Sep 29, 2023

Massive End-to-end Models for Short Search Queries

Sep 22, 2023

Improving Speech Recognition for African American English With Audio Classification

Sep 16, 2023

Edit Distance based RL for RNNT decoding

May 31, 2023

Efficient Domain Adaptation for Speech Foundation Models

Feb 03, 2023

Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion

Nov 04, 2022

Comparison of Soft and Hard Target RNN-T Distillation for Large-scale ASR

Oct 11, 2022