Knowledge Distillation for Neural Transducers from Large Self-Supervised Pre-trained Models

Oct 07, 2021
Figures 1–4 (images omitted)

View paper on arXiv