
Gokmen Oz

Knowledge Distillation Transfer Sets and their Impact on Downstream NLU Tasks

Oct 11, 2022

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022

Using multiple ASR hypotheses to boost i18n NLU performance

Dec 14, 2020