MixKD: Towards Efficient Distillation of Large-scale Language Models

Nov 01, 2020

View paper on arXiv / OpenReview
