Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders

Nov 20, 2022

View paper on arXiv
