Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders

Nov 20, 2022

View paper on arXiv
