Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models

Jul 12, 2023


View paper on arXiv
