Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization

Jun 17, 2024

View paper on arXiv

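The listing gives no detail beyond the title, but the title itself suggests a concrete recipe: prepend tokens that act as attention sinks, so that outlier-prone activation mass concentrates on the prefix and the remaining tokens become easier to quantize. Below is a minimal, hypothetical sketch of that idea, not the authors' method or code: it assumes a Hugging Face-style causal LM and tokenizer, and the names `run_with_sink_prefix`, `sink_prefix`, and the max-magnitude outlier proxy are all illustrative assumptions.

```python
# Conceptual sketch (not the paper's code): prepend a "sink" prefix to every
# input so attention-sink behavior concentrates on the prefix tokens, then
# inspect activation magnitudes of the remaining tokens as an outlier proxy.
import torch

def run_with_sink_prefix(model, tokenizer, text, sink_prefix="\n"):
    # Tokenize the (hypothetical) sink prefix and the real input separately,
    # then concatenate so the prefix occupies the first positions.
    # add_special_tokens=False avoids inserting a second BOS mid-sequence.
    prefix_ids = tokenizer(sink_prefix, return_tensors="pt").input_ids
    text_ids = tokenizer(text, return_tensors="pt",
                         add_special_tokens=False).input_ids
    input_ids = torch.cat([prefix_ids, text_ids], dim=1)

    with torch.no_grad():
        outputs = model(input_ids, output_hidden_states=True)

    # Look only at the non-prefix tokens: the title's claim implies their
    # peak activation magnitude (a simple outlier proxy) should shrink.
    n_prefix = prefix_ids.shape[1]
    per_layer_max = [
        h[:, n_prefix:, :].abs().max().item() for h in outputs.hidden_states
    ]
    return per_layer_max
```

Comparing `per_layer_max` with and without the prefix would indicate whether the sink prefix dampens outliers on a given model; how the prefix itself should be chosen is presumably part of the paper's contribution and is not covered by this sketch.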