Kyuyeun Kim

Post-training quantization of vision encoders needs prefixing registers

Oct 06, 2025

MimiQ: Low-Bit Data-Free Quantization of Vision Transformers

Jul 29, 2024

Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization

Jun 17, 2024