
Kedong Xu

ViM-VQ: Efficient Post-Training Vector Quantization for Visual Mamba

Mar 12, 2025

SSVQ: Unleashing the Potential of Vector Quantization with Sign-Splitting

Mar 11, 2025

VQ4ALL: Efficient Neural Network Representation via a Universal Codebook

Dec 09, 2024

Efficiency Meets Fidelity: A Novel Quantization Framework for Stable Diffusion

Dec 09, 2024

VQ4DiT: Efficient Post-Training Vector Quantization for Diffusion Transformers

Aug 30, 2024
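The papers listed above share a common theme: vector quantization (VQ) of neural-network weights, where a weight matrix is split into small sub-vectors and each sub-vector is replaced by an index into a learned codebook. As a minimal illustrative sketch of that core idea only (it does not reproduce the specific method of any paper above; all names and parameters here are invented for illustration):

```python
# Minimal sketch of weight vector quantization: group a weight matrix into
# `dim`-sized sub-vectors, learn a k-entry codebook with plain k-means, and
# store only the codebook plus one small index per sub-vector.
import numpy as np

def vector_quantize(W, dim=4, k=16, iters=20, seed=0):
    """Return (codebook, indices) approximating W with k centroids in R^dim."""
    rng = np.random.default_rng(seed)
    vecs = W.reshape(-1, dim)                              # sub-vectors
    codebook = vecs[rng.choice(len(vecs), k, replace=False)].copy()
    for _ in range(iters):                                 # plain k-means
        dists = ((vecs[:, None, :] - codebook[None]) ** 2).sum(-1)
        idx = dists.argmin(1)                              # nearest centroid
        for j in range(k):
            members = vecs[idx == j]
            if len(members):                               # update non-empty cells
                codebook[j] = members.mean(0)
    return codebook, idx

def dequantize(codebook, idx, shape):
    """Reconstruct the (lossy) weight matrix from codebook and indices."""
    return codebook[idx].reshape(shape)

W = np.random.default_rng(1).normal(size=(32, 32)).astype(np.float32)
cb, idx = vector_quantize(W, dim=4, k=16)
W_hat = dequantize(cb, idx, W.shape)
# Storage drops from 32*32 floats to 16*4 codebook floats + 256 4-bit indices.
```

Post-training VQ methods such as those above then refine the codebook and/or assignments to minimize the output error of the quantized network, rather than only the weight-space error that this k-means sketch minimizes.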