Zhikai Li

TTAQ: Towards Stable Post-training Quantization in Continuous Domain Adaptation
Dec 13, 2024

A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs
Dec 05, 2024

DilateQuant: Accurate and Efficient Diffusion Quantization via Weight Dilation
Sep 25, 2024

K-Sort Arena: Efficient and Reliable Benchmarking for Generative Models via K-wise Human Preferences
Aug 26, 2024

MGRQ: Post-Training Quantization For Vision Transformer With Mixed Granularity Reconstruction
Jun 13, 2024

LLM Inference Unveiled: Survey and Roofline Model Insights
Mar 11, 2024

RepQuant: Towards Accurate Post-Training Quantization of Large Transformer Models via Scale Reparameterization
Feb 08, 2024

RTA-Former: Reverse Transformer Attention for Polyp Segmentation
Jan 22, 2024

An Improved Grey Wolf Optimization Algorithm for Heart Disease Prediction
Jan 22, 2024

Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models
Jan 09, 2024