Mengzhao Chen

PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs
Oct 07, 2024

Adapting LLaMA Decoder to Vision Transformer
Apr 13, 2024

BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation
Feb 18, 2024

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization
Nov 16, 2023

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
Aug 25, 2023

Spatial Re-parameterization for N:M Sparsity
Jun 09, 2023

DiffRate: Differentiable Compression Rate for Efficient Vision Transformers
May 29, 2023

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization
May 14, 2023

SMMix: Self-Motivated Image Mixing for Vision Transformers
Dec 26, 2022

Super Vision Transformer
May 26, 2022