Fangmin Chen

GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference
Dec 23, 2024

ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models
Aug 16, 2024

FoldGPT: Simple and Effective Large Language Model Compression Scheme
Jul 01, 2024

SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity
Oct 30, 2023

Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution
Aug 05, 2023

Residual Local Feature Network for Efficient Super-Resolution
May 16, 2022