Fangmin Chen

ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models
Aug 16, 2024

FoldGPT: Simple and Effective Large Language Model Compression Scheme
Jul 01, 2024

SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity
Oct 30, 2023

Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution
Aug 05, 2023

Residual Local Feature Network for Efficient Super-Resolution
May 16, 2022