MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design

Dec 19, 2024


View paper on arXiv
