SqueezeAttention: 2D Management of KV-Cache in LLM Inference via Layer-wise Optimal Budget

Apr 07, 2024

View paper on arXiv