Compressing KV Cache for Long-Context LLM Inference with Inter-Layer Attention Similarity

Dec 03, 2024
