Yizhou Shan

InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference

Sep 08, 2024

The CAP Principle for LLM Serving

May 18, 2024