
Yizhou Shan

EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models

Oct 20, 2024

InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference

Sep 08, 2024

The CAP Principle for LLM Serving

May 18, 2024