
Yineng Zhang

Locality-aware Fair Scheduling in LLM Serving

Jan 24, 2025

FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving

Jan 02, 2025

QQQ: Quality Quattuor-Bit Quantization for Large Language Models

Jun 14, 2024

Re-evaluating the Memory-balanced Pipeline Parallelism: BPipe

Jan 04, 2024