
Jayashree Mohan

POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference

Oct 23, 2024

ASTRA: Accurate and Scalable ANNS-based Training of Extreme Classifiers

Sep 30, 2024

Metron: Holistic Performance Evaluation Framework for LLM Inference Systems

Jul 09, 2024

Vidur: A Large-Scale Simulation Framework For LLM Inference

May 08, 2024

vAttention: Dynamic Memory Management for Serving LLMs without PagedAttention

May 07, 2024

Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve

Mar 04, 2024

SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills

Aug 31, 2023

Synergy: Resource Sensitive DNN Scheduling in Multi-Tenant Clusters

Oct 12, 2021

Memory Optimization for Deep Networks

Oct 29, 2020

Analyzing and Mitigating Data Stalls in DNN Training

Jul 14, 2020