
Muralidhar Andoorveedu

Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization

Mar 24, 2025

Seesaw: High-throughput LLM Inference via Model Re-sharding

Mar 09, 2025

Tempo: Accelerating Transformer-Based Model Training through Memory Footprint Reduction

Oct 19, 2022