Vikram Appia

FarSkip-Collective: Unhobbling Blocking Communication in Mixture of Experts Models

Nov 14, 2025

Zebra-Llama: Towards Extremely Efficient Hybrid Models

May 22, 2025

X-EcoMLA: Upcycling Pre-Trained Attention into MLA for Efficient and Extreme KV Compression

Mar 14, 2025