
Wenxiang Lin

Efficient MoE Inference with Fine-Grained Scheduling of Disaggregated Expert Parallelism

Dec 25, 2025

FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models

Jan 18, 2025