Chandler Zhou

NVIDIA

Scalable Training of Mixture-of-Experts Models with Megatron Core

Mar 10, 2026

MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core

Apr 21, 2025

Aligning Language Models with Offline Reinforcement Learning from Human Feedback

Aug 23, 2023