Derya Cavdar

Amazon SageMaker Model Parallelism: A General and Flexible Framework for Large Model Training

Nov 10, 2021
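
The paper presents SageMaker's library for partitioning large models across devices. As a hedged illustration of the core idea only (plain PyTorch, not the SageMaker model-parallelism API; the two-stage split, class name, device placement, and layer sizes are assumptions made for this example), a minimal pipeline-style model split might look like this:

import torch
import torch.nn as nn

class TwoStagePipeline(nn.Module):
    """Hypothetical two-stage model split: each stage lives on its own
    device, and activations hop across the stage boundary. Real
    model-parallel runtimes add microbatching and scheduling on top."""
    def __init__(self, stage0: nn.Module, stage1: nn.Module,
                 dev0: str = "cpu", dev1: str = "cpu"):
        super().__init__()
        self.stage0 = stage0.to(dev0)
        self.stage1 = stage1.to(dev1)
        self.dev0, self.dev1 = dev0, dev1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stage0(x.to(self.dev0))
        # Cross-device activation transfer: the communication step that a
        # model-parallel framework schedules and overlaps with compute.
        return self.stage1(x.to(self.dev1))

model = TwoStagePipeline(
    nn.Sequential(nn.Linear(64, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 10)),
)
print(model(torch.randn(8, 64)).shape)  # torch.Size([8, 10])

In the actual library the partition is decided automatically and microbatches are pipelined through the stages; this sketch only shows the placement and transfer pattern.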

Densifying Assumed-sparse Tensors: Improving Memory Efficiency and MPI Collective Performance during Tensor Accumulation for Parallelized Training of Neural Machine Translation Models

May 10, 2019
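
The title names the technique: gradients that arrive as assumed-sparse tensors (e.g., from embedding lookups) are converted to dense buffers so that accumulation across workers can use a single fixed-size allreduce instead of variable-size sparse exchanges. A minimal single-process sketch of that densify step (tensor names and shapes are illustrative assumptions, not the paper's code):

import torch

vocab, dim = 10, 4
# Sparse gradient: only rows 2 and 7 of a (vocab, dim) embedding table
# received updates in this step.
indices = torch.tensor([[2, 7]])   # shape (sparse_dims, nnz)
values = torch.randn(2, dim)       # one dense row per touched index
sparse_grad = torch.sparse_coo_tensor(indices, values, (vocab, dim))

# Densify: every worker now holds an identically shaped dense buffer, so a
# single fixed-size collective can sum gradients across workers.
dense_grad = sparse_grad.to_dense()

# In a multi-worker job (after torch.distributed.init_process_group), the
# accumulation would be one call:
# torch.distributed.all_reduce(dense_grad)
print(dense_grad.shape)  # torch.Size([10, 4])

The trade-off is memory proportional to the full table in exchange for a fixed message size, which lets the collective use bandwidth-efficient dense allreduce algorithms.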