Brian Van Essen

Lion Cub: Minimizing Communication Overhead in Distributed Lion
Nov 25, 2024

The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism
Jul 25, 2020

Merlin: Enabling Machine Learning-Ready HPC Ensembles
Dec 05, 2019

Parallelizing Training of Deep Generative Models on Massive Scientific Datasets
Oct 05, 2019

Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism
Mar 15, 2019

Large-Scale Deep Learning on the YFCC100M Dataset
Feb 11, 2015