
Eiman Ebrahimi

Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture

Mar 04, 2021

PyTorch-Direct: Enabling GPU Centric Data Access for Very Large Graph Neural Network Training with Irregular Accesses

Jan 20, 2021

At-Scale Sparse Deep Neural Network Inference with Efficient GPU Implementation

Sep 02, 2020

Optimizing Multi-GPU Parallelization Strategies for Deep Learning Training

Jul 30, 2019