
Vikram Sharma Mailthody

TBA: Faster Large Language Model Training Using SSD-Based Activation Offloading
Aug 19, 2024

LSM-GNN: Large-scale Storage-based Multi-GPU GNN Training by Optimizing Data Transfer Scheme
Jul 21, 2024

Accelerating Sampling and Aggregation Operations in GNN Frameworks with GPU Initiated Direct Storage Accesses
Jun 28, 2023

IGB: Addressing The Gaps In Labeling, Features, Heterogeneity, and Size of Public Graph Datasets for Deep Learning Research
Feb 27, 2023

At-Scale Sparse Deep Neural Network Inference with Efficient GPU Implementation
Sep 02, 2020

I-BERT: Inductive Generalization of Transformer to Arbitrary Context Lengths
Jun 19, 2020