Omar Mohamed Awad

SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection

Jan 27, 2024

SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling

Nov 25, 2023

GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values

Nov 06, 2023

Improving Resnet-9 Generalization Trained on Small Datasets

Sep 07, 2023

FPRaker: A Processing Element For Accelerating Neural Network Training

Oct 15, 2020

TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference

Sep 01, 2020