
Habib Hajimolahoseini

Accelerating the Low-Rank Decomposed Models

Jul 24, 2024

Is 3D Convolution with 5D Tensors Really Necessary for Video Analysis?

Jul 23, 2024

Single Parent Family: A Spectrum of Family Members from a Single Pre-Trained Foundation Model

Jun 28, 2024

SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection

Jan 27, 2024

SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling

Nov 25, 2023

GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values

Nov 06, 2023

Speeding up Resnet Architecture with Layers Targeted Low Rank Decomposition

Sep 21, 2023

Improving Resnet-9 Generalization Trained on Small Datasets

Sep 07, 2023

Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization

Sep 07, 2023

A Short Study on Compressing Decoder-Based Language Models

Oct 16, 2021