Mohammadreza Tayaranian

Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models

Jul 11, 2024

Faster Inference of Integer SWIN Transformer by Removing the GELU Activation

Feb 02, 2024

Integer Fine-tuning of Transformer-based Models

Sep 20, 2022

Efficient Fine-Tuning of Compressed Language Models with Learners

Aug 03, 2022

Is Integer Arithmetic Enough for Deep Learning Training?

Jul 18, 2022

Efficient Fine-Tuning of BERT Models on the Edge

May 03, 2022