Alireza Ghaffari

Huawei Noah's Ark Lab; Department of Mathematics and Statistics, McGill University

OAC: Output-adaptive Calibration for Accurate Post-training Quantization

May 23, 2024

AdpQ: A Zero-shot Calibration Free Adaptive Post Training Quantization Method for LLMs

May 22, 2024

Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models

Dec 15, 2023

Statistical Hardware Design With Multi-model Active Learning

Mar 26, 2023

On the Convergence of Stochastic Gradient Descent in Low-precision Number Formats

Jan 09, 2023

EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

Dec 22, 2022

Integer Fine-tuning of Transformer-based Models

Sep 20, 2022

Is Integer Arithmetic Enough for Deep Learning Training?

Jul 18, 2022

Rethinking Pareto Frontier for Performance Evaluation of Deep Neural Networks

Feb 18, 2022

CNN2Gate: Toward Designing a General Framework for Implementation of Convolutional Neural Networks on FPGA

Apr 10, 2020