Mark Grobman

QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
Dec 05, 2022

Tiled Squeeze-and-Excite: Channel Attention With Local Spatial Context
Jul 05, 2021

Exploring Neural Networks Quantization via Layer-Wise Quantization Analysis
Dec 15, 2020

Fighting Quantization Bias With Bias
Jun 07, 2019

Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorization
Feb 05, 2019