
Tariq Afzal

Super Efficient Neural Network for Compression Artifacts Reduction and Super Resolution

Jan 26, 2024

MRQ: Support Multiple Quantization Schemes through Model Re-Quantization

Aug 04, 2023

Accelerator-Aware Training for Transducer-Based Speech Recognition

May 12, 2023

Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition

Jun 30, 2022

A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation

Apr 03, 2017