Rathinakumar Appuswamy

Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference

Jan 30, 2023
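Mixed-precision quantization assigns a different bit width to each layer, so that sensitive layers keep more precision while tolerant layers are pushed lower. The sketch below shows a generic symmetric uniform quantizer driven by a hypothetical per-layer bit-width table; it illustrates the general idea only and is not the method proposed in the paper.

```python
import torch

def quantize_uniform(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Illustrative only: real mixed-precision methods also have to choose
    `bits` per layer (e.g. via a sensitivity analysis) and quantize
    activations, which this sketch omits.
    """
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for signed 4-bit
    scale = w.abs().max() / qmax                # per-tensor step size
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

# Hypothetical per-layer bit-width assignment for a toy mixed-precision model.
bit_config = {"conv1": 8, "conv2": 4, "fc": 2}
weights = {name: torch.randn(64, 64) for name in bit_config}
quantized = {name: quantize_uniform(w, bit_config[name]) for name, w in weights.items()}
```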

Learned Step Size Quantization

Feb 21, 2019
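Learned Step Size Quantization (LSQ) treats the quantizer step size as a trainable parameter, updated by backpropagation through a straight-through estimator with a scaled gradient. The PyTorch sketch below follows that formulation (clamp, round, rescale, plus the paper's step-size initialization and gradient scale); the class and helper names are mine and the details are simplified relative to the reference implementation.

```python
import torch
import torch.nn as nn

def grad_scale(x, scale):
    # Forward: identity. Backward: gradient multiplied by `scale`.
    return (x - x * scale).detach() + x * scale

def round_pass(x):
    # Forward: round to nearest integer. Backward: straight-through (gradient 1).
    return (x.round() - x).detach() + x

class LsqQuantizer(nn.Module):
    """Minimal LSQ-style weight quantizer (illustrative, not the reference code)."""

    def __init__(self, bits: int = 4):
        super().__init__()
        self.qn = -(2 ** (bits - 1))       # lower clip level, e.g. -8 for signed 4-bit
        self.qp = 2 ** (bits - 1) - 1      # upper clip level, e.g. +7
        self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s

    @torch.no_grad()
    def init_from(self, w: torch.Tensor):
        # Initialization suggested in the paper: 2 * mean(|w|) / sqrt(Qp).
        self.step.copy_(2 * w.abs().mean() / (self.qp ** 0.5))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        g = 1.0 / ((w.numel() * self.qp) ** 0.5)     # gradient scale for the step size
        s = grad_scale(self.step, g)
        w_bar = round_pass(torch.clamp(w / s, self.qn, self.qp))
        return w_bar * s                              # "fake-quantized" weights
```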

Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference

Sep 11, 2018

Structured Convolution Matrices for Energy-efficient Deep learning

Jun 08, 2016

Deep neural networks are robust to weight binarization and other non-linear distortions

Jun 07, 2016
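The robustness study projects already-trained weights through non-linear distortions such as binarization and measures how much accuracy survives. Below is a minimal sketch, assuming a BinaryConnect-style projection (sign of each weight scaled by the layer's mean weight magnitude); the helper names and the tiny example model are illustrative, not the paper's setup.

```python
import torch

def binarize(w: torch.Tensor) -> torch.Tensor:
    """Project weights to {-a, +a} with a = mean(|w|) (one common scheme)."""
    return w.sign() * w.abs().mean()

def distort_model(model: torch.nn.Module, distortion=binarize) -> None:
    """Apply a non-linear distortion to every weight matrix in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() > 1:            # skip biases and norm parameters
                param.copy_(distortion(param))

# Usage sketch: distort a trained model, then re-run your usual evaluation
# loop to measure the accuracy lost to the distortion.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
distort_model(model)
```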

Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

May 24, 2016
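This work maps convolutional networks with very low-precision (trinary) weights and spiking neurons onto neuromorphic hardware. As a rough illustration of trinary weights only, the sketch below projects a weight tensor to {-1, 0, +1} with a magnitude threshold; the threshold heuristic comes from the general ternary-weight literature and is not the constrain-then-train procedure used in the paper.

```python
import torch

def ternarize(w: torch.Tensor, threshold_ratio: float = 0.7) -> torch.Tensor:
    """Project weights to {-1, 0, +1} using a magnitude threshold.

    The 0.7 * mean(|w|) threshold is a common heuristic from the ternary
    weight network literature, used here purely to illustrate trinary weights.
    """
    t = threshold_ratio * w.abs().mean()
    return torch.where(
        w > t, torch.ones_like(w),
        torch.where(w < -t, -torch.ones_like(w), torch.zeros_like(w)),
    )

w = torch.randn(3, 3)
print(ternarize(w))
```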