
Ryoji Ikegaya

n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

Mar 22, 2021
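
The title above names two standard ideas: powers-of-two quantization, which constrains each weight to a signed power of two so that multiplications reduce to bit shifts, and bit-level sparsity within that representation. The paper's n-hot encoding itself is not reproduced here; the snippet below is only a minimal sketch of the plain powers-of-two baseline, with the function name `quantize_pow2` and the exponent range invented for illustration.

```python
import numpy as np

def quantize_pow2(w, min_exp=-8, max_exp=0):
    """Round each weight to the nearest signed power of two.

    Generic powers-of-two quantization (NOT the paper's n-hot
    scheme): every nonzero weight becomes sign(w) * 2**e with e
    clipped to [min_exp, max_exp].
    """
    sign = np.sign(w)
    mag = np.abs(w)
    nonzero = mag > 0            # exact zeros stay zero (avoids log2(0))
    exp = np.zeros_like(w)
    exp[nonzero] = np.clip(np.round(np.log2(mag[nonzero])), min_exp, max_exp)
    return np.where(nonzero, sign * 2.0 ** exp, 0.0)

weights = np.array([0.07, -0.3, 0.0, 0.9])
print(quantize_pow2(weights))    # [ 0.0625 -0.25  0.  1. ]
```

Because each quantized value is a single power of two, a hardware multiply becomes a shift by `e`; an n-hot code, as the title suggests, presumably allows a small fixed number of such terms per weight instead of one.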

Filter Pre-Pruning for Improved Fine-tuning of Quantized Deep Neural Networks

Nov 25, 2020
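
This title combines filter pruning with fine-tuning of a quantized network. The paper's actual pruning criterion is not reproduced here; below is a minimal sketch of the common L1-magnitude filter-pruning step that such a pipeline could run before quantizing and fine-tuning, with `prune_filters_l1` and the pruning ratio chosen purely for illustration.

```python
import numpy as np

def prune_filters_l1(conv_weight, prune_ratio=0.25):
    """Zero out the conv filters with the smallest L1 norms.

    Generic magnitude-based filter pruning (an assumption, not the
    paper's criterion). conv_weight has shape
    (out_channels, in_channels, kH, kW); whole output filters are
    zeroed so they can be dropped before quantization fine-tuning.
    """
    norms = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    n_prune = int(prune_ratio * len(norms))
    pruned = conv_weight.copy()
    if n_prune > 0:
        weakest = np.argsort(norms)[:n_prune]   # filters with smallest L1 norm
        pruned[weakest] = 0.0
    return pruned

w = np.random.randn(8, 3, 3, 3).astype(np.float32)
w_pruned = prune_filters_l1(w)                  # zeroes 2 of the 8 filters
print((np.abs(w_pruned).reshape(8, -1).sum(axis=1) == 0).sum())  # 2
```

Pruning whole filters rather than individual weights keeps the remaining tensors dense, so the subsequent quantization and fine-tuning can operate on otherwise unmodified layers.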