
Nicholas Fraser

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

Jul 01, 2018

Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic

Jun 26, 2018

Inference of Quantized Neural Networks on Heterogeneous All-Programmable Devices

Jun 21, 2018

Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines

May 21, 2018

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017