
Nicholas J. Fraser

Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference
Feb 22, 2021

FAT: Training Neural Networks for Reliable Inference Under Hardware Faults
Nov 11, 2020

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications
Apr 06, 2020

Accuracy to Throughput Trade-offs for Reduced Precision Neural Networks on Reconfigurable Logic
Jul 17, 2018

Scaling Binarized Neural Networks on Reconfigurable Logic
Jan 27, 2017

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Dec 01, 2016