Ardavan Pedram

Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators

Jan 27, 2021

Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators

Jan 13, 2020

CATERPILLAR: Coarse Grain Reconfigurable Architecture for Accelerating the Training of Deep Neural Networks

Jun 08, 2017

A Systematic Approach to Blocking Convolutional Neural Networks

Jun 14, 2016

EIE: Efficient Inference Engine on Compressed Deep Neural Network

May 03, 2016