Paolo D'Alberto

Weight Block Sparsity: Training, Compilation, and AI Engine Accelerators

Jul 12, 2024

DPUV3INT8: A Compiler View to programmable FPGA Inference Engines

Oct 08, 2021

Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines

May 21, 2018