Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML

Mar 11, 2020

View paper on arXiv
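The paper describes how binary- and ternary-precision networks are implemented in hls4ml for FPGA inference. As a rough, hedged illustration of that kind of flow (not the authors' exact workflow), the sketch below quantizes a small Keras model to binary weights with QKeras and converts it to an HLS project with hls4ml. The layer sizes, layer names, output directory, and FPGA part are assumptions chosen for the example, and QKeras is just one way to express binary precision; the paper also covers ternary networks.

```python
# Minimal sketch: a binary-weight Keras model converted to an HLS project with hls4ml.
# Assumes recent hls4ml and QKeras installations; all sizes/names/parts below are
# illustrative assumptions, not values taken from the paper.
import hls4ml
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation
from qkeras import QDense, QActivation, binary

# Small fully connected model with binary weights and binary activations.
model = Sequential([
    QDense(64, input_shape=(16,),
           kernel_quantizer=binary(), bias_quantizer=binary(), name='fc1'),
    QActivation(binary(), name='act1'),
    QDense(5, kernel_quantizer=binary(), bias_quantizer=binary(), name='fc2'),
    Activation('softmax', name='softmax'),
])

# Per-layer hls4ml configuration derived from the Keras model.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Convert to an HLS project and compile the C simulation library.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_binary_prj',   # assumed output path
    part='xcku115-flvb2104-2-i',      # example Xilinx part, an assumption
)
hls_model.compile()
```

For a ternary variant one would swap the `binary()` quantizers for `ternary()` from QKeras; the rest of the conversion flow stays the same.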