Erwei Wang

Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference

Jan 02, 2022

Accelerating Recurrent Neural Networks for Gravitational Wave Experiments

Jun 26, 2021

Enabling Binary Neural Network Training on the Edge

Feb 10, 2021

LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference

Oct 24, 2019

Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs

Oct 21, 2019

LUTNet: Rethinking Inference in FPGA Soft Logic

Apr 01, 2019

Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going

Jan 21, 2019