Philip H. W. Leong

fSEAD: a Composable FPGA-based Streaming Ensemble Anomaly Detection Library

Jun 10, 2024

PolyLUT-Add: FPGA-based LUT Inference with Wide Inputs

Jun 07, 2024

The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting

Mar 28, 2023

NITI: Training Integer Neural Networks Using Integer-only Arithmetic

Sep 28, 2020

MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency

Feb 27, 2020

AddNet: Deep Neural Networks Using FPGA-Optimized Multipliers

Nov 19, 2019

Unrolling Ternary Neural Networks

Sep 09, 2019

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

Jul 01, 2018

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017