Naigang Wang

Unlocking Real-Time Fluorescence Lifetime Imaging: Multi-Pixel Parallelism for FPGA-Accelerated Processing

Oct 09, 2024

Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging

Oct 01, 2024

MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization

Jun 02, 2024

A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts

May 28, 2024

Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization

Apr 04, 2024

COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization

Mar 11, 2024

4-bit Quantization of LSTM-based Speech Recognition Models

Aug 27, 2021

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training

Apr 21, 2021

All at Once Network Quantization via Collaborative Knowledge Transfer

Mar 02, 2021

A Comprehensive Survey on Hardware-Aware Neural Architecture Search

Jan 22, 2021