Yongpan Liu

A 65nm 8b-Activation 8b-Weight SRAM-Based Charge-Domain Computing-in-Memory Macro Using A Fully-Parallel Analog Adder Network and A Single-ADC Interface

Nov 23, 2022

Block-Wise Dynamic-Precision Neural Network Training Acceleration via Online Quantization Sensitivity Analytics

Oct 31, 2022

SEFormer: Structure Embedding Transformer for 3D Object Detection

Sep 05, 2022

Adaptive Structured Sparse Network for Efficient CNNs with Feature Regularization

Oct 21, 2020

ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression

Jun 07, 2020

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM

Mar 30, 2019