
Zhenman Fang

Towards Accurate and Efficient Sub-8-Bit Integer Training

Nov 17, 2024

Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers

Jul 25, 2024

HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers

Nov 15, 2022

SuperYOLO: Super Resolution Assisted Object Detection in Multimodal Remote Sensing Imagery

Sep 27, 2022

Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization

Aug 10, 2022

BDFA: A Blind Data Adversarial Bit-flip Attack on Deep Neural Networks

Jan 07, 2022

FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions

Dec 27, 2021