Peiyan Dong

Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers

Jul 25, 2024

EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge

Feb 16, 2024

Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge

Dec 09, 2023

SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices

Sep 21, 2023

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Nov 19, 2022

HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers

Nov 15, 2022

The Lottery Ticket Hypothesis for Vision Transformers

Nov 02, 2022

Quantum Neural Network Compression

Jul 05, 2022

SPViT: Enabling Faster Vision Transformers via Soft Token Pruning

Dec 27, 2021

GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices based on Fine-Grained Structured Weight Sparsity

Aug 25, 2021