Yanyue Xie

MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router
Oct 15, 2024

Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers
Jul 25, 2024

HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression
Apr 20, 2024

SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices
Sep 21, 2023

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training
Nov 19, 2022

HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers
Nov 15, 2022

Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization
Aug 10, 2022