
Wei-Cheng Lin

ELSA: Exploiting Layer-wise N:M Sparsity for Vision Transformer Acceleration

Sep 15, 2024
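The N:M sparsity named in the title keeps at most N non-zero weights in every group of M consecutive weights. As a rough illustration only, here is a generic magnitude-based 2:4 mask in PyTorch (the helper name apply_nm_sparsity is made up for this sketch; ELSA's layer-wise N:M selection is not reproduced here):

```python
import torch

def apply_nm_sparsity(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude weights in every group of m consecutive
    weights along the input dimension; zero out the rest (generic N:M mask)."""
    out_features, in_features = weight.shape
    assert in_features % m == 0, "input dim must be divisible by m"
    groups = weight.reshape(out_features, in_features // m, m)
    # Indices of the (m - n) smallest-magnitude entries in each group.
    _, prune_idx = torch.topk(groups.abs(), m - n, dim=-1, largest=False)
    mask = torch.ones_like(groups)
    mask.scatter_(-1, prune_idx, 0.0)
    return (groups * mask).reshape(out_features, in_features)

# Example: 2:4 sparsity on a small linear layer's weight matrix.
w = torch.randn(8, 16)
w_sparse = apply_nm_sparsity(w, n=2, m=4)
```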

Palu: Compressing KV-Cache with Low-Rank Projection

Jul 30, 2024
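The low-rank projection idea in the title amounts to factoring a key/value projection into two thinner matrices so that only a small latent vector per token needs to be cached. A minimal sketch under that assumption (generic truncated SVD; not Palu's actual decomposition or rank-allocation scheme, and low_rank_factorize is a hypothetical helper):

```python
import torch

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Split a projection matrix W (d_out x d_in) into A @ B with A (d_out x r)
    and B (r x d_in) via truncated SVD, so only the r-dim latent per token
    has to be cached."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # d_out x r
    B = Vh[:rank, :]             # r x d_in
    return A, B

# Example: factor a key projection and cache only the low-rank latents.
d_model, d_head, r = 64, 64, 16
W_k = torch.randn(d_head, d_model)
A, B = low_rank_factorize(W_k, r)
x = torch.randn(10, d_model)     # hidden states for 10 tokens
latent_cache = x @ B.T           # cache 10 x r instead of 10 x d_head
keys = latent_cache @ A.T        # reconstruct full keys on the fly
```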

Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception

Jul 10, 2023
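Quantization-aware training, as referenced in the title, simulates low-precision arithmetic during training by inserting fake quantization into the forward pass. A minimal sketch of that building block only (symmetric uniform fake quantization; not the specific Q-YOLOP recipe):

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Map x to num_bits signed integers and back to float, so the network
    is exposed to quantization error during training."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    x_int = torch.round(x / scale).clamp(-qmax - 1, qmax)
    return x_int * scale

# Example: an 8-bit fake-quantized activation tensor.
act = torch.randn(4, 32)
act_q = fake_quantize(act, num_bits=8)
```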

Versatile Audio-Visual Learning for Handling Single and Multi Modalities in Emotion Regression and Classification Tasks

May 12, 2023