Chong Yu

S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning

Oct 09, 2024

Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning

Apr 15, 2024

Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression

Mar 23, 2024

Enhanced Sparsification via Stimulative Training

Mar 11, 2024

MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer

Mar 05, 2024

Efficient Architecture Search via Bi-level Data Pruning

Dec 21, 2023

SpVOS: Efficient Video Object Segmentation with Triple Sparse Convolution

Oct 23, 2023

Boosting Residual Networks with Group Knowledge

Aug 26, 2023

Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend

May 18, 2023

Boost Vision Transformer with GPU-Friendly Sparsity and Quantization

May 18, 2023