Chongjun Tu

FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding

Mar 19, 2025

TokenCarve: Information-Preserving Visual Token Compression in Multimodal Large Language Models

Mar 13, 2025

Attention Reallocation: Towards Zero-cost and Controllable Hallucination Mitigation of MLLMs

Mar 12, 2025

$Δ$-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers

Jun 03, 2024

ClipSAM: CLIP and SAM Collaboration for Zero-Shot Anomaly Segmentation

Jan 29, 2024

Partial Fine-Tuning: A Successor to Full Fine-Tuning for Vision Transformers

Dec 25, 2023

Efficient Architecture Search via Bi-level Data Pruning

Dec 21, 2023