
Zhongwei Wan

SVD-LLM V2: Optimizing Singular Value Truncation for Large Language Model Compression

Mar 16, 2025

Knowledge-enhanced Multimodal ECG Representation Learning with Arbitrary-Lead Inputs

Feb 25, 2025

MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference

Feb 24, 2025

Recent Advances in Large Language Model Benchmarks against Data Contamination: From Static to Dynamic Evaluation

Feb 23, 2025

ParallelComp: Parallel Long-Context Compressor for Length Extrapolation

Feb 20, 2025

ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?

Nov 10, 2024

Autoregressive Models in Vision: A Survey

Nov 08, 2024

NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction

Oct 28, 2024

Can Medical Vision-Language Pre-training Succeed with Purely Synthetic Data?

Oct 17, 2024

UNComp: Uncertainty-Aware Long-Context Compressor for Efficient Large Language Model Inference

Oct 04, 2024