Tao Chen

IEEE Fellow

Unveiling Many Faces of Surrogate Models for Configuration Tuning: A Fitness Landscape Analysis Perspective

Sep 26, 2025

HiPhO: How Far Are (M)LLMs from Humans in the Latest High School Physics Olympiad Benchmark?

Sep 10, 2025

Wisdom of the Crowd: Reinforcement Learning from Coevolutionary Collective Feedback

Aug 17, 2025

SC-Captioner: Improving Image Captioning with Self-Correction by Reinforcement Learning

Aug 08, 2025

EarthLink: A Self-Evolving AI Agent for Climate Science

Jul 24, 2025

MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM

Jul 16, 2025

LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models

Jul 08, 2025

MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs

May 27, 2025

Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning

May 27, 2025

Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models

May 27, 2025