Yi Chen

Refer to the report for detailed contributions.

Supervised Optimism Correction: Be Confident When LLMs Are Sure

Apr 10, 2025

Harnessing the Reasoning Economy: A Survey of Efficient Reasoning for Large Language Models

Mar 31, 2025

Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1

Mar 31, 2025

Strategic priorities for transformative progress in advancing biology with proteomics and artificial intelligence

Feb 21, 2025

Uncertainty-Participation Context Consistency Learning for Semi-supervised Semantic Segmentation

Dec 24, 2024

ITPNet: Towards Instantaneous Trajectory Prediction for Autonomous Driving

Dec 10, 2024

EgoPlan-Bench2: A Benchmark for Multimodal Large Language Model Planning in Real-World Scenarios

Dec 05, 2024

Moto: Latent Motion Token as the Bridging Language for Robot Manipulation

Dec 05, 2024

HunyuanVideo: A Systematic Framework For Large Video Generative Models

Dec 03, 2024

Sonic: Shifting Focus to Global Audio Perception in Portrait Animation

Nov 25, 2024