Hao Chen

OmniEduBench: A Comprehensive Chinese Benchmark for Evaluating Large Language Models in Education

Oct 30, 2025

Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos

Oct 24, 2025

Exploring Image Representation with Decoupled Classical Visual Descriptors

Oct 16, 2025

Revisit Modality Imbalance at the Decision Layer

Oct 16, 2025

Evolutionary Profiles for Protein Fitness Prediction

Oct 08, 2025

A Clinical-grade Universal Foundation Model for Intraoperative Pathology

Oct 06, 2025

StaMo: Unsupervised Learning of Generalizable Robot Motion from Compact State Representation

Oct 06, 2025

GenAR: Next-Scale Autoregressive Generation for Spatial Gene Expression Prediction

Oct 05, 2025

Growing Visual Generative Capacity for Pre-Trained MLLMs

Oct 02, 2025

MLA: A Multisensory Language-Action Model for Multimodal Understanding and Forecasting in Robotic Manipulation

Sep 30, 2025