Yinghui Li

TangramPuzzle: Evaluating Multimodal Large Language Models with Compositional Spatial Reasoning

Jan 23, 2026

EvoConfig: Self-Evolving Multi-Agent Systems for Efficient Autonomous Environment Configuration

Jan 23, 2026

Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models

Dec 31, 2025

SpecRouter: Adaptive Routing for Multi-Level Speculative Decoding in Large Language Models

May 12, 2025

From Token to Line: Enhancing Code Generation with a Long-Term Perspective

Apr 10, 2025

MDIT: A Model-free Data Interpolation Method for Diverse Instruction Tuning

Apr 09, 2025

RAISE: Reinforced Adaptive Instruction Selection For Large Language Models

Apr 09, 2025

Corrections Meet Explanations: A Unified Framework for Explainable Grammatical Error Correction

Feb 21, 2025

Revisiting Classification Taxonomy for Grammatical Errors

Feb 18, 2025

DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens

Feb 17, 2025