Peng Ye

Learning Compact Representations of LLM Abilities via Item Response Theory

Oct 01, 2025

Private Online Learning against an Adaptive Adversary: Realizable and Agnostic Settings

Oct 01, 2025

Private Realizable-to-Agnostic Transformation with Near-Optimal Sample Complexity

Oct 01, 2025

HiPhO: How Far Are (M)LLMs from Humans in the Latest High School Physics Olympiad Benchmark?

Sep 10, 2025

Beyond GPT-5: Making LLMs Cheaper and Better via Performance-Efficiency Optimized Routing

Aug 18, 2025

Wisdom of the Crowd: Reinforcement Learning from Coevolutionary Collective Feedback

Aug 17, 2025

Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models

May 27, 2025

The Avengers: A Simple Recipe for Uniting Smaller Language Models to Challenge Proprietary Giants

May 26, 2025

Doc-CoB: Enhancing Multi-Modal Document Understanding with Visual Chain-of-Boxes Reasoning

May 24, 2025

NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification

May 22, 2025