Haoyu Li

Aggregation Queries over Unstructured Text: Benchmark and Agentic Method

Feb 03, 2026

Probing RLVR training instability through the lens of objective-level hacking

Feb 01, 2026

Machine learning based radiative parameterization scheme and its performance in operational reforecast experiments

Jan 20, 2026

HoneyTrap: Deceiving Large Language Model Attackers to Honeypot Traps with Resilient Multi-Agent Defense

Jan 07, 2026

No Cache Left Idle: Accelerating diffusion model via Extreme-slimming Caching

Dec 14, 2025

Efficient Level-Crossing Probability Calculation for Gaussian Process Modeled Data

Dec 13, 2025

Time-Layer Adaptive Alignment for Speaker Similarity in Flow-Matching Based Zero-Shot TTS

Nov 13, 2025

Gradient-based multi-focus image fusion with focus-aware saliency enhancement

Sep 26, 2025

Dual-Stage Safe Herding Framework for Adversarial Attacker in Dynamic Environment

Sep 10, 2025

OmniVTLA: Vision-Tactile-Language-Action Model with Semantic-Aligned Tactile Sensing

Aug 12, 2025