
Han Bao

Drift-Bench: Diagnosing Cooperative Breakdowns in LLM Agents under Input Faults via Multi-Turn Interaction

Feb 02, 2026

Mitigating Hallucinations in Video Large Language Models via Spatiotemporal-Semantic Contrastive Decoding

Jan 30, 2026

Spectral Gradient Descent Mitigates Anisotropy-Driven Misalignment: A Case Study in Phase Retrieval

Jan 30, 2026

Non-Stationary Online Structured Prediction with Surrogate Losses

Oct 08, 2025

VModA: An Effective Framework for Adaptive NSFW Image Moderation

May 29, 2025

MentalMAC: Enhancing Large Language Models for Detecting Mental Manipulation via Multi-Task Anti-Curriculum Distillation

May 21, 2025

Establishing Linear Surrogate Regret Bounds for Convex Smooth Losses via Convolutional Fenchel-Young Losses

May 15, 2025

Many-to-Many Matching via Sparsity Controlled Optimal Transport

Mar 31, 2025

Any-stepsize Gradient Descent for Separable Data under Fenchel--Young Losses

Feb 07, 2025

TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models

Jan 29, 2025