Taesup Kim

Efficient Epistemic Uncertainty Estimation for Large Language Models via Knowledge Distillation

Feb 02, 2026

Robust Domain Generalization under Divergent Marginal and Conditional Distributions

Feb 02, 2026

Attention-space Contrastive Guidance for Efficient Hallucination Mitigation in LVLMs

Jan 20, 2026

What If TSF: A Benchmark for Reframing Forecasting as Scenario-Guided Multimodal Forecasting

Jan 13, 2026

Stable On-Policy Distillation through Adaptive Target Reformulation

Jan 12, 2026

EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs

Jan 11, 2026

Garbage Attention in Large Language Models: BOS Sink Heads and Sink-aware Pruning

Jan 11, 2026

Learning to Act Robustly with View-Invariant Latent Actions

Jan 06, 2026

Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks

Nov 17, 2025

ATAS: Any-to-Any Self-Distillation for Enhanced Open-Vocabulary Dense Prediction

Jun 10, 2025