Minwu Kim

On the Limits of Layer Pruning for Generative Reasoning in LLMs

Feb 02, 2026

Training Reasoning Models on Saturated Problems via Failure-Prefix Conditioning

Jan 28, 2026

Reinforcement Learning vs. Distillation: Understanding Accuracy and Capability in LLM Reasoning

May 20, 2025

Warm Up Before You Train: Unlocking General Reasoning in Resource-Constrained Settings

May 19, 2025

Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges

Feb 12, 2025