
Doyoung Kim

Amazon, KAIST

TSLM: Tree-Structured Language Modeling for Divergent Thinking

Jan 30, 2026

RIM Hand: A Robotic Hand with an Accurate Carpometacarpal Joint and Nitinol-Supported Skeletal Structure

Jan 20, 2026

GlueNN: gluing patchwise analytic solutions with neural networks

Jan 09, 2026

Beyond Perfect APIs: A Comprehensive Evaluation of LLM Agents Under Real-World API Complexity

Jan 01, 2026

References Indeed Matter? Reference-Free Preference Optimization for Conversational Query Reformulation

May 10, 2025

Cognitive Map for Language Models: Optimal Planning via Verbally Representing the World Model

Jun 21, 2024

Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity

Apr 22, 2024

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards

Apr 16, 2024

Semiparametric Token-Sequence Co-Supervision

Mar 14, 2024

Joint Mechanical and Electrical Adjustment of IRS-aided LEO Satellite MIMO Communications

Jan 12, 2024