Yejin Choi

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models

Apr 10, 2025

OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens

Apr 09, 2025

One-Minute Video Generation with Test-Time Training

Apr 07, 2025

Retro-Search: Exploring Untaken Paths for Deeper and Efficient Reasoning

Apr 06, 2025

SuperBPE: Space Travel for Language Models

Mar 17, 2025

Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models

Mar 15, 2025

Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models

Feb 17, 2025

When One LLM Drools, Multi-LLM Collaboration Rules

Feb 06, 2025

ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning

Feb 03, 2025

International AI Safety Report

Jan 29, 2025