Yejin Choi

Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models

Feb 17, 2025

When One LLM Drools, Multi-LLM Collaboration Rules

Feb 06, 2025

ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning

Feb 03, 2025

International AI Safety Report

Jan 29, 2025

HALoGEN: Fantastic LLM Hallucinations and Where to Find Them

Jan 14, 2025

Multi-Attribute Constraint Satisfaction via Language Model Rewriting

Dec 26, 2024

Explore Theory of Mind: Program-guided adversarial data generation for theory of mind reasoning

Dec 12, 2024

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice

Dec 09, 2024

Negative Token Merging: Image-based Adversarial Feature Guidance

Dec 02, 2024

BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions

Nov 12, 2024