
Yangsibo Huang

On Evaluating the Durability of Safeguards for Open-Weight LLMs

Dec 10, 2024

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice

Dec 09, 2024

On Memorization of Large Language Models in Logical Reasoning

Oct 30, 2024

An Adversarial Perspective on Machine Unlearning for AI Safety

Sep 26, 2024

ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty

Aug 26, 2024

MUSE: Machine Unlearning Six-Way Evaluation for Language Models

Jul 08, 2024

Evaluating Copyright Takedown Methods for Language Models

Jun 26, 2024

Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models

Jun 23, 2024

SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors

Jun 20, 2024

Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning

Jun 20, 2024