Peter Henderson

On Evaluating the Durability of Safeguards for Open-Weight LLMs

Dec 10, 2024

The Mirage of Artificial Intelligence Terms of Use Restrictions

Dec 10, 2024

An Adversarial Perspective on Machine Unlearning for AI Safety

Sep 26, 2024

The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources

Jun 26, 2024

Evaluating Copyright Takedown Methods for Language Models

Jun 26, 2024

SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors

Jun 20, 2024

Fantastic Copyrighted Beasts and How (Not) to Generate Them

Jun 20, 2024

Safety Alignment Should Be Made More Than Just a Few Tokens Deep

Jun 10, 2024

JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits

Jun 06, 2024

AI Risk Management Should Incorporate Both Safety and Security

May 29, 2024