Seth Neel

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
Dec 09, 2024

Attribute-to-Delete: Machine Unlearning via Datamodel Matching
Oct 30, 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Jun 25, 2024

Pandora's White-Box: Increased Training Data Leakage in Open LLMs
Feb 26, 2024

Privacy Issues in Large Language Models: A Survey
Dec 11, 2023

MoPe: Model Perturbation-based Privacy Attacks on Language Models
Oct 22, 2023

Black-Box Training Data Identification in GANs via Detector Networks
Oct 18, 2023

In-Context Unlearning: Language Models as Few Shot Unlearners
Oct 12, 2023

PRIMO: Private Regression in Multiple Outcomes
Mar 07, 2023

Model Explanation Disparities as a Fairness Diagnostic
Mar 06, 2023