Andreas Terzis

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice

Dec 09, 2024

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD

Oct 10, 2024

Private prediction for large-scale synthetic text generation

Jul 16, 2024

Harnessing large-language models to generate private synthetic text

Jun 02, 2023

Poisoning Web-Scale Training Datasets is Practical

Feb 20, 2023

Tight Auditing of Differentially Private Machine Learning

Feb 15, 2023

The Privacy Onion Effect: Memorization is Relative

Jun 22, 2022

Debugging Differential Privacy: A Case Study for Privacy Auditing

Mar 28, 2022

Toward Training at ImageNet Scale with Differential Privacy

Feb 09, 2022

Membership Inference Attacks From First Principles

Dec 07, 2021