Santiago Zanella-Béguelin

Microsoft Research

Securing AI Agents with Information-Flow Control

May 29, 2025

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text

Feb 19, 2025

Permissive Information-Flow Analysis for Large Language Models

Oct 04, 2024

Closed-Form Bounds for DP-SGD against Record-level Inference

Feb 22, 2024

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective

Nov 27, 2023

Analyzing Leakage of Personally Identifiable Information in Language Models

Feb 01, 2023

SoK: Let The Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning

Dec 21, 2022

Bayesian Estimation of Differential Privacy

Jun 15, 2022

Analyzing Privacy Loss in Updates of Natural Language Models

Jan 14, 2020