Peter Kairouz

Federated Learning in Practice: Reflections and Projections

Oct 11, 2024

Randomization Techniques to Mitigate the Risk of Copyright Infringement

Aug 21, 2024

Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition

Jun 13, 2024

Air Gap: Protecting Privacy-Conscious Conversational Agents

May 08, 2024

Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy

May 02, 2024

Confidential Federated Computations

Apr 16, 2024

Can LLMs get help from other LLMs without revealing private information?

Apr 02, 2024

Privacy-Preserving Instructions for Aligning Large Language Models

Feb 21, 2024

User Inference Attacks on Large Language Models

Oct 13, 2023

Private Federated Learning with Autotuned Compression

Jul 20, 2023