
Antoine Boutet

PRIVATICS

Leveraging Algorithmic Fairness to Mitigate Blackbox Attribute Inference Attacks

Nov 18, 2022

Inferring Sensitive Attributes from Model Explanations

Aug 21, 2022

I-GWAS: Privacy-Preserving Interdependent Genome-Wide Association Studies

Aug 17, 2022

Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks

Feb 04, 2022

MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers

Sep 26, 2021

Privacy Assessment of Federated Learning using Private Personalized Layers

Jun 15, 2021

GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning

Oct 02, 2020

DySan: Dynamically Sanitizing Motion Sensor Data Against Sensitive Inferences Through Adversarial Networks

Mar 23, 2020