
Michael K. Reiter

A General Framework for Data-Use Auditing of ML Models

Jul 21, 2024

Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning

May 10, 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

Feb 22, 2024

Mendata: A Framework to Purify Manipulated Training Data

Dec 03, 2023

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Jun 29, 2023

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Dec 28, 2021

Defense Through Diverse Directions

Mar 24, 2020

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection

Dec 19, 2019

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

Dec 19, 2019

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

Jul 27, 2018