Mahmood Sharif

Impactful Bit-Flip Search on Full-precision Models
Nov 12, 2024

Adversarial Robustness Through Artifact Design
Feb 07, 2024

The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations
Dec 18, 2023

Group-based Robustness: A General Framework for Customized Robustness in the Real World
Jun 29, 2023

Scalable Verification of GNN-based Job Schedulers
Mar 07, 2022

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
Dec 28, 2021

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection
Dec 19, 2019

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers
Dec 19, 2019

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples
Jul 27, 2018

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
Dec 31, 2017