Marco Melis

FADER: Fast Adversarial Example Rejection

Oct 18, 2020

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?

May 04, 2020

secml: A Python Library for Secure and Explainable Machine Learning

Dec 20, 2019

Deep Neural Rejection against Adversarial Examples

Oct 01, 2019

Explaining Black-box Android Malware Detection

Oct 29, 2018

On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks

Sep 08, 2018

Super-sparse Learning in Similarity Spaces

Dec 17, 2017

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

Aug 23, 2017