Alessandro Biondi

Edge-Only Universal Adversarial Attacks in Distributed Learning

Nov 15, 2024

Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications

Nov 19, 2023

Robust-by-Design Classification via Unitary-Gradient Neural Networks

Sep 09, 2022

CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models

Jun 09, 2022

Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

Mar 14, 2022

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

Jan 05, 2022

On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error

Jan 04, 2022

Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks

Aug 13, 2021

Increasing the Confidence of Deep Neural Networks by Coverage Analysis

Jan 28, 2021

Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting

Jan 27, 2021