Alessandro Biondi

Benchmarking the Spatial Robustness of DNNs via Natural and Adversarial Localized Corruptions

Apr 02, 2025

Loss Landscape Analysis for Reliable Quantized ML Models for Scientific Sensing

Feb 12, 2025

Edge-Only Universal Adversarial Attacks in Distributed Learning

Nov 15, 2024

Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications

Nov 19, 2023

Robust-by-Design Classification via Unitary-Gradient Neural Networks

Sep 09, 2022

CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models

Jun 09, 2022

Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

Mar 14, 2022

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

Jan 05, 2022

On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error

Jan 04, 2022

Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks

Aug 13, 2021