Giulio Rossolini

Edge-Only Universal Adversarial Attacks in Distributed Learning

Nov 15, 2024

Concise Thoughts: Impact of Output Length on LLM Reasoning and Cost

Jul 29, 2024

Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications

Nov 19, 2023

TrainSim: A Railway Simulation Framework for LiDAR and Camera Dataset Generation

Feb 28, 2023

Robust-by-Design Classification via Unitary-Gradient Neural Networks

Sep 09, 2022

CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models

Add code
Jun 09, 2022
Figure 1 for CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
Figure 2 for CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
Figure 3 for CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
Figure 4 for CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
Viaarxiv icon

Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

Mar 14, 2022

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

Jan 05, 2022

On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error

Jan 04, 2022

Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks

Aug 13, 2021