Peter Schlicht

What should AI see? Using the Public's Opinion to Determine the Perception of an AI

Jun 09, 2022

Tailored Uncertainty Estimation for Deep Learning Systems

Apr 29, 2022

Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis

Jun 10, 2021

The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing

Jan 13, 2021

Approaching Neural Network Uncertainty Realism

Jan 08, 2021

Improving Video Instance Segmentation by Light-weight Temporal Uncertainty Estimates

Dec 14, 2020

From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation

Dec 02, 2020

A Self-Supervised Feature Map Augmentation (FMA) Loss and Combined Augmentations Finetuning to Efficiently Improve the Robustness of CNNs

Dec 02, 2020

Risk Assessment for Machine Learning Models

Nov 09, 2020

Self-Supervised Domain Mismatch Estimation for Autonomous Perception

Jun 15, 2020