Kathrin Grosse

Manipulating Trajectory Prediction with Backdoors

Jan 03, 2024

Towards more Practical Threat Models in Artificial Intelligence Security

Nov 16, 2023

A Survey on Reinforcement Learning Security with Application to Autonomous Driving

Dec 12, 2022

"Why do so?" -- A Practical Perspective on Machine Learning Security

Jul 11, 2022

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

May 04, 2022

Machine Learning Security against Data Poisoning: Are We There Yet?

Apr 12, 2022

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions

Jun 14, 2021

Mental Models of Adversarial Machine Learning

May 08, 2021

Adversarial Examples and Metrics

Jul 15, 2020

A new measure for overfitting and its implications for backdooring of deep learning

Jun 18, 2020