
Shawn Shan

Disrupting Style Mimicry Attacks on Video Imagery

May 11, 2024

Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?

Feb 06, 2024

Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models

Oct 20, 2023

SoK: Anti-Facial Recognition Technology

Dec 08, 2021

Traceback of Data Poisoning Attacks in Neural Networks

Oct 13, 2021

Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks

Jun 24, 2020

Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

Feb 19, 2020

Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks

Apr 18, 2019