Zitao Chen

Catch Me if You Can: Detecting Unauthorized Data Use in Deep Learning Models

Sep 10, 2024

A Method to Facilitate Membership Inference Attacks in Deep Learning Models

Jul 02, 2024

Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction

Jul 04, 2023

Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack

Aug 11, 2021

TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications

Apr 03, 2020

Ranger: Boosting Error Resilience of Deep Neural Networks through Range Restriction

Mar 30, 2020