
Lijia Yu

Generalizability of Memorization Neural Networks

Nov 01, 2024

Out-of-Bounding-Box Triggers: A Stealthy Approach to Cheat Object Detectors

Oct 14, 2024

T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models

Jul 08, 2024

Generalization Bound and New Algorithm for Clean-Label Backdoor Attack

Jun 02, 2024

Detection and Defense of Unlearnable Examples

Dec 14, 2023

Restore Translation Using Equivariant Neural Networks

Jun 29, 2023

Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game

Jul 17, 2022

Adversarial Parameter Attack on Deep Neural Networks

Mar 20, 2022

Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks

Nov 08, 2021

A Robust Classification-autoencoder to Defend Outliers and Adversaries

Jun 30, 2021