Leo Yu Zhang

Test-Time Backdoor Detection for Object Detection Models

Mar 19, 2025

Improving Generalization of Universal Adversarial Perturbation via Dynamic Maximin Optimization

Mar 17, 2025

Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning

Mar 12, 2025

Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning

Jan 28, 2025

Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

Jan 28, 2025

NumbOD: A Spatial-Frequency Fusion Attack Against Object Detectors

Dec 22, 2024

PB-UAP: Hybrid Universal Adversarial Attack For Image Segmentation

Dec 21, 2024

FLARE: Towards Universal Dataset Purification against Backdoor Attacks

Nov 29, 2024

TrojanRobot: Backdoor Attacks Against Robotic Manipulation in the Physical World

Nov 18, 2024

Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization

Nov 06, 2024