Leo Yu Zhang

FLARE: Towards Universal Dataset Purification against Backdoor Attacks

Nov 29, 2024

TrojanRobot: Backdoor Attacks Against Robotic Manipulation in the Physical World

Nov 18, 2024

Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization

Nov 06, 2024

DarkSAM: Fooling Segment Anything Model to Segment Nothing

Sep 26, 2024

ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification

Jun 25, 2024

Memorization in deep learning: A survey

Jun 06, 2024

Large Language Model Watermark Stealing With Mixed Integer Programming

May 30, 2024

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

May 16, 2024

Algorithmic Fairness: A Tolerance Perspective

Apr 26, 2024

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness

Apr 17, 2024