Leo Yu Zhang

Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
Nov 06, 2024

DarkSAM: Fooling Segment Anything Model to Segment Nothing
Sep 26, 2024

ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification
Jun 25, 2024

Memorization in deep learning: A survey
Jun 06, 2024

Large Language Model Watermark Stealing With Mixed Integer Programming
May 30, 2024

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency
May 16, 2024

Algorithmic Fairness: A Tolerance Perspective
Apr 26, 2024

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness
Apr 17, 2024

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
Mar 19, 2024

Fluent: Round-efficient Secure Aggregation for Private Federated Learning
Mar 10, 2024