Shengshan Hu

Visual Adversarial Attack on Vision-Language Models for Autonomous Driving
Nov 27, 2024

TrojanRobot: Backdoor Attacks Against Robotic Manipulation in the Physical World
Nov 18, 2024

Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need
Oct 04, 2024

DarkSAM: Fooling Segment Anything Model to Segment Nothing
Sep 26, 2024

ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification
Jun 25, 2024

Large Language Model Watermark Stealing With Mixed Integer Programming
May 30, 2024

Variational Bayes for Federated Continual Learning
May 23, 2024

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness
Apr 17, 2024

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
Mar 19, 2024

Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks
Jan 30, 2024