Siyuan Liang

CopyrightShield: Spatial Similarity Guided Backdoor Defense against Copyright Infringement in Diffusion Models
Dec 02, 2024

Visual Adversarial Attack on Vision-Language Models for Autonomous Driving
Nov 27, 2024

Interpreting Object-level Foundation Models via Visual Precision Search
Nov 25, 2024

NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models
Oct 11, 2024

Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models
Oct 07, 2024

Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
Sep 29, 2024

TA-Cleaner: A Fine-grained Text Alignment Backdoor Defense Strategy for Multimodal Contrastive Learning
Sep 26, 2024

Adversarial Backdoor Defense in CLIP
Sep 24, 2024

Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis
Sep 24, 2024

Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving
Sep 11, 2024