Siyuan Liang

NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models
Oct 11, 2024

Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models
Oct 07, 2024

Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
Sep 29, 2024

TA-Cleaner: A Fine-grained Text Alignment Backdoor Defense Strategy for Multimodal Contrastive Learning
Sep 26, 2024

Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis
Sep 24, 2024

Adversarial Backdoor Defense in CLIP
Sep 24, 2024

Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving
Sep 11, 2024

Compromising Embodied Agents with Contextual Backdoor Attacks
Aug 06, 2024

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing
Jun 30, 2024

Revisiting Backdoor Attacks against Large Vision-Language Models
Jun 27, 2024