
Siyuan Liang

LEDiT: Your Length-Extrapolatable Diffusion Transformer without Positional Encoding
Mar 07, 2025

Adversarial Training for Multimodal Large Language Models against Jailbreak Attacks
Mar 05, 2025

ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models
Feb 22, 2025

CogMorph: Cognitive Morphing Attacks for Text-to-Image Models
Jan 21, 2025

Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning
Dec 16, 2024

CopyrightShield: Spatial Similarity Guided Backdoor Defense against Copyright Infringement in Diffusion Models
Dec 02, 2024

Visual Adversarial Attack on Vision-Language Models for Autonomous Driving
Nov 27, 2024

Interpreting Object-level Foundation Models via Visual Precision Search
Nov 25, 2024

NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models
Oct 11, 2024

Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models
Oct 07, 2024