Hanxun Huang

Toward Universal and Transferable Jailbreak Attacks on Vision-Language Models
Feb 01, 2026

Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs
Jan 29, 2026

AUDETER: A Large-scale Dataset for Deepfake Audio Detection in Open Worlds
Sep 04, 2025

X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
May 08, 2025

CURVALID: Geometrically-guided Adversarial Prompt Detection
Mar 05, 2025

Detecting Backdoor Samples in Contrastive Language Image Pretraining
Feb 03, 2025

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks
Nov 20, 2024

Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models
Oct 25, 2024

BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models
Aug 23, 2024

Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers
Aug 03, 2024