
Xingjun Ma

CALM: Curiosity-Driven Auditing for Large Language Models
Jan 06, 2025

Free-Form Motion Control: A Synthetic Video Generation Dataset with Controllable Camera and Object Motions
Jan 03, 2025

AIM: Additional Image Guided Generation of Transferable Adversarial Attacks
Jan 02, 2025

HoneypotNet: Backdoor Attacks Against Model Extraction
Jan 02, 2025

DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model
Dec 02, 2024

Adversarial Prompt Distillation for Vision-Language Models
Nov 22, 2024

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks
Nov 20, 2024

TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models
Nov 20, 2024

IDEATOR: Jailbreaking VLMs Using VLMs
Oct 29, 2024

BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks
Oct 28, 2024