Xiongtao Sun

Utilizing Jailbreak Probability to Attack and Safeguard Multimodal LLMs

Mar 10, 2025

DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts

Aug 16, 2024

Multi-Turn Context Jailbreak Attack on Large Language Models From First Principles

Aug 08, 2024