Zedian Shao

Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment

Oct 18, 2024

Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models

Oct 15, 2024

Refusing Safe Prompts for Multi-modal Large Language Models

Jul 12, 2024