Shiyao Cui

The Missing Half: Unveiling Training-time Implicit Safety Risks Beyond Deployment

Feb 04, 2026

The Side Effects of Being Smart: Safety Risks in MLLMs' Multi-Image Reasoning

Jan 20, 2026

JPS: Jailbreak Multimodal Large Language Models with Collaborative Visual Perturbation and Textual Steering

Aug 07, 2025

Exploring Multimodal Challenges in Toxic Chinese Detection: Taxonomy, Benchmark, and Findings

May 30, 2025

Be Careful When Fine-tuning On Open-Source LLMs: Your Fine-tuning Data Could Be Secretly Stolen!

May 21, 2025

How Should We Enhance the Safety of Large Reasoning Models: An Empirical Study

May 21, 2025

ShieldVLM: Safeguarding the Multimodal Implicit Toxicity via Deliberative Reasoning with LVLMs

May 20, 2025

AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement

Feb 24, 2025

LongSafety: Evaluating Long-Context Safety of Large Language Models

Feb 24, 2025

Human Decision-making is Susceptible to AI-driven Manipulation

Feb 11, 2025