Chunyi Zhou

ACIArena: Toward Unified Evaluation for Agent Cascading Injection

Apr 09, 2026

"I See What You Did There": Can Large Vision-Language Models Understand Multimodal Puns?

Apr 07, 2026

When Agents "Misremember" Collectively: Exploring the Mandela Effect in LLM-based Multi-Agent Systems

Jan 31, 2026

FraudShield: Knowledge Graph Empowered Defense for LLMs against Fraud Attacks

Jan 30, 2026

Bridging the Copyright Gap: Do Large Vision-Language Models Recognize and Respect Copyrighted Content?

Dec 26, 2025

The Eminence in Shadow: Exploiting Feature Boundary Ambiguity for Robust Backdoor Attacks

Dec 17, 2025

Poison in the Well: Feature Embedding Disruption in Backdoor Attacks

May 26, 2025

UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning

Jan 26, 2025

Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents

Nov 14, 2024

Intellectual Property Protection for Deep Learning Model and Dataset Intelligence

Nov 07, 2024