Zhilong Wang

Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions

Mar 04, 2026

To Protect the LLM Agent Against the Prompt Injection Attack with Polymorphic Prompt

Jun 06, 2025

NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification

May 22, 2025

Hide Your Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Neural Carrier Articles

Aug 20, 2024

Hidden You Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Logic Chain Injection

Apr 16, 2024

ChatGPT for Software Security: Exploring the Strengths and Limitations of ChatGPT in the Security Applications

Aug 10, 2023

Which Features are Learned by CodeBert: An Empirical Study of the BERT-based Source Code Representation Learning

Jan 20, 2023