
Fangzhou Wu

FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks

Oct 28, 2024

A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems

Feb 28, 2024

WIPI: A New Web Threat for LLM-Driven Web Agents

Feb 26, 2024

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

Dec 12, 2023

Exploring the Limits of ChatGPT in Software Security Applications

Dec 08, 2023

Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation

Mar 08, 2022