Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context

Dec 20, 2024
