PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts

Jun 13, 2023
