Nilanjana Das

Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context

Dec 20, 2024

Human-Interpretable Adversarial Prompt Attack on Large Language Models with Situational Context

Jul 25, 2024

Change Management using Generative Modeling on Digital Twins

Sep 21, 2023