
Runpeng Geng

PISanitizer: Preventing Prompt Injection to Long-Context LLMs via Prompt Sanitization

Nov 13, 2025

UniC-RAG: Universal Knowledge Corruption Attacks to Retrieval-Augmented Generation

Aug 26, 2025

TracLLM: A Generic Framework for Attributing Long Context LLMs

Jun 06, 2025

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models

Feb 12, 2024

Prompt Injection Attacks and Defenses in LLM-Integrated Applications

Oct 19, 2023