G. Edward Suh

Architecting Secure AI Agents: Perspectives on System-Level Defenses Against Indirect Prompt Injection Attacks

Mar 31, 2026

SideQuest: Model-Driven KV Cache Management for Long-Horizon Agentic Reasoning

Feb 26, 2026

Privasis: Synthesizing the Largest "Public" Private Dataset from Scratch

Feb 03, 2026

ReasoningBomb: A Stealthy Denial-of-Service Attack by Inducing Pathologically Long Reasoning in Large Reasoning Models

Jan 29, 2026

ReasAlign: Reasoning Enhanced Safety Alignment against Prompt Injection Attack

Jan 15, 2026

Machine Learning with Privacy for Protected Attributes

Jun 24, 2025

How much do language models memorize?

May 30, 2025

Stronger Enforcement of Instruction Hierarchy via Augmented Intermediate Representations

May 25, 2025

Leveraging ASIC AI Chips for Homomorphic Encryption

Jan 13, 2025

Sequence-Level Analysis of Leakage Risk of Training Data in Large Language Models

Dec 15, 2024