Amir Houmansadr

Data Extraction Attacks in Retrieval-Augmented Generation via Backdoors

Nov 03, 2024

Bias Similarity Across Large Language Models

Oct 15, 2024

Injecting Bias in Text-To-Image Models via Composite-Trigger Backdoors

Jun 21, 2024

PostMark: A Robust Blackbox Watermark for Large Language Models

Jun 20, 2024

MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification

Jun 09, 2024

OSLO: One-Shot Label-Only Membership Inference Attacks

May 27, 2024

Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images

Apr 21, 2024

Fake or Compromised? Making Sense of Malicious Clients in Federated Learning

Mar 10, 2024

SoK: Challenges and Opportunities in Federated Unlearning

Mar 04, 2024

Diffence: Fencing Membership Privacy With Diffusion Models

Dec 07, 2023