
Alina Oprea

Adversarial Inception for Bounded Backdoor Poisoning in Deep Reinforcement Learning
Oct 21, 2024

Model-agnostic clean-label backdoor mitigation in cybersecurity environments
Jul 11, 2024

Phantom: General Trigger Attacks on Retrieval Augmented Language Generation
May 30, 2024

SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
May 30, 2024

User Inference Attacks on Large Language Models
Oct 13, 2023

Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
Oct 05, 2023

Dropout Attacks
Sep 04, 2023

Poisoning Network Flow Classifiers
Jun 02, 2023

TMI! Finetuned Models Leak Private Information from their Pretraining Data
Jun 01, 2023

Unleashing the Power of Randomization in Auditing Differentially Private ML
May 29, 2023