
Shagufta Mehnaz

Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage

Aug 30, 2024

Second-Order Information Matters: Revisiting Machine Unlearning for Large Language Models

Mar 13, 2024

FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering

Oct 24, 2023

FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks

Aug 10, 2023

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

Jan 23, 2022

Black-box Model Inversion Attribute Inference Attacks on Classification Models

Dec 07, 2020