Liam Fowl

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Mar 25, 2024

Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting
Nov 11, 2022

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Oct 17, 2022

Poisons that are learned faster are more effective
Apr 19, 2022

Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective
Mar 15, 2022

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Feb 01, 2022

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Jan 29, 2022

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Jan 03, 2022

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Oct 25, 2021

Adversarial Examples Make Strong Poisons
Jun 21, 2021