Benjamin Zi Hao Zhao

Preempting Text Sanitization Utility in Resource-Constrained Privacy-Preserving LLM Interactions

Nov 18, 2024

On the Robustness of Malware Detectors to Adversarial Samples

Aug 05, 2024

Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness

Apr 27, 2024

Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning

Apr 27, 2024

Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories

Apr 06, 2023

DDoD: Dual Denial of Decision Attacks on Human-AI Teams

Dec 07, 2022

Unintended Memorization and Timing Attacks in Named Entity Recognition Models

Nov 04, 2022

MANDERA: Malicious Node Detection in Federated Learning via Ranking

Oct 22, 2021

Hidden Backdoors in Human-Centric Language Models

May 01, 2021

On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models

Mar 12, 2021