Neil Gong

Byzantine-Robust Decentralized Federated Learning

Jun 18, 2024

PLeak: Prompt Leaking Attacks against Large Language Model Applications

May 14, 2024

Stable Signature is Unstable: Removing Image Watermark from Diffusion Models

May 12, 2024

Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning

May 10, 2024

A Transfer Attack to Image Watermarks

Mar 25, 2024

GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis

Feb 21, 2024

Mendata: A Framework to Purify Manipulated Training Data

Dec 03, 2023

SneakyPrompt: Jailbreaking Text-to-image Generative Models

May 20, 2023