Neil Gong

GraphRAG under Fire

Jan 23, 2025

GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models

Jan 19, 2025

AI-generated Image Detection: Passive or Watermark?

Nov 20, 2024

Byzantine-Robust Decentralized Federated Learning

Jun 18, 2024

PLeak: Prompt Leaking Attacks against Large Language Model Applications

May 14, 2024

Stable Signature is Unstable: Removing Image Watermark from Diffusion Models

May 12, 2024

Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning

May 10, 2024

A Transfer Attack to Image Watermarks

Mar 25, 2024

GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis

Feb 21, 2024

Mendata: A Framework to Purify Manipulated Training Data

Dec 03, 2023