Peizhuo Lv

PersonaMark: Personalized LLM watermarking for model protection and user attribution
Sep 15, 2024

MEA-Defender: A Robust Watermark against Model Extraction Attack
Jan 26, 2024

DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
Dec 20, 2023

A Novel Membership Inference Attack against Dynamic Neural Networks by Utilizing Policy Networks Information
Oct 17, 2022

SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning
Sep 08, 2022

Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain
Jul 09, 2022

DBIA: Data-free Backdoor Injection Attack against Transformer Networks
Nov 22, 2021

HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks
Mar 25, 2021