
Jinyuan Jia

Data Free Backdoor Attacks

Dec 09, 2024

Stealing Training Graphs from Graph Neural Networks

Nov 17, 2024

Defending Deep Regression Models against Backdoor Attacks

Nov 07, 2024

PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services

Aug 01, 2024

Certifiably Robust Image Watermark

Jul 04, 2024

Graph Neural Network Explanations are Fragile

Jun 05, 2024

Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation

Jun 02, 2024

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning

May 31, 2024

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

Apr 02, 2024

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding

Feb 24, 2024