Jinyuan Jia

Defending Deep Regression Models against Backdoor Attacks

Nov 07, 2024

PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services

Aug 01, 2024

Certifiably Robust Image Watermark

Jul 04, 2024

Graph Neural Network Explanations are Fragile

Jun 05, 2024

Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation

Jun 02, 2024

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning

May 31, 2024

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

Apr 02, 2024

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding

Feb 24, 2024

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models

Feb 12, 2024

Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning

Jan 10, 2024