Shouling Ji

UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning

Jan 26, 2025

Defending against Adversarial Malware Attacks on ML-based Android Malware Detection Systems

Jan 23, 2025

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks

Jan 16, 2025

Fine-tuning is Not Fine: Mitigating Backdoor Attacks in GNNs with Limited Clean Data

Jan 10, 2025

AEIOU: A Unified Defense Framework against NSFW Prompts in Text-to-Image Models

Dec 24, 2024

WaterPark: A Robustness Assessment of Language Model Watermarking

Nov 20, 2024

FLMarket: Enabling Privacy-preserved Pre-training Data Pricing for Federated Learning

Nov 18, 2024

Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents

Nov 14, 2024

"No Matter What You Do!": Mitigating Backdoor Attacks in Graph Neural Networks

Oct 02, 2024

CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models

Sep 02, 2024