Data Poisoning


Data poisoning is an attack in which an adversary manipulates a model's training data, by flipping labels, injecting crafted examples, or planting backdoor triggers, in order to degrade the model's performance or embed targeted misbehavior.
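As a minimal illustration of the injection variant, the sketch below (a hypothetical toy setup, not taken from any paper listed here) trains a nearest-centroid classifier on a clean 1-D dataset and on the same dataset after an attacker injects far-out points with the wrong label, shifting a class centroid and collapsing test accuracy:

```python
import random

random.seed(0)

def make_data(n=200):
    # Two 1-D Gaussian clusters: class 0 around -1.0, class 1 around +1.0.
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = random.gauss(-1.0 if y == 0 else 1.0, 0.5)
        data.append((x, y))
    return data

def inject_poison(data, n_poison=30):
    # Injection attack: add outliers at x = +10 mislabeled as class 0,
    # dragging the class-0 centroid past the class-1 centroid.
    return data + [(10.0, 0)] * n_poison

def train_centroid(data):
    # Nearest-centroid classifier: store the mean x of each class.
    return {
        label: sum(x for x, y in data if y == label)
               / sum(1 for _, y in data if y == label)
        for label in (0, 1)
    }

def accuracy(means, data):
    correct = sum(
        1 for x, y in data
        if min(means, key=lambda c: abs(x - means[c])) == y
    )
    return correct / len(data)

train, test = make_data(), make_data()
clean_model = train_centroid(train)
dirty_model = train_centroid(inject_poison(train))

print(f"clean accuracy:    {accuracy(clean_model, test):.2f}")
print(f"poisoned accuracy: {accuracy(dirty_model, test):.2f}")
```

With the seed fixed, the poisoned model's accuracy drops sharply because both centroids end up on the same side of the true class boundary; real attacks surveyed in the papers below use the same principle against far larger models.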

Safety, Security, and Cognitive Risks in World Models

Apr 01, 2026

FL-PBM: Pre-Training Backdoor Mitigation for Federated Learning

Mar 30, 2026

Beyond Corner Patches: Semantics-Aware Backdoor Attack in Federated Learning

Mar 31, 2026

Learning Diagnostic Reasoning for Decision Support in Toxicology

Mar 31, 2026

FedFG: Privacy-Preserving and Robust Federated Learning via Flow-Matching Generation

Mar 30, 2026

Hidden Ads: Behavior Triggered Semantic Backdoors for Advertisement Injection in Vision Language Models

Mar 29, 2026

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Mar 26, 2026

DP^2-VL: Private Photo Dataset Protection by Data Poisoning for Vision-Language Models

Mar 25, 2026

AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective

Mar 25, 2026

Towards Secure Retrieval-Augmented Generation: A Comprehensive Review of Threats, Defenses and Benchmarks

Mar 23, 2026