
Yanjun Zhang

UnlearnShield: Shielding Forgotten Privacy against Unlearning Inversion (Jan 28, 2026)

Beyond Denial-of-Service: The Puppeteer's Attack for Fine-Grained Control in Ranking-Based Federated Learning (Jan 21, 2026)

Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models (Jan 17, 2026)

Dual-View Inference Attack: Machine Unlearning Amplifies Privacy Exposure (Dec 18, 2025)

Character-Level Perturbations Disrupt LLM Watermarks (Sep 11, 2025)

Towards Reliable Forgetting: A Survey on Machine Unlearning Verification, Challenges, and Future Directions (Jun 18, 2025)

When Better Features Mean Greater Risks: The Performance-Privacy Trade-Off in Contrastive Learning (Jun 06, 2025)

Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach (May 22, 2025)

Exploring Gradient-Guided Masked Language Model to Detect Textual Adversarial Attacks (Apr 08, 2025)

Test-Time Backdoor Detection for Object Detection Models (Mar 19, 2025)