Qingqing Ye

Machine Unlearning in Low-Dimensional Feature Subspace

Jan 30, 2026

FIT: Defying Catastrophic Forgetting in Continual LLM Unlearning

Jan 29, 2026

On the Adversarial Robustness of Large Vision-Language Models under Visual Token Compression

Jan 29, 2026

Diffusion-Guided Backdoor Attacks in Real-World Reinforcement Learning

Jan 20, 2026

From Domains to Instances: Dual-Granularity Data Synthesis for LLM Unlearning

Jan 07, 2026

Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks

Nov 16, 2025

SEBA: Sample-Efficient Black-Box Attacks on Visual Reinforcement Learning

Nov 12, 2025

Reminiscence Attack on Residuals: Exploiting Approximate Machine Unlearning for Privacy

Jul 28, 2025

Unlearning Isn't Deletion: Investigating Reversibility of Machine Unlearning in LLMs

May 22, 2025

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

May 19, 2025