Li Xiong

Exposing Vulnerabilities in Explanation for Time Series Classifiers via Dual-Target Attacks

Feb 02, 2026

BicKD: Bilateral Contrastive Knowledge Distillation

Feb 01, 2026

Collaborative Reconstruction and Repair for Multi-class Industrial Anomaly Detection

Dec 12, 2025

FusionDP: Foundation Model-Assisted Differentially Private Learning for Partially Sensitive Features

Nov 05, 2025

Search is All You Need for Few-shot Anomaly Detection

Apr 16, 2025

Sharpness-Aware Parameter Selection for Machine Unlearning

Apr 08, 2025

Node-level Contrastive Unlearning on Graph Neural Networks

Mar 04, 2025

Tokens for Learning, Tokens for Unlearning: Mitigating Membership Inference Attacks in Large Language Models via Dual-Purpose Training

Feb 27, 2025

Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models

Feb 17, 2025

HARBOR: Exploring Persona Dynamics in Multi-Agent Competition

Feb 17, 2025