Xueluan Gong

Wuhan University

Beyond Max Tokens: Stealthy Resource Amplification via Tool Calling Chains in LLM Agents

Jan 16, 2026

Evaluating and Mitigating LLM-as-a-judge Bias in Communication Systems

Oct 14, 2025

Lethe: Purifying Backdoored Large Language Models with Knowledge Dilution

Aug 28, 2025

TrojanDam: Detection-Free Backdoor Defense in Federated Learning through Proactive Model Robustification utilizing OOD Data

Apr 22, 2025

ARMOR: Shielding Unlearnable Examples against Data Augmentation

Jan 15, 2025

A Survey on Facial Image Privacy Preservation in Cloud-Based Services

Jan 15, 2025

An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers

Dec 09, 2024

Megatron: Evasive Clean-Label Backdoor Attacks against Vision Transformer

Dec 06, 2024

Neutralizing Backdoors through Information Conflicts for Large Language Models

Nov 27, 2024

Hidden Data Privacy Breaches in Federated Learning

Nov 27, 2024