Mingli Zhu

Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack

May 30, 2024

BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning

Jan 26, 2024

Enhanced Few-Shot Class-Incremental Learning via Ensemble Models

Jan 14, 2024

Defenses in Adversarial Machine Learning: A Survey

Dec 13, 2023

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

Nov 20, 2023

Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features

Jun 29, 2023

Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization

Apr 24, 2023

Rethinking Data Augmentation in Knowledge Distillation for Object Detection

Sep 20, 2022