Sheng Wen

Query-Efficient Video Adversarial Attack with Stylized Logo

Aug 22, 2024

Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems

Jul 11, 2024

AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways

Jun 04, 2024

The "Beatrix'' Resurrections: Robust Backdoor Detection via Gram Matrices

Sep 26, 2022

StyleFool: Fooling Video Classification Systems via Style Transfer

Mar 30, 2022

DeFuzz: Deep Learning Guided Directed Fuzzing

Oct 23, 2020

Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models

Oct 14, 2019

Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples

Feb 06, 2019

Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks

Jul 03, 2018