Weilin Xu

Investigating the Semantic Robustness of CLIP-based Zero-Shot Anomaly Segmentation

May 13, 2024

Robust Principles: Architectural Design Principles for Adversarially Robust CNNs

Sep 01, 2023

RobArch: Designing Robust Architectures against Adversarial Attacks

Jan 08, 2023

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models

Aug 22, 2022

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

Dec 05, 2017

Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples

May 30, 2017

DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples

Apr 17, 2017