Arnout Van Messem

Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement

Jan 31, 2024

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

May 23, 2023

A Principled Evaluation Protocol for Comparative Investigation of the Effectiveness of DNN Classification Models on Similar-but-non-identical Datasets

Sep 05, 2022

Exact Feature Collisions in Neural Networks

May 31, 2022

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

Nov 22, 2021

Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

Jun 16, 2021

Investigating the Significance of Adversarial Attacks and Their Relation to Interpretability for Radar-based Human Activity Recognition Systems

Jan 26, 2021

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Jul 07, 2020

Perturbation Analysis of Gradient-based Adversarial Attacks

Jun 02, 2020

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

Jul 30, 2019