Utku Ozbulak

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

May 23, 2023

Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data

Dec 12, 2022

Exact Feature Collisions in Neural Networks

May 31, 2022

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

Nov 22, 2021

Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

Jun 16, 2021

Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems

Jan 26, 2021

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Jul 07, 2020

Perturbation Analysis of Gradient-based Adversarial Attacks

Jun 02, 2020

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

Jul 30, 2019

Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding

Jul 30, 2019