Utku Ozbulak

Identifying Critical Tokens for Accurate Predictions in Transformer-based Medical Imaging Models

Jan 26, 2025

Self-supervised Benchmark Lottery on ImageNet: Do Marginal Improvements Translate to Improvements on Similar Datasets?

Jan 26, 2025

Color Flow Imaging Microscopy Improves Identification of Stress Sources of Protein Aggregates in Biopharmaceuticals

Jan 26, 2025

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

May 23, 2023

Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data

Dec 12, 2022

Exact Feature Collisions in Neural Networks

May 31, 2022

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

Nov 22, 2021

Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

Jun 16, 2021

Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems

Jan 26, 2021

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Jul 07, 2020