
Shashank Kotyan

Linking Robustness and Generalization: A k* Distribution Analysis of Concept Clustering in Latent Space for Vision Models

Aug 17, 2024

EvoSeed: Unveiling the Threat on Deep Neural Networks with Real-World Illusions

Feb 07, 2024

k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis

Dec 07, 2023

The Challenges of Image Generation Models in Generating Multi-Component Images

Nov 22, 2023

Towards Improving Robustness Against Common Corruptions using Mixture of Class Specific Experts

Nov 16, 2023

Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning

Nov 14, 2023

Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation

Nov 01, 2023

A reading survey on adversarial machine learning: Adversarial attacks and their understanding

Aug 07, 2023

Deep neural network loses attention to adversarial images

Jun 10, 2021

Evolving Robust Neural Architectures to Defend from Adversarial Attacks

Jun 27, 2019