
Paul Gavrikov

How Do Training Methods Influence the Utilization of Vision Models?

Oct 18, 2024

GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models

Oct 08, 2024

Can Biases in ImageNet Models Explain Generalization?

Apr 01, 2024

Are Vision Language Models Texture or Shape Biased and Can We Steer Them?

Mar 14, 2024

Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers

Aug 24, 2023

On the Interplay of Convolutional Padding and Adversarial Robustness

Aug 12, 2023

An Extended Study of Human-like Behavior under Adversarial Training

Mar 22, 2023

Rethinking 1x1 Convolutions: Can we train CNNs with Frozen Random Filters?

Jan 26, 2023

Does Medical Imaging learn different Convolution Filters?

Oct 25, 2022

Robust Models are less Over-Confident

Oct 12, 2022