Sravanti Addepalli

ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations

Jun 09, 2024

Distilling from Vision-Language Models for Improved OOD Generalization in Vision Tasks

Oct 12, 2023

Boosting Adversarial Robustness using Feature Level Stochastic Smoothing

Jun 10, 2023

Certified Adversarial Robustness Within Multiple Perturbation Bounds

Apr 20, 2023

DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks

Feb 28, 2023

Efficient and Effective Augmentation Strategy for Adversarial Training

Oct 27, 2022

Towards Efficient and Effective Self-Supervised Learning of Visual Representations

Oct 18, 2022

Scaling Adversarial Training to Large Perturbation Bounds

Oct 18, 2022

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks

Oct 04, 2022

DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization

Aug 19, 2022