
Ashish Ramayee Asokan

DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets

Apr 03, 2024

Aligning Non-Causal Factors for Transformer-Based Source-Free Domain Adaptation

Nov 27, 2023

Distilling from Vision-Language Models for Improved OOD Generalization in Vision Tasks

Oct 12, 2023

Domain-Specificity Inducing Transformers for Source-Free Domain Adaptation

Aug 27, 2023

Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors

Feb 02, 2022