
Saurav Jha

GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models

Oct 08, 2024

Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models

Oct 02, 2024

On the relevance of pre-neural approaches in natural language processing pedagogy

May 16, 2024

CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models

Mar 28, 2024

NPCL: Neural Processes for Uncertainty-Aware Continual Learning

Oct 30, 2023

Distilled Reverse Attention Network for Open-world Compositional Zero-Shot Learning

Mar 01, 2023

Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

Mar 28, 2022

Continual Learning in Sensor-based Human Activity Recognition: an Empirical Benchmark Analysis

Apr 19, 2021

Continual Learning in Human Activity Recognition: an Empirical Analysis of Regularization

Jul 06, 2020

Neural Machine Translation based Word Transduction Mechanisms for Low-Resource Languages

Nov 21, 2018