Trishul Chilimbi

DreamBlend: Advancing Personalized Fine-tuning of Text-to-Image Diffusion Models

Nov 28, 2024

Evolutionary Contrastive Distillation for Language Model Alignment

Oct 10, 2024

X-Former: Unifying Contrastive and Reconstruction Learning for MLLMs

Jul 18, 2024

Open Vocabulary Multi-Label Video Classification

Jul 12, 2024

VidLA: Video-Language Alignment at Scale

Mar 21, 2024

Robust Multi-Task Learning with Excess Risks

Feb 14, 2024

Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications

Jun 05, 2023

Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

Mar 10, 2023

SMILE: Scaling Mixture-of-Experts with Efficient Bi-level Routing

Dec 10, 2022

MICO: Selective Search with Mutual Information Co-training

Sep 09, 2022