Yuting Gao

Multi-Modal Prompt Learning on Blind Image Quality Assessment

Apr 23, 2024

RESTORE: Towards Feature Shift for Vision-Language Prompt Learning

Mar 10, 2024

Sinkhorn Distance Minimization for Knowledge Distillation

Feb 27, 2024

MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples

Dec 12, 2023

Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment

Dec 01, 2023

Towards Robust Text Retrieval with Progressive Learning

Nov 20, 2023

SoftCLIP: Softer Cross-modal Alignment Makes CLIP Stronger

Mar 30, 2023

Efficient Decoder-free Object Detection with Transformers

Jun 17, 2022

PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining

Apr 29, 2022

DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning

Apr 19, 2021