Zhengbo Wang

Towards Compatible Fine-tuning for Vision-Language Model Updates

Dec 30, 2024

LoRA-Pro: Are Low-Rank Adapters Properly Optimized?

Jul 25, 2024

Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models

Feb 06, 2024

A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation

Feb 06, 2024

Self-training solutions for the ICCV 2023 GeoNet Challenge

Nov 28, 2023

Towards Realistic Unsupervised Fine-tuning with CLIP

Aug 24, 2023

Improving Zero-Shot Generalization for CLIP with Synthesized Prompts

Jul 14, 2023

Exploiting Semantic Attributes for Transductive Zero-Shot Learning

Mar 17, 2023