
Shibo Jie

Token Compensator: Altering Inference Cost of Vision Transformer without Re-Tuning

Aug 13, 2024

Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning

May 09, 2024

Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

Jul 31, 2023

Detachedly Learn a Classifier for Class-Incremental Learning

Feb 23, 2023

FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer

Dec 06, 2022

Convolutional Bypasses Are Better Vision Transformer Adapters

Jul 18, 2022

Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework

May 19, 2022

Alleviating Representational Shift for Continual Fine-tuning

Apr 22, 2022