Chongyang Gao

AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality

Oct 14, 2024

Practical Unlearning for Large Language Models

Jul 14, 2024

Memory-Efficient Sparse Pyramid Attention Networks for Whole Slide Image Analysis

Jun 13, 2024

Exploring the Distinctiveness and Fidelity of the Descriptions Generated by Large Vision-Language Models

Apr 26, 2024

Higher Layers Need More LoRA Experts

Feb 13, 2024

How to Configure Good In-Context Sequence for Visual Question Answering

Dec 04, 2023

Improving Representation Learning for Histopathologic Images with Cluster Constraints

Oct 18, 2023

Bootstrapping Vision-Language Learning with Decoupled Language Pre-training

Jul 13, 2023

Knowledge from Large-Scale Protein Contact Prediction Models Can Be Transferred to the Data-Scarce RNA Contact Prediction Task

Feb 13, 2023

Learning to Collocate Visual-Linguistic Neural Modules for Image Captioning

Oct 04, 2022