Guoxing Yang

Expert-level vision-language foundation model for real-world radiology and comprehensive evaluation

Sep 24, 2024

MoVL: Exploring Fusion Strategies for the Domain-Adaptive Application of Pretrained Models in Medical Imaging Tasks

May 13, 2024

TCM-GPT: Efficient Pre-training of Large Language Models for Domain Adaptation in Traditional Chinese Medicine

Nov 03, 2023

ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation

Jun 16, 2023

VDT: An Empirical Study on Video Diffusion with Transformers

May 22, 2023

UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling

Feb 13, 2023

WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model

Oct 27, 2021

WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training

Mar 19, 2021