
Junying Chen

RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented Instructions

Dec 31, 2024

On the Compositional Generalization of Multimodal LLMs for Medical Imaging

Dec 28, 2024

HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs

Dec 25, 2024

Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion

Dec 16, 2024

CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis

Jul 18, 2024

HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale

Jun 27, 2024

LLMs for Doctors: Leveraging Medical LLMs to Assist Doctors, Not Replace Them

Jun 26, 2024

LLMs Could Autonomously Learn Without External Supervision

Jun 02, 2024

ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model

Feb 18, 2024

MLLM-Bench, Evaluating Multi-modal LLMs using GPT-4V

Nov 23, 2023