
Yuelin Bai

MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale

Dec 06, 2024

Teach Multimodal LLMs to Comprehend Electrocardiographic Images

Oct 21, 2024

Can MLLMs Understand the Deep Implication Behind Chinese Images?

Oct 17, 2024

DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model

Aug 01, 2024

II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models

Jun 11, 2024

Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training

May 31, 2024

MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series

May 29, 2024

MuPT: A Generative Symbolic Music Pretrained Transformer

Apr 10, 2024

COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning

Mar 26, 2024

MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property

Feb 26, 2024