Yizhi Li

Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation (Feb 07, 2025)

Bridging the Data Provenance Gap Across Text, Speech and Video (Dec 19, 2024)

MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale (Dec 06, 2024)

A Comparative Study on Reasoning Patterns of OpenAI's o1 Model (Oct 17, 2024)

MIO: A Foundation Model on Multimodal Tokens (Sep 26, 2024)

LIME-M: Less Is More for Evaluation of MLLMs (Sep 10, 2024)

Foundation Models for Music: A Survey (Aug 27, 2024)

Overview of the NLPCC 2024 Shared Task on Chinese Metaphor Generation (Aug 08, 2024)

MMRA: A Benchmark for Evaluating Multi-Granularity and Multi-Image Relational Association Capabilities in Large Visual Language Models (Aug 06, 2024)

MMRA: A Benchmark for Multi-granularity Multi-image Relational Association (Jul 24, 2024)