Xingwei Qu

Aligning Instruction Tuning with Pre-training

Jan 16, 2025

Observing Micromotives and Macrobehavior of Large Language Models

Dec 10, 2024

Can MLLMs Understand the Deep Implication Behind Chinese Images?

Oct 17, 2024

LIME-M: Less Is More for Evaluation of MLLMs

Sep 10, 2024

Foundation Models for Music: A Survey

Aug 27, 2024

I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm

Aug 15, 2024

Overview of the NLPCC 2024 Shared Task on Chinese Metaphor Generation

Aug 08, 2024

MMRA: A Benchmark for Evaluating Multi-Granularity and Multi-Image Relational Association Capabilities in Large Visual Language Models

Aug 06, 2024

MMRA: A Benchmark for Multi-granularity Multi-image Relational Association

Jul 24, 2024

GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models

Jun 20, 2024