Yiming Liang

YuE: Scaling Open Foundation Models for Long-Form Music Generation

Mar 11, 2025

Keeping Representation Similarity in Finetuning for Medical Image Analysis

Mar 10, 2025

SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines

Feb 20, 2025

EfficientLLM: Scalable Pruning-Aware Pretraining for Architecture-Agnostic Edge Language Models

Feb 10, 2025

Aligning Instruction Tuning with Pre-training

Jan 16, 2025

A Progressive Transformer for Unifying Binary Code Embedding and Knowledge Transfer

Dec 15, 2024

Can MLLMs Understand the Deep Implication Behind Chinese Images?

Oct 17, 2024

I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm

Aug 15, 2024

MMRA: A Benchmark for Evaluating Multi-Granularity and Multi-Image Relational Association Capabilities in Large Visual Language Models

Aug 06, 2024

MMRA: A Benchmark for Multi-granularity Multi-image Relational Association

Jul 24, 2024