Jianbo Yuan

Unconstrained Model Merging for Enhanced LLM Reasoning

Oct 17, 2024

BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data

Oct 01, 2024

Law of Vision Representation in MLLMs

Aug 29, 2024

An Expert is Worth One Token: Synergizing Multiple Expert LLMs as Generalist via Expert Token Routing

Mar 25, 2024

How Can LLM Guide RL? A Value-Based Approach

Feb 25, 2024

Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning

Jan 18, 2024

InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks

Jan 10, 2024

InfiMM-Eval: Complex Open-Ended Reasoning Evaluation For Multi-Modal Large Language Models

Dec 04, 2023

Improving In-Context Learning in Diffusion Models with Visual Context-Modulated Prompts

Dec 03, 2023

Self-Infilling Code Generation

Nov 29, 2023