Ruibo Liu

MMRA: A Benchmark for Evaluating Multi-Granularity and Multi-Image Relational Association Capabilities in Large Visual Language Models

Aug 06, 2024

MMRA: A Benchmark for Multi-granularity Multi-image Relational Association

Jul 24, 2024

II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models

Jun 11, 2024

MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series

May 29, 2024

Best Practices and Lessons Learned on Synthetic Data for Language Models

Apr 11, 2024

CodeEditorBench: Evaluating Code Editing Capability of Large Language Models

Apr 06, 2024

Long-form factuality in large language models

Apr 03, 2024

Reconfigurable Robot Identification from Motion Data

Mar 15, 2024

Gemma: Open Models Based on Gemini Research and Technology

Mar 13, 2024

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024