
Quanzeng You

DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation

Oct 24, 2024

BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data

Oct 01, 2024

Law of Vision Representation in MLLMs

Aug 29, 2024

Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model

May 28, 2024

ViTAR: Vision Transformer with Any Resolution

Mar 28, 2024

InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding

Mar 03, 2024

Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning

Jan 18, 2024

COCO is "ALL'' You Need for Visual Instruction Fine-tuning

Jan 17, 2024

InfiMM-Eval: Complex Open-Ended Reasoning Evaluation For Multi-Modal Large Language Models

Dec 04, 2023

Improving In-Context Learning in Diffusion Models with Visual Context-Modulated Prompts

Dec 03, 2023