Haozhan Shen

GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing

Mar 16, 2025

GUI Testing Arena: A Unified Benchmark for Advancing Autonomous GUI Testing Agent

Dec 24, 2024

ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration

Nov 25, 2024

Enhancing Ultra High Resolution Remote Sensing Imagery Analysis with ImageRAG

Nov 12, 2024

GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection

Dec 22, 2023

VL-CheckList: Evaluating Pre-trained Vision-Language Models with Objects, Attributes and Relations

Jul 01, 2022