Hongqi Wang

ViRefSAM: Visual Reference-Guided Segment Anything Model for Remote Sensing Segmentation

Jul 03, 2025

AgMTR: Agent Mining Transformer for Few-shot Segmentation in Remote Sensing

Sep 26, 2024

Prompt-and-Transfer: Dynamic Class-aware Enhancement for Few-shot Segmentation

Sep 16, 2024

Not Just Learning from Others but Relying on Yourself: A New Perspective on Few-Shot Segmentation in Remote Sensing

Oct 19, 2023

RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework

Sep 16, 2023

Learning to Evaluate Performance of Multi-modal Semantic Localization

Sep 19, 2022

Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval

Apr 21, 2022

Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information

Apr 21, 2022

Oriented Objects as pairs of Middle Lines

Dec 24, 2019

A novel transfer learning method based on common space mapping and weighted domain matching

Aug 16, 2016