
Shigemichi Matsuzaki

CLIP-Clique: Graph-based Correspondence Matching Augmented by Vision Language Models for Object-based Global Localization

Oct 04, 2024

CLIP-Loc: Multi-modal Landmark Association for Global Localization in Object-based Maps

Feb 08, 2024

Single-Shot Global Localization via Graph-Theoretic Correspondence Matching

Jun 06, 2023

Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based Weighting for Semantic Segmentation

Mar 02, 2023

Online Refinement of a Scene Recognition Model for Mobile Robots by Observing Human's Interaction with Environments

Aug 13, 2022

Semantic-aware plant traversability estimation in plant-rich environments for agricultural mobile robots

Aug 02, 2021

Multi-source Pseudo-label Learning of Semantic Segmentation for the Scene Recognition of Agricultural Mobile Robots

Feb 12, 2021