Jun Miura

Combining Ontological Knowledge and Large Language Model for User-Friendly Service Robots (Oct 22, 2024)

Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs (Mar 20, 2024)

DeepIPCv2: LiDAR-powered Robust Environmental Perception and Navigational Control for Autonomous Vehicle (Jul 31, 2023)

Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based Weighting for Semantic Segmentation (Mar 02, 2023)

Online Refinement of a Scene Recognition Model for Mobile Robots by Observing Human's Interaction with Environments (Aug 13, 2022)

DeepIPC: Deeply Integrated Perception and Control for Mobile Robot in Real Environments (Aug 02, 2022)

Fully End-to-end Autonomous Driving with Semantic Depth Cloud Mapping and Multi-Agent (Apr 12, 2022)

Semantic-aware plant traversability estimation in plant-rich environments for agricultural mobile robots (Aug 02, 2021)

Multi-task Learning with Attention for End-to-end Autonomous Driving (Apr 21, 2021)

Multi-source Pseudo-label Learning of Semantic Segmentation for the Scene Recognition of Agricultural Mobile Robots (Feb 12, 2021)