
Tetsunari Inamura

Latent Representation in Human-Robot Interaction with Explicit Consideration of Periodic Dynamics

Jun 16, 2021

SIGVerse: A cloud-based VR platform for research on social and embodied human-robot interaction

May 02, 2020

Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

Feb 18, 2020

Learning multimodal representations for sample-efficient recognition of human actions

Mar 06, 2019

Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

Jan 04, 2019

Online Spatial Concept and Lexical Acquisition with Simultaneous Localization and Mapping

Mar 09, 2018

Bayesian Body Schema Estimation using Tactile Information obtained through Coordinated Random Movements

Dec 01, 2016

Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

May 07, 2016