Hiroki Mori

A Peg-in-hole Task Strategy for Holes in Concrete

Mar 29, 2024

Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions

Dec 27, 2023

Real-time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes

Sep 22, 2023

A generative framework for conversational laughter: Its 'language model' and laughter sound synthesis

Jun 06, 2023

Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use

Jun 29, 2022

Collision-free Path Planning on Arbitrary Optimization Criteria in the Latent Space through cGANs

Feb 26, 2022

Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Information for Robot Pose Prediction

Feb 21, 2022

Collision-free Path Planning in the Latent Space through cGANs

Feb 15, 2022

Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility

Dec 13, 2021

How to select and use tools?: Active Perception of Target Objects Using Multimodal Deep Learning

Jun 04, 2021