Abstract: We propose a robotic manipulation system that can pivot objects on a surface using vision, wrist force, and tactile sensing. We aim to control the rotation of an object around the grip point of a parallel gripper by allowing rotational slip while maintaining a desired wrist force profile. Our approach runs an end-effector position controller and a gripper width controller concurrently in a closed loop. The position controller maintains a desired force using vision and wrist force sensing. The gripper controller uses tactile sensing to keep the grip firm enough to prevent translational slip, but loose enough to induce rotational slip. Our sensor-based control approach relies on matching a desired force profile, derived from the object's dimensions and weight, and on vision-based monitoring of the object pose; the gripper controller tightens the grip when the tactile sensors detect translational slip. Experimental results, in which the robot was tasked with rotating cuboid objects by 90 degrees, show that the multi-modal pivoting approach rotated the objects without causing lift or slip and was more energy-efficient than using a single sensor modality or pick-and-place. While our work demonstrates the benefit of multi-modal sensing for the pivoting task, further work is needed to generalize the approach to arbitrary objects.
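As an illustration of the concurrent control loop described in this abstract, the following is a minimal sketch, not the authors' implementation. The robot and sensor interfaces (move_ee_relative, vertical_force, translational_slip_detected, etc.), the gains, and the force-profile formula are all hypothetical placeholders assumed for the example.

```python
# Sketch of the two concurrent controllers: a position controller tracking a
# desired wrist force, and a gripper width controller regulating slip.
# All interfaces and constants below are hypothetical, for illustration only.
import numpy as np

KP_FORCE = 0.002    # m per N: maps force error to a small end-effector motion
GRIP_STEP = 0.0005  # m: gripper width change per control cycle

def desired_force(obj_mass, obj_angle):
    """Assumed desired vertical wrist force for a cuboid pivoting on its edge,
    as a function of the object's weight and current tilt angle."""
    g = 9.81
    return 0.5 * obj_mass * g * np.cos(obj_angle)

def pivot_step(robot, wrist_ft, tactile, pose_tracker, obj_mass):
    angle = pose_tracker.object_angle()   # vision: current object tilt
    f_meas = wrist_ft.vertical_force()    # wrist force/torque reading
    f_des = desired_force(obj_mass, angle)

    # Position controller: move the end effector to track the desired force.
    robot.move_ee_relative(dz=KP_FORCE * (f_des - f_meas))

    # Gripper width controller: tighten only when tactile sensing reports
    # translational slip; otherwise keep the grip loose enough that the
    # object can rotate about the grip point.
    if tactile.translational_slip_detected():
        robot.set_gripper_width(robot.gripper_width() - GRIP_STEP)
    elif not tactile.rotational_slip_detected():
        robot.set_gripper_width(robot.gripper_width() + GRIP_STEP)
```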
Abstract: We study human-robot handovers in a naturalistic collaboration scenario, where a mobile manipulator robot assists a person during a crafting session by providing and retrieving objects used for wooden-piece assembly (functional activities) and painting (creative activities). We collect quantitative and qualitative data from 20 participants in a Wizard-of-Oz study, generating the Functional And Creative Tasks Human-Robot Collaboration dataset (the FACT HRC dataset), which is available to the research community. This work illustrates how social cues and task context inform the temporal-spatial coordination of human-robot handovers, and how human-robot collaboration is shaped by, and in turn influences, people's functional and creative activities.
Abstract: In this paper, we present the implementation details of a Virtual Reality (VR)-based teleoperation interface for moving a robotic manipulator. We propose an iterative human-in-the-loop design in which the user sets the next task-space waypoint for the robot's end effector and executes the action on the physical robot before setting the following waypoint. Information from the robot's surroundings is provided to the user in two forms: as a point cloud in 3D space and as a video stream projected on a virtual wall. The feasibility of the selected end-effector pose is communicated to the user by the color of the virtual end effector. The interface is demonstrated to work successfully in a pick-and-place scenario; however, our trials showed that the fluency of the interaction and the autonomy level of the system can still be increased.
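To make the feasibility-coloring feedback loop concrete, here is a minimal sketch under assumed interfaces; the IK solver, VR scene API, and color convention are hypothetical placeholders and not the interface's actual implementation.

```python
# Sketch of the human-in-the-loop waypoint step: color the virtual end
# effector by pose feasibility, then execute only on user confirmation.
# All objects (vr_scene, ik_solver, robot) are hypothetical placeholders.
GREEN = (0.0, 1.0, 0.0)  # assumed color for a reachable pose
RED = (1.0, 0.0, 0.0)    # assumed color for an infeasible pose

def update_virtual_ee(vr_scene, ik_solver, target_pose):
    """Color the virtual end effector according to whether the selected
    task-space pose has an inverse-kinematics solution."""
    solution = ik_solver.solve(target_pose)  # assumed to return None if infeasible
    vr_scene.set_ee_color(GREEN if solution is not None else RED)
    return solution

def execute_waypoint(robot, vr_scene, ik_solver, target_pose):
    """One iteration of the loop: preview feasibility, then execute the
    action on the physical robot before the next waypoint is set."""
    joints = update_virtual_ee(vr_scene, ik_solver, target_pose)
    if joints is not None and vr_scene.user_confirmed():
        robot.move_to_joint_positions(joints)
```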