Abstract: Magnetic robotics obviate the physical connection between actuators and end effectors, enabling ultra-minimally invasive surgeries. Although such wireless actuation is highly advantageous in medical applications, the trade-off between the applied force and the dimensions of a miniature magnetic end effector has been one of the main obstacles to practical use in clinically relevant conditions. This trade-off is crucial for applications that require in-tissue penetration (e.g., needle access, biopsy, and suturing). To increase the forces of such miniature magnetic end effectors to practically useful levels, we propose an impact-force-based suturing needle capable of penetrating in-vitro and ex-vivo samples with 3-DoF planar freedom (planar positioning and in-plane orientation). The optimized design is a custom-built 12 G needle that generates a 1.16 N penetration force, 56 times that of a same-sized magnetic counterpart without the impact mechanism. Because the fast-moving permanent magnet is confined within the needle's tubular structure, the motion of the needle as a whole remains slow and easily controllable. The achieved force lies within the range of tissue penetration thresholds, allowing the needle to penetrate tissue and follow a suturing pattern in a teleoperated fashion. We demonstrate in-vitro needle penetration into a bacon strip and successful suturing of a gauze mesh onto an agar gel, mimicking a hernia repair procedure.
Abstract: Real-time visual localization of needles is necessary for various surgical applications, including surgical automation and visual feedback. In this study, we investigate localization and autonomous robotic control of needles in the context of our magneto-suturing system. Our system holds the potential for surgical manipulation with the benefits of minimal invasiveness and reduced patient side effects. However, the non-linear magnetic fields produce unintuitive forces and demand delicate position-based control that exceeds the capabilities of direct human manipulation, making automatic needle localization a necessity. Our localization method combines neural-network-based segmentation with classical techniques, consistently locating the needle with 0.73 mm RMS error in clean environments and 2.72 mm RMS error in challenging environments with blood and occlusion; averaged over all environments used in the experiments, the RMS error is 2.16 mm. We combine this localization method with our closed-loop feedback control system to demonstrate its further applicability to autonomous control. Our needle is able to follow a running suture path in four environments: (1) no blood, no tissue; (2) heavy blood, no tissue; (3) no blood, with tissue; and (4) heavy blood, with tissue. The tip position tracking error ranges from 2.6 mm to 3.7 mm RMS, opening the door toward autonomous suturing tasks.
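The accuracy figures above are root-mean-square (RMS) errors over per-frame Euclidean distances between estimated and ground-truth needle positions. As a minimal sketch of that metric (the function name and the example data are illustrative, not from the paper):

```python
import numpy as np

def rms_error(predicted, ground_truth):
    """RMS of per-frame Euclidean errors between predicted and true positions (e.g., in mm)."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    per_frame = np.linalg.norm(predicted - ground_truth, axis=1)  # Euclidean error per frame
    return float(np.sqrt(np.mean(per_frame ** 2)))

# Hypothetical example: three 2D tip estimates vs. ground truth, in mm
pred = [[10.0, 5.0], [12.0, 6.0], [11.0, 7.5]]
true = [[10.5, 5.0], [12.0, 6.5], [11.0, 7.0]]
err = rms_error(pred, true)
```

Note that the overall 2.16 mm figure is an average across environments, not a pooled RMS over all frames, so it need not equal the RMS of the combined data.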
Abstract: As surgical robots become more common, automating away some of the burden of complex direct human operation becomes ever more feasible. Model-free reinforcement learning (RL) is a promising direction toward generalizable automated surgical performance, but progress has been slowed by the lack of efficient and realistic learning environments. In this paper, we describe adding reinforcement learning support to the da Vinci Skill Simulator, a training simulator used around the world to let surgeons learn and rehearse technical skills. We successfully teach an RL-based agent to perform sub-tasks in the simulator environment using either image or state data. To the best of our knowledge, this is the first time an RL-based agent has been taught from visual data in a surgical robotics environment. We also tackle the sample inefficiency of RL with a simple-to-implement scheme we term hybrid-batch learning (HBL), which effectively adds a second, long-term replay buffer to the Q-learning process. HBL additionally lets us bootstrap learning from images using data collected during the easier task of learning from state. We show that HBL decreases our learning times significantly.
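The core idea of HBL, a second long-term replay buffer alongside the standard short-term one, can be sketched as follows. This is an illustrative interpretation only; the capacities, mixing ratio, and class interface are assumptions, not the paper's implementation:

```python
import random
from collections import deque

class HybridReplayBuffer:
    """Sketch of hybrid-batch learning (HBL): a small recent buffer plus a large
    long-term buffer; each minibatch mixes samples from both. All parameters
    here are illustrative assumptions."""

    def __init__(self, short_capacity=1_000, long_capacity=100_000, mix=0.5):
        self.short = deque(maxlen=short_capacity)   # recent transitions
        self.long = deque(maxlen=long_capacity)     # long-term memory
        self.mix = mix                              # fraction of each batch from the short buffer

    def add(self, transition):
        # New experience enters both buffers; the short one evicts quickly.
        self.short.append(transition)
        self.long.append(transition)

    def seed_long_term(self, transitions):
        """Bootstrap the long-term buffer, e.g., with data collected while
        learning from state, before training from images."""
        self.long.extend(transitions)

    def sample(self, batch_size):
        n_short = min(int(batch_size * self.mix), len(self.short))
        n_long = min(batch_size - n_short, len(self.long))
        return random.sample(list(self.short), n_short) + random.sample(list(self.long), n_long)
```

A Q-learning loop would call `sample()` wherever it previously drew a minibatch from a single replay buffer.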
Abstract: Prospection, the act of predicting the consequences of many possible futures, is intrinsic to human planning and action, and may even be at the root of consciousness. Surprisingly, this idea has been explored comparatively little in robotics. In this work, we propose a neural network architecture and an associated planning algorithm that (1) learns a representation of the world useful for generating prospective futures after the application of high-level actions, (2) uses this generative model to simulate the results of sequences of high-level actions in a variety of environments, and (3) uses the same representation to evaluate these actions and perform tree search for a sequence of high-level actions in a new environment. Models are trained via imitation learning on a variety of domains, including navigation, pick-and-place, and a surgical robotics task. Our approach lets us visualize intermediate motion goals and learn to plan complex activity from visual information.
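The planning scheme described above, simulating action sequences with a learned generative model and scoring them via tree search, can be sketched generically. The `predict` and `evaluate` callables stand in for the paper's learned transition model and evaluation head; this greedy depth-limited search is an illustrative stand-in, not the authors' exact algorithm:

```python
import math

def tree_search(state, actions, predict, evaluate, depth=3):
    """Depth-limited tree search over a learned model (illustrative sketch).

    predict(state, action) -> predicted next (latent) state
    evaluate(state)        -> scalar score for a state
    Returns the best first action and its backed-up value.
    """
    def search(s, d):
        if d == 0:
            return evaluate(s)
        # Back up the best achievable score from each simulated successor.
        return max(search(predict(s, a), d - 1) for a in actions)

    best_action, best_value = None, -math.inf
    for a in actions:
        v = search(predict(state, a), depth - 1)
        if v > best_value:
            best_action, best_value = a, v
    return best_action, best_value
```

Because `predict` operates in the learned representation, intermediate states can also be decoded for visualization, matching the abstract's point about visualizing intermediate motion goals.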