Abstract: Path planning plays an essential role in many areas of robotics. Various planning techniques have been presented, either focusing on learning a specific task from demonstrations or on retrieving trajectories by optimizing hand-crafted cost functions that are well defined a priori. In this work, we present an incremental adversarial learning-based framework that allows inferring implicit behaviour, i.e. the natural characteristics of a given set of trajectories. To achieve adversarial learning, a zero-sum game is constructed between a planning algorithm and an adversary, the discriminator. We employ the discriminator within an optimal motion planning algorithm, such that costs can be learned and optimized iteratively, improving the integration of implicit behaviour. By combining a cost-based planning approach with trained intrinsic behaviour, the method can also be integrated with other constraints, such as obstacles or general cost factors, within a single planning framework. We demonstrate the proposed method on a dataset for collision avoidance, as well as on the generation of human-like trajectories from motion capture data. Our results show that incremental adversarial learning is able to generate paths that reflect the natural implicit behaviour of a dataset, with the ability to improve performance through iterative learning and generation.
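The zero-sum game between a sampling planner and a discriminator can be sketched in toy form. Everything below (the hand-picked trajectory features, the logistic discriminator, the noisy straight-line "planner") is a hypothetical stand-in for illustration, not the paper's actual architecture: the planner minimizes a cost derived from the discriminator, while the discriminator is trained to separate demonstrations from planned trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(traj):
    """Hand-picked trajectory features (hypothetical choice):
    mean step length, total turning, and a bias term."""
    steps = np.diff(traj, axis=0)
    step_len = np.linalg.norm(steps, axis=1).mean()
    heading = np.arctan2(steps[:, 1], steps[:, 0])
    turning = np.abs(np.diff(heading)).sum()
    return np.array([step_len, turning, 1.0])

def discriminator(w, traj):
    """Probability that a trajectory is a demonstration (logistic model)."""
    return 1.0 / (1.0 + np.exp(-w @ features(traj)))

def cost(w, traj):
    """Planning cost: low when the discriminator finds the
    trajectory demonstration-like."""
    return -np.log(discriminator(w, traj) + 1e-9)

def sample_trajectory(noise):
    """Stand-in sampling planner: noisy straight line from (0,0) to (1,1)."""
    t = np.linspace(0.0, 1.0, 20)[:, None]
    return t * np.array([1.0, 1.0]) + noise * rng.normal(size=(20, 2))

# Demonstrations: nearly straight paths (the implicit behaviour to learn).
demos = [sample_trajectory(0.005) for _ in range(30)]

w = np.zeros(3)
lr = 0.5
for it in range(50):
    # Planner move: sample candidates, keep the lowest-cost one.
    candidates = [sample_trajectory(0.1) for _ in range(20)]
    best = min(candidates, key=lambda tr: cost(w, tr))
    # Adversary move: one gradient step pushing demos toward label 1,
    # the planned trajectory toward label 0.
    for traj, label in [(demos[it % len(demos)], 1.0), (best, 0.0)]:
        p = discriminator(w, traj)
        w += lr * (label - p) * features(traj)
```

After the loop, straight demonstration-like paths incur a lower learned cost than noisy ones, which is the behaviour the incremental scheme iterates on.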
Abstract: Tracking the rotation and translation of medical instruments plays a substantial role in many modern interventions. Traditional external optical tracking systems are often subject to line-of-sight issues, in particular when the region of interest is difficult to access or the procedure allows only for limited rigid-body markers. Inside-out tracking systems aim to overcome these issues. We propose a marker-less tracking system based on visual SLAM to enable tracking of instruments in an interventional scenario. To achieve this goal, we mount a miniature multi-modal (monocular, stereo, active depth) vision system on the object of interest and relocalize its pose within an adaptive map of the operating room. We compare state-of-the-art algorithmic pipelines and apply the idea to transrectal 3D ultrasound (TRUS) compounding of the prostate. The obtained volumes are compared to reconstructions using a commercial optical tracking system as well as a robotic manipulator. Feature-based binocular SLAM is identified as the most promising method and is tested extensively in a challenging clinical environment, under severe occlusion, and for the use case of prostate US biopsies.
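Relocalizing the mounted camera's pose within a map ultimately reduces to estimating a rigid transform from matched points. A minimal sketch of that core step, under the simplifying assumption of known, noise-free 3D-3D correspondences (real SLAM pipelines match 2D image features and are considerably more involved), is the Kabsch least-squares alignment:

```python
import numpy as np

def estimate_rigid_pose(map_pts, obs_pts):
    """Least-squares rigid transform (Kabsch algorithm) aligning observed
    points to map points: returns R, t with map_pts ~ R @ obs_pts + t."""
    mu_m = map_pts.mean(axis=0)
    mu_o = obs_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_m - R @ mu_o
    return R, t

# Synthetic check: rotate/translate a random point cloud, then recover the pose.
rng = np.random.default_rng(1)
obs_pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
map_pts = (R_true @ obs_pts.T).T + t_true

R_est, t_est = estimate_rigid_pose(map_pts, obs_pts)
```

On clean correspondences the estimate recovers the true rotation and translation exactly; in practice the same alignment runs inside a RANSAC loop over noisy feature matches.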
Abstract: Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured-light scanners, the initially planned acquisition path can be followed with an accuracy of 2.46 +/- 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, and of 0.97 mm after an online update of the calibration based on a closed-loop registration.
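The image-to-patient and patient-to-robot calibrations chain into a single transform that maps MRI-planned targets into robot coordinates, and the reported accuracies are Euclidean distances in that frame. A sketch of that chain with homogeneous transforms; all matrices and point values below are made up for illustration, only the image -> patient -> robot structure follows the abstract:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply(T, p):
    """Apply a homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]

# Hypothetical calibration chain (units: metres).
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
T_patient_image = hom(Rz, np.array([0.02, -0.01, 0.15]))        # image-to-patient
T_robot_patient = hom(np.eye(3), np.array([0.40, 0.10, 0.30]))  # patient-to-robot

T_robot_image = T_robot_patient @ T_patient_image  # full chain: image -> robot

# A target planned in the MRI volume, expressed in robot coordinates:
p_image = np.array([0.05, 0.03, 0.10])
p_robot = apply(T_robot_image, p_image)

# Accuracy metric as in the abstract: Euclidean distance between the pose
# actually reached and the planned target (offset here is invented).
p_reached = p_robot + np.array([0.001, -0.002, 0.0005])
error_mm = 1e3 * np.linalg.norm(p_reached - p_robot)
```

The closed-loop registration described in the abstract would correspond to updating `T_patient_image` online from the MRI/US registration result, shrinking exactly this distance.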