Abstract: In ultrasound imaging, the appearance of homogeneous tissue regions is subject to speckle, which for certain applications can make the detection of tissue irregularities difficult. To cope with this, it is common practice to apply speckle-reduction filters to the images. Most conventional filtering techniques are fairly hand-crafted and often need to be finely tuned to the hardware, imaging scheme, and application at hand. Learning-based techniques, on the other hand, either require a target image for training (in the case of fully supervised techniques) or rely on narrow, complex physics-based models of the speckle appearance that might not apply in all cases. With this work we propose a deep-learning-based method for speckle removal without these limitations. To enable this, we make use of realistic ultrasound simulation techniques that allow for the instantiation of several independent speckle realizations representing the exact same tissue, thus allowing for the application of image-reconstruction techniques that work with pairs of differently corrupted data. Compared to two other state-of-the-art approaches (non-local means and the Optimized Bayesian non-local means filter), our method performs favorably in qualitative comparisons and quantitative evaluation, despite being trained on simulations alone, and is several orders of magnitude faster.
Abstract: Intra-operative ultrasound is an increasingly important imaging modality in neurosurgery. However, manual interaction with imaging data during procedures, for example to select landmarks or perform segmentation, is difficult and can be time-consuming. Yet, as registration to other imaging modalities is required in most cases, some annotation is necessary. We propose a segmentation method based on DeepVNet and specifically evaluate the integration of pre-training with simulated ultrasound sweeps to improve automatic segmentation and enable a fully automatic initialization of registration. In this view, we show that despite training on coarse and incomplete semi-automatic annotations, our approach is able to capture the desired superficial structures such as the \textit{sulci}, the \textit{cerebellar tentorium}, and the \textit{falx cerebri}. We perform a five-fold cross-validation on the publicly available RESECT dataset. Trained on the dataset alone, we report Dice and Jaccard coefficients of $0.45 \pm 0.09$ and $0.30 \pm 0.07$ respectively, as well as an average distance of $0.78 \pm 0.36~mm$. With the suggested pre-training, we obtain Dice and Jaccard coefficients of $0.47 \pm 0.10$ and $0.31 \pm 0.08$, and an average distance of $0.71 \pm 0.38~mm$. The qualitative evaluation suggests that with pre-training the network learns to generalize better and provides refined, more complete segmentations in comparison to the incomplete annotations provided as input.
Abstract: Freehand three-dimensional ultrasound (3D-US) has gained considerable interest in research, but even today suffers from high inter-operator variability in clinical practice. This variability mainly arises from tracking inaccuracies as well as the directionality of the ultrasound data, which is neglected in most of today's reconstruction methods. By providing a novel paradigm for the acquisition and reconstruction of tracked freehand 3D ultrasound, this work presents the concept of Computational Sonography (CS) to model the directionality of ultrasound information. CS preserves the directionality of the acquired data and allows for its exploitation by computational algorithms. In this regard, we propose a set of mathematical models to represent 3D-US data, inspired by the physics of ultrasound imaging. We compare different models of Computational Sonography to classical scalar compounding for freehand acquisitions, providing both an improved preservation of US directionality and improved image quality in 3D. The novel concept is evaluated on a set of phantom datasets, as well as on in-vivo acquisitions for musculoskeletal and vascular applications.
Abstract: Path planning plays an essential role in many areas of robotics. Various planning techniques have been presented, either focusing on learning a specific task from demonstrations or on retrieving trajectories by optimizing hand-crafted cost functions that are well defined a priori. In this work, we present an incremental adversarial learning-based framework that allows inferring implicit behaviour, i.e. the natural characteristics of a set of given trajectories. To achieve adversarial learning, a zero-sum game is constructed between a planning algorithm and an adversary, the discriminator. We employ the discriminator within an optimal motion-planning algorithm, such that costs can be learned and optimized iteratively, improving the integration of implicit behaviour. By combining a cost-based planning approach with trained intrinsic behaviour, our method can also be integrated with other constraints such as obstacles or general cost factors within a single planning framework. We demonstrate the proposed method on a dataset for collision avoidance, as well as for the generation of human-like trajectories from motion-capture data. Our results show that incremental adversarial learning is able to generate paths that reflect the natural implicit behaviour of a dataset, with the ability to improve performance through iterative learning and generation.
Abstract: Tracking of the rotation and translation of medical instruments plays a substantial role in many modern interventions. Traditional external optical tracking systems are often subject to line-of-sight issues, in particular when the region of interest is difficult to access or the procedure allows only for limited rigid-body markers. The introduction of inside-out tracking systems aims to overcome these issues. We propose a marker-less tracking system based on visual SLAM to enable tracking of instruments in an interventional scenario. To achieve this goal, we mount a miniature multi-modal (monocular, stereo, active depth) vision system on the object of interest and relocalize its pose within an adaptive map of the operating room. We compare state-of-the-art algorithmic pipelines and apply the idea to transrectal 3D ultrasound (TRUS) compounding of the prostate. Obtained volumes are compared to reconstructions using a commercial optical tracking system as well as a robotic manipulator. Feature-based binocular SLAM is identified as the most promising method and is tested extensively in a challenging clinical environment under severe occlusion and for the use case of prostate US biopsies.
Abstract: Registration of partial-view 3D US volumes with MRI data is influenced by initialization. The standard of practice is to use extrinsic or intrinsic landmarks, which can be very tedious to obtain. To overcome the limitations of registration initialization, we present a novel approach based on Euclidean distance maps derived from easily obtainable coarse segmentations. We evaluate our approach quantitatively on the publicly available RESECT dataset and show that it is robust with regard to the overlap of the target area and the initial position. Furthermore, our method provides initializations that fall within the capture ranges of state-of-the-art nonlinear, deformable image registration algorithms.
Abstract: Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult; second, most pipelines are implemented in special hardware, which limits the flexibility of the processing steps implemented on such platforms. Methods: With SUPRA we propose an open-source pipeline for fully Software Defined Ultrasound Processing for Real-time Applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and makes modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run-time. Results: The pipeline shows image quality comparable to that of a clinical system and, backed by point-spread-function measurements, a comparable resolution. Including all processing stages of a typical ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Conclusions: Our software ultrasound pipeline opens up research in image acquisition. By giving access to ultrasound data from early stages (raw channel data, radio-frequency data), it simplifies the development of imaging methods. Furthermore, it improves the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
Abstract: Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured-light scanners, the initially planned acquisition path can be followed with an accuracy of 2.46 +/- 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between the planned ultrasound and the MRI) of 4.47 mm, and of 0.97 mm after an online update of the calibration based on a closed-loop registration.