Abstract: In ultrasound imaging, the appearance of homogeneous regions of tissue is subject to speckle, which for certain applications can make the detection of tissue irregularities difficult. To cope with this, it is common practice to apply speckle reduction filters to the images. Most conventional filtering techniques are fairly hand-crafted and often need to be finely tuned to the hardware, imaging scheme, and application at hand. Learning-based techniques, on the other hand, suffer from the need for a target image for training (in the case of fully supervised techniques) or require narrow, complex physics-based models of the speckle appearance that might not apply in all cases. With this work we propose a deep-learning-based method for speckle removal without these limitations. To enable this, we make use of realistic ultrasound simulation techniques that allow for the instantiation of several independent speckle realizations representing the exact same tissue, thus allowing for the application of image reconstruction techniques that work with pairs of differently corrupted data. Compared to two other state-of-the-art approaches (non-local means and the Optimized Bayesian non-local means filter), our method performs favorably in qualitative comparisons and quantitative evaluation, despite being trained on simulations alone, and is several orders of magnitude faster.
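The key mechanism described above, training on pairs of independently simulated speckle realizations of the same tissue, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example: the toy network, the placeholder simulator, and all hyper-parameters are illustrative assumptions and not the configuration used in the work.

\begin{verbatim}
# Minimal sketch of training a denoiser on pairs of independently
# simulated speckle realizations of the same tissue (Noise2Noise-style).
# The architecture and data below are placeholders, not the paper's setup.
import torch
import torch.nn as nn

class SmallDenoiser(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def simulate_pair(batch=4, size=128):
    # Placeholder for an ultrasound simulator: the same "tissue" map is
    # corrupted by two independent multiplicative speckle patterns.
    tissue = torch.rand(batch, 1, size, size)
    a = tissue * torch.randn_like(tissue).abs()
    b = tissue * torch.randn_like(tissue).abs()
    return a, b

model = SmallDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(100):
    noisy_in, noisy_target = simulate_pair()
    # Neither image is clean; with independent speckle on input and
    # target, the regression still converges towards the clean image.
    loss = loss_fn(model(noisy_in), noisy_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}

The point of the sketch is that no speckle-free target is needed: as long as the two realizations share the same underlying tissue and have independent corruption, a pixel-wise loss between them is a valid training signal.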
Abstract: Intra-operative ultrasound is an increasingly important imaging modality in neurosurgery. However, manual interaction with imaging data during the procedures, for example to select landmarks or perform segmentation, is difficult and can be time-consuming. Yet, as registration to other imaging modalities is required in most cases, some annotation is necessary. We propose a segmentation method based on DeepVNet and specifically evaluate the integration of pre-training with simulated ultrasound sweeps to improve automatic segmentation and enable a fully automatic initialization of registration. In this regard, we show that despite training on coarse and incomplete semi-automatic annotations, our approach is able to capture the desired superficial structures such as the \textit{sulci}, the \textit{cerebellar tentorium}, and the \textit{falx cerebri}. We perform a five-fold cross-validation on the publicly available RESECT dataset. Trained on the dataset alone, we report a Dice and Jaccard coefficient of $0.45 \pm 0.09$ and $0.30 \pm 0.07$ respectively, as well as an average distance of $0.78 \pm 0.36~mm$. With the suggested pre-training, we report a Dice and Jaccard coefficient of $0.47 \pm 0.10$ and $0.31 \pm 0.08$, and an average distance of $0.71 \pm 0.38~mm$. The qualitative evaluation suggests that with pre-training the network learns to generalize better and provides refined and more complete segmentations in comparison to the incomplete annotations provided as input.
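For reference, the overlap metrics reported above can be computed from binary masks with a few lines of NumPy. This is a generic sketch of the standard Dice and Jaccard definitions, not the evaluation code used for the RESECT experiments.

\begin{verbatim}
# Standard Dice and Jaccard coefficients on binary segmentation masks.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def jaccard(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union

# Toy example: two partially overlapping square masks.
a = np.zeros((64, 64)); a[10:40, 10:40] = 1
b = np.zeros((64, 64)); b[20:50, 20:50] = 1
print(dice(a, b), jaccard(a, b))
\end{verbatim}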
Abstract: Ultrasound imaging is caught between the quest for the highest image quality and the necessity for clinical usability. Our contribution is two-fold: First, we propose a novel fully convolutional neural network for ultrasound reconstruction. Second, a custom loss function tailored to the modality is employed for end-to-end training of the network. We demonstrate that training a network to map time-delayed raw data to a minimum variance ground truth offers performance increases in a clinical environment. In doing so, we explore a path towards improved, clinically viable ultrasound reconstruction. The proposed method displays both promising image reconstruction quality and acquisition frequency when integrated for live ultrasound scanning. A clinical evaluation is conducted to verify the diagnostic usefulness of the proposed method in a clinical setting.
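A minimal sketch of the supervised beamforming setup described above, assuming a PyTorch implementation: a small fully convolutional network regresses time-delayed channel data onto a minimum-variance beamformed target. The architecture, tensor shapes, and the combined loss are placeholders for illustration; the custom, modality-tailored loss of the paper is not reproduced here.

\begin{verbatim}
# Illustrative sketch: an FCN maps time-delayed channel data to a
# minimum-variance (MV) beamformed target. Network, shapes, and loss
# are assumptions, not the paper's architecture or custom loss.
import torch
import torch.nn as nn

class BeamformingFCN(nn.Module):
    def __init__(self, n_channels=64, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, 1, 1),  # one output sample per pixel
        )

    def forward(self, delayed_channels):
        return self.net(delayed_channels)

def combined_loss(pred, target, alpha=0.8):
    # Placeholder loss: a simple mix of L1 and MSE terms.
    return alpha * nn.functional.l1_loss(pred, target) + \
           (1 - alpha) * nn.functional.mse_loss(pred, target)

model = BeamformingFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: 64 time-delayed channels per frame, MV target image.
delayed = torch.randn(2, 64, 256, 128)
mv_target = torch.randn(2, 1, 256, 128)
loss = combined_loss(model(delayed), mv_target)
loss.backward()
optimizer.step()
\end{verbatim}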
Abstract: In this paper we propose a novel augmentation technique that not only improves the performance of deep neural networks on clean test data, but also significantly increases their robustness to random transformations, both affine and projective. Inspired by ManiFool, the augmentation is performed by a line-search manifold-exploration method that learns affine geometric transformations that lead to the misclassification of an image, while ensuring that it remains on the same manifold as the training data. This augmentation method populates any training dataset with images that lie on the border between the manifolds of two classes and maximizes the variance the network is exposed to during training. Our method was thoroughly evaluated on the challenging tasks of fine-grained skin lesion classification from limited data and breast tumor classification of mammograms. Compared with traditional augmentation methods and with images synthesized by Generative Adversarial Networks, our method not only achieves state-of-the-art performance but also significantly improves the network's robustness.
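The line-search idea can be sketched as follows: the magnitude of an affine perturbation is increased until the classifier's decision flips, and the resulting image near the class border is kept for augmentation with its original label. This is a strongly simplified, hypothetical illustration of the principle and not the ManiFool-based procedure evaluated in the paper.

\begin{verbatim}
# Simplified line search along one affine direction until the
# classifier misclassifies the image. Angles, shear range, and step
# count are arbitrary illustrative choices.
import torch
import torchvision.transforms.functional as TF

def line_search_augment(model, image, label, max_angle=30.0, steps=10):
    """Return the smallest tested affine perturbation of `image`
    (a CxHxW tensor) that flips the classifier's decision."""
    model.eval()
    candidate = image
    for i in range(1, steps + 1):
        angle = max_angle * i / steps          # growing rotation ...
        shear = 5.0 * i / steps                # ... and shear magnitude
        candidate = TF.affine(image, angle=angle, translate=[0, 0],
                              scale=1.0, shear=[shear])
        with torch.no_grad():
            pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
        if pred != label:
            break  # decision flipped: candidate sits near the class border
    return candidate  # added to the training set with the original label
\end{verbatim}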
Abstract: Freehand three-dimensional ultrasound (3D-US) has gained considerable interest in research, but even today suffers from high inter-operator variability in clinical practice. This variability mainly arises from tracking inaccuracies as well as from the directionality of the ultrasound data, which is neglected in most of today's reconstruction methods. By providing a novel paradigm for the acquisition and reconstruction of tracked freehand 3D ultrasound, this work presents the concept of Computational Sonography (CS) to model the directionality of ultrasound information. CS preserves the directionality of the acquired data and allows for its exploitation by computational algorithms. In this regard, we propose a set of mathematical models to represent 3D-US data, inspired by the physics of ultrasound imaging. We compare different models of Computational Sonography to classical scalar compounding for freehand acquisitions, providing both an improved preservation of US directionality and improved image quality in 3D. The novel concept is evaluated on a set of phantom datasets, as well as on in-vivo acquisitions for musculoskeletal and vascular applications.
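The contrast between classical scalar compounding and a direction-preserving representation can be made concrete with a small sketch. The per-voxel model below, which simply stores intensity samples together with their insonification directions, is a simplified stand-in and not one of the mathematical models proposed in the work.

\begin{verbatim}
# Scalar compounding (averaging overlapping samples) versus a toy
# direction-preserving representation of freehand 3D-US data.
import numpy as np

grid_shape = (64, 64, 64)

# Classical compounding: overlapping samples are averaged into a scalar.
accum = np.zeros(grid_shape)
count = np.zeros(grid_shape)

def add_sample_scalar(idx, intensity):
    accum[idx] += intensity
    count[idx] += 1

def scalar_volume():
    return np.divide(accum, count, out=np.zeros_like(accum), where=count > 0)

# Direction-preserving alternative: keep (direction, intensity) pairs per
# voxel so that later algorithms can exploit the view dependence.
directional = {}

def add_sample_directional(idx, direction, intensity):
    directional.setdefault(idx, []).append((np.asarray(direction), intensity))

# Example: the same voxel observed from two sweep directions.
add_sample_scalar((10, 10, 10), 0.8)
add_sample_scalar((10, 10, 10), 0.3)
add_sample_directional((10, 10, 10), (0, 0, 1), 0.8)
add_sample_directional((10, 10, 10), (1, 0, 0), 0.3)
print(scalar_volume()[10, 10, 10], directional[(10, 10, 10)])
\end{verbatim}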
Abstract: Registration of partial-view 3D US volumes with MRI data is influenced by initialization. The standard practice is to use extrinsic or intrinsic landmarks, which can be very tedious to obtain. To overcome the limitations of registration initialization, we present a novel approach based on Euclidean distance maps derived from easily obtainable coarse segmentations. We evaluate our approach quantitatively on the publicly available RESECT dataset and show that it is robust with regard to the overlap of the target area and the initial position. Furthermore, our method provides initializations that lie within the capture range of state-of-the-art nonlinear, deformable image registration algorithms.
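A toy version of distance-map-based initialization, assuming SciPy is available: coarse binary segmentations from both modalities are converted into Euclidean distance maps, and a translation minimizing their squared difference is found by exhaustive search. A realistic implementation would optimize a full rigid transform; the translation-only search is kept for brevity.

\begin{verbatim}
# Distance-map-based initialization sketch (translation only).
import numpy as np
from scipy.ndimage import distance_transform_edt, shift

def distance_map(mask):
    # Euclidean distance to the segmented structure (zero inside the mask).
    return distance_transform_edt(~mask.astype(bool))

def init_translation(fixed_mask, moving_mask, search=range(-10, 11, 2)):
    d_fixed = distance_map(fixed_mask)
    d_moving = distance_map(moving_mask)
    best, best_cost = (0, 0, 0), np.inf
    for dz in search:
        for dy in search:
            for dx in search:
                moved = shift(d_moving, (dz, dy, dx), order=1, mode='nearest')
                cost = np.mean((d_fixed - moved) ** 2)
                if cost < best_cost:
                    best, best_cost = (dz, dy, dx), cost
    return best  # offset to use as the registration starting point
\end{verbatim}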
Abstract: Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult. Second, most pipelines are implemented in dedicated hardware, resulting in limited flexibility of the implemented processing steps on such platforms. Methods: With SUPRA we propose an open-source pipeline for fully Software Defined Ultrasound Processing for Real-time Applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run-time. Results: The pipeline shows image quality comparable to a clinical system and, backed by point-spread-function measurements, a comparable resolution. Including all processing stages of a typical ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Conclusions: Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radio-frequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
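As a conceptual illustration of the earliest processing stage such a pipeline covers, the sketch below implements plane-wave delay-and-sum beamforming in NumPy. SUPRA itself is a GPU-based implementation; the simplified geometry, parameters, and loop-based formulation here are assumptions chosen for readability, not SUPRA code.

\begin{verbatim}
# Conceptual delay-and-sum beamforming of raw channel data from a
# single plane-wave transmit (simplified geometry, no apodization).
import numpy as np

def delay_and_sum(channel_data, element_x, image_x, image_z,
                  c=1540.0, fs=40e6):
    """channel_data: (n_elements, n_samples) RF data, plane wave fired
    at t=0; element_x: lateral element positions in meters."""
    n_elements, n_samples = channel_data.shape
    image = np.zeros((len(image_z), len(image_x)))
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # Transmit delay (plane wave) plus per-element receive delay.
            t_tx = z / c
            t_rx = np.sqrt((element_x - x) ** 2 + z ** 2) / c
            samples = np.round((t_tx + t_rx) * fs).astype(int)
            valid = samples < n_samples
            image[iz, ix] = channel_data[np.arange(n_elements)[valid],
                                         samples[valid]].sum()
    return image
\end{verbatim}

In a full pipeline such as the one described above, this stage would be followed by envelope detection, log compression, and scan conversion to produce the final B-mode image.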