Abstract:Hemodynamic parameters such as pressure and wall shear stress play an important role in diagnosis, prognosis, and treatment planning in cardiovascular diseases. These parameters can be accurately computed using computational fluid dynamics (CFD), but CFD is computationally intensive. Hence, deep learning methods have been adopted as a surrogate to rapidly estimate CFD outcomes. A drawback of such data-driven models is the need for time-consuming reference CFD simulations for training. In this work, we introduce an active learning framework to reduce the number of CFD simulations required for the training of surrogate models, lowering the barriers to their deployment in new applications. We propose three distinct querying strategies to determine for which unlabeled samples CFD simulations should be obtained. These querying strategies are based on geometrical variance, ensemble uncertainty, and adherence to the physics governing fluid dynamics. We benchmark these methods on velocity field estimation in synthetic coronary artery bifurcations and find that they allow for substantial reductions in annotation cost. Notably, we find that our strategies reduce the number of samples required by up to 50% and make the trained models more robust to difficult cases. Our results show that active learning is a feasible strategy to increase the potential of deep learning-based CFD surrogates.
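The ensemble-based querying strategy can be illustrated with a minimal sketch: given a committee of surrogate models and a pool of unlabeled geometries, CFD labels are requested for the candidates on which the ensemble disagrees most. The model interface and shapes below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an ensemble-uncertainty querying strategy (one of the three
# strategies described in the abstract). Names and shapes are illustrative.
import numpy as np

def ensemble_uncertainty_query(models, unlabeled_pool, k=10):
    """Select the k pool samples whose ensemble predictions disagree most.

    models:         list of trained surrogate models, each mapping a geometry
                    to a predicted velocity field of shape (n_points, 3)
    unlabeled_pool: list of candidate geometries (no CFD label yet)
    k:              number of samples to send to the CFD solver
    """
    scores = []
    for geometry in unlabeled_pool:
        # Stack predictions from every ensemble member: (n_models, n_points, 3)
        preds = np.stack([model(geometry) for model in models])
        # Uncertainty = mean (over points) standard deviation across the ensemble
        scores.append(preds.std(axis=0).mean())
    # Return indices of the k most uncertain candidates
    return np.argsort(scores)[::-1][:k]
```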
Abstract:Resolving arterial flows is essential for understanding cardiovascular pathologies, improving diagnosis, and monitoring patient condition. Ultrasound contrast imaging uses microbubbles to enhance the scattering of the blood pool, allowing for real-time visualization of blood flow. Recent developments in vector flow imaging further expand the imaging capabilities of ultrasound by temporally resolving fast arterial flow. The next obstacle to overcome is the lack of spatial resolution. Super-resolved ultrasound images can be obtained by deconvolving radiofrequency (RF) signals before beamforming, breaking the link between resolution and pulse duration. Convolutional neural networks (CNNs) can be trained to locally estimate the deconvolution kernel and consequently super-localize the microbubbles directly within the RF signal. However, microbubble contrast is highly nonlinear, and the potential of CNNs in microbubble localization has not yet been fully exploited. Assessing deep learning-based deconvolution performance for non-trivial imaging pulses is therefore essential for successful translation to a practical setting, where the signal-to-noise ratio is limited and transmission schemes should comply with safety guidelines. In this study, we train CNNs to deconvolve RF signals and localize microbubbles driven by harmonic pulses, chirps, or delay-encoded pulse trains. Furthermore, we discuss potential hurdles for in-vitro and in-vivo super-resolution by presenting preliminary experimental results. We find that the CNNs can accurately localize microbubbles for all pulses, with a short imaging pulse offering the best performance in noise-free conditions. Chirps offer comparable performance in the absence of noise, but are more robust to noise and outperform all other pulses in low signal-to-noise-ratio conditions.
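As a rough illustration of the deconvolution networks described above, the sketch below shows a 1D CNN that maps an RF line to a per-sample microbubble likelihood; the layer counts and kernel sizes are assumptions, not the architecture used in the study.

```python
# Minimal sketch of a 1D CNN that maps raw RF lines to per-sample microbubble
# presence, as a stand-in for the deconvolution networks described above.
import torch
import torch.nn as nn

class RFDeconvNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=1),  # per-sample bubble likelihood
        )

    def forward(self, rf):          # rf: (batch, 1, n_samples)
        return self.net(rf)

# Training would minimise, e.g., a localisation loss between the network output
# and ground-truth bubble positions convolved with a narrow Gaussian.
rf = torch.randn(4, 1, 2048)        # four simulated RF lines
print(RFDeconvNet()(rf).shape)      # torch.Size([4, 1, 2048])
```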
Abstract:Cardiovascular hemodynamic fields provide valuable medical decision markers for coronary artery disease. Computational fluid dynamics (CFD) is the gold standard for accurate, non-invasive evaluation of these quantities in vivo. In this work, we propose a time-efficient surrogate model, powered by machine learning, for the estimation of pulsatile hemodynamics based on steady-state priors. We introduce deep vectorised operators, a modelling framework for discretisation-independent learning on infinite-dimensional function spaces. The underlying neural architecture is a neural field conditioned on hemodynamic boundary conditions. Importantly, we show how relaxing the requirement of point-wise action to permutation-equivariance leads to a family of models that can be parametrised by message passing and self-attention layers. We evaluate our approach on a dataset of 74 stenotic coronary arteries extracted from coronary computed tomography angiography (CCTA) with patient-specific pulsatile CFD simulations as ground truth. We show that our model produces accurate estimates of the pulsatile velocity and pressure while being agnostic to re-sampling of the source domain (discretisation independence). This shows that deep vectorised operators are a powerful modelling tool for cardiovascular hemodynamics estimation in coronary arteries and beyond.
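A minimal sketch of the point-wise end of this model family is given below: an MLP applied independently to every mesh point and conditioned on global boundary conditions, which makes it agnostic to re-sampling of the domain. Feature dimensions and layer sizes are illustrative assumptions.

```python
# Minimal sketch of a point-wise "vectorised operator": an MLP applied independently
# to every mesh point, conditioned on global hemodynamic boundary conditions.
import torch
import torch.nn as nn

class PointwiseOperator(nn.Module):
    def __init__(self, point_dim=3, prior_dim=4, bc_dim=2, hidden=128, out_dim=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + prior_dim + bc_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, out_dim),   # pulsatile velocity (3) + pressure (1)
        )

    def forward(self, points, steady_prior, bc):
        # points: (n, 3), steady_prior: (n, prior_dim), bc: (bc_dim,) shared by all points
        bc = bc.expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, steady_prior, bc], dim=-1))
```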
Abstract:Deep learning-based medical image segmentation and surface mesh generation typically involve a sequential pipeline from image to segmentation to meshes, often requiring large training datasets while making limited use of prior geometric knowledge. This may lead to topological inconsistencies and suboptimal performance in low-data regimes. To address these challenges, we propose a data-efficient deep learning method for direct 3D anatomical object surface meshing using geometric priors. Our approach employs a multi-resolution graph neural network that operates on a prior geometric template which is deformed to fit object boundaries of interest. We show how different templates may be used for the different surface meshing targets, and introduce a novel masked autoencoder pretraining strategy for 3D spherical data. The proposed method outperforms nnUNet in a one-shot setting for segmentation of the pericardium, left ventricle (LV) cavity and the LV myocardium. Similarly, the method outperforms other lumen segmentation methods operating on multi-planar reformatted images. Results further indicate that mesh quality is on par with or improves upon marching cubes post-processing of voxel mask predictions, while remaining flexible in the choice of mesh triangulation prior, thus paving the way for more accurate and topologically consistent 3D medical object surface meshing.
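The template-deformation idea can be sketched as follows: vertex features are exchanged along template edges and mapped to per-vertex offsets that pull the prior template towards the target boundary. This single-resolution, hand-rolled message-passing step is an illustrative simplification of the multi-resolution network described above.

```python
# Minimal sketch of one template-deformation step with a graph neural network.
# Layer sizes and the mean-aggregation scheme are illustrative assumptions.
import torch
import torch.nn as nn

class DeformStep(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
        self.offset = nn.Linear(feat_dim, 3)       # displacement per vertex

    def forward(self, verts, feats, edge_index):
        # verts: (n, 3), feats: (n, feat_dim), edge_index: (2, n_edges) long tensor
        src, dst = edge_index
        # Mean-aggregate neighbour features (simple message passing along template edges)
        agg = torch.zeros_like(feats).index_add_(0, dst, feats[src])
        deg = torch.zeros(feats.shape[0], 1).index_add_(0, dst, torch.ones(src.shape[0], 1))
        feats = self.update(torch.cat([feats, agg / deg.clamp(min=1)], dim=-1))
        return verts + self.offset(feats), feats   # deformed vertices, updated features
```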
Abstract:Time-resolved three-dimensional flow MRI (4D flow MRI) provides a unique non-invasive solution to visualize and quantify hemodynamics in blood vessels such as the aortic arch. However, most current analysis methods for arterial 4D flow MRI use static artery walls because of the difficulty in obtaining a full cycle segmentation. To overcome this limitation, we propose a neural fields-based method that directly estimates continuous periodic wall deformations throughout the cardiac cycle. For a 3D + time imaging dataset, we optimize an implicit neural representation (INR) that represents a time-dependent velocity vector field (VVF). An ODE solver is used to integrate the VVF into a deformation vector field (DVF) that can deform images, segmentation masks, or meshes over time, thereby visualizing and quantifying local wall motion patterns. To properly reflect the periodic nature of 3D + time cardiovascular data, we impose periodicity in two ways. First, by periodically encoding the time input to the INR, and hence the VVF. Second, by regularizing the DVF. We demonstrate the effectiveness of this approach on synthetic data with different periodic patterns, ECG-gated CT, and 4D flow MRI data. The resulting method could be used to improve 4D flow MRI analysis.
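A minimal sketch of the two periodicity ingredients is shown below: a sine/cosine time encoding fed to an MLP that plays the role of the INR-based VVF, and a forward-Euler loop standing in for the ODE solver that integrates the VVF into a DVF. Layer sizes and the integrator are illustrative assumptions.

```python
# Minimal sketch: periodic time encoding for the VVF, plus explicit integration into a DVF.
import torch
import torch.nn as nn

class PeriodicVVF(nn.Module):
    def __init__(self, period=1.0, hidden=128):
        super().__init__()
        self.period = period
        self.mlp = nn.Sequential(nn.Linear(5, hidden), nn.GELU(),
                                 nn.Linear(hidden, hidden), nn.GELU(),
                                 nn.Linear(hidden, 3))

    def forward(self, x, t):
        # Periodic encoding guarantees v(x, t) == v(x, t + period)
        phase = 2 * torch.pi * t / self.period
        t_enc = torch.cat([torch.sin(phase), torch.cos(phase)], dim=-1)
        return self.mlp(torch.cat([x, t_enc], dim=-1))

def integrate_dvf(vvf, points, t_end=1.0, n_steps=20):
    """Forward-Euler integration of the VVF into a displacement (DVF) at t_end."""
    x, dt = points.clone(), t_end / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + dt * vvf(x, t)
    return x - points      # displacement of each input point
```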
Abstract:Vestibular schwannomas (VS) are benign tumors that are generally managed by active surveillance with MRI examination. To further assist clinical decision-making and avoid overtreatment, an accurate prediction of tumor growth based on longitudinal imaging is highly desirable. In this paper, we introduce DeepGrowth, a deep learning method that incorporates neural fields and recurrent neural networks for prospective tumor growth prediction. In the proposed method, each tumor is represented as a signed distance function (SDF) conditioned on a low-dimensional latent code. Unlike previous studies that perform tumor shape prediction directly in the image space, we predict the latent codes instead and then reconstruct future shapes from them. To deal with irregular time intervals, we introduce a time-conditioned recurrent module based on a ConvLSTM and a novel temporal encoding strategy, which enables the proposed model to output varying tumor shapes over time. The experiments on an in-house longitudinal VS dataset showed that the proposed model significantly improved performance ($\ge 1.6\%$ Dice score and $\ge 0.20$ mm 95\% Hausdorff distance), in particular for the top 20\% of tumors that grow or shrink the most ($\ge 4.6\%$ Dice score and $\ge 0.73$ mm 95\% Hausdorff distance). Our code is available at ~\burl{https://github.com/cyjdswx/DeepGrowth}.
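The recurrence over latent codes can be sketched as below, where a plain LSTM cell with the inter-scan interval appended to its input stands in for the time-conditioned ConvLSTM module; the predicted code would then be decoded into an SDF by the conditioned neural field (not shown).

```python
# Minimal sketch of the recurrence over tumor latent codes with irregular time intervals.
# A plain LSTMCell replaces the paper's time-conditioned ConvLSTM for brevity.
import torch
import torch.nn as nn

class LatentGrowthPredictor(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.cell = nn.LSTMCell(latent_dim + 1, hidden)   # +1 for the time interval
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, codes, dts):
        # codes: (seq_len, latent_dim) observed latent codes, dts: (seq_len,) time gaps
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros_like(h)
        for z, dt in zip(codes, dts):
            inp = torch.cat([z, dt.view(1)]).unsqueeze(0)
            h, c = self.cell(inp, (h, c))
        return self.head(h).squeeze(0)   # predicted latent code of the future shape
```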
Abstract:Chronic subdural hematoma (cSDH) is a common neurological condition characterized by the accumulation of blood between the brain and the dura mater. This accumulation of blood can exert pressure on the brain, potentially leading to fatal outcomes. Treatment options for cSDH are limited to invasive surgery or non-invasive management. Traditionally, the midline shift, hand-measured by experts from an ideal sagittal plane, and the hematoma volume have been the primary metrics for quantifying and analyzing cSDH. However, these approaches do not quantify the local 3D brain deformation caused by cSDH. We propose a novel method using anatomy-aware unsupervised diffeomorphic pseudo-healthy synthesis to generate brain deformation fields. The deformation fields derived from this process are utilized to extract biomarkers that quantify the shift in the brain due to cSDH. We use CT scans of 121 patients for training and validation of our method and find that our metrics allow the identification of patients who require surgery. Our results indicate that automatically obtained brain deformation fields might contain prognostic value for personalized cSDH treatment. Our implementation is available at: github.com/Barisimre/brain-morphing
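Once a deformation field is available, scalar brain-shift biomarkers can be extracted as in the sketch below; the specific summary statistics are illustrative and may differ from the biomarkers used in the paper.

```python
# Minimal sketch of turning a deformation vector field into scalar brain-shift biomarkers.
import numpy as np

def brain_shift_biomarkers(dvf, brain_mask, voxel_spacing=(1.0, 1.0, 1.0)):
    """dvf: (3, D, H, W) displacement field in voxels; brain_mask: (D, H, W) boolean."""
    disp_mm = dvf * np.asarray(voxel_spacing).reshape(3, 1, 1, 1)
    magnitude = np.linalg.norm(disp_mm, axis=0)[brain_mask]
    return {
        "mean_shift_mm": float(magnitude.mean()),
        "max_shift_mm": float(magnitude.max()),
        "p95_shift_mm": float(np.percentile(magnitude, 95)),
    }
```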
Abstract:Laparoscopic video tracking primarily focuses on two target types: surgical instruments and anatomy. The former could be used for skill assessment, while the latter is necessary for the projection of virtual overlays. Whereas instrument and anatomy tracking have often been considered two separate problems, in this paper we propose a method for tracking all structures jointly. Based on a single 2D monocular video clip, we train a neural field to represent a continuous spatiotemporal scene, used to create 3D tracks of all surfaces visible in at least one frame. Because instruments are small, they generally cover only a small part of the image, resulting in decreased tracking accuracy. Therefore, we propose enhanced class weighting to improve the instrument tracks. We evaluate tracking on video clips from laparoscopic cholecystectomies, where we find mean tracking accuracies of 92.4% for anatomical structures and 87.4% for instruments. Additionally, we assess the quality of depth maps obtained from the method's scene reconstructions. We show that these pseudo-depths have quality comparable to that of a state-of-the-art pre-trained depth estimator. On laparoscopic videos in the SCARED dataset, the method predicts depth with an MAE of 2.9 mm and a relative error of 9.2%. These results show the feasibility of using neural fields for monocular 3D reconstruction of laparoscopic scenes.
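The depth evaluation reduces to two standard error metrics, sketched below for depth maps in millimetres with a validity mask for pixels that have a reference depth.

```python
# Minimal sketch of the depth evaluation: mean absolute error (mm) and mean relative
# error between pseudo-depths and reference depths, over pixels with valid ground truth.
import numpy as np

def depth_errors(pred_depth_mm, gt_depth_mm):
    valid = gt_depth_mm > 0                      # ignore pixels without ground truth
    diff = np.abs(pred_depth_mm[valid] - gt_depth_mm[valid])
    mae = diff.mean()                            # the abstract reports ~2.9 mm
    rel = (diff / gt_depth_mm[valid]).mean()     # and ~9.2% relative error
    return mae, rel
```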
Abstract:Personalized 3D vascular models can aid in a range of diagnostic, prognostic, and treatment-planning tasks relevant to cardiovascular disease management. Deep learning provides a means to automatically obtain such models. Ideally, a user should have control over the exact region of interest (ROI) to be included in a vascular model, and the model should be watertight and highly accurate. To this end, we propose to combine a global controller, which leverages voxel mask segmentations to provide boundary conditions for vessels of interest, with a local, iterative vessel segmentation model. We introduce the preservation of scale and rotational symmetries in the local segmentation model, leading to generalisation to vessels of unseen sizes and orientations. Combined with the global controller, this enables flexible 3D vascular model building without additional retraining. We demonstrate the potential of our method on a dataset containing abdominal aortic aneurysms (AAAs). Our method performs on par with a state-of-the-art segmentation model in the segmentation of AAAs, iliac arteries and renal arteries, while providing a watertight, smooth surface segmentation. Moreover, we demonstrate that by adapting the global controller, we can easily extend vessel sections in the 3D model.
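Conceptually, the interplay between the global controller and the local model is an iterative tracking loop, sketched below; `local_model` and its outputs are hypothetical placeholders for the symmetry-preserving segmentation network, not the authors' API.

```python
# Minimal sketch of a local, iterative vessel segmentation loop seeded and terminated
# by a global controller. All names and interfaces are illustrative assumptions.
import numpy as np

def extract_patch(volume, centre, size=32):
    # Crop a cubic patch around the current centre (may be smaller near the border)
    c = np.round(centre).astype(int)
    lo = np.maximum(c - size // 2, 0)
    return volume[lo[0]:lo[0] + size, lo[1]:lo[1] + size, lo[2]:lo[2] + size]

def track_vessel(local_model, volume, seed, init_direction, step_mm=2.0, max_steps=500):
    centres = [np.asarray(seed, dtype=float)]
    direction = np.asarray(init_direction, dtype=float)
    contours = []
    for _ in range(max_steps):
        patch = extract_patch(volume, centres[-1])
        contour, direction, stop = local_model(patch, direction)  # local segmentation step
        contours.append(contour)
        if stop:                                                   # boundary condition reached
            break
        centres.append(centres[-1] + step_mm * direction)
    return centres, contours
```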
Abstract:Many anatomical structures can be described by surface or volume meshes. Machine learning is a promising tool to extract information from these 3D models. However, high-fidelity meshes often contain hundreds of thousands of vertices, which creates unique challenges in building deep neural network architectures. Furthermore, patient-specific meshes may not be canonically aligned, which limits the generalisation of machine learning algorithms. We propose LaB-GATr, a transformer neural network with geometric tokenisation that can effectively learn with large-scale (bio-)medical surface and volume meshes through sequence compression and interpolation. Our method extends the recently proposed geometric algebra transformer (GATr) and thus respects all Euclidean symmetries, i.e. rotation, translation and reflection, effectively mitigating the problem of canonical alignment between patients. LaB-GATr achieves state-of-the-art results on three tasks in cardiovascular hemodynamics modelling and neurodevelopmental phenotype prediction, featuring meshes of up to 200,000 vertices. Our results demonstrate that LaB-GATr is a powerful architecture for learning with high-fidelity meshes, which has the potential to enable interesting downstream applications. Our implementation is publicly available.
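The compression-and-interpolation step can be sketched as pooling vertex features into coarse tokens and copying processed tokens back to the vertices; the voxel-grid clustering used here is only an illustrative stand-in for LaB-GATr's geometric tokenisation.

```python
# Minimal sketch of sequence compression for large meshes: cluster vertices, pool
# features into one token per cluster, and interpolate tokens back to full resolution.
import torch

def compress(features, positions, voxel=5.0):
    # Assign every vertex to a coarse grid cell and average features per cell
    cell = torch.unique(torch.floor(positions / voxel), dim=0, return_inverse=True)[1]
    n_tok = int(cell.max()) + 1
    tokens = torch.zeros(n_tok, features.shape[1]).index_add_(0, cell, features)
    counts = torch.zeros(n_tok, 1).index_add_(0, cell, torch.ones(len(cell), 1))
    return tokens / counts, cell       # (n_tokens, d) token features, vertex-to-token map

def interpolate(tokens, cell):
    # Copy each processed token back to all vertices in its cluster
    return tokens[cell]
```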