Abstract: The integration of machine learning into magnetic resonance imaging (MRI), specifically in neuroimaging, is proving highly effective, improving diagnostic accuracy, accelerating image analysis, and yielding data-driven insights that can potentially transform patient care. Deep learning models use multiple layers of processing to capture intricate details of complex data and can be applied to a variety of tasks, including brain tumor classification, segmentation, image synthesis, and registration. Previous research demonstrates high accuracy in tumor segmentation using various model architectures, including nnU-Net and Swin-UNet. U-Mamba, which uses state space modeling, also achieves high accuracy in medical image segmentation. Building on these models, we propose a deep learning framework that ensembles these state-of-the-art architectures to achieve accurate segmentation and produce finely synthesized images.
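To make the ensembling idea concrete, here is a minimal sketch of one plausible combination scheme, averaging per-voxel class probabilities across member models; the member classes and the averaging rule are assumptions, since the abstract does not specify how the architectures are combined.

```python
# Minimal sketch of a probability-averaging ensemble for segmentation.
# The member models and the averaging scheme are assumptions; the abstract
# does not state how nnU-Net, Swin-UNet, and U-Mamba outputs are fused.
import torch
import torch.nn as nn


class SegmentationEnsemble(nn.Module):
    """Averages per-voxel class probabilities from several segmentation models."""

    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)

    @torch.no_grad()
    def forward(self, x):
        # Each member returns logits of shape (B, C, ...); average their softmax outputs.
        probs = torch.stack([m(x).softmax(dim=1) for m in self.models])
        return probs.mean(dim=0)


# Usage with placeholder members (real nnU-Net / Swin-UNet / U-Mamba models
# would be plugged in here):
# ensemble = SegmentationEnsemble([nnunet, swin_unet, u_mamba])
# mask = ensemble(volume).argmax(dim=1)
```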
Abstract: Lithium metal batteries (LMBs) have the potential to be the next-generation battery system because of their high theoretical energy density. However, defects known as dendrites form through heterogeneous lithium (Li) plating and hinder the development and utilization of LMBs. Non-destructive studies of dendrite morphology often use X-ray computed tomography (XCT) imaging to provide cross-sectional views. To retrieve the three-dimensional structures inside a battery, image segmentation is essential for quantitative analysis of XCT images. This work proposes a new binary semantic segmentation approach using a transformer-based neural network (T-Net) model capable of segmenting dendrites out of XCT data. In addition, we compare the performance of the proposed T-Net with three other algorithms, U-Net, Y-Net, and E-Net (an ensemble network model), for XCT analysis. Our results show the advantages of T-Net both quantitatively, in terms of metrics such as mean Intersection over Union (mIoU) and mean Dice Similarity Coefficient (mDSC), and qualitatively, through several comparative visualizations.
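For reference, the two reported metrics can be computed for binary dendrite masks as below; this is a minimal sketch, and the smoothing constant used to avoid division by zero is an assumption.

```python
# Minimal sketch of the two reported metrics for binary masks: Intersection
# over Union (IoU) and Dice Similarity Coefficient (DSC). Averaging over a
# set of masks gives mIoU/mDSC; the eps smoothing constant is an assumption.
import numpy as np


def iou(pred, target, eps=1e-7):
    """IoU = |P ∩ T| / |P ∪ T| for boolean arrays."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)


def dice(pred, target, eps=1e-7):
    """DSC = 2|P ∩ T| / (|P| + |T|) for boolean arrays."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


# mIoU / mDSC over predicted and ground-truth dendrite masks:
# miou = np.mean([iou(p, t) for p, t in zip(preds, targets)])
# mdsc = np.mean([dice(p, t) for p, t in zip(preds, targets)])
```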
Abstract: In computational fluid dynamics, simulating the aerodynamic behavior of designs such as jets, spacecraft, or gas turbine engines often takes days or weeks. One of the biggest open problems in the field is how to simulate such systems much more quickly with sufficient accuracy. Many approaches have been tried: some involve models of the underlying physics, while others are model-free and make predictions based only on existing simulation data. However, all previous approaches have severe shortcomings or limitations. We present a novel approach: we reformulate the prediction problem to effectively increase the size of the otherwise tiny datasets, and we introduce a new neural network architecture (called a cluster network) with an inductive bias well suited to fluid dynamics problems. Compared to state-of-the-art model-based approximations, our approach is nearly as accurate, an order of magnitude faster, and vastly easier to apply. Moreover, it outperforms previous model-free approaches.
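The abstract does not describe the reformulation itself; as a purely illustrative sketch (not the paper's method), one common way to enlarge a tiny simulation dataset is to treat each mesh point of each simulation as its own training example, as below. All names, shapes, and sizes here are hypothetical.

```python
# Illustrative sketch only (an assumption, not the paper's reformulation):
# treating every mesh point of every simulation as a separate training sample
# turns N simulations with M points each into N * M examples instead of N.
import numpy as np

n_sims, n_points, n_feat = 8, 50_000, 5               # hypothetical sizes
features = np.random.rand(n_sims, n_points, n_feat)   # stand-in for per-point inputs
targets = np.random.rand(n_sims, n_points)            # stand-in for per-point field values

# Flatten the simulation axis into the sample axis: 8 -> 400,000 examples.
X = features.reshape(n_sims * n_points, n_feat)
y = targets.reshape(n_sims * n_points)
print(X.shape, y.shape)  # (400000, 5) (400000,)
```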