Abstract: Several physics and engineering applications involve the solution of a minimisation problem to compute an approximation of the input signal. Modern computing hardware and software apply high-performance computing to solve these problems and considerably reduce the execution time. We compare and analyse different minimisation methods in terms of functional computation, convergence, execution time, and scalability for the solution of two minimisation problems (i.e., approximation and denoising) with different constraints that involve computationally expensive operations. These problems are attractive due to their numerical and analytical properties, and our general analysis can be extended to most signal-processing problems. We perform our tests on the Cineca Marconi100 cluster, ranked 26th in the TOP500 list. Our experimental results show that PRAXIS is the best optimiser in terms of minima computation: the efficiency of the approximation is 38% with 256 processes, while the denoising reaches 46% with 32 processes.
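A minimal sketch of the kind of minimisation problem described above, assuming a simple Tikhonov-regularised denoising energy; SciPy's derivative-free Powell method is used here only as a stand-in for PRAXIS (which is available, e.g., through NLopt). The signal, the regularisation weight, and the tolerance are illustrative assumptions, not the paper's setup.

```python
# Sketch (not the paper's code): a denoising functional
#   E(x) = ||x - s||^2 + lam * ||D x||^2,  D = first-order finite differences,
# minimised with a derivative-free optimiser.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
s = np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.1 * rng.standard_normal(64)  # noisy input signal (assumed)
lam = 5.0                                                                  # regularisation weight (assumed)

def energy(x):
    fidelity = np.sum((x - s) ** 2)        # data-attachment term
    smoothness = np.sum(np.diff(x) ** 2)   # penalises oscillations of the reconstruction
    return fidelity + lam * smoothness

# Powell is a PRAXIS-like derivative-free method available in SciPy.
res = minimize(energy, x0=s.copy(), method="Powell", options={"xtol": 1e-6})
print("E(x*) =", res.fun)
```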
Abstract: Several applications require the super-resolution of noisy images and the preservation of geometrical and texture features. State-of-the-art super-resolution methods do not account for noise and generally amplify artefacts (e.g., aliasing, blurring) in the output image. We propose a learning-based method that accounts for the presence of noise and preserves the properties of the input image, as measured by quantitative metrics (e.g., normalised cross-correlation, normalised mean squared error, peak signal-to-noise ratio, structural similarity, feature-based similarity, universal image quality). We train our network to up-sample a low-resolution noisy image while preserving its properties. We perform our tests on the Cineca Marconi100 cluster, ranked 26th in the TOP500 list. The experimental results show that our method outperforms learning-based methods, achieves results comparable with standard methods, preserves properties of the input image such as contours, brightness, and textures, and reduces artefacts. In terms of average quantitative metrics, our method achieves a PSNR of 23.81 on the super-resolution of Gaussian-noise images with a 2X up-sampling factor, while previous work achieves a PSNR of 23.09 (standard method) and 21.78 (learning-based method). Our learning-based, quality-preserving super-resolution improves the high-resolution prediction of noisy images with respect to state-of-the-art methods across different noise types and up-sampling factors.
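As an illustration of the quantitative evaluation listed above, the following sketch computes PSNR, SSIM, normalised MSE, and normalised cross-correlation with scikit-image and NumPy; `hr` and `sr` are placeholder arrays standing in for a ground-truth and a super-resolved image, and FSIM and UIQ are omitted because they are not provided by scikit-image.

```python
# Sketch of the metric computation, assuming float images in [0, 1].
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             normalized_root_mse)

def evaluate(hr, sr):
    # Normalised cross-correlation computed directly from the definition.
    ncc = np.mean((hr - hr.mean()) * (sr - sr.mean())) / (hr.std() * sr.std())
    return {
        "PSNR": peak_signal_noise_ratio(hr, sr, data_range=1.0),
        "SSIM": structural_similarity(hr, sr, data_range=1.0),
        "NMSE": normalized_root_mse(hr, sr) ** 2,
        "NCC": ncc,
    }

# Synthetic placeholders for a real HR image and its SR prediction.
rng = np.random.default_rng(0)
hr = rng.random((128, 128))
sr = np.clip(hr + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)
print(evaluate(hr, sr))
```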
Abstract: We propose a novel deep-learning framework for the super-resolution of ultrasound images and videos in terms of spatial resolution and line reconstruction. We up-sample the acquired low-resolution image through a vision-based interpolation method; then, we train a learning-based model to improve the quality of the up-sampling. We qualitatively and quantitatively test our model on images of different anatomical districts (e.g., cardiac, obstetric) and with different up-sampling resolutions (i.e., 2X, 4X). Our method improves the median PSNR with respect to state-of-the-art methods by $1.7\%$ on obstetric 2X raw images, $6.1\%$ on cardiac 2X raw images, and $4.4\%$ on abdominal 4X raw images; it also increases the number of pixels with a low prediction error by $9.0\%$ on obstetric 4X raw images, $5.2\%$ on cardiac 4X raw images, and $6.2\%$ on abdominal 4X raw images. The proposed method is then applied to the spatial super-resolution of 2D videos, by optimising the sampling of the lines acquired by the probe in terms of the acquisition frequency. Our method specialises the trained networks to predict the high-resolution target through the design of the network architecture and the loss function, taking into account the anatomical district and the up-sampling factor and exploiting a large ultrasound data set. The use of deep learning on large data sets overcomes the limitations of vision-based algorithms, which are general and do not encode the characteristics of the data. Furthermore, the data set can be enriched with images selected by medical experts to further specialise the individual networks. Through learning and high-performance computing, our super-resolution is specialised to different anatomical districts by training multiple networks. Furthermore, the computational demand is shifted to centralised hardware resources, while the network's prediction runs in real time on local devices.
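The two-stage idea described above (interpolation followed by a learned refinement) can be sketched in PyTorch as follows; the architecture, layer sizes, and the 2X factor are illustrative assumptions and do not reproduce the paper's network.

```python
# Sketch: bicubic up-sampling of a low-resolution frame, then a small
# residual CNN that refines the interpolated image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRRefiner(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, lr, scale=2):
        up = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
        return up + self.body(up)   # residual correction of the interpolated image

lr = torch.rand(1, 1, 64, 64)        # placeholder low-resolution ultrasound frame
hr_pred = SRRefiner()(lr, scale=2)   # (1, 1, 128, 128) high-resolution prediction
```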
Abstract: Ultrasound images are widespread in the medical diagnosis of musculoskeletal, cardiac, and obstetric diseases, due to the efficiency and non-invasiveness of the acquisition methodology. However, ultrasound acquisition introduces speckle noise in the signal, which corrupts the resulting image and affects both further processing operations and the visual analysis that medical experts perform to assess patient diseases. Our main goal is to define a universal deep-learning framework for the real-time denoising of ultrasound images. We analyse and compare state-of-the-art methods for the smoothing of ultrasound images (e.g., spectral, low-rank, and deep-learning denoising algorithms), in order to select the best one in terms of accuracy, preservation of anatomical features, and computational cost. Then, we propose a tuned version of the selected state-of-the-art denoising method (i.e., WNNM) to improve the quality of the denoised images and extend its applicability to ultrasound images. To handle large data sets of ultrasound images with respect to application and industrial requirements, we introduce a denoising framework that exploits deep learning and HPC tools and allows us to replicate the results of state-of-the-art denoising methods with real-time execution.
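The training idea behind such a framework can be sketched as follows, under the assumption that a compact residual CNN is fitted to reproduce the output of the reference denoiser (e.g., WNNM) so that inference runs in real time; `speckled` and `wnnm_target` are hypothetical tensors standing in for a real ultrasound batch and its pre-computed WNNM-denoised counterpart, and the network depth is an illustrative choice.

```python
# Sketch: a DnCNN-like residual denoiser trained to mimic a reference denoiser.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.net(x)   # the network predicts the speckle residual

model = ResidualDenoiser()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

speckled = torch.rand(8, 1, 64, 64)      # placeholder noisy ultrasound batch
wnnm_target = torch.rand(8, 1, 64, 64)   # placeholder WNNM-denoised targets

loss = loss_fn(model(speckled), wnnm_target)   # one illustrative training step
loss.backward()
optimiser.step()
```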