Abstract: Foundation models, such as OpenAI's GPT-3 and GPT-4, Meta's LLaMA, and Google's PaLM 2, have revolutionized the field of artificial intelligence. A notable paradigm shift has been the advent of the Segment Anything Model (SAM), which, trained on 1 billion masks and 11 million images, has exhibited a remarkable capability to segment real-world objects. Although SAM excels at general object segmentation, it lacks an intrinsic ability to detect salient objects, resulting in suboptimal performance in that domain. To address this challenge, we present the Segment Salient Object Model (SSOM), an approach that adaptively fine-tunes SAM for salient object detection by harnessing the low-rank structure inherent in deep learning. Comprehensive qualitative and quantitative evaluations on five challenging RGB benchmark datasets demonstrate the superior performance of our approach, which surpasses state-of-the-art methods.
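The abstract gives no implementation details, but the "low-rank structure" it mentions suggests a LoRA-style adapter on SAM's frozen weights. The following PyTorch snippet is a minimal sketch of that general idea, assuming a frozen linear projection (e.g., inside an attention block of SAM's image encoder) is augmented with trainable low-rank factors; the class name, rank, and scaling are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update: W x + (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # keep the pretrained weights frozen
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus a scaled low-rank correction learned for the new task.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Illustrative usage on a hypothetical 256-d projection layer:
proj = nn.Linear(256, 256)
adapted = LoRALinear(proj, rank=8)
tokens = torch.randn(2, 196, 256)
out = adapted(tokens)   # only lora_A and lora_B receive gradients during fine-tuning
```

Because only the two small factor matrices are trained, an adapter like this keeps the fine-tuning cost far below that of updating the full SAM backbone.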
Abstract: Computer vision and machine learning are playing an increasingly important role in computer-assisted diagnosis; however, the application of deep learning to medical imaging faces challenges of limited data availability and data imbalance, and it is especially important that models for medical imaging are trustworthy. We therefore propose TRUDLMIA, a trustworthy deep learning framework for medical image analysis that adopts a modular design, leverages self-supervised pre-training, and uses a novel surrogate loss function. Experimental evaluations indicate that models built with the framework are both trustworthy and high-performing. We anticipate that the framework will support researchers and clinicians in advancing the use of deep learning for public health crises, including COVID-19.
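The abstract describes the framework only at the level of its modules (self-supervised pre-training, a task head, and a swappable surrogate loss). The skeleton below is a hypothetical illustration of that pipeline shape in PyTorch; the ResNet-50 backbone, feature dimension, optimizer, and function names are placeholders, not details from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Module 1: encoder pre-trained with a self-supervised objective on unlabeled images;
# a ResNet-50 stands in here for whatever backbone the framework actually uses.
def build_encoder() -> nn.Module:
    backbone = models.resnet50(weights=None)
    backbone.fc = nn.Identity()          # expose 2048-d features instead of logits
    return backbone

# Module 2: lightweight task head fine-tuned on the (small, imbalanced) labeled set.
class Classifier(nn.Module):
    def __init__(self, encoder: nn.Module, num_classes: int = 2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(2048, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# Module 3: fine-tuning loop that takes the surrogate loss as a callable,
# so the loss can be swapped without touching the rest of the pipeline.
def finetune(model, loader, surrogate_loss, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = surrogate_loss(model(images), labels)
            loss.backward()
            opt.step()
    return model
```

Keeping the loss as an injected callable is one way to realize the modularity the abstract emphasizes: the same pre-trained encoder can be reused while the objective is varied.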
Abstract: Effective representation learning is key to improving model performance in medical image analysis. When training deep learning models, a compromise must often be made between performance and trust, both of which are essential for medical applications. Moreover, models optimized with cross-entropy loss tend to suffer from unwarranted overconfidence on the majority class and over-cautiousness on the minority class. In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients using radiography images. In addition, we adopt a new quantification score to measure a model's trustworthiness. Ablation studies on both performance and trust are conducted across feature-learning methods and loss functions. The comparisons show that applying the new surrogate loss to self-supervised models produces label-efficient networks that are both high-performing and trustworthy.
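Neither the surrogate loss nor the trustworthiness score is defined in the abstract. As a sketch of the general pattern only, the snippet below pairs a focal-style surrogate (which down-weights easy, typically majority-class examples relative to plain cross-entropy) with a simple confidence-on-the-correct-class proxy for trust; both are stand-ins I am assuming for illustration, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def focal_surrogate(logits: torch.Tensor, labels: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Focal-style stand-in for a surrogate loss: easy examples contribute less,
    so confident mistakes (often on the minority class) dominate the gradient."""
    ce = F.cross_entropy(logits, labels, reduction="none")
    pt = torch.exp(-ce)                      # model's probability for the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def trust_proxy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Simple trust proxy (not the paper's quantification score): mean probability
    assigned to the correct class, rewarding confident-and-right predictions and
    penalizing confident-and-wrong ones."""
    probs = torch.softmax(logits, dim=1)
    return probs.gather(1, labels.unsqueeze(1)).mean()

# Illustrative usage on random data:
logits = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
print(float(focal_surrogate(logits, labels)), float(trust_proxy(logits, labels)))
```

A loss of this shape can be dropped directly into the fine-tuning loop sketched after the previous abstract.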
Abstract: Objective: Skin stretching around forehead wrinkles is an important technique in rhytidectomy, and proper parameters are required to evaluate the surgical effect. In this paper, a simulation method was proposed to obtain these parameters. Methods: Three-dimensional point-cloud data with a resolution of 50 μm were employed. First, a smooth supporting contour under the wrinkled forehead was generated via B-spline interpolation and extrapolation to constrain the deformation of the wrinkled zone. Then, based on the vector form intrinsic finite element (VFIFE) algorithm, the deformation of the wrinkled forehead skin during stretching was simulated in MATLAB. Finally, the stress distribution and the residual wrinkles of the forehead skin were used to evaluate the surgical effect. Results: Although the residual wrinkles are similar when the forehead wrinkles are finitely stretched, the stress distribution changes greatly. This indicates that the stress distribution in the skin is an effective measure of the surgical effect and that forehead wrinkles are easily overstretched, which may lead to skin injury. Conclusion: The simulation method can predict the stress distribution and residual wrinkles after forehead wrinkle stretching surgery, and it can potentially be used to control the surgical process and further reduce the risk of skin injury.
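The abstract's first step (fitting a smooth supporting contour under the wrinkled surface by B-spline interpolation and extrapolation) can be sketched independently of the VFIFE solver. The Python/SciPy example below is an assumed analogue of that MATLAB step on a synthetic 1-D forehead profile; the data, smoothing factor, and residual measure are invented for illustration, and the VFIFE deformation and stress computation are omitted entirely.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Hypothetical cross-section sampled from the point cloud:
# x in mm along the profile, z the measured height including wrinkle ripples (mm).
x = np.linspace(0.0, 40.0, 200)
z = 0.5 * np.sin(0.15 * x) + 0.08 * np.sin(3.0 * x)

# Fit a smoothing cubic B-spline so the fine wrinkle oscillations are filtered out;
# the smoothing factor s trades fidelity for smoothness and is purely illustrative.
tck = splrep(x, z, k=3, s=1.0)

# Evaluate inside the data range (interpolation) and slightly beyond it
# (extrapolation) to obtain a smooth supporting contour around the wrinkled zone.
x_eval = np.linspace(-2.0, 42.0, 300)
support = splev(x_eval, tck)

# The wrinkle height relative to the supporting contour is what a stretching
# simulation would try to flatten; here it is reported only as a residual measure.
residual = z - splev(x, tck)
print("max residual wrinkle height (mm):", float(np.abs(residual).max()))
```

In the paper's pipeline this supporting contour serves as the constraint surface for the subsequent VFIFE stretching simulation, which is where the stress distribution used to judge the surgical effect is computed.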