Abstract: In this paper, we present OpenMedIA, an open-source toolbox library containing a rich set of deep learning methods for medical image analysis under heterogeneous Artificial Intelligence (AI) computing platforms. Various medical image analysis methods, including 2D/3D medical image classification, segmentation, localisation, and detection, are included in the toolbox, with PyTorch and/or MindSpore implementations under heterogeneous NVIDIA and Huawei Ascend computing systems. To the best of our knowledge, OpenMedIA is the first open-source algorithm library providing PyTorch and MindSpore implementations for direct comparison.
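The dual-framework claim above can be illustrated with a toy example. The sketch below is not taken from the OpenMedIA codebase; it only shows the general pattern of pairing a PyTorch implementation (NVIDIA GPUs) with a MindSpore implementation (Huawei Ascend) of the same small classifier. All class names, layer sizes, and the assumed 32x32 single-channel input are illustrative assumptions, not the library's API.

```python
import torch.nn as nn
import mindspore.nn as msnn


class TinyNetTorch(nn.Module):
    """Toy 2D classifier, PyTorch flavour (assumes 1x32x32 inputs)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # explicit zero padding
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Linear(8 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))


class TinyNetMindSpore(msnn.Cell):
    """Same toy classifier, MindSpore flavour (runs on Ascend or GPU)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = msnn.SequentialCell([
            # MindSpore's Conv2d uses pad_mode instead of an integer padding argument
            msnn.Conv2d(1, 8, kernel_size=3, pad_mode="same"),
            msnn.ReLU(),
            msnn.MaxPool2d(kernel_size=2, stride=2),
        ])
        self.flatten = msnn.Flatten()
        self.head = msnn.Dense(8 * 16 * 16, num_classes)

    def construct(self, x):
        x = self.features(x)
        return self.head(self.flatten(x))
```

Keeping the two definitions structurally parallel like this is what makes side-by-side accuracy and speed comparisons across the two hardware stacks straightforward.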
Abstract: As an important scan plane, the four-chamber view is routinely acquired in both second-trimester perinatal screening and fetal echocardiographic examinations. The biometrics in this plane, including the cardio-thoracic ratio (CTR) and the cardiac axis, are usually measured by sonographers for diagnosing congenital heart disease. However, due to commonly occurring artifacts such as acoustic shadowing, traditional manual measurements not only suffer from low efficiency but also yield inconsistent results that depend on the operator's skill. In this paper, we present an anchor-free ellipse detection network, namely EllipseNet, which detects the cardiac and thoracic regions as ellipses and automatically calculates the CTR and cardiac axis for fetal cardiac biometrics in the four-chamber view. In particular, we formulate the network to detect the center of each object as a point and to regress the ellipse parameters simultaneously. We define an intersection-over-union loss to further regulate the regression procedure. We evaluate EllipseNet on a clinical echocardiogram dataset with more than 2000 subjects. Experimental results show that the proposed framework outperforms several state-of-the-art methods. Source code will be available at https://git.openi.org.cn/capepoint/EllipseNet .
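Once the cardiac and thoracic ellipses are available, the two biometrics follow from elementary geometry. The abstract does not spell out the exact definitions EllipseNet uses, so the sketch below assumes a (cx, cy, a, b, theta) parameterisation, an area-based CTR, and a cardiac axis measured against the thoracic anteroposterior direction; all three are assumptions for illustration, not the paper's definitions.

```python
import math

# An ellipse is assumed to be (cx, cy, a, b, theta):
# center, semi-major axis a, semi-minor axis b, major-axis angle theta in degrees.


def ellipse_area(a: float, b: float) -> float:
    """Area of an ellipse with semi-axes a and b."""
    return math.pi * a * b


def cardio_thoracic_ratio(cardiac, thoracic) -> float:
    """Area-based CTR; some clinical protocols use diameter or circumference ratios instead."""
    _, _, ca, cb, _ = cardiac
    _, _, ta, tb, _ = thoracic
    return ellipse_area(ca, cb) / ellipse_area(ta, tb)


def cardiac_axis_deg(cardiac_theta: float, thoracic_ap_theta: float) -> float:
    """Unsigned angle (degrees) between the cardiac long axis and the thoracic
    anteroposterior reference direction, folded into [0, 90]."""
    diff = abs(cardiac_theta - thoracic_ap_theta) % 180.0
    return min(diff, 180.0 - diff)


# Example with made-up detections (pixel units):
heart = (120.0, 140.0, 60.0, 40.0, 45.0)
thorax = (128.0, 150.0, 110.0, 95.0, 90.0)
print(cardio_thoracic_ratio(heart, thorax))   # ~0.23
print(cardiac_axis_deg(heart[4], thorax[4]))  # 45.0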
Abstract: Despite tremendous efforts, it remains very challenging to build a robust model to assist in the accurate quantitative assessment of COVID-19 on chest CT images. Because lesion boundaries are blurred, supervised segmentation methods usually suffer from annotation bias. To support unbiased lesion localisation and to minimise labelling costs, we propose a data-driven framework supervised by only image-level labels. The framework can explicitly separate potential lesions from the original images with the help of a generative adversarial network and a lesion-specific decoder. Experiments on two COVID-19 datasets demonstrate the effectiveness of the proposed framework and its superior performance over several existing methods.
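One plausible reading of such an image-level-supervised pipeline is sketched below: a lesion-specific decoder predicts an additive lesion component, subtracting it yields a pseudo-healthy image, and a discriminator pushes that pseudo-healthy image toward the distribution of negative scans while negatives are constrained to produce empty lesion maps. The modules (`encoder`, `lesion_decoder`, `discriminator`), loss terms, and weights are assumptions for illustration, not the authors' exact architecture or objective.

```python
import torch
import torch.nn.functional as F


def weak_supervision_step(encoder, lesion_decoder, discriminator,
                          images, labels, opt_gen, opt_disc):
    """One training step; labels is a (B,) tensor, 1 = positive scan, 0 = negative."""
    feats = encoder(images)
    lesion = lesion_decoder(feats)        # predicted lesion component (B, C, H, W)
    pseudo_healthy = images - lesion      # image with potential lesions removed

    # --- discriminator: real negative scans vs. generated pseudo-healthy images ---
    opt_disc.zero_grad()
    d_fake = discriminator(pseudo_healthy.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    real_neg = images[labels == 0]
    if real_neg.numel():
        d_real = discriminator(real_neg)
        loss_d = loss_d + F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    loss_d.backward()
    opt_disc.step()

    # --- generator side: fool the discriminator, keep lesion maps sparse,
    #     and force (near-)empty lesion maps on negative scans ---
    opt_gen.zero_grad()
    d_fake = discriminator(pseudo_healthy)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_sparse = lesion.abs().mean()
    neg_mask = (labels == 0).float().view(-1, 1, 1, 1)
    loss_neg = (lesion.abs() * neg_mask).mean()
    loss_g = loss_adv + 0.1 * loss_sparse + 1.0 * loss_neg   # weights are illustrative
    loss_g.backward()
    opt_gen.step()
    return loss_d.item(), loss_g.item()
```

At inference time, the predicted lesion component itself serves as the localisation map, which is how image-level supervision can stand in for pixel-level annotations in this kind of design.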