Abstract: Objective: Recognizing retinal fundus vessel abnormalities is vital to the early diagnosis of ophthalmological diseases and cardiovascular events. However, segmentation results are highly influenced by elusive thin vessels. In this work, we present a synthetic network, including a symmetric equilibrium generative adversarial network (SEGAN), multi-scale features refine blocks (MSFRB), and an attention mechanism (AM), to enhance vessel segmentation performance, especially for thin vessels. Method: The proposed network is granted powerful multi-scale representation capability. First, SEGAN is proposed to construct a symmetric adversarial architecture, which forces the generator to produce more realistic images with local details. Second, MSFRB are devised to prevent high-resolution features from being obscured, thereby preserving multi-scale features. Finally, the AM is employed to encourage the network to concentrate on discriminative features. Results: On the public datasets DRIVE, STARE, and CHASEDB1, we evaluate our network quantitatively and compare it with state-of-the-art works. The ablation experiment shows that SEGAN, MSFRB, and AM all contribute to the desirable performance of our network. Conclusion: The proposed network outperforms other strategies and effectively segments elusive vessels, achieving the highest scores in Sensitivity, G-Mean, Precision, and F1-Score while remaining at the top level in other metrics. Significance: The appreciable performance and high computational efficiency offer great potential for clinical application in retinal vessel segmentation. Meanwhile, the network could be used to extract detailed information on other biomedical problems.
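The abstract states only that MSFRB keep high-resolution features from being obscured while preserving multi-scale features; the layer layout below is an illustrative assumption, not the paper's exact design. This minimal PyTorch sketch shows one way to realize that idea: a full-resolution branch runs in parallel with a downsampled branch, and the two are concatenated rather than summed so fine thin-vessel detail is not averaged away. The class name `MSFRBSketch` and all hyperparameters are hypothetical.

```python
# Minimal sketch of a multi-scale feature refine block in the spirit of MSFRB.
# Assumption: two parallel scales fused by concatenation; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSFRBSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Full-resolution branch: preserves fine detail such as thin vessels.
        self.fine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Downsampled branch: captures coarser, larger-scale context.
        self.coarse = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution fuses the concatenated scales back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fine = self.fine(x)
        # Process at half resolution, then restore the original spatial size.
        down = F.avg_pool2d(x, 2)
        coarse = F.interpolate(self.coarse(down), size=x.shape[-2:],
                               mode="bilinear", align_corners=False)
        # Concatenation keeps the high-resolution features as a separate
        # group instead of blending them into the coarse response.
        return self.fuse(torch.cat([fine, coarse], dim=1))
```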
Abstract: Research on recognizing the most discriminative regions provides referential information for weakly supervised object localization with only image-level annotations. However, the most discriminative regions usually conceal the other parts of the object, thereby impeding recognition and localization of the entire object. To tackle this problem, the Dual-attention Focused Module (DFM) is proposed to enhance object localization performance. Specifically, we present a dual-attention module for information fusion, consisting of a position branch and a channel branch. In each branch, the input feature map is transformed into an enhancement map and a mask map, thereby highlighting the most discriminative parts or hiding them. For the position mask map, we introduce a focused matrix to enhance it, which exploits the principle that the pixels of an object are continuous. Between the two branches, the enhancement map is integrated with the mask map, aiming to partially compensate for the lost information and to diversify the features. With the dual-attention module and the focused matrix, the entire object region can be precisely recognized from implicit information. We demonstrate the outperforming results of DFM in experiments. In particular, DFM achieves state-of-the-art localization accuracy on ILSVRC 2016 and CUB-200-2011.
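To make the enhancement/mask idea concrete, here is a minimal PyTorch sketch of what the position branch could look like: one map highlights the most discriminative pixels (enhancement) and a complementary map hides them (mask), pushing the classifier to attend to the rest of the object. The sigmoid gating, the fixed hiding threshold, and the box-filter stand-in for the focused matrix are all assumptions for illustration; the paper's exact formulation may differ, and `PositionBranchSketch` is a hypothetical name.

```python
# Minimal sketch of a position attention branch producing an enhancement map
# and a mask map, in the spirit of DFM. Assumptions: sigmoid saliency gating,
# a hard threshold for hiding, and a 3x3 box filter as the "focused matrix".
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionBranchSketch(nn.Module):
    def __init__(self, channels: int, hide_thresh: float = 0.7):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, 1)  # per-pixel saliency score
        self.hide_thresh = hide_thresh

    def forward(self, x: torch.Tensor):
        a = torch.sigmoid(self.attn(x))  # (B, 1, H, W) saliency in [0, 1]
        # Focused-matrix stand-in (assumption): smooth the saliency with a
        # box filter, exploiting the continuity of object pixels to fill
        # small gaps in the attended region.
        focused = F.avg_pool2d(a, kernel_size=3, stride=1, padding=1)
        # Enhancement map: amplify the most discriminative positions.
        enhance = x * focused
        # Mask map: zero out those same positions so less discriminative
        # object parts must carry the classification signal.
        mask = x * (focused < self.hide_thresh).float()
        return enhance, mask
```

The channel branch would follow the same enhance/hide pattern over channel statistics rather than spatial positions, and the cross-branch fusion described above would then combine one branch's enhancement map with the other branch's mask map.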