Abstract: Multi-class cell segmentation in high-resolution gigapixel whole slide images (WSIs) is crucial for various clinical applications. However, training such models typically requires labor-intensive, pixel-wise annotations from domain experts. Recent efforts have democratized this process by involving lay annotators without medical expertise, yet conventional non-agent-based approaches struggle to handle the resulting annotation noise adaptively, as they lack mechanisms to mitigate false positives (FP) and false negatives (FN) at both the image-feature and pixel levels. In this paper, we propose a consensus-aware self-corrective AI agent that leverages a Consensus Matrix to guide its learning process. The Consensus Matrix defines regions where the AI and the annotators agree on cell and non-cell labels; these regions are prioritized with stronger supervision. Conversely, areas of disagreement are adaptively weighted based on their feature similarity to high-confidence agreement regions, with more similar regions receiving greater attention. Additionally, contrastive learning is employed to separate the features of noisy regions from those of reliable agreement regions by maximizing their dissimilarity. This paradigm enables the AI to iteratively refine noisy labels, enhancing its robustness. Validated on one real-world lay-annotated cell dataset and two simulated noisy datasets, our method demonstrates improved segmentation performance, effectively correcting FP and FN errors and showcasing its potential for training robust models on noisy datasets. The official implementation and cell annotations are publicly available at https://github.com/ddrrnn123/CASC-AI.
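The core consensus idea can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function name and the fixed disagreement weight are hypothetical, and the actual method replaces the constant with an adaptive weight derived from feature similarity to agreement regions.

```python
import numpy as np

def consensus_weights(ai_pred, annotation, disagree_weight=0.3):
    """Return a per-pixel loss-weight map (illustrative sketch).

    Pixels where the AI prediction and the lay annotation agree
    (both cell or both background) form the Consensus Matrix and
    receive full weight 1.0; disagreement pixels receive a smaller
    weight. In the paper this weight is adaptive (based on feature
    similarity to agreement regions); here it is a fixed constant.
    """
    agree = (ai_pred == annotation)              # consensus matrix
    return np.where(agree, 1.0, disagree_weight)

# Toy 2x2 masks: 1 = cell, 0 = background.
ai = np.array([[1, 0], [1, 1]])
lay = np.array([[1, 1], [0, 1]])
w = consensus_weights(ai, lay)
# Agreement at (0,0) and (1,1) -> weight 1.0; disagreement -> 0.3.
```

A weight map like this can multiply a per-pixel segmentation loss so that high-confidence agreement regions dominate training while noisy disagreement regions contribute less.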
Abstract: In recent years, deep learning technology has developed rapidly, and the application of deep neural networks to medical image processing has become a major focus of research. This paper achieves needle position detection in medical retinal surgery by adopting a YOLOv5-based object detection algorithm as the base deep neural network model. State-of-the-art needle detection approaches for medical surgery mainly focus on needle structure segmentation. Instead of segmenting the needle, the proposed method also estimates the needle angle during the detection process. The approach further adopts a novel classification scheme based on the different positions of the needle to improve the model. The experiments demonstrate that the proposed network can accurately detect the needle position and measure the needle angle. The proposed method achieves an average Euclidean distance of 4.80 between the detected tip position and the actual tip position, and an average tip-angle error of 0.85 degrees across all test sets.
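The two reported metrics can be sketched as follows. This is a minimal illustration of how such metrics are commonly computed, not the paper's evaluation code; the function names and the angle-wrapping convention are assumptions.

```python
import math

def tip_distance(pred_xy, true_xy):
    """Euclidean distance between detected and actual tip positions."""
    return math.hypot(pred_xy[0] - true_xy[0], pred_xy[1] - true_xy[1])

def angle_error_deg(pred_deg, true_deg):
    """Absolute angular error, wrapped so that e.g. 359 deg vs 1 deg
    gives 2 deg rather than 358 deg."""
    diff = abs(pred_deg - true_deg) % 360.0
    return min(diff, 360.0 - diff)

d = tip_distance((103.0, 54.0), (100.0, 50.0))  # 3-4-5 triangle -> 5.0
e = angle_error_deg(359.2, 0.5)                 # wraps to about 1.3
```

Averaging these two quantities over a test set yields summary numbers of the kind the abstract reports (average tip distance and average angle error).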