Abstract: Background: The 2024 Mpox outbreak, particularly severe in Africa with the emergence of clade 1b, has highlighted critical gaps in diagnostic capabilities in resource-limited settings. This study aimed to develop and validate an artificial intelligence (AI)-driven, on-device screening tool for Mpox, designed to function offline in low-resource environments. Methods: We developed a YOLOv8n-based deep learning model trained on 2,700 images (900 each of Mpox, other skin conditions, and normal skin), including synthetic data. The model was validated on 360 images and tested on 540 images. A larger external validation was conducted using 1,500 independent images. Performance metrics included accuracy, precision, recall, F1-score, sensitivity, and specificity. Findings: The model demonstrated high accuracy (96%) on the final test set. For Mpox detection, it achieved 93% precision, 97% recall, and an F1-score of 95%. Sensitivity and specificity for Mpox detection were 97% and 96%, respectively. Performance remained consistent in the larger external validation, confirming the model's robustness and generalizability. Interpretation: This AI-driven screening tool offers a rapid, accurate, and scalable solution for Mpox detection in resource-constrained settings. Its offline functionality and high performance across diverse datasets suggest significant potential for improving Mpox surveillance and management, particularly in areas lacking traditional diagnostic infrastructure.
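To make the described workflow concrete, the following is a minimal sketch of how a YOLOv8n image classifier could be trained and evaluated on a three-class Mpox/other/normal dataset using the Ultralytics API; the dataset path, directory layout, epoch count, and image size are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch (assumed setup): training and evaluating a YOLOv8n classifier
# with the Ultralytics API on a folder-per-class dataset such as
#   mpox_dataset/train/{mpox,other_skin,normal}/...
#   mpox_dataset/val/{mpox,other_skin,normal}/...
from ultralytics import YOLO

# Load the lightweight YOLOv8n classification checkpoint.
model = YOLO("yolov8n-cls.pt")

# Train on the (hypothetical) dataset directory; epochs and image size are illustrative.
model.train(data="mpox_dataset", epochs=50, imgsz=224)

# Validate on the held-out split; classification metrics include top-1 accuracy.
metrics = model.val()
print("top-1 accuracy:", metrics.top1)

# Classify a single image and print the predicted class with its confidence.
result = model("example_lesion.jpg")[0]
print(result.names[result.probs.top1], float(result.probs.top1conf))
```

The nano-sized (n) variant is the smallest YOLOv8 classifier, which is consistent with the abstract's goal of offline, on-device screening in low-resource environments.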
Abstract: Rapid development of disease detection models using computer vision is crucial in responding to medical emergencies, such as epidemics or bioterrorism events. Traditional data collection methods are often too slow in these scenarios, requiring innovative approaches for quick, reliable model generation from minimal data. Our study introduces a novel approach: constructing a comprehensive computer vision model to detect Mpox lesions using only synthetic data produced by diffusion models. Initially, diffusion models generated a diverse set of synthetic images representing Mpox lesions on various body parts (face, back, chest, leg, neck, arm) across different skin tones as defined by the Fitzpatrick scale (fair, brown, dark). Subsequently, we trained and tested a vision model on this synthetic dataset to evaluate the diffusion models' efficacy in producing high-quality training data and the resulting impact on the vision model's medical image recognition performance. The results were promising: the vision model achieved 97% accuracy, with 96% precision and recall for Mpox cases, and similarly high metrics for normal and other skin disorder cases, demonstrating its ability to correctly identify true positives and minimize false positives. The model achieved an F1-score of 96% for Mpox cases and 98% for normal and other skin disorder cases, reflecting a balanced precision-recall relationship and ensuring reliability and robustness in its predictions. Our proposed SynthVision methodology indicates the potential to develop accurate computer vision models from minimal data input for future medical emergencies.
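The synthetic-data phase described above could be sketched as a loop over body sites and skin tones with a text-to-image diffusion pipeline. The checkpoint, prompts, and per-prompt image counts below are assumptions for illustration, not the paper's reported setup.

```python
# Illustrative sketch (assumed pipeline and checkpoint): generating synthetic lesion
# images across body sites and Fitzpatrick-style skin tones with a text-to-image
# diffusion model.
import os
import torch
from diffusers import StableDiffusionPipeline

os.makedirs("synthetic", exist_ok=True)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

body_parts = ["face", "back", "chest", "leg", "neck", "arm"]
skin_tones = ["fair", "brown", "dark"]

for part in body_parts:
    for tone in skin_tones:
        prompt = (
            f"clinical photograph of Mpox lesions on the {part} "
            f"of a person with {tone} skin, high detail"
        )
        # Generate a small batch per prompt; repeat as needed to reach the target count.
        for i, img in enumerate(pipe(prompt, num_images_per_prompt=4).images):
            img.save(f"synthetic/mpox_{part}_{tone}_{i}.png")
```

Enumerating every body-part and skin-tone combination in the prompt is one simple way to obtain the diversity the abstract describes before the classifier is trained on the resulting images.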
Abstract: Rapid development of disease detection computer vision models is vital in responding to urgent medical crises such as epidemics or bioterrorism events. However, traditional data gathering methods are too slow for these scenarios, necessitating innovative approaches that can generate reliable models quickly from minimal data. We demonstrate our approach by building a comprehensive computer vision model for detecting human papillomavirus (HPV) genital warts using only synthetic data. We employed a two-phase experimental design built around diffusion models. In the first phase, diffusion models were used to generate a large number of diverse synthetic images from 10 HPV guide images, explicitly focusing on accurately depicting genital warts. The second phase involved training and testing a vision model on this synthetic dataset. This design aimed to assess how effectively diffusion models can rapidly generate high-quality training data and the resulting impact on the vision model's performance in medical image recognition. The findings revealed significant insights into the performance of a vision model trained on synthetic images generated by diffusion models. The vision model showed exceptional performance in identifying cases of genital warts, achieving an accuracy of 96% and underscoring its effectiveness in medical image classification. For HPV cases, the model demonstrated a precision of 99% and a recall of 94%; for normal cases, the precision was 95% with a recall of 99%. These metrics indicate the model's capability to correctly identify true positives and minimize false positives. The model achieved an F1-score of 96% for HPV cases and 97% for normal cases; the high F1-score across both categories highlights the balanced relationship between the model's precision and recall, ensuring reliability and robustness in its predictions.
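Because this first phase starts from a handful of guide images rather than text prompts alone, an image-to-image diffusion pipeline is a natural way to sketch it. The checkpoint, prompt, strength value, and file layout below are hypothetical choices for illustration, not the paper's documented configuration.

```python
# Hypothetical sketch of the first phase: expanding a small set of HPV guide images
# into a larger synthetic training set with an image-to-image diffusion pipeline.
import glob
import os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

os.makedirs("synthetic_hpv", exist_ok=True)

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "clinical photograph of genital warts, dermatology reference image"

for path in glob.glob("guide_images/*.jpg"):  # e.g. the 10 HPV guide images
    init_image = Image.open(path).convert("RGB").resize((512, 512))
    stem = os.path.splitext(os.path.basename(path))[0]
    # Moderate strength preserves lesion morphology while varying appearance.
    for i, img in enumerate(
        pipe(prompt=prompt, image=init_image, strength=0.6,
             num_images_per_prompt=4).images
    ):
        img.save(f"synthetic_hpv/{stem}_{i}.png")
```

The strength parameter controls how far the output may drift from each guide image: lower values stay closer to the original lesion, higher values add more variation, which is the trade-off any such expansion from only 10 seed images has to balance.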