Abstract: Monitoring plant health is crucial for maintaining agricultural productivity and food safety. Disruptions in the plant's normal state, caused by diseases, often interfere with essential plant activities, and timely detection of these diseases can significantly mitigate crop loss. In this study, we propose a deep learning-based approach for efficient detection of plant diseases using drone-captured imagery. A comprehensive database of various plant species, exhibiting numerous diseases, was compiled from the Internet and utilized as the training and test dataset. A Convolutional Neural Network (CNN), renowned for its performance in image classification tasks, was employed as our primary predictive model. The CNN model, trained on this rich dataset, demonstrated superior proficiency in crop disease categorization and detection, even under challenging imaging conditions. For field implementation, we deployed a prototype drone model equipped with a high-resolution camera for live monitoring of extensive agricultural fields. The captured images served as the input for our trained model, enabling real-time identification of healthy and diseased plants. Our approach promises an efficient and scalable solution for improving crop health monitoring systems.
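The CNN pipeline the abstract describes can be illustrated with a minimal numpy-only forward pass (convolution, ReLU, max pooling, dense layer, softmax). This is a generic sketch of the technique, not the authors' model: the input array, kernel, and two-class head are all illustrative stand-ins.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass: conv -> ReLU -> pool -> flatten -> dense -> softmax
rng = np.random.default_rng(0)
leaf = rng.random((16, 16))               # stand-in for a drone-captured leaf patch
kernel = rng.standard_normal((3, 3))      # one (would-be learned) filter
features = max_pool(relu(conv2d(leaf, kernel)))
weights = rng.standard_normal((2, features.size))  # 2 classes: healthy / diseased
probs = softmax(weights @ features.ravel())        # class probabilities
```

In a real system the kernel and dense weights would be learned by backpropagation over the compiled dataset; only the forward structure is shown here.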
Abstract: Breast cancer (BC) remains a significant health threat, with no long-term cure currently available. Early detection is crucial, yet mammography interpretation is hindered by high false positives and negatives. With BC incidence projected to surpass lung cancer, improving early detection methods is vital. Thermography, using high-resolution infrared cameras, offers promise, especially when combined with artificial intelligence (AI). This work presents an attention-based convolutional neural network for segmentation, providing increased speed and precision in BC detection and classification. The system enhances images and performs cancer segmentation with explainable AI. We propose a transformer-attention-based convolutional architecture (UNet) for fault identification and employ Gradient-weighted Class Activation Mapping (Grad-CAM) to analyze areas of bias and weakness in the UNet architecture with IRT images. The superiority of our proposed framework is confirmed when compared with existing deep learning frameworks.
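Grad-CAM, the explainability component named above, has a compact standard formulation: global-average-pool the gradients of the class score over each feature map to get channel weights, take the weighted sum of the maps, and apply ReLU. The numpy sketch below shows that computation on random stand-in activations; the channel count and spatial size are illustrative, not taken from the authors' UNet.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the class score w.r.t. those activations.

    feature_maps, gradients: arrays of shape (K, H, W).
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    alphas = gradients.mean(axis=(1, 2))                   # shape (K,)
    # weighted combination of the K feature maps, then ReLU
    cam = np.maximum(0, np.tensordot(alphas, feature_maps, axes=1))
    if cam.max() > 0:
        cam /= cam.max()                                   # normalise to [0, 1]
    return cam

rng = np.random.default_rng(1)
A = rng.random((8, 14, 14))              # activations of 8 hypothetical channels
dYdA = rng.standard_normal((8, 14, 14))  # gradients of the class score w.r.t. A
heatmap = grad_cam(A, dYdA)              # regions driving the prediction
```

Overlaying such a heatmap on the input IRT image is what lets one inspect where the segmentation network is attending, and hence where it may be biased or weak.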
Abstract: Human drivers have distinct driving techniques, knowledge, and sentiments due to unique driving traits. Driver drowsiness is a serious issue endangering road safety; it is therefore essential to design an effective drowsiness detection algorithm to prevent road accidents. Various research efforts have approached the problem of detecting anomalous driver behaviour by examining the driver's frontal face and automobile dynamics via computer vision techniques, yet conventional methods cannot capture complicated driver behaviour features. With the advent of deep learning architectures, a substantial amount of research has also been conducted to analyze and recognize driver drowsiness using neural network algorithms. This paper introduces a novel framework based on vision transformers and YoloV5 architectures for driver drowsiness recognition. A custom pre-trained YoloV5 architecture is proposed for face extraction, with the aim of isolating the Region of Interest (ROI). Owing to the limitations of previous architectures, this paper introduces vision transformers for binary image classification, trained and validated on the public UTA-RLDD dataset. The model achieved training and validation accuracies of 96.2\% and 97.4\%, respectively. For further evaluation, the proposed framework was tested on a custom dataset of 39 participants under various lighting conditions and achieved 95.5\% accuracy. The conducted experiments revealed the significant potential of our framework for practical applications in smart transportation systems.
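The core of the vision-transformer stage described above is splitting the detector-cropped face ROI into patch tokens and running scaled dot-product self-attention over them. The numpy sketch below shows those two steps in isolation; the ROI size, patch size, projection matrices, and single attention head are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def patchify(image, patch=8):
    """Split a square single-channel image into flattened, non-overlapping
    patches - the token sequence a vision transformer operates on."""
    h, w = image.shape
    return np.array([image[i:i + patch, j:j + patch].ravel()
                     for i in range(0, h, patch)
                     for j in range(0, w, patch)])

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over patch tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(2)
face_roi = rng.random((32, 32))       # stand-in for a YoloV5-cropped face ROI
tokens = patchify(face_roi)           # 16 tokens, each of dimension 64
d = tokens.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, 32)) for _ in range(3))
attended = self_attention(tokens, Wq, Wk, Wv)
```

A full transformer would add positional embeddings, stack several such attention blocks with feed-forward layers, and attach a classification head for the binary alert/drowsy decision.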