Abstract: Marine ecosystems are vital to the planet's health, but human activities such as climate change, pollution, and overfishing pose a constant threat to marine species. Accurate classification and monitoring of these species can aid in understanding their distribution, population dynamics, and the impact of human activities on them. However, classifying marine species is challenging due to their vast diversity and the complex underwater environment. With advances in computer performance and GPU-based computing, deep-learning algorithms can now classify marine species efficiently, making it easier to monitor and manage marine ecosystems. In this paper, we propose an optimized MobileNetV2 model that achieves a 99.83% average validation accuracy, and we highlight specific guidelines for constructing and augmenting a marine species image dataset. The resulting transfer-learning model can be deployed successfully in a mobile application for on-site classification at fisheries.
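To make the setup concrete, the sketch below shows a generic MobileNetV2 transfer-learning pipeline with image augmentation; the framework (TensorFlow/Keras), the augmentation choices, the class count, and all hyperparameters are illustrative assumptions and do not reproduce the specific optimization proposed in the paper.

    # Minimal transfer-learning sketch (TensorFlow/Keras assumed; class count,
    # augmentation, and hyperparameters are illustrative, not the paper's).
    import tensorflow as tf

    IMG_SIZE = (224, 224)      # MobileNetV2's default input resolution
    NUM_CLASSES = 10           # hypothetical number of marine species classes

    # Illustrative augmentation pipeline for marine species images.
    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
    ])

    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False     # freeze the pretrained backbone for transfer learning

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = augment(inputs)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Freezing the backbone and training only the small classification head is what keeps such a model light enough for mobile deployment; a subsequent fine-tuning pass over the top backbone layers is a common, optional refinement.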
Abstract: Road accidents involving autonomous vehicles commonly occur when an obstacle (such as a pedestrian) appears in the path of the moving vehicle at very short notice, leaving the vehicle even less time to react to the change in scene. To tackle this issue, we propose a novel algorithmic implementation that classifies the intent of a single arbitrarily chosen pedestrian in a two-dimensional frame into logic states in a procedural manner, using quaternions generated from a MediaPipe pose estimation model. This bypasses the need for relatively high-latency deep-learning algorithms, primarily because depth perception is not required and because most IoT edge devices impose an implicit cap on computational resources. The model achieved an average testing accuracy of 83.56% with a variance of 0.0042 while operating at an average latency of 48 milliseconds, demonstrating several notable advantages over the current standard of spatio-temporal convolutional networks for these perception tasks.
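As a rough illustration of mapping a pose-derived quaternion to discrete logic states in a procedural manner, consider the sketch below; the state names, the use of yaw alone, and the angle thresholds are hypothetical, since the abstract does not specify the paper's actual state logic.

    # Illustrative sketch only: the concrete states and thresholds below are
    # hypothetical stand-ins for the paper's procedural intent classification.
    import math
    from enum import Enum, auto

    class PedestrianIntent(Enum):
        CROSSING = auto()        # torso roughly facing the vehicle's path
        ABOUT_TO_CROSS = auto()  # torso angled toward the path
        NOT_CROSSING = auto()    # torso parallel to or facing away from the path

    def yaw_from_quaternion(w, x, y, z):
        """Extract yaw (rotation about the vertical axis) in radians."""
        return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

    def classify_intent(quat):
        """Map a torso-orientation quaternion (w, x, y, z) to a logic state."""
        yaw_deg = abs(math.degrees(yaw_from_quaternion(*quat)))
        if yaw_deg < 30.0:       # hypothetical threshold
            return PedestrianIntent.CROSSING
        if yaw_deg < 70.0:       # hypothetical threshold
            return PedestrianIntent.ABOUT_TO_CROSS
        return PedestrianIntent.NOT_CROSSING

Because this mapping is a handful of arithmetic operations and comparisons per frame, its cost is negligible next to the pose-estimation step, which is consistent with the sub-fifty-millisecond latencies reported above.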
Abstract: In this paper, we present a novel algorithm that extracts a quaternion from a two-dimensional camera frame to estimate the human skeletal pose it contains. Pose estimation is usually tackled with stereo cameras and inertial measurement units, which provide depth and Euclidean distances for measuring points in 3D space. However, these devices come with high signal-processing latency as well as significant monetary cost. Using MediaPipe, a framework for building perception pipelines for human pose estimation, the proposed algorithm extracts a quaternion from a 2-D frame of a human subject at sub-fifty-millisecond latency, and it can be deployed on edge devices with a single camera and generally low computational resources, especially for use cases involving last-minute detection and reaction by autonomous robots. The algorithm seeks to bypass the funding barrier and improve accessibility for robotics researchers designing control systems.
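A minimal sketch of obtaining an orientation quaternion from a single camera frame with MediaPipe is shown below; the shoulder-line construction, the reference axis, and the helper functions are assumptions made for illustration and are not the exact algorithm presented in the paper.

    # Sketch under assumptions: a torso-orientation quaternion is derived from the
    # shoulder line as a simple stand-in for the paper's construction.
    # Requires: pip install mediapipe opencv-python numpy
    import cv2
    import numpy as np
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    def quaternion_from_vectors(a, b):
        """Unit quaternion (w, x, y, z) rotating unit vector a onto unit vector b.
        Degenerate when a is antiparallel to b; that case is ignored in this sketch."""
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        c = np.cross(a, b)
        q = np.array([1.0 + float(np.dot(a, b)), c[0], c[1], c[2]])
        return q / np.linalg.norm(q)

    def torso_quaternion(frame_bgr):
        """Estimate a torso-orientation quaternion from one BGR camera frame."""
        with mp_pose.Pose(static_image_mode=True) as pose:
            results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks is None:
            return None  # no person detected in the frame
        lm = results.pose_landmarks.landmark
        l_sh = lm[mp_pose.PoseLandmark.LEFT_SHOULDER.value]
        r_sh = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER.value]
        # Shoulder vector in MediaPipe's normalized image coordinates
        # (z is a relative depth estimate, not a metric distance).
        shoulder = np.array([r_sh.x - l_sh.x, r_sh.y - l_sh.y, r_sh.z - l_sh.z])
        # Rotation taking the image x-axis onto the shoulder line.
        return quaternion_from_vectors(np.array([1.0, 0.0, 0.0]), shoulder)

For a single image, torso_quaternion(cv2.imread("frame.jpg")) returns a unit quaternion or None if no person is detected; only one ordinary camera and a CPU-class edge device are needed, which is the accessibility argument made above.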