Abstract: The development of self-driving cars has garnered significant attention from researchers, universities, and industries worldwide. Autonomous vehicles integrate numerous subsystems, including lane tracking, object detection, and vehicle control, all of which require thorough testing and validation. Scaled-down vehicles offer a cost-effective and accessible platform for experimentation, giving researchers the opportunity to optimize algorithms under the constraints of limited computational power. This paper presents a four-wheeled autonomous vehicle platform designed to facilitate research and prototyping in autonomous driving. Key contributions include (1) a novel density-based clustering approach that uses histogram statistics for landmark tracking, (2) a lateral controller, and (3) the integration of these innovations into a cohesive platform. The paper additionally explores object detection through systematic dataset augmentation and introduces an autonomous parking procedure. The results demonstrate the platform's effectiveness in achieving reliable lane tracking under varying lighting conditions, smooth trajectory following, and consistent object detection performance. Although developed for small-scale vehicles, these modular solutions are adaptable to full-scale autonomous systems, offering a versatile and cost-efficient framework for advancing research and industry applications.
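To make contribution (1) more concrete, the sketch below shows one way density-based clustering over histogram statistics could group landmark detections: a 1-D histogram is built over one coordinate, and runs of adjacent bins whose counts exceed a density threshold become clusters. The function name, bin width, and threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def histogram_cluster(points, bin_width=20.0, min_density=15):
    """Group 2-D landmark points into clusters via a 1-D histogram.

    points: (N, 2) array of landmark coordinates.
    Bins along the x-axis whose counts reach `min_density` are treated
    as dense; contiguous runs of dense bins are merged into one cluster.
    """
    xs = points[:, 0]
    # Extra trailing bin so the maximum point always falls inside a bin.
    edges = np.arange(xs.min(), xs.max() + 2 * bin_width, bin_width)
    counts, _ = np.histogram(xs, bins=edges)
    dense = counts >= min_density

    clusters, run = [], []
    for i, is_dense in enumerate(dense):
        if is_dense:
            run.append(i)                      # extend the current dense run
        elif run:
            lo, hi = edges[run[0]], edges[run[-1] + 1]
            clusters.append(points[(xs >= lo) & (xs < hi)])
            run = []
    if run:                                    # flush a run ending at the last bin
        lo, hi = edges[run[0]], edges[run[-1] + 1]
        clusters.append(points[(xs >= lo) & (xs < hi)])
    return clusters
```

Because cluster cores come from bin counts rather than pairwise distance queries, the cost stays linear in the number of points, which matters on the computationally constrained platforms the abstract targets.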
Abstract: This paper presents the development of the Auriga @Work robot, designed by the Robotics and Intelligent Automation Lab at the Department of Electrical Engineering, Shahid Beheshti University, for the RoboCup 2024 competition. The robot is tailored for industrial applications, focusing on enhancing efficiency in repetitive or hazardous environments. It is equipped with a 4-wheel Mecanum drive system for omnidirectional mobility and a 5-degree-of-freedom manipulator arm with a custom 3D-printed gripper for object manipulation and navigation tasks. The robot's electronics are powered by custom-designed boards built around ESP32 microcontrollers and an Nvidia Jetson Nano for real-time control and decision-making. The software stack integrates Hector SLAM for mapping, the A* algorithm for path planning, and YOLO for object detection, along with sensor fusion for improved navigation and collision avoidance.
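As an illustration of the path-planning component, here is a minimal A* sketch on a 4-connected occupancy grid. The grid representation, Manhattan heuristic, and unit step cost are assumptions for the example; the robot's actual planner operating on its SLAM map may differ.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = occupied).

    Returns a path from start to goal as a list of (row, col) cells,
    or None if the goal is unreachable.
    """
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_score = {}, {start: 0}

    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded with a better g
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct by backtracking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1                    # unit cost per grid step
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

For example, `a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))` routes around the occupied middle row; in practice the grid would be derived from the Hector SLAM occupancy map.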