Abstract:Multi-robot collaboration has become an essential component of unknown-environment exploration due to its ability to handle various challenging situations. Potential-field-based methods are widely used for autonomous exploration because of their high efficiency and low travel cost. However, exploration speed and collaboration ability remain challenging topics. Therefore, we propose a Distributed Multi-Robot Potential-Field-Based Exploration method (DMPF-Explore). In particular, we first present a Distributed Submap-Based Multi-Robot Collaborative Mapping Method (DSMC-Map), which can efficiently estimate the robot trajectories and construct the global map by merging the local maps from each robot. Second, we introduce a Potential-Field-Based Exploration Strategy Augmented with Modified Wave-Front Distance and Colored Noises (MWF-CN), in which the accessible frontier neighborhood is extended and colored noise is injected to enhance exploration performance. The proposed exploration method is deployed in simulated and real-world scenarios. The results show that our approach outperforms existing methods in terms of exploration speed and collaboration ability.
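Below is a minimal Python sketch of the wave-front distance idea underlying MWF-CN: a breadth-first pass over the occupancy grid assigns each free cell its travel distance from the robot, which a potential-field planner can use to score frontiers. The grid encoding (0 = free, 1 = occupied) and 4-connectivity are illustrative assumptions, not details from the paper.

```python
from collections import deque

def wavefront_distance(grid, start):
    """BFS travel distance from `start` through free cells; unreachable cells stay -1."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# 0 = free, 1 = occupied (assumed encoding); robot starts at the top-left.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(wavefront_distance(grid, (0, 0)))
```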
Abstract:Nowadays, several real-world tasks require adequate environment coverage for maintaining communication between multiple robots, for example, target search, environmental monitoring, and post-disaster rescue. In this study, we consider a scenario with one human operator and multiple robots, and we assume that each human or robot covers a certain area. We want them to collectively maximize their area coverage. Therefore, in this paper, we propose the Graph-Based Multi-Robot Coverage Positioning Method (GMC-Pos) to find strategic positions for robots that maximize the area coverage. Our novel approach consists of two main modules: graph generation and node selection. First, the graph generation module represents the environment using a weighted connected graph. Then, we present a novel generalized graph-based distance and use it, together with the graph degrees, as the criteria for recursive node selection. Our method is deployed in three environments with different settings. The results show that it outperforms the benchmark method by 15.13% to 24.88% in terms of area coverage percentage.
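As a rough illustration of the node-selection idea, the following sketch greedily picks positions on a weighted graph by combining node degree with shortest-path distance from already selected nodes. The greedy criterion (seed with the highest-degree node, then repeatedly take the node farthest from the selection) is our assumption; GMC-Pos defines its own generalized graph-based distance and recursion.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from `src` on a weighted adjacency dict."""
    dist = {v: float('inf') for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def select_positions(adj, k):
    # Seed with the highest-degree node (assumed criterion), then repeatedly
    # add the node farthest from the current selection to spread coverage.
    selected = [max(adj, key=lambda v: len(adj[v]))]
    while len(selected) < k:
        dist_maps = [dijkstra(adj, s) for s in selected]
        spread = {v: min(dm[v] for dm in dist_maps) for v in adj}
        selected.append(max((v for v in adj if v not in selected),
                            key=lambda v: spread[v]))
    return selected

# Toy weighted connected graph: node -> [(neighbor, edge weight), ...].
adj = {0: [(1, 1.0), (2, 2.0)], 1: [(0, 1.0), (3, 1.5)],
       2: [(0, 2.0), (3, 1.0)], 3: [(1, 1.5), (2, 1.0)]}
print(select_positions(adj, 2))  # e.g. [0, 3]
```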
Abstract:Localization of objects is vital for robot-object interaction. Light Detection and Ranging (LiDAR) is an emerging and widely used object localization technique in robotics due to its accurate distance measurement, long range, wide field of view, and robustness in different conditions. However, LiDAR cannot identify objects when they are obstructed by obstacles, resulting in inaccurate and noisy localization. To address this issue, we present an approach that incorporates LiDAR and Ultra-Wideband (UWB) ranging for object localization. UWB is popular in sensor-fusion localization algorithms due to its low weight and low power consumption. In addition, UWB returns ranging measurements even when the object is not within line-of-sight. Our approach provides an efficient solution that combines an anonymous optical sensor (LiDAR) with an identity-based radio sensor (UWB) to improve the localization accuracy of the object. Our approach consists of three modules. The first module is an object-identification algorithm that compares successive LiDAR scans to detect a moving object in the environment and returns the position whose range is closest to the UWB ranging. The second module estimates the moving object's direction of motion using the previous and current positions estimated by the object-identification module, and removes suspicious estimates through an outlier rejection criterion. Lastly, we fuse the LiDAR, UWB ranging, and odometry measurements in pose graph optimization (PGO) to recover the entire trajectories of the robot and the object. Extensive experiments were performed to evaluate the performance of the proposed approach.
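The first module's scan-differencing step can be sketched as follows: points in the current LiDAR scan with no close counterpart in the previous scan are treated as moving-object candidates, and the candidate whose range best matches the UWB measurement is returned. The movement threshold and nearest-neighbor test are assumptions made to keep the example self-contained.

```python
import numpy as np

def identify_object(prev_scan, curr_scan, uwb_range, move_thresh=0.2):
    """Scans are (N, 2) arrays of 2D points in the robot frame (assumed)."""
    moving = []
    for p in curr_scan:
        # A point with no close counterpart in the previous scan has moved.
        if np.min(np.linalg.norm(prev_scan - p, axis=1)) > move_thresh:
            moving.append(p)
    if not moving:
        return None
    # Among moving candidates, return the one whose range to the robot
    # agrees best with the UWB ranging measurement.
    return min(moving, key=lambda p: abs(np.linalg.norm(p) - uwb_range))

prev_scan = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])
curr_scan = np.array([[1.0, 0.0], [2.5, 1.4], [0.0, 3.0]])  # middle point moved
print(identify_object(prev_scan, curr_scan, uwb_range=2.9))
```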
Abstract:Precise calibration is the basis for a vision-guided robot system to achieve high-precision operations. Systems with multiple eyes (cameras) and multiple hands (robots), such as micro-assembly systems, are particularly sensitive to calibration errors. Most existing methods focus on calibrating a single unit of the whole system, such as the pose between hand and eye or between two hands. These methods can determine the relative pose between units, but the serialized incremental calibration strategy cannot avoid error accumulation in a large-scale system. Instead of focusing on a single unit, this paper models the multi-eye and multi-hand system calibration problem as a graph and proposes a method based on the minimum spanning tree and graph optimization. The method automatically plans the serialized optimal calibration strategy in accordance with the system configuration to obtain coarse initial calibration results. Then, with these initial values, closed-loop constraints are introduced to carry out global optimization. Simulation experiments demonstrate the performance of the proposed algorithm under different noise levels and various hand-eye configurations. In addition, experiments on real robot systems further verify the proposed method.
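A minimal sketch of the MST-based planning step, assuming system units (cameras and robots) are graph nodes and feasible pairwise calibrations are edges weighted by expected uncertainty: Kruskal's algorithm then yields a serialized calibration order that avoids redundant, high-uncertainty edges. The example weights below are placeholders, not measured values.

```python
def mst_calibration_plan(num_units, edges):
    """edges: (weight, unit_a, unit_b) tuples; returns the MST edges,
    i.e. a serialized calibration order (Kruskal with union-find)."""
    parent = list(range(num_units))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    plan = []
    for w, a, b in sorted(edges):          # cheapest calibrations first
        ra, rb = find(a), find(b)
        if ra != rb:                       # edge joins two components: keep it
            parent[ra] = rb
            plan.append((a, b, w))
    return plan

# Units 0-1 are cameras, 2-3 are robots; weights are assumed uncertainties.
edges = [(0.3, 0, 2), (0.5, 0, 3), (0.2, 1, 3), (0.7, 1, 2), (0.4, 2, 3)]
print(mst_calibration_plan(4, edges))  # -> [(1, 3, 0.2), (0, 2, 0.3), (2, 3, 0.4)]
```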
Abstract:To accomplish tasks efficiently in a multi-robot system, a problem that must be addressed is Simultaneous Localization and Mapping (SLAM). LiDAR (Light Detection and Ranging) has been used in many SLAM solutions due to its superb accuracy, but its performance degrades in featureless environments such as tunnels or long corridors. Centralized SLAM solves the problem with a cloud server, which requires a huge amount of computational resources and lacks robustness against central-node failure. To address these issues, we present a distributed SLAM solution that estimates the trajectory of a group of robots using Ultra-WideBand (UWB) ranging and odometry measurements. The proposed approach distributes the processing among the robot team and significantly mitigates the computational burden of centralized SLAM. Our solution determines the relative pose (also known as a loop closure) between two robots by minimizing the residuals of UWB ranging measurements taken at different positions when the robots are in close proximity. UWB provides good distance measurements in line-of-sight conditions, but retrieving a precise pose estimate remains challenging due to ranging noise and the unpredictable paths traveled by the robots. To deal with suspicious loop closures, we use Pairwise Consistency Maximization (PCM) to examine the quality of loop closures and reject outliers. The filtered loop closures are then fused with odometry in a distributed pose graph optimization (DPGO) module to recover the full trajectory of the robot team. Extensive experiments are conducted to validate the effectiveness of the proposed approach.
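A simplified sketch of the PCM step: two inter-robot loop closures are mutually consistent if, combined with each robot's odometry, they imply (nearly) the same transform between the robots' odometry frames. The translation-only 2D check and the greedy stand-in for the maximum-clique search are simplifying assumptions; real PCM checks full SE(2)/SE(3) loops.

```python
import numpy as np
from itertools import combinations

def frame_offset(closure):
    """closure = (p1, p2, z): odometry positions of robots 1 and 2 when the
    closure was measured, and z = measured position of robot 2 relative to
    robot 1. Returns the implied offset between the two odometry frames."""
    p1, p2, z = closure
    return (p1 + z) - p2

def pcm_filter(closures, thresh=0.5):
    offsets = [frame_offset(c) for c in closures]
    n = len(closures)
    consistent = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        ok = np.linalg.norm(offsets[i] - offsets[j]) < thresh
        consistent[i, j] = consistent[j, i] = ok
    # Greedy stand-in for the max-clique search: keep the closure that is
    # consistent with the most others, plus everything consistent with it.
    best = int(np.argmax(consistent.sum(axis=1)))
    return [i for i in range(n) if i == best or consistent[best, i]]

closures = [
    (np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([1.0, 2.0])),
    (np.array([1.0, 1.0]), np.array([6.0, 1.0]), np.array([1.1, 1.9])),
    (np.array([2.0, 0.0]), np.array([4.0, 2.0]), np.array([3.0, -3.0])),  # outlier
]
print(pcm_filter(closures))  # -> [0, 1]
```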
Abstract:This paper aims to improve the performance and positioning accuracy of a robot by using the particle filter method. The laser range finder is a wireless navigation sensor mainly used to measure, position, and control autonomous robots, and it offers more flexible localization than wired guidance systems. However, navigation with a laser range finder suffers from large positioning errors when the robot moves or turns quickly. To solve this problem, this paper proposes a method to improve the positioning accuracy of a robot in an indoor environment using a particle filter, which is robust in nonlinear and non-Gaussian systems. In the experiments, a robot equipped with a laser range finder, two encoders, and a gyro is used for navigation to verify the positioning accuracy and performance. With the proposed method, the positioning accuracy and performance improved by approximately 85.5%.
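For reference, a minimal particle filter for 2D position tracking illustrates the predict/weight/resample cycle the method relies on. The constant control input, the single known landmark, and the Gaussian range likelihood are assumptions chosen to keep the example self-contained; they are not the paper's exact models.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal([0.0, 0.0], 1.0, size=(N, 2))  # initial belief
landmark = np.array([5.0, 5.0])                       # assumed known landmark

def pf_step(particles, control, measured_range, motion_std=0.1, range_std=0.2):
    # Predict: apply the odometry/control input with additive Gaussian noise.
    particles = particles + control + rng.normal(0, motion_std, particles.shape)
    # Weight: Gaussian likelihood of the laser range to the landmark.
    ranges = np.linalg.norm(particles - landmark, axis=1)
    w = np.exp(-0.5 * ((ranges - measured_range) / range_std) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

true_pos = np.array([0.0, 0.0])
for _ in range(20):
    true_pos = true_pos + [0.2, 0.1]                              # robot motion
    z = np.linalg.norm(true_pos - landmark) + rng.normal(0, 0.2)  # noisy range
    particles = pf_step(particles, np.array([0.2, 0.1]), z)
print("estimate:", particles.mean(axis=0), "truth:", true_pos)
```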
Abstract:As one of the basic tasks of computer vision, object detection has been widely used in many intelligent applications. However, object detection algorithms are usually computationally heavyweight, hindering their deployment on resource-constrained edge devices. Current edge-cloud collaboration methods, such as CNN partitioning across edge-cloud devices, are not suitable for object detection, since the huge size of the intermediate results would introduce excessive communication costs. To address this challenge, we propose a small-big model framework that deploys a big model in the cloud and a small model on the edge devices. Upon receiving data, the edge device runs a difficult-case discriminator to classify images into easy cases and difficult cases according to their specific semantics. Easy cases are processed locally at the edge, while difficult cases are uploaded to the cloud. Experimental results on the VOC, COCO, and HELMET datasets using two different object detection algorithms demonstrate that the small-big model system can detect 94.01%-97.84% of objects while uploading only about 50% of images to the cloud when using SSD. In addition, the small-big model achieves on average 91.22%-92.52% of the end-to-end mAP of the scheme that uploads all images to the cloud.
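The edge-side routing logic can be sketched as below: a lightweight discriminator scores each image, easy cases are handled by the small local detector, and difficult cases are uploaded to the big cloud model. The score function and the 0.5 threshold are placeholders; the paper learns the discriminator from image semantics.

```python
def route(images, discriminator, small_model, upload, threshold=0.5):
    """Run easy cases on the edge; defer difficult cases to the cloud."""
    results, uploaded = [], 0
    for img in images:
        if discriminator(img) < threshold:    # easy case
            results.append(small_model(img))  # detect locally on the edge
        else:                                 # difficult case
            results.append(upload(img))       # send to the big cloud model
            uploaded += 1
    return results, uploaded / len(images)

# Toy usage with stand-in callables; real inputs would be images.
imgs = [0.2, 0.9, 0.4, 0.7]                   # pretend difficulty scores
res, ratio = route(imgs, discriminator=lambda x: x,
                   small_model=lambda x: f"edge:{x}",
                   upload=lambda x: f"cloud:{x}")
print(res, f"{ratio:.0%} uploaded")
```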
Abstract:Relative localization between autonomous robots without infrastructure is crucial for their navigation, path planning, and formation in many applications, such as emergency response, where acquiring prior knowledge of the environment is not possible. The traditional Ultra-WideBand (UWB)-based approach provides a good estimate of the distance between robots, but obtaining the relative pose (including displacement and orientation) remains challenging. We propose an approach to estimate the relative pose between a group of robots by equipping each robot with multiple UWB ranging nodes. We determine the pose between two robots by minimizing the residual error of the ranging measurements from all UWB nodes. To improve localization accuracy, we further utilize odometry constraints through sliding-window-based optimization. The optimized pose is then fused with the odometry in a particle filter for pose tracking among a group of mobile robots. We have conducted extensive experiments to validate the effectiveness of the proposed approach.
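A minimal sketch of the core estimation step, assuming two UWB nodes per robot at known body-frame offsets: the 2D relative pose (x, y, yaw) is recovered by nonlinear least squares over all inter-node ranges. The node layouts and the initial guess are illustrative; the paper additionally adds odometry constraints in a sliding window.

```python
import numpy as np
from scipy.optimize import least_squares

nodes_a = np.array([[0.3, 0.0], [-0.3, 0.0]])  # UWB nodes on robot A (body frame)
nodes_b = np.array([[0.0, 0.2], [0.0, -0.2]])  # UWB nodes on robot B (body frame)

def residuals(pose, ranges):
    """Predicted minus measured range for every A-node/B-node pair."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    res = []
    for i, pa in enumerate(nodes_a):
        for j, pb in enumerate(nodes_b):
            pb_in_a = R @ pb + [x, y]          # B node expressed in A's frame
            res.append(np.linalg.norm(pb_in_a - pa) - ranges[i, j])
    return res

# Simulate noise-free ranges for a ground-truth pose, then solve from a
# rough initial guess.
true_pose = np.array([2.0, 1.0, 0.5])
ranges = np.array(residuals(true_pose, np.zeros((2, 2)))).reshape(2, 2)
sol = least_squares(residuals, x0=[1.0, 1.0, 0.0], args=(ranges,))
print(sol.x)  # ~ [2.0, 1.0, 0.5] for this geometry
```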
Abstract:Environment mapping is an essential prerequisite for mobile robots to perform different tasks such as navigation and mission planning. With the availability of low-cost 2D LiDARs, such sensors are increasingly applied in industrial environments. However, mapping an unknown and featureless environment with these low-cost 2D LiDARs remains a challenge. The challenge mainly originates from their short range and the complexity of performing scan matching in such environments. To resolve these shortcomings, we propose to fuse ultra-wideband (UWB) ranging with 2D LiDARs to improve the mapping quality of a mobile robot. An optimization-based approach fuses the UWB ranging information with odometry to first optimize the trajectory. Then, LiDAR-based loop closures are incorporated to improve the accuracy of the trajectory estimation. Finally, the optimized trajectory is combined with the LiDAR scans to produce the occupancy map of the environment. The performance of the proposed approach is evaluated in a 20 m × 20 m indoor featureless environment. The results show that the mapping error of the proposed scheme is 85.5% less than that of the conventional GMapping algorithm with a short-range LiDAR (e.g., the Hokuyo URG-04LX used in our experiment, with a maximum range of 5.6 m).
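The first optimization stage can be illustrated as follows: a short 2D trajectory is refined so it agrees with both odometry increments and UWB ranges to known anchors. The anchor layout, noise levels, and equal factor weighting are assumptions; the paper further incorporates LiDAR loop closures before building the occupancy map.

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])  # assumed UWB anchors

def residuals(flat, odom, ranges):
    traj = flat.reshape(-1, 2)
    res = []
    for t in range(1, len(traj)):             # odometry factors
        res.extend(traj[t] - traj[t - 1] - odom[t - 1])
    for t, pos in enumerate(traj):            # UWB range factors
        for a, anchor in enumerate(anchors):
            res.append(np.linalg.norm(pos - anchor) - ranges[t, a])
    return res

# Simulate a short straight trajectory with noisy odometry and clean ranges.
rng = np.random.default_rng(1)
truth = np.array([[1.0 + t, 1.0] for t in range(5)])
odom = np.diff(truth, axis=0) + rng.normal(0, 0.05, (4, 2))
ranges = np.linalg.norm(truth[:, None] - anchors[None], axis=2)
init = np.vstack([truth[0], truth[0] + np.cumsum(odom, axis=0)])  # dead reckoning
sol = least_squares(residuals, init.ravel(), args=(odom, ranges))
print(sol.x.reshape(-1, 2))  # refined trajectory, close to `truth`
```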