Abstract:For multi-robot collaborative area search, we propose a unified approach that simultaneously maps the environment to sense more targets (exploration) and searches for and locates those targets (coverage). Specifically, we implement a hierarchical multi-agent reinforcement learning algorithm that decouples task planning from task execution. The role concept is integrated into the upper-level task planner for role selection, enabling robots to learn their roles from the global state. In addition, an intelligent role-switching mechanism lets the role selection module operate between timesteps, promoting exploration and coverage interchangeably. The lower-level primitive policy then learns to plan sub-task execution from the assigned role and local observations. Extensive experiments demonstrate the scalability and generalization of our method compared with state-of-the-art approaches in scenes of varying complexity and numbers of robots.
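The two-level structure this abstract describes (an upper-level role selector conditioned on a global view, a lower-level primitive policy conditioned on local observations and the assigned role) can be sketched as follows. This is a minimal illustration under assumptions: a two-role set {explore, cover}, small MLPs, and a fixed role-switching interval K are placeholders, not the authors' actual networks or switching rule.

```python
# Hypothetical sketch of a hierarchical role-selection policy.
# ROLES, the MLP sizes, and the switching interval K are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

ROLES = ("explore", "cover")  # assumed two-role set

class RoleSelector(nn.Module):
    """Upper level: maps the global state to a role per robot."""
    def __init__(self, state_dim: int, n_roles: int = len(ROLES)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_roles))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return torch.distributions.Categorical(logits=self.net(state)).sample()

class PrimitivePolicy(nn.Module):
    """Lower level: maps (local observation, role) to a primitive action."""
    def __init__(self, obs_dim: int, n_actions: int, n_roles: int = len(ROLES)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + n_roles, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs: torch.Tensor, role: torch.Tensor) -> torch.Tensor:
        role_onehot = F.one_hot(role, len(ROLES)).float()
        logits = self.net(torch.cat([obs, role_onehot], dim=-1))
        return torch.distributions.Categorical(logits=logits).sample()

# Illustrative rollout: roles are re-selected every K low-level steps.
K = 10
state_dim, obs_dim, n_actions = 16, 8, 5       # illustrative sizes
selector = RoleSelector(state_dim)
policy = PrimitivePolicy(obs_dim, n_actions)

roles = selector(torch.randn(3, state_dim))     # upper level: one role per robot
for t in range(20):
    obs = torch.randn(3, obs_dim)               # local observations
    actions = policy(obs, roles)                # lower level executes the sub-task
    if (t + 1) % K == 0:                        # role switching between timesteps
        roles = selector(torch.randn(3, state_dim))
```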
Abstract:Collaboration is one of the most important factors in multi-robot systems. Motivated by real-world applications and to further promote its development, we propose a new benchmark for evaluating multi-robot collaboration, the Target Trapping Environment (T2E). In T2E, two kinds of robots (captor robots and target robots) share the same space: the captors aim to trap the target collaboratively, while the target tries to escape. Both the trapping and the escaping process can exploit the environment layout, which requires a high degree of collaboration between robots and effective use of the environment. For the benchmark, we present and evaluate multiple learning-based baselines in T2E and provide insights into regimes of multi-robot collaboration. We also make our benchmark publicly available and encourage researchers from related robotics disciplines to propose, evaluate, and compare their solutions on it. Our project is released at https://github.com/Dr-Xiaogaren/T2E.
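To make the captor/target setup concrete, the sketch below shows the kind of multi-agent interaction loop such a benchmark involves. The class T2EEnvSketch, its observation layout, distance-based reward shaping, and capture radius are hypothetical placeholders for illustration only, not the released T2E API; see the project repository for the actual interface.

```python
# Toy stand-in for a captor/target environment: two captors chase one target
# on a 2-D plane. All numeric choices (arena size, step scale, capture radius)
# are assumptions made for this sketch.
import numpy as np

class T2EEnvSketch:
    def __init__(self, arena_size: float = 10.0):
        self.arena_size = arena_size
        self.positions = None

    def reset(self) -> dict:
        self.positions = {
            "captor_0": np.random.uniform(0, self.arena_size, 2),
            "captor_1": np.random.uniform(0, self.arena_size, 2),
            "target_0": np.random.uniform(0, self.arena_size, 2),
        }
        return {k: v.copy() for k, v in self.positions.items()}

    def step(self, actions: dict):
        # Apply each robot's 2-D velocity command, clipped to the arena.
        for name, vel in actions.items():
            self.positions[name] = np.clip(
                self.positions[name] + 0.1 * np.asarray(vel), 0, self.arena_size)
        # Captors are rewarded for closing in on the target, the target for
        # keeping its distance (assumed adversarial reward shaping).
        target = self.positions["target_0"]
        dists = [np.linalg.norm(self.positions[c] - target)
                 for c in ("captor_0", "captor_1")]
        caught = min(dists) < 0.5
        rewards = {"captor_0": -dists[0], "captor_1": -dists[1],
                   "target_0": min(dists)}
        obs = {k: v.copy() for k, v in self.positions.items()}
        return obs, rewards, caught, {}

env = T2EEnvSketch()
obs = env.reset()
for _ in range(100):
    actions = {name: np.random.uniform(-1, 1, 2) for name in obs}  # random policies
    obs, rewards, done, _ = env.step(actions)
    if done:
        break
```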
Abstract:The environments that robots operate in are becoming increasingly complex, which poses great challenges for robot navigation. This paper gives an overview of navigation frameworks for robots running in dense environments. Path planning in the navigation framework of mobile robots is divided into global planning and local planning according to the planning scope and executability. Robot navigation is a multi-objective problem: the robot must not only complete the given task but also maintain social comfort. Consequently, we focus on reinforcement-learning-based path planning algorithms and analyze the development status, advantages, and disadvantages of existing algorithms. Future work on path planning for robots in dynamic environments will be pursued in the areas of advanced algorithms, hybrid algorithms, multi-robot collaboration, social models, and combinations with other artificial intelligence algorithms.
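The global/local split mentioned in this overview can be illustrated with a small sketch: a grid-based A* planner produces a coarse global path, and a reactive waypoint follower stands in for the learned local planner. The astar and local_step helpers, the grid, and the obstacle layout below are illustrative assumptions, not any specific algorithm surveyed in the paper; an RL-based local planner would replace local_step.

```python
# Minimal global/local planning sketch on a 2-D occupancy grid (0 = free, 1 = occupied).
import heapq
import numpy as np

def astar(grid: np.ndarray, start, goal):
    """Global planning: A* with a Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_best = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0):
                g_new = g_best[cur] + 1
                if g_new < g_best.get(nxt, float("inf")):
                    g_best[nxt] = g_new
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (g_new + h(nxt), nxt))
    return []  # no path found

def local_step(pos, waypoint):
    """Local planning placeholder: move one cell toward the current waypoint."""
    return (pos[0] + int(np.sign(waypoint[0] - pos[0])),
            pos[1] + int(np.sign(waypoint[1] - pos[1])))

grid = np.zeros((20, 20), dtype=int)
grid[5:15, 10] = 1                       # a wall the global planner must route around
path = astar(grid, (0, 0), (19, 19))     # coarse global path
if len(path) > 1:
    pos = local_step((0, 0), path[1])    # local planner tracks the next waypoint
```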