Abstract: Efficient data transmission and reasonable task allocation are important for improving multi-robot exploration efficiency. However, the data typically exchanged among robots contains redundant information and thus requires a massive communication volume. Moreover, exploration-oriented task allocation is far from trivial and becomes even more challenging for resource-limited unmanned aerial vehicles (UAVs). In this paper, we propose a fast and communication-efficient multi-UAV exploration method for large environments. We first design a multi-robot dynamic topological graph (MR-DTG) whose nodes represent the explored regions and the regions being explored, and whose edges connect these nodes. Supported by MR-DTG, our method achieves efficient communication by transferring only the information required by exploration planning. To further improve exploration efficiency, a hierarchical multi-UAV exploration method is devised on top of MR-DTG. Specifically, a \emph{graph Voronoi partition} is used to allocate MR-DTG's nodes to the closest UAVs according to the actual motion cost, thus achieving reasonable task allocation. To our knowledge, this is the first work to address multi-UAV exploration using the \emph{graph Voronoi partition}. The proposed method is compared with a state-of-the-art method in simulations. The results show that the proposed method reduces the exploration time and communication volume by up to 38.3\% and 95.5\%, respectively. Finally, the effectiveness of our method is validated in a real-world experiment with 6 UAVs. We will release the source code to benefit the community.
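To illustrate the idea behind allocating graph nodes to the closest UAVs by motion cost, the following sketch runs a multi-source Dijkstra over a topological graph, which is one standard way to compute a graph Voronoi partition. It is not the authors' implementation; the adjacency-list representation and the choice of one seed node per UAV are assumptions for this example.

```python
import heapq
from collections import defaultdict

def graph_voronoi_partition(adj, uav_seeds):
    """Assign every graph node to the UAV whose seed node reaches it with the
    lowest accumulated edge (motion) cost, via a multi-source Dijkstra.

    adj:       dict mapping node -> list of (neighbor, edge_cost)
    uav_seeds: dict mapping uav_id -> seed node (e.g., the node nearest that UAV)
    Returns:   dict mapping node -> owning uav_id
    """
    dist = defaultdict(lambda: float("inf"))
    owner = {}
    pq = []
    # Seed the priority queue with every UAV's node at zero cost.
    for uav_id, seed in uav_seeds.items():
        dist[seed] = 0.0
        owner[seed] = uav_id
        heapq.heappush(pq, (0.0, seed, uav_id))
    while pq:
        d, node, uav_id = heapq.heappop(pq)
        if d > dist[node]:
            continue  # stale queue entry
        for nbr, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist[nbr]:
                dist[nbr] = nd
                owner[nbr] = uav_id  # nbr falls in this UAV's Voronoi cell
                heapq.heappush(pq, (nd, nbr, uav_id))
    return owner
```

Because all seeds expand in a single priority queue, each node is claimed by the UAV with the cheapest path to it, which is exactly the partition used for task allocation at the graph level.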
Abstract: The development of aerial autonomy has enabled aerial robots to fly agilely in complex environments. However, dodging fast-moving objects in flight remains a challenge, limiting the further application of unmanned aerial vehicles (UAVs). The bottleneck in solving this problem is the accurate perception of rapidly moving objects. Recently, event cameras have shown great potential for this task. This paper presents a complete perception system, including ego-motion compensation, object detection, and trajectory prediction for fast-moving dynamic objects, with low latency and high precision. First, we propose an accurate ego-motion compensation algorithm that considers both rotational and translational motion for more robust object detection. Then, for dynamic object detection, an efficient event camera-based regression algorithm is designed. Finally, we propose an optimization-based approach that asynchronously fuses event and depth cameras for trajectory prediction. Extensive real-world experiments and benchmarks are performed to validate our framework. Moreover, our code will be released to benefit related research.
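As a rough illustration of ego-motion compensation that accounts for both rotation and translation, the sketch below warps events to a common reference time using the classic image motion-field equations. This is a generic first-order warp, not the authors' algorithm; the camera intrinsics, the angular and linear velocity estimates, the depth source, and the sign convention are all assumptions.

```python
import numpy as np

def compensate_events(events, t_ref, omega, vel, depth, K):
    """Warp events to a common reference time t_ref by removing the image
    motion induced by the camera's rotation (omega) and translation (vel).

    events: (N, 3) array of (x_px, y_px, t) pixel coordinates and timestamps
    omega:  (3,) angular velocity [rad/s] in the camera frame
    vel:    (3,) linear velocity [m/s] in the camera frame
    depth:  scene depth [m] for the translational term (scalar or per event)
    K:      (3, 3) camera intrinsic matrix
    Returns (N, 2) warped pixel coordinates at t_ref.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (events[:, 0] - cx) / fx          # normalized image coordinates
    y = (events[:, 1] - cy) / fy
    dt = events[:, 2] - t_ref             # time offset of each event

    wx, wy, wz = omega
    vx, vy, vz = vel
    Z = np.asarray(depth, dtype=float)

    # Motion-field equations: rotational part plus depth-dependent
    # translational part (sign convention assumed here).
    u_dot = (x * y * wx - (1.0 + x**2) * wy + y * wz) + (-vx + x * vz) / Z
    v_dot = ((1.0 + y**2) * wx - x * y * wy - x * wz) + (-vy + y * vz) / Z

    # First-order warp back to t_ref, then return to pixel coordinates.
    x_ref = x - u_dot * dt
    y_ref = y - v_dot * dt
    return np.stack([x_ref * fx + cx, y_ref * fy + cy], axis=1)
```

After such a warp, events generated by the static background collapse into sharp edges, so residual misaligned events are attributable to independently moving objects, which is what makes the subsequent detection step tractable.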