Abstract: The Robotic Template Library (RTL) is a set of tools for geometry and point cloud processing, especially in robotic applications. The software package covers basic objects such as vectors, line segments, quaternions, and rigid transformations; however, its main contribution lies in the more advanced modules: the segmentation module for batch or stream clustering of point clouds, the fast vectorization module for approximating continuous point clouds with higher-level geometric primitives, and the LaTeX export module enabling automated generation of high-quality visual outputs. It is a header-only library written in C++17, uses the Eigen library as its linear algebra back-end, and is designed with high computational performance in mind. RTL can be used in a wide range of robotic tasks such as motion planning, map building, object recognition, and many others, but its point cloud processing utilities are general enough to be employed in any field involving object reconstruction or computer vision.
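To give a flavour of the primitives involved, the following minimal sketch applies a rigid transformation (a quaternion rotation followed by a translation) to a point using plain Eigen, the back-end mentioned above. RTL's own type names are not reproduced here, so this is an illustrative equivalent written against the Eigen API rather than the library's actual interface.

```cpp
#include <Eigen/Geometry>
#include <iostream>

int main() {
    // A rigid transformation: rotation (expressed as a quaternion) plus translation.
    Eigen::Quaterniond q(Eigen::AngleAxisd(EIGEN_PI / 2.0, Eigen::Vector3d::UnitZ()));
    Eigen::Translation3d t(1.0, 0.0, 0.0);
    Eigen::Isometry3d rigid = t * q;  // rotate first, then translate

    // Apply it to a point; RTL's transformation objects wrap this kind of
    // Eigen operation (an assumption based on the back-end stated above).
    Eigen::Vector3d p(1.0, 0.0, 0.0);
    Eigen::Vector3d p_out = rigid * p;

    std::cout << p_out.transpose() << std::endl;  // prints: 1 1 0
    return 0;
}
```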
Abstract: Research on autonomous driving is advancing rapidly and requires new data and techniques to progress even further. In response to this demand, we present an extension of our recent work, the Brno Urban Dataset (BUD). The new data focus on winter conditions in various snow-covered environments and feature additional LiDAR and radar sensors for object detection in front of the vehicle. The extension improves the original data as well: we provide YOLO detection annotations for all previously published RGB images in the dataset. These detections are also transferred by our own algorithm into the infrared (IR) images captured by the thermal camera. To the best of our knowledge, this makes the dataset the largest source of machine-annotated thermal images currently available. The dataset is published under the MIT license at https://github.com/Robotics-BUT/Brno-Urban-Dataset.
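The paper describes the actual RGB-to-IR transfer algorithm; the sketch below is only a hedged illustration of the general idea of mapping a bounding box between two calibrated image planes. It assumes a planar homography H_rgb_to_ir between the cameras, which is a simplifying assumption and not the published method; the `Box` struct and function names are hypothetical.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <array>

// Hypothetical axis-aligned bounding box in pixel coordinates.
struct Box { double x_min, y_min, x_max, y_max; };

// Map an RGB-image box into the IR image via a 3x3 homography (assumed known,
// e.g. from extrinsic/intrinsic calibration of the two cameras).
Box transferBox(const Box& rgb, const Eigen::Matrix3d& H_rgb_to_ir) {
    const std::array<Eigen::Vector3d, 4> corners = {
        Eigen::Vector3d(rgb.x_min, rgb.y_min, 1.0),
        Eigen::Vector3d(rgb.x_max, rgb.y_min, 1.0),
        Eigen::Vector3d(rgb.x_max, rgb.y_max, 1.0),
        Eigen::Vector3d(rgb.x_min, rgb.y_max, 1.0)};

    Box ir{1e9, 1e9, -1e9, -1e9};
    for (const auto& c : corners) {
        Eigen::Vector3d p = H_rgb_to_ir * c;     // project the corner...
        const double x = p.x() / p.z();          // ...and dehomogenize
        const double y = p.y() / p.z();
        ir.x_min = std::min(ir.x_min, x);
        ir.y_min = std::min(ir.y_min, y);
        ir.x_max = std::max(ir.x_max, x);
        ir.y_max = std::max(ir.y_max, y);
    }
    return ir;  // axis-aligned hull of the four projected corners
}
```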
Abstract: In this paper, we present our new sensor fusion framework for self-driving cars and other autonomous robots. We have designed the framework as a universal and scalable platform for building a robust 3D model of the agent's surrounding environment by fusing a wide range of sensors into a data model that serves as a basis for decision-making and planning algorithms. Our software currently covers the fusion of data from RGB and thermal cameras, 3D LiDARs, an IMU, and a GNSS positioning receiver. The framework covers the complete pipeline: data loading, filtering, preprocessing, environment model construction, visualization, and data storage. The architecture allows the community to modify the existing setup or to extend our solution with new ideas. The entire software is fully compatible with ROS (Robot Operating System), which allows the framework to cooperate with other ROS-based software. The source code is available as open source under the MIT license. See https://github.com/Robotics-BUT/Atlas-Fusion.
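Atlas Fusion's internal interfaces are not reproduced here. As a minimal sketch of the kind of ROS integration the abstract refers to, the following roscpp node time-aligns a camera image with a LiDAR point cloud using the standard message_filters approximate-time policy; the topic and node names are hypothetical placeholders, not the framework's actual configuration.

```cpp
#include <ros/ros.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>

// Called with an approximately time-aligned camera frame and LiDAR scan.
void fuseCallback(const sensor_msgs::ImageConstPtr& img,
                  const sensor_msgs::PointCloud2ConstPtr& cloud) {
    ROS_INFO("fusing image %u.%09u with cloud %u.%09u",
             img->header.stamp.sec, img->header.stamp.nsec,
             cloud->header.stamp.sec, cloud->header.stamp.nsec);
    // ... project the cloud into the image, update the environment model ...
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "fusion_sketch");
    ros::NodeHandle nh;

    // Topic names are hypothetical placeholders.
    message_filters::Subscriber<sensor_msgs::Image> img_sub(nh, "/camera/image", 10);
    message_filters::Subscriber<sensor_msgs::PointCloud2> cloud_sub(nh, "/lidar/points", 10);

    using Policy = message_filters::sync_policies::ApproximateTime<
        sensor_msgs::Image, sensor_msgs::PointCloud2>;
    message_filters::Synchronizer<Policy> sync(Policy(10), img_sub, cloud_sub);
    sync.registerCallback(fuseCallback);

    ros::spin();
    return 0;
}
```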
Abstract: Autonomous driving is a dynamically growing field of research in which the quality and quantity of experimental data are critical. Although several rich datasets are available these days, the demands of researchers and the technical possibilities are still evolving. In this paper, we present a new dataset recorded in Brno, Czech Republic. It offers data from four WUXGA cameras, two 3D LiDARs, an inertial measurement unit, an infrared camera, and, in particular, a differential RTK GNSS receiver with centimetre accuracy which, to the best knowledge of the authors, is not available in any other public dataset so far. In addition, all the data are precisely timestamped with sub-millisecond precision to enable a wider range of applications. At the time of publishing this paper, recordings of more than 350 km of rides in varying environments are shared at https://github.com/Robotics-BUT/Brno-Urban-Dataset.
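Sub-millisecond timestamps make it straightforward to pair records across sensors. The sketch below assumes a simple per-record timestamp field (the struct and field names are hypothetical, not the dataset's actual schema) and finds the record nearest in time to a query stamp by binary search over a time-sorted stream.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical record type: a microsecond timestamp per sensor measurement,
// matching the sub-millisecond precision described above.
struct StampedRecord {
    std::uint64_t stamp_us;
};

// Return the index of the record closest in time to query_us.
// Assumes `records` is sorted by stamp_us, so a binary search suffices.
std::size_t nearestRecord(const std::vector<StampedRecord>& records,
                          std::uint64_t query_us) {
    auto it = std::lower_bound(
        records.begin(), records.end(), query_us,
        [](const StampedRecord& r, std::uint64_t t) { return r.stamp_us < t; });
    if (it == records.begin()) return 0;
    if (it == records.end()) return records.size() - 1;
    auto prev = std::prev(it);  // prev->stamp_us < query_us <= it->stamp_us
    return (query_us - prev->stamp_us <= it->stamp_us - query_us)
               ? static_cast<std::size_t>(prev - records.begin())
               : static_cast<std::size_t>(it - records.begin());
}
```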