Abstract:The European Moon Rover System (EMRS) Pre-Phase A activity is part of the European Exploration Envelope Programme (E3P) and seeks to develop a versatile surface mobility solution for future lunar missions, including the Polar Explorer (PE), In-Situ Resource Utilization (ISRU), Astrophysics Lunar Observatory (ALO), and Lunar Geological Exploration Mission (LGEM). Designing a multipurpose rover that can serve these missions is therefore crucial. The rover must be compatible with three different mission scenarios, each with an independent payload, making flexibility the key design driver. This study focuses on modularity in the rover's locomotion solution and autonomous on-board system. The proposed EMRS solution has also been tested at an analogue facility to validate the modular mobility concept. The tests covered the rover's mobility in a lunar soil simulant testbed, different locomotion modes on rocky and uneven terrain, robustness against obstacles, and excavation of lunar regolith. As a result, the EMRS project has developed a multipurpose modular rover concept, with the power, thermal control, insulation, and dust protection systems designed for further phases. This paper highlights the potential of the EMRS system for lunar exploration and the importance of modularity in rover design.
Abstract:This document presents the study conducted during the European Moon Rover System Pre-Phase A project, in which we developed a modular lunar rover system capable of carrying out missions with very different objectives: excavating and transporting over 200 kg of regolith, building an astrophysical observatory on the far side of the Moon, placing scientific instrumentation at the lunar south pole, or studying the volcanic history of the Moon. To achieve this, a modular approach was adopted for the design of the platform in terms of locomotion and mobility, including on-board autonomy. A modular platform can accommodate different payloads and place each in the position most advantageous for its mission (for example, giving direct access to the lunar surface to the payloads that require it), while also allowing payloads to be relocated and the rover itself to be reconfigured to perform completely different tasks.
Abstract:We present the DLR Planetary Stereo, Solid-State LiDAR, Inertial (S3LI) dataset, recorded on Mt. Etna, Sicily, an environment analogous to the Moon and Mars, using a hand-held sensor suite with attributes suitable for implementation on a space-like mobile rover. The environment is characterized by challenging conditions in both visual and structural appearance: severe visual aliasing poses significant limitations to the ability of visual SLAM systems to perform place recognition, while the absence of distinctive structural details, combined with the limited field of view of the Solid-State LiDAR sensor, challenges traditional LiDAR SLAM for the task of pose estimation from point clouds alone. With this data, which covers more than 4 kilometers of travel on soft volcanic slopes, we aim to 1) provide a tool to expose the limitations of state-of-the-art SLAM systems in environments that are not represented in widely available datasets and 2) motivate the development of novel localization and mapping approaches that rely efficiently on the complementary capabilities of the two sensors. The dataset is accessible at the following URL: https://rmc.dlr.de/s3li_dataset
Abstract:In the future, extraterrestrial expeditions will be conducted not only by rovers but also by flying robots. The technology demonstration drone Ingenuity, which recently landed on Mars, will mark the beginning of a new era of exploration unhindered by terrain traversability. Robust self-localization is crucial for this. Cameras, which are lightweight, cheap, and information-rich sensors, are already used to estimate the ego-motion of vehicles. However, methods proven to work in man-made environments cannot simply be deployed on other planets. The highly repetitive textures present in the wastelands of Mars pose a huge challenge to approaches based on descriptor matching. In this paper, we present a robust monocular odometry algorithm that uses efficient optical flow tracking to obtain feature correspondences between images and a refined keyframe selection criterion. In contrast to most other approaches, our framework can also handle rotation-only motions, which are particularly challenging for monocular odometry systems. Furthermore, we present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix. In this way we obtain an implicit measure of uncertainty. We evaluate the validity of our approach on all sequences of a challenging real-world dataset captured in a Mars-like environment and show that it outperforms state-of-the-art approaches.
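The scale-drift risk idea in this abstract can be illustrated with a minimal sketch. This is our own hypothetical reading, not the authors' implementation: we eigendecompose a 3x3 relative-translation information matrix and treat a weakly constrained principal direction as an indicator of elevated drift risk. The function name and the eigenvalue-ratio heuristic are assumptions.

```python
import numpy as np

def scale_drift_risk(info_matrix: np.ndarray) -> float:
    """Heuristic scale-drift risk from a 3x3 relative-translation
    information (inverse-covariance) matrix.

    A principal component analysis of the information matrix reveals how
    well each translation direction is constrained: a small eigenvalue
    means the corresponding direction, and hence the scale along it, is
    weakly observed by the measurements.
    """
    # eigvalsh: eigenvalues of a symmetric matrix, in ascending order.
    eigvals = np.linalg.eigvalsh(info_matrix)
    # Ratio of weakest to strongest constraint; near 0 => high risk.
    confidence = eigvals[0] / eigvals[-1]
    return 1.0 - confidence

# A well-conditioned information matrix yields low risk...
well_constrained = np.diag([100.0, 90.0, 95.0])
# ...while one nearly degenerate direction yields high risk.
weakly_constrained = np.diag([100.0, 90.0, 1.0])
assert scale_drift_risk(well_constrained) < scale_drift_risk(weakly_constrained)
```

In a full system, such a scalar could gate when to trust the estimated translation magnitude or when to trigger additional keyframes.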
Abstract:Future planetary missions will rely on rovers that can autonomously explore and navigate in unstructured environments. An essential element is the ability to recognize places that were already visited or mapped. In this work we leverage the ability of stereo cameras to provide both visual and depth information, guiding the search and validation of loop closures from a multi-modal perspective. We propose to augment submaps, created by aggregating stereo point clouds, with visual keyframes. Point cloud matches are found by comparing CSHOT descriptors and validated by clustering, while visual matches are established by comparing keyframes using Bag-of-Words (BoW) and ORB descriptors. The relative transformations resulting from both keyframe and point cloud matches are then fused to provide pose constraints between submaps in our graph-based SLAM framework. Using the LRU rover, we performed several tests in both an indoor laboratory environment and a challenging planetary analog environment on Mount Etna, Italy. These environments contain areas where either keyframes or point clouds alone fail to provide adequate matches, demonstrating the benefit of the proposed multi-modal approach.
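The fusion of relative transformations from the two match sources can be sketched generically. This is a standard information-weighted combination, not the authors' code; translation only, for brevity (a full SE(3) fusion would also handle rotation), and all names are our assumptions.

```python
import numpy as np

def fuse_constraints(t_visual, info_visual, t_lidar, info_lidar):
    """Information-weighted fusion of two relative-translation estimates.

    Each match source (visual keyframes, point clouds) yields a relative
    translation plus an information (inverse-covariance) matrix. The
    fused estimate is the information-weighted mean, i.e. a one-step
    linear least-squares combination of the two measurements.
    """
    info_fused = info_visual + info_lidar
    t_fused = np.linalg.solve(
        info_fused,
        info_visual @ t_visual + info_lidar @ t_lidar,
    )
    return t_fused, info_fused

# With equal confidence in both sources, the fused translation is the mean.
t, info = fuse_constraints(
    np.array([1.0, 0.0, 0.0]), np.eye(3),
    np.array([0.0, 1.0, 0.0]), np.eye(3),
)
assert np.allclose(t, [0.5, 0.5, 0.0])
```

When one source is far more confident (a larger information matrix), the fused constraint moves toward that source's estimate, which is the behavior a pose-graph back-end expects from combined loop-closure edges.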