Abstract: Simulating vehicle motion in planetary environments is challenging because it requires modeling complex terrain, optical conditions, and terrain-aware vehicle dynamics. A critical limitation of typical simulators is that they treat the terrain as a rigid body, which restricts their ability to render wheel traces and to compute wheel-terrain interactions. This prevents, for example, the use of wheel traces as landmarks for localization, as well as the accurate simulation of motion. Lunar regolith, however, is not rigid but granular; compared with rigid terrain, the rover's motion therefore differs, exhibiting sinkage and slippage, and a clear wheel trace is left behind the rover. This study presents a novel approach that integrates a terramechanics-aware terrain deformation engine to simulate realistic wheel traces in a digital lunar environment. By leveraging Discrete Element Method simulation results alongside experimental single-wheel test data, we construct a regression model that derives deformation height as a function of contact normal force. The region of interest in a height map is retrieved from the wheel poses, and the elevation values of the corresponding pixels are then modified using the contact normal forces and the regression model. Finally, we apply the resulting elevation change to each mesh vertex to render wheel traces at runtime. The deformation engine is integrated into our ongoing development of a lunar simulator based on NVIDIA's Omniverse IsaacSim. We hypothesize that our work will be crucial to testing perception and downstream navigation systems under conditions similar to outdoor or terrestrial field environments. A demonstration video is available here: https://www.youtube.com/watch?v=TpzD0h-5hv4
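The pipeline described above (regress deformation depth from contact normal force, select the height-map region under the wheel, lower those pixels, and push the change to the terrain mesh) can be summarized with a minimal sketch. This is an illustration only, assuming a linear regression with placeholder coefficients and a square contact footprint; the function names, grid layout, and parameters are hypothetical and are not taken from the paper's IsaacSim integration.

```python
import numpy as np

def deformation_height(contact_normal_force, a=1e-4, b=0.0):
    """Regression model mapping contact normal force [N] to deformation depth [m].
    The linear form and coefficients are placeholders; the paper fits such a model
    to DEM simulation results and single-wheel test data."""
    return a * contact_normal_force + b

def apply_wheel_trace(height_map, wheel_xy, wheel_radius_px, force, cell_size):
    """Lower the height-map pixels in the region of interest under the wheel
    footprint by the regressed deformation depth."""
    h, w = height_map.shape
    cx, cy = int(wheel_xy[0] / cell_size), int(wheel_xy[1] / cell_size)
    dh = deformation_height(force)
    # Region of interest: square patch around the wheel contact point (hypothetical shape).
    x0, x1 = max(cx - wheel_radius_px, 0), min(cx + wheel_radius_px + 1, w)
    y0, y1 = max(cy - wheel_radius_px, 0), min(cy + wheel_radius_px + 1, h)
    height_map[y0:y1, x0:x1] -= dh
    return height_map

# Example: 60 N of contact force on a 0.05 m grid lowers a small patch of terrain.
# The same per-pixel elevation change would then be written to the corresponding
# terrain-mesh vertex heights each simulation step to render the trace at runtime.
hm = np.zeros((100, 100))
hm = apply_wheel_trace(hm, wheel_xy=(2.5, 2.5), wheel_radius_px=3, force=60.0, cell_size=0.05)
```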
Abstract: The ability to automatically learn movements and behaviors of increasing complexity is a long-term goal in autonomous systems. This is a very complex problem: it involves understanding how humans acquire and reuse knowledge, as well as proposing mechanisms that allow artificial agents to reuse previous knowledge. Inspired by the first three sensorimotor substages of Jean Piaget's theory, this work presents a cognitive agent, based on CONAIM (Conscious Attention-Based Integrated Model), that can learn procedures incrementally. Throughout the paper, we show the cognitive functions required in each substage and how adding new functions helps the agent address tasks it could not previously solve. Experiments were conducted with a humanoid robot performing an object-tracking task in a simulated environment modeled with the Cognitive Systems Toolkit (CST). The system uses a single procedural learning mechanism based on Reinforcement Learning, and the agent's increasing cognitive complexity is managed by adding new terms to the reward function at each learning phase. Results show that this approach can solve complex tasks incrementally.
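The idea of growing the reward function with each learning phase can be sketched as follows. This is a minimal illustration under stated assumptions: the term functions, weights, and state layout are hypothetical and do not reproduce the paper's CONAIM/CST agent; only the structure (one new reward term per substage) reflects the abstract.

```python
def tracking_term(state):
    # Hypothetical: negative pixel offset of the tracked object from the image center.
    return -abs(state["object_offset"])

def attention_term(state):
    # Hypothetical: bonus when the attentional focus coincides with the object.
    return 1.0 if state["focus_on_object"] else 0.0

def coordination_term(state, action):
    # Hypothetical: bonus for motor actions that keep the object in view.
    return 1.0 if action == state["best_action"] else 0.0

def reward(state, action, phase, w=(1.0, 0.5, 0.5)):
    """Phase-dependent reward: each sensorimotor substage adds a new term."""
    r = w[0] * tracking_term(state)                     # substage 1: basic tracking
    if phase >= 2:
        r += w[1] * attention_term(state)               # substage 2: attention
    if phase >= 3:
        r += w[2] * coordination_term(state, action)    # substage 3: coordination
    return r

# Example: a phase-3 agent is rewarded for tracking, attending, and coordinating.
s = {"object_offset": 12, "focus_on_object": True, "best_action": "look_left"}
print(reward(s, "look_left", phase=3))
```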