Abstract: Autonomous exploration is a crucial aspect of robotics, enabling robots to explore unknown environments and generate maps without prior knowledge. This paper proposes a method to enhance exploration efficiency by integrating neural-network-based occupancy grid map prediction with an uncertainty-aware Bayesian neural network. The uncertainty from the occupancy grid map prediction is probabilistically integrated into the mutual information used for exploration. To demonstrate the effectiveness of the proposed method, we conducted comparative simulations within a frontier exploration framework in a realistic simulator environment against various information metrics. The proposed method showed superior performance in terms of exploration efficiency.
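The integration the abstract describes can be illustrated with a minimal sketch (the function names, the variance-based discount, and the Monte Carlo setup are assumptions for illustration, not the paper's implementation): stochastic forward passes of a Bayesian occupancy predictor give a mean occupancy probability and a predictive variance per cell, and the cell's entropy (its expected information gain) is discounted by that uncertainty when scoring candidate frontiers.

```python
import numpy as np

def cell_entropy(p, eps=1e-9):
    """Binary entropy of an occupancy probability, in bits."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def uncertainty_weighted_info(mc_samples):
    """Score predicted map cells from Monte Carlo occupancy samples.

    mc_samples: (S, H, W) array of occupancy probabilities, one
    (H, W) map per stochastic forward pass of the Bayesian network.
    Returns a per-cell score: entropy of the mean prediction,
    discounted where the predictive variance is high.
    """
    mean = mc_samples.mean(axis=0)   # predictive mean occupancy
    var = mc_samples.var(axis=0)     # epistemic-uncertainty proxy
    info = cell_entropy(mean)        # expected information gain per cell
    # 0.25 is the maximum variance of values in [0, 1], so the
    # discount factor falls from 1 (confident) to 0 (maximally unsure)
    return info * (1.0 - np.clip(var / 0.25, 0.0, 1.0))
```

A frontier would then be scored by summing this quantity over the predicted cells visible from it, so confidently unknown regions attract the robot more than regions the network cannot predict reliably.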
Abstract: Reliable perception of targets is crucial for the stable operation of autonomous robots. A widely preferred method is keypoint identification in an image, as it allows direct mapping from raw images to 2D coordinates, facilitating integration with other algorithms such as localization and path planning. In this study, we closely examine the design and identification of keypoint patches in cluttered environments, where factors such as blur and shadows can hinder detection. We propose four simple yet distinct designs that account for variations in scale, rotation, and camera projection using a limited number of pixels. Additionally, we customize the SuperPoint network to ensure robust detection under various types of image degradation. The effectiveness of our approach is demonstrated through real-world video tests, highlighting its potential for vision-based autonomous systems.
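One common way to make a detector robust to the degradations the abstract mentions is to synthesize them at training time. The sketch below is an illustrative assumption, not the paper's pipeline: it applies a box blur and a synthetic half-plane shadow to a grayscale image, the kind of augmentation a keypoint network could be trained against.

```python
import numpy as np

def degrade(img, rng, blur_k=3, shadow=0.5):
    """Apply blur and a synthetic shadow to a grayscale image in [0, 1].

    Illustrative training-time augmentation; the kernel size and
    shadow strength are arbitrary assumptions.
    """
    # Box blur via a 2D moving average with edge padding.
    k = np.ones((blur_k, blur_k)) / blur_k**2
    pad = blur_k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + blur_k, j:j + blur_k] * k).sum()
    # Darken a random half-plane to mimic a cast shadow.
    if rng.random() < 0.5:
        out[:, : img.shape[1] // 2] *= shadow
    else:
        out[img.shape[0] // 2:, :] *= shadow
    return np.clip(out, 0.0, 1.0)
```

Training the detector on pairs of clean and degraded views of the same patch encourages keypoint responses that survive blur and shadow in real footage.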
Abstract: This paper reviews exploration techniques in deep reinforcement learning. Exploration techniques are of primary importance when solving sparse reward problems. In sparse reward problems the reward signal is rare, so an agent acting randomly will seldom encounter it. In such a scenario, it is challenging for reinforcement learning to learn the association between actions and rewards, and more sophisticated exploration methods need to be devised. This review provides a comprehensive overview of existing exploration approaches, which are categorized according to their key contribution as follows: rewarding novel states, rewarding diverse behaviours, goal-based methods, probabilistic methods, imitation-based methods, safe exploration, and random-based methods. Then, the unsolved challenges are discussed to provide valuable future research directions. Finally, the approaches of the different categories are compared in terms of complexity, computational effort, and overall performance.
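As a concrete instance of the "rewarding novel states" category, a minimal count-based exploration bonus can be sketched as follows (the class name, hashing scheme, and bonus scale are illustrative assumptions, not drawn from any one surveyed method):

```python
import math
from collections import defaultdict

class CountBonus:
    """Count-based intrinsic reward: r_int = beta / sqrt(N(s)).

    Rarely visited states yield a large bonus, steering the agent
    toward novelty even when the extrinsic reward is sparse.
    """
    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        key = tuple(state)        # discretized or hashed state
        self.counts[key] += 1
        return self.beta / math.sqrt(self.counts[key])
```

The agent then maximizes the sum of extrinsic and intrinsic reward, which turns an otherwise sparse objective into a dense one, at the cost of the extra memory and computation the review compares across categories.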