Abstract: This paper presents a technique for mobile robot navigation using a Deep Q-Network (DQN) combined with a Gated Recurrent Unit (GRU). Integrating the GRU into the DQN allows action skipping, which improves navigation performance. The technique targets efficient navigation of mobile robots such as autonomous parking robots. The reinforcement learning framework can be applied to the DQN combined with the GRU in a real environment, which is modeled as a Partially Observable Markov Decision Process (POMDP). By allowing action skipping, the ability of the DQN combined with the GRU to learn key actions is improved. The proposed algorithm is evaluated for feasibility in a realistic environment using the ROS-Gazebo simulator, and the simulation results show that it achieves improved navigation and collision-avoidance performance compared with DQN alone and with the DQN-GRU combination without action skipping.
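The sketch below is a minimal, illustrative Q-network of the kind the abstract describes: a GRU that summarizes the observation history (standing in for the hidden state of the POMDP) followed by a DQN head whose outputs cover both the action and an action-skip length. The network sizes, observation dimension, action count, and the particular way the skip length is encoded are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed details, not the paper's exact architecture):
# a GRU-based Q-network whose action space includes an action-skip length.
import torch
import torch.nn as nn

class GRUQNetwork(nn.Module):
    def __init__(self, obs_dim=24, hidden_dim=64, num_actions=5, max_skip=4):
        super().__init__()
        # GRU summarizes the observation history, which stands in for the
        # unobservable state of the POMDP.
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # One Q-value per (action, skip-length) pair: choosing a larger
        # skip repeats the same action for several control steps.
        self.q_head = nn.Linear(hidden_dim, num_actions * max_skip)
        self.num_actions = num_actions
        self.max_skip = max_skip

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, seq_len, obs_dim) observation history
        out, hidden = self.gru(obs_seq, hidden)
        q = self.q_head(out[:, -1])  # Q-values from the last GRU step
        return q.view(-1, self.num_actions, self.max_skip), hidden

# Greedy selection: pick the action and how many steps to repeat it
# (skip index 0 means execute the action for a single step).
net = GRUQNetwork()
obs_history = torch.randn(1, 8, 24)          # an 8-step observation history
q_values, _ = net(obs_history)
best = int(q_values.flatten(1).argmax(dim=1))
action, skip = divmod(best, net.max_skip)
```

Treating the skip length as part of the action output is one common way to let a value-based agent commit to an action over several control steps; other encodings (for example, a separate skip head) would also fit the description in the abstract.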