Abstract: Recent advances in legged locomotion research have made legged robots a preferred choice over their wheeled counterparts for navigating challenging terrain. This paper presents a novel locomotion policy, trained using Deep Reinforcement Learning, for a quadrupedal robot equipped with an additional prismatic joint between the knee and foot of each leg. The training is performed in the NVIDIA Isaac Gym simulation environment. Our study investigates the impact of these joints on maintaining the quadruped's desired height and following commanded velocities while traversing challenging terrains. Using a Cost of Transport (CoT) metric, we compare quadrupeds with and without prismatic joints and evaluate the learned policy on a set of challenging terrains in simulation. Our results demonstrate that the added degrees of actuation give the locomotion policy greater flexibility, allowing it to traverse terrains that would be infeasible or prohibitively expensive for the conventional quadrupedal design, with significantly improved efficiency.
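For reference, the Cost of Transport used for such comparisons is commonly the dimensionless energy consumed per unit weight per unit distance; the abstract does not state which variant (mechanical or total electrical power) the authors adopt, so the formula below is the standard definition rather than the paper's exact metric:

```latex
\[
  \mathrm{CoT} \;=\; \frac{E}{m\,g\,d} \;=\; \frac{\bar{P}}{m\,g\,\bar{v}},
\]
% where $E$ is the energy consumed over a traverse of length $d$, $m$ is the robot
% mass, $g$ is gravitational acceleration, and $\bar{P}$, $\bar{v}$ are the average
% power draw and forward speed over the traverse.
```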
Abstract: Sampling-based probabilistic roadmap planners (PRMs) have been successful in motion planning for robots with many degrees of freedom, but may fail to capture the connectivity of the configuration space in scenarios with a critical narrow passage. In this paper, we present a novel technique based on Lévy flights to generate key samples in the narrow regions of the configuration space, which, when combined with a PRM, improves the completeness of the planner. Compared with pure random-walk-based methods, the technique substantially improves sample quality at the cost of minimal additional computation, and it still outperforms the state-of-the-art randomized bridge-building method in terms of the number of collision calls, computational overhead, and sample quality. The method is robust to changes in the parameters describing the structure of the narrow passage, giving it additional generality. A number of 2D and 3D motion planning simulations are presented that demonstrate the effectiveness of the method.
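As a rough illustration of the heavy-tailed stepping behavior the abstract refers to, the sketch below draws Lévy-distributed step lengths via Mantegna's algorithm and perturbs a seed configuration. The function names, the stability index `alpha`, and the step scale are illustrative assumptions; collision checking and the paper's narrow-passage detection are deliberately omitted, so this is not the authors' implementation.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(alpha=1.5, size=1, rng=None):
    """Draw heavy-tailed step magnitudes using Mantegna's algorithm (illustrative)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + alpha) * sin(pi * alpha / 2) /
               (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / alpha)

def levy_flight_samples(q_seed, n_steps=100, scale=0.05, alpha=1.5, rng=None):
    """Random walk in configuration space whose step lengths follow a Levy distribution.

    Many short steps densely explore the neighborhood of the seed, while
    occasional long jumps escape it; this combination is what helps place
    samples inside and around a narrow passage.
    """
    rng = rng or np.random.default_rng()
    q = np.asarray(q_seed, dtype=float)
    samples = [q.copy()]
    for _ in range(n_steps):
        direction = rng.normal(size=q.shape)
        direction /= np.linalg.norm(direction)      # random unit direction
        q = q + scale * abs(levy_step(alpha, rng=rng)[0]) * direction
        samples.append(q.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Candidate samples around a seed configuration in a 2D configuration space.
    pts = levy_flight_samples(q_seed=[0.5, 0.5], n_steps=50)
    print(pts.shape)  # (51, 2)
```

In a full planner, these candidates would be collision-checked and the free ones added to the roadmap alongside uniformly drawn PRM samples.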