Abstract: In robotics, contemporary strategies are learning-based and characterized by a complex black-box nature and a lack of interpretability, which can make it difficult to guarantee stability and safety. To address these issues, we propose integrating an obstacle-free deep reinforcement learning (DRL) trajectory planner with a novel auto-tuning low-level, joint-space control strategy, both of which learn through interaction with the environment. This approach avoids complex model-based computations while also handling nonrepetitive and random obstacle-avoidance tasks. First, a model-free DRL agent is employed to plan velocity-bounded, obstacle-free motion in task space for a manipulator with n degrees of freedom (DoF) through joint-level reasoning. The resulting plan is then fed to a robust subsystem-based adaptive controller, which produces the necessary torques, while the Cuckoo Search Optimization (CSO) algorithm tunes the control gains to minimize rise time, settling time, maximum overshoot, and steady-state tracking error. The approach guarantees that position and velocity errors converge exponentially to zero in an unfamiliar environment, despite unknown manipulator dynamics. Theoretical claims are validated by simulation results.
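The abstract describes Cuckoo Search Optimization tuning the controller gains against four classical transient-response metrics. The sketch below illustrates that idea under stated assumptions: the closed-loop evaluation `simulate_step_response`, the gain bounds, and the cost weights are hypothetical placeholders (a toy second-order surrogate), not the authors' controller or cost function; only the CSO structure (Lévy flights plus abandonment of the worst nests) follows the standard algorithm.

```python
# Minimal Cuckoo Search Optimization (CSO) sketch for gain tuning.
# Assumptions: a toy 2-gain (kp, kd) surrogate model stands in for the real
# subsystem-based adaptive controller; weights and bounds are illustrative.
import math
import numpy as np

def simulate_step_response(gains):
    """Hypothetical closed-loop metrics for a gain vector.
    Returns (rise_time, settling_time, overshoot, steady_state_error)."""
    kp, kd = gains
    wn = math.sqrt(max(kp, 1e-6))                 # surrogate natural frequency
    zeta = kd / (2.0 * wn)                        # surrogate damping ratio
    rise = 1.8 / wn
    settle = 4.0 / max(zeta * wn, 1e-6)
    overshoot = math.exp(-math.pi * zeta / math.sqrt(max(1 - zeta**2, 1e-6))) if zeta < 1 else 0.0
    sse = 1.0 / (1.0 + kp)
    return rise, settle, overshoot, sse

def cost(gains):
    """Weighted sum of rise time, settling time, overshoot, steady-state error."""
    rise, settle, overshoot, sse = simulate_step_response(gains)
    return 1.0 * rise + 0.5 * settle + 2.0 * overshoot + 5.0 * sse

def levy_flight(dim, rng, beta=1.5):
    """Mantegna's algorithm for a Levy-distributed random step."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, n_iter=100, pa=0.25, lb=(1.0, 0.1), ub=(200.0, 50.0)):
    rng = np.random.default_rng(0)
    lb, ub = np.array(lb), np.array(ub)
    nests = rng.uniform(lb, ub, (n_nests, len(lb)))
    fitness = np.array([cost(n) for n in nests])
    for _ in range(n_iter):
        best = nests[np.argmin(fitness)]
        # Generate cuckoos by Levy flights biased toward the current best nest.
        for i in range(n_nests):
            step = 0.01 * levy_flight(len(lb), rng) * (nests[i] - best)
            new = np.clip(nests[i] + step, lb, ub)
            f_new = cost(new)
            if f_new < fitness[i]:
                nests[i], fitness[i] = new, f_new
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        n_abandon = max(int(pa * n_nests), 1)
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = rng.uniform(lb, ub, (n_abandon, len(lb)))
        fitness[worst] = [cost(n) for n in nests[worst]]
    best_i = np.argmin(fitness)
    return nests[best_i], fitness[best_i]

if __name__ == "__main__":
    gains, f = cuckoo_search()
    print("tuned gains:", gains, "cost:", f)
```

In a full pipeline, `simulate_step_response` would be replaced by a simulation of the DRL-planned trajectory tracked by the adaptive controller, with the same four metrics extracted from the resulting error signals.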
Abstract: This paper presents a novel auto-tuning subsystem-based fault-tolerant control (SBFC) scheme for robot manipulators with n degrees of freedom. First, an actuator fault model is employed to account for the various faults that may occur, and second, a mathematical saturation function is incorporated to address torque constraints. Subsequently, a novel robust subsystem-based adaptive control method is proposed to keep the system states closely tracking the desired trajectories in the presence of input constraints, unknown modeling errors, and actuator faults, which are the primary considerations of the proposed scheme; this ensures uniform exponential stability and sustained performance. In addition, optimal SBFC gain values are identified by tuning them with a customized JAYA algorithm (JA), a high-performance swarm-intelligence technique. Theoretical claims are validated by simulation results.
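The abstract mentions a saturation function for torque constraints and a customized JAYA algorithm for tuning the SBFC gains. The sketch below illustrates both ideas under stated assumptions: the `tracking_cost` surrogate, the gain bounds, and the tanh-based saturation form are hypothetical stand-ins (the paper's actual fault model, saturation function, and closed-loop evaluation are not reproduced here); only the JAYA update rule follows the standard algorithm.

```python
# Minimal JAYA algorithm (JA) sketch for SBFC gain tuning.
# Assumptions: a quadratic surrogate replaces the real closed-loop evaluation
# of the faulty, torque-saturated manipulator; bounds are illustrative.
import numpy as np

def smooth_saturation(tau, tau_max):
    """An assumed smooth torque-saturation function (tanh form) that keeps the
    commanded torque within +/- tau_max."""
    return tau_max * np.tanh(tau / tau_max)

def tracking_cost(gains):
    """Hypothetical surrogate for closed-loop tracking error under the SBFC.
    A real evaluation would simulate the manipulator with actuator faults and
    saturated torques, then integrate the tracking error."""
    target = np.array([50.0, 5.0, 0.8])   # assumed 'good' gain region (illustrative)
    return float(np.sum((gains - target) ** 2))

def jaya(n_pop=20, n_iter=200, lb=(1.0, 0.1, 0.01), ub=(100.0, 20.0, 2.0), seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.array(lb), np.array(ub)
    pop = rng.uniform(lb, ub, (n_pop, len(lb)))
    cost = np.array([tracking_cost(p) for p in pop])
    for _ in range(n_iter):
        best, worst = pop[np.argmin(cost)], pop[np.argmax(cost)]
        for i in range(n_pop):
            r1, r2 = rng.random(len(lb)), rng.random(len(lb))
            # JAYA update: move toward the best solution and away from the worst.
            cand = pop[i] + r1 * (best - np.abs(pop[i])) - r2 * (worst - np.abs(pop[i]))
            cand = np.clip(cand, lb, ub)
            c = tracking_cost(cand)
            if c < cost[i]:           # greedy acceptance of improved candidates
                pop[i], cost[i] = cand, c
    i = np.argmin(cost)
    return pop[i], cost[i]

if __name__ == "__main__":
    gains, c = jaya()
    print("tuned SBFC gains:", gains, "cost:", c)
```

A notable property of JAYA, consistent with its use for auto-tuning here, is that it has no algorithm-specific hyperparameters beyond population size and iteration count, which simplifies its customization to a new cost function.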