Abstract: This paper introduces the Minimal Biorobotic Stealth Distance (MBSD), a novel quantitative metric to evaluate the bionic resemblance of biorobotic aircraft. Current technological limitations prevent dragonfly-inspired aircraft from achieving optimal performance at biological scales. To address these challenges, we use the DDD-1 dragonfly-inspired aircraft, a hover-capable direct-drive aircraft, to explore the impact of the MBSD on aircraft design. Key contributions of this research include: (1) the establishment of the MBSD as a quantifiable and operable evaluation metric that influences aircraft design, integrates seamlessly with the overall design process, and provides a new dimension for optimizing bionic aircraft by balancing mechanical attributes against bionic characteristics; (2) the design and analysis of a representative aircraft along four directions: essential characteristics of the MBSD, its coupling relationship with existing performance metrics (Longest Hover Duration and Maximum Instantaneous Forward Flight Speed), multi-objective optimization, and application in a typical mission scenario; (3) the construction and validation of a full-system model for the direct-drive dragonfly-inspired aircraft, demonstrating the design model's effectiveness against existing aircraft data. Detailed calculations of the MBSD consider appearance similarity, dynamic similarity, and environmental similarity.
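As an illustrative sketch only (the weights, symbols, and the specific aggregation below are assumptions made for exposition, not a formulation taken from this abstract), the MBSD could be expressed as a weighted distance over the three similarity terms it considers:

\[
\mathrm{MBSD} = w_a\left(1 - S_{\mathrm{app}}\right) + w_d\left(1 - S_{\mathrm{dyn}}\right) + w_e\left(1 - S_{\mathrm{env}}\right), \qquad w_a + w_d + w_e = 1,
\]

where \(S_{\mathrm{app}}, S_{\mathrm{dyn}}, S_{\mathrm{env}} \in [0,1]\) denote the appearance, dynamic, and environmental similarity scores, respectively; under this reading, a smaller MBSD indicates closer resemblance to the biological dragonfly.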
Abstract: The nonlinear and unstable aerodynamic interference generated by the tandem wings of such biomimetic systems poses substantial challenges for motion control, especially under multiple random operating conditions. To address these challenges, the Concerto Reinforcement Learning Extension (CRL2E) algorithm has been developed. This plug-and-play, fully on-the-job, real-time reinforcement learning algorithm incorporates a novel Physics-Inspired Rule-Based Policy Composer Strategy with a Perturbation Module, alongside a lightweight network optimized for real-time control. To validate the performance and the rationality of the module design, experiments were conducted under six challenging operating conditions, comparing seven different algorithms. The results demonstrate that the CRL2E algorithm achieves safe and stable training within the first 500 steps, improving tracking accuracy by 14 to 66 times compared to the Soft Actor-Critic, Proximal Policy Optimization, and Twin Delayed Deep Deterministic Policy Gradient algorithms. Additionally, CRL2E significantly enhances performance under various random operating conditions, with improvements in tracking accuracy ranging from 8.3% to 60.4% compared to the Concerto Reinforcement Learning (CRL) algorithm. The convergence speed of CRL2E is 36.11% to 57.64% faster than that of the CRL algorithm with only the Composer Perturbation, and 43.52% to 65.85% faster than that of the CRL algorithm when both the Composer Perturbation and Time-Interleaved Capability Perturbation are introduced, especially in conditions where the standard CRL struggles to converge. Hardware tests indicate that the optimized lightweight network structure excels in weight loading and average inference time, meeting real-time control requirements.
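As a hedged sketch of the kind of action composition the abstract describes (the blending rule, symbols, and noise model below are assumptions for illustration, not the paper's stated formulation), a rule-based policy composer with a perturbation module could blend a physics-inspired prior \(\pi_{\mathrm{rule}}\) with the learned policy \(\pi_\theta\) and inject a small exploration perturbation:

\[
a_t = \alpha\,\pi_{\mathrm{rule}}(s_t) + (1-\alpha)\,\pi_\theta(s_t) + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \sigma^2),
\]

where \(\alpha \in [0,1]\) weights the physics-inspired rule against the learned action and \(\sigma\) controls the perturbation magnitude, which would matter most during the early, safety-critical training steps highlighted in the abstract.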