Abstract:This paper considers cooperative control of robots across two different testbed systems in remote locations communicating over the internet. This setup provides the capability to exchange robot status information, such as positions, velocities, and headings, needed by the swarming algorithm. The results show that all robots properly follow a leader defined on one of the testbeds. Measurements of data exchange rates show no packet loss, and average transfer delays stay within the tolerance limits of practical applications. To our knowledge, the novelty of this paper lies in performing this kind of control over a large network such as the internet.
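A minimal sketch of the kind of inter-testbed state exchange described above, assuming a JSON-over-UDP message format; the field names and port number are illustrative, not taken from the paper.

```python
import json
import socket
import time

PORT = 50000  # hypothetical port shared by both testbeds

def send_state(sock, peer_host, robot_id, pos, vel, heading):
    """Broadcast one robot's state to the remote testbed."""
    msg = {"id": robot_id, "pos": pos, "vel": vel,
           "heading": heading, "t": time.time()}
    sock.sendto(json.dumps(msg).encode(), (peer_host, PORT))

def receive_states(sock, timeout=0.05):
    """Collect whatever remote states arrived since the last control step."""
    sock.settimeout(timeout)
    states = {}
    try:
        while True:
            data, _ = sock.recvfrom(4096)
            msg = json.loads(data)
            states[msg["id"]] = msg  # keep only the newest state per robot
    except socket.timeout:
        pass
    return states
```

With synchronized clocks, the embedded timestamp `t` would also let each side estimate the transfer delays that the abstract reports measuring.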
Abstract:The analysis of tabular data has traditionally been dominated by gradient-boosted decision trees (GBDTs), known for their proficiency with mixed categorical and numerical features. However, recent deep learning innovations are challenging this dominance. We introduce Mambular, an adaptation of the Mamba architecture optimized for tabular data. We extensively benchmark Mambular against state-of-the-art models, including neural networks and tree-based methods, and demonstrate its competitive performance across diverse datasets. Additionally, we explore various adaptations of Mambular to understand its effectiveness for tabular data. We investigate different pooling strategies, feature interaction mechanisms, and bi-directional processing. Our analysis shows that interpreting features as a sequence and passing them through Mamba layers results in surprisingly performant models. The results highlight Mambular's potential as a versatile and powerful architecture for tabular data analysis, expanding the scope of deep learning applications in this domain. The source code is available at https://github.com/basf/mamba-tabular.
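A hedged sketch of the "features as a sequence" idea, not the actual Mambular API: each tabular feature is embedded as one token, and a GRU stands in for the Mamba state-space layers. All layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class TabularSequenceModel(nn.Module):
    def __init__(self, n_features, d_model=64, n_classes=2):
        super().__init__()
        # One learned embedding per numerical feature
        # (categorical features would use nn.Embedding lookups instead).
        self.feature_proj = nn.ModuleList(
            [nn.Linear(1, d_model) for _ in range(n_features)]
        )
        # Stand-in sequence mixer; Mambular uses Mamba layers here.
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):              # x: (batch, n_features)
        tokens = torch.stack(
            [proj(x[:, i:i + 1]) for i, proj in enumerate(self.feature_proj)],
            dim=1,                     # -> (batch, n_features, d_model)
        )
        mixed, _ = self.mixer(tokens)  # treat the features as a sequence
        return self.head(mixed.mean(dim=1))  # average pooling over features
```

The mean over the feature axis is one of the pooling strategies the abstract mentions exploring; swapping in last-token or CLS-style pooling changes only the final line.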
Abstract:Multi-Agent Path Finding (MAPF) seeks an optimal set of paths for multiple agents from their respective start locations to their goal locations such that no two paths conflict. We address the MAPF problem for a fleet of hybrid-fuel unmanned aerial vehicles that are subject to location-dependent noise restrictions. We solve this problem by searching a constraint tree in which the subproblem at each node is a set of shortest-path problems subject to the noise and fuel constraints and to conflict-zone avoidance. A labeling algorithm is presented to solve this subproblem, treating the conflict zones as dynamic obstacles. We present experimental results of the algorithms for various graph sizes and numbers of agents.
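To illustrate the labeling idea, here is a simplified resource-constrained shortest-path search with dominance pruning, in the spirit of the subproblem described above; the graph encoding, fuel model, and noise check are all assumptions made for the sketch.

```python
import heapq

def constrained_shortest_path(graph, start, goal, fuel_cap, noise_ok):
    """graph[u] -> list of (v, travel_time, fuel_use).
    noise_ok(v, t) -> False if node v violates a noise restriction at time t."""
    # A label is (time, fuel_left, node, path); labels are expanded in
    # order of arrival time, and dominated labels are pruned.
    frontier = [(0, fuel_cap, start, [start])]
    best = {}  # node -> list of non-dominated (time, fuel_left) pairs
    while frontier:
        t, fuel, u, path = heapq.heappop(frontier)
        if u == goal:
            return t, path
        if any(bt <= t and bf >= fuel for bt, bf in best.get(u, [])):
            continue  # dominated: another label is faster with more fuel
        best.setdefault(u, []).append((t, fuel))
        for v, dt, df in graph.get(u, []):
            if fuel - df >= 0 and noise_ok(v, t + dt):
                heapq.heappush(frontier, (t + dt, fuel - df, v, path + [v]))
    return None
```

A time-dependent `noise_ok` is also how the conflict zones imposed by the constraint tree can be treated as dynamic obstacles.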
Abstract:The uncertainty principle is fundamental in theoretical physics, in both quantum and classical mechanics. It plays a central role in signal processing, including optics, wherever a signal must be analyzed simultaneously in two domains: in harmonic analysis, time and frequency; in quantum mechanics, position and momentum. In recent years, many mathematicians, physicists, and researchers in related fields have paid increasing attention to octonion-related integral transforms. In this paper, we establish important properties of the windowed octonion linear canonical transform (WOCLCT), such as inversion, linearity, parity, and shifting, as well as the relationship between the octonion linear canonical transform (OCLCT) and the WOCLCT. Further, we derive sharp Pitt's and sharp Young-Hausdorff inequalities for the 3D WOCLCT, and we obtain the logarithmic uncertainty principle for the 3D WOCLCT. Furthermore, Heisenberg's and Donoho-Stark's uncertainty principles are derived for the WOCLCT, and potential applications of the WOCLCT are also discussed.
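For orientation, the classical one-dimensional picture that the 3D octonion-valued results generalize can be sketched as follows; this is a hedged analogue, not the paper's 3D definitions.

```latex
% Linear canonical transform with parameter matrix
% A = (a b; c d), ad - bc = 1, b != 0:
\[
  L_A f(u) = \int_{\mathbb{R}} f(t)\, K_A(t,u)\, dt, \qquad
  K_A(t,u) = \frac{1}{\sqrt{2\pi i b}}\,
             e^{\frac{i}{2}\left(\frac{a}{b}t^2
             - \frac{2}{b}tu + \frac{d}{b}u^2\right)} .
\]
% Windowing against phi localizes the analysis in both domains
% simultaneously, which is where the uncertainty principles bite:
\[
  \mathcal{V}_{\phi} f(u, w)
  = \int_{\mathbb{R}} f(t)\, \overline{\phi(t - w)}\, K_A(t,u)\, dt .
\]
```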
Abstract:We consider the geometric amoebot model, where a set of $n$ amoebots is placed on the triangular grid. An amoebot is able to send information to its neighbors and to move via expansions and contractions. Since amoebots and information can only travel node by node, most problems have a natural lower bound of $\Omega(D)$, where $D$ denotes the diameter of the structure. Inspired by the nervous and muscular systems, Feldmann et al. proposed the reconfigurable circuit extension and the joint movement extension of the amoebot model with the goal of breaking this lower bound. The joint movement extension alters the way amoebots move: amoebots become able to push and pull other amoebots. Feldmann et al. demonstrated the power of joint movements by transforming a line of amoebots into a rhombus within $O(\log n)$ rounds. However, they left the details of the extension open. The goal of this paper is therefore to formalize and extend the joint movement extension. To provide a proof of concept for the extension, we consider two fundamental problems of modular robot systems: shape formation and locomotion. We approach these problems by defining meta-modules of rhombical and hexagonal shape, respectively. The meta-modules are capable of movement primitives such as sliding, rotating, and tunneling, which allows us to simulate the shape formation algorithms of various modular robot systems. Finally, we construct three amoebot structures capable of locomotion by rolling, crawling, and walking, respectively.
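A small sketch of the geometric setting, assuming axial coordinates for the triangular grid; the names and the state encoding are illustrative rather than the paper's formalization.

```python
# The six neighbors of a node on the triangular grid (axial coordinates).
DIRECTIONS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def neighbors(node):
    q, r = node
    return [(q + dq, r + dr) for dq, dr in DIRECTIONS]

class Amoebot:
    """A contracted amoebot occupies one node; an expanded one occupies
    two adjacent nodes (head and tail)."""
    def __init__(self, head):
        self.head = head
        self.tail = None            # None while contracted

    def expand(self, direction):
        dq, dr = DIRECTIONS[direction]
        self.tail = self.head
        self.head = (self.head[0] + dq, self.head[1] + dr)

    def contract(self):
        self.tail = None            # pull the tail into the head node
```

The $\Omega(D)$ lower bound follows directly from this picture: with only node-by-node expansions, contractions, and messages, nothing can cross the structure faster than its diameter.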
Abstract:Central Pattern Generators (CPGs) form the neural basis of the rhythmic locomotor behaviors observed in legged animals. Organized into networks, CPG dynamics allow complex locomotor behaviors to emerge. In this work, we take this inspiration to develop walking behaviors in multi-legged robots. We present novel DeepCPG policies that embed CPGs as a layer in a larger neural network and facilitate end-to-end learning of locomotion behaviors in a deep reinforcement learning (DRL) setup. We demonstrate the effectiveness of this approach on insectoid robots simulated in a physics engine. We show that, compared to traditional approaches, DeepCPG policies allow sample-efficient end-to-end learning of effective locomotion strategies even with high-dimensional sensor spaces (vision). We scale the DeepCPG policies using a modular robot configuration and multi-agent DRL. Our results suggest that gradual complexification of these policies with embedded priors, in a modular fashion, could achieve non-trivial sensor and motor integration on a robot platform. These results also indicate the efficacy of bootstrapping more complex intelligent systems from simpler ones based on biological principles. Finally, we present experimental results for a proof-of-concept insectoid robot system in which DeepCPG policies were first learned in simulation and then transferred to real-world robots without any additional fine-tuning.
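A minimal CPG-as-a-layer sketch using Hopf oscillators, a common CPG building block; this illustrates the idea of embedding oscillator dynamics inside a policy network, not the paper's exact DeepCPG formulation. The gains and step size are illustrative.

```python
import torch
import torch.nn as nn

class HopfCPGLayer(nn.Module):
    """Integrates one Euler step of n Hopf oscillators whose frequency and
    amplitude are modulated by the upstream (learned) part of the policy."""
    def __init__(self, n_oscillators, dt=0.01, alpha=10.0):
        super().__init__()
        self.dt, self.alpha = dt, alpha
        self.register_buffer("x", torch.ones(n_oscillators) * 0.1)
        self.register_buffer("y", torch.zeros(n_oscillators))

    def forward(self, omega, mu):
        # omega: desired angular frequencies, mu: squared target amplitudes,
        # both produced by the preceding network layers.
        r2 = self.x**2 + self.y**2
        dx = self.alpha * (mu - r2) * self.x - omega * self.y
        dy = self.alpha * (mu - r2) * self.y + omega * self.x
        self.x = self.x + self.dt * dx
        self.y = self.y + self.dt * dy
        return self.x  # rhythmic signals driving the leg joints
```

Because the oscillator update is differentiable, gradients from the DRL objective can flow through the rhythm generator into the upstream layers, which is what makes end-to-end learning of the embedded prior possible.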
Abstract:Reconfiguring two shortest paths in a graph means modifying one shortest path into the other by changing one vertex at a time so that every intermediate path is also a shortest path. This problem has several natural applications, namely: (a) revamping road networks, (b) rerouting data packets in a synchronous multiprocessing setting, (c) the shipping container stowage problem, and (d) the train marshalling problem. When modelled as graph problems, (a) is the most general case, while (b), (c), and (d) are restrictions to different graph classes. We show that (a) is intractable, even for relaxed variants of the problem. For (b), (c), and (d), we present efficient algorithms that solve the respective problems. We also generalize the problem to the setting where at most $k$ (for a fixed integer $k\geq 2$) contiguous vertices on a shortest path can be changed at a time.
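The step relation can be made concrete as follows: two shortest paths are adjacent if they differ in exactly one vertex, and a plain BFS over this relation decides reachability. Enumerating all shortest paths up front, as this sketch assumes, is a simplification for illustration only.

```python
from collections import deque

def differ_in_one(p, q):
    return len(p) == len(q) and sum(a != b for a, b in zip(p, q)) == 1

def reconfigurable(all_shortest, src, dst):
    """all_shortest: set of shortest paths (as vertex tuples) between the
    two endpoints; src, dst: the two paths to reconfigure between."""
    queue, seen = deque([src]), {src}
    while queue:
        p = queue.popleft()
        if p == dst:
            return True
        for q in all_shortest:
            if q not in seen and differ_in_one(p, q):
                seen.add(q)
                queue.append(q)
    return False
```

The $k$-contiguous generalization mentioned above amounts to relaxing `differ_in_one` so that the differing positions form a block of at most $k$ consecutive indices.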
Abstract:Deep reinforcement learning (RL) has made it possible to solve complex robotics problems using neural networks as function approximators. However, policies trained in stationary environments suffer in terms of generalization when transferred from one environment to another. In this work, we use Robust Markov Decision Processes (RMDPs) to train the drone control policy, combining ideas from robust control and RL. The RMDP formulation opts for pessimistic optimization to handle the potential mismatch encountered when a policy is transferred from one environment to another. The trained control policy is tested on the task of quadcopter positional control. RL agents were trained in the MuJoCo simulator. During testing, different environment parameters (unseen during training) were used to validate the robustness of the trained policy under transfer from one environment to another. The robust policy outperformed the standard agents in these environments, suggesting that the added robustness increases generality and enables adaptation to non-stationary environments. Code: https://github.com/adipandas/gym_multirotor
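The pessimistic optimization at the heart of an RMDP can be written as a worst-case Bellman backup over an uncertainty set of transition models; the standard formulation below is sketched for orientation and is not necessarily the paper's exact construction.

```latex
% Robust Bellman backup: the value maximizes return under the
% worst-case transition model P drawn from an uncertainty set U(s,a).
\[
  V(s) = \max_{a} \; \min_{P \in \mathcal{U}(s,a)}
         \sum_{s'} P(s' \mid s, a)
         \left[ r(s, a, s') + \gamma V(s') \right]
\]
```

Intuitively, the inner minimization plays the role of an adversary perturbing the dynamics, so the learned policy must perform well across the whole uncertainty set rather than in a single training environment.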
Abstract:In this paper, we present a novel developmental reinforcement-learning-based controller for a quadcopter with thrust vectoring capabilities. This multirotor UAV design has tilt-enabled rotors and utilizes the rotor force magnitude and direction to achieve the desired state during flight. The control policy of this robot is learned via policy transfer from the learned controller of a conventional quadcopter (a comparatively simple UAV design without thrust vectoring). This approach allows learning a control policy for systems with multiple inputs and multiple outputs. The performance of the learned policy is evaluated in physics-based simulations on the tasks of hovering and way-point navigation. The flight simulations utilize a flight controller based purely on reinforcement learning, without any additional PID components. The results show faster learning with the presented approach than learning the control policy from scratch for this new UAV design, which is created by modifying a conventional quadcopter to add more degrees of freedom (from 4 actuators in the conventional quadcopter to 8 in the tilt-rotor quadcopter). We demonstrate the robustness of the learned policy by showing, in simulation, the recovery of the tilt-rotor platform from various non-static initial conditions to a desired state. The developmental policy for the tilt-rotor UAV also showed superior fault tolerance compared with a policy learned from scratch. The results demonstrate the ability of the presented approach to bootstrap the learned behavior from a simpler system (lower-dimensional action space) to a more complex robot (comparatively higher-dimensional action space) and to reach better performance faster.
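One plausible way to bootstrap the 8-actuator policy from the 4-actuator one, sketched here under assumed layer names and sizes (not the paper's architecture): copy the shared trunk, reuse the four rotor-thrust outputs, and start the four new tilt outputs near zero so the transferred behavior initially dominates.

```python
import torch
import torch.nn as nn

def make_policy(obs_dim, act_dim):
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                         nn.Linear(64, act_dim))

quad_policy = make_policy(obs_dim=18, act_dim=4)   # pretrained quadcopter
tilt_policy = make_policy(obs_dim=18, act_dim=8)   # tilt-rotor to train

with torch.no_grad():
    # Copy the shared trunk.
    tilt_policy[0].weight.copy_(quad_policy[0].weight)
    tilt_policy[0].bias.copy_(quad_policy[0].bias)
    # Reuse the 4 rotor-thrust outputs; start the 4 tilt outputs small.
    tilt_policy[2].weight[:4].copy_(quad_policy[2].weight)
    tilt_policy[2].bias[:4].copy_(quad_policy[2].bias)
    tilt_policy[2].weight[4:].mul_(0.01)
    tilt_policy[2].bias[4:].zero_()
```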
Abstract:In this paper, we present an autonomous flight controller for a quadcopter with thrust vectoring capabilities. This UAV falls into the category of multirotors with tilt-motion-enabled rotors. Since the vehicle is over-actuated, the dynamics and control allocation have to be analysed carefully. Moreover, the possibility of hovering during large attitude maneuvers of this novel vehicle requires singularity-free attitude control. Hence, quaternion state feedback is utilized to compute the control commands for the UAV motors while avoiding the gimbal-lock condition experienced by Euler-angle-based controllers. The quaternion implementation also reduces the overall complexity of state estimation due to the absence of trigonometric terms. The quadcopter dynamic model and state space are utilized to design the attitude controller and control allocation for the UAV. The control allocation, in particular, is derived by linearizing the system about the hover condition, which renders it more accurate than existing approaches. Lyapunov stability analysis of the attitude controller is presented to prove global stability. The quaternion-feedback attitude controller is commanded by an outer position-control loop, which generates rotor-tilt and desired-quaternion commands for the system. The performance of the UAV is evaluated by numerical simulations for tracking attitude step commands and for following a way-point navigation mission.
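A minimal quaternion-feedback attitude law in the spirit described above; the gains and the torque mapping are illustrative, and the paper's full controller additionally handles control allocation and rotor tilt.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_torque(q, q_des, omega, kq=8.0, kw=2.0):
    """Quaternion error feedback: no Euler angles, hence no gimbal lock."""
    q_err = quat_mul(quat_conj(q_des), q)   # rotation from desired to actual
    sign = 1.0 if q_err[0] >= 0 else -1.0   # take the shorter rotation
    return -kq * sign * q_err[1:] - kw * omega

# Example: small roll error with the body at rest.
q = np.array([0.999, 0.044, 0.0, 0.0])      # ~5 deg roll
tau = attitude_torque(q, np.array([1.0, 0, 0, 0]), np.zeros(3))
```

The sign correction on the scalar part avoids the "unwinding" behavior of naive quaternion feedback, since $q$ and $-q$ represent the same attitude.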