University of New South Wales Canberra
Abstract:Robotic shepherding is a bio-inspired approach to autonomously guiding a swarm of agents towards a desired location and has earned increasing research interest recently. However, shepherding a highly dispersed swarm in an obstructive environment remains challenging for existing methods. To improve shepherding efficacy in complex environments with obstacles and dispersed sheep, this paper proposes a planning-assisted autonomous shepherding framework with collision avoidance. The proposed approach transforms the swarm shepherding problem into a single Travelling Salesman Problem (TSP), with the sheepdog's movement classified into non-interaction and interaction modes. Additionally, an adaptive switching approach is integrated into the framework to guide real-time path planning for avoiding collisions with obstacles and, when necessary, with the sheep swarm. The overarching hierarchical mission planning system is then presented, consisting of a grouping approach to obtain sheep sub-swarms, a general TSP solver for determining the optimal push sequence of the sub-swarms, and an online path planner for calculating optimal paths for both the sheepdogs and the sheep. Experiments in a range of environments, both with and without obstacles, quantitatively demonstrate the effectiveness of the proposed shepherding framework and planning approaches.
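A minimal sketch of the grouping and sequencing stages described above, assuming a simple k-means-style grouping of sheep positions into sub-swarms and a nearest-neighbour heuristic for the TSP push sequence; the function names, the clustering choice and the heuristic are illustrative assumptions rather than the paper's exact components.

```python
# Illustrative sketch (not the paper's exact method): group sheep into sub-swarms,
# then order the sub-swarm centroids with a nearest-neighbour TSP heuristic.
import numpy as np


def group_sheep(positions: np.ndarray, k: int, iters: int = 50):
    """Simple k-means-style grouping of sheep positions into k sub-swarms."""
    rng = np.random.default_rng(0)
    centroids = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        # Assign each sheep to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(positions[:, None] - centroids[None], axis=2), axis=1)
        # Recompute each centroid; keep the old one if its cluster went empty.
        centroids = np.array([
            positions[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
    return labels, centroids


def push_sequence(centroids: np.ndarray, goal: np.ndarray):
    """Nearest-neighbour ordering of sub-swarms as a cheap TSP approximation."""
    unvisited, order, current = list(range(len(centroids))), [], goal
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(centroids[j] - current))
        order.append(nxt)
        unvisited.remove(nxt)
        current = centroids[nxt]
    return order
```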
Abstract:An emerging challenge in swarm shepherding research is to design effective and efficient artificial intelligence algorithms that maintain a low computational ceiling while increasing the swarm's ability to operate in diverse contexts. We propose a methodology for designing a context-aware swarm-control intelligent agent. The intelligent control agent (shepherd) first uses swarm metrics to recognise the type of swarm it is interacting with, and then selects a suitable parameterisation from its behavioural library for that swarm type. The design principle of our methodology is to increase the situation awareness (i.e. the information content) of the control agent without sacrificing the low computational cost necessary for efficient swarm control. We demonstrate successful shepherding in both homogeneous and heterogeneous swarms.
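A minimal sketch of the context-aware selection step, assuming a toy behavioural library keyed by swarm type and a single spread metric as the recognition signal; the metric, threshold and parameter names are illustrative assumptions.

```python
# Illustrative sketch: compute cheap swarm metrics, recognise the swarm type,
# and look up a shepherd parameterisation from a behavioural library.
import numpy as np

BEHAVIOURAL_LIBRARY = {  # swarm type -> shepherd parameterisation (illustrative values)
    "cohesive":  {"collect_radius": 10.0, "drive_offset": 30.0},
    "dispersed": {"collect_radius": 25.0, "drive_offset": 45.0},
}


def swarm_metrics(positions: np.ndarray) -> dict:
    """Low-cost metrics: centre of mass and mean spread of the swarm."""
    com = positions.mean(axis=0)
    spread = np.linalg.norm(positions - com, axis=1).mean()
    return {"com": com, "spread": spread}


def select_parameterisation(positions: np.ndarray, spread_threshold: float = 15.0) -> dict:
    """Recognise the swarm type from its metrics, then pick matching parameters."""
    metrics = swarm_metrics(positions)
    swarm_type = "dispersed" if metrics["spread"] > spread_threshold else "cohesive"
    return BEHAVIOURAL_LIBRARY[swarm_type]
```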
Abstract:Contemporary swarm indicators are often used in isolation, focused on extracting information at the individual or collective level. They are seldom integrated to infer a top-level operating picture of the swarm, its individual members, and its overall collective dynamics. The primary contribution of this paper is to organise a suite of indicators about swarms into an ontologically arranged collection of information markers that characterise the swarm from the perspective of an external observer: a recognition agent. Our contribution lays the foundations for a new area of research that we title \emph{swarm analytics}, whose primary concern is the design and organisation of collections of swarm markers to understand, detect, recognise, track, and learn a particular insight about a swarm system. The proposed framework of information markers presents a new avenue for swarm research, especially for heterogeneous and cognitive swarms that may require more advanced capabilities to detect agencies and categorise agent influences and responses.
Abstract:Neural-based learning agents make decisions using internal artificial neural networks. In certain situations, it becomes pertinent that this knowledge is re-interpreted in a form friendly to both the human and the machine. These situations include: when agents are required to communicate the knowledge they learn to each other transparently in the presence of an external human observer; human-machine teaming settings where humans and machines need to collaborate on a task; and settings where the knowledge exchanged between the agents must be verified. We propose an interpretable knowledge fusion framework suited to neural-based learning agents, and propose a Priority on Weak State Areas (PoWSA) retraining technique. We first test the proposed framework on a synthetic binary classification task before evaluating it on a shepherding-based multi-agent swarm guidance task. Results demonstrate that the proposed framework increases the success rate in the swarm-guidance environment by 11% and improves stability, in return for a modest 14.5% increase in computational cost to achieve interpretability. Moreover, the framework presents the knowledge learnt by an agent in a human-friendly representation, leading to a better descriptive visual representation of the agent's knowledge.
Abstract:Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment; it will also thrive, benefiting from the inevitable shocks and volatility of war.
Abstract:The guidance of a large swarm is a challenging control problem. Shepherding offers one approach to guide a large swarm using a few shepherding agents (sheepdogs). While noise is an inherent characteristic of many real-world problems, its impact on shepherding is not well studied. We study two forms of noise: first, noise in the sensorial information the shepherd receives about the location of the sheep; second, noise in the sheepdog's ability to influence the sheep due to disturbance forces occurring during actuation. We investigate the performance of Str\"{o}mbom's approach under these perception and actuation noises. To ensure that the parameterisation of the algorithm creates stable performance, we run a large number of simulations, increasing the number of random episodes until stability is achieved. We then systematically study the impact of sensorial and actuation noise on performance. Str\"{o}mbom's approach is found to be more sensitive to actuation noise than to perception noise, implying that it is more important for the shepherding agent to influence the sheep accurately by reducing actuation noise than to reduce noise in its sensors. Moreover, different levels of noise require different parameterisations of the shepherding agent; in particular, the threshold an agent uses to decide whether or not to collect stray sheep differs across noise levels.
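A minimal sketch of how the two noise sources could be injected into a shepherding simulation, assuming zero-mean Gaussian noise on both the sensed sheep positions and the applied sheepdog force; the Gaussian form, magnitudes and function names are illustrative assumptions, not the paper's exact noise model.

```python
# Illustrative sketch: perception noise corrupts what the sheepdog senses,
# actuation noise corrupts the force it actually applies.
import numpy as np

rng = np.random.default_rng()


def perceived_positions(true_positions: np.ndarray, sigma_p: float) -> np.ndarray:
    """Sheep positions as sensed by the sheepdog, corrupted by perception noise."""
    return true_positions + rng.normal(0.0, sigma_p, size=true_positions.shape)


def actuated_force(intended_force: np.ndarray, sigma_a: float) -> np.ndarray:
    """Force actually exerted by the sheepdog, corrupted by actuation disturbance."""
    return intended_force + rng.normal(0.0, sigma_a, size=intended_force.shape)
```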
Abstract:Shepherding involves herding a swarm of agents (\emph{sheep}) by a control agent (\emph{sheepdog}) towards a goal. Multiple approaches have been documented in the literature to model this behaviour. In this paper, we present a modification to a well-known shepherding approach and show, via simulation, that this modification improves shepherding efficacy. We then argue that, given the complexity arising from obstacle-laden environments, path planning approaches could further enhance this model. To validate this hypothesis, we present a two-stage evolutionary path planning algorithm for shepherding a swarm of agents in 2D environments. In the first stage, the algorithm attempts to find the best path for the sheepdog to move from its initial location to a strategic driving location behind the sheep. In the second stage, it calculates and optimises a path for the sheep, using \emph{way points} on that path as sequential sub-goals for the sheepdog to aim towards. The proposed algorithm is evaluated in obstacle-laden environments via simulation, achieving further improvements.
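A minimal sketch of the way-point optimisation idea in the second stage, assuming way points are evolved with a simple mutation-and-selection loop and scored by path length plus a penalty for vertices falling inside circular obstacles; the representation, operators and penalty are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch: evolve a sheep path as a short sequence of way points.
import numpy as np

rng = np.random.default_rng(1)


def fitness(waypoints, start, goal, obstacles, penalty=1e3):
    """Lower is better: total path length + penalty per obstacle containing a path vertex."""
    path = np.vstack([start, waypoints, goal])
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    hits = sum(np.any(np.linalg.norm(path - c, axis=1) < r) for c, r in obstacles)
    return length + penalty * hits


def evolve_path(start, goal, obstacles, n_wp=4, pop=30, gens=100, step=2.0):
    """(mu + lambda)-style loop: mutate way points, keep the fittest candidates."""
    # Initial population: points spread along the straight line, randomly perturbed.
    population = [start + (goal - start) * np.linspace(0, 1, n_wp + 2)[1:-1, None]
                  + rng.normal(0, 5, (n_wp, 2)) for _ in range(pop)]
    for _ in range(gens):
        children = [p + rng.normal(0, step, p.shape) for p in population]
        population = sorted(population + children,
                            key=lambda w: fitness(w, start, goal, obstacles))[:pop]
    return population[0]  # best way-point sequence found
```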
Abstract:The simultaneous control of multiple coordinated robotic agents represents an elaborate problem. If solved, however, the interaction between the agents can lead to solutions to sophisticated problems. The concept of swarming, inspired by nature, can be described as the emergence of complex system-level behaviours from the interactions of relatively elementary agents. Due to the effectiveness of solutions found in nature, bio-inspired swarming-based control techniques are receiving a lot of attention in robotics. One method, known as swarm shepherding, is founded on the sheep-herding behaviour exhibited by sheepdogs, where a swarm of relatively simple agents is governed by a shepherd (or shepherds) responsible for high-level guidance and planning. Many studies have been conducted on shepherding as a control technique, ranging from the replication of sheep herding via simulation to the control of uninhabited vehicles and robots for a variety of applications. We present a comprehensive review of the literature on swarm shepherding to reveal the advantages of the approach and its potential for application to a plethora of robotic systems in the future.
Abstract:The design of reward functions in reinforcement learning is a human skill that comes with experience. Unfortunately, there is no methodology in the literature to guide a human in designing a reward function, or to allow a human to transfer the skills developed in designing reward functions to another human in a systematic manner. In this paper, we use Systematic Instructional Design, an approach from human education, to engineer a machine education methodology for designing reward functions for reinforcement learning. We demonstrate the methodology by designing a hierarchical genetic reinforcement learner that adopts a neural network representation to evolve a swarm controller for an agent shepherding a boids-based swarm. The results reveal that the methodology is able to guide the design of hierarchical reinforcement learners, with each model in the hierarchy learning incrementally through a multi-part reward function. The hierarchy acts as a decision fusion function that combines the individual behaviours and skills learnt from each instruction to create a smart shepherd to control the swarm.
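A minimal sketch of a multi-part reward of the kind described above, assuming one term rewards reducing the swarm's spread (collecting) and another rewards moving the flock's centre of mass towards the goal (driving); the terms and weights are illustrative assumptions, not the paper's engineered reward.

```python
# Illustrative sketch: a two-part shepherding reward combined with fixed weights.
import numpy as np


def shepherding_reward(sheep_pos, goal, prev_spread, weights=(0.5, 0.5)):
    """Reward = weighted sum of a collecting term and a driving term."""
    com = sheep_pos.mean(axis=0)
    spread = np.linalg.norm(sheep_pos - com, axis=1).mean()
    collect_term = prev_spread - spread          # reward reducing swarm spread
    drive_term = -np.linalg.norm(com - goal)     # reward moving the flock towards the goal
    reward = weights[0] * collect_term + weights[1] * drive_term
    return reward, spread  # return spread so the caller can feed it back next step
```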
Abstract:Artificial Intelligence (AI) technologies can be broadly categorised into Analytics and Autonomy. Analytics focuses on algorithms offering perception, comprehension, and projection of knowledge gleaned from sensorial data. Autonomy revolves around decision making, and influencing and shaping the environment through action production. A smart autonomous system (SAS) combines analytics and autonomy to understand, learn, decide and act autonomously. To be useful, a SAS must be trusted, and that requires testing. Lifelong learning of a SAS compounds the testing problem. Even in the remote chance that it is possible to fully test and certify the system pre-release, which is theoretically an undecidable problem, it is near impossible to predict the future behaviours that these systems, alone or collectively, will exhibit. While it may be feasible to severely restrict such systems\textquoteright\ learning abilities to limit the potential unpredictability of their behaviours, an undesirable consequence may be to severely limit their utility. In this paper, we propose an architecture for a watchdog AI (WAI) agent dedicated to lifelong functional testing of SAS. We further propose system specifications, including a level of abstraction whereby humans shepherd a swarm of WAI agents to oversee an ecosystem made of humans and SAS. The discussion extends to the challenges, pros, and cons of the proposed concept.