Abstract: Robot swarms are composed of many simple robots that communicate and collaborate to fulfill complex tasks. Robot controllers usually need to be specified by experts on a case-by-case basis via programming code. This process is time-consuming, prone to errors, and unable to take into account all situations that may be encountered during deployment. On the other hand, recent Large Language Models (LLMs) have demonstrated reasoning and planning capabilities, introduced new ways to interact with and program machines, and can represent domain and commonsense knowledge. Hence, we propose to address the aforementioned challenges by integrating LLMs with robot swarms, and we demonstrate the potential of this integration in proofs of concept (showcases). For this integration, we explore two approaches. The first approach is 'indirect integration,' where LLMs are used to synthesize and validate the robot controllers. This approach may reduce development time and human error before deployment. Moreover, during deployment, it could be used for on-the-fly creation of new robot behaviors. The second approach is 'direct integration,' where each robot locally executes a separate LLM instance during deployment for robot-robot collaboration and human-swarm interaction. These local LLM instances enable each robot to reason, plan, and collaborate using natural language. To enable further research on our mainly conceptual contribution, we release the software and videos for our LLM2Swarm system: https://github.com/Pold87/LLM2Swarm.
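As a rough sketch of the 'direct integration' approach (our own illustration, not code from the LLM2Swarm repository; the prompt format, the llm callable, and all identifiers are assumptions), a robot's control step could query a locally running LLM instance with its sensor readings and the latest neighbor messages, and parse the reply into wheel speeds and a message to broadcast:

    # Hypothetical 'direct integration' control step; all names are illustrative.
    import json

    def build_prompt(sensor_readings, neighbor_messages):
        # Describe the robot's local view in natural language for the LLM.
        return (
            "You control one robot in a swarm. "
            f"Sensors: {json.dumps(sensor_readings)}. "
            f"Messages from neighbors: {neighbor_messages}. "
            'Reply with JSON: {"left_speed": float, "right_speed": float, "message": str}.'
        )

    def control_step(llm, sensor_readings, neighbor_messages):
        # llm is any callable that maps a prompt string to a text completion,
        # e.g., a thin wrapper around a locally executed language model.
        reply = llm(build_prompt(sensor_readings, neighbor_messages))
        try:
            action = json.loads(reply)
        except json.JSONDecodeError:
            # Fall back to a safe default if the LLM reply is not valid JSON.
            action = {"left_speed": 0.0, "right_speed": 0.0, "message": ""}
        return action  # wheel speeds plus a natural-language message to neighbors

In the 'indirect integration' approach, a similar prompt would instead ask the LLM to generate and validate complete controller code before, or during, deployment.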
Abstract: Federated learning is a new approach to distributed machine learning that offers potential advantages such as reducing communication requirements and distributing the costs of training algorithms. Therefore, it could hold great promise in swarm robotics applications. However, federated learning usually requires a centralized server for the aggregation of the models. In this paper, we present a proof-of-concept implementation of federated learning in a robot swarm that does not compromise decentralization. To do so, we use blockchain technology to enable our robot swarm to securely synchronize a shared model that is the aggregation of the individual models, without relying on a central server. We then show, however, that introducing even a single malfunctioning robot can heavily disrupt the training process. To prevent such situations, we devise protection mechanisms that are implemented through secure and tamper-proof blockchain smart contracts. Our experiments are conducted in ARGoS, a physics-based simulator for swarm robotics, using the Ethereum blockchain protocol, which is executed by each simulated robot.
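To make the aggregation step concrete, the following sketch (our own illustration under assumed names, not the paper's actual contract) shows how a Python-style smart contract could collect locally trained weight vectors and aggregate them with a coordinate-wise median, which limits the influence a single malfunctioning robot can have on the shared model compared with a plain mean:

    # Illustrative smart-contract-style aggregator; the interface and the
    # median-based protection are assumptions, not the contract used in the paper.
    import statistics

    class AggregationContract:
        def __init__(self, num_weights):
            self.num_weights = num_weights
            self.submissions = {}  # robot_id -> list of model weights

        def submit_model(self, robot_id, weights):
            # Called via a blockchain transaction; one submission per robot per round.
            if len(weights) == self.num_weights and robot_id not in self.submissions:
                self.submissions[robot_id] = list(weights)

        def aggregate(self):
            # Coordinate-wise median: a single outlier cannot arbitrarily
            # shift the aggregated model, unlike with an arithmetic mean.
            columns = zip(*self.submissions.values())
            return [statistics.median(column) for column in columns]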
Abstract: In swarm robotics, decentralized control is often proposed as a more scalable and fault-tolerant alternative to centralized control. However, centralized behaviors are often faster and more efficient than their decentralized counterparts. In any given application, the goals and constraints of the task being solved should guide the choice of centralized control, decentralized control, or a combination of the two. Currently, the tradeoffs between centralization and decentralization have not been thoroughly studied. In this paper, we investigate these tradeoffs for multi-robot coverage and find that they are more nuanced than expected. For instance, our findings reinforce the expectation that more decentralized control will provide better scalability, but contradict the expectation that more decentralized control will perform better in environments with randomized obstacles. Beginning with a group of fully independent ground robots executing coverage, we add unmanned aerial vehicles as supervisors and progressively increase the degree to which the supervisors use centralized control, in terms of access to global information and a central coordinating entity. Using the multi-robot physics-based simulation environment ARGoS, we compare four control approaches: decentralized control, hybrid control, centralized control, and predetermined control. In comparing the ground robots performing the coverage task, we assess the speed and efficiency advantages of centralization -- in terms of coverage completeness and coverage uniformity -- and the scalability and fault tolerance advantages of decentralization. We also assess the energy expenditure disadvantages of centralization due to the different energy consumption rates of ground robots and unmanned aerial vehicles, according to the specifications of off-the-shelf robots.
Abstract: This technical report describes the implementation of Toychain: a simple, lightweight blockchain implemented in Python, designed for ease of deployment and practicality in robotics research. It can be integrated with various software and simulation tools used in robotics (we have integrated it with ARGoS, Gazebo, and ROS2), and it can also be deployed on real robots capable of Wi-Fi communication. The Toychain package supports the deployment of smart contracts written in Python (computer programs that can be executed by and synchronized across a distributed network). The nodes in the blockchain can execute smart contract functions by broadcasting transactions, which update the state of the blockchain upon agreement by all other nodes. The conditions for this agreement are established by a consensus protocol. The Toychain package allows for custom implementations of the consensus protocol, which can be useful for research or for meeting specific application requirements. Currently, Proof-of-Work and Proof-of-Authority are implemented.
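As a minimal illustration of a Python smart contract of the kind Toychain supports (the class, method, and transaction call below are generic assumptions and may differ from the actual Toychain API), consider a shared counter whose state changes only through transactions and is therefore identical on every node once consensus is reached:

    # Sketch of a smart contract as a deterministic state machine; names are
    # illustrative and not necessarily the Toychain interface.
    class CounterContract:
        def __init__(self):
            self.state = {"count": 0, "contributors": []}

        def increment(self, sender, amount=1):
            # Executed by every node when the transaction calling this
            # function is included in an agreed-upon block.
            self.state["count"] += amount
            if sender not in self.state["contributors"]:
                self.state["contributors"].append(sender)
            return self.state["count"]

A node would then broadcast a transaction referencing the contract function (e.g., increment with amount 1), and all nodes apply it to their local copy of the state once the consensus protocol, such as Proof-of-Authority, confirms the enclosing block.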
Abstract: We investigate the emergence of language conventions within a swarm of robots foraging in an open environment from two identical resources. While foraging, the swarm needs to explore and decide which resource to exploit, moving through complex transitory dynamics towards different possible equilibria, such as the selection of a single resource or a spread across the two. Our point of interest is the understanding of possible correlations between the emergent, evolving, task-induced interaction network and the language dynamics. In particular, our goal is to determine whether the dynamics of the interaction network are sufficient to determine emergent naming conventions that represent features of the task execution (e.g., the choice of one or the other resource) and of the environment. In other words, we look for an emergent vocabulary that is both complete (a word for each resource) and correct (no misnomers) for as long as each resource is relevant to the swarm. In this study, robots play two variants of the minimal language game: the classic one, where words are created when needed, and a new variant we introduce in this article, the spatial minimal naming game, where the creation of words is linked to the discovery of resources by exploring robots. We end the article by proposing a proof-of-concept extension of the spatial minimal naming game that ensures the completeness and correctness of the swarm's vocabulary.
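For reference, the core interaction of the minimal naming game can be sketched as follows (a standard formulation in our own words; the allow_invention flag is a simplified stand-in for the spatial variant, in which only robots that discover a resource may invent a word for it, and all identifiers are assumptions):

    # Speaker-hearer interaction of the minimal naming game (illustrative sketch).
    import random

    def naming_game_interaction(speaker, hearer, topic, allow_invention=True):
        # speaker and hearer are dicts mapping a topic (resource) to known words.
        words = speaker.setdefault(topic, [])
        if not words:
            if not allow_invention:
                return False  # spatial variant: only resource discoverers invent words
            words.append(f"word-{random.randrange(10**6)}")
        word = random.choice(words)
        if word in hearer.get(topic, []):
            # Success: both robots keep only the agreed word for this topic.
            speaker[topic] = [word]
            hearer[topic] = [word]
            return True
        # Failure: the hearer adopts the word as a candidate name.
        hearer.setdefault(topic, []).append(word)
        return False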
Abstract: Modern cities are growing ecosystems that face new challenges due to increasing population demands. One of the many problems they face nowadays is waste management, which has become a pressing issue requiring new solutions. Swarm robotics systems have attracted increasing attention in recent years and are expected to become one of the main driving factors for innovation in the field of robotics. The research presented in this paper explores the feasibility of a swarm robotics system in an urban environment. By using bio-inspired foraging methods such as multi-place foraging and stigmergy-based navigation, a swarm of robots is able to improve the efficiency and autonomy of urban waste management in a realistic scenario. To achieve this, a diverse set of simulation experiments was conducted using real-world GIS data and implementing different garbage collection scenarios driven by robot swarms. The results show that the proposed system outperforms current approaches. Moreover, the results not only show the efficiency of our solution, but also give insights into how to design and customize such systems.
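As a generic illustration of stigmergy-based navigation (our own sketch under assumed names, not the implementation evaluated in this paper), robots could deposit virtual pheromone on grid cells along successful collection routes, let it evaporate over time, and steer toward neighboring cells with higher pheromone levels:

    # Virtual-pheromone grid for stigmergy-based navigation (illustrative sketch).
    import random

    class PheromoneGrid:
        def __init__(self, width, height, evaporation=0.95):
            self.levels = [[0.0] * width for _ in range(height)]
            self.evaporation = evaporation

        def deposit(self, x, y, amount=1.0):
            self.levels[y][x] += amount  # e.g., along a successful route

        def evaporate(self):
            for row in self.levels:
                for i in range(len(row)):
                    row[i] *= self.evaporation

        def next_cell(self, x, y):
            # Move toward the neighboring cell with the highest pheromone level,
            # breaking ties randomly.
            neighbors = [(x + dx, y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= x + dx < len(self.levels[0]) and 0 <= y + dy < len(self.levels)]
            best = max(self.levels[ny][nx] for nx, ny in neighbors)
            return random.choice([(nx, ny) for nx, ny in neighbors
                                  if self.levels[ny][nx] == best])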
Abstract: We define the nervous system of a robot as the processing unit responsible for controlling the robot body, together with the links between the processing unit and the sensorimotor hardware of the robot, i.e., the equivalent of the central nervous system in biological organisms. We present autonomous robots that can merge their nervous systems when they physically connect to each other, creating a "virtual nervous system" (VNS). We show that robots with a VNS have capabilities beyond those found in any existing robotic system or biological organism: they can merge into larger bodies with a single brain (i.e., processing unit), split into separate bodies with independent brains, and temporarily acquire sensing and actuating capabilities of specialized peer robots. VNS-based robots can also self-heal by removing or replacing malfunctioning body parts, including the brain.