Abstract: Mapping is a time-consuming process when deploying robotic systems to new environments, and the handling of maps is error-prone when not managed effectively. We propose a standardised approach to handling such maps, focused on the information they contain, such as global location, object positions, topology, and occupancy. As part of this approach, associated management scripts assist with map generation, both through direct and indirect information restructuring and through template-based and procedural generation of missing data. Combined, these approaches improve the handling of maps, enabling more efficient deployments and higher interoperability between platforms. Alongside this, a collection of sample datasets of fully mapped environments is included, covering areas such as agriculture, urban roadways, and indoor environments.
Abstract: This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo -- a platform for investigating multi-agent cooperative behaviours in continuous action spaces, with physical contact in an adversarial environment. In this paper we investigate a scenario in which two agents, namely `Bug' and `Ant', must team up and push a third agent, `Spider', out of the arena. To tackle this goal, the newly added agent `Bug' is trained during an ongoing match between `Ant' and `Spider'. `Bug' must develop awareness of the other agents' actions, infer the strategy of both sides, and eventually learn an action policy to cooperate. The reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward structure combining dense and sparse rewards. The cooperative behaviour is quantitatively evaluated by the mean probability of winning the match and the mean number of steps needed to win.
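As a minimal sketch of the hybrid reward structure mentioned above — a dense shaping term plus a sparse terminal bonus, as commonly combined in DDPG training. All function names, arguments, and coefficients here are hypothetical illustrations, not the paper's actual reward definition:

```python
def hybrid_reward(spider_dist, prev_spider_dist, won):
    """Illustrative hybrid reward for one timestep.

    spider_dist / prev_spider_dist: hypothetical distances of `Spider`
    from the arena centre at the current and previous step.
    won: True once `Spider` has been pushed out of the arena.
    """
    # Dense term: small, frequent signal rewarding any progress
    # in pushing `Spider` outwards.
    dense = spider_dist - prev_spider_dist
    # Sparse term: large bonus granted only on winning the match.
    sparse = 10.0 if won else 0.0
    return dense + sparse
```

The dense term gives the learner a gradient at every step, while the sparse term anchors the objective to the actual win condition.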