University of Vermont
Abstract: The automatic design of robots has existed for 30 years but has been constricted by serial non-differentiable design evaluations, premature convergence to simple bodies or clumsy behaviors, and a lack of sim2real transfer to physical machines. Thus, here we employ massively-parallel differentiable simulations to rapidly and simultaneously optimize individual neural control of behavior across a large population of candidate body plans and return a fitness score for each design based on the performance of its fully optimized behavior. Non-differentiable changes to the mechanical structure of each robot in the population (mutations that rearrange, combine, add, or remove body parts) were applied by a genetic algorithm in an outer loop of search, generating a continuous flow of novel morphologies with highly coordinated and graceful behaviors honed by gradient descent. This enabled the exploration of several orders of magnitude more designs than all previous methods, despite the fact that robots here have the potential to be much more complex, in terms of the number of independent motors, than those in prior studies. We found that evolution reliably produces "increasingly differentiable" robots: body plans that smooth the loss landscape in which learning operates and thereby provide better training paths toward performant behaviors. Finally, one of the highly differentiable morphologies discovered in simulation was realized as a physical robot and shown to retain its optimized behavior. This provides a cyberphysical platform to investigate the relationship between evolution and learning in biological systems and broadens our understanding of how a robot's physical structure can influence the ability to train policies for it. Videos and code at https://sites.google.com/view/eldir.
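To make the nested optimization above concrete, here is a minimal, self-contained sketch in which the physics is a toy differentiable stand-in (not the paper's massively-parallel simulator), the body plan is just a list of limb lengths, and the mutation operators are illustrative only:

```python
# Toy sketch: gradient descent hones a neural controller for each fixed body (inner
# loop), while a genetic algorithm applies non-differentiable mutations to the body
# plans themselves (outer loop). The "simulator" below is a differentiable stand-in.
import copy
import random
import torch

def make_policy(n_in, n_out):
    return torch.nn.Sequential(torch.nn.Linear(n_in, 16), torch.nn.Tanh(),
                               torch.nn.Linear(16, n_out))

def simulate(limbs, policy, horizon=20):
    """Toy differentiable rollout: 'distance traveled' as a smooth function of motors."""
    limbs_t = torch.tensor(limbs)
    sensors = torch.zeros(len(limbs))
    distance = torch.tensor(0.0)
    for _ in range(horizon):
        motors = torch.tanh(policy(sensors))
        distance = distance + 0.05 * (limbs_t * motors).sum()
    return distance

def learn(limbs, steps=50, lr=1e-2):
    """Inner loop: a body's fitness is the performance of its fully optimized behavior."""
    policy = make_policy(len(limbs), len(limbs))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(steps):
        loss = -simulate(limbs, policy)
        opt.zero_grad(); loss.backward(); opt.step()
    return simulate(limbs, policy).item()

def mutate(limbs):
    """Outer loop: non-differentiable edits that add, remove, or resize body parts."""
    limbs = copy.copy(limbs)
    op = random.choice(["add", "remove", "resize"])
    if op == "add":
        limbs.append(random.uniform(0.5, 1.5))
    elif op == "remove" and len(limbs) > 1:
        limbs.pop(random.randrange(len(limbs)))
    else:
        limbs[random.randrange(len(limbs))] = random.uniform(0.5, 1.5)
    return limbs

population = [[random.uniform(0.5, 1.5) for _ in range(4)] for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=learn, reverse=True)   # parallelized in practice
    parents = ranked[:4]
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
```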
Abstract: Space-filling building blocks of diverse shape permeate nature at all levels of organization, from atoms to honeycombs, and have proven useful in artificial systems, from molecular containers to clay bricks. But, despite the wide variety of space-filling polyhedra known to mathematics, only the cube has been explored in robotics. Thus, here we roboticize a non-cubic space-filling shape: the rhombic dodecahedron. This geometry offers an appealing alternative to cubes, as it greatly simplifies rotational motion of one cell about the edge of another and increases the number of neighbors each cell can communicate with and hold on to. To better understand the challenges and opportunities of these and other space-filling machines, we manufactured 48 rhombic dodecahedral cells and used them to build various superstructures. We report the locomotive ability of some of the structures we built, and discuss the advantages and disadvantages of the different designs we tested. We also introduce a strategy for genderless passive docking of cells that generalizes to any polyhedron with radially symmetrical faces. Future work will allow the cells to freely roll/rotate about one another so that they may realize the full potential of their unique shape.
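For a sense of the connectivity gain mentioned above: rhombic dodecahedra tile space as the Voronoi cells of the face-centered-cubic (FCC) lattice, so each cell has 12 face-to-face neighbors rather than the 6 of a cubic cell. A small illustrative check (not code from the paper):

```python
# Count face-to-face neighbor offsets for cubic cells (simple cubic lattice) versus
# rhombic dodecahedral cells (FCC lattice, whose nearest neighbors are the 12
# permutations of (+-1, +-1, 0)).
from itertools import product

offsets = list(product((-1, 0, 1), repeat=3))
cube_neighbors    = [o for o in offsets if sum(map(abs, o)) == 1]   # 6 axis directions
rhombic_neighbors = [o for o in offsets if sum(map(abs, o)) == 2]   # 12 FCC offsets

print(len(cube_neighbors), len(rhombic_neighbors))   # -> 6 12
```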
Abstract: Inspired by the necessity of morphological adaptation in animals, a growing body of work has attempted to expand robot training to encompass physical aspects of a robot's design. However, reinforcement learning methods capable of optimizing the 3D morphology of a robot have been restricted to reorienting or resizing the limbs of a body plan with predetermined and static topological genus. Here we show policy gradients for designing freeform robots with arbitrary external and internal structure. This is achieved through actions that deposit or remove bundles of atomic building blocks to form higher-level nonparametric macrostructures such as appendages, organs, and cavities. Although results are provided for open-loop control only, we discuss how this method could be adapted for closed-loop control and sim2real transfer to physical machines in the future.
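As an illustration of the kind of action space described above (not the paper's implementation), a design policy might edit a 3D grid of building blocks by depositing or removing a bundle of voxels at a chosen location; the grid size, bundle shape, and radii below are arbitrary:

```python
# Sketch: an action deposits or removes a bundle (here, a sphere) of voxels,
# so higher-level structures such as appendages and cavities emerge from sequences
# of low-level edits to the occupancy grid.
import numpy as np

def apply_action(grid, center, radius, deposit=True):
    """Set (deposit) or clear (remove) all voxels within `radius` of `center`."""
    zs, ys, xs = np.indices(grid.shape)
    mask = (zs - center[0])**2 + (ys - center[1])**2 + (xs - center[2])**2 <= radius**2
    grid[mask] = 1 if deposit else 0
    return grid

body = np.zeros((16, 16, 16), dtype=np.uint8)
body = apply_action(body, center=(8, 8, 8), radius=5, deposit=True)    # solid torso
body = apply_action(body, center=(8, 8, 8), radius=3, deposit=False)   # internal cavity
print(body.sum(), "voxels of material")
```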
Abstract: Embodiment has recently enjoyed renewed consideration as a means to amplify the faculties of smart machines. Proponents of embodiment seem to imply that optimizing for movement in physical space promotes something more than the acquisition of niche capabilities for solving problems in physical space. However, there is nothing in principle which should so distinguish the problem of action selection in physical space from the problem of action selection in more abstract spaces, like that of language. Rather, what makes embodiment persuasive as a means toward higher intelligence is that it promises to capture, but does not actually realize, contingent facts about certain bodies (living intelligence) and the patterns of activity associated with them. These include an active resistance to annihilation and revisable constraints on the processes that make the world intelligible. To be theoretically or practically useful beyond the creation of niche tools, we argue that "embodiment" cannot be the trivial fact of a body, nor its movement through space, but the perpetual negotiation of the function, design, and integrity of that body; that is, to participate in what it means to constitute a given body. It follows that computer programs which are strictly incapable of traversing physical space might, under the right conditions, be more embodied than a walking, talking robot.
Abstract: Robots are notoriously difficult to design because of complex interdependencies between their physical structure, sensory and motor layouts, and behavior. Despite this, almost every detail of every robot built to date has been manually determined by a human designer after several months or years of iterative ideation, prototyping, and testing. Inspired by evolutionary design in nature, the automated design of robots using evolutionary algorithms has been attempted for two decades, but it too remains inefficient: days of supercomputing are required to design robots in simulation that, when manufactured, exhibit desired behavior. Here we show, for the first time, de novo optimization of a robot's structure to exhibit a desired behavior, within seconds on a single consumer-grade computer, and the manufactured robot's retention of that behavior. Unlike other gradient-based robot design methods, this algorithm does not presuppose any particular anatomical form; starting instead from a randomly generated apodous body plan, it consistently discovers legged locomotion, the most efficient known form of terrestrial movement. If combined with automated fabrication and scaled up to more challenging tasks, this advance promises near-instantaneous design, manufacture, and deployment of unique and useful machines for medical, environmental, vehicular, and space-based tasks.
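A minimal sketch of design by gradient descent with no presupposed anatomy, assuming (purely for illustration) that each voxel's presence and actuation phase are continuous parameters and that locomotion can be scored by a toy differentiable function standing in for a physics simulator:

```python
# Sketch: the body itself is the thing being optimized. Every voxel's existence is a
# continuous logit, so gradients can reshape the design directly; the locomotion score
# below is a toy stand-in for a differentiable simulator.
import torch

design = torch.randn(8, 8, requires_grad=True)     # logits: does each voxel exist?
phase  = torch.randn(8, 8, requires_grad=True)     # per-voxel actuation phase

def toy_locomotion(design, phase):
    occupancy = torch.sigmoid(design)               # soft, differentiable occupancy
    thrust = occupancy * torch.sin(phase)           # stand-in for simulated ground forces
    return thrust.sum() - 0.1 * occupancy.sum()     # move far, with little material

opt = torch.optim.Adam([design, phase], lr=0.1)
for _ in range(200):
    loss = -toy_locomotion(design, phase)
    opt.zero_grad(); loss.backward(); opt.step()

body_plan = (torch.sigmoid(design) > 0.5)           # threshold to a manufacturable body
```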
Abstract: Robots that can be deployed at size scales differing by orders of magnitude, and that retain the same desired behavior at any of those scales, would greatly expand the environments in which the robots could operate. However, it is currently not known whether such robots exist, and, if they do, how to design them. Since self-similar structures in nature often exhibit self-similar behavior at different scales, we hypothesize that there may exist robot designs that have the same property. Here we demonstrate that this is indeed the case for some, but not all, modular soft robots: there are robot designs that exhibit a desired behavior at a small size scale, and if copies of that robot are attached together to realize the same design at higher scales, those larger robots exhibit similar behavior. We show how to find such designs in simulation using an evolutionary algorithm. Further, when fractal attachment is not assumed and attachment geometries must thus be evolved along with the design of the base robot unit, scale-invariant behavior is not achieved, demonstrating that structural self-similarity, when combined with appropriate designs, is a useful path to realizing scale-invariant robot behavior. We validate our findings by demonstrating successful transfer of self-similar structure and behavior to pneumatically-controlled soft robots. Finally, we show that biobots can spontaneously exhibit self-similar attachment geometries, thereby suggesting that self-similar behavior via self-similar structure may be realizable across a wide range of robot platforms in the future.
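The self-similar construction described above can be sketched as a Kronecker product of a design's occupancy grid with itself: each occupied cell of the base design is replaced by an entire copy of that same design. A 2D illustration (the paper's robots are 3D and voxel-based):

```python
# Sketch: fractal attachment. Copies of the base robot are attached together so that
# the larger robot realizes the same design at the next scale up.
import numpy as np

base = np.array([[1, 0],
                 [1, 1]], dtype=np.uint8)        # illustrative 2D base design

scale_2 = np.kron(base, base)                    # each occupied cell -> one whole copy
scale_3 = np.kron(scale_2, base)                 # repeat to go up another scale

print(scale_2)
```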
Abstract: Many organisms, including various species of spiders and caterpillars, change their shape to switch gaits and adapt to different environments. Recent technological advances, ranging from stretchable circuits to highly deformable soft robots, have begun to make shape-changing robots a possibility. However, it is currently unclear how and when shape change should occur, and what capabilities could be gained, leading to a wide range of unsolved design and control problems. To begin addressing these questions, here we simulate, design, and build a soft robot that utilizes shape change to achieve locomotion over both flat and inclined surfaces. Modeling this robot in simulation, we explore its capabilities in the two environments and demonstrate the automated discovery of environment-specific shapes and gaits that successfully transfer to the physical hardware. We found that the shape-changing robot traverses these environments better than an equivalent but non-morphing robot, in both simulation and reality.
Abstract: The manual design of soft robots and their controllers is notoriously challenging, but it could be augmented, or in some cases entirely replaced, by automated design tools. Machine learning algorithms can automatically propose, test, and refine designs in simulation, and the most promising ones can then be manufactured in reality (sim2real). However, it is currently not known how to guarantee that behavior generated in simulation can be preserved when deployed in reality. Although many previous studies have devised training protocols that facilitate sim2real transfer of control policies, little to no work has investigated the simulation-reality gap as a function of morphology. This is due in part to an overall lack of tools capable of systematically designing and rapidly manufacturing robots. Here we introduce a low-cost, open-source, and modular soft robot design and construction kit, and use it to simulate, fabricate, and measure the simulation-reality gap of minimally complex yet soft, locomoting machines. We demonstrate the scalability of this approach by transferring an order of magnitude more robot designs from simulation to reality than any other method. The kit and its instructions can be found here: https://github.com/skriegman/sim2real4designs
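One simple way to quantify the simulation-reality gap per morphology, in the spirit of the measurement described above, is the relative error between simulated and measured displacement for each design; the design names and numbers below are made up for illustration only:

```python
# Sketch: per-design sim2real gap as relative error between simulated and measured
# displacement (0 = perfect transfer). All values are illustrative placeholders.
displacement = {                      # design id -> (simulated cm, measured cm)
    "quadruped_a": (12.0, 10.5),
    "quadruped_b": (9.0, 8.8),
    "slug_c":      (4.0, 1.2),
}

def sim2real_gap(sim, real):
    return abs(sim - real) / sim

for design, (sim, real) in displacement.items():
    print(f"{design}: gap = {sim2real_gap(sim, real):.2f}")
```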
Abstract: Catastrophic forgetting continues to severely restrict the learnability of controllers suitable for multiple task environments. Efforts to combat catastrophic forgetting reported in the literature to date have focused on how control systems can be updated more rapidly, hastening their adjustment from good initial settings to new environments, or, more circumspectly, on suppressing their ability to overfit to any one environment. When using robots, the environment includes the robot's own body: its shape and material properties, and how its actuators and sensors are distributed along its mechanical structure. Here we demonstrate for the first time how one such design decision (sensor placement) can alter the landscape of the loss function itself, either expanding or shrinking the weight manifolds containing suitable controllers for each individual task, thus increasing or decreasing their probability of overlap across tasks, and thus reducing or inducing the potential for catastrophic forgetting.
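A toy sketch of the forgetting measurement implied above: train a controller on task A, then on task B, and record how much task-A performance degrades. Different sensor placements are modeled here, purely for illustration, as different pairs of task optima whose overlap determines how much is forgotten:

```python
# Sketch: sequential training with no replay; forgetting = increase in task-A loss
# after training on task B. The "tasks" are toy quadratic losses standing in for
# environments, and the two "placements" are hypothetical.
import torch

def loss_for(target, weights):
    return ((weights - target) ** 2).sum()

def forgetting(task_a, task_b, steps=100, lr=0.1):
    w = torch.zeros(2, requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for target in (task_a, task_b):
        for _ in range(steps):
            l = loss_for(target, w)
            opt.zero_grad(); l.backward(); opt.step()
        if target is task_a:
            loss_a_before = loss_for(task_a, w).item()
    loss_a_after = loss_for(task_a, w).item()
    return loss_a_after - loss_a_before

# Placement 1: the two tasks' optima barely overlap; placement 2: they nearly coincide.
placement_1 = (torch.tensor([1.0, 0.0]), torch.tensor([-1.0, 0.0]))
placement_2 = (torch.tensor([1.0, 0.0]), torch.tensor([1.0, 0.1]))
print(forgetting(*placement_1), forgetting(*placement_2))   # large vs. small forgetting
```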
Abstract: Enabling machines to respond appropriately to natural language commands could greatly expand the number of people to whom they could be of service. Recently, advances in neural-network-trained word embeddings have empowered non-embodied text-processing algorithms, and suggest they could be of similar utility for embodied machines. Here we introduce a method that does so by training robots to act similarly in response to semantically similar, word2vec-encoded commands. We show that this enables them to act appropriately, after training, on previously unheard commands. Finally, we show that inducing such an alignment between motoric and linguistic similarities can be facilitated or hindered by the mechanical structure of the robot. This points to future, large-scale methods that find and exploit relationships between action, language, and robot structure.
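A minimal sketch of the kind of alignment objective described above: behaviors elicited by commands that are close in embedding space are pushed to be correspondingly close. The tiny embedding table below is a stand-in for real word2vec vectors, and the linear "policy" mapping a command embedding to a short motor sequence is deliberately simplistic:

```python
# Sketch: penalize mismatch between linguistic similarity (cosine similarity of command
# embeddings) and motoric similarity (cosine similarity of the resulting motor outputs).
import torch
import torch.nn.functional as F

embeddings = {                                        # stand-in for word2vec vectors
    "walk":   torch.tensor([0.9, 0.1, 0.0]),
    "stroll": torch.tensor([0.8, 0.2, 0.1]),
    "stop":   torch.tensor([0.0, 0.0, 1.0]),
}
policy = torch.nn.Linear(3, 8)                        # command embedding -> motor sequence

def alignment_loss():
    cmds = list(embeddings)
    loss = torch.tensor(0.0)
    for i in range(len(cmds)):
        for j in range(i + 1, len(cmds)):
            word_sim = F.cosine_similarity(embeddings[cmds[i]], embeddings[cmds[j]], dim=0)
            act_sim = F.cosine_similarity(policy(embeddings[cmds[i]]),
                                          policy(embeddings[cmds[j]]), dim=0)
            loss = loss + (word_sim - act_sim) ** 2   # align motoric with linguistic similarity
    return loss

opt = torch.optim.Adam(policy.parameters(), lr=0.05)
for _ in range(200):
    l = alignment_loss()
    opt.zero_grad(); l.backward(); opt.step()
```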