Abstract: Multi-flagellated bacteria utilize the hydrodynamic interaction between their filamentary tails, known as flagella, to swim and change their swimming direction in low Reynolds number flow. This interaction, which gives rise to bundling and tumbling, is often overlooked in simplified hydrodynamic models such as Resistive Force Theory (RFT). However, for the development of efficient and steerable bacteria-inspired robots, it becomes crucial to exploit this interaction. In this paper, we present the construction of a macroscopic bio-inspired robot featuring two rigid flagella arranged as right-handed helices, along with a cylindrical head. By rotating the flagella in opposite directions, the robot's body can reorient itself through repeatable and controllable tumbling. To accurately model this bi-flagellated mechanism in low Reynolds number flow, we couple rigid-body dynamics with the method of Regularized Stokeslet Segments (RSS). Unlike RFT, RSS accounts for the hydrodynamic interaction between distant filamentary structures. Furthermore, we explore the parameter space to optimize the propulsion and torque of the system. To achieve the desired reorientation of the robot, we propose a tumble control scheme that modulates the rotation direction and speed of the two flagella. By implementing this scheme, the robot can effectively reorient itself to attain the desired attitude. Notably, the overall scheme requires only two control inputs, simplifying both design and control. With our macroscopic framework serving as a foundation, we envision the eventual miniaturization of this technology to construct mobile and controllable micro-scale bacterial robots.
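The far-field coupling that RSS captures, and that RFT omits, can be illustrated with the regularized Stokeslet kernel that the method integrates along flagellar segments. Below is a minimal sketch of a single regularized point force in Stokes flow; the blob parameter `eps` and the example setup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def regularized_stokeslet(x, x0, f, mu=1.0, eps=1e-2):
    """Velocity at point x induced by a regularized point force f located at x0
    (Cortez-style blob). As eps -> 0 this recovers the singular Stokeslet
    u = (1/(8*pi*mu)) * (f/r + (f.d) d / r^3)."""
    d = np.asarray(x, float) - np.asarray(x0, float)
    r2 = d @ d
    R3 = (r2 + eps**2) ** 1.5
    return (f * (r2 + 2.0 * eps**2) + (f @ d) * d) / (8.0 * np.pi * mu * R3)

# velocity that a force on one flagellum induces at a point on the other;
# the ~1/r decay is the long-range interaction absent from RFT
f = np.array([0.0, 0.0, 1.0])
u_near = regularized_stokeslet([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], f, eps=1e-6)
u_far = regularized_stokeslet([2.0, 0.0, 0.0], [0.0, 0.0, 0.0], f, eps=1e-6)
```

Summing this kernel over force points on both flagella gives the coupled flow field; RSS refines this by integrating the kernel analytically over straight segments.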
Abstract: The successful implementation of vision-based navigation in agricultural fields hinges upon two critical components: 1) the accurate identification of key components within the scene, and 2) the identification of lanes through the detection of boundary lines that separate the crops from the traversable ground. We propose Agronav, an end-to-end vision-based autonomous navigation framework, which outputs the centerline from an input image by sequentially processing it through semantic segmentation and semantic line detection models. We also present Agroscapes, a pixel-level annotated dataset collected across six different crops, captured from varying heights and angles. This ensures that a framework trained on Agroscapes generalizes across both ground and aerial robotic platforms. Code, models, and the dataset will be released at \href{https://github.com/shivamkumarpanda/agronav}{github.com/shivamkumarpanda/agronav}.
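Once the two boundary lines of a lane have been detected, the centerline can be obtained by averaging them. The sketch below is an illustrative post-processing step, not Agronav's actual code: it assumes lines parameterized as x = m*y + b in image coordinates (y increasing down the image) and fits each boundary by least squares:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of x = m*y + b to boundary pixels given as (x, y) rows."""
    pts = np.asarray(points, float)
    A = np.stack([pts[:, 1], np.ones(len(pts))], axis=1)
    m, b = np.linalg.lstsq(A, pts[:, 0], rcond=None)[0]
    return m, b

def centerline(left_pts, right_pts):
    """Average the two fitted boundary lines to get the lane centerline."""
    ml, bl = fit_line(left_pts)
    mr, br = fit_line(right_pts)
    return 0.5 * (ml + mr), 0.5 * (bl + br)

# toy lane: left boundary x = y, right boundary x = 10 - y
ys = np.arange(5.0)
m_c, b_c = centerline(np.stack([ys, ys], axis=1),
                      np.stack([10.0 - ys, ys], axis=1))
```

For symmetric boundaries the centerline comes out vertical through the lane middle, which is the signal a row-following controller would steer toward.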
Abstract: Deformable linear objects (DLOs), such as rods, cables, and ropes, play important roles in daily life. However, manipulation of DLOs is challenging, as large geometrically nonlinear deformations may occur during the manipulation process. The problem is made even more difficult because the different deformation modes (e.g., stretching, bending, and twisting) may result in elastic instabilities during manipulation. In this paper, we formulate a physics-guided data-driven method to solve a challenging manipulation task -- accurately deploying a DLO (an elastic rod) onto a rigid substrate along various prescribed patterns. Our framework combines machine learning, scaling analysis, and physics-based simulations to develop a physically informed neural controller for deployment. We explore the complex interplay between the gravitational and elastic energies of the manipulated DLO and obtain a control method for DLO deployment that is robust against friction and material properties. Out of the numerous geometric and material properties of the rod and substrate, we show through physical analysis that only three non-dimensional parameters are needed to describe the deployment process. Therefore, the essence of the control law for the manipulation task can be constructed with a low-dimensional model, drastically increasing the computation speed. The effectiveness of our optimal control scheme is shown through a comprehensive robotic case study comparing it against a heuristic control method for deploying rods in a wide variety of patterns. In addition, we showcase the practicality of our control scheme by having a robot accomplish challenging high-level tasks such as mimicking human handwriting and tying knots.
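The abstract does not spell out the three non-dimensional groups. One standard scale in this setting, however, is the gravito-bending length, which balances a rod's bending stiffness against its weight per length and is a natural candidate for non-dimensionalizing lengths in the deployment problem. The sketch below is a hedged illustration with made-up material values, not the paper's exact parameterization:

```python
import numpy as np

def gravito_bending_length(E, r, rho, g=9.81):
    """Length scale at which gravity bends a rod of circular cross-section:
    L_gb = (E*I / (rho*A*g))**(1/3), with I = pi*r**4/4 and A = pi*r**2,
    which simplifies to (E*r**2 / (4*rho*g))**(1/3)."""
    I = np.pi * r**4 / 4.0   # second moment of area
    A = np.pi * r**2         # cross-sectional area
    return (E * I / (rho * A * g)) ** (1.0 / 3.0)

# illustrative numbers for a soft elastomer rod (not from the paper):
# E = 1 MPa, radius = 1 mm, density = 1000 kg/m^3
L_gb = gravito_bending_length(E=1e6, r=1e-3, rho=1000.0)
```

Lengths such as the deployment height can then be expressed as multiples of L_gb, which is one way a low-dimensional, material-independent control law becomes possible.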
Abstract: Soft deployable structures - unlike conventional piecewise-rigid deployables based on hinges and springs - can assume intricate 3D shapes, thereby enabling transformative technologies in soft robotics, shape-morphing architecture, and pop-up manufacturing. Their virtually infinite degrees of freedom allow precise control over the final shape. The same high dimensionality, however, poses a challenge for solving the inverse design problem for this class of structures: achieving a desired 3D structure typically requires manufacturing technologies with extensive local actuation and control during fabrication, together with a trial-and-error search over a large design space. We address both of these shortcomings by first developing a simplified planar fabrication approach that combines two ingredients: strain mismatch between two layers of a composite shell and kirigami cuts that relieve localized stress. In principle, it is possible to generate targeted 3D shapes by designing the appropriate kirigami cuts and selecting the right amount of prestretch, thereby eliminating the need for local control. Second, we formulate a data-driven, physics-guided framework that reduces the dimensionality of the inverse design problem using autoencoders and efficiently searches through the ``latent'' parameter space in an active learning approach. We demonstrate the effectiveness of the rapid design procedure via a range of target shapes, such as peanuts, pringles, flowers, and pyramids. Tabletop experiments are conducted to fabricate the target shapes. Experimental results and numerical predictions from our framework are found to be in good agreement.
Abstract: Fully soft bistable mechanisms have found extensive applications ranging from soft robotics, wearable devices, and medical tools to energy harvesting. However, the lack of design and fabrication methods that are easy and potentially scalable limits their further adoption into mainstream applications. Here, a top-down planar approach is presented that combines Kirigami-inspired engineering with a pre-stretching process. Using this method, Kirigami-Pre-stretched Substrate-Kirigami trilayered precursors are created in a planar manner; upon release, the strain mismatch between layers -- due to the pre-stretching of the substrate -- induces out-of-plane buckling to achieve targeted three-dimensional (3D) bistable structures. By combining experimental characterization, analytical modeling, and finite element simulation, the effects of the pattern size of the Kirigami layers and of pre-stretching on the geometry and stability of the resulting 3D composites are explored. In addition, methods to realize soft bistable structures with arbitrary shapes and soft composites with multistable configurations are investigated, which could encourage further applications. Our method is demonstrated by using bistable soft Kirigami composites to construct two soft machines: (i) a bistable soft gripper that can gently grasp delicate objects with different shapes and sizes and (ii) a flytrap-inspired robot that can autonomously detect and capture objects.
Abstract: Robotic manipulation of slender objects is challenging, especially when the induced deformations are large and nonlinear. Traditionally, learning-based control approaches, e.g., imitation learning, have been used to tackle deformable material manipulation. Such approaches lack generality and often suffer critical failures from a simple change of material, geometric, and/or environmental (e.g., friction) properties. In this article, we address a fundamental but difficult step of robotic origami: forming a predefined fold in paper with only a single manipulator. A data-driven framework combining physically accurate simulation and machine learning is used to train deep neural network models capable of predicting the external forces induced on the paper for a given grasp position. We frame the problem using scaling analysis, resulting in a control framework robust against material and geometric changes. Path planning is carried out over the generated manifold to produce robot manipulation trajectories optimized to prevent sliding. Furthermore, the inference speed of the trained model enables the incorporation of real-time visual feedback to achieve closed-loop sensorimotor control. Real-world experiments demonstrate that our framework can greatly improve robotic manipulation performance compared with natural paper-folding strategies, even when manipulating paper objects of various materials and shapes.
Abstract: We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge -- the functional variation of the force field across dynamical system instances -- without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level that captures the common force-field form shared among the studied instances and an inner level that adapts to each individual instance. A priori physical knowledge can be conveniently embedded in the neural network architecture as an inductive bias, such as a conservative force field or Euclidean symmetry. With the learned meta-knowledge, iMODE can model an unseen system within seconds and inversely reveal knowledge of its physical parameters, acting as a Neural Gauge to "measure" the physical parameters of an unseen system from observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.
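The bi-level structure can be illustrated on a toy family of 1D force fields that share a functional form but differ in stiffness. In this numpy sketch, the outer level learns shared basis weights `w` while the inner level adapts a single per-instance amplitude `a`; the cubic basis, the closed-form inner fit, and all numbers are illustrative assumptions, not iMODE's networks or data:

```python
import numpy as np

# synthetic instances: true force f(x) = -k*(x + 0.5*x**3), stiffness k varies
ks = [0.5, 1.0, 2.0]
xs = np.linspace(-1.0, 1.0, 50)
data = [(xs, -k * (xs + 0.5 * xs**3)) for k in ks]

w = np.array([1.0, 1.0])           # outer (shared) weights for basis [x, x^3]
for _ in range(3000):
    grad_w = np.zeros(2)
    for x, f in data:
        g = w[0] * x + w[1] * x**3
        a = (g @ f) / (g @ g)       # inner level: closed-form amplitude fit
        resid = a * g - f           # outer level: MSE gradient at adapted a
        grad_w += 2.0 * a * np.array([resid @ x, resid @ x**3]) / len(x)
    w -= 0.02 * grad_w / len(data)

# adapting to an unseen instance (k = 1.5) takes one closed-form inner step
x, f = xs, -1.5 * (xs + 0.5 * xs**3)
g = w[0] * x + w[1] * x**3
a = (g @ f) / (g @ g)
mse = np.mean((a * g - f) ** 2)
```

After training, the ratio w[1]/w[0] approaches the true 0.5, and the adapted amplitude `a` plays the role of the inferred physical parameter, i.e., the "Neural Gauge" reading for the unseen stiffness.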
Abstract: Smart weeding systems that perform plant-specific operations can contribute to the sustainability of agriculture and the environment. Despite monumental advances in autonomous robotic technologies for precision weed management in recent years, under-canopy weeding in fields has yet to be realized. A prerequisite of such systems is reliable detection and classification of weeds, to avoid mistakenly spraying and thus damaging the surrounding plants. Real-time multi-class weed identification enables species-specific treatment of weeds and significantly reduces herbicide use. Our first contribution is \textit{AIWeeds}, the first adequately large, realistic image dataset (one or multiple kinds of weeds per image): a library of about 10,000 annotated images of flax and of the 14 most common weeds in fields and gardens, taken from 20 different locations in North Dakota, California, and Central China. Second, we provide a full pipeline from model training with maximum efficiency to deploying the TensorRT-optimized model onto a single-board computer. Based on \textit{AIWeeds} and the pipeline, we present a baseline for classification performance using five benchmark CNN models. Among them, MobileNetV2, with both the shortest inference time and the lowest memory consumption, is the best candidate for real-time applications. Finally, we deploy MobileNetV2 onto our own compact autonomous robot \textit{SAMBot} for real-time weed detection. The 90\% test accuracy achieved in previously unseen flax-field scenes (with a row spacing of 0.2-0.3 m) featuring crops and weeds, distortion, blur, and shadows is a milestone towards precision weed control in the real world. We have publicly released the dataset and the code to generate the results at \url{https://github.com/StructuresComp/Multi-class-Weed-Classification}.
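The deployment choice described above amounts to picking, among benchmarked models, the most accurate one that fits the latency and memory budget of the onboard computer. A minimal sketch of that selection logic; the model names and benchmark figures below are hypothetical placeholders, not the paper's measurements:

```python
def pick_realtime_model(candidates, max_latency_ms, max_memory_mb):
    """Return the most accurate model within the latency/memory budget, or None."""
    feasible = [m for m in candidates
                if m["latency_ms"] <= max_latency_ms
                and m["memory_mb"] <= max_memory_mb]
    return max(feasible, key=lambda m: m["accuracy"]) if feasible else None

# hypothetical benchmark entries for five CNN baselines
benchmarks = [
    {"name": "ResNet50",    "accuracy": 0.93, "latency_ms": 90,  "memory_mb": 400},
    {"name": "MobileNetV2", "accuracy": 0.90, "latency_ms": 15,  "memory_mb": 60},
    {"name": "InceptionV3", "accuracy": 0.92, "latency_ms": 70,  "memory_mb": 350},
    {"name": "VGG16",       "accuracy": 0.91, "latency_ms": 120, "memory_mb": 900},
    {"name": "DenseNet121", "accuracy": 0.92, "latency_ms": 60,  "memory_mb": 300},
]
choice = pick_realtime_model(benchmarks, max_latency_ms=30, max_memory_mb=100)
```

Under a tight real-time budget, the lightweight model wins even though heavier models score higher offline, which mirrors the MobileNetV2 choice reported in the abstract.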
Abstract: We explore the locomotion of soft robots in granular media (GM) resulting from the elastic deformation of slender rods. A low-cost, rapidly fabricable robot inspired by the physiological structure of bacteria is presented. It consists of a rigid head, with a motor and batteries embedded, and multiple elastic rods (our model for flagella), allowing us to investigate locomotion in GM. The elastic flagella are rotated at one end by the motor, and they deform due to drag from the GM, propelling the robot. The external drag is determined by the flagellar shape, which in turn changes due to the competition between external loading and elastic forces. In this coupled fluid-structure interaction problem, we observe that increasing the number of flagella can decrease or increase the propulsive speed of the robot, depending on the physical parameters of the system. This nonlinearity in the functional relation between propulsion and the parameters of this simple robot motivates us to fundamentally analyze its mechanics using theory, numerical simulation, and experiments. We present a simple analytical framework based on Euler-Bernoulli beam theory that is capable of qualitatively capturing both cases. The theoretical prediction quantitatively matches experiments when the flagellar deformation is small. To account for the geometrically nonlinear deformations often encountered in soft robots and microbes, we implement a simulation framework that incorporates discrete differential geometry-based simulations of elastic rods, a resistive force theory-based model for drag, and a modified Stokes law for the hydrodynamics of the robot head. Comparison with experimental data indicates that the simulations can quantitatively predict robotic motion. Overall, the theoretical and numerical tools presented in this paper can shed light on the design and control of this class of articulated robots in granular or fluid media.
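The resistive force theory drag model referenced above assigns each rod element anisotropic drag coefficients along and normal to its tangent; the anisotropy (c_n != c_t) is what converts flagellar rotation into axial thrust. A minimal per-element sketch, with illustrative coefficient values rather than calibrated GM values:

```python
import numpy as np

def rft_drag_per_length(t_hat, v, c_t, c_n):
    """Resistive force theory: drag per unit length on a slender element with
    unit tangent t_hat moving at velocity v (anisotropic coefficients c_t, c_n)."""
    t_hat = np.asarray(t_hat, float) / np.linalg.norm(t_hat)
    v = np.asarray(v, float)
    v_t = (v @ t_hat) * t_hat   # velocity component along the element
    v_n = v - v_t               # velocity component normal to the element
    return -(c_t * v_t + c_n * v_n)

# a tilted element moving purely sideways still feels an axial force component:
# this is the propulsive mechanism of a rotating helical flagellum
f = rft_drag_per_length([1.0, 1.0, 0.0], [0.0, 1.0, 0.0], c_t=1.0, c_n=2.0)
```

With c_n = c_t the drag would be antiparallel to the velocity and no thrust could arise; summing this force over the deformed rod shape gives the net propulsion used in the simulation framework.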
Abstract: Experimental analysis of the mechanics of a deformable object, and particularly its stability, requires repetitive testing and, depending on the complexity of the object's shape, a testing setup that can manipulate many degrees of freedom at the object's boundary. Motivated by recent advancements in robotic manipulation of deformable objects, this paper addresses these challenges by constructing a method for automated stability testing of a slender elastic rod -- a canonical example of a deformable object -- using a robotic system. We focus on rod configurations with helical centerlines since the stability of a helical rod can be described using only three parameters, but experimentally determining the stability requires manipulation of both the position and orientation at one end of the rod, which is not possible using traditional experimental methods that only actuate a limited number of degrees of freedom. Using a recent geometric characterization of stability for helical rods, we construct and implement a manipulation scheme to explore the space of stable helices, and we use a vision system to detect the onset of instabilities within this space. The experimental results obtained by our automated testing system show good agreement with numerical simulations of elastic rods in helical configurations. The methods described in this paper lay the groundwork for automation to grow within the field of experimental mechanics.
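A helical centerline is fixed by its radius and pitch, or equivalently by its curvature and torsion, which together with the rod length are a natural choice for the three parameters mentioned above (an assumption here; the stability criterion itself is the subject of the paper and is not reproduced). A short sketch of the standard differential-geometry conversion:

```python
import numpy as np

def helix_curvature_torsion(radius, pitch):
    """Curvature and torsion of a circular helix:
    kappa = r/(r^2 + c^2), tau = c/(r^2 + c^2), with c = pitch/(2*pi)."""
    c = pitch / (2.0 * np.pi)
    d = radius**2 + c**2
    return radius / d, c / d

# unit-radius helix that rises by one full turn per 2*pi of arc parameter
kappa, tau = helix_curvature_torsion(radius=1.0, pitch=2.0 * np.pi)
```

The zero-pitch limit recovers a circle (kappa = 1/r, tau = 0), a useful sanity check when sweeping the (kappa, tau) space of stable helices experimentally.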