Abstract: Reconfigurable robots that can change their physical configuration post-fabrication have demonstrated their potential for adapting to different environments or tasks. However, it is challenging to determine how to optimally adjust reconfigurable parameters for a given task, especially when the controller depends on the robot's configuration. In this paper, we address this problem using a tendon-driven reconfigurable manipulator composed of multiple serially connected origami-inspired modules as an example. Under tendon actuation, these modules can achieve different shapes and motions, governed by the joint stiffnesses (reconfiguration parameters) and the tendon displacements (control inputs). We leverage recent advances in the co-optimization of design and control for robotic systems to treat the reconfiguration parameters as design variables and optimize them using reinforcement learning techniques. We first establish a forward model based on the minimum potential energy method to predict the shape of the manipulator under tendon actuation. Using the forward model as the environment dynamics, we then co-optimize the control policy (on the tendon displacements) and the joint stiffnesses of the modules for goal-reaching tasks while ensuring collision avoidance. Through co-optimization, we obtain optimized joint stiffnesses and the corresponding optimal control policy, enabling the manipulator to accomplish tasks that would be infeasible with fixed reconfiguration parameters (i.e., fixed joint stiffnesses). We envision that the co-optimization framework can be extended to other reconfigurable robotic systems, enabling them to optimally adapt their configuration and behavior for diverse tasks and environments.
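The following is a minimal illustrative sketch, not the authors' implementation: it pairs an energy-minimization forward model for a toy planar tendon-driven chain with a simple random-search co-optimization loop over joint stiffnesses (design) and tendon displacements (control). The goal position, link length, and the quadratic energy terms are hypothetical; the paper uses reinforcement learning rather than random search for the outer loop.

```python
# Hedged sketch: toy forward model (minimum potential energy) + co-optimization.
import numpy as np
from scipy.optimize import minimize

N_JOINTS = 4
GOAL = np.array([0.8, 0.6])   # hypothetical goal position
LINK_LEN = 0.3                # hypothetical link length

def forward_shape(stiffness, tendon_disp):
    """Predict joint angles by minimizing elastic + tendon potential energy."""
    def energy(theta):
        elastic = 0.5 * np.sum(stiffness * theta**2)   # joint-spring energy
        tendon = -np.sum(tendon_disp * theta)          # work done by tendons (toy model)
        return elastic + tendon
    res = minimize(energy, x0=np.zeros(N_JOINTS), method="BFGS")
    return res.x

def tip_position(theta):
    """Forward kinematics of a planar serial chain."""
    angles = np.cumsum(theta)
    return LINK_LEN * np.array([np.cos(angles).sum(), np.sin(angles).sum()])

def task_cost(stiffness, tendon_disp):
    theta = forward_shape(stiffness, tendon_disp)
    return np.linalg.norm(tip_position(theta) - GOAL)

# Outer loop: random-search co-optimization of stiffnesses and tendon displacements.
rng = np.random.default_rng(0)
best_cost = np.inf
for _ in range(500):
    k = rng.uniform(0.1, 2.0, N_JOINTS)    # candidate joint stiffnesses (design)
    d = rng.uniform(-1.0, 1.0, N_JOINTS)   # candidate tendon displacements (control)
    best_cost = min(best_cost, task_cost(k, d))
print("best goal-reaching error:", best_cost)
```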
Abstract: Grasping using an aerial robot has many applications, ranging from infrastructure inspection and maintenance to precision agriculture. However, aerial grasping is a challenging problem since the robot has to maintain an accurate position and orientation relative to the target object while negotiating various forms of uncertainty (e.g., contact forces from the object). To address these challenges, in this paper we integrate a novel passive gripper design and advanced adaptive control methods to enable robust aerial grasping. The gripper is enabled by a pre-stressed band with two stable states (a flat shape and a curled shape), so it automatically initiates the grasping process upon contact with an object. The gripper also features a cable-driven system actuated by a single DC motor to open the gripper without cumbersome pneumatics. Since the gripper is passively triggered and initially straight, it functions without precise alignment with the object (within an $80$ mm tolerance). Our adaptive control scheme eliminates the need for any a priori knowledge (nominal values or upper bounds) of the uncertainties. The closed-loop stability of the system is analyzed via a Lyapunov-based method. Combining the gripper and the adaptive controller, we conduct comparative real-time experiments to demonstrate the effectiveness of the proposed integrated system for grasping. Our integrated approach can pave the way for enhanced aerial grasping in different applications.
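As a hedged illustration of the kind of adaptive scheme described (not the authors' controller), the sketch below simulates a scalar first-order error system with an unknown bounded disturbance and adapts an estimate of the disturbance bound online, so no a priori bound is needed. The gains, disturbance, and error dynamics are assumptions for illustration; the adaptation law is the standard one motivated by the Lyapunov candidate V = 0.5*e^2 + (rho - rho_hat)^2 / (2*gamma).

```python
# Hedged sketch: adaptive bound estimation with Lyapunov-motivated adaptation law.
import numpy as np

DT, T = 0.001, 5.0
K, GAMMA = 5.0, 2.0            # feedback gain and adaptation gain (tuning choices)

e, rho_hat = 1.0, 0.0          # tracking error and estimated disturbance bound
for step in range(int(T / DT)):
    t = step * DT
    d = 0.8 * np.sin(3.0 * t)                 # unknown disturbance (true bound 0.8)
    u = -K * e - rho_hat * np.tanh(e / 0.01)  # smooth sign() to avoid chattering
    e += DT * (u + d)                         # error dynamics: e_dot = u + d
    rho_hat += DT * GAMMA * abs(e)            # adaptation law from the Lyapunov analysis
print(f"final |e| = {abs(e):.4f}, estimated bound = {rho_hat:.3f}")
```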
Abstract: In deep learning, Multi-Layer Perceptrons (MLPs) have once again garnered attention from researchers. This paper introduces MC-MLP, a general MLP-like backbone for computer vision that is composed of a series of fully-connected (FC) layers. In MC-MLP, we posit that the same semantic information can be easier or harder to learn depending on the coordinate frame in which the features are represented. To address this, we apply an orthogonal transform to the features, which is equivalent to changing their coordinate frame. Through this design, MC-MLP is equipped with multi-coordinate-frame receptive fields and the ability to learn information across different coordinate frames. Experiments demonstrate that MC-MLP outperforms most MLP-based models on image classification tasks, achieving better performance at the same parameter level. The code will be available at: https://github.com/ZZM11/MC-MLP.
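Below is a minimal, hypothetical sketch of the core idea (not the released MC-MLP code): an orthogonal transform, here an orthonormal DCT-II matrix, changes the coordinate frame of token features before a fully-connected layer, and the output of that branch is combined with an FC branch operating in the original frame. The token/feature sizes and weight shapes are illustrative assumptions.

```python
# Hedged sketch: mixing features in two coordinate frames via an orthogonal transform.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; satisfies C @ C.T == I."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)
    return c

rng = np.random.default_rng(0)
tokens, dim = 16, 64
x = rng.standard_normal((tokens, dim))        # token features (illustrative)
C = dct_matrix(dim)

W1 = rng.standard_normal((dim, dim)) * 0.02   # FC weights in the original frame
W2 = rng.standard_normal((dim, dim)) * 0.02   # FC weights in the transformed frame

branch_spatial = x @ W1                       # mix in the original coordinate frame
branch_freq = (x @ C.T) @ W2 @ C              # mix in the DCT frame, then transform back
y = branch_spatial + branch_freq              # multi-coordinate-frame receptive field
print(y.shape)                                # (16, 64)
```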