Abstract: There is a need for precision pathological sensing, imaging, and tissue manipulation in neurosurgical procedures such as brain tumor resection. Precise tumor margin identification and resection can prevent further growth and protect critical structures. Surgical lasers with small beam diameters and steering capabilities can enable new minimally invasive procedures by traversing complex anatomy and then delivering energy to sense, visualize, and affect tissue. In this paper, we present the design of a small-scale tendon-actuated galvanometer (TAG) that can serve as an end-effector tool for a steerable surgical laser. The galvanometer design, fabrication, and kinematic modeling are presented, and the device can accurately rotate up to 30.14 degrees (a laser reflection angle of 60.28 degrees). A kinematic mapping from input tendon stroke to output galvanometer angle change and a forward-kinematics model relating the end of the continuum joint to the laser end-point are derived and validated.
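As an illustrative sketch only (not the TAG's derived model), the Python snippet below shows the generic geometry behind a tendon-driven galvanometer: a hypothetical pulley of radius `pulley_radius_mm` converts tendon stroke to mirror rotation, the law of reflection doubles that rotation for the beam, and a flat target plane at an assumed distance converts beam angle to spot displacement.

```python
import numpy as np

# Hypothetical geometry: tendon wrapped on a circular pulley, mirror reflecting
# the beam onto a flat target plane a distance target_distance_mm away.

def mirror_angle_deg(tendon_stroke_mm, pulley_radius_mm):
    """Mirror rotation from tendon stroke, assuming pure rolling on a circular
    pulley (arc length = radius * angle)."""
    return np.degrees(tendon_stroke_mm / pulley_radius_mm)

def reflection_angle_deg(mirror_deg):
    """The reflected beam rotates at twice the mirror's angular rate
    (law of reflection)."""
    return 2.0 * mirror_deg

def laser_spot_offset_mm(mirror_deg, target_distance_mm):
    """Lateral displacement of the laser spot on the flat target plane."""
    return target_distance_mm * np.tan(np.radians(reflection_angle_deg(mirror_deg)))

# Example: a 30.14 deg mirror rotation changes the beam direction by 60.28 deg.
print(reflection_angle_deg(30.14))  # 60.28
```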
Abstract: In-vivo tissue stiffness identification can be useful in pulmonary fibrosis diagnostics and minimally invasive tumor identification, among many other applications. In this work, we propose a palpation-based method for tissue stiffness estimation that uses a sensorized beam buckled onto the surface of a tissue. Fiber Bragg gratings (FBGs) serve as the shape-estimation modality in our sensor, providing the beam shape in real time even when the device is not visually monitored. A mechanical model is developed to predict the behavior of the buckling beam and is validated using finite element analysis and bench-top testing with phantom tissue samples (made of PDMS and PA gel). Bench-top stiffness estimates were compared with the actual stiffness values, yielding a mean RMSE of 413.86 kPa and a standard deviation of 313.82 kPa relative to the actual stiffnesses; estimates for softer samples were closer to the actual values. Ultimately, we used the stiffness sensor within a mock concentric tube robot to demonstrate \textit{in-vivo} sensor feasibility. Bench-top trials with and without the robot demonstrate the effectiveness of this unique sensing modality for \textit{in-vivo} applications.
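The snippet below is a minimal planar sketch of the generic FBG shape-sensing step implied above: strain from an off-axis fiber core is converted to curvature and integrated to a beam shape. The core offset, sample spacing, and strain values are hypothetical, and the paper's buckling-beam mechanical model is not reproduced here.

```python
import numpy as np

def curvature_from_strain(strain, core_offset_m):
    """Bending strain at an off-axis core is eps = kappa * d, so kappa = eps / d."""
    return np.asarray(strain, dtype=float) / core_offset_m

def planar_shape(kappa, ds_m):
    """Integrate curvature along arc length to recover a planar beam shape."""
    theta = np.cumsum(kappa) * ds_m          # tangent angle at each sample
    x = np.cumsum(np.cos(theta)) * ds_m
    y = np.cumsum(np.sin(theta)) * ds_m
    return x, y

# Example: uniform curvature of 2 1/m sensed by a core 60 um off the neutral axis,
# sampled every 1 mm along a 100 mm beam.
eps = 2.0 * 60e-6 * np.ones(100)             # 120 microstrain at every sample
x, y = planar_shape(curvature_from_strain(eps, core_offset_m=60e-6), ds_m=1e-3)
```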
Abstract: The ability to accurately model mechanical hysteretic behavior in tendon-actuated continuum robots using deep learning approaches is a growing area of interest. In this paper, we investigate the hysteretic response of two types of tendon-actuated continuum robots and, ultimately, compare three neural network modeling approaches, each with both forward and inverse kinematic mappings: a feedforward neural network (FNN), an FNN with a history input buffer, and a long short-term memory (LSTM) network. We seek to determine which model best captures temporally dependent behavior. We find that, depending on the robot's design, the choice of kinematic inputs can alter whether the system exhibits hysteresis. Furthermore, we present the model-fitting results, revealing that, in contrast to the standard FNN, both the FNN with a history input buffer and the LSTM model can capture historical dependence, with comparable performance in modeling rate-dependent hysteresis.
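As a hedged sketch of the two history-aware architectures being compared, the PyTorch definitions below show an FNN with a flattened history input buffer and an LSTM that carries hidden state over the actuation sequence. All layer widths, the window length, and the input/output dimensions are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class HistoryFNN(nn.Module):
    """FNN whose input is the last `history` actuation samples flattened into
    one vector (a history input buffer)."""
    def __init__(self, input_dim=4, history=10, hidden=64, output_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim * history, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, output_dim),
        )

    def forward(self, x):                # x: (batch, history, input_dim)
        return self.net(x.flatten(1))

class HysteresisLSTM(nn.Module):
    """LSTM that carries hidden state along the actuation sequence."""
    def __init__(self, input_dim=4, hidden=64, output_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, output_dim)

    def forward(self, x):                # x: (batch, seq_len, input_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # predict from the final time step

# Example shapes: batch of 8 windows, 10 time steps, 4 tendon inputs.
x = torch.randn(8, 10, 4)
print(HistoryFNN()(x).shape, HysteresisLSTM()(x).shape)  # both (8, 3)
```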
Abstract: A hybrid continuum robot design is introduced that combines a proximal tendon-actuated section with a distal telescoping section composed of permanent-magnet spheres actuated by an external magnet. While each section individually can approach a point in its workspace from only one or at most a few orientations, the two-section combination possesses a dexterous workspace. The paper describes kinematic modeling of the hybrid design and provides a description of the dexterous workspace. Experimental validation shows that a simplified kinematic model produces mean and maximum tip-position errors of 3% and 7% of total robot length, respectively.
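The following is a simplified two-arc sketch, not the paper's kinematic model: the proximal tendon-actuated section is treated as a standard constant-curvature arc, and the distal magnetic section is approximated by a second arc, so the tip pose is the product of two constant-curvature transforms. All curvatures, bending-plane angles, and lengths below are hypothetical.

```python
import numpy as np

def rotz(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def cc_transform(kappa, phi, length):
    """Homogeneous transform of a constant-curvature arc (curvature kappa,
    bending-plane angle phi, arc length)."""
    if abs(kappa) < 1e-9:                      # straight segment
        T = np.eye(4)
        T[2, 3] = length
    else:
        theta = kappa * length                 # total bend angle
        T = np.array([
            [np.cos(theta), 0.0, np.sin(theta), (1 - np.cos(theta)) / kappa],
            [0.0, 1.0, 0.0, 0.0],
            [-np.sin(theta), 0.0, np.cos(theta), np.sin(theta) / kappa],
            [0.0, 0.0, 0.0, 1.0],
        ])
    return rotz(phi) @ T @ rotz(-phi)

def hybrid_tip(kp, pp, lp, kd, pd, ld):
    """Tip pose: proximal arc followed by distal arc (simplified two-arc model)."""
    return cc_transform(kp, pp, lp) @ cc_transform(kd, pd, ld)

tip = hybrid_tip(kp=5.0, pp=0.0, lp=0.08, kd=20.0, pd=np.pi / 2, ld=0.03)
print(tip[:3, 3])                              # tip position in the base frame
```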
Abstract: Robots have the potential to assist people in bed, such as in healthcare settings, yet bedding materials like sheets and blankets can make observation of the human body difficult for robots. A pressure-sensing mat on a bed can provide pressure images that are relatively insensitive to bedding materials. However, prior work on estimating human pose from pressure images has been restricted to 2D pose estimates and flat beds. In this work, we present two convolutional neural networks that estimate the 3D joint positions of a person in a configurable bed from a single pressure image. The first network directly outputs 3D joint positions, while the second outputs a kinematic model that includes estimated joint angles and limb lengths. We evaluated our networks on data from 17 human participants in two bed configurations: supine and seated. Our networks achieved a mean joint position error of 77 mm when tested on data from people outside the training set, outperforming several baselines. We also present a simple mechanical model that provides insight into the ambiguity associated with limbs raised off the pressure mat, and we demonstrate that Monte Carlo dropout can be used to estimate pose confidence in these situations. Finally, we provide a demonstration in which a mobile manipulator uses our network's estimated kinematic model to reach a location on a person's body even though the person is seated in bed and covered by a blanket.
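As an illustrative PyTorch sketch of the Monte Carlo dropout idea mentioned above (not the paper's network), the helper below keeps dropout active at inference time, samples the model repeatedly, and reports the per-joint spread as a confidence estimate. The stand-in regressor, the 64x27 pressure-image size, and the 24-joint output are assumptions.

```python
import torch
import torch.nn as nn

def mc_dropout_pose(model, pressure_img, n_samples=25):
    """Sample the network with dropout left on at inference; the spread of the
    sampled joint positions serves as a per-joint confidence estimate."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                          # re-enable dropout layers only
    with torch.no_grad():
        samples = torch.stack([model(pressure_img) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # mean pose, uncertainty

# Example with a stand-in regressor mapping a 64x27 pressure image to 24 3D joints.
demo = nn.Sequential(nn.Flatten(), nn.Linear(64 * 27, 256), nn.ReLU(),
                     nn.Dropout(0.2), nn.Linear(256, 24 * 3))
pose_mean, pose_std = mc_dropout_pose(demo, torch.rand(1, 64, 27))
```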