Abstract:Teleoperated robotic systems have introduced more intuitive control for minimally invasive surgery, but the optimal method for training remains unknown. Recent motor learning studies have demonstrated that exaggerating errors helps trainees learn to perform tasks with greater speed and accuracy. We hypothesized that training in a force field that pushes the operator away from a desired path would improve performance on a virtual reality ring-on-wire task. Forty surgical novices trained under a no-force, guidance, or error-amplifying force field over five days. Completion time, translational and rotational path error, and combined error-time were evaluated under no force field on the final day. The groups differed significantly in combined error-time, with the guidance group performing worst on the final day, consistent with the guidance hypothesis. Error-amplifying field participants showed the most improvement and did not plateau during training, suggesting that learning was still ongoing. Participants with high initial path error benefited more from guidance, and participants with high initial combined error-time benefited more from both guidance and error-amplifying force field training. Our results suggest that error-amplifying and error-reducing haptic training for robot-assisted telesurgery benefits trainees of different abilities differently.
Abstract:Epidural analgesia involves injecting anesthetics into the epidural space, using a Tuohy needle to advance through the layers of the epidural region and a "loss of resistance" (LOR) syringe to sense the stiffness of the environment. The anesthesiologist's case experience is one of the leading factors in accidental dural puncture and failed epidural - the two most common complications of epidural analgesia. Robotic simulation is an appealing solution for training novices in this task, with the added benefit of recording kinematic information throughout the procedure. In this work, we used a bimanual haptic simulator, which we designed and validated in previous work, to explore the effect of LOR probing strategies on procedure outcomes. Our results indicate that most participants probed more with the LOR syringe in successful trials than in unsuccessful ones, and that this difference was most prominent in the three layers preceding the epidural space. Our findings can inform better instructions for training novices in epidural analgesia. We posit that instructing anesthesia residents to use the LOR syringe more extensively, especially in proximity to the epidural space, can improve skill acquisition in this task.
Abstract:The case experience of anesthesiologists is one of the leading factors in accidental dural puncture and failed epidural - the most common complications of epidural analgesia. We designed a bimanual haptic simulator to train anesthesiologists and optimize epidural analgesia skill acquisition, and present a validation study conducted with 15 anesthesiologists of different competency levels from several hospitals in Israel. Our simulator emulates the forces applied on the epidural (Tuohy) needle, held by one hand, and those applied on the Loss of Resistance (LOR) syringe, held by the other. The resistance is calculated from a model of the epidural region layers that is parameterized by the weight of the patient. We measured the movements of both haptic devices, and quantified the rates of outcomes (success, failed epidural, and dural puncture), insertion strategies, and participants' answers to questionnaires about the perceived realism of the simulation. We demonstrated good construct validity by showing that the simulator can distinguish between real-life novices and experts. Good face and content validity were reflected in experienced users' perception of the simulator as realistic and well-targeted. We found differences in strategies between anesthesiologists of different competency levels, and suggest trainee-based instruction in advanced training stages.
Abstract:Surgical procedures require a high level of technical skill to ensure efficiency and patient safety. Because surgeon skill directly affects patient outcomes, developing cost-effective and realistic training methods is imperative to accelerate skill acquisition. Teleoperated robotic devices allow for intuitive, ergonomic control, but the learning curve for these systems remains steep. Recent studies in motor learning have shown that visual or physical exaggeration of errors helps trainees learn to perform tasks faster and more accurately. In this study, we extended two previous studies to investigate the performance of subjects under different force field training conditions: convergent (assistive), divergent (resistive), and no force field (null).
Abstract:Robotic-assisted surgeries benefit both surgeons and patients; however, surgeons frequently need to adjust the endoscopic camera to achieve good viewpoints. Simultaneously controlling the camera and the surgical instruments is impossible, and consequently, these camera adjustments repeatedly interrupt the surgery. Autonomous camera control could help overcome this challenge, but most existing systems are reactive, e.g., having the camera follow the surgical instruments. We propose a predictive approach that anticipates when camera movements will occur using artificial neural networks. We used the kinematic data of the surgical instruments, recorded during robotic-assisted surgical training on porcine models. We split the data into segments and labeled each as either a segment that immediately precedes a camera movement or one that does not. Due to the large class imbalance, we trained an ensemble of networks, each on a balanced subset of the training data. We found that the instruments' kinematic data can be used to predict when camera movements will occur, and evaluated the performance for different segment durations and ensemble sizes. We also studied how far in advance an upcoming camera movement can be predicted, and found that predicting a camera movement 0.25, 0.5, and 1 second before it occurred achieved 98%, 94%, and 84% accuracy, respectively, relative to the prediction of an imminent camera movement. This indicates that camera movement events can be predicted early enough to leave time for computing and executing an autonomous camera movement, and suggests that an autonomous camera controller for RAMIS may one day be feasible.
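The class-imbalance strategy described above can be sketched as a balanced-undersampling ensemble (in the style of EasyEnsemble). This is a minimal illustration: a simple nearest-centroid classifier stands in for the neural networks used in the work, and the synthetic data, subset count, and majority-voting rule are assumptions for the sketch, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_balanced_subsets(X, y, n_members):
    """Undersample the majority class so that each ensemble member
    trains on a class-balanced subset of the data."""
    pos = np.flatnonzero(y == 1)   # minority: precedes a camera movement
    neg = np.flatnonzero(y == 0)   # majority: does not
    subsets = []
    for _ in range(n_members):
        neg_sample = rng.choice(neg, size=pos.size, replace=False)
        idx = np.concatenate([pos, neg_sample])
        subsets.append((X[idx], y[idx]))
    return subsets

class NearestCentroid:
    """Toy stand-in for the neural networks in the ensemble."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

def ensemble_predict(models, X):
    """Majority vote over the ensemble members."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Synthetic imbalanced data: ~5% positives, separated by a mean shift.
n = 2000
y = (rng.random(n) < 0.05).astype(int)
X = rng.normal(size=(n, 6)) + 2.0 * y[:, None]

models = [NearestCentroid().fit(Xs, ys)
          for Xs, ys in make_balanced_subsets(X, y, n_members=5)]
pred = ensemble_predict(models, X)
```

Because every member sees an equal number of positive and negative segments, no member can trivially minimize its loss by always predicting the majority class, which is the failure mode that the balanced sub-sets guard against.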
Abstract:Teleoperated robot-assisted minimally-invasive surgery (RAMIS) offers many advantages over open surgery. However, there are still no guidelines for training skills in RAMIS. Motor learning theories have the potential to improve the design of RAMIS training, but they are based on simple movements that do not resemble the complex movements required in surgery. To fill this gap, we designed an experiment to investigate the effect of time-dependent force perturbations on the learning of a pattern-cutting surgical task. Thirty participants took part in the experiment, divided into two groups: (1) a control group that trained without perturbations, and (2) a 1Hz group that trained with 1Hz periodic force perturbations that pushed each participant's hand inwards and outwards in the radial direction. We monitored their learning using four objective metrics and found that participants in the 1Hz group learned to overcome the perturbations and improved their performance during training without impairing their performance after the perturbations were removed. Our results present an important step toward understanding the effect of adding perturbations to RAMIS training protocols and improving RAMIS training for the benefit of surgeons and patients.
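A time-dependent perturbation of the kind described, a 1 Hz periodic force pushing the hand inwards and outwards along the radial direction, could be computed as in the sketch below. The amplitude value, the origin-centered radial geometry, and the function name are illustrative assumptions, not parameters reported in the study.

```python
import math

def radial_perturbation(t, pos, amp=2.0, freq=1.0):
    """Sinusoidal force along the radial direction (in/out) at `freq` Hz,
    evaluated at time t (s) for hand position pos = (x, y).
    amp (N) and the origin-centered geometry are assumed values."""
    x, y = pos
    r = math.hypot(x, y)
    if r < 1e-9:          # radial direction is undefined at the origin
        return (0.0, 0.0)
    scale = amp * math.sin(2.0 * math.pi * freq * t) / r
    return (scale * x, scale * y)   # force vector in newtons
```

At the sinusoid's positive peaks the force points outwards along the unit radial vector, and half a period later it points inwards, matching the inward/outward pushing described above.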
Abstract:We developed a new grip force measurement concept that allows for embedding tactile stimulation mechanisms in a gripper. This concept is based on a single force sensor to measure the force applied on each side of the gripper, and it substantially reduces force measurement artifacts caused by tactor motion. To test the feasibility of this new concept, we built a device that measures control of grip force in response to tactile stimulation from a moving tactor. First, we used a custom-designed testing setup with a second force sensor to calibrate our device over a range of 0 to 20 N without tactor movement. Second, we tested the effect of tactor movement on the measured grip force and measured artifacts of only 1% of the measured force. Third, we demonstrated that during the application of dynamically changing grip forces, the average errors were 2.9% and 3.7% for the left and right sides of the gripper, respectively. Finally, we conducted a user study and found that in response to tactor movement, participants increased their grip force, and that the increase was larger for a smaller target force and depended on the amount of tactile stimulation.
Abstract:The lack of haptic feedback in Robot-assisted Minimally Invasive Surgery (RMIS) is a potential barrier to safe tissue handling during surgery. Bayesian modeling theory suggests that surgeons with experience in open or laparoscopic surgery can develop priors of tissue stiffness that translate to better force estimation abilities during RMIS compared to surgeons with no experience. To test if prior haptic experience leads to improved force estimation ability in teleoperation, 33 participants were assigned to one of three training conditions: manual manipulation, teleoperation with force feedback, or teleoperation without force feedback, and learned to tension a silicone sample to a set of force values. They were then asked to perform the tension task, and a previously unencountered palpation task, to a different set of force values under teleoperation without force feedback. Compared to the teleoperation groups, the manual group had higher force error in the tension task outside the range of forces they had trained on, but showed better speed-accuracy functions in the palpation task at low force levels. This suggests that the dynamics of the training modality affect force estimation ability during teleoperation, with prior haptic experience being accessible if it was formed under the same dynamics as the task.
Abstract:In this paper, we investigate grasping of rigid objects in unilateral robot-assisted minimally invasive surgery (RAMIS). We define human-centered transparency, which quantifies natural action and perception in RAMIS. We demonstrate this human-centered transparency analysis for different values of gripper scaling - the scaling between the grasp aperture of the surgeon-side manipulator and the aperture of the surgical instrument grasper. Thirty-one participants performed teleoperated grasping and perceptual assessment of rigid objects in one of three gripper scaling conditions (fine, normal, and quick, trading off precision and responsiveness). Psychophysical analysis of the variability of maximal grasping aperture during prehension and of the reported size of the object revealed that in the normal and quick (but not the fine) gripper scaling conditions, teleoperated grasping with our system was similar to natural grasping, and therefore human-centered transparent. We anticipate that using motor control and psychophysics for human-centered optimization of teleoperation control will eventually improve the usability of RAMIS.
Abstract:In many human-in-the-loop robotic applications, such as robot-assisted surgery and remote teleoperation, predicting the intended motion of the human operator may be useful for the successful implementation of shared control, guidance virtual fixtures, and predictive control. Developing computational models of human movements is a critical foundation for such motion prediction frameworks. With this motivation, we present a computational framework for modeling reaching movements in the presence of obstacles. We propose a stochastic optimal control framework that consists of probabilistic collision avoidance constraints and a cost function that trades off between effort and end-state variance in the presence of signal-dependent noise. First, we present a series of reformulations to convert the original non-linear and non-convex optimal control problem into a parametric quadratic programming problem. We show that the parameters can be tuned to model various collision avoidance strategies, thereby capturing the quintessential variability associated with human motion. Then, we present a simulation study that demonstrates the complex interaction between avoidance strategies, control cost, and the probability of collision avoidance. The proposed framework can benefit a variety of applications that require teleoperation in cluttered spaces, including robot-assisted surgery. In addition, it can be viewed as a new optimizer that produces smooth and probabilistically safe trajectories under signal-dependent noise.
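One common way to make probabilistic collision avoidance constraints tractable, in the spirit of the reformulations described above, is to replace a Gaussian chance constraint with a deterministic clearance margin inflated by a quantile of the position uncertainty. The sketch below uses this standard surrogate; it is an illustrative simplification, not the paper's exact formulation, and the function names and circular-obstacle geometry are assumptions.

```python
import numpy as np
from statistics import NormalDist

def chance_margin(radius, sigma, delta):
    """Deterministic clearance that enforces P(collision) <= delta
    for a Gaussian position error with standard deviation sigma
    along the obstacle normal: the mean path must stay at least
    radius + z_{1-delta} * sigma from the obstacle center."""
    z = NormalDist().inv_cdf(1.0 - delta)
    return radius + z * sigma

def satisfies_chance_constraints(path, obstacle, radius, sigmas, delta):
    """path: (T, 2) mean positions; obstacle: (2,) circle center;
    sigmas: per-step position standard deviations (length T)."""
    margins = np.array([chance_margin(radius, s, delta) for s in sigmas])
    dists = np.linalg.norm(path - obstacle, axis=1)
    return bool(np.all(dists >= margins))
```

Because the margin grows with sigma, trajectories under larger signal-dependent noise are pushed further from obstacles, which is one way the trade-off between effort, end-state variance, and collision probability manifests in such formulations.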