Abstract: Vascular anastomosis, the surgical connection of blood vessels, is essential in procedures such as organ transplantation and reconstructive surgery. The precision required limits accessibility due to the extensive training needed, and manual suturing produces variable outcomes, with revision rates of up to 7.9%. Existing robotic systems, while promising, are either fully teleoperated or lack the capabilities necessary for autonomous vascular anastomosis. We present the Micro Smart Tissue Autonomous Robot (micro-STAR), an autonomous robotic system designed to perform vascular anastomosis on small-diameter vessels. The micro-STAR system integrates a novel suturing tool equipped with an optical coherence tomography (OCT) fiber-optic sensor and a microcamera, enabling real-time tissue detection and classification. Our system autonomously places sutures and manipulates tissue with minimal human intervention. In an ex vivo study, micro-STAR achieved outcomes competitive with those of experienced surgeons in terms of leak pressure, lumen reduction, and suture placement variation, completing 90% of sutures without human intervention. This represents the first instance of a robotic system autonomously performing vascular anastomosis on real tissue, offering significant potential for improving surgical precision and expanding access to high-quality care.
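The abstract does not describe how the OCT fiber-optic signal is turned into a tissue detection, so the following is only a minimal sketch of one plausible approach: thresholding a single OCT A-scan in dB space and requiring a contiguous bright run to declare a tissue surface. All function names, thresholds, and parameters here are hypothetical, not micro-STAR's pipeline.

```python
import numpy as np

def detect_tissue(a_scan: np.ndarray, noise_margin_db: float = 10.0,
                  min_peak_width: int = 5) -> dict:
    """Toy tissue detector for one OCT A-scan (hypothetical pipeline).

    a_scan: 1-D array of linear backscatter intensities along depth.
    Returns whether a tissue surface is present and its depth index.
    """
    log_scan = 10.0 * np.log10(a_scan + 1e-12)          # convert to dB
    threshold = np.median(log_scan) + noise_margin_db   # adaptive threshold
    above = log_scan > threshold
    # Require a contiguous run of bright samples to reject speckle spikes.
    run, start = 0, None
    for i, hit in enumerate(above):
        if hit:
            run += 1
            if run == min_peak_width:
                start = i - min_peak_width + 1
                break
        else:
            run = 0
    return {"tissue_present": start is not None, "surface_index": start}

# Example: synthetic A-scan with a tissue interface near depth index 300.
scan = np.random.rayleigh(0.05, 1024)
scan[300:360] += 1.0
print(detect_tissue(scan))
```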
Abstract: Purpose: To achieve effective robot-assisted laparoscopic prostatectomy, the integration of a transrectal ultrasound (TRUS) imaging system, the most widely used imaging modality for the prostate, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. We therefore propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. Methods: An optimization-based algorithm is proposed to co-register the images from the two modalities. The algorithm incorporates the principles of light propagation and models the uncertainty in PM detection to improve its stability and accuracy. It is validated using our previously developed US/PA image-guided system with a da Vinci surgical robot. Results: Target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstrations, the algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice. The results are comparable to our previous approach, and the proposed method can be implemented with a standard white-light stereo camera and does not require highly accurate localization of the PM. Conclusion: The proposed frame-registration algorithm enables simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic-wave propagation and laser excitation, contributing to automated US/PA image-guided surgical interventions.
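The abstract specifies the goal (registering the US/PA frame to the camera frame from detected PMs) and the metric (TRE), but not the optimization itself, which additionally models light propagation and PM-detection uncertainty. As a minimal sketch, a generic point-based rigid registration baseline (the classic SVD solution of Arun et al., 1987) with a TRE evaluation might look as follows; it is a stand-in, not the paper's algorithm.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) corresponding fiducial positions, e.g. photoacoustic
    markers localized in the US/PA frame and in the stereo-camera frame.
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # guard against reflections
    t = dst_c - R @ src_c
    return R, t

def tre(R, t, targets_src, targets_dst):
    """Target registration error: RMS distance at held-out target points."""
    mapped = targets_src @ R.T + t
    return np.sqrt(np.mean(np.sum((mapped - targets_dst) ** 2, axis=1)))

# Synthetic sanity check: four markers, one held-out target (metres).
rng = np.random.default_rng(0)
t_true = np.array([0.01, -0.02, 0.05])
markers = rng.uniform(-0.03, 0.03, (4, 3))
R, t = rigid_register(markers, markers + t_true)
target = np.array([[0.0, 0.0, 0.02]])
print(tre(R, t, target, target + t_true))   # ~0 for noise-free input
```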
Abstract: Recent technological advancements in retinal surgery have led to a modern operating room consisting of a surgical robot, a microscope, and intraoperative optical coherence tomography (iOCT). The integration of these tools raises the fundamental question of how to combine them effectively to enable surgical autonomy. In this work, we address this question by developing a unified framework that enables real-time autonomous surgical workflows using these devices. To achieve this, we make the following contributions: (1) we develop a novel imaging system that integrates microscopy and iOCT in real time by dynamically tracking the surgical instrument with a small iOCT scanning region (e.g., a B-scan), which was not previously possible; (2) we implement convolutional neural networks (CNNs) that automatically segment and detect task-relevant information for surgical autonomy; (3) we enable surgeons to intuitively select goal waypoints in both the microscope and iOCT views through simple mouse-click interactions; and (4) we integrate model predictive control (MPC) for real-time trajectory generation that respects kinematic constraints to ensure patient safety. We demonstrate the utility of our system on subretinal injection (SI), a procedure that involves inserting a microneedle below the retinal tissue for targeted drug delivery and that surgeons find challenging because it requires tens-of-micrometers accuracy and precise depth perception. We validate our system in 30 successful SI trials on pig eyes, achieving needle insertion accuracy of $26 \pm 12\,\mu m$ to various subretinal goals with a duration of $55 \pm 10.8$ seconds. Preliminary comparisons to a human operator performing SI in robot-assisted mode highlight the enhanced safety of our system.
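The abstract names MPC for constrained trajectory generation but gives no formulation. A minimal sketch, assuming a simple integrator model of the needle tip with velocity and per-step displacement bounds standing in for the paper's kinematic and safety constraints (all dynamics, limits, and weights below are illustrative, not the paper's), could be posed as a small convex program with cvxpy:

```python
import cvxpy as cp
import numpy as np

def plan_insertion(x0, goal, horizon=20, dt=0.1,
                   v_max=0.5e-3, step_max=0.05e-3):
    """Toy linear MPC for needle-tip trajectory generation (hypothetical).

    States x are tip positions (m) in the iOCT frame; controls u are
    velocity commands. The terminal cost pulls the tip to the clicked
    goal waypoint; hard constraints bound speed and per-step motion.
    """
    x = cp.Variable((horizon + 1, 3))
    u = cp.Variable((horizon, 3))
    cost = cp.sum_squares(x[horizon] - goal) + 1e-2 * cp.sum_squares(u)
    cons = [x[0] == x0]
    for k in range(horizon):
        cons += [x[k + 1] == x[k] + dt * u[k],             # integrator model
                 cp.norm(u[k], "inf") <= v_max,            # velocity limit
                 cp.norm(x[k + 1] - x[k], 2) <= step_max]  # step-size limit
    cp.Problem(cp.Minimize(cost), cons).solve()
    return x.value, u.value

# Plan a 1 mm advance along the insertion axis from the current tip pose.
traj, ctrl = plan_insertion(np.zeros(3), np.array([0.0, 0.0, 1e-3]))
```

In a receding-horizon loop, only the first control of each solve would be executed before re-planning from the newly tracked tip position.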