Abstract: Surgical simulation is an increasingly important element of surgical education. Simulation can help address some of the significant challenges of developing surgical skills with limited time and resources. The photo-realistic fidelity of a simulation is a key feature that can improve both the trainee experience and the transfer ratio. In this paper, we demonstrate how the visual fidelity of existing surgical simulation can be enhanced by performing style transfer of multi-class labels from real surgical video onto synthetic content. We demonstrate our approach on simulations of cataract surgery using real data labels from an existing public dataset. Our results highlight the feasibility of the approach, as well as the potential to extend the technique with additional temporal constraints and to other applications.
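To make the idea concrete, below is a minimal sketch of label-conditioned image translation of the kind this abstract describes, assuming a pix2pix-style conditional GAN in PyTorch. The class count, layer sizes, and loss setup are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch: translate a multi-class label map into a photo-realistic frame.
# A generator maps a one-hot label map to RGB; a discriminator judges
# (label, frame) pairs, pushing rendered frames towards the appearance of
# real surgical video. All sizes here are assumed for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 12  # hypothetical number of semantic classes (tools, anatomy)

class Generator(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, label_map):
        return self.net(label_map)

class Discriminator(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 3, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, label_map, frame):
        return self.net(torch.cat([label_map, frame], dim=1))

# One generator step: render a frame from a (stand-in) simulated label map
# and score it against the discriminator's notion of "real".
G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
labels = torch.randn(1, NUM_CLASSES, 128, 128).softmax(dim=1)  # stand-in label map

fake = G(labels)
d_fake = D(labels, fake)
g_loss = bce(d_fake, torch.ones_like(d_fake))  # generator tries to fool D
```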
Abstract: Robotic surgery and novel surgical instrumentation hold great potential for safer, more accurate, and more consistent minimally invasive surgery. However, their adoption depends on access to training facilities and extensive surgical training. Robotic instruments require different dexterity skills than open or laparoscopic surgery, so surgeons must invest significant time in extensive training programs. At the same time, hands-on training represents an additional operational cost for hospitals, as the availability of robotic systems for training purposes is limited. These technological and financial barriers for surgeons and hospitals hinder the adoption of robotic surgery. In this paper, we present a mobile dexterity training kit for developing basic surgical technique in an affordable setting. The system can be used to train basic surgical gestures and to develop the motor skills needed for manoeuvring robotic instruments. Our work presents the architecture and components needed to create a simulated environment for training sub-tasks, as well as a design for portable mobile manipulators that can serve as master controllers of different instruments. Results from a preliminary study demonstrate usability and skill development with this system.
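As an illustration of the master-controller idea, here is a minimal sketch of a clutched, motion-scaled master-to-instrument mapping, a common pattern in surgical teleoperation. The class names, scaling factor, and clutch logic are assumptions for illustration, not the kit's actual design.

```python
# Sketch: scale incremental master motions onto a simulated instrument tip,
# with a clutch so the trainee can re-anchor their hand without moving the
# instrument. All values are illustrative assumptions.
import numpy as np

MOTION_SCALE = 0.3  # hypothetical master-to-instrument scaling factor

class MasterController:
    """Tracks the master handle pose and emits scaled instrument increments."""
    def __init__(self):
        self.last_pose = None

    def update(self, master_pose, clutch_pressed):
        # master_pose: (x, y, z) of the handheld controller, in metres
        pose = np.asarray(master_pose, dtype=float)
        if self.last_pose is None or not clutch_pressed:
            # Clutch released: re-anchor without moving the instrument.
            self.last_pose = pose
            return np.zeros(3)
        delta = (pose - self.last_pose) * MOTION_SCALE
        self.last_pose = pose
        return delta  # instrument-tip displacement for this cycle

controller = MasterController()
tip = np.zeros(3)
for reading in [(0.00, 0.00, 0.00), (0.01, 0.00, 0.00), (0.01, 0.02, 0.00)]:
    tip += controller.update(reading, clutch_pressed=True)
print(tip)  # scaled instrument-tip position after the motion sequence
```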
Abstract: Automated surgical workflow analysis and understanding can help surgeons standardize procedures and enhance post-surgical assessment and indexing, as well as interventional monitoring. Video-based computer-assisted interventional (CAI) systems can estimate workflow by recognizing surgical instruments and linking them to an ontology of procedural phases. In this work, we adopt a deep learning approach to detect surgical instruments in cataract surgery videos; the detections in turn feed a recurrent network that infers the surgical phase by encoding the temporal structure of phase steps. Our models achieve results comparable to the state of the art for surgical tool detection and phase recognition, with accuracies of 99% and 78%, respectively.
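The two-stage pipeline can be sketched as follows, assuming a PyTorch implementation in which per-frame tool-presence scores from a CNN feed an LSTM phase classifier. The tool and phase counts and the network shapes are illustrative assumptions, not the paper's reported models.

```python
# Sketch: per-frame CNN tool detection feeding a recurrent phase classifier.
import torch
import torch.nn as nn

NUM_TOOLS = 21    # hypothetical number of annotated instruments
NUM_PHASES = 14   # hypothetical number of procedural phases

class ToolDetector(nn.Module):
    """Per-frame multi-label tool-presence scores."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, NUM_TOOLS)

    def forward(self, frames):                               # (B, 3, H, W)
        return self.head(self.features(frames).flatten(1))   # (B, NUM_TOOLS) logits

class PhaseRNN(nn.Module):
    """LSTM over per-frame tool scores -> phase label per time step."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NUM_TOOLS, 128, batch_first=True)
        self.head = nn.Linear(128, NUM_PHASES)

    def forward(self, tool_scores):                          # (B, T, NUM_TOOLS)
        out, _ = self.lstm(tool_scores)
        return self.head(out)                                # (B, T, NUM_PHASES) logits

detector, phase_net = ToolDetector(), PhaseRNN()
video = torch.rand(1, 16, 3, 128, 128)                       # one clip of 16 frames
per_frame = torch.stack(
    [detector(video[:, t]).sigmoid() for t in range(video.shape[1])], dim=1
)
phase_logits = phase_net(per_frame)                          # phase prediction per frame
```

Feeding tool scores rather than raw frames into the recurrent stage keeps the temporal model small and ties phase inference directly to instrument usage, which is the linkage to the procedural ontology that the abstract describes.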