Abstract:Understanding and anticipating intraoperative events and actions is critical for intraoperative assistance and decision-making during minimally invasive surgery. Automated prediction of events, actions, and their ensuing consequences is addressed through various computational approaches with the objective of augmenting surgeons' perception and decision-making capabilities. We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video, while flexibly leveraging surgical knowledge graphs. The approach incorporates a hypergraph-transformer (HGT) structure that encodes expert knowledge into the network design and predicts the hidden embedding of the graph. We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets and the achievement of the Critical View of Safety (CVS). Moreover, we address specific, safety-related tasks, such as predicting the clipping of the cystic duct or artery without prior achievement of the CVS. Our results demonstrate the superiority of our approach compared to unstructured alternatives.
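To make the knowledge-graph idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: action triplets are modeled as hyperedges connecting instrument, verb, and target nodes, with a toy mean-pooled hyperedge embedding standing in for the learned hypergraph-transformer encoder. All node names and feature dimensions here are hypothetical.

```python
# Illustrative sketch (NOT the paper's HGT): a surgical knowledge graph whose
# hyperedges connect instrument, verb, and target nodes. Mean pooling of node
# features stands in for the learned hypergraph-transformer encoder.
import numpy as np

# Toy node vocabulary with random feature vectors (stand-ins for learned embeddings).
rng = np.random.default_rng(0)
nodes = ["grasper", "clipper", "retract", "clip", "gallbladder", "cystic_duct"]
features = {name: rng.normal(size=4) for name in nodes}

# Hyperedges: each <instrument, verb, target> action triplet links three nodes at once,
# which an ordinary pairwise graph edge cannot express.
hyperedges = [
    ("grasper", "retract", "gallbladder"),
    ("clipper", "clip", "cystic_duct"),
]

def hyperedge_embedding(edge):
    """Mean-pool the member-node features into a single hyperedge vector."""
    return np.mean([features[n] for n in edge], axis=0)

embeddings = {edge: hyperedge_embedding(edge) for edge in hyperedges}
```

A trained model would replace the random features with learned embeddings and the mean pooling with attention over hyperedge members; the data structure, however, captures why hyperedges suit triplets.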
Abstract:Comprehension of surgical workflow is the foundation upon which computers build the understanding of surgery. In this work, we moved beyond the identification of surgical phases to predict future surgical phases and the transitions between them. We used a novel GAN formulation that sampled future surgical phase trajectories conditioned on past laparoscopic video frames, and compared it to state-of-the-art approaches for surgical video analysis and alternative prediction methods. We demonstrated its effectiveness in inferring and predicting the progress of laparoscopic cholecystectomy videos. We quantified the horizon-accuracy trade-off and explored average performance as well as the performance on the more difficult, and clinically important, transitions between phases. Lastly, we surveyed surgeons to evaluate the plausibility of these predicted trajectories.
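The conditional-sampling idea can be sketched in miniature. In this hypothetical example, a fixed phase-transition table stands in for the paper's conditional GAN generator, and the "conditioning on the past" reduces to rolling forward from the last observed phase; the phase names and probabilities are illustrative only.

```python
# Minimal sketch of sampling future surgical-phase trajectories conditioned on
# the past. A toy transition table stands in for the conditional GAN generator.
import random

PHASES = ["preparation", "dissection", "clipping", "extraction"]

# For each current phase, a distribution over the next phase (illustrative values).
TRANSITIONS = {
    "preparation": {"preparation": 0.7, "dissection": 0.3},
    "dissection":  {"dissection": 0.8, "clipping": 0.2},
    "clipping":    {"clipping": 0.6, "extraction": 0.4},
    "extraction":  {"extraction": 1.0},
}

def sample_trajectory(past_phases, horizon, seed=0):
    """Roll out `horizon` future phases conditioned on the last observed phase."""
    rng = random.Random(seed)
    current, future = past_phases[-1], []
    for _ in range(horizon):
        names, probs = zip(*TRANSITIONS[current].items())
        current = rng.choices(names, weights=probs)[0]
        future.append(current)
    return future

trajectory = sample_trajectory(["preparation", "dissection"], horizon=5)
```

Drawing several trajectories with different seeds mirrors how a generative model exposes the distribution over plausible futures, rather than a single point estimate, which is what the surgeon survey in the paper evaluates for plausibility.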
Abstract:Analyzing surgical workflow is crucial for computers to understand surgeries. Deep learning techniques have recently been widely applied to recognize surgical workflows. Many of the existing temporal neural network models are limited in their capability to handle long-term dependencies in the data, relying instead on the strong performance of the underlying per-frame visual models. We propose a new temporal network structure that leverages task-specific network representation to collect long-term sufficient statistics that are propagated by a sufficient statistics model (SSM). We leverage our approach within an LSTM backbone for the task of surgical phase recognition and explore several choices for propagated statistics. We demonstrate superior results over existing and novel state-of-the-art segmentation techniques on two laparoscopic cholecystectomy datasets: the already published Cholec80 dataset and MGH100, a novel dataset with more challenging, yet clinically meaningful, segment labels.
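The notion of propagating long-term sufficient statistics can be illustrated with a deliberately simple example: a running mean and count of per-frame features, carried forward step by step so that each timestep summarizes the entire past in constant size. This is a hand-written stand-in for the paper's learned SSM inside an LSTM backbone, not its actual formulation.

```python
# Illustrative sketch: propagate a constant-size summary (running mean, count)
# of all past per-frame features. A learned SSM would replace this hand-picked
# statistic, but the propagation pattern is the same.
def propagate(frame_features):
    """Return per-step (running_mean, count) sufficient statistics."""
    total, stats = 0.0, []
    for t, x in enumerate(frame_features, start=1):
        total += x
        stats.append((total / t, t))  # each entry summarizes frames 1..t
    return stats

stats = propagate([1.0, 3.0, 5.0])
print(stats[-1])  # → (3.0, 3)
```

The appeal of this pattern is that, unlike a raw hidden state, the statistic has a fixed interpretation over an unbounded history, which is what lets the temporal model reach beyond the effective memory of a plain LSTM.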