Abstract: Minimally invasive surgery (MIS) offers several advantages, including minimal tissue injury, reduced blood loss, and quick recovery times; however, it imposes limitations on the surgeon's abilities. Alongside factors such as the lack of tactile or haptic feedback, poor visualization of the surgical site is one of the most widely acknowledged causes of surgical drawbacks, including unintentional tissue damage. In the context of robot-assisted surgery, the lack of contextual detail in the frames makes vision tasks such as tracking tissue and tools, segmenting the scene, and estimating pose and depth challenging. In MIS, the acquired frames are compromised by various noise sources and blurred by motion from several origins. Moreover, when an underwater environment is considered, for instance knee arthroscopy, most of the visible noise and blur originates from the environment and from poor control over illumination and imaging conditions. Additionally, in MIS, procedures such as automatic white balancing and the transformation from raw color information to the standard RGB color space are often absent due to hardware miniaturization. There is therefore a strong demand for an online preprocessing framework that can circumvent these drawbacks. Our proposed method restores a latent clean and sharp image in the standard RGB color space from its noisy, blurred, raw observation in a single preprocessing stage.
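The abstract describes a single learned preprocessing stage; as a point of reference, the sketch below illustrates the conventional multi-step alternative it would replace (demosaicing, white balancing, denoising, sharpening), assuming OpenCV and a synthetic raw Bayer frame in place of real arthroscopic data. It is a minimal baseline sketch, not the paper's method.

```python
# Hypothetical conventional baseline: demosaic -> gray-world white balance ->
# denoise -> unsharp mask. The synthetic raw frame below stands in for the
# camera output; the paper's approach performs restoration in one learned stage.
import numpy as np
import cv2

# Synthetic 8-bit raw Bayer (BGGR) frame standing in for the sensor output.
raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

rgb = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)        # demosaic to 3-channel BGR

# Gray-world white balance: scale each channel toward the global mean.
means = rgb.reshape(-1, 3).mean(axis=0)
gains = means.mean() / np.maximum(means, 1e-6)
balanced = np.clip(rgb.astype(np.float32) * gains, 0, 255).astype(np.uint8)

denoised = cv2.fastNlMeansDenoisingColored(balanced, None, 10, 10, 7, 21)

# Unsharp masking to counteract mild motion/defocus blur.
blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)
```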
Abstract: Minimally invasive surgery (MIS) has many documented advantages, but the surgeon's limited visual contact with the scene can be problematic. Hence, systems that help surgeons navigate, such as a method that can produce a 3D semantic map, can compensate for this limitation. In theory, we can borrow 3D semantic mapping techniques developed for robotics, but this requires solving the following challenges in MIS: 1) semantic segmentation, 2) depth estimation, and 3) pose estimation. In this paper, we propose the first 3D semantic mapping system for knee arthroscopy that solves the three challenges above. Using out-of-distribution non-human datasets, where pose could be labeled, we jointly train depth and pose estimators using self-supervised and supervised losses. Using an in-distribution human knee dataset, we train a fully supervised semantic segmentation system to label arthroscopic image pixels as femur, ACL, and meniscus. Taking test images from human knees, we combine the results from these two systems to automatically create 3D semantic maps of the human knee. This work opens the pathway to the generation of intraoperative 3D semantic maps, registration with pre-operative data, and robot-assisted arthroscopy.
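A minimal sketch of the fusion step follows, assuming a pinhole camera model with placeholder intrinsics and random stand-ins for the predicted depth, per-pixel labels, and camera pose; it only illustrates how depth, segmentation, and pose outputs can be combined into a labeled 3D point cloud, not the paper's actual pipeline.

```python
# Back-project each pixel with its predicted depth, attach its semantic label,
# and map the result into a common world frame via the estimated camera pose.
import numpy as np

H, W = 192, 256
fx, fy, cx, cy = 200.0, 200.0, W / 2, H / 2        # assumed pinhole intrinsics

depth = np.random.uniform(0.01, 0.05, (H, W))      # metres, stand-in for the depth network
labels = np.random.randint(0, 4, (H, W))           # 0=background, 1=femur, 2=ACL, 3=meniscus

u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)   # camera-frame coordinates

# The pose from the joint depth+pose estimator would map these points into a
# common world frame; here a 4x4 identity stands in for that transform.
T_world_cam = np.eye(4)
points_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
semantic_map = np.concatenate([(points_h @ T_world_cam.T)[:, :3],
                               labels.reshape(-1, 1)], axis=1)   # [X, Y, Z, class]
```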
Abstract: Knee arthroscopy is a minimally invasive surgical (MIS) procedure performed to treat knee-joint ailments. The lack of visual information of the surgical site obtained from miniaturized cameras makes this procedure more complex. The knee cavity is a very confined space; therefore, surgical scenes are captured in close proximity. The insignificant context of the knee atlas often makes structures unrecognizable; as a consequence, unintentional tissue damage frequently occurs, and new surgeons face a long learning curve. Automatic context awareness through labeling of the surgical site can be an alternative means of mitigating these drawbacks. However, previous studies confirm that the surgical site exhibits several limitations, among others a lack of discriminative contextual information such as texture and features, which drastically limits this vision task. Additionally, poor imaging conditions and the lack of accurate ground-truth labels also limit the achievable accuracy. To mitigate these limitations of knee arthroscopy, in this work we propose a scene segmentation method that successfully segments multiple structures.
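For illustration only, the toy multi-class segmentation sketch below uses a placeholder network and random tensors standing in for arthroscopic frames and ground-truth masks; the paper's actual architecture and training setup are not reproduced here.

```python
# Toy per-pixel multi-class segmentation step over four hypothetical classes
# (background, femur, ACL, meniscus); not the paper's architecture.
import torch
import torch.nn as nn

num_classes = 4
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, num_classes, 1),              # per-pixel class logits
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(2, 3, 128, 128)             # stand-in arthroscopic frames
masks = torch.randint(0, num_classes, (2, 128, 128))   # stand-in ground-truth labels

logits = model(images)                          # (B, C, H, W)
loss = criterion(logits, masks)
loss.backward()
optimiser.step()
```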
Abstract: Robotic-assisted orthopaedic surgeries demand accurate, automated leg manipulation to improve spatial accuracy and reduce iatrogenic damage. In this study, we propose novel rigid-body designs and an optical tracking volume setup for tracking the femur, tibia, and surgical instruments. Anatomical points inside the leg are measured using Computed Tomography with an accuracy of 0.3 mm. Combined with kinematic modelling, these measurements allow us to express the points relative to any frame and across joints with sub-millimetre accuracy. This enables vectors to be defined along the mechanical axes of the femur and tibia for kinematic analysis. Cadaveric experiments are used to verify the tracking of internal anatomies and the joint motion analysis. The proposed integrated solution is a first step towards the automation of leg manipulation and can be used as a ground truth for future robot-assisted orthopaedic research.
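A minimal sketch of the frame-change idea follows, assuming placeholder values for the marker pose and the CT-measured anatomical point; it shows how a point expressed in a rigid-body marker frame can be re-expressed in the optical tracker frame with a homogeneous transform, which is the basic operation behind expressing anatomical points relative to any frame.

```python
# Express a CT-measured anatomical point in another frame via a 4x4 homogeneous
# transform; the rotation, translation, and point values are illustrative only.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed pose of the femur rigid-body marker in the optical tracker frame.
T_tracker_femur = make_T(np.eye(3), np.array([0.10, 0.02, 0.50]))   # metres

# Anatomical point (e.g., femoral head centre) measured by CT in the marker frame.
p_femur = np.array([0.015, -0.030, 0.250, 1.0])                     # homogeneous

p_tracker = T_tracker_femur @ p_femur        # same point in the tracker frame
print(p_tracker[:3])
```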
Abstract: While the Product of Exponentials (POE) formula has been gaining popularity for modeling the kinematics of serial-link robots, the Denavit-Hartenberg (D-H) notation is still the most widely used due to its intuitive and concise geometric interpretation of the robot. This paper develops an analytical solution to automatically convert a POE model into a D-H model for a robot with revolute, prismatic, and helical joints, which form the complete set of the three basic one-degree-of-freedom lower-pair joints for constructing a serial-link robot. The conversion algorithm can be used in applications such as calibration, where it is necessary to convert the D-H model to the POE model for identification and then back to the D-H model for compensation. The equivalence of the two models proved in this paper also benefits the analysis of the identifiability of the kinematic parameters. It is found that the maximum number of identifiable parameters in a general POE model is 5h + 4r + 2t + n + 6, where h, r, t, and n stand for the number of helical, revolute, prismatic, and general joints, respectively. It is also suggested that the identifiability of the base frame and the tool frame in the D-H model is restricted, rather than the arbitrary six parameters assumed previously.
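The following is a numerical sketch of the POE forward-kinematics product for a planar arm with two revolute joints, checked against the closed-form solution; the link lengths and joint angles are illustrative values and the example is not taken from the paper.

```python
# Forward kinematics via the POE formula: T = exp([S1]*q1) exp([S2]*q2) M,
# where [S] is the 4x4 matrix form of a joint's screw axis and M is the home
# pose of the tool frame. Verified against the standard planar 2R closed form.
import numpy as np
from scipy.linalg import expm

def twist_matrix(omega, v):
    """4x4 matrix form of a twist (omega, v) for use in the matrix exponential."""
    W = np.array([[0, -omega[2], omega[1]],
                  [omega[2], 0, -omega[0]],
                  [-omega[1], omega[0], 0]], dtype=float)
    S = np.zeros((4, 4))
    S[:3, :3] = W
    S[:3, 3] = v
    return S

L1, L2 = 0.4, 0.3
# Screw axes of the two revolute joints in the space frame (v = -omega x q).
S1 = twist_matrix([0, 0, 1], [0, 0, 0])
S2 = twist_matrix([0, 0, 1], [0, -L1, 0])
M = np.eye(4); M[0, 3] = L1 + L2                  # home pose of the tool frame

theta1, theta2 = 0.5, -0.2
T = expm(S1 * theta1) @ expm(S2 * theta2) @ M     # POE product

# Closed-form check of the tool position.
x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
assert np.allclose(T[:2, 3], [x, y])
```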