Abstract: In this study, we propose a novel framework that utilizes deep learning (DL) and attention mechanisms to predict the radiographic progression of patellofemoral osteoarthritis (PFOA) over a period of seven years. The study included 1832 subjects (3276 knees) from the baseline of the MOST study. PF joint regions-of-interest were identified using an automated landmark detection tool (BoneFinder) on lateral knee X-rays. An end-to-end DL method was developed for predicting PFOA progression based on imaging data in a 5-fold cross-validation setting. A set of baseline models based on known risk factors was developed and analyzed using a gradient boosting machine (GBM). Risk factors included age, sex, BMI, WOMAC score, and the radiographic osteoarthritis stage of the tibiofemoral joint (KL score). Finally, we trained an ensemble model using both imaging and clinical data. Among the individual models, our deep convolutional neural network attention model achieved the best performance with an AUC of 0.856 and an AP of 0.431, slightly outperforming the DL approach without attention (AUC = 0.832, AP = 0.4) and the best performing reference GBM model (AUC = 0.767, AP = 0.334). Combining imaging data and clinical variables in an ensemble model yielded a statistically stronger prediction of PFOA progression (AUC = 0.865, AP = 0.447), although the clinical significance of this minor performance gain remains unknown. This study demonstrated the potential of machine learning models to predict the progression of PFOA using imaging and clinical variables. These models could be used to identify patients at high risk of progression and prioritize them for new treatments. However, even though the accuracy of the models was excellent in this study using the MOST dataset, they should still be validated on external patient cohorts in the future.
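The imaging-plus-clinical ensemble described above can be illustrated with a minimal sketch (not the authors' exact architecture): a small CNN backbone with a spatial-attention pooling head produces an image-based probability of progression, which is then averaged with the probability from a gradient boosting model fitted on the clinical risk factors. The class name `PFOAProgressionNet`, the layer sizes, and the equal fusion weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PFOAProgressionNet(nn.Module):
    """Toy CNN with spatial-attention pooling for a single-channel knee ROI."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                        # small convolutional feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(32, 1, 1)                       # 1x1 conv -> spatial attention map
        self.fc = nn.Linear(32, 1)                            # binary progression logit

    def forward(self, x):
        f = self.backbone(x)                                  # (B, 32, H', W')
        w = torch.softmax(self.attn(f).flatten(2), dim=-1)    # attention weights over locations
        pooled = (f.flatten(2) * w).sum(dim=-1)               # attention-weighted pooling -> (B, 32)
        return self.fc(pooled).squeeze(1)                     # one logit per knee

def ensemble_probability(cnn, image, gbm, clinical_row):
    """Late fusion: average imaging and clinical probabilities (illustrative 50/50 weighting).

    `image` is a (1, 1, H, W) tensor; `gbm` is a fitted scikit-learn classifier trained on
    age, sex, BMI, WOMAC score, and KL grade; `clinical_row` is a 1-D numpy array.
    """
    with torch.no_grad():
        p_img = torch.sigmoid(cnn(image)).item()
    p_clin = gbm.predict_proba(clinical_row.reshape(1, -1))[0, 1]
    return 0.5 * p_img + 0.5 * p_clin
```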
Abstract: The objective was to assess the ability of texture features to detect radiographic patellofemoral osteoarthritis (PFOA) from lateral view knee radiographs. We used lateral view knee radiographs from the MOST public use datasets (n = 5507 knees). The patellar region-of-interest (ROI) was automatically detected using a landmark detection tool (BoneFinder). Hand-crafted features based on Local Binary Patterns (LBP) were then extracted to describe the patellar texture. First, a machine learning model (Gradient Boosting Machine) was trained to detect radiographic PFOA from the LBP features. Furthermore, we used end-to-end trained deep convolutional neural networks (CNNs) directly on the texture patches to detect PFOA. The proposed classification models were eventually compared with more conventional reference models that use clinical assessments and participant characteristics such as age, sex, body mass index (BMI), the total WOMAC score, and tibiofemoral Kellgren-Lawrence (KL) grade. Atlas-guided visual assessment of PFOA status by expert readers, provided in the MOST public use datasets, was used as the classification outcome for the models. Performance of the prediction models was assessed using the area under the receiver operating characteristic curve (ROC AUC), the area under the precision-recall (PR) curve (average precision, AP), and the Brier score in a stratified 5-fold cross-validation setting. Of the 5507 knees, 953 (17.3%) had PFOA. AUC and AP for the strongest reference model, including age, sex, BMI, WOMAC score, and tibiofemoral KL grade, to predict PFOA were 0.817 and 0.487, respectively. Textural ROI classification using a CNN significantly improved the prediction performance (ROC AUC = 0.889, AP = 0.714). We present the first study that analyses patellar bone texture for diagnosing PFOA. Our results demonstrate the potential of using texture features of the patella to predict PFOA.
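As a rough sketch of the hand-crafted branch of this pipeline, assuming the patellar ROI has already been cropped to a grayscale array, uniform LBP codes can be histogrammed into a fixed-length descriptor and fed to a gradient boosting classifier; the parameter choices (P = 8, R = 1) below are illustrative rather than the study's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

def lbp_histogram(roi, P=8, R=1):
    """Compute a normalized histogram of uniform LBP codes for one patellar ROI."""
    codes = local_binary_pattern(roi, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)  # P+2 uniform-LBP bins
    return hist

def train_pfoa_classifier(rois, labels):
    """rois: list of 2-D grayscale patellar patches; labels: radiographic PFOA status (0/1)."""
    X = np.stack([lbp_histogram(r) for r in rois])
    return GradientBoostingClassifier().fit(X, labels)
```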
Abstract: Objective: To assess the ability of imaging-based deep learning to predict radiographic patellofemoral osteoarthritis (PFOA) from knee lateral view radiographs. Design: Knee lateral view radiographs were extracted from The Multicenter Osteoarthritis Study (MOST) (n = 18,436 knees). The patellar region-of-interest (ROI) was first automatically detected using a deep-learning-based object detection method, and subsequently, end-to-end deep convolutional neural networks (CNNs) were trained and validated to detect the status of patellofemoral OA. Manual PFOA status assessment provided in the MOST dataset was used as the classification outcome for the CNNs. Performance of the prediction models was assessed using the area under the receiver operating characteristic curve (ROC AUC) and the average precision (AP) obtained from the precision-recall (PR) curve in a stratified 5-fold cross-validation setting. Results: Of the 18,436 knees, 3,425 (19%) had PFOA. AUC and AP for the reference model, including age, sex, body mass index (BMI), the total Western Ontario and McMaster Universities Arthritis Index (WOMAC) score, and tibiofemoral Kellgren-Lawrence (KL) grade, to predict PFOA were 0.806 and 0.478, respectively. The CNN model that used only image data significantly improved the prediction of PFOA status (ROC AUC = 0.958, AP = 0.862). Conclusion: We present the first machine-learning-based automatic PFOA detection method. Furthermore, our deep-learning-based model trained on the patellar region from knee lateral view radiographs performs better at predicting PFOA than models based on patient characteristics and clinical assessments.
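The evaluation protocol used here (stratified 5-fold cross-validation scored with ROC AUC and AP) can be sketched as follows; `get_model`, the feature matrix `X`, and the labels `y` are placeholders for whichever image-based or reference pipeline is being assessed.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.linear_model import LogisticRegression

def cross_validate(X, y, get_model=lambda: LogisticRegression(max_iter=1000), seed=42):
    """Return mean ROC AUC and mean AP over stratified 5-fold cross-validation."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    aucs, aps = [], []
    for train_idx, test_idx in skf.split(X, y):
        model = get_model().fit(X[train_idx], y[train_idx])
        p = model.predict_proba(X[test_idx])[:, 1]      # predicted probability of PFOA
        aucs.append(roc_auc_score(y[test_idx], p))
        aps.append(average_precision_score(y[test_idx], p))
    return np.mean(aucs), np.mean(aps)
```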
Abstract: Knee osteoarthritis (OA) is a very common progressive and degenerative musculoskeletal disease worldwide that creates a heavy burden on patients, through reduced quality of life, and on society, due to its financial impact. Therefore, any attempt to reduce the burden of the disease could help both patients and society. In this study, we propose a fully automated novel method, based on the combination of joint shape and convolutional neural network (CNN) based bone texture features, to distinguish between knee radiographs with and without radiographic osteoarthritis. Moreover, we report the first attempt at describing the bone texture using a CNN. Knee radiographs from the Osteoarthritis Initiative (OAI) and Multicenter Osteoarthritis (MOST) studies were used in the experiments. Our models were trained on 8953 knee radiographs from OAI and evaluated on 3445 knee radiographs from MOST. Our results demonstrate that fusing the proposed shape and texture parameters achieves state-of-the-art performance in radiographic OA detection, yielding an area under the ROC curve (AUC) of 95.21%.
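A minimal sketch of the shape-texture fusion described above (illustrative, not the authors' exact model): a shape descriptor vector and a CNN-derived texture feature vector are concatenated per knee and classified with a simple linear model; `shape_feats` and `texture_feats` stand in for the study's specific descriptors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_and_classify(shape_feats, texture_feats, labels):
    """shape_feats: (N, Ds) array; texture_feats: (N, Dt) array from a CNN; labels: OA status (0/1)."""
    X = np.concatenate([shape_feats, texture_feats], axis=1)  # feature-level fusion per knee
    return LogisticRegression(max_iter=1000).fit(X, labels)
```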
Abstract: The purposes of this study were to investigate: 1) the effect of the placement of the region-of-interest (ROI) for texture analysis of subchondral bone in knee radiographs, and 2) the ability of several texture descriptors to distinguish between knees with and without radiographic osteoarthritis (OA). Bilateral posterior-anterior knee radiographs were analyzed from the baselines of the OAI and MOST datasets. A fully automatic method to locate the most informative region of the subchondral bone using adaptive segmentation was developed. We used an oversegmentation strategy for partitioning knee images into compact regions that follow natural texture boundaries. Local Binary Patterns (LBP), Fractal Dimension (FD), Haralick features, Shannon entropy, and Histogram of Oriented Gradients (HOG) were computed within the standard ROI and within the proposed adaptive ROIs. Subsequently, we built logistic regression models to identify and compare the performance of each texture descriptor and each ROI placement method in a 5-fold cross-validation setting. Importantly, we also investigated the generalizability of our approach by training the models on the OAI dataset and testing them on the MOST dataset. We used the area under the receiver operating characteristic (ROC) curve (AUC) and the average precision (AP) obtained from the precision-recall (PR) curve to compare the results. We found that the adaptive ROI improves the classification performance (OA vs. non-OA) over the commonly used standard ROI (up to a 9% increase in AUC). We also observed that, of all texture descriptors, LBP yielded the best performance in all settings, with the best AUC of 0.840 [0.825, 0.852] and an associated AP of 0.804 [0.786, 0.820]. Compared to the current state-of-the-art approaches, our results suggest that the proposed adaptive ROI approach in texture analysis of subchondral bone can increase the diagnostic performance for detecting the presence of radiographic OA.
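A minimal sketch of the adaptive-ROI idea, assuming SLIC superpixels as the oversegmentation step (the study's exact segmentation method may differ): the radiograph is partitioned into compact regions and a texture descriptor (here LBP) is computed per region, so the most informative region can later be selected.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern

def adaptive_roi_lbp(image, n_regions=50, P=8, R=1):
    """Return one normalized LBP histogram per oversegmented region of a grayscale radiograph."""
    segments = slic(image, n_segments=n_regions, channel_axis=None)  # compact oversegmentation
    codes = local_binary_pattern(image, P, R, method="uniform")
    hists = []
    for label in np.unique(segments):
        region_codes = codes[segments == label]                      # LBP codes inside one region
        hist, _ = np.histogram(region_codes, bins=np.arange(P + 3), density=True)
        hists.append(hist)
    return np.stack(hists)
```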