Abstract: Tropospheric ozone, a pollutant of concern, has been associated with health issues including asthma, bronchitis, and impaired lung function. The rates at which peroxy radicals react with NO play a critical role in the overall formation and depletion of tropospheric ozone. However, obtaining comprehensive kinetic data for these reactions remains challenging, and traditional approaches to determining rate constants are costly and technically intricate. Fortunately, machine learning-based models offer a less resource- and time-intensive alternative for acquiring kinetics information. In this study, we leveraged deep reinforcement learning to predict ranges of rate constants (k) with exceptional accuracy, achieving a testing set accuracy of 100%. To analyze reactivity trends based on the molecular structure of peroxy radicals, we employed 51 global descriptors as input parameters, derived from optimized minimum-energy geometries of the peroxy radicals using the quantum composite G3B3 method. Through the application of Integrated Gradients (IGs), we gained valuable insight into the significance of the various descriptors in relation to reaction rates, and we validated and contextualized our findings through cross-comparison with established trends in the existing literature. These results establish a solid foundation for pioneering advancements in chemistry, in which computer analysis serves as a source of inspiration driving innovation.
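As an illustration of the attribution step described above, here is a minimal sketch of Integrated Gradients in PyTorch, assuming a hypothetical classifier `model` that maps the 51 molecular descriptors to rate-constant range classes; the function, tensor shapes, and baseline choice are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * mean_k dF/dx_i along the
    straight-line path from baseline x' to input x (Riemann sum)."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)   # (steps, 51) interpolated inputs
    path.requires_grad_(True)
    logits = model(path)                        # (steps, n_rate_classes)
    grads = torch.autograd.grad(logits[:, target_class].sum(), path)[0]
    return (x - baseline).squeeze(0) * grads.mean(dim=0)  # per-descriptor attribution
```

Descriptors with large attribution magnitudes are the ones such a model leans on most when assigning a rate-constant range.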
Abstract: Artificial intelligence (AI) in radiology has made great strides in recent years, but many hurdles remain. Overfitting and lack of generalizability represent important ongoing challenges hindering accurate and dependable clinical deployment. If AI algorithms can avoid overfitting and achieve true generalizability, they can go from the research realm to the forefront of clinical work. Recently, small-data AI approaches such as deep neuroevolution (DNE) have avoided overfitting small training sets. We seek to address both overfitting and generalizability by applying DNE to a virtually pooled data set consisting of images from various institutions. Our use case is classifying neuroblastoma brain metastases on MRI. Neuroblastoma is well suited to our goals because it is a rare cancer; studying this pediatric disease therefore requires a small-data approach. Because ours is a tertiary care center, the neuroblastoma images in our local Picture Archiving and Communication System (PACS) are largely from outside institutions. These multi-institutional images provide a heterogeneous data set that can simulate real-world clinical deployment. As in prior DNE work, we used a small training set, consisting of 30 normal and 30 metastasis-containing post-contrast MRI brain scans, with 37% outside images. The testing set was enriched with 83% outside images. DNE converged to a testing set accuracy of 97%; the algorithm was thus able to predict image class with near-perfect accuracy on a testing set that simulates real-world data. The work described here therefore represents a considerable contribution toward clinically feasible AI.
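The weight-mutation loop at the heart of DNE can be sketched as a simple (1+1) evolution strategy; the hyperparameters and the accept-if-better rule below are illustrative assumptions rather than the exact scheme used in this work.

```python
import copy
import torch

def evolve(cnn, fitness_fn, generations=10000, sigma=0.002):
    """(1+1) evolution strategy: mutate CNN weights with Gaussian noise and
    keep the child only if training-set fitness (e.g., accuracy) improves."""
    best_fitness = fitness_fn(cnn)
    for _ in range(generations):
        child = copy.deepcopy(cnn)
        with torch.no_grad():
            for p in child.parameters():
                p.add_(sigma * torch.randn_like(p))  # random weight perturbation
        child_fitness = fitness_fn(child)
        if child_fitness >= best_fitness:            # retain beneficial mutations
            cnn, best_fitness = child, child_fitness
    return cnn
```

Because this loop needs only forward passes, it sidesteps gradient-based overfitting dynamics, which is one motivation for DNE on small training sets.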
Abstract: Whereas MRI produces anatomic information about the brain, functional MRI (fMRI) tells us about neural activity within the brain, including how various regions communicate with each other. The full chorus of conversations within the brain is summarized elegantly in the adjacency matrix. Although information-rich, adjacency matrices typically provide little in the way of intuition. Whereas trained radiologists viewing anatomic MRI can readily distinguish between different kinds of brain cancer, a similar determination using adjacency matrices would exceed any expert's grasp. Artificial intelligence (AI) in radiology usually analyzes anatomic imaging, providing assistance to radiologists. For non-intuitive data types such as adjacency matrices, AI moves beyond the role of helpful assistant, emerging as indispensable. We sought here to show that AI can learn to discern between two important brain tumor types, high-grade glioma (HGG) and low-grade glioma (LGG), based on adjacency matrices. We trained a convolutional neural network (CNN) with the method of deep neuroevolution (DNE) because of the latter's recent promising results; DNE has produced remarkably accurate CNNs even when relying on small and noisy training sets or performing nuanced tasks. After training on just 30 adjacency matrices, our CNN could tell HGG apart from LGG with perfect testing set accuracy. Saliency maps revealed that the network learned highly sophisticated and complex features to achieve its success. Hence, we have shown that it is possible for AI to recognize brain tumor type from functional connectivity. In future work, we will apply DNE to other noisy and somewhat cryptic forms of medical data, including further explorations with fMRI.
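For the saliency maps mentioned above, a common gradient-based recipe is to backpropagate the winning class score to the input; the sketch below applies this to an N x N adjacency matrix and is an assumption about form, not necessarily the paper's exact saliency method.

```python
import torch

def saliency_map(cnn, adjacency, target_class):
    """Gradient saliency: |d(score)/d(entry)| shows which functional
    connections most influence the HGG-vs-LGG decision."""
    x = adjacency.detach().clone().unsqueeze(0).unsqueeze(0)  # (1, 1, N, N)
    x.requires_grad_(True)
    score = cnn(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze()  # (N, N) saliency over region pairs
```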
Abstract: Purpose: Because functional MRI (fMRI) data sets are in general small, we sought a data-efficient approach to resting-state fMRI classification of autism spectrum disorder (ASD) versus neurotypical (NT) controls. We hypothesized that a Deep Reinforcement Learning (DRL) classifier could learn effectively on a small fMRI training set. Methods: We trained a DRL classifier on 100 graph-label pairs from the Autism Brain Imaging Data Exchange (ABIDE) database. For comparison, we trained a Supervised Deep Learning (SDL) classifier on the same training set. Results: DRL significantly outperformed SDL, with a p-value of 2.4 x 10^-7. DRL achieved superior results for a variety of classifier performance metrics, including an F1 score of 76, versus 67 for SDL. Whereas SDL quickly overfit the training data, DRL learned in a progressive manner that generalized to the separate testing set. Conclusion: DRL can learn to classify ASD versus NT in a data-efficient manner, doing so with a small training set. Future work will involve optimizing the neural network for data efficiency and applying the approach to other fMRI data sets, namely for brain cancer patients.
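For readers wanting to reproduce metrics of the kind reported here, F1 and related quantities can be computed with scikit-learn; the labels below are purely illustrative stand-ins, not the ABIDE results.

```python
from sklearn.metrics import classification_report

# Illustrative predictions only (1 = ASD, 0 = NT); not the study's data.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]
print(classification_report(y_true, y_pred, target_names=["NT", "ASD"]))
```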
Abstract: Purpose: A core component of advancing cancer treatment research is assessing response to therapy. Doing so by hand, for example per RECIST or RANO criteria, is tedious, time-consuming, and can miss important tumor response information; most notably, these criteria exclude non-target lesions. We wish to assess change in a holistic fashion that includes all lesions, obtaining simple, informative, and automated assessments of tumor progression or regression. Because patient enrollment in clinical trials is often low, we wish to make response assessments with small training sets. Deep neuroevolution (DNE) can produce radiology artificial intelligence (AI) that performs well on small training sets. Here we use DNE for function approximation that predicts progression versus regression of metastatic brain disease. Methods: We analyzed 50 pairs of contrast-enhanced MRI images as our training set. Half of these pairs, separated in time, qualified as disease progression, while the other 25 pairs constituted regression. We trained the parameters of a relatively small CNN via mutations consisting of random CNN weight adjustments, evaluating each mutation's fitness. We then incorporated the best mutations into the next generation's CNN, repeating this process for approximately 50,000 generations. We applied the CNNs to our training set, as well as to a separate testing set with the same class balance of 25 progression and 25 regression image pairs. Results: DNE achieved monotonic convergence to 100% training set accuracy, and likewise converged monotonically to 100% testing set accuracy. Conclusion: DNE can accurately classify progression versus regression of brain-metastatic disease. Future work will extend the input from 2D image slices to full 3D volumes and include a no-change category. We believe that an approach such as ours could ultimately provide a useful adjunct to RANO/RECIST assessment.
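A DNE fitness function for this task could simply be training-set accuracy over image pairs; the two-channel stacking of the earlier and later scans below is an assumed input format for illustration, not necessarily the study's encoding.

```python
import torch

def fitness(cnn, image_pairs, labels):
    """Fitness = fraction of training pairs classified correctly
    (0 = regression, 1 = progression)."""
    correct = 0
    with torch.no_grad():
        for (before, after), label in zip(image_pairs, labels):
            x = torch.stack([before, after]).unsqueeze(0)  # (1, 2, H, W)
            correct += int(cnn(x).argmax(dim=1).item() == label)
    return correct / len(labels)
```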
Abstract: Purpose: Image classification is perhaps the most fundamental task in imaging AI. However, labeling images is time-consuming and tedious. We have recently demonstrated that reinforcement learning (RL) can classify 2D slices of MRI brain images with high accuracy. Here we take two important steps toward speeding up image classification: First, we automatically extract class labels from the clinical reports. Second, we extend our prior 2D classification work to fully 3D image volumes from our institution. Hence, we proceed as follows: in Part 1, we extract labels from reports automatically using the SBERT natural language processing approach. Then, in Part 2, we use these labels with RL to train a classification Deep-Q Network (DQN) for 3D image volumes. Methods: For Part 1, we trained SBERT with 90 radiology report impressions. We then used the trained SBERT to predict class labels for use in Part 2. In Part 2, we applied multi-step image classification to allow for combined Deep-Q learning using 3D convolutions and TD(0) Q learning. We trained on a set of 90 images. We tested on a separate set of 61 images, again using the classes predicted from patient reports by the trained SBERT in Part 1. For comparison, we also trained and tested a supervised deep learning classification network on the same sets of training and testing images, using the same labels. Results: Part 1: Upon training with the corpus of radiology reports, the SBERT model had 100% accuracy for both normal and metastasis-containing scans. Part 2: Using these labels, whereas the supervised approach quickly overfit the training data and, as expected, performed poorly on the testing set (66% accuracy, just over random guessing), the reinforcement learning approach achieved an accuracy of 92%. The difference was statistically significant, with a p-value of 3.1 x 10^-5.
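One simple way to turn SBERT embeddings into class labels is nearest-anchor cosine similarity with the sentence-transformers library; the model name, anchor impressions, and similarity rule below are illustrative assumptions, not the fine-tuning procedure used in Part 1.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is illustrative
anchors = ["No evidence of intracranial metastatic disease.",
           "New enhancing lesions consistent with metastases."]
anchor_embs = model.encode(anchors, convert_to_tensor=True)

def predict_label(impression):
    """Assign the class whose anchor impression is closest in embedding space."""
    emb = model.encode(impression, convert_to_tensor=True)
    similarities = util.cos_sim(emb, anchor_embs)  # cosine similarity to each class
    return ["normal", "metastasis"][int(similarities.argmax())]
```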
Abstract: Purpose: We seek to use neural networks (NNs) to solve a well-known system of differential equations describing the balance between T cells and HIV viral burden. Materials and Methods: We employ a 3-input parallel NN to approximate solutions for the system of first-order ordinary differential equations describing the above biochemical relationship. Results: The numerical results obtained by the NN closely match a host of numerical approximations from the literature. Conclusion: We have demonstrated NN integration of a well-known and medically important system of first-order coupled ordinary differential equations. Our trial-and-error approach counteracts the system's inherent scale imbalance, but it highlights the need to address scale imbalance more substantively in future work. Doing so will allow more automated solutions to larger systems of equations, which could describe increasingly complex and biologically interesting systems.
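A common way to realize such a parallel NN solver is a physics-informed residual loss: one subnetwork per state variable, penalizing dy/dt - f(t, y) together with the initial condition. The sketch below leaves the right-hand side `rhs` as a user-supplied function standing in for the T-cell/HIV dynamics; the architecture details are assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ParallelNet(nn.Module):
    """Three parallel subnetworks, one per state variable of the system."""
    def __init__(self, hidden=32):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(3))

    def forward(self, t):                     # t: (batch, 1)
        return torch.cat([net(t) for net in self.nets], dim=1)  # (batch, 3)

def residual_loss(model, t, rhs, y0):
    """Penalize the ODE residual dy/dt - f(t, y) plus the initial condition y(0) = y0."""
    t = t.requires_grad_(True)
    y = model(t)
    dydt = torch.stack([
        torch.autograd.grad(y[:, i].sum(), t, create_graph=True)[0].squeeze(1)
        for i in range(3)], dim=1)
    ode_term = ((dydt - rhs(t, y)) ** 2).mean()
    init_term = ((model(torch.zeros(1, 1)) - y0) ** 2).mean()
    return ode_term + init_term
```

Scale imbalance of the kind noted in the Conclusion would show up here as one state variable dominating the residual, which is why per-equation weighting of the loss terms is a natural next step.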
Abstract: Purpose: Image classification is perhaps the most fundamental task in imaging artificial intelligence. We have recently shown that reinforcement learning can achieve high accuracy for lesion localization and segmentation even with minuscule training sets. Here, we introduce reinforcement learning for image classification. In particular, we apply the approach to normal vs. tumor-containing 2D MRI brain images. Materials and Methods: We applied multi-step image classification to allow for combined Deep Q learning and TD(0) Q learning. We trained on a set of 30 images (15 normal and 15 tumor-containing). We tested on a separate set of 30 images (15 normal and 15 tumor-containing). For comparison, we also trained and tested a supervised deep-learning classification network on the same sets of training and testing images. Results: Whereas the supervised approach quickly overfit the training data and, as expected, performed poorly on the testing set (57% accuracy, just over random guessing), the reinforcement learning approach achieved an accuracy of 100%. Conclusion: We have shown a proof-of-principle application of reinforcement learning to the classification of brain tumors, achieving perfect testing set accuracy with a training set of merely 30 images.
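The TD(0) Q-learning step referenced above follows the standard update Q(s, a) ← r + γ max_a' Q(s', a'); the sketch below is a generic single-transition version in PyTorch, with the state encoding and reward design left as assumptions.

```python
import torch
import torch.nn.functional as F

def td0_update(q_net, optimizer, state, action, reward, next_state, gamma=0.99):
    """One TD(0) step: regress Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    q_sa = q_net(state)[0, action]                   # predicted action value
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```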
Abstract: Purpose: Lesion segmentation in medical imaging is key to evaluating treatment response. We have recently shown that reinforcement learning can be applied to radiological images for lesion localization. Furthermore, we demonstrated that reinforcement learning addresses important limitations of supervised deep learning; namely, it can eliminate the requirement for large amounts of annotated training data and can provide valuable intuition lacking in supervised approaches. However, we did not address the fundamental task of lesion/structure-of-interest segmentation. Here we introduce a method combining unsupervised deep learning clustering with reinforcement learning to segment brain lesions on MRI. Materials and Methods: We initially applied unsupervised deep clustering to generate candidate lesion masks for each MRI image. The user then selected the best mask for each of 10 training images. We then trained a reinforcement learning algorithm to select the masks, and tested the corresponding trained deep Q network on a separate testing set of 10 images. For comparison, we also trained and tested a U-net supervised deep learning network on the same sets of training and testing images. Results: Whereas the supervised approach quickly overfit the training data and predictably performed poorly on the testing set (16% average Dice score), the combined unsupervised deep clustering and reinforcement learning approach achieved an average Dice score of 83%. Conclusion: We have demonstrated a proof-of-principle application of unsupervised deep clustering and reinforcement learning to segment brain tumors. The approach represents human-allied AI that requires minimal input from the radiologist, without the need for hand-traced annotation.
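The Dice score used to evaluate both approaches is the standard overlap measure 2|A ∩ B| / (|A| + |B|) over binary masks; a straightforward NumPy implementation:

```python
import numpy as np

def dice_score(pred_mask, true_mask):
    """Dice overlap between predicted and reference binary lesion masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    denom = pred.sum() + true.sum()
    return 2.0 * np.logical_and(pred, true).sum() / denom if denom else 1.0
```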
Abstract: Purpose: AI in radiology is hindered chiefly by: 1) the requirement for large annotated data sets; 2) non-generalizability that limits deployment to new scanners and institutions; and 3) inadequate explainability and interpretability. We believe that reinforcement learning can address all three shortcomings, with robust and intuitive algorithms trainable on small data sets. To the best of our knowledge, reinforcement learning has not been directly applied to computer vision tasks for radiological images. In this proof-of-principle work, we train a deep reinforcement learning network to predict brain tumor location. Materials and Methods: Using the BraTS brain tumor imaging database, we trained a deep Q network on 70 post-contrast T1-weighted 2D image slices. We did so in concert with image exploration, with rewards and punishments designed to localize lesions. To compare with supervised deep learning, we trained a keypoint detection convolutional neural network on the same 70 images. We applied both approaches to a separate 30-image testing set. Results: Reinforcement learning predictions consistently improved during training, whereas those of supervised deep learning quickly diverged. Reinforcement learning predicted testing set lesion locations with 85% accuracy, compared to roughly 7% accuracy for the supervised deep network. Conclusion: Reinforcement learning predicted lesions with high accuracy, which is unprecedented for such a small training set. We believe that reinforcement learning can propel radiology AI well past the inherent limitations of supervised deep learning, toward more clinician-driven research and, ultimately, true clinical applicability.
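A reward scheme of the kind described, paying the agent for approaching the lesion and punishing retreat, might look like the following; the numeric values and distance tolerance are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def localization_reward(pred_xy, lesion_xy, prev_dist, tol=5.0):
    """Reward movement of the predicted coordinate toward the true lesion
    location, with a bonus once within `tol` pixels."""
    dist = float(np.linalg.norm(np.subtract(pred_xy, lesion_xy)))
    if dist <= tol:
        return 10.0, dist                             # success bonus
    return (1.0 if dist < prev_dist else -1.0), dist  # reward approach, punish retreat
```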