AI Case Study
Researchers from Stanford University develop a predictive model to identify knee injuries in MRI scans and avoid misdiagnosis
Stanford University researchers use convolutional neural networks to detect abnormalities in knee MRI scans. The model provides medical practitioners with predicted probabilities for general abnormalities and for specific injury types, or their absence. In testing, model assistance helped clinical experts avoid false-positive ACL tear diagnoses, preventing patients from being unnecessarily considered for surgery.
Healthcare Equipment And Supplies
The researchers "developed a deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams. 7 practicing board-certified general radiologists and 2 practicing orthopedic surgeons at Stanford University Medical Center (3–29 years in practice, average 12 years) read a validation set of 120 exams twice, once without model assistance and once with model assistance, separated by a washout period of at least 10 days. For the reads with model assistance, model predictions were provided as predicted probabilities of a positive diagnosis for each of the 3 labels (e.g., 0.98 ACL tear, 0.7 Meniscal tear, and 0.99 abnormal)."
"Magnetic resonance (MR) imaging of the knee is the standard of care imaging modality to evaluate knee disorders, and more musculoskeletal MR examinations are performed on the knee than on any other region of the body".
"Model assistance resulted in a mean increase of 0.048 (4.8%) in ACL specificity: for every 100 healthy patients, ~5 are saved from being unnecessarily considered for surgery.
Though it appeared that model assistance also significantly increased the clinical experts’ accuracy in detecting ACL tears and sensitivity in detecting meniscus tears, these findings were no longer significant after adjusting for multiple comparisons by controlling the False Discovery Rate."
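The quoted 4.8% figure translates directly into the "for every 100 healthy patients" framing. A quick sanity check (values from the study; the cohort size is illustrative):

```python
# Sanity check on the quoted specificity gain.
gain = 0.048     # mean increase in ACL specificity with model assistance
healthy = 100    # hypothetical cohort of healthy patients

extra_correct = gain * healthy  # additional healthy knees correctly ruled out
print(f"~{round(extra_correct)} of every {healthy} healthy patients avoid "
      "an unnecessary surgical referral")
```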
"The primary building block of our prediction system is MRNet, a convolutional neural network (CNN) mapping a 3-dimensional MRI series to a probability. The input to MRNet has dimensions s × 3 × 256 × 256, where s is the number of images in the MRI series (3 is the number of color channels). First, each 2-dimensional MRI image slice is passed through a feature extractor to obtain a s × 256 × 7 × 7 tensor containing features for each slice. A global average pooling layer is then applied to reduce these features to s × 256. We then applied max pooling across slices to obtain a 256-dimensional vector, which is passed to a fully connected layer to obtain a prediction probability.
Because MRNet generates a prediction for each of the sagittal T2, coronal T1, and axial PD series, we train a logistic regression to weight the predictions from the 3 series and generate a single output for each exam."
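The pooling and fusion steps described above can be sketched in plain Python. The sketch below assumes per-slice feature vectors have already been produced by the 2-D CNN feature extractor and global average pooling (both omitted here); the function names, weights, and toy feature values are illustrative, not taken from the authors' code, and the real model uses 256-dimensional features rather than 3.

```python
import math

def sigmoid(x):
    """Logistic function mapping a logit to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def max_pool_slices(slice_features):
    """Max-pool across the slice axis: an (s, d) feature list -> a d-vector,
    as MRNet does after reducing each slice's feature map to a vector."""
    return [max(col) for col in zip(*slice_features)]

def series_probability(slice_features, weights, bias):
    """Fully connected layer on the pooled vector -> per-series probability."""
    pooled = max_pool_slices(slice_features)
    logit = sum(w * f for w, f in zip(weights, pooled)) + bias
    return sigmoid(logit)

def combine_series(probs, lr_weights, lr_bias):
    """Logistic regression fusing the sagittal/coronal/axial per-series
    probabilities into a single per-exam prediction."""
    logit = sum(w * p for w, p in zip(lr_weights, probs)) + lr_bias
    return sigmoid(logit)

# Toy example: 4 slices with 3 features each (real model: s slices x 256).
features = [[0.1, 0.5, 0.2],
            [0.3, 0.2, 0.9],
            [0.0, 0.7, 0.4],
            [0.6, 0.1, 0.3]]
p_sag = series_probability(features, weights=[1.0, -0.5, 2.0], bias=-1.0)
p_exam = combine_series([p_sag, 0.7, 0.99],
                        lr_weights=[1.5, 1.0, 1.0], lr_bias=-1.8)
print(f"sagittal-series probability: {p_sag:.3f}, "
      f"exam probability: {p_exam:.3f}")
```

Max pooling across slices lets the exam-level prediction key on the single most abnormal-looking slice, which suits a task where an injury may be visible in only a few slices of the series.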
The researchers used "a public dataset of 917 exams with sagittal T1-weighted series and labels for ACL injury from Clinical Hospital Centre Rijeka, Croatia. On the external validation set of 183 exams, the MRNet trained on Stanford sagittal T2-weighted series achieved an AUC of 0.824 (95% CI 0.757, 0.892) in the detection of ACL injuries with no additional training; an MRNet trained on the rest of the external data achieved an AUC of 0.911 (95% CI 0.864, 0.958)."
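The AUC values quoted above measure how well the model ranks abnormal exams above normal ones: an AUC equals the probability that a randomly chosen positive exam receives a higher score than a randomly chosen negative one. A minimal computation of this statistic (illustrative only, not the authors' evaluation code):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs ranked correctly,
    with ties counting for half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two injured exams (label 1) and two normal exams (label 0).
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]  # model-predicted probabilities
print(auc(labels, scores))          # 3 of 4 pairs ranked correctly -> 0.75
```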