Fall Research Expo 2020

Automated Segmentation of Postoperative Epilepsy Imaging

This study aims to demonstrate a deep learning method for segmenting surgically resected tissue in postoperative MRI and its potential applications to epilepsy patient care.

Clinical 3T T1-weighted brain MR images (N = 55) were collected from temporal lobe epilepsy patients at the Hospital of the University of Pennsylvania (HUP). A U-Net convolutional neural network (Inception/ResNet backbone) was trained to segment surgical lesions in axial slices from these images, and its performance was comparable to models developed in public brain segmentation challenges on open-source datasets. A clinical application pipeline was developed to demonstrate potential uses of this segmentation model, such as assessing the hippocampal remnant after surgery.
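One downstream use mentioned above, assessing the hippocampal remnant, amounts to comparing the predicted resection mask against a hippocampus segmentation on the same voxel grid. The sketch below is illustrative only (it is not the study's pipeline); the function name, the mask inputs, and the assumption of co-registered binary masks are all hypothetical.

```python
import numpy as np

def remnant_fraction(hippocampus, resection):
    """Fraction of hippocampal voxels left intact after resection.

    hippocampus : binary mask of the preoperative hippocampus
    resection   : binary mask of the predicted surgical lesion
    Both masks are assumed to be co-registered on the same voxel grid.
    """
    hippocampus = np.asarray(hippocampus).astype(bool)
    resection = np.asarray(resection).astype(bool)
    # Remnant = hippocampal voxels NOT covered by the resection mask
    remnant = np.logical_and(hippocampus, ~resection)
    return remnant.sum() / hippocampus.sum()
```

Multiplying the remnant voxel count by the scan's voxel volume would give an absolute remnant volume in mm³.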

Algorithm performance was measured using the Dice similarity coefficient (DSC), a standard metric for image segmentation models that measures overlap between the ground-truth and predicted lesions. The average DSC per slice on the test set was 0.83, comparable to the mean DSC of the best models developed for public segmentation challenges (BraTS 2013 (N = 65): 0.71-0.87, BraTS 2018 (N = 542): 0.80-0.88). The average DSC per scan on the test set was 0.78.
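The DSC used above has a simple closed form: twice the intersection of the two masks divided by the sum of their sizes. A minimal sketch for binary masks (not the study's evaluation code; the empty-mask convention of returning 1.0 is an assumption):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates no overlap, so the per-slice average of 0.83 reflects substantial agreement with the manually traced lesions.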

In this study, a deep learning model was developed to segment surgical lesions with comparable overall performance to models trained for similar problems. The model’s performance demonstrates its potential to speed up manual review of postoperative MRI and be utilized in a clinical setting.

PRESENTED BY
Other
Engineering & Applied Sciences 2022
Advised By
Brian Litt
M.D.
Thomas Campbell Arnold
Join Ramya for a virtual discussion

Comments

This is a really cool project, and the results from your tests look really promising! Was the data randomly sorted into training, testing, and validation categories? I'm also curious to know if overfitting is more common for models trained on smaller datasets, and it would also be cool to see/hear about examples of data augmentation (which I assume is basically generating synthetic data from the existing ones, so the model has a larger dataset to train with)? Great work!