Fall Research Expo 2023

Pediatric Automatic Defacing to Protect Patient Privacy

Through this project, our lab set out to develop a pediatric-specific defacing technique using a convolutional neural network. Defacing pediatric brain scans is made necessary by modern AI technology that can identify subjects from a 3D rendering of their MRI scans. Given the ethical concerns this technology poses, defacing was developed as a technique to remove identifiable features from MRIs, such as a subject's eyes, mouth, nose, and ears, while preserving the subject's brain anatomy.

While defacing methods already exist, they have all been developed and validated on adult subjects. One of the most popular, MiDeface, uses a mathematical algorithm to generate a facemask for adult T1-weighted images. Since pediatric patients are a particularly vulnerable group, our lab was specifically concerned with how well an algorithm like MiDeface would hold up for younger patients, who undergo rapid facial development in a short amount of time. MiDeface was therefore used to generate initial facemasks for four scan types (T1, T1ce, T2, and FLAIR) across 186 pediatric subjects. After running this algorithm in Flywheel, a medical imaging platform, 386 scans still required manual edits in ITK-SNAP, an image segmentation software, to correct their facemasks. Common corrections included restoring brain voxels (3D pixels) that had been masked out, particularly over the right prefrontal cortex, and realigning the facemask with the subject's face.
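As a rough sketch of this first step, the snippet below shows how one might invoke FreeSurfer's MiDeface from Python to produce an initial facemask for a single scan. The flags and file names are illustrative assumptions; in our workflow this step ran through Flywheel rather than a local script.

```python
import subprocess
from pathlib import Path


def run_mideface(in_scan: Path, out_dir: Path) -> Path:
    """Generate an initial facemask for one scan with FreeSurfer's mideface.

    Assumes a FreeSurfer install that ships mideface is on PATH.
    Paths and naming are illustrative, not our production pipeline.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    base = in_scan.name.replace(".nii.gz", "").replace(".nii", "")
    defaced = out_dir / f"{base}_defaced.nii.gz"
    facemask = out_dir / f"{base}_facemask.nii.gz"
    subprocess.run(
        [
            "mideface",
            "--i", str(in_scan),          # input volume (T1, T1ce, T2, or FLAIR)
            "--o", str(defaced),          # defaced output volume
            "--facemask", str(facemask),  # binary mask covering the face region
        ],
        check=True,
    )
    return facemask


# Example (hypothetical file names): generate the initial mask for one subject's T1
# before manual quality control in ITK-SNAP.
# mask = run_mideface(Path("sub-001_T1w.nii.gz"), Path("facemasks/sub-001"))
```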

Once edited, all accurate facemasks were compiled and split into training and testing sets based on age, tumor type, and scan type. To build the model, we used nnU-Net, a deep learning framework designed for biomedical image segmentation. The ground-truth training data was input to nnU-Net, which automatically configured a segmentation pipeline for defacing pediatric images. We then programmatically compared the facemasks generated by the model against the held-out test data using the Dice similarity coefficient, which had an average value of 0.779 and a median value of 0.801.
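The Dice comparison itself can be expressed in a few lines of Python. The sketch below assumes the model's facemask and the manually corrected facemask are binary volumes on the same voxel grid; the file names are hypothetical.

```python
import nibabel as nib
import numpy as np


def dice_score(pred_path: str, truth_path: str) -> float:
    """Dice similarity coefficient between two binary facemasks: 2|A∩B| / (|A| + |B|)."""
    pred = nib.load(pred_path).get_fdata() > 0    # model-generated facemask
    truth = nib.load(truth_path).get_fdata() > 0  # manually corrected ground truth
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())


# Example (hypothetical file names); scores computed this way across the test set
# were summarized by their mean and median.
# score = dice_score("sub-010_T2_facemask_pred.nii.gz",
#                    "sub-010_T2_facemask_manual.nii.gz")
```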

Overall, the defacing model appears to generate accurate facemasks for pediatric subjects, as reflected in the median Dice score. Additionally, the low outlier Dice scores were traced to manual edits made to the facemasks rather than to inaccuracy in the model. These results suggest that the machine learning model generalizes across ages and MRI scan types better than the mathematical algorithms currently available. Further work on this project includes packaging the model as software that is publicly available to researchers and testing its effectiveness at protecting patient privacy against 3D rendering software. Through this project we believe we have improved patient privacy within imaging research, and we hope that further work with AI will advance biomedical research as a whole.

PRESENTED BY
PURM - Penn Undergraduate Research Mentoring Program
Engineering & Applied Sciences 2026
CO-PRESENTERS
Evan Grove - 2026
Advised By
Ali Nabavizadeh
Assistant Professor of Radiology at the Hospital of the University of Pennsylvania
Ariana Familiar
Imaging Data Scientist & Technical Lead, Center for Data-Driven Discovery in Biomedicine