TY - JOUR
T1 - Automated Contouring of Contrast and Noncontrast Computed Tomography Liver Images With Fully Convolutional Networks
AU - Anderson, Brian M.
AU - Lin, Ethan Y.
AU - Cardenas, Carlos E.
AU - Gress, Dustin A.
AU - Erwin, William D.
AU - Odisio, Bruno C.
AU - Koay, Eugene J.
AU - Brock, Kristy K.
N1 - Funding Information:
Disclosures: During the manuscript work, Mr Dustin Gress' employment changed from UT MD Anderson Cancer Center to the American College of Radiology (ACR). The ACR has a Data Science Institute (DSI, acrdsi.org), but none of Mr Gress' interactions with ACR DSI as part of his job duties were related in any way to the work of this manuscript. Furthermore, there was no influence from ACR or DSI. Mr William Erwin reports grants from Oncosil Medical Ltd., IPSEN Pharmaceuticals S.A.S., Advanced Accelerator Applications International SA, and Y-mAbs Therapeutics A/S outside of the submitted work. Dr Bruno Odisio received grants from Siemens Healthineers and other incentives from Koo Foundation outside of the submitted work. Dr Eugene J. Koay reports grants from the National Cancer Institute, Stand Up to Cancer, Project Purple, Pancreatic Cancer Action Network, and Philips Health Care during the conduct of the study, as well as other incentives from Taylor and Francis, LLC, outside of the submitted work. Dr Kristy K. Brock reports research funding and a licensing agreement with RaySearch Laboratories. None of the other authors have any conflicts of interest to report.
Funding Information:
Sources of support: Brian Anderson is supported as a fellow through funding from the Society of Interventional Radiology Allied Scientist Grant. Research reported in this publication was supported in part by the National Cancer Institute of the National Institutes of Health under award numbers 1R01CA221971 and R01CA235564. The authors would like to acknowledge funding and support from the Helen Black Image Guided Fund and the Image Guided Cancer Therapy Research Program at The University of Texas MD Anderson Cancer Center. Dr Eugene Koay was supported by institutional funds from the MD Anderson Cancer Moonshots program, the NIH (U54CA143837 and U01CA196403), and the Andrew Sabin Family Fellowship. The authors would also like to recognize the Medical Image Computing and Computer Assisted Intervention (MICCAI) society and the Texas Advanced Computing Center (TACC, http://www.tacc.utexas.edu) at The University of Texas at Austin for providing computing resources that contributed to the research results reported in this paper.
Publisher Copyright:
© 2020 The Author(s)
PY - 2021/1/1
Y1 - 2021/1/1
N2 - Purpose: The deformable nature of the liver can make focal treatment challenging and is not adequately addressed with simple rigid registration techniques. More advanced registration techniques can take deformations into account (eg, biomechanical modeling) but require segmentations of the whole liver for each scan, which is a time-intensive process. We hypothesize that fully convolutional networks can be used to rapidly and accurately autosegment the liver, removing the temporal bottleneck for biomechanical modeling. Methods and Materials: Manual liver segmentations on computed tomography scans from 183 patients treated at our institution and 30 scans from the Medical Image Computing & Computer Assisted Intervention challenges were collected for this study. Three architectures were investigated for rapid automated segmentation of the liver (VGG-16, DeepLabv3+, and a 3-dimensional UNet). Fifty-six cases were set aside as a final test set for quantitative model evaluation. Accuracy of the autosegmentations was assessed using Dice similarity coefficient and mean surface distance. Qualitative evaluation was also performed by 3 radiation oncologists on 50 independent cases with previously clinically treated liver contours. Results: The mean (minimum-maximum) mean surface distance for the test groups with the final model, DeepLabv3+, was as follows: μContrast (N = 17): 0.99 mm (0.47-2.2); μNon_Contrast (N = 19): 1.12 mm (0.41-2.87); and μMiccai (N = 30): 1.48 mm (0.82-3.96). The qualitative evaluation showed that 30 of 50 autosegmentations (60%) were preferred to manual contours (majority voting) in a blinded comparison, and 48 of 50 autosegmentations (96%) were deemed clinically acceptable by at least 1 reviewing physician. Conclusions: The autosegmentations were preferred over manually defined contours in the majority of cases. The ability to rapidly segment the liver with the high accuracy achieved in this investigation has the potential to enable the efficient integration of biomechanical model-based registration into a clinical workflow.
UR - http://www.scopus.com/inward/record.url?scp=85086564573&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85086564573&partnerID=8YFLogxK
U2 - 10.1016/j.adro.2020.04.023
DO - 10.1016/j.adro.2020.04.023
M3 - Article
AN - SCOPUS:85086564573
VL - 6
JO - Advances in Radiation Oncology
JF - Advances in Radiation Oncology
SN - 2452-1094
IS - 1
M1 - 100464
ER -