TY - JOUR
T1 - Human visual explanations mitigate bias in AI-based assessment of surgeon skills
AU - Kiyasseh, Dani
AU - Laca, Jasper
AU - Haque, Taseen F.
AU - Otiato, Maxwell
AU - Miles, Brian J.
AU - Wagner, Christian
AU - Donoho, Daniel A.
AU - Trinh, Quoc Dien
AU - Anandkumar, Animashree
AU - Hung, Andrew J.
N1 - Publisher Copyright:
© 2023, The Author(s).
PY - 2023/3/30
Y1 - 2023/3/30
N2 - Artificial intelligence (AI) systems can now reliably assess surgeon skills through videos of intraoperative surgical activity. With such systems informing future high-stakes decisions such as whether to credential surgeons and grant them the privilege to operate on patients, it is critical that they treat all surgeons fairly. However, it remains an open question whether surgical AI systems exhibit bias against surgeon sub-cohorts, and, if so, whether such bias can be mitigated. Here, we examine and mitigate the bias exhibited by a family of surgical AI systems—SAIS—deployed on videos of robotic surgeries from three geographically-diverse hospitals (USA and EU). We show that SAIS exhibits an underskilling bias, erroneously downgrading surgical performance, and an overskilling bias, erroneously upgrading surgical performance, at different rates across surgeon sub-cohorts. To mitigate such bias, we leverage a strategy—TWIX—which teaches an AI system to provide a visual explanation for its skill assessment that otherwise would have been provided by human experts. We show that whereas baseline strategies inconsistently mitigate algorithmic bias, TWIX can effectively mitigate the underskilling and overskilling bias while simultaneously improving the performance of these AI systems across hospitals. We discovered that these findings carry over to the training environment where we assess medical students’ skills today. Our study is a critical prerequisite to the eventual implementation of AI-augmented global surgeon credentialing programs, ensuring that all surgeons are treated fairly.
UR - http://www.scopus.com/inward/record.url?scp=85151396353&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85151396353&partnerID=8YFLogxK
U2 - 10.1038/s41746-023-00766-2
DO - 10.1038/s41746-023-00766-2
M3 - Article
C2 - 36997642
AN - SCOPUS:85151396353
SN - 2398-6352
VL - 6
JO - npj Digital Medicine
JF - npj Digital Medicine
IS - 1
M1 - 54
ER -