A vision transformer for decoding surgeon activity from surgical videos

Dani Kiyasseh, Runzhuo Ma, Taseen F. Haque, Brian J. Miles, Christian Wagner, Daniel A. Donoho, Animashree Anandkumar, Andrew J. Hung

Research output: Contribution to journal › Article › peer-review

The intraoperative activity of a surgeon has a substantial impact on postoperative outcomes. However, for most surgical procedures, the details of intraoperative surgical actions, which can vary widely, are not well understood. Here we report a machine learning system leveraging a vision transformer and supervised contrastive learning to decode elements of intraoperative surgical activity from videos commonly collected during robotic surgeries. The system accurately identified surgical steps, actions performed by the surgeon, the quality of these actions and the relative contribution of individual video frames to the decoding of the actions. Through extensive testing on data from three hospitals located on two continents, we show that the system generalizes across videos, surgeons, hospitals and surgical procedures, and that it can provide information on surgical gestures and skills from unannotated videos. Decoding intraoperative activity via accurate machine learning systems could be used to provide surgeons with feedback on their operating skills, and may allow for the identification of optimal surgical behaviour and for the study of relationships between intraoperative factors and postoperative outcomes.
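The abstract names supervised contrastive learning as a training component but gives no implementation details. As a general illustration of that technique (not the authors' implementation), the sketch below computes the standard supervised contrastive (SupCon) loss in pure Python: each embedding is pulled toward all other embeddings sharing its label (e.g. the same surgical step) and pushed away from the rest. All names and the toy data are hypothetical.

```python
import math


def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embedding vectors.

    For each anchor i, positives are all other samples with the same label;
    the loss is the mean over anchors of
        -1/|P(i)| * sum_{p in P(i)} log( exp(z_i.z_p / t) / sum_{a != i} exp(z_i.z_a / t) )
    computed on L2-normalized embeddings.
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    z = [normalize(v) for v in embeddings]
    losses = []
    for i in range(len(z)):
        positives = [p for p in range(len(z)) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positives contribute nothing
        denom = sum(math.exp(dot(z[i], z[a]) / temperature)
                    for a in range(len(z)) if a != i)
        loss_i = -sum(math.log(math.exp(dot(z[i], z[p]) / temperature) / denom)
                      for p in positives) / len(positives)
        losses.append(loss_i)
    return sum(losses) / len(losses)


# Toy batch: two well-separated clusters. When labels match the clusters,
# the loss is small; when labels cut across them, it is large.
z = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
aligned = supcon_loss(z, labels=[0, 0, 1, 1])
misaligned = supcon_loss(z, labels=[0, 1, 0, 1])
```

In a video-decoding setting such as the one described here, the embeddings would come from a vision transformer applied to frame sequences, and the labels from step or gesture annotations; this loss shapes the embedding space before or alongside the classification heads.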

Original language: English (US)
Journal: Nature Biomedical Engineering
State: Accepted/In press - 2023

ASJC Scopus subject areas

  • Biotechnology
  • Bioengineering
  • Medicine (miscellaneous)
  • Biomedical Engineering
  • Computer Science Applications


