TY - JOUR
T1 - Accuracy and Completeness of Bard and Chat-GPT 4 Responses for Questions Derived from the International Consensus Statement on Endoscopic Skull-Base Surgery 2019
AU - Abgin, Yavar
AU - Umemoto, Kayla
AU - Goulian, Andrew
AU - Vasquez, Missael
AU - Polster, Sean
AU - Wu, Arthur
AU - Roxbury, Christopher
AU - Soni, Pranay
AU - Ahmed, Omar G.
AU - Tang, Dennis M.
N1 - Publisher Copyright:
© 2024. Thieme. All rights reserved.
PY - 2024
Y1 - 2024
N2 - Artificial intelligence large language models (LLMs), such as Chat Generative Pre-Trained Transformer 4 (Chat-GPT) by OpenAI and Bard by Google, emerged in 2022 as tools for answering questions, providing information, and offering suggestions to the layperson. These LLMs affect how information is disseminated, and it is essential to compare their answers with those of experts in the corresponding field. The International Consensus Statement on Endoscopic Skull-Base Surgery 2019 (ICAR:SB) is a multidisciplinary international collaboration that critically evaluated and graded the current literature. Objectives: Evaluate the accuracy and completeness of Chat-GPT and Bard responses to questions derived from the ICAR:SB policy statements. Design: Thirty-four questions were created based on ICAR:SB policy statements and input into Chat-GPT and Bard. Two rhinologists and two neurosurgeons graded the accuracy and completeness of the LLM responses using a 5-point Likert scale. The Wilcoxon rank-sum and Kruskal-Wallis tests were used for analysis. Setting: Online. Participants: None. Outcomes: Compare the mean accuracy and completeness scores between (1) responses generated by Chat-GPT versus Bard and (2) rhinologists versus neurosurgeons. Results: Using the Wilcoxon rank-sum test, there were statistically significant differences in (1) the accuracy (p < 0.001) and completeness (p < 0.001) of Chat-GPT compared with Bard and (2) the accuracy (p < 0.001) and completeness (p < 0.001) ratings between rhinologists and neurosurgeons. Conclusion: Chat-GPT responses are overall more accurate and complete than Bard's, although both are highly accurate and complete. Overall, rhinologists assigned lower grades than neurosurgeons. Further research is needed to better understand the full potential of LLMs.
KW - artificial intelligence
KW - Chat-GPT 4
KW - endoscopic skull-base surgery
KW - large language models
UR - http://www.scopus.com/inward/record.url?scp=85208665241&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85208665241&partnerID=8YFLogxK
U2 - 10.1055/a-2436-4222
DO - 10.1055/a-2436-4222
M3 - Article
AN - SCOPUS:85208665241
SN - 2193-634X
JO - Journal of Neurological Surgery, Part B: Skull Base
JF - Journal of Neurological Surgery, Part B: Skull Base
ER -