Assessment of ChatGPT’s performance on neurology written board examination questions
BMJ Neurology Open Pub Date : 2023-11-01 , DOI: 10.1136/bmjno-2023-000530
Tse Chian Chen 1 , Evan Multala 2 , Patrick Kearns 2 , Johnny Delashaw 3 , Aaron Dumont 3 , Demetrius Maraganore 1 , Arthur Wang 3
Background and objectives ChatGPT has shown promise in healthcare. To assess the utility of this novel tool in healthcare education, we evaluated ChatGPT's performance in answering neurology board exam questions.

Methods Neurology board-style examination questions were accessed from BoardVitals, a commercial neurology question bank. ChatGPT was provided a full question prompt and multiple answer choices. First attempts and additional attempts up to three tries were given to ChatGPT to select the correct answer. A total of 560 questions (14 blocks of 40 questions) were used, although any image-based questions were disregarded due to ChatGPT's inability to process visual input. The artificial intelligence (AI) answers were then compared with human user data provided by the question bank to gauge its performance.

Results Out of 509 eligible questions over 14 question blocks, ChatGPT correctly answered 335 questions (65.8%) on the first attempt/iteration and 383 (75.3%) over three attempts/iterations, scoring at approximately the 26th and 50th percentiles, respectively. The highest performing subjects were pain (100%), epilepsy & seizures (85%) and genetic (82%), while the lowest performing subjects were imaging/diagnostic studies (27%), critical care (41%) and cranial nerves (48%).

Discussion This study found that ChatGPT performed similarly to its human counterparts. The accuracy of the AI increased with multiple attempts and performance fell within the expected range of neurology resident learners. This study demonstrates ChatGPT's potential in processing specialised medical information. Future studies would better define the scope to which AI would be able to integrate into medical decision making.

Data are available upon reasonable request.
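As a minimal sketch (not the authors' analysis code), the headline accuracy figures can be recomputed from the counts reported in the abstract. Note that 383/509 evaluates to 75.2% here rather than the reported 75.3%, presumably a rounding difference in the source:

```python
# Counts taken from the abstract (560 questions minus image-based items).
eligible = 509       # questions ChatGPT could attempt
correct_first = 335  # correct on the first attempt
correct_three = 383  # correct within three attempts

first_attempt_pct = round(100 * correct_first / eligible, 1)
three_attempt_pct = round(100 * correct_three / eligible, 1)

print(first_attempt_pct)  # 65.8
print(three_attempt_pct)  # 75.2
```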

Updated: 2023-11-03