Assessment of ChatGPT-4 in Family Medicine Board Examinations Using Advanced AI Learning and Analytical Methods: Observational Study

Files
mededu-2024-1-e56128.pdf(378.79 KB)
Published Version
Date
2024
Authors
Goodings, Anthony James
Kajitani, Sten
Chhor, Allison
Albakri, Ahmad
Pastrak, Mila
Kodancha, Megha
Ives, Rowan
Lee, Yoo Bin
Kajitani, Kari
Journal Title
JMIR Medical Education
Publisher
JMIR Publications
Abstract
Background: This research explores the capabilities of ChatGPT-4 in passing the American Board of Family Medicine (ABFM) Certification Examination. Addressing a gap in the existing literature, where earlier artificial intelligence (AI) models showed limitations on medical board examinations, this study evaluates the enhanced features and potential of ChatGPT-4, especially in document analysis and information synthesis.

Objective: The primary goal is to assess whether ChatGPT-4, when provided with extensive preparation resources and sophisticated data analysis capabilities, can achieve a score at or above the passing threshold for the Family Medicine Board Examinations.

Methods: ChatGPT-4 was embedded in a specialized subenvironment, “AI Family Medicine Board Exam Taker,” designed to closely mimic the conditions of the ABFM Certification Examination. This subenvironment enabled the AI to access and analyze a range of relevant study materials, including a primary medical textbook and supplementary web-based resources. The AI was presented with a series of ABFM-type examination questions reflecting the breadth and complexity typical of the examination. Emphasis was placed on assessing the AI’s ability to interpret and respond to these questions accurately, leveraging its advanced data processing and analysis capabilities within this controlled subenvironment.

Results: ChatGPT-4’s performance was quantitatively assessed on 300 practice ABFM examination questions. The AI achieved a correct response rate of 88.67% (95% CI 85.08%-92.25%) for the Custom Robot version and 87.33% (95% CI 83.57%-91.10%) for the Regular version. Statistical analysis, including the McNemar test (P=.45), indicated no significant difference in accuracy between the 2 versions. In addition, the chi-square test for error-type distribution (P=.32) revealed no significant variation in the pattern of errors across versions. These results highlight ChatGPT-4’s capacity for high-level, consistent performance on complex medical examination questions under controlled conditions.

Conclusions: The study demonstrates that ChatGPT-4, particularly when equipped with specialized preparation and operating in a tailored subenvironment, shows promising potential in handling the intricacies of medical board examinations. While its performance is comparable to the expected standard for passing the ABFM Certification Examination, further enhancements in AI technology and tailored training methods could push these capabilities further. This exploration opens avenues for integrating AI tools such as ChatGPT-4 into medical education and assessment, emphasizing the importance of continuous advancement and specialized training in medical applications of AI.
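The statistics reported in the abstract can be illustrated with a minimal Python sketch. The paired per-question responses are not published with this record, so the off-diagonal cells of the 2x2 agreement table and the error-type categories below are hypothetical placeholders, constrained only by the reported accuracies (266/300 and 262/300); scipy and statsmodels are assumed to be available.

```python
# Minimal sketch of the abstract's statistics; counts marked "hypothetical"
# are placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportion_confint

N = 300
correct_custom = 266   # 88.67% of 300, as reported
correct_regular = 262  # 87.33% of 300, as reported

# Normal-approximation 95% CIs, which reproduce the symmetric intervals
# quoted in the abstract (e.g., 88.67%, 95% CI 85.08%-92.25%).
for label, k in [("Custom Robot", correct_custom), ("Regular", correct_regular)]:
    lo, hi = proportion_confint(k, N, alpha=0.05, method="normal")
    print(f"{label}: {k / N:.2%} (95% CI {lo:.2%}-{hi:.2%})")

# McNemar test on the paired 2x2 table; the off-diagonal cells are
# hypothetical, only the margins follow from the reported accuracies.
pairs = np.array([[250, 16],   # both correct | only Custom Robot correct
                  [12, 22]])   # only Regular correct | both wrong
print(mcnemar(pairs, exact=False, correction=True))

# Chi-square test comparing error-type distributions across versions;
# the three error categories and their counts are hypothetical.
errors = np.array([[14, 12, 8],    # Custom Robot errors by type
                   [17, 13, 8]])   # Regular version errors by type
chi2, p, dof, _ = chi2_contingency(errors)
print(f"chi2={chi2:.2f}, P={p:.2f}")
```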
Keywords
ChatGPT-4, Family Medicine Board Examination, artificial intelligence in medical education, AI performance assessment, prompt engineering, ChatGPT, artificial intelligence, AI, medical education, assessment, observational, analytical method, data analysis, examination
Citation
Goodings, A.J., Kajitani, S., Chhor, A., Albakri, A., Pastrak, M., Kodancha, M., Ives, R., Lee, Y.B. and Kajitani, K. (2024) ‘Assessment of ChatGPT-4 in family medicine board examinations using advanced AI learning and analytical methods: observational study’, JMIR Medical Education, 10, e56128. https://doi.org/10.2196/56128
Link to publisher’s version: https://doi.org/10.2196/56128