Generative AI’s performance on emergency medicine boards questions: an observational study
| dc.contributor.author | Kajitani, Sten | en |
| dc.contributor.author | Pastrak, Mila | en |
| dc.contributor.author | Goodings, Anthony | en |
| dc.contributor.author | Nguyen, Audrey | en |
| dc.contributor.author | Drewek, Austin | en |
| dc.contributor.author | Lafree, Andrew | en |
| dc.contributor.author | Murphy, Adrian | en |
| dc.contributor.editor | Mehta, Shobha | en |
| dc.contributor.editor | Cronin, Pádraig | en |
| dc.date.accessioned | 2025-10-28T13:55:06Z | |
| dc.date.available | 2025-10-28T13:55:06Z | |
| dc.date.issued | 2025 | en |
| dc.description.abstract | Background: The evolving field of medicine has introduced ChatGPT as a potential assistive platform, though its use in medical board exam preparation remains debated [1-2]. This study aimed to evaluate the performance of a custom-modified version of ChatGPT-4, tailored with emergency medicine board exam preparatory materials (an Anki deck), compared with its default version and the previous iteration (ChatGPT-3.5) [3]. The goal was to assess the accuracy of ChatGPT-4 in answering board-style questions and its suitability as a tool for medical education. Methods: A comparative analysis was conducted using a random selection of 598 questions from the Rosh In-Training Exam Question Bank [4]. The subjects of the study were three versions of ChatGPT: Default ChatGPT-4, Custom ChatGPT-4, and ChatGPT-3.5. Accuracy, response length, medical discipline subgroups, and underlying causes of error were analyzed. Results: Custom ChatGPT-4 did not significantly improve accuracy over the Default version (p>0.05), but both significantly outperformed ChatGPT-3.5 (p< | en |
| dc.description.status | Not peer reviewed | en |
| dc.description.version | Published Version | en |
| dc.format.mimetype | application/pdf | en |
| dc.identifier.citation | Kajitani, S., Pastrak, M., Goodings, A., Nguyen, A., Drewek, A., Lafree, A. and Murphy, A. (2025) 'Generative AI’s performance on emergency medicine boards questions: an observational study', UCC Student Medical Journal, 5, p. 113. https://doi.org/10.33178/SMJ.2025.1.39 | en |
| dc.identifier.doi | 10.33178/SMJ.2025.1.39 | en |
| dc.identifier.endpage | 113 | en |
| dc.identifier.issn | 2737-7237 | |
| dc.identifier.journalabbrev | UCC SMJ | |
| dc.identifier.journaltitle | UCC Student Medical Journal | en |
| dc.identifier.startpage | 113 | en |
| dc.identifier.uri | https://hdl.handle.net/10468/18116 | |
| dc.identifier.volume | 5 | |
| dc.language.iso | en | en |
| dc.publisher | UCC Medical Research and Technology Society | en |
| dc.rights | © 2025, the Author(s). This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. | en |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc/4.0 | |
| dc.source | Batch upload | en |
| dc.subject | Generative AI | en |
| dc.subject | Emergency medicine boards questions | en |
| dc.title | Generative AI’s performance on emergency medicine boards questions: an observational study | en |
| dc.type | Conference item | en |
Files
Original bundle
- Name: Generative+AI's+performance+on+emergency+medicine+boards+questions_+observational+study.pdf
- Size: 124.13 KB
- Format: Adobe Portable Document Format
- Description: Published Version