Performance of ChatGPT in the Portuguese National Residency Access Examination
DOI:
https://doi.org/10.20344/amp.22506

Keywords:
Artificial Intelligence, Clinical Competence, Educational Measurement, Internship and Residency, Portugal

Abstract
ChatGPT, a large language model developed by OpenAI, has been tested on several medical board examinations. This study evaluates the performance of ChatGPT on the Portuguese National Residency Access Examination, the mandatory examination for entry into medical residency in Portugal, comparing versions 3.5 and 4o across the five examination editions from 2019 to 2023. A total of 750 multiple-choice questions were submitted to both versions, and the answers were scored against the official answer key. ChatGPT 4o significantly outperformed ChatGPT 3.5, with a median examination score of 127 versus 106 (p = 0.048). Notably, ChatGPT 4o scored within the top 1% in two examination editions and exceeded the median performance of human candidates in all editions; its scores would have been high enough to qualify for any specialty. In conclusion, ChatGPT 4o can be a valuable tool for medical education and decision-making, but human oversight remains essential to ensure safe and accurate clinical practice.
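For readers who want to reproduce this kind of evaluation, the pipeline the abstract describes can be sketched in a few lines of Python: each multiple-choice question is submitted to a model, the returned letter is scored against the official key, and per-edition scores are compared between versions. The sketch below is illustrative only; the model identifiers (gpt-3.5-turbo, gpt-4o), the prompt wording, the ask/score helpers, and the choice of a Mann-Whitney U test for the median comparison are assumptions, not the authors' published protocol.

```python
# Illustrative sketch of the evaluation pipeline described in the abstract.
# Assumptions (not from the paper): model IDs, prompt format, and the use of
# a Mann-Whitney U test for the median score comparison.
from openai import OpenAI
from scipy.stats import mannwhitneyu

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(model: str, stem: str, options: dict[str, str]) -> str:
    """Submit one multiple-choice question and return the chosen letter."""
    prompt = stem + "\n" + "\n".join(f"{k}) {v}" for k, v in options.items())
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with the single letter of the best option."},
            {"role": "user", "content": prompt},
        ],
    )
    return reply.choices[0].message.content.strip()[0].upper()


def score(model: str, exam: list[dict]) -> int:
    """Count the questions where the model's letter matches the official key."""
    return sum(ask(model, q["stem"], q["options"]) == q["key"] for q in exam)


# With per-edition scores for each model (five editions, 2019-2023), the
# median comparison could then be tested, e.g.:
# scores_35 = [score("gpt-3.5-turbo", exam) for exam in editions]
# scores_4o = [score("gpt-4o", exam) for exam in editions]
# stat, p = mannwhitneyu(scores_4o, scores_35, alternative="two-sided")
```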
License
Copyright (c) 2024 Acta Médica Portuguesa
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.