This is a preprint.
Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow
- PMID: 36865204
- PMCID: PMC9980239
- DOI: 10.1101/2023.02.21.23285886
Update in:
- Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study. J Med Internet Res. 2023 Aug 22;25:e48659. doi: 10.2196/48659. PMID: 37606976.
Abstract
Importance: Large language model (LLM) artificial intelligence (AI) chatbots direct the power of large training datasets towards successive, related tasks, as opposed to single-ask tasks, for which AI already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as virtual physicians, has not yet been evaluated.
Objective: To evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.
Design: We entered all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management, stratified by patient age, gender, and case acuity.
Setting: ChatGPT, a publicly available LLM.
Participants: Clinical vignettes featured hypothetical patients with a variety of age and gender identities, and a range of Emergency Severity Indices (ESIs) based on initial clinical presentation.
Exposures: MSD Clinical Manual vignettes.
Main outcomes and measures: We measured the proportion of correct responses to the questions posed within the clinical vignettes tested.
Results: ChatGPT achieved 71.7% (95% CI, 69.3% to 74.1%) accuracy overall across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI, 67.8% to 86.1%), and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI, 54.2% to 66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%, p<0.001) and clinical management (β=-7.4%, p=0.02) type questions.
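The abstract does not state how the 95% confidence intervals were computed; a common choice for a proportion of correct responses is the normal-approximation (Wald) interval. A minimal sketch, using hypothetical counts chosen only to illustrate the calculation:

```python
import math

def wald_ci(correct, total, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion of correct answers."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)  # half-width of the interval
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts for illustration (the study reports 71.7% overall accuracy;
# the true question count is not given in the abstract)
p, lo, hi = wald_ci(717, 1000)
print(f"{p:.1%} (95% CI, {lo:.1%} to {hi:.1%})")
```

Exact methods such as the Wilson or Clopper-Pearson interval are often preferred when the per-category sample sizes are small, as with the subgroup estimates reported above.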
Conclusions and relevance: ChatGPT achieves impressive accuracy in clinical decision making, with particular strengths emerging as it has more clinical information at its disposal.