Operationalizing and Implementing Pretrained, Large Artificial Intelligence Linguistic Models in the US Health Care System: Outlook of Generative Pretrained Transformer 3 (GPT-3) as a Service Model
- PMID: 35142635
- PMCID: PMC8874824
- DOI: 10.2196/32875
Abstract
Generative pretrained transformer models have recently gained popularity owing to their enhanced capabilities and performance. In contrast to many existing artificial intelligence models, generative pretrained transformer models can perform well with very limited training data. Generative Pretrained Transformer 3 (GPT-3) is one of the latest releases in this line of models, demonstrating human-like logical and intellectual responses to prompts. Examples include writing essays, answering complex questions, matching pronouns to their nouns, and conducting sentiment analyses. However, questions remain regarding its implementation in health care, specifically its operationalization and use in clinical practice and research. In this viewpoint paper, we briefly introduce GPT-3 and its capabilities, and we outline considerations for its implementation and operationalization in clinical practice through a use case. The implementation considerations include (1) processing needs and information systems infrastructure, (2) operating costs, (3) model biases, and (4) evaluation metrics. In addition, we outline three major operational factors that drive the adoption of GPT-3 in the US health care system: (1) ensuring Health Insurance Portability and Accountability Act compliance, (2) building trust with health care providers, and (3) establishing broader access to GPT-3 tools. This viewpoint can inform health care practitioners, developers, clinicians, and decision makers in understanding the use of powerful artificial intelligence tools integrated into hospital systems and health care.
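The "very limited training data" behavior described above refers to GPT-3's few-shot (in-context) learning: a task such as sentiment analysis is specified with a handful of labeled examples embedded directly in the prompt rather than by fine-tuning on a large training set. As a minimal sketch (the helper function, example texts, and labels below are illustrative assumptions, not drawn from the article):

```python
# Hypothetical sketch of few-shot prompt construction for sentiment analysis.
# A completion-style model such as GPT-3 is expected to continue the final line
# with a sentiment label, relying only on the in-prompt examples.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot sentiment-analysis prompt from (text, label) pairs."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")  # query left unlabeled
    return "\n\n".join(blocks)

examples = [
    ("The staff were attentive and the discharge process was smooth.", "Positive"),
    ("I waited three hours and no one explained the delay.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "The nurse answered all of my questions.")
# `prompt` would then be sent to a text-completion endpoint; no model
# retraining or fine-tuning is involved.
```

The design point is that adapting the model to a new clinical task costs only prompt text, which is what makes the per-request operating-cost and infrastructure considerations discussed in the paper central.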
Keywords: artificial intelligence; chatbot; clinical informatics; generative pretrained transformer; natural language processing.
©Emre Sezgin, Joseph Sirrianni, Simon L Linwood. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 10.02.2022.
Conflict of interest statement
Conflicts of Interest: None declared.