Dataset columns:
pmid: string (length 8)
title: string (length 3-289)
year: int64 (publication year; displayed range ~2.02k-2.03k)
journal: string (length 3-221)
doi: string (1 distinct value)
mesh: string (1 distinct value)
keywords: string (1 distinct value)
abstract: string (length 115-3.67k)
authors: string (length 3-798)
cluster: class label (5 classes)
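A minimal sketch of how a table with the columns above could be loaded and summarized, assuming the dataset has been exported to a local CSV file; the file name "pubmed_chatgpt_clusters.csv" and the use of pandas are assumptions, not part of the dataset's documentation:

```python
# Minimal sketch: load a table with the columns described above and
# reproduce the summary statistics shown in the schema.
# "pubmed_chatgpt_clusters.csv" is a hypothetical file name; substitute
# whatever export of this dataset you actually have.
import pandas as pd

df = pd.read_csv("pubmed_chatgpt_clusters.csv", dtype={"pmid": str})

# String-length ranges, as in the schema (e.g., title: 3-289 characters).
for col in ["pmid", "title", "journal", "abstract", "authors"]:
    lengths = df[col].fillna("").str.len()
    print(f"{col}: min={lengths.min()}, max={lengths.max()}")

# Publication-year range and the distribution of the 5-class cluster label.
print("year range:", df["year"].min(), "-", df["year"].max())
print(df["cluster"].value_counts())
```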
38737711
How AI drives innovation in cardiovascular medicine.
2024
Frontiers in cardiovascular medicine
Medicine is entering a new era in which artificial intelligence (AI) and deep learning have a measurable impact on patient care. This impact is especially evident in cardiovascular medicine. While the purpose of this short opinion paper is not to provide an in-depth review of the many applications of AI in cardiovascular medicine, we summarize some of the important advances that have taken place in this domain.
Cerrato PL; Halamka JD
10
37082496
Neuroblastoma Masquerading as a Septic Hip Infection in a Three-Year-Old.
2023
Cureus
Metastatic neuroblastoma to the bone and septic joint infection share a similar age incidence and clinical symptomatology. Here we discuss a three-year-old male who presented with anemia, persistent hip pain, and a refusal to bear weight. A thorough evaluation based on a broad differential diagnosis allowed for an expedient diagnosis of metastatic neuroblastoma. The timely diagnosis allowed for rapid enrolment in a Children's Oncology Group (COG) clinical trial for advanced neuroblastoma. The patient tolerated the therapy without adverse events and remains in remission.
Lynch JD; Tomboc PJ
0-1
37077800
From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing.
2023
Biology of sport
Natural language processing (NLP) has been studied in computing for decades. Recent technological advancements have led to the development of sophisticated artificial intelligence (AI) models, such as Chat Generative Pre-trained Transformer (ChatGPT). These models can perform a range of language tasks and generate human-like responses, which offers exciting prospects for academic efficiency. This manuscript aims at (i) exploring the potential benefits and threats of ChatGPT and other NLP technologies in academic writing and research publications; (ii) highlighting the ethical considerations involved in using these tools; and (iii) considering the impact they may have on the authenticity and credibility of academic work. This study involved a literature review of relevant scholarly articles published in peer-reviewed journals indexed in Scopus as quartile 1. The search used keywords such as "ChatGPT," "AI-generated text," "academic writing," and "natural language processing." The analysis was carried out using a quasi-qualitative approach, which involved reading and critically evaluating the sources and identifying relevant data to support the research questions. The study found that ChatGPT and other NLP technologies have the potential to enhance academic writing and research efficiency. However, their use also raises concerns about the impact on the authenticity and credibility of academic work. The study highlights the need for comprehensive discussions on the potential use, threats, and limitations of these tools, emphasizing the importance of ethical and academic principles, with human intelligence and critical thinking at the forefront of the research process. The study also recommends that academics exercise caution when using these tools and ensure transparency in their use, emphasizing the importance of human intelligence and critical thinking in academic work.
Dergaa I; Chamari K; Zmijewski P; Ben Saad H
10
39059557
Natural Language Processing in medicine and ophthalmology: A review for the 21st-century clinician.
2024
Asia-Pacific journal of ophthalmology (Philadelphia, Pa.)
Natural Language Processing (NLP) is a subfield of artificial intelligence that focuses on the interaction between computers and human language, enabling computers to understand, generate, and derive meaning from human language. NLP's potential applications in the medical field are extensive and vary from extracting data from Electronic Health Records (one of its most well-known and frequently exploited uses) to investigating relationships among genetics, biomarkers, drugs, and diseases for the proposal of new medications. NLP can be useful for clinical decision support, patient monitoring, or medical image analysis. Despite its vast potential, the real-world application of NLP is still limited due to various challenges and constraints, meaning that its evolution predominantly continues within the research domain. However, with the increasingly widespread use of NLP, particularly with the availability of large language models, such as ChatGPT, it is crucial for medical professionals to be aware of the status, uses, and limitations of these technologies.
Rojas-Carabali W; Agrawal R; Gutierrez-Sinisterra L; Baxter SL; Cifuentes-Gonzalez C; Wei YC; Abisheganaden J; Kannapiran P; Wong S; Lee B; de-la-Torre A; Agrawal R
10
36924907
The exciting potential for ChatGPT in obstetrics and gynecology.
2023
American journal of obstetrics and gynecology
Natural language processing, the branch of artificial intelligence concerned with the interaction between computers and human language, has advanced markedly in recent years with the introduction of sophisticated deep-learning models. Improved performance in natural language processing tasks, such as text and speech processing, has fueled impressive demonstrations of these models' capabilities. Perhaps no demonstration has been more impactful to date than the introduction of the publicly available online chatbot ChatGPT in November 2022 by OpenAI, which is based on a natural language processing model known as a Generative Pretrained Transformer. Through a series of questions posed by the authors about obstetrics and gynecology to ChatGPT as prompts, we evaluated the model's ability to handle clinical-related queries. Its answers demonstrated that in its current form, ChatGPT can be valuable for users who want preliminary information about virtually any topic in the field. Because its educational role is still being defined, we must recognize its limitations. Although answers were generally eloquent, informed, and largely free of mistakes or misinformation, we also observed evidence of its weaknesses. A significant drawback is that the data on which the model has been trained are apparently not readily updated. The specific model assessed here seems to not reliably (if at all) source data from after 2021. Users of ChatGPT who expect data to be more up to date need to be aware of this drawback. An inability to cite sources or to truly understand what the user is asking suggests that it has the capability to mislead. Responsible use of models like ChatGPT will be important for ensuring that they work to help but not harm users seeking information on obstetrics and gynecology.
Grunebaum A; Chervenak J; Pollet SL; Katz A; Chervenak FA
10
39634994
Evaluating the Performance of ChatGPT in the Prescribing Safety Assessment: Implications for Artificial Intelligence-Assisted Prescribing.
2024
Cureus
Objective With the rapid advancement of artificial intelligence (AI) technologies, models like Chat Generative Pre-Trained Transformer (ChatGPT) are increasingly being evaluated for their potential applications in healthcare. The Prescribing Safety Assessment (PSA) is a standardised test for junior physicians in the UK to evaluate prescribing competence. This study aims to assess ChatGPT's ability to pass the PSA and its performance across different exam sections. Methodology ChatGPT (version GPT-4) was tested on four official PSA practice papers, each containing 30 questions, in three independent trials per paper, with answers evaluated using official PSA mark schemes. Performance was measured by calculating overall percentage scores and comparing them to the pass marks provided for each practice paper. Subsection performance was also analysed to identify strengths and weaknesses. Results ChatGPT achieved mean scores of 257/300 (85.67%), 236/300 (78.67%), 199/300 (66.33%), and 233/300 (77.67%) across the four papers, consistently surpassing the pass marks where available. ChatGPT performed well in sections requiring factual recall, such as "Adverse Drug Reactions", scoring 63/72 (87.50%), and "Communicating Information", scoring 63/72 (88.89%). However, it struggled in "Data Interpretation", scoring 32/72 (44.44%), showing variability across trials and indicating limitations in handling more complex clinical reasoning tasks. Conclusion While ChatGPT demonstrated strong potential in passing the PSA and excelling in sections requiring factual knowledge, its limitations in data interpretation highlight the current gaps in AI's ability to fully replicate human clinical judgement. ChatGPT shows promise in supporting safe prescribing, particularly in areas prone to human error, such as drug interactions and communicating correct information. However, due to its variability in more complex reasoning tasks, ChatGPT is not yet ready to replace human prescribers and should instead serve as a supplemental tool in clinical practice.
Bull D; Okaygoun D
10
38593984
Identifying ChatGPT-Written Patient Education Materials Using Text Analysis and Readability.
2024
American journal of perinatology
OBJECTIVE: Artificial intelligence (AI)-based text generators such as Chat Generative Pre-Trained Transformer (ChatGPT) have come into the forefront of modern medicine. Given the similarity between AI-generated and human-composed text, tools need to be developed to quickly differentiate the two. Previous work has shown that simple grammatical analysis can reliably differentiate AI-generated text from human-written text. STUDY DESIGN: In this study, ChatGPT was used to generate 25 articles related to obstetric topics similar to those made by the American College of Obstetrics and Gynecology (ACOG). All articles were geared towards patient education. These AI-generated articles were then analyzed for their readability and grammar using validated scoring systems and compared to real articles from ACOG. RESULTS: Characteristics of the 25 AI-generated articles included fewer overall characters than original articles (mean 3,066 vs. 7,426; p < 0.0001), a greater average word length (mean 5.3 vs. 4.8; p < 0.0001), and a lower Flesch-Kincaid score (mean 46 vs. 59; p < 0.0001). With this knowledge, a new scoring system was developed to score articles based on their Flesch-Kincaid readability score, number of total characters, and average word length. This novel scoring system was tested on 17 new AI-generated articles related to obstetrics and 7 articles from ACOG, and was able to differentiate between AI-generated articles and human-written articles with a sensitivity of 94.1% and specificity of 100% (Area Under the Curve [AUC] 0.99). CONCLUSION: As ChatGPT is more widely integrated into medicine, it will be important for health care stakeholders to have tools to separate originally written documents from those generated by AI. While more robust analyses may be required to determine the authenticity of articles written by complex AI technology in the future, simple grammatical analysis can accurately characterize current AI-generated texts with a high degree of sensitivity and specificity. KEY POINTS: More tools are needed to identify AI-generated text in obstetrics, for both doctors and patients. Grammatical analysis is quick and easily done. Grammatical analysis is a feasible and accurate way to identify AI-generated text.
Monje S; Ulene S; Gimovsky AC
10
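The preceding entry (PMID 38593984) classifies AI-generated text from three surface features: a readability score, total characters, and average word length. The sketch below illustrates that idea only; the syllable heuristic, the choice of reading-ease formula, and the threshold values are illustrative assumptions, not the authors' validated scoring system:

```python
# Illustrative sketch of the three features the abstract describes
# (readability score, total characters, average word length); the
# thresholds below are made up for demonstration and are NOT the
# authors' validated cut-offs.
import re

def syllable_estimate(word: str) -> int:
    # Crude heuristic: count groups of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_estimate(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

def features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "reading_ease": flesch_reading_ease(text),
        "total_characters": len(text),
        "avg_word_length": sum(len(w) for w in words) / max(1, len(words)),
    }

def looks_ai_generated(text: str) -> bool:
    f = features(text)
    # Hypothetical rule in the spirit of the abstract: AI text tended to be
    # shorter, use longer words, and score lower on readability.
    return (f["total_characters"] < 5000
            and f["avg_word_length"] > 5.0
            and f["reading_ease"] < 50)
```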
38599321
Chatbots vs andrologists: Testing 25 clinical cases.
2024
The French journal of urology
OBJECTIVE: AI-derived language models are booming, and their place in medicine is undefined. The aim of our study is to compare responses to andrology clinical cases, between chatbots and andrologists, to assess the reliability of these technologies. MATERIAL AND METHOD: We analyzed the responses of 32 experts, 18 residents and three chatbots (ChatGPT v3.5, v4 and Bard) to 25 andrology clinical cases. Responses were assessed on a Likert scale ranging from 0 to 2 for each question (0-false response or no response; 1-partially correct response, 2- correct response), on the basis of the latest national or, in the absence of such, international recommendations. We compared the averages obtained for all cases by the different groups. RESULTS: Experts obtained a higher mean score (m=11/12.4 sigma=1.4) than ChatGPT v4 (m=10.7/12.4 sigma=2.2, p=0.6475), ChatGPT v3.5 (m=9.5/12.4 sigma=2.1, p=0.0062) and Bard (m=7.2/12.4 sigma=3.3, p<0.0001). Residents obtained a mean score (m=9.4/12.4 sigma=1.7) higher than Bard (m=7.2/12.4 sigma=3.3, p=0.0053) but lower than ChatGPT v3.5 (m=9.5/12.4 sigma=2.1, p=0.8393) and v4 (m=10.7/12.4 sigma=2.2, p=0.0183) and experts (m=11.0/12.4 sigma=1.4,p=0.0009). ChatGPT v4 performance (m=10.7 sigma=2.2) was better than ChatGPT v3.5 (m=9.5, sigma=2.1, p=0.0476) and Bard performance (m=7.2 sigma=3.3, p<0.0001). CONCLUSION: The use of chatbots in medicine could be relevant. More studies are needed to integrate them into clinical practice.
Perrot O; Schirmann A; Vidart A; Guillot-Tantay C; Izard V; Lebret T; Boillot B; Mesnard B; Lebacle C; Madec FX
10
39715109
Comparison of the experience and perception of artificial intelligence among practicing doctors and medical students.
2024
Wiadomosci lekarskie (Warsaw, Poland : 1960)
OBJECTIVE: To analyze and compare the experiences and perceptions of artificial intelligence (AI) among practicing doctors and medical students. PATIENTS AND METHODS: A survey was conducted among 30 doctors and 30 fifth-year master's students enrolled in the "Medicine" program. Participants were asked about their experiences with AI, their perceptions of AI's impact on their education and practice, and their views on the benefits and drawbacks of AI in the medical field. The data were analyzed to compare the responses between the two groups. RESULTS: Among the respondents, 8 doctors (26.67%) and 4 students (13.33%) had not used AI in their practice or studies. The analysis was conducted on the remaining 22 doctors and 26 students. The study found that students generally rated the effectiveness of AI higher than physicians did, particularly in areas such as enhancing work and educational experiences. Both groups used AI primarily for information retrieval, with students showing a slightly greater openness to expanding AI's role in education and practice. Despite recognizing the advantages of AI, both groups expressed concerns regarding its accuracy and reliability. CONCLUSION: The study indicates that while AI, particularly ChatGPT, is increasingly being adopted in medical education and practice, there is still a level of caution and skepticism among both students and professionals. Further research is needed to optimize the integration of AI in medical curricula and address the ethical implications of its use.
Drevitska OO; Butska LV; Drevytskyi OO; Ryzhak VO; Varina HB; Kovalova OV; Medvid IV
10
38374694
Neurological Diagnosis: Artificial Intelligence Compared With Diagnostic Generator.
2024
The neurologist
OBJECTIVE: Artificial intelligence has recently become available for widespread use in medicine, including the interpretation of digitized information, big data for tracking disease trends and patterns, and clinical diagnosis. Comparative studies and expert opinion support the validity of imaging and data analysis, yet similar validation is lacking in clinical diagnosis. Artificial intelligence programs are here compared with a diagnostic generator program in clinical neurology. METHODS: Using 4 nonrandomly selected case records from New England Journal of Medicine clinicopathologic conferences from 2017 to 2022, 2 artificial intelligence programs (ChatGPT-4 and GLASS AI) were compared with a neurological diagnostic generator program (NeurologicDx.com) for diagnostic capability and accuracy and source authentication. RESULTS: Compared with NeurologicDx.com, the 2 AI programs showed results varying with order of key term entry and with repeat querying. The diagnostic generator yielded more differential diagnostic entities, with correct diagnoses in 4 of 4 test cases versus 0 of 4 for ChatGPT-4 and 1 of 4 for GLASS AI, respectively, and with authentication of diagnostic entities compared with the AI programs. CONCLUSIONS: The diagnostic generator NeurologicDx yielded a more robust and reproducible differential diagnostic list with higher diagnostic accuracy and associated authentication compared with artificial intelligence programs.
Finelli PF
10
40060265
Emergency Medicine Assistants in the Field of Toxicology, Comparison of ChatGPT-3.5 and GEMINI Artificial Intelligence Systems.
2024
Acta medica Lituanica
OBJECTIVE: Artificial intelligence models human thinking and problem-solving abilities, allowing computers to make autonomous decisions. There is a lack of studies demonstrating the clinical utility of GPT and Gemini in the field of toxicology, which means their level of competence is not well understood. This study compares the responses given by GPT-3.5 and Gemini to those provided by emergency medicine residents. METHODS: This prospective study was focused on toxicology and utilized the widely recognized educational resource 'Tintinalli's Emergency Medicine: A Comprehensive Study Guide' for the field of Emergency Medicine. A set of twenty questions, each with five options, was devised to test knowledge of toxicological data as defined in the book. These questions were then posed to ChatGPT-3.5 (Generative Pre-trained Transformer 3.5) by OpenAI and Gemini by Google AI. The resulting answers were then meticulously analyzed. RESULTS: 28 physicians, 35.7% of whom were women, were included in our study. A comparison was made between the physician and AI scores. While a significant difference was found in the comparison (F=2.368 and p<0.001), no significant difference was found between the groups in the post-hoc Tukey test. The GPT-3.5 mean score was 9.9+/-0.71, the Gemini mean score was 11.30+/-1.17, and the physicians' mean score was 9.82+/-3.70 (Figure 1). CONCLUSIONS: It is clear that GPT-3.5 and Gemini respond similarly to topics in toxicology, just as resident physicians do.
Bedel HA; Bedel C; Selvi F; Zortuk O; Karanci Y
21
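The preceding entry (PMID 40060265) compares three score distributions with an ANOVA followed by a post-hoc Tukey test. A sketch of that analysis is below, using synthetic score arrays drawn from the reported means and standard deviations; the per-run sample sizes for the chatbots are assumptions, not the study's raw data:

```python
# Illustrative sketch of a one-way ANOVA followed by Tukey's HSD; the
# score arrays are synthetic stand-ins generated from the reported
# means and standard deviations, not the study's actual data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
physicians = rng.normal(9.82, 3.70, 28)   # 28 residents, mean 9.82 +/- 3.70
gpt35 = rng.normal(9.90, 0.71, 20)        # repeated GPT-3.5 runs (hypothetical n)
gemini = rng.normal(11.30, 1.17, 20)      # repeated Gemini runs (hypothetical n)

print(stats.f_oneway(physicians, gpt35, gemini))

scores = np.concatenate([physicians, gpt35, gemini])
groups = (["physician"] * len(physicians) + ["gpt35"] * len(gpt35)
          + ["gemini"] * len(gemini))
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```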
39285054
Evaluation of Rhinoplasty Information from ChatGPT, Gemini, and Claude for Readability and Accuracy.
2025
Aesthetic plastic surgery
OBJECTIVE: Assessment of the readability, accuracy, quality, and completeness of ChatGPT (Open AI, San Francisco, CA), Gemini (Google, Mountain View, CA), and Claude (Anthropic, San Francisco, CA) responses to common questions about rhinoplasty. METHODS: Ten questions commonly encountered in the senior author's (SPM) rhinoplasty practice were presented to ChatGPT-4, Gemini and Claude. Seven Facial Plastic and Reconstructive Surgeons with experience in rhinoplasty were asked to evaluate these responses for accuracy, quality, completeness, relevance, and use of medical jargon on a Likert scale. The responses were also evaluated using several readability indices. RESULTS: ChatGPT achieved significantly higher evaluator scores for accuracy, and overall quality but scored significantly lower on completeness compared to Gemini and Claude. All three chatbot responses to the ten questions were rated as neutral to incomplete. All three chatbots were found to use medical jargon and scored at a college reading level for readability scores. CONCLUSIONS: Rhinoplasty surgeons should be aware that the medical information found on chatbot platforms is incomplete and still needs to be scrutinized for accuracy. However, the technology does have potential for use in healthcare education by training it on evidence-based recommendations and improving readability. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Meyer MKR; Kandathil CK; Davis SJ; Durairaj KK; Patel PN; Pepper JP; Spataro EA; Most SP
32
38511678
Is ChatGPT an Accurate and Reliable Source of Information for Patients with Vaccine and Statin Hesitancy?
2024
Medeniyet medical journal
OBJECTIVE: Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence (AI) language model that is trained to respond to questions across a wide range of topics. Our aim is to elucidate whether it would be beneficial for patients who are hesitant about vaccines and statins to use ChatGPT. METHODS: This cross-sectional and observational study was conducted from March 2 to March 30, 2023, using OpenAI ChatGPT-3.5. ChatGPT provided responses to 7 questions related to vaccine and statin hesitancy. The same questions were also directed at physicians. Both the answers from ChatGPT and the physicians were assessed for accuracy, clarity, and conciseness by experts in cardiology, internal medicine, and microbiology, who possessed a minimum of 30 years of professional experience. Responses were rated on a scale of 0-4, and the ChatGPT's average score was compared with that of physicians using the Mann-Whitney U test. RESULTS: The mean scores of ChatGPT (3.78+/-0.36) and physicians (3.65+/-0.57) were similar (Mann-Whitney U test p=0.33). The mean scores of ChatGPT were 3.85+/-0.34 for vaccination and 3.68+/-0.35 for statin use. The mean scores of physicians were 3.73+/-0.51 for vaccination and 3.58+/-0.61 for statin use. There was no statistically significant difference between the mean scores of ChatGPT and physicians for both vaccine and statin use (p=0.403 for vaccination, p=0.678 for statin). ChatGPT did not consider sources of conspiratorial information on vaccines and statins. CONCLUSIONS: This study suggests that ChatGPT can be a valuable source of information for guiding patients with vaccine and statin hesitancy.
Torun C; Sarmis A; Oguz A
0-1
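The preceding entry (PMID 38511678) compares ChatGPT and physician ratings with a Mann-Whitney U test. A minimal sketch follows; the rating values are made up for illustration and are not the study's data:

```python
# Minimal sketch of a Mann-Whitney U comparison of two sets of 0-4
# expert ratings; the values below are hypothetical.
from scipy.stats import mannwhitneyu

chatgpt_ratings = [4, 4, 3.5, 4, 3.75, 3.5, 3.75]     # one hypothetical rating per question
physician_ratings = [4, 3.5, 3, 4, 3.75, 3.5, 3.5]

stat, p = mannwhitneyu(chatgpt_ratings, physician_ratings, alternative="two-sided")
print(f"U={stat}, p={p:.3f}")
```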
38084123
Are Different Versions of ChatGPT's Ability Comparable to the Clinical Diagnosis Presented in Case Reports? A Descriptive Study.
2023
Journal of multidisciplinary healthcare
OBJECTIVE: ChatGPT, an advanced language model developed by OpenAI, holds the opportunity to bring about a transformation in clinical decision-making within the realm of medicine. Despite the growing popularity of research related to ChatGPT, there is a paucity of research assessing its appropriateness for clinical decision support. Our study delved into ChatGPT's ability to respond in accordance with the diagnoses found in case reports, with the intention of serving as a reference for clinical decision-making. METHODS: We included 147 case reports from the Chinese Medical Association Journal Database that generated primary and secondary diagnoses covering various diseases. Each question was independently posed three times to both GPT-3.5 and GPT-4.0, respectively. The results were analyzed regarding ChatGPT's mean scores and accuracy types. RESULTS: GPT-4.0 displayed moderate accuracy in primary diagnoses. With an increasing number of inputs, a corresponding enhancement in the accuracy of ChatGPT's outputs became evident. Notably, autoimmune diseases comprised the largest proportion of case reports, and the mean score for primary diagnosis exhibited statistically significant differences in autoimmune diseases. CONCLUSION: Our findings suggest the potential practicality of utilizing ChatGPT for clinical decision-making. To enhance the accuracy of ChatGPT, it will be necessary to integrate it with existing electronic health record systems in the future.
Chen J; Liu L; Ruan S; Li M; Yin C
10
39232303
Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions.
2024
Journal of surgical education
OBJECTIVE: Gender inequities persist in academic surgery with implicit bias impacting hiring and promotion at all levels. We hypothesized that creating letters of recommendation for both female and male candidates for academic promotion in surgery using an AI platform, ChatGPT, would elucidate the entrained gender biases already present in the promotion process. DESIGN: Using ChatGPT, we generated 6 letters of recommendation for "a phenomenal surgeon applying for job promotion to associate professor position", specifying "female" or "male" before surgeon in the prompt. We compared 3 "female" letters to 3 "male" letters for differences in length, language, and tone. RESULTS: The letters written for females averaged 298 words compared to 314 for males. Female letters more frequently referred to "compassion", "empathy", and "inclusivity"; whereas male letters referred to "respect", "reputation", and "skill". CONCLUSIONS: These findings highlight the gender bias present in promotion letters generated by ChatGPT, reiterating existing literature regarding real letters of recommendation in academic surgery. Our study suggests that surgeons should use AI tools, such as ChatGPT, with caution when writing LORs for academic surgery faculty promotion.
Desai P; Wang H; Davis L; Ullmann TM; DiBrito SR
10
37701430
The potential of chatbots in chronic venous disease patient management.
2023
JVS-vascular insights
OBJECTIVE: Health care providers and recipients have been using artificial intelligence and its subfields, such as natural language processing and machine learning technologies, in the form of search engines to obtain medical information for some time now. Although a search engine returns a ranked list of webpages in response to a query and allows the user to obtain information from those links directly, ChatGPT has elevated the interface between humans and artificial intelligence by attempting to provide relevant information in a human-like textual conversation. This technology is being adopted rapidly and has enormous potential to impact various aspects of health care, including patient education, research, scientific writing, pre-visit/post-visit queries, documentation assistance, and more. The objective of this study is to assess whether chatbots could assist with answering patient questions and electronic health record inbox management. METHODS: We devised two questionnaires: (1) administrative and non-complex medical questions (based on actual inbox questions); and (2) complex medical questions on the topic of chronic venous disease. We graded the performance of publicly available chatbots regarding their potential to assist with electronic health record inbox management. The study was graded by an internist and a vascular medicine specialist independently. RESULTS: On administrative and non-complex medical questions, ChatGPT 4.0 performed better than ChatGPT 3.5. ChatGPT 4.0 received a grade of 1 on all the questions: 20 of 20 (100%). ChatGPT 3.5 received a grade of 1 on 14 of 20 questions (70%), grade 2 on 4 of 20 questions (20%), grade 3 on 0 questions (0%), and grade 4 on 2/20 questions (10%). On complex medical questions, ChatGPT 4.0 performed the best. ChatGPT 4.0 received a grade of 1 on 15 of 20 questions (75%), grade 2 on 2 of 20 questions (10%), grade 3 on 2 of 20 questions (10%), and grade 4 on 1 of 20 questions (5%). ChatGPT 3.5 received a grade of 1 on 9 of 20 questions (45%), grade 2 on 4 of 20 questions (20%), grade 3 on 4 of 20 questions (20%), and grade 4 on 3 of 20 questions (15%). Clinical Camel received a grade of 1 on 0 of 20 questions (0%), grade 2 on 5 of 20 questions (25%), grade 3 on 5 of 20 questions (25%), and grade 4 on 10 of 20 questions (50%). CONCLUSIONS: Based on our interactions with ChatGPT regarding the topic of chronic venous disease, it is plausible that in the future, this technology may be used to assist with electronic health record inbox management and offload medical staff. However, for this technology to receive regulatory approval to be used for that purpose, it will require extensive supervised training by subject experts, have guardrails to prevent "hallucinations" and maintain confidentiality, and prove that it can perform at a level comparable to (if not better than) humans. (JVS-Vascular Insights 2023;1:100019.).
Athavale A; Baier J; Ross E; Fukaya E
43
38462064
Evaluation of ChatGPT-generated medical responses: A systematic review and meta-analysis.
2024
Journal of biomedical informatics
OBJECTIVE: Large language models (LLMs) such as ChatGPT are increasingly explored in medical domains. However, the absence of standard guidelines for performance evaluation has led to methodological inconsistencies. This study aims to summarize the available evidence on evaluating ChatGPT's performance in answering medical questions and provide direction for future research. METHODS: An extensive literature search was conducted on June 15, 2023, across ten medical databases. The keyword used was "ChatGPT," without restrictions on publication type, language, or date. Studies evaluating ChatGPT's performance in answering medical questions were included. Exclusions comprised review articles, comments, patents, non-medical evaluations of ChatGPT, and preprint studies. Data was extracted on general study characteristics, question sources, conversation processes, assessment metrics, and performance of ChatGPT. An evaluation framework for LLM in medical inquiries was proposed by integrating insights from selected literature. This study is registered with PROSPERO, CRD42023456327. RESULTS: A total of 3520 articles were identified, of which 60 were reviewed and summarized in this paper and 17 were included in the meta-analysis. ChatGPT displayed an overall integrated accuracy of 56% (95% CI: 51%-60%, I² = 87%) in addressing medical queries. However, the studies varied in question resource, question-asking process, and evaluation metrics. As per our proposed evaluation framework, many studies failed to report methodological details, such as the date of inquiry, version of ChatGPT, and inter-rater consistency. CONCLUSION: This review reveals ChatGPT's potential in addressing medical inquiries, but the heterogeneity of the study design and insufficient reporting might affect the results' reliability. Our proposed evaluation framework provides insights for the future study design and transparent reporting of LLM in responding to medical questions.
Wei Q; Yao Z; Cui Y; Wei B; Jin Z; Xu X
0-1
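The preceding entry (PMID 38462064) reports a pooled accuracy with a confidence interval across 17 studies. The sketch below shows one common way such a figure can be obtained, a DerSimonian-Laird random-effects pooling of logit-transformed proportions; the per-study counts are invented for illustration and the review's actual pooling method may differ:

```python
# Rough sketch of pooling an accuracy proportion across studies with a
# DerSimonian-Laird random-effects model on the logit scale. The study
# counts below are invented; the review itself pooled 17 studies to get
# ~56% (95% CI 51%-60%).
import numpy as np

# (correct_answers, total_questions) per study -- hypothetical values
studies = [(60, 100), (45, 90), (110, 180), (30, 70), (75, 120)]

y, v = [], []
for k, n in studies:
    p = k / n
    y.append(np.log(p / (1 - p)))           # logit of the proportion
    v.append(1 / k + 1 / (n - k))            # approximate variance of the logit

y, v = np.array(y), np.array(v)
w = 1 / v                                     # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance

w_re = 1 / (v + tau2)                         # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

expit = lambda x: 1 / (1 + np.exp(-x))
print("pooled accuracy:", expit(mu))
print("95% CI:", expit(mu - 1.96 * se), "-", expit(mu + 1.96 * se))
```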
38556438
Evaluating The Role of ChatGPT as a Study Aid in Medical Education in Surgery.
2024
Journal of surgical education
OBJECTIVE: Our aim was to assess how ChatGPT compares to Google search in assisting medical students during their surgery clerkships. DESIGN: We conducted a crossover study where participants were asked to complete 2 standardized assessments on different general surgery topics before and after they used either Google search or ChatGPT. SETTING: The study was conducted at the Perelman School of Medicine at the University of Pennsylvania (PSOM) in Philadelphia, Pennsylvania. PARTICIPANTS: 19 third-year medical students participated in our study. RESULTS: The baseline (preintervention) performance of participants on both quizzes did not differ between the Google search and ChatGPT groups (p = 0.728). Students overall performed better postintervention and the difference in test scores was statistically significant for both the Google group (p < 0.001) and the ChatGPT group (p = 0.01). The mean percent increase in test scores pre- and postintervention was higher in the Google group at 11% vs. 10% in the ChatGPT group, but this difference was not statistically significant (p = 0.87). Similarly, there was no statistically significant difference in postintervention scores on both assessments between the 2 groups (p = 0.508). Postassessment surveys revealed that all students (100%) have known about ChatGPT before, and 47% have previously used it for various purposes. On a scale of 1 to 10 with 1 being the lowest and 10 being the highest, the feasibility of ChatGPT and its usefulness in finding answers were rated as 8.4 and 6.6 on average, respectively. When asked to rate the likelihood of using ChatGPT in their surgery rotation, the answers ranged between 1 and 3 ("Unlikely" 47%), 4 to 6 ("intermediate" 26%), and 7 to 10 ("likely" 26%). CONCLUSION: Our results show that even though ChatGPT was comparable to Google search in finding answers pertaining to surgery questions, many students were reluctant to use ChatGPT for learning purposes during their surgery clerkship.
Araji T; Brooks AD
10
38130802
Feasibility and acceptability of ChatGPT generated radiology report summaries for cancer patients.
2023
Digital health
OBJECTIVE: Patients now have direct access to their radiology reports, which can include complex terminology and be difficult to understand. We assessed ChatGPT's ability to generate summarized MRI reports for patients with prostate cancer and evaluated physician satisfaction with the artificial intelligence (AI)-summarized report. METHODS: We used ChatGPT to summarize five full MRI reports for patients with prostate cancer performed at a single institution from 2021 to 2022. Three summarized reports were generated for each full MRI report. Full MRI and summarized reports were assessed for readability using Flesch-Kincaid Grade Level (FK) score. Radiation oncologists were asked to evaluate the AI-summarized reports via an anonymous questionnaire. Qualitative responses were given on a 1-5 Likert-type scale. Fifty newly diagnosed prostate cancer patient MRIs performed at a single institution were additionally assessed for physician online portal response rates. RESULTS: Fifteen summarized reports were generated from five full MRI reports using ChatGPT. The median FK score for the full MRI reports and summarized reports was 9.6 vs. 5.0 (p < 0.05), respectively. Twelve radiation oncologists responded to our questionnaire. The mean [SD] ratings for summarized reports were factual correctness (4.0 [0.6]), understanding (4.0 [0.7]), completeness (4.1 [0.5]), potential for harm (3.5 [0.9]), overall quality (3.4 [0.9]), and likelihood to send to patient (3.1 [1.1]). Current physician online portal response rates were 14/50 (28%) at our institution. CONCLUSIONS: We demonstrate a novel application of ChatGPT to summarize MRI reports at a reading level appropriate for patients. Physicians were likely to be satisfied with the summarized reports with respect to factual correctness, ease of understanding, and completeness. Physicians were less likely to be satisfied with respect to potential for harm, overall quality, and likelihood to send to patients. Further research is needed to optimize ChatGPT's ability to summarize radiology reports and understand what factors influence physician trust in AI-summarized reports.
Chung EM; Zhang SC; Nguyen AT; Atkins KM; Sandler HM; Kamrava M
43
38613821
PMC-LLaMA: toward building open-source language models for medicine.
2024
Journal of the American Medical Informatics Association : JAMIA
OBJECTIVE: Recently, large language models (LLMs) have showcased remarkable capabilities in natural language understanding. While demonstrating proficiency in everyday conversations and question-answering (QA) situations, these models frequently struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge. In this article, we describe the procedure for building a powerful, open-source language model specifically designed for medicine applications, termed as PMC-LLaMA. MATERIALS AND METHODS: We adapt a general-purpose LLM toward the medical domain, involving data-centric knowledge injection through the integration of 4.8M biomedical academic papers and 30K medical textbooks, as well as comprehensive domain-specific instruction fine-tuning, encompassing medical QA, rationale for reasoning, and conversational dialogues with 202M tokens. RESULTS: While evaluating various public medical QA benchmarks and manual rating, our lightweight PMC-LLaMA, which consists of only 13B parameters, exhibits superior performance, even surpassing ChatGPT. All models, codes, and datasets for instruction tuning will be released to the research community. DISCUSSION: Our contributions are 3-fold: (1) we build up an open-source LLM toward the medical domain. We believe the proposed PMC-LLaMA model can promote further development of foundation models in medicine, serving as a medical trainable basic generative language backbone; (2) we conduct thorough ablation studies to demonstrate the effectiveness of each proposed component, demonstrating how different training data and model scales affect medical LLMs; (3) we contribute a large-scale, comprehensive dataset for instruction tuning. CONCLUSION: In this article, we systematically investigate the process of building up an open-source medical-specific LLM, PMC-LLaMA.
Wu C; Lin W; Zhang X; Zhang Y; Xie W; Wang Y
10
38758667
Utilizing ChatGPT as a scientific reasoning engine to differentiate conflicting evidence and summarize challenges in controversial clinical questions.
2024
Journal of the American Medical Informatics Association : JAMIA
OBJECTIVE: Synthesizing and evaluating inconsistent medical evidence is essential in evidence-based medicine. This study aimed to employ ChatGPT as a sophisticated scientific reasoning engine to identify conflicting clinical evidence and summarize unresolved questions to inform further research. MATERIALS AND METHODS: We evaluated ChatGPT's effectiveness in identifying conflicting evidence and investigated its principles of logical reasoning. An automated framework was developed to generate a PubMed dataset focused on controversial clinical topics. ChatGPT analyzed this dataset to identify consensus and controversy, and to formulate unsolved research questions. Expert evaluations were conducted 1) on the consensus and controversy for factual consistency, comprehensiveness, and potential harm and, 2) on the research questions for relevance, innovation, clarity, and specificity. RESULTS: The gpt-4-1106-preview model achieved a 90% recall rate in detecting inconsistent claim pairs within a ternary assertions setup. Notably, without explicit reasoning prompts, ChatGPT provided sound reasoning for the assertions between claims and hypotheses, based on an analysis grounded in relevance, specificity, and certainty. ChatGPT's conclusions of consensus and controversies in clinical literature were comprehensive and factually consistent. The research questions proposed by ChatGPT received high expert ratings. DISCUSSION: Our experiment implies that, in evaluating the relationship between evidence and claims, ChatGPT considered more detailed information beyond a straightforward assessment of sentimental orientation. This ability to process intricate information and conduct scientific reasoning regarding sentiment is noteworthy, particularly as this pattern emerged without explicit guidance or directives in prompts, highlighting ChatGPT's inherent logical reasoning capabilities. CONCLUSION: This study demonstrated ChatGPT's capacity to evaluate and interpret scientific claims. Such proficiency can be generalized to broader clinical research literature. ChatGPT effectively aids in facilitating clinical studies by proposing unresolved challenges based on analysis of existing studies. However, caution is advised as ChatGPT's outputs are inferences drawn from the input literature and could be harmful to clinical practice.
Xie S; Zhao W; Deng G; He G; He N; Lu Z; Hu W; Zhao M; Du J
0-1
39833868
Health profession students' perceptions of ChatGPT in healthcare and education: insights from a mixed-methods study.
2025
BMC medical education
OBJECTIVE: The aim of this study was to investigate the perceptions of health profession students regarding ChatGPT use and the potential impact of integrating ChatGPT in healthcare and education. BACKGROUND: Artificial Intelligence is increasingly utilized in medical education and clinical profession training. However, since its introduction, ChatGPT remains relatively unexplored in terms of health profession students' acceptance of its use in education and practice. DESIGN: This study employed a mixed-methods approach, using a web-based survey. METHODS: The study involved a convenience sample recruited through various methods, including Faculty of Medicine announcements, social media, and snowball sampling, during the second semester (March to June 2023). Data were collected using a structured questionnaire with closed-ended questions and three open-ended questions. The final sample comprised 217 undergraduate health profession students, including 73 (33.6%) nursing students, 65 (30.0%) medical students, and 79 (36.4%) occupational therapy, physiotherapy, and speech therapy students. RESULTS: Among the surveyed students, 86.2% were familiar with ChatGPT, with generally positive perceptions as reflected by a mean score of 4.04 (SD = 0.62) on a scale of 1 to 5. Positive feedback was particularly noted with respect to ChatGPT's role in information retrieval and summarization. The qualitative data revealed three main themes: experiences with ChatGPT, its impact on the quality of healthcare, and its integration into the curriculum. The findings highlight benefits such as serving as a convenient tool for accessing information, reducing human errors, and fostering innovative learning approaches. However, they also underscore areas of concern, including ethical considerations, challenges in fostering critical thinking, and issues related to verification. The absence of significant differences between the different fields of study indicates consistent perceptions across nursing, medicine, and other health profession students. CONCLUSIONS: Our findings underscore the necessity for continuous refinement to enhance ChatGPT's accuracy, reliability, and alignment with the diverse educational needs of health professions. These insights not only deepen our understanding of student perceptions of ChatGPT in healthcare education but also have significant implications for the future integration of AI in health profession practice. The study emphasizes the importance of a careful balance between leveraging the benefits of AI tools and addressing ethical and pedagogical concerns.
Moskovich L; Rozani V
10
38112347
FROM TEXT TO DIAGNOSE: CHATGPT'S EFFICACY IN MEDICAL DECISION-MAKING.
2023
Wiadomosci lekarskie (Warsaw, Poland : 1960)
OBJECTIVE: To evaluate the diagnostic capabilities of ChatGPT in the field of medical diagnosis. PATIENTS AND METHODS: We utilized 50 clinical cases, employing the Large Language Model ChatGPT-3.5. The experiment had three phases, each with a new chat setup. In the initial phase, ChatGPT received detailed clinical case descriptions, guided by a "Persona Pattern" prompt. In the second phase, cases with diagnostic errors were addressed by providing potential diagnoses for ChatGPT to choose from. The final phase assessed artificial intelligence's ability to mimic a medical practitioner's diagnostic process, with prompts limiting initial information to symptoms and history. RESULTS: In the initial phase, ChatGPT showed a 66.00% diagnostic accuracy, surpassing physicians by nearly 50%. Notably, in 11 cases requiring image interpretation, ChatGPT struggled initially but achieved a correct diagnosis for four without added interpretations. In the second phase, ChatGPT demonstrated a remarkable 70.59% diagnostic accuracy, while physicians averaged 41.47%. Furthermore, the overall accuracy of the Large Language Model across the first and second phases together was 90.00%. In the third phase emulating real doctor decision-making, ChatGPT achieved a 46.00% success rate. CONCLUSION: Our research underscores ChatGPT's strong potential in clinical medicine as a diagnostic tool, especially in structured scenarios. It emphasizes the need for supplementary data and the complexity of medical diagnosis. This contributes valuable insights to AI-driven clinical diagnostics, with a nod to the importance of prompt engineering techniques in ChatGPT's interaction with doctors.
Mykhalko Y; Kish P; Rubtsova Y; Kutsyn O; Koval V
10
38488302
ChatGPT for Automated Cross-Checking of Authors' Conflicts of Interest Against Industry Payments.
2024
Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery
OBJECTIVE: The Centers for Medicare & Medicaid Services "OpenPayments" database tracks industry payments to US physicians to improve research conflicts of interest (COIs) transparency, but manual cross-checking of articles' authors against this database is labor-intensive. This study aims to assess the potential of large language models (LLMs) like ChatGPT to automate COI data analysis in medical publications. STUDY DESIGN: An observational study analyzing the accuracy of ChatGPT in automating the cross-checking of COI disclosures in medical research articles against the OpenPayments database. SETTING: Publications regarding Food and Drug Administration-approved biologics for chronic rhinosinusitis with nasal polyposis: omalizumab, mepolizumab, and dupilumab. METHODS: First, ChatGPT evaluated author affiliations from PubMed to identify those based in the United States. Second, for author names matching 1 or multiple payment recipients in OpenPayments, ChatGPT undertook a comparative analysis between author affiliation and OpenPayments recipient metadata. Third, ChatGPT scrutinized full article COI statements, producing an intricate matrix of disclosures for each author against each relevant company (Sanofi, Regeneron, Genentech, Novartis, and GlaxoSmithKline). A random subset of responses was manually checked for accuracy. RESULTS: In total, 78 relevant articles and 294 unique US authors were included, leading to 980 LLM queries. Manual verification showed accuracies of 100% (200/200; 95% confidence interval [CI]: 98.1%-100%) for country analysis, 97.4% (113/116; 95% CI: 92.7%-99.1%) for matching author affiliations with OpenPayments metadata, and 99.2% (1091/1100; 95% CI: 98.5%-99.6%) for COI statement data extraction. CONCLUSION: LLMs have robust potential to automate author-company-specific COI cross-checking against the OpenPayments database. Our findings pave the way for streamlined, efficient, and accurate COI assessment that could be widely employed across medical research.
Safranek C; Liu C; Richmond R; Boyi T; Rimmer R; Manes RP
10
39603549
Taxonomy-based prompt engineering to generate synthetic drug-related patient portal messages.
2024
Journal of biomedical informatics
OBJECTIVE: The objectives of this study were to: (1) create a corpus of synthetic drug-related patient portal messages to address the current lack of publicly available datasets for model development, (2) assess differences in language used and linguistics among the synthetic patient portal messages, and (3) assess the accuracy of patient-reported drug side effects for different racial groups. METHODS: We leveraged a taxonomy for patient- and clinician-generated content to guide prompt engineering for synthetic drug-related patient portal messages. We generated two groups of messages: the first group (200 messages) used a subset of the taxonomy relevant to a broad range of drug-related messages and the second group (250 messages) used a subset of the taxonomy relevant to a narrow range of messages focused on side effects. Prompts also include one of five racial groups. Next, we assessed linguistic characteristics among message parts (subject, beginning, body, ending) across different prompt specifications (urgency, patient portal taxa, race). We also assessed the performance and frequency of patient-reported side effects across different racial groups and compared to data present in a real world data source (SIDER). RESULTS: The study generated 450 synthetic patient portal messages, and we assessed linguistic patterns, accuracy of drug-side effect pairs, frequency of pairs compared to real world data. Linguistic analysis revealed variations in language usage and politeness and analysis of positive predictive values identified differences in symptoms reported based on urgency levels and racial groups in the prompt. We also found that low incident SIDER drug-side effect pairs were observed less frequently in our dataset. CONCLUSION: This study demonstrates the potential of synthetic patient portal messages as a valuable resource for healthcare research. After creating a corpus of synthetic drug-related patient portal messages, we identified significant language differences and provided evidence that drug-side effect pairs observed in messages are comparable to what is expected in real world settings.
Wang N; Treewaree S; Zirikly A; Lu YL; Nguyen MH; Agarwal B; Shah J; Stevenson JM; Taylor CO
10
39254470
Is Artificial Intelligence (AI) currently able to provide evidence-based scientific responses on methods that can improve the outcomes of embryo transfers? No.
2024
JBRA assisted reproduction
OBJECTIVE: The rapid development of Artificial Intelligence (AI) has raised questions about its potential uses in different sectors of everyday life. Specifically in medicine, the question arose whether chatbots could be used as tools for clinical decision-making or patients' and physicians' education. To answer this question in the context of fertility, we conducted a test to determine whether current AI platforms can provide evidence-based responses regarding methods that can improve the outcomes of embryo transfers. METHODS: We asked nine popular chatbots to write a 300-word scientific essay, outlining scientific methods that improve embryo transfer outcomes. We then gathered the responses and extracted the methods suggested by each chatbot. RESULTS: Out of a total of 43 recommendations, which could be grouped into 19 similar categories, only 3/19 (15.8%) were evidence-based practices, those being "ultrasound-guided embryo transfer" in 7/9 (77.8%) chatbots, "single embryo transfer" in 4/9 (44.4%) and "use of a soft catheter" in 2/9 (22.2%), whereas some controversial responses like "preimplantation genetic testing" appeared frequently (6/9 chatbots; 66.7%), along with other debatable recommendations like "endometrial receptivity assay", "assisted hatching" and "time-lapse incubator". CONCLUSIONS: Our results suggest that AI is not yet in a position to give evidence-based recommendations in the field of fertility, particularly concerning embryo transfer, since the vast majority of responses consisted of scientifically unsupported recommendations. As such, both patients and physicians should be wary of guiding care based on chatbot recommendations in infertility. Chatbot results might improve with time especially if trained from validated medical databases; however, this will have to be scientifically checked.
Kolokythas A; Dahan MH
0-1
38815316
Automating biomedical literature review for rapid drug discovery: Leveraging GPT-4 to expedite pandemic response.
2024
International journal of medical informatics
OBJECTIVE: The rapid expansion of the biomedical literature challenges traditional review methods, especially during outbreaks of emerging infectious diseases when quick action is critical. Our study aims to explore the potential of ChatGPT to automate the biomedical literature review for rapid drug discovery. MATERIALS AND METHODS: We introduce a novel automated pipeline helping to identify drugs for a given virus in response to a potential future global health threat. Our approach can be used to select PubMed articles identifying a drug target for the given virus. We tested our approach on two known pathogens: SARS-CoV-2, where the literature is vast, and Nipah, where the literature is sparse. Specifically, a panel of three experts reviewed a set of PubMed articles and labeled them as either describing a drug target for the given virus or not. The same task was given to the automated pipeline and its performance was based on whether it labeled the articles similarly to the human experts. We applied a number of prompt engineering techniques to improve the performance of ChatGPT. RESULTS: Our best configuration used GPT-4 by OpenAI and achieved an out-of-sample validation performance with accuracy/F1-score/sensitivity/specificity of 92.87%/88.43%/83.38%/97.82% for SARS-CoV-2 and 87.40%/73.90%/74.72%/91.36% for Nipah. CONCLUSION: These results highlight the utility of ChatGPT in drug discovery and development and reveal their potential to enable rapid drug target identification during a pandemic-level health emergency.
Yang J; Walker KC; Bekar-Cesaretli AA; Hao B; Bhadelia N; Joseph-McCarthy D; Paschalidis IC
10
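The preceding entry (PMID 38815316) describes an automated pipeline that labels PubMed abstracts as describing, or not describing, a drug target for a given virus. A toy sketch of that screening step is below; the prompt wording, model name ("gpt-4o"), and YES/NO output format are assumptions rather than the authors' configuration, and the call requires the openai package with an API key in the environment:

```python
# Toy sketch of screening a PubMed abstract with an LLM, loosely inspired
# by the pipeline described above. Prompt, model, and output format are
# assumptions, not the authors' actual configuration.
from openai import OpenAI

client = OpenAI()

def mentions_drug_target(abstract: str, virus: str) -> bool:
    prompt = (
        f"Does the following abstract identify a drug target for {virus}? "
        "Answer with exactly one word, YES or NO.\n\n" + abstract
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study used a GPT-4 variant
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Example call (abstract text truncated for brevity):
# print(mentions_drug_target("We show that inhibiting TMPRSS2 blocks ...", "SARS-CoV-2"))
```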
38135548
Development and Evaluation of Aeyeconsult: A Novel Ophthalmology Chatbot Leveraging Verified Textbook Knowledge and GPT-4.
2024
Journal of surgical education
OBJECTIVE: There has been much excitement on the use of large language models (LLMs) such as ChatGPT in ophthalmology. However, LLMs are limited in that they are trained on unverified information and do not cite their sources. This paper highlights a new methodology to create a generative AI chatbot to answer eye care related questions which uses only verified ophthalmology textbooks as data and cites its sources. SETTING: Yale School of Medicine Department of Ophthalmology and Visual Science. DESIGN/METHODS: Aeyeconsult, an ophthalmology chatbot, was developed using GPT-4 (the LLM used to power the publicly available chatbot ChatGPT-4), LangChain, and Pinecone. Ophthalmology textbooks were processed into embeddings and stored in Pinecone. User queries were similarly converted, compared to stored embeddings, and GPT-4 generated responses. The interface was adapted from public code. Both Aeyeconsult and ChatGPT-4 were tested on the same 260 questions from OphthoQuestions.com, with the first response from Aeyeconsult and ChatGPT-4 recorded as the answer. RESULTS: Aeyeconsult outperformed ChatGPT-4 on the OKAP dataset, with 83.4% correct answers compared to 69.2% (p = 0.0118). Aeyeconsult also had fewer instances of no answer and multiple answers. Both systems performed best in General Medicine, with Aeyeconsult achieving 96.2% accuracy. Aeyeconsult's weakest performance was in Clinical Optics at 68.1%, but it still outperformed ChatGPT-4 in this category (45.5%). CONCLUSION: LLMs may be useful in answering ophthalmology questions but their trustworthiness and accuracy is limited due to training on unverified internet data and lack of source citation. We used a new methodology, using verified ophthalmology textbooks as source material and providing citations, to mitigate these issues, resulting in a chatbot more accurate than ChatGPT-4 in answering OKAPs style questions.
Singer MB; Fu JJ; Chow J; Teng CC
0-1
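The preceding entry (PMID 38135548) describes a retrieval-augmented chatbot: textbook passages are embedded, stored in a vector index, retrieved for each query, and passed to GPT-4 together with citations. The sketch below shows that general pattern with an in-memory numpy index standing in for Pinecone/LangChain; the passages, model names, and prompt are placeholders, and the openai package with an API key is required:

```python
# Sketch of the retrieval-augmented pattern described above: embed verified
# reference passages, retrieve the closest ones for a query, and let the
# model answer with citations. Pinecone and LangChain are replaced here by
# an in-memory numpy index for brevity; passages and model names are
# placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

passages = [
    ("Textbook A, ch. 3", "Primary open-angle glaucoma is managed initially with ..."),
    ("Textbook B, ch. 7", "Acute angle-closure presents with pain, halos, and ..."),
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

passage_vecs = embed([text for _, text in passages])

def answer(question: str, k: int = 2) -> str:
    qv = embed([question])[0]
    sims = passage_vecs @ qv / (np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(qv))
    context = "\n\n".join(f"[{passages[i][0]}] {passages[i][1]}"
                          for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user",
                   "content": f"Answer using only these sources and cite them:\n{context}\n\nQ: {question}"}],
    )
    return chat.choices[0].message.content
```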
37017291
Implications of large language models such as ChatGPT for dental medicine.
2023
Journal of esthetic and restorative dentistry : official publication of the American Academy of Esthetic Dentistry ... [et al.]
OBJECTIVE: This article provides an overview of the implications of ChatGPT and other large language models (LLMs) for dental medicine. OVERVIEW: ChatGPT, a LLM trained on massive amounts of textual data, is adept at fulfilling various language-related tasks. Despite its impressive capabilities, ChatGPT has serious limitations, such as occasionally giving incorrect answers, producing nonsensical content, and presenting misinformation as fact. Dental practitioners, assistants, and hygienists are not likely to be significantly impacted by LLMs. However, LLMs could affect the work of administrative personnel and the provision of dental telemedicine. LLMs offer potential for clinical decision support, text summarization, efficient writing, and multilingual communication. As more people seek health information from LLMs, it is crucial to safeguard against inaccurate, outdated, and biased responses to health-related queries. LLMs pose challenges for patient data confidentiality and cybersecurity that must be tackled. In dental education, LLMs present fewer challenges than in other academic fields. LLMs can enhance academic writing fluency, but acceptable usage boundaries in science need to be established. CONCLUSIONS: While LLMs such as ChatGPT may have various useful applications in dental medicine, they come with risks of malicious use and serious limitations, including the potential for misinformation. CLINICAL SIGNIFICANCE: Along with the potential benefits of using LLMs as an additional tool in dental medicine, it is crucial to carefully consider the limitations and potential risks inherent in such artificial intelligence technologies.
Eggmann F; Weiger R; Zitzmann NU; Blatz MB
10
37991499
How ChatGPT works: a mini review.
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
OBJECTIVE: This paper offers a mini-review of OpenAI's language model, ChatGPT, detailing its mechanisms, applications in healthcare, and comparisons with other large language models (LLMs). METHODS: The underlying technology of ChatGPT is outlined, focusing on its neural network architecture, training process, and the role of key elements such as input embedding, encoder, decoder, attention mechanism, and output projection. The advancements in GPT-4, including its capacity for internet connection and the integration of plugins for enhanced functionality are discussed. RESULTS: ChatGPT can generate creative, coherent, and contextually relevant sentences, making it a valuable tool in healthcare for patient engagement, medical education, and clinical decision support. Yet, like other LLMs, it has limitations, including a lack of common sense knowledge, a propensity for hallucination of facts, a restricted context window, and potential privacy concerns. CONCLUSION: Despite the limitations, LLMs like ChatGPT offer transformative possibilities for healthcare. With ongoing research in model interpretability, common-sense reasoning, and handling of longer context windows, their potential is vast. It is crucial for healthcare professionals to remain informed about these technologies and consider their ethical integration into practice.
Briganti G
10
40318343
Taking the plunge together: A student-led faculty learning seminar series on artificial intelligence.
2,025
Currents in pharmacy teaching & learning
OBJECTIVE: This pilot study explored the effectiveness of a student-led faculty development series by evaluating two key outcomes: the capacity of students to deliver meaningful professional development sessions to faculty and the impact of these sessions on faculty perceptions of generative artificial intelligence (AI). METHODS: In a flipped classroom model, two pharmacy students and 12 faculty members engaged in a semester-long learning series on AI. Each week, students presented on a selected topic followed by discussions that facilitated self-directed learning, including decision-making and project management. Faculty perceptions of AI were evaluated before and after the series using an anonymous survey tool (Technology Acceptance Model Edited to Assess ChatGPT Adoption, TAME-ChatGPT). Respondents created a self-chosen code to link their responses. Additionally, students completed a questionnaire to gauge their reflective thinking after the series. RESULTS: Faculty participation averaged 7 members per session. Twelve faculty completed the pre-survey, while 8 faculty completed the post-survey. Among those who had used ChatGPT (n = 4 pre [33 %], n = 2 post [25 %]), scores for usefulness increased, while concerns about risks decreased. In contrast, faculty who had not used ChatGPT (n = 8 pre [67 %], n = 6 post [75 %]) reported unchanged or improved scores for ease of use and reduced anxiety. Both students responded positively to the reflective thinking questionnaire. CONCLUSION: This pilot study demonstrated that a student-led faculty learning series effectively fostered mutual collaborative learning, benefiting both faculty and students. Pharmacy students, often an underutilized resource, can play a valuable role in faculty development. Colleges of pharmacy may enhance faculty engagement by integrating student-led initiatives into their programs.
Munir F; Abdulbaki E; Saiyad Z; Ipema H
10
38188855
Performance of ChatGPT incorporated chain-of-thought method in bilingual nuclear medicine physician board examinations.
2,024
Digital health
OBJECTIVE: This research explores the performance of ChatGPT, compared to human doctors, in a bilingual (Mandarin Chinese and English) medical specialty exam in Nuclear Medicine in Taiwan. METHODS: The study employed the generative pre-trained transformer (GPT-4) and an integrated chain-of-thought (COT) method to enhance performance by triggering and explaining the thinking process so as to answer the question in a coherent and logical manner. Questions from the Taiwanese Nuclear Medicine Specialty Exam served as the basis for testing. The research analyzed the correctness of AI responses in different sections of the exam and explored the influence of question length and language proportion on accuracy. RESULTS: AI, especially ChatGPT with COT, exhibited exceptional capabilities in theoretical knowledge, clinical medicine, and handling integrated questions, often surpassing or matching human doctor performance. However, AI struggled with questions related to medical regulations. The analysis of question length showed that questions in the 109-163 word range yielded the highest accuracy. Moreover, an increase in the proportion of English words in questions improved both AI and human accuracy. CONCLUSIONS: This research highlights the potential and challenges of AI in the medical field. ChatGPT demonstrates significant competence in various aspects of medical knowledge. However, areas like medical regulations require improvement. The study also suggests that AI may help in evaluating exam question difficulty and maintaining fairness in examinations. These findings shed light on AI's role in the medical field, with potential applications in healthcare education, exam preparation, and multilingual environments. Ongoing AI advancements are expected to further enhance AI's utility in the medical domain.
Ting YT; Hsieh TC; Wang YF; Kuo YC; Chen YJ; Chan PK; Kao CH
21
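The chain-of-thought (COT) method in the record above amounts to instructing the model to reason step by step before committing to an answer. A minimal sketch of how such a prompt can be assembled is given below; the instruction wording and the sample question are illustrative assumptions, not the authors' actual prompt.

```python
def build_cot_prompt(question: str, options: list[str]) -> str:
    """Wrap an exam question in a chain-of-thought instruction (illustrative wording)."""
    formatted = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    return (
        "You are taking a nuclear medicine board exam. "
        "Think through the problem step by step: restate the key findings, "
        "explain the relevant physiology or regulation, eliminate wrong options, "
        "then state the final answer as a single letter on the last line.\n\n"
        f"Question:\n{question}\n\nOptions:\n{formatted}"
    )

# Example usage with an invented question
print(build_cot_prompt(
    "Which radiopharmaceutical is used for dopamine transporter imaging?",
    ["Tc-99m MDP", "I-123 ioflupane", "F-18 FDG", "Ga-67 citrate"],
))
```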
37274428
Using ChatGPT in Medical Research: Current Status and Future Directions.
2,023
Journal of multidisciplinary healthcare
OBJECTIVE: This review aims to evaluate the current evidence on the use of the Generative Pre-trained Transformer (ChatGPT) in medical research, including but not limited to treatment, diagnosis, or medication provision. METHODS: This review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched Google Scholar, Web of Science, PubMed, and Medline to identify studies published between 2022 and 2023 that aimed to utilize ChatGPT in medical research. All identified references were stored in EndNote. RESULTS: We initially identified 114 articles, out of which six studies met the inclusion and exclusion criteria for full-text screening. Among the six studies, two focused on drug development (33.33%), two on literature review writing (33.33%), and one each on medical report improvement, provision of medical information, improving research conduct, data analysis, and personalized medicine (16.67% each). CONCLUSION: ChatGPT has the potential to revolutionize medical research in various ways. However, its accuracy, originality, academic integrity, and ethical issues must be thoroughly discussed and improved before its widespread implementation in clinical research and medical practice.
Ruksakulpiwat S; Kumar A; Ajibade A
10
40234374
Can interactive artificial intelligence be used for patient explanations of nuclear medicine examinations in Japanese?
2,025
Annals of nuclear medicine
OBJECTIVE: This study aimed to evaluate the accuracy and validity of patient explanations about nuclear medicine examinations generated in Japanese using ChatGPT-3.5 and ChatGPT-4. METHODS: ChatGPT was used to generate Japanese language explanations for seven single-photon emission computed tomography examinations (bone scintigraphy, brain perfusion imaging, myocardial perfusion imaging, dopamine transporter scintigraphy [DAT scintigraphy], sentinel lymph node scintigraphy, lung perfusion scintigraphy, and renal function scintigraphy) and (18)F-fluorodeoxyglucose positron emission tomography. Nineteen board-certified nuclear medicine technologists evaluated the accuracy and validity of the responses using a 5-point scale. RESULTS: ChatGPT-4 demonstrated significantly higher accuracy and validity than ChatGPT-3.5, with 77.9% of responses rated as above average or excellent for accuracy, in comparison to 36.3% for ChatGPT-3.5. For validity, 73.1% of ChatGPT-4's responses were rated as above average or excellent, in comparison to 19.6% for ChatGPT-3.5. ChatGPT-4 outperformed ChatGPT-3.5 in all examinations, with notable improvements in bone scintigraphy, lung perfusion scintigraphy, and DAT scintigraphy. CONCLUSION: These findings suggest that ChatGPT-4 can be a valuable tool for providing patient explanations of nuclear medicine examinations. However, its application still requires expert supervision, and further research is needed to address potential risks and security concerns.
Matsutomo N; Fukami M; Yamamoto T
43
38793077
Assessment of the Quality and Readability of Information Provided by ChatGPT in Relation to the Use of Platelet-Rich Plasma Therapy for Osteoarthritis.
2,024
Journal of personalized medicine
Objective: This study aimed to evaluate the quality and readability of information generated by ChatGPT versions 3.5 and 4 concerning platelet-rich plasma (PRP) therapy in the management of knee osteoarthritis (OA), exploring whether large language models (LLMs) could play a significant role in patient education. Design: A total of 23 common patient queries regarding the role of PRP therapy in knee OA management were presented to ChatGPT versions 3.5 and 4. The quality of the responses was assessed using the DISCERN criteria, and readability was evaluated using six established assessment tools. Results: Both ChatGPT versions 3.5 and 4 produced moderate quality information. The quality of information provided by ChatGPT version 4 was significantly better than version 3.5, with mean DISCERN scores of 48.74 and 44.59, respectively. Both models scored highly with respect to response relevance and had a consistent emphasis on the importance of shared decision making. However, both versions produced content significantly above the recommended 8th grade reading level for patient education materials (PEMs), with mean reading grade levels (RGLs) of 17.18 for ChatGPT version 3.5 and 16.36 for ChatGPT version 4, indicating a potential barrier to their utility in patient education. Conclusions: While ChatGPT versions 3.5 and 4 both demonstrated the capability to generate information of moderate quality regarding the role of PRP therapy for knee OA, the readability of the content remains a significant barrier to widespread usage, exceeding the recommended reading levels for PEMs. Although ChatGPT version 4 showed improvements in quality and source citation, future iterations must focus on producing more accessible content to serve as a viable resource in patient education. Collaboration between healthcare providers, patient organizations, and AI developers is crucial to ensure the generation of high quality, peer reviewed, and easily understandable information that supports informed healthcare decisions.
Fahy S; Niemann M; Bohm P; Winkler T; Oehme S
32
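The reading grade levels reported above come from standard readability formulas. For reference, the Flesch Reading Ease and Flesch-Kincaid Grade Level can be computed as sketched below; the regular-expression syllable counter is a rough approximation, whereas dedicated readability tools use more careful rules.

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real tools use dictionaries or better rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0, 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

print(readability("Platelet-rich plasma is injected into the knee joint. It may reduce pain."))
```

A grade level around 16-17, as reported in the record above, corresponds to college-graduate reading difficulty, well above the 8th-grade target for patient education materials.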
39564507
The performance of AI in medical examinations: an exploration of ChatGPT in ultrasound medical education.
2,024
Frontiers in medicine
OBJECTIVE: This study aims to evaluate the accuracy of ChatGPT in the context of China's Intermediate Professional Technical Qualification Examination for Ultrasound Medicine, exploring its potential role in ultrasound medical education. METHODS: A total of 100 questions, comprising 70 single-choice and 30 multiple-choice questions, were selected from the examination's question bank. These questions were categorized into four groups: basic knowledge, relevant clinical knowledge, professional knowledge, and professional practice. ChatGPT versions 3.5 and 4.0 were tested, and accuracy was measured based on the proportion of correct answers for each version. RESULTS: ChatGPT 3.5 achieved an accuracy of 35.7% for single-choice and 30.0% for multiple-choice questions, while version 4.0 improved to 61.4 and 50.0%, respectively. Both versions performed better in basic knowledge questions but showed limitations in professional practice-related questions. Version 4.0 demonstrated significant improvements across all categories compared to version 3.5, but it still underperformed when compared to resident doctors in certain areas. CONCLUSION: While ChatGPT did not meet the passing criteria for the Intermediate Professional Technical Qualification Examination in Ultrasound Medicine, its strong performance in basic medical knowledge suggests potential as a supplementary tool in medical education. However, its limitations in addressing professional practice tasks need to be addressed.
Hong DR; Huang CY
21
40075297
AI-assisted decision-making in mild traumatic brain injury.
2,025
BMC emergency medicine
OBJECTIVE: This study evaluates the potential use of ChatGPT in aiding clinical decision-making for patients with mild traumatic brain injury (TBI) by assessing the quality of responses it generates for clinical care. METHODS: Seventeen mild TBI case scenarios were selected from PubMed Central, and each case was analyzed by GPT-4 (March 21, 2024, version) between April 11 and April 20, 2024. Responses were evaluated by four emergency medicine specialists, who rated the ease of understanding, scientific adequacy, and satisfaction with each response using a 7-point Likert scale. Evaluators were also asked to identify critical errors, defined as mistakes in clinical care or interpretation that could lead to morbidity or mortality. The readability of GPT-4's responses was also assessed using the Flesch Reading Ease and Flesch-Kincaid Grade Level tools. RESULTS: There was no significant difference in the ease of understanding between responses with and without critical errors (p = 0.133). However, responses with critical errors significantly reduced satisfaction and scientific adequacy (p < 0.001). GPT-4 responses were significantly more difficult to read than the case descriptions (p < 0.001). CONCLUSION: GPT-4 demonstrates potential utility in clinical decision-making for mild TBI management, offering scientifically appropriate and comprehensible responses. However, critical errors and readability issues limit its immediate implementation in emergency settings without oversight by experienced medical professionals.
Yigit Y; Kaynak MF; Alkahlout B; Ahmed S; Gunay S; Ozbek AE
0-1
37545428
Dr. ChatGPT: Utilizing Artificial Intelligence in Surgical Education.
2,024
The Cleft palate-craniofacial journal : official publication of the American Cleft Palate-Craniofacial Association
OBJECTIVE: This study sought to explore the unexamined capabilities of ChatGPT in describing the surgical steps of a specialized operation, the Fisher cleft lip repair. DESIGN: A chat log within ChatGPT was created to generate the procedural steps of a cleft lip repair utilizing the Fisher technique. A board certified craniomaxillofacial (CMF) surgeon then wrote the Fisher repair in his own words blinded to the ChatGPT response. Using both responses, a voluntary survey questionnaire was distributed to residents of plastic and reconstructive surgery (PRS), general surgery (GS), internal medicine (IM), and medical students at our institution in a blinded study. SETTING: Authors collected information from residents (PRS, GS, IM) and medical students at one institution. MAIN OUTCOME MEASURES: Primary outcome measures included understanding, preference, and author identification of the procedural prompts. RESULTS: Results show PRS residents were able to detect more inaccuracies of the ChatGPT response as well as prefer the CMF surgeon's prompt in performing the surgery. Residents with less expertise in the procedure not only failed to detect who wrote what procedure, but preferred the ChatGPT response in explaining the concept and chose it to perform the surgery. CONCLUSIONS: In applications to surgical education, ChatGPT was found to be effective in generating easy to understand procedural steps that can be followed by medical personnel of all specialties. However, it does not have expert capabilities to provide the minute detail of measurements and specific anatomy required to perform medical procedures.
Lebhar MS; Velazquez A; Goza S; Hoppe IC
32
38314324
Large-Scale assessment of ChatGPT's performance in benign and malignant bone tumors imaging report diagnosis and its potential for clinical applications.
2,024
Journal of bone oncology
OBJECTIVE: This study was designed to delve into the complexities involved in diagnosing benign and malignant bone tumors and to assess the potential of AI technologies like ChatGPT in improving diagnostic accuracy and efficiency. The study also explores few-shot learning as a method to optimize ChatGPT's performance in specialized medical domains such as benign and malignant bone tumor diagnosis. METHODS: A total of 1366 imaging reports related to benign and malignant bone tumors were collected and diagnosed by 25 experienced physicians. The gold standard of diagnosis was established by combining clinical, imaging, and pathological principles. These reports were then input into the ChatGPT model, which underwent a few-shot learning method, to generate diagnostic results. The diagnostic results of the physicians and the AI model were compared to evaluate the performance of ChatGPT. An experiment was conducted to assess the influence of different radiologists' reporting styles on the model's diagnostic performance. Furthermore, an in-depth analysis of misdiagnosed cases was carried out, categorizing diagnostic errors and exploring possible causes. RESULTS: The diagnostic results generated by ChatGPT showed an accuracy of 0.73, sensitivity of 0.95, and specificity of 0.58. After few-shot learning, ChatGPT demonstrated significant improvement, achieving an accuracy of 0.87, sensitivity of 0.99, and specificity of 0.73, bringing it much closer to the level of physician diagnostics. In an experiment analyzing the influence of the radiologist's reporting style, the model demonstrated higher sensitivity when interpreting reports written by high-level radiologists. ChatGPT misdiagnosed 56 benign cases as malignant. Among these, 35 benign lesions (fibrous dysplasia and osteofibrous dysplasia) were incorrectly identified as metastatic tumors or osteosarcomas; 8 cases of myositis ossificans were wrongly diagnosed as extraosseous osteosarcoma; and 7 cases of giant cell tumor of bone at the end of a long bone were misdiagnosed as osteosarcoma by intermediate doctors. Chondroblastoma was misdiagnosed as a malignant tumor in 6 cases (2 as osteosarcoma and 4 as chondrosarcoma). In this study, 23 osteosarcoma cases were misdiagnosed by ChatGPT as osteomyelitis, and chondrosarcoma was misdiagnosed as fibrous dysplasia or aneurysmal bone cyst in 8 cases. Four cases of spinal chordoma were misdiagnosed as spinal tuberculosis. CONCLUSION: Our findings highlight the potential of ChatGPT in the diagnosis of benign and malignant bone tumors, offering advantages like enhanced efficiency and a reduction in missed diagnoses. However, the necessity of collaborative interactions between physicians and ChatGPT in practical settings was underscored. By examining AI's capacity in benign and malignant bone tumor diagnosis, this study lays the groundwork for future AI advancements in medicine. Additionally, the benefits of few-shot learning in fine-tuning ChatGPT applications in specialized fields were also demonstrated.
Yang F; Yan D; Wang Z
0-1
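The few-shot learning step described in the record above is, in practice, a prompt that prepends a handful of labelled report/diagnosis pairs before the new report to be classified. A minimal sketch follows; the example reports, labels, and instruction wording are invented placeholders rather than study data.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], new_report: str) -> str:
    """examples: (imaging-report text, 'benign'/'malignant') pairs shown to the model first."""
    shots = "\n\n".join(
        f"Report: {report}\nDiagnosis: {label}" for report, label in examples
    )
    return (
        "Classify each bone-tumour imaging report as 'benign' or 'malignant'.\n\n"
        f"{shots}\n\nReport: {new_report}\nDiagnosis:"
    )

# Invented demonstration pairs, not cases from the study
demo_examples = [
    ("Well-defined lucent lesion with sclerotic rim, no periosteal reaction.", "benign"),
    ("Aggressive periosteal reaction with soft-tissue mass and cortical destruction.", "malignant"),
]
print(build_few_shot_prompt(demo_examples, "Ground-glass lesion in the femoral diaphysis."))
```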
38720222
The Revival of Essay-Type Questions in Medical Education: Harnessing Artificial Intelligence and Machine Learning.
2,024
Journal of the College of Physicians and Surgeons--Pakistan : JCPSP
OBJECTIVE: To analyse and compare the assessment and grading of human-written and machine-written formative essays. STUDY DESIGN: Quasi-experimental, qualitative cross-sectional study. Place and Duration of the Study: Department of Science of Dental Materials, Hamdard College of Medicine & Dentistry, Hamdard University, Karachi, from February to April 2023. METHODOLOGY: Ten short formative essays of final-year dental students were manually assessed and graded. These essays were then graded using ChatGPT version 3.5. The chatbot responses and prompts were recorded and matched with manually graded essays. Qualitative analysis of the chatbot responses was then performed. RESULTS: Four different prompts were given to the artificial intelligence (AI) driven platform of ChatGPT to grade the summative essays. These were the chatbot's initial responses without grading, the chatbot's response to grading against criteria, the chatbot's response to criteria-wise grading, and the chatbot's response to questions for the difference in grading. Based on the results, four innovative ways of using AI and machine learning (ML) have been proposed for medical educators: Automated grading, content analysis, plagiarism detection, and formative assessment. ChatGPT provided a comprehensive report with feedback on writing skills, as opposed to manual grading of essays. CONCLUSION: The chatbot's responses were fascinating and thought-provoking. AI and ML technologies can potentially supplement human grading in the assessment of essays. Medical educators need to embrace AI and ML technology to enhance the standards and quality of medical education, particularly when assessing long and short essay-type questions. Further empirical research and evaluation are needed to confirm their effectiveness. KEY WORDS: Machine learning, Artificial intelligence, Essays, ChatGPT, Formative assessment.
Shamim MS; Zaidi SJA; Rehman A
0-1
40273883
Artificial intelligence (AI) performance on pharmacy skills laboratory course assignments.
2,025
Currents in pharmacy teaching & learning
OBJECTIVE: To compare pharmacy student scores to scores of artificial intelligence (AI)-generated results of three common platforms on pharmacy skills laboratory assignments. METHODS: Pharmacy skills laboratory course assignments were completed by four fourth-year pharmacy student investigators with three free AI platforms: ChatGPT, Copilot, and Gemini. Assignments evaluated were calculations, patient case vignettes, in-depth patient cases, drug information questions, and a reflection activity. Course coordinators graded the AI-generated submissions. Descriptive statistics were utilized to summarize AI scores and compare averages to recent pharmacy student cohorts. Interrater reliability for the four student investigators completing the assignments was assessed. RESULTS: Fourteen skills laboratory assignments were completed utilizing three different AI platforms (ChatGPT, Copilot, and Gemini) by four fourth-year student investigators (n = 168 AI-generated submissions). Copilot was unable to complete 12; therefore, 156 AI-generated submissions were graded by the faculty course coordinators for accuracy and scored from 0 to 100 %. Pharmacy student cohort scores were higher than the average AI scores for all of the skills laboratory assignments except for two in-depth patient cases completed with ChatGPT. CONCLUSION: Pharmacy students on average performed better on most skills laboratory assignments than three commonly used artificial intelligence platforms. Teaching students the strengths and weaknesses of utilizing AI in the classroom is essential.
Do V; Donohoe KL; Peddi AN; Carr E; Kim C; Mele V; Patel D; Crawford AN
10
37217092
The promise and peril of using a large language model to obtain clinical information: ChatGPT performs strongly as a fertility counseling tool with limitations.
2,023
Fertility and sterility
OBJECTIVE: To compare the responses of the large language model-based "ChatGPT" to reputable sources when given fertility-related clinical prompts. DESIGN: The "Feb 13" version of ChatGPT by OpenAI was tested against established sources relating to patient-oriented clinical information: 17 "frequently asked questions (FAQs)" about infertility on the Centers for Disease Control (CDC) Website, 2 validated fertility knowledge surveys, the Cardiff Fertility Knowledge Scale and the Fertility and Infertility Treatment Knowledge Score, as well as the American Society for Reproductive Medicine committee opinion "optimizing natural fertility." SETTING: Academic medical center. PATIENT(S): Online AI Chatbot. INTERVENTION(S): Frequently asked questions, survey questions and rephrased summary statements were entered as prompts in the chatbot over a 1-week period in February 2023. MAIN OUTCOME MEASURE(S): For FAQs from CDC: words/response, sentiment analysis polarity and objectivity, total factual statements, rate of statements that were incorrect, referenced a source, or noted the value of consulting providers. FOR FERTILITY KNOWLEDGE SURVEYS: Percentile according to published population data. FOR COMMITTEE OPINION: Whether response to conclusions rephrased as questions identified missing facts. RESULT(S): When administered the CDC's 17 infertility FAQ's, ChatGPT produced responses of similar length (207.8 ChatGPT vs. 181.0 CDC words/response), factual content (8.65 factual statements/response vs. 10.41), sentiment polarity (mean 0.11 vs. 0.11 on a scale of -1 (negative) to 1 (positive)), and subjectivity (mean 0.42 vs. 0.35 on a scale of 0 (objective) to 1 (subjective)). In total, 9 (6.12%) of 147 ChatGPT factual statements were categorized as incorrect, and only 1 (0.68%) statement cited a reference. ChatGPT would have been at the 87th percentile of Bunting's 2013 international cohort for the Cardiff Fertility Knowledge Scale and at the 95th percentile on the basis of Kudesia's 2017 cohort for the Fertility and Infertility Treatment Knowledge Score. ChatGPT reproduced the missing facts for all 7 summary statements from "optimizing natural fertility." CONCLUSION(S): A February 2023 version of "ChatGPT" demonstrates the ability of generative artificial intelligence to produce relevant, meaningful responses to fertility-related clinical queries comparable to established sources. Although performance may improve with medical domain-specific training, limitations such as the inability to reliably cite sources and the unpredictable possibility of fabricated information may limit its clinical use.
Chervenak J; Lieman H; Blanco-Breindel M; Jindal S
0-1
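The polarity (-1 to 1) and subjectivity (0 to 1) scales in the record above match the conventions of common lexicon-based sentiment libraries. A minimal sketch using TextBlob is shown below as an assumed stand-in; the abstract does not name the specific sentiment tool that was used.

```python
from textblob import TextBlob  # lexicon-based sentiment; stand-in for the unspecified tool

def sentiment_scores(text: str) -> tuple[float, float]:
    """Return (polarity in [-1, 1], subjectivity in [0, 1]) for a chatbot response."""
    s = TextBlob(text).sentiment
    return s.polarity, s.subjectivity

# Invented example response, not study data
print(sentiment_scores(
    "Most couples conceive within a year; seeing a fertility specialist early can help."
))
```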
38712405
Exploring the application of CHATGPT in plastic surgery: a comprehensive systematic review.
2,024
JPMA. The Journal of the Pakistan Medical Association
OBJECTIVE: To determine the impact of ChatGPT in plastic surgery research and assess the authenticity of such contributions. METHODS: The study conducted a literature search in September 2023 in databases including PubMed, Google Scholar, SCOPUS, and OVID Medline. The following keywords were used: 'ChatGPT', 'chatbot', 'reconstruction', 'aesthetic' and 'plastic surgery'. Of the initial 131 results, 32 papers were included. English-language articles from November 2022 to July 2023 discussing ChatGPT's role in plastic and aesthetic surgery were included, whereas non-English documents, irrelevant content, and non-academic sources were excluded from the study. RESULTS: The manuscripts included in the systematic review covered a diverse range of publication types, including original research articles, case reports, letters to the editor, and editorials. Among the included studies, there were 9 original research articles, 1 case report, 23 letters to the editor, and 2 editorials. Most publications originated from the United States (18) and Australia (7). Analysis suggested concerns such as inaccuracies, plagiarism, outdated knowledge, and lack of personalized advice. Various authors recommend using ChatGPT as a supplementary tool rather than a replacement for human decision-making in medicine. CONCLUSIONS: ChatGPT shows potential in plastic surgery research, but concerns about inaccuracies and outdated knowledge mean it may provide misleading information; it therefore always requires human input and verification.
Arif F; Safri MK; Shahzad Z; Yasmeen SF; Rahman MF; Shaikh SA
32
39460888
Performance Assessment of GPT 4.0 on the Japanese Medical Licensing Examination.
2,024
Current medical science
OBJECTIVE: To evaluate the accuracy and parsing ability of GPT 4.0 on Japanese medical practitioner qualification examinations in a multidimensional way, investigating the accuracy and comprehensiveness of its responses to medical knowledge questions. METHODS: We evaluated the performance of GPT 4.0 on Japanese Medical Licensing Examination (JMLE) questions (2021-2023). Questions were categorized by difficulty and type, with distinctions between general and clinical parts, as well as between single-choice (MCQ1) and multiple-choice (MCQ2) questions. Difficulty levels were determined on the basis of correct rates provided by the JMLE Preparatory School. The accuracy and quality of the GPT 4.0 responses were analyzed via an improved Global Quality Scale (GQS) score, considering both the chosen options and the accompanying analysis. Descriptive statistics and Pearson Chi-square tests were used to examine performance across exam years, question difficulty, type, and choice. GPT 4.0's ability was evaluated via the GQS, with comparisons made via the Mann-Whitney U or Kruskal-Wallis test. RESULTS: The correct response rate and parsing ability of GPT 4.0 on the JMLE questions reached the qualification level (80.4%). In terms of the accuracy of the GPT 4.0 responses to the JMLE, we found significant differences in accuracy across both difficulty levels and option types. According to the GQS scores for the GPT 4.0 responses to all the JMLE questions, performance varied according to year and choice type. CONCLUSION: GPT 4.0 performs well in providing basic support in medical education and medical research, but it also needs a large amount of medical-related data as training input to improve the accuracy of its medical knowledge output. Further integration of ChatGPT with the medical field could open new opportunities for medicine.
Wang HL; Zhou H; Zhang JY; Xie Y; Yang JM; Xue MD; Yan ZN; Li W; Zhang XB; Wu Y; Chen XL; Liu PR; Lu L; Ye ZW
21
37277096
Appropriateness and Readability of ChatGPT-4-Generated Responses for Surgical Treatment of Retinal Diseases.
2,023
Ophthalmology. Retina
OBJECTIVE: To evaluate the appropriateness and readability of the medical knowledge provided by ChatGPT-4, an artificial intelligence-powered conversational search engine, regarding common vitreoretinal surgeries for retinal detachments (RDs), macular holes (MHs), and epiretinal membranes (ERMs). DESIGN: Retrospective cross-sectional study. SUBJECTS: This study did not involve any human participants. METHODS: We created lists of common questions about the definition, prevalence, visual impact, diagnostic methods, surgical and nonsurgical treatment options, postoperative information, surgery-related complications, and visual prognosis of RD, MH, and ERM, and asked each question 3 times on the online ChatGPT-4 platform. The data for this cross-sectional study were recorded on April 25, 2023. Two independent retina specialists graded the appropriateness of the responses. Readability was assessed using Readable, an online readability tool. MAIN OUTCOME MEASURES: The "appropriateness" and "readability" of the answers generated by ChatGPT-4 bot. RESULTS: Responses were consistently appropriate in 84.6% (33/39), 92% (23/25), and 91.7% (22/24) of the questions related to RD, MH, and ERM, respectively. Answers were inappropriate at least once in 5.1% (2/39), 8% (2/25), and 8.3% (2/24) of the respective questions. The average Flesch Kincaid Grade Level and Flesch Reading Ease Score were 14.1 +/- 2.6 and 32.3 +/- 10.8 for RD, 14 +/- 1.3 and 34.4 +/- 7.7 for MH, and 14.8 +/- 1.3 and 28.1 +/- 7.5 for ERM. These scores indicate that the answers are difficult or very difficult to read for the average lay person and college graduation would be required to understand the material. CONCLUSIONS: Most of the answers provided by ChatGPT-4 were consistently appropriate. However, ChatGPT and other natural language models in their current form are not a source of factual information. Improving the credibility and readability of responses, especially in specialized fields, such as medicine, is a critical focus of research. Patients, physicians, and laypersons should be advised of the limitations of these tools for eye- and health-related counseling. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Momenaei B; Wakabayashi T; Shahlaee A; Durrani AF; Pandit SA; Wang K; Mansour HA; Abishek RM; Xu D; Sridhar J; Yonekawa Y; Kuriyan AE
32
37614494
Diagnostic and Management Applications of ChatGPT in Structured Otolaryngology Clinical Scenarios.
2,023
OTO open
OBJECTIVE: To evaluate the clinical applications and limitations of chat generative pretrained transformer (ChatGPT) in otolaryngology. STUDY DESIGN: Cross-sectional survey. SETTING: Tertiary academic center. METHODS: ChatGPT 4.0 was queried for diagnoses and management plans for 20 physician-written clinical vignettes in otolaryngology. Attending physicians were then asked to rate the difficulty of the clinical vignettes and agreement with the differential diagnoses and management plans of ChatGPT responses on a 5-point Likert scale. Summary statistics were calculated. Univariate ordinal regression was then performed between vignette difficulty and quality of the diagnoses and management plans. RESULTS: Eleven attending physicians completed the survey (61% response rate). Overall, vignettes were rated as very easy to neutral difficulty (range of median score: 1.00-4.00; overall median 2.00). There was a high agreement with the differential diagnosis provided by ChatGPT (range of median score: 3.00-5.00; overall median: 5.00). There was also high agreement with treatment plans (range of median score: 3.00-5.00; overall median: 5.00). There was no association between vignette difficulty and agreement with differential diagnosis or treatment. Lower diagnosis scores had greater odds of having lower treatment scores. CONCLUSION: Generative artificial intelligence models like ChatGPT are being rapidly adopted in medicine. Performance with curated, easy-to-moderate difficulty otolaryngology scenarios indicate high agreement with physicians for diagnosis and management. However, a decreased quality in diagnosis is associated with decreased quality in management. Further research is necessary on ChatGPT's ability to handle unstructured clinical information.
Qu RW; Qureshi U; Petersen G; Lee SC
32
37691497
Use of artificial intelligence large language models as a clinical tool in rehabilitation medicine: a comparative test case.
2,023
Journal of rehabilitation medicine
OBJECTIVE: To explore the potential use of artificial intelligence language models in formulating rehabilitation prescriptions and International Classification of Functioning, Disability and Health (ICF) codes. DESIGN: Comparative study based on a single case report compared to standard answers from a textbook. SUBJECTS: A stroke case from a textbook. METHODS: Chat Generative Pre-Trained Transformer-4 (ChatGPT-4) was used to generate comprehensive medical and rehabilitation prescription information and ICF codes pertaining to the stroke case. This information was compared with the standard answers from the textbook, and 2 licensed Physical Medicine and Rehabilitation (PMR) clinicians reviewed the artificial intelligence recommendations for further discussion. RESULTS: ChatGPT-4 effectively formulated rehabilitation prescriptions and ICF codes for a typical stroke case, together with a rationale to support its recommendations. This information was generated in seconds. Compared with the standard answers, the large language model generated broader and more general prescriptions in terms of medical problems and management plans, rehabilitation problems and management plans, as well as rehabilitation goals. It also demonstrated the ability to propose specific approaches for each rehabilitation therapy. The language model made an error regarding the ICF category for the stroke case, but no mistakes were identified in the ICF codes assigned. CONCLUSION: This test case suggests that artificial intelligence language models have potential use in facilitating clinical practice and education in the field of rehabilitation medicine.
Zhang L; Tashiro S; Mukaino M; Yamada S
10
38578616
Can large language models provide secondary reliable opinion on treatment options for dermatological diseases?
2,024
Journal of the American Medical Informatics Association : JAMIA
OBJECTIVE: To investigate the consistency and reliability of medication recommendations provided by ChatGPT for common dermatological conditions, highlighting the potential for ChatGPT to offer second opinions in patient treatment while also delineating possible limitations. MATERIALS AND METHODS: In this mixed-methods study, we used survey questions in April 2023 for drug recommendations generated by ChatGPT with data from secondary databases, that is, Taiwan's National Health Insurance Research Database and an US medical center database, and validated by dermatologists. The methodology included preprocessing queries, executing them multiple times, and evaluating ChatGPT responses against the databases and dermatologists. The ChatGPT-generated responses were analyzed statistically in a disease-drug matrix, considering disease-medication associations (Q-value) and expert evaluation. RESULTS: ChatGPT achieved a high 98.87% dermatologist approval rate for common dermatological medication recommendations. We evaluated its drug suggestions using the Q-value, showing that human expert validation agreement surpassed Q-value cutoff-based agreement. Varying cutoff values for disease-medication associations, a cutoff of 3 achieved 95.14% accurate prescriptions, 5 yielded 85.42%, and 10 resulted in 72.92%. While ChatGPT offered accurate drug advice, it occasionally included incorrect ATC codes, leading to issues like incorrect drug use and type, nonexistent codes, repeated errors, and incomplete medication codes. CONCLUSION: ChatGPT provides medication recommendations as a second opinion in dermatology treatment, but its reliability and comprehensiveness need refinement for greater accuracy. In the future, integrating a medical domain-specific knowledge base for training and ongoing optimization will enhance the precision of ChatGPT's results.
Iqbal U; Lee LT; Rahmanti AR; Celi LA; Li YJ
0-1
39316174
Accuracy and consistency of ChatGPT-3.5 and -4 in providing differential diagnoses in oral and maxillofacial diseases: a comparative diagnostic performance analysis.
2,024
Clinical oral investigations
OBJECTIVE: To investigate the performance of ChatGPT in the differential diagnosis of oral and maxillofacial diseases. METHODS: Thirty-seven oral and maxillofacial lesion findings were presented to ChatGPT-3.5 and -4, 18 dental surgeons trained in oral medicine/pathology (OMP), 23 general dental surgeons (DDS), and 16 dental students (DS) for differential diagnosis. Additionally, a group of 15 general dentists was asked to describe 11 cases to both ChatGPT versions. The ChatGPT-3.5, -4, and human primary and alternative diagnoses were rated by 2 independent investigators on a 4-point Likert scale. The consistency of ChatGPT-3.5 and -4 was evaluated with regenerated inputs. RESULTS: Moderate consistency of outputs was observed for ChatGPT-3.5 and -4 in providing primary (kappa = 0.532 and kappa = 0.533, respectively) and alternative (kappa = 0.337 and kappa = 0.367, respectively) hypotheses. The mean rate of correct diagnoses was 64.86% for ChatGPT-3.5, 80.18% for ChatGPT-4, 86.64% for OMP, 24.32% for DDS, and 16.67% for DS. The mean correct primary hypothesis rates were 45.95% for ChatGPT-3.5, 61.80% for ChatGPT-4, 82.28% for OMP, 22.72% for DDS, and 15.77% for DS. The mean correct diagnosis rate for ChatGPT-3.5 with standard descriptions was 64.86%, compared to 45.95% with participants' descriptions. For ChatGPT-4, the mean was 80.18% with standard descriptions and 61.80% with participant descriptions. CONCLUSION: ChatGPT-4 demonstrates an accuracy comparable to that of specialists in providing differential diagnoses for oral and maxillofacial diseases. The consistency of ChatGPT in providing diagnostic hypotheses for oral disease cases is moderate, representing a weakness for clinical application. The quality of case documentation and descriptions has a significant impact on the performance of ChatGPT. CLINICAL RELEVANCE: General dentists, dental students, and specialists in oral medicine and pathology may benefit from ChatGPT-4 as an auxiliary method to define differential diagnoses for oral and maxillofacial lesions, but its accuracy is dependent on precise case descriptions.
Tomo S; Lechien JR; Bueno HS; Cantieri-Debortoli DF; Simonato LE
32
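The consistency values in the record above are reported as kappa statistics. As a reference for how such agreement between two runs (or raters) over categorical diagnoses is computed, a short sketch of Cohen's kappa follows; the example labels are invented for illustration.

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Invented example: first vs. regenerated diagnoses for six cases
run1 = ["lichen planus", "leukoplakia", "candidiasis", "OSCC", "lichen planus", "mucocele"]
run2 = ["lichen planus", "OSCC", "candidiasis", "OSCC", "leukoplakia", "mucocele"]
print(round(cohens_kappa(run1, run2), 3))
```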
39799741
The role of artificial intelligence in gynecologic and obstetric emergencies.
2,025
European journal of obstetrics, gynecology, and reproductive biology
OBJECTIVE: To investigate the potential of artificial intelligence (AI) in emergency medicine, focusing on its utility in triaging and managing acute gynecologic and obstetric emergencies. METHODS AND MATERIALS: This feasibility study assessed Chat-GPT's performance in triaging and recommending management interventions for gynecologic and obstetric emergencies, using ten fictive cases. Five common conditions were modeled for each specialty. Chat-GPT was tasked with proposing triage classifications and providing immediate management recommendations. Human experts independently reviewed each case, classified triage categories, and proposed management. Following this, experts evaluated Chat-GPT's recommendations, rating the AI's responses on accuracy and clinical applicability. RESULTS: Chat-GPT's recommendations demonstrated high concordance with human evaluators. Chat-GPT's triage classifications matched those of human experts in most cases, though minor discrepancies in urgency ratings were observed. The AIs suggestions were mostly rated as "very good" to "excellent." While Chat-GPT consistently delivered appropriate responses, some human evaluators noted slight differences in perceived urgency. CONCLUSIONS: This study highlights Chat-GPT's potential as a clinical support tool in emergency medicine. Chat-GPT provided structured, evidence-based recommendations comparable to those of experienced clinicians, especially for high-stakes gynecologic and obstetric emergencies. Although encouraging, these results highlight the value of utilizing AI in addition to human knowledge, as variations in urgency ratings and management nuances highlight the necessity of human supervision in crucial decision-making.
Psilopatis I; Heindl F; Cupisti S; Fischer U; Kohlmann V; Schneider M; Bader S; Krueckel A; Emons J
10
39420246
Exploring the potential of artificial intelligence models for triage in the emergency department.
2,024
Postgraduate medicine
OBJECTIVE: To perform a comparative analysis of the three-level triage protocol conducted by triage nurses and emergency medicine doctors with the use of ChatGPT, Gemini, and Pi, which are recognized artificial intelligence (AI) models widely used in daily life. MATERIALS AND METHODS: The study was prospectively conducted with patients presenting to the emergency department of a tertiary care hospital from 1 April 2024 to 7 April 2024. Among the patients who presented to the emergency department over this period, data pertaining to their primary complaints, arterial blood pressure values, heart rates, peripheral oxygen saturation values measured by pulse oximetry, body temperature values, age, and gender characteristics were analyzed. The triage categories determined by triage nurses, the abovementioned AI chatbots, and emergency medicine doctors were compared. RESULTS: The study included 500 patients, of whom 23.8% were categorized identically by all triage evaluators. Compared to the triage conducted by emergency medicine doctors, triage nurses overtriaged 6.4% of the patients and undertriaged 3.1% of the yellow-coded patients and 3.4% of the red-coded patients. Of the AI chatbots, ChatGPT exhibited the closest triage approximation to that of emergency medicine doctors; however, its undertriage rates were 26.5% for yellow-coded patients and 42.6% for red-coded patients. CONCLUSION: The undertriage rates observed in the AI models were considerably high. Hence, it does not yet seem appropriate to rely solely on these AI models for triage purposes in the emergency department.
Tortum F; Kasali K
10
40290213
Comparison of performance of artificial intelligence tools in answering emergency medicine question pool: ChatGPT 4.0, Google Gemini and Microsoft Copilot.
2,025
Pakistan journal of medical sciences
OBJECTIVE: Using artificial intelligence tools that work with different software architectures for both clinical and educational purposes in the medical field has recently been a subject of considerable interest. In this study, we compared the answers given by three different artificial intelligence chatbots to an Emergency Medicine question pool drawn from the questions asked in the Turkish National Medical Specialization Exam. We also investigated the effects on the answers given by classifying the questions in terms of content and form and examining the question sentences. METHODS: The emergency medicine-related questions from the Medical Specialization Exams held between 2015 and 2020 were recorded. The questions were posed to the artificial intelligence models ChatGPT-4, Gemini, and Copilot. The length of the questions, the question type, and the topics of the wrong answers were recorded. RESULTS: The most successful chatbot in terms of total score was Microsoft Copilot (7.8% error margin), while the least successful was Google Gemini (22.9% error margin) (p<0.001). Notably, all chatbots had their highest error margins in questions about trauma and surgical approaches and made mistakes in burns and pediatrics. The increase in error rates for questions containing the root "probability" also showed that question style affected the answers given. CONCLUSIONS: Although chatbots show promising success in determining the correct answer, we think that examinees should not treat chatbots as a primary source for the exam, but rather as a useful auxiliary tool to support their learning.
Aksoy I; Arslan MK
21
38487300
Can ChatGPT-3.5 Pass a Medical Exam? A Systematic Review of ChatGPT's Performance in Academic Testing.
2,024
Journal of medical education and curricular development
OBJECTIVE: We therefore aim to conduct a systematic review to assess the academic potential of ChatGPT-3.5, along with its strengths and limitations when taking medical exams. METHOD: Following PRISMA guidelines, a systematic search of the literature was performed using the electronic databases PUBMED/MEDLINE, Google Scholar, and Cochrane. Articles from their inception until April 4, 2023, were queried. A formal narrative analysis was conducted by systematically arranging similarities and differences between individual findings together. RESULTS: After rigorous screening, 12 articles underwent this review. All the selected papers assessed the academic performance of ChatGPT-3.5. One study compared the performance of ChatGPT-3.5 with the performance of ChatGPT-4 when taking a medical exam. Overall, ChatGPT performed well in 4 tests, performed at an average level in 4 tests, and performed poorly in 4 tests. ChatGPT's performance was directly proportional to the level of the questions' difficulty but did not vary notably with whether the questions were binary, descriptive, or MCQ-based. ChatGPT's explanation, reasoning, memory, and accuracy were remarkably good, whereas it failed to understand image-based questions and lacked insight and critical thinking. CONCLUSION: ChatGPT-3.5 performed satisfactorily in the exams it took as an examinee. However, there is a need for future related studies to fully explore the potential of ChatGPT in medical education.
Sumbal A; Sumbal R; Amir A
21
38818395
Global trends and hotspots of ChatGPT in medical research: a bibliometric and visualized study.
2,024
Frontiers in medicine
OBJECTIVE: With the rapid advancement of Chat Generative Pre-Trained Transformer (ChatGPT) in medical research, our study aimed to identify global trends and focal points in this domain. METHOD: All publications on ChatGPT in medical research were retrieved from the Web of Science Core Collection (WoSCC) by Clarivate Analytics from January 1, 2023, to January 31, 2024. The research trends and focal points were visualized and analyzed using VOSviewer and CiteSpace. RESULTS: A total of 1,239 publications were collected and analyzed. The USA contributed the largest number of publications (458, 37.145%) with the highest total citation frequencies (2,461) and the largest H-index. Harvard University contributed the highest number of publications (33) among all full-time institutions. The Cureus Journal of Medical Science published the most ChatGPT-related research (127, 10.30%). Additionally, Wiwanitkit V contributed the most publications in this field (20). "Artificial Intelligence (AI) and Machine Learning (ML)," "Education and Training," "Healthcare Applications," and "Data Analysis and Technology" emerged as the primary clusters of keywords. These areas are predicted to remain hotspots in future research in this field. CONCLUSION: Overall, this study signifies the interdisciplinary nature of ChatGPT research in medicine, encompassing AI and ML technologies, education and training initiatives, diverse healthcare applications, and data analysis and technology advancements. These areas are expected to remain at the forefront of future research, driving continued innovation and progress in the field of ChatGPT in medical research.
Liu L; Qu S; Zhao H; Kong L; Xie Z; Jiang Z; Zou P
10
38073946
Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study.
2,023
Cureus
Objectives The aim of this study is to evaluate the accuracy and completeness of the answers given by Chat Generative Pre-trained Transformer (ChatGPT) (OpenAI OpCo, LLC, San Francisco, CA), to the most frequently asked questions on different topics in the field of periodontology. Methods The 10 most frequently asked questions by patients about seven different topics (periodontal diseases, peri-implant diseases, tooth sensitivity, gingival recessions, halitosis, dental implants, and periodontal surgery) in periodontology were created by ChatGPT. To obtain responses, a set of 70 questions was submitted to ChatGPT, with an allocation of 10 questions per subject. The responses that were documented were assessed using two distinct Likert scales by professionals specializing in the subject of periodontology. The accuracy of the responses was rated on a Likert scale ranging from one to six, while the completeness of the responses was rated on a scale ranging from one to three. Results The median accuracy score for all responses was six, while the completeness score was two. The mean scores for accuracy and completeness were 5.50 +/- 0.23 and 2.34 +/- 0.24, respectively. It was observed that ChatGPT's responses to the most frequently asked questions by patients for information purposes in periodontology were at least "nearly completely correct" in terms of accuracy and "adequate" in terms of completeness. There was a statistically significant difference between subjects in terms of accuracy and completeness (P<0.05). The highest and lowest accuracy scores were peri-implant diseases and gingival recession, respectively, while the highest and lowest completeness scores were gingival recession and dental implants, respectively. Conclusions The utilization of large language models has become increasingly prevalent, extending its applicability to patients within the healthcare domain. While ChatGPT may not offer absolute precision and comprehensive results without expert supervision, it is apparent that those within the field of periodontology can utilize it as an informational resource, albeit acknowledging the potential for inaccuracies.
Babayigit O; Tastan Eroglu Z; Ozkan Sen D; Ucan Yarkac F
32
38995551
Assessing ChatGPT's theoretical knowledge and prescriptive accuracy in bacterial infections: a comparative study with infectious diseases residents and specialists.
2,024
Infection
OBJECTIVES: Advancements in Artificial Intelligence (AI) have made platforms like ChatGPT increasingly relevant in medicine. This study assesses ChatGPT's utility in addressing bacterial infection-related questions and antibiogram-based clinical cases. METHODS: This study was a collaborative effort involving infectious disease (ID) specialists and residents. A group of experts formulated six true/false questions, six open-ended questions, and six clinical cases with antibiograms for four types of infections (endocarditis, pneumonia, intra-abdominal infections, and bloodstream infection), for a total of 96 questions. The questions were submitted to four senior residents and four specialists in ID and inputted into ChatGPT-4 and a trained version of ChatGPT-4. A total of 720 responses were obtained and reviewed by a blinded panel of experts in antibiotic treatments. They evaluated the responses for accuracy and completeness, the ability to identify correct resistance mechanisms from antibiograms, and the appropriateness of antibiotic prescriptions. RESULTS: No significant difference was noted among the four groups for true/false questions, with approximately 70% correct answers. The trained ChatGPT-4 and ChatGPT-4 offered more accurate and complete answers to the open-ended questions than both the residents and specialists. Regarding the clinical cases, we observed lower accuracy from ChatGPT-4 in recognizing the correct resistance mechanisms. ChatGPT-4 tended not to prescribe newer antibiotics like cefiderocol or imipenem/cilastatin/relebactam, favoring less recommended options like colistin. Both the trained ChatGPT-4 and ChatGPT-4 recommended longer-than-necessary treatment periods (p-value = 0.022). CONCLUSIONS: This study highlights ChatGPT's capabilities and limitations in medical decision-making, specifically regarding bacterial infections and antibiogram analysis. While ChatGPT demonstrated proficiency in answering theoretical questions, it did not consistently align with expert decisions in clinical case management. Despite these limitations, the potential of ChatGPT as a supportive tool in ID education and preliminary analysis is evident. However, it should not replace expert consultation, especially in complex clinical decision-making.
De Vito A; Geremia N; Marino A; Bavaro DF; Caruana G; Meschiari M; Colpani A; Mazzitelli M; Scaglione V; Venanzi Rullo E; Fiore V; Fois M; Campanella E; Pistara E; Faltoni M; Nunnari G; Cattelan A; Mussini C; Bartoletti M; Vaira LA; Madeddu G
43
40097790
Assessing the performance of an artificial intelligence based chatbot in the differential diagnosis of oral mucosal lesions: clinical validation study.
2,025
Clinical oral investigations
OBJECTIVES: Artificial intelligence (AI) is becoming more popular in medicine. The current study aims to investigate, primarily, if an AI-based chatbot, such as ChatGPT, could be a valid tool for assisting in establishing a differential diagnosis of oral mucosal lesions. METHODS: Data was gathered from patients who were referred to our clinic for an oral mucosal biopsy by one oral medicine specialist. Clinical description, differential diagnoses, and final histopathologic diagnoses were retrospectively extracted from patient records. The lesion description was inputted into ChatGPT version 4.0 under a uniform script to generate three differential diagnoses. ChatGPT and an oral medicine specialist's differential diagnosis were compared to the final histopathologic diagnosis. RESULTS: 100 oral soft tissue lesions were evaluated. A statistically significant correlation was found between the ability of the Chatbot and the Specialist to accurately diagnose the cases (P < 0.001). ChatGPT demonstrated remarkable sensitivity for diagnosing urgent cases, as none of the malignant lesions were missed by the chatbot. At the same time, the specificity of the specialist was higher in cases of malignant lesion diagnosis (p < 0.05). The chatbot performance was reliable in two different events (p < 0.01). CONCLUSION: ChatGPT-4 has shown the ability to pinpoint suspicious malignant lesions and suggest an adequate differential diagnosis for soft tissue lesions, in a consistent and repetitive manner. CLINICAL RELEVANCE: This study serves as a primary insight into the role of AI chatbots, as assisting tools in oral medicine and assesses their clinical capabilities.
Grinberg N; Whitefield S; Kleinman S; Ianculovici C; Wasserman G; Peleg O
32
39588809
Artificial intelligence and pain medicine education: Benefits and pitfalls for the medical trainee.
2,025
Pain practice : the official journal of World Institute of Pain
OBJECTIVES: Artificial intelligence (AI) represents an exciting and evolving technology that is increasingly being utilized across pain medicine. Large language models (LLMs) are one type of AI that has become particularly popular. Currently, there is a paucity of literature analyzing the impact that AI may have on trainee education. As such, we sought to assess the benefits and pitfalls that AI may have on pain medicine trainee education. Given the rapidly increasing popularity of LLMs, we particularly assessed how these LLMs may promote and hinder trainee education through a pilot quality improvement project. MATERIALS AND METHODS: A comprehensive search of the existing literature regarding AI within medicine was performed to identify its potential benefits and pitfalls within pain medicine. The pilot project was approved by the UPMC Quality Improvement Review Committee (#4547). Three of the most commonly utilized LLMs at the initiation of this pilot study - ChatGPT Plus, Google Bard, and Bing AI - were asked a series of multiple-choice questions to evaluate their ability to assist in learner education within pain medicine. RESULTS: Potential benefits of AI within pain medicine trainee education include ease of use, imaging interpretation, procedural/surgical skills training, learner assessment, personalized learning experiences, ability to summarize vast amounts of knowledge, and preparation for the future of pain medicine. Potential pitfalls include discrepancies between AI devices and associated cost-differences, correlating radiographic findings to clinical significance, interpersonal/communication skills, educational disparities, bias/plagiarism/cheating concerns, lack of incorporation of private domain literature, and absence of training specifically for pain medicine education. Regarding the quality improvement project, ChatGPT Plus answered the highest percentage of all questions correctly (16/17). The lowest correctness scores by LLMs were in answering first-order questions, with Google Bard and Bing AI answering 4/9 and 3/9 first-order questions correctly, respectively. Qualitative evaluation of these LLM-provided explanations in answering second- and third-order questions revealed some reasoning inconsistencies (e.g., providing flawed information in selecting the correct answer). CONCLUSIONS: AI represents a continually evolving and promising modality to assist trainees pursuing a career in pain medicine. Still, limitations currently exist that may hinder its independent use in this setting. Future research exploring how AI may overcome these challenges is thus required. Until then, AI should be utilized as a supplementary tool within pain medicine trainee education and with caution.
Glicksman M; Wang S; Yellapragada S; Robinson C; Orhurhu V; Emerick T
0-1
38940388
Artificial intelligence chatbot vs pathology faculty and residents: Real-world clinical questions from a genitourinary treatment planning conference.
2,024
American journal of clinical pathology
OBJECTIVES: Artificial intelligence (AI)-based chatbots have demonstrated accuracy in a variety of fields, including medicine, but research has yet to substantiate their accuracy and clinical relevance. We evaluated an AI chatbot's answers to questions posed during a treatment planning conference. METHODS: Pathology residents, pathology faculty, and an AI chatbot (OpenAI ChatGPT [January 30, 2023, release]) answered a questionnaire curated from a genitourinary subspecialty treatment planning conference. Results were evaluated by 2 blinded adjudicators: a clinician expert and a pathology expert. Scores were based on accuracy and clinical relevance. RESULTS: Overall, faculty scored highest (4.75), followed by the AI chatbot (4.10), research-prepared residents (3.50), and unprepared residents (2.87). The AI chatbot scored statistically significantly better than unprepared residents (P = .03) but not statistically significantly different from research-prepared residents (P = .33) or faculty (P = .30). Residents did not statistically significantly improve after research (P = .39), and faculty performed statistically significantly better than both resident categories (unprepared, P < .01; research prepared, P = .01). CONCLUSIONS: The AI chatbot gave answers to medical questions that were comparable in accuracy and clinical relevance to pathology faculty, suggesting promise for further development. Serious concerns remain, however, that without the ability to provide support with references, AI will face legitimate scrutiny as to how it can be integrated into medical decision-making.
Luo MX; Lyle A; Bennett P; Albertson D; Sirohi D; Maughan BL; McMurtry V; Mahlow J
43
39843286
Efficacy and empathy of AI chatbots in answering frequently asked questions on oral oncology.
2,025
Oral surgery, oral medicine, oral pathology and oral radiology
OBJECTIVES: Artificial intelligence chatbots have demonstrated feasibility and efficacy in improving health outcomes. In this study, responses from 5 different publicly available AI chatbots (Bing, GPT-3.5, GPT-4, Google Bard, and Claude) to frequently asked questions related to oral cancer were evaluated. STUDY DESIGN: Relevant patient-related frequently asked questions about oral cancer were obtained from two main sources: public health websites and social media platforms. From these sources, 20 oral cancer-related questions were selected. Four board-certified specialists in oral medicine/oral and maxillofacial pathology assessed the answers using a modified version of the global quality score on a 5-point Likert scale. Additionally, readability was measured using the Flesch-Kincaid Grade Level and Flesch Reading Ease scores. Responses were also assessed for empathy using a validated 5-point scale. RESULTS: Specialists ranked GPT-4 highest, with a total score of 17.3 +/- 1.5, while Bing received the lowest at 14.9 +/- 2.2. Bard had the highest Flesch Reading Ease score of 62 +/- 7, while ChatGPT-3.5 and Claude received the lowest scores (more challenging readability). GPT-4 and Bard emerged as the superior chatbots in terms of empathy and accurate citations on patient-related frequently asked questions pertaining to oral cancer. GPT-4 had the highest overall quality, whereas Bing showed the lowest level of quality, empathy, and accuracy for citations. CONCLUSION: GPT-4 demonstrated the highest quality responses to frequently asked questions pertaining to oral cancer. Although impressive in their ability to guide patients on common oral cancer topics, most chatbots did not perform well when assessed for empathy or citation accuracy.
Rokhshad R; Khoury ZH; Mohammad-Rahimi H; Motie P; Price JB; Tavares T; Jessri M; Bavarian R; Sciubba JJ; Sultan AS
43
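Note: several records in this set, including the one above, score chatbot output with the Flesch Reading Ease and Flesch-Kincaid Grade Level measures. The following is a minimal Python sketch of the standard formulas; the vowel-group syllable counter is a rough assumption, and dedicated readability packages use more careful heuristics.

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real readability tools use dictionaries or better rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)
    # Standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas.
    reading_ease = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)
    grade_level = 0.39 * (n_words / sentences) + 11.8 * (n_syll / n_words) - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores("Oral cancer is a growth of abnormal cells in the mouth. "
                            "See a dentist if a sore does not heal within two weeks.")
print(f"Flesch Reading Ease: {ease:.1f}, Flesch-Kincaid Grade Level: {grade:.1f}")
```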
37529789
Performance of emergency triage prediction of an open access natural language processing based chatbot application (ChatGPT): A preliminary, scenario-based cross-sectional study.
2,023
Turkish journal of emergency medicine
OBJECTIVES: Artificial intelligence companies have been increasing their initiatives recently to improve the results of chatbots, which are software programs that can converse with a human in natural language. The role of chatbots in health care is deemed worthy of research. OpenAI's ChatGPT is a supervised and empowered machine learning-based chatbot. The aim of this study was to determine the performance of ChatGPT in emergency medicine (EM) triage prediction. METHODS: This was a preliminary, cross-sectional study conducted with case scenarios generated by the researchers based on the emergency severity index (ESI) handbook v4 cases. Two independent EM specialists who were experts in the ESI triage scale determined the triage categories for each case. A third independent EM specialist was consulted as arbiter, if necessary. Consensus results for each case scenario were assumed as the reference triage category. Subsequently, each case scenario was queried with ChatGPT and the answer was recorded as the index triage category. Inconsistent classifications between the ChatGPT and reference category were defined as over-triage (false positive) or under-triage (false negative). RESULTS: Fifty case scenarios were assessed in the study. Reliability analysis showed a fair agreement between EM specialists and ChatGPT (Cohen's Kappa: 0.341). Eleven cases (22%) were over-triaged and 9 cases (18%) were under-triaged by ChatGPT. In 9 cases (18%), ChatGPT reported two consecutive triage categories, one of which matched the expert consensus. It had an overall sensitivity of 57.1% (95% confidence interval [CI]: 34-78.2), specificity of 34.5% (95% CI: 17.9-54.3), positive predictive value (PPV) of 38.7% (95% CI: 21.8-57.8), negative predictive value (NPV) of 52.6% (95% CI: 28.9-75.6), and an F1 score of 0.461. In high acuity cases (ESI-1 and ESI-2), ChatGPT showed a sensitivity of 76.2% (95% CI: 52.8-91.8), specificity of 93.1% (95% CI: 77.2-99.2), PPV of 88.9% (95% CI: 65.3-98.6), NPV of 84.4% (95% CI: 67.2-94.7), and an F1 score of 0.821. The receiver operating characteristic curve showed an area under the curve of 0.846 (95% CI: 0.724-0.969, P < 0.001) for high acuity cases. CONCLUSION: The performance of ChatGPT was best when predicting high acuity cases (ESI-1 and ESI-2). It may be useful when determining the cases requiring critical care. When trained with more medical knowledge, ChatGPT may be more accurate for other triage category predictions.
Sarbay I; Berikol GB; Ozturan IU
10
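Note: the triage study above reports Cohen's kappa plus sensitivity, specificity, PPV, NPV, and F1 for agreement between ChatGPT and expert consensus. Here is a minimal Python sketch of those calculations from binary high-acuity labels; the example labels are made up and do not reproduce the study's 50 scenarios.

```python
# Hypothetical reference vs. chatbot triage labels (True = high acuity, ESI-1/2).
reference = [True, True, False, True, False, False, True, False, True, False]
predicted = [True, False, False, True, True, False, True, False, True, False]

tp = sum(r and p for r, p in zip(reference, predicted))
tn = sum((not r) and (not p) for r, p in zip(reference, predicted))
fp = sum((not r) and p for r, p in zip(reference, predicted))   # over-triage
fn = sum(r and (not p) for r, p in zip(reference, predicted))   # under-triage

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)

# Cohen's kappa: observed agreement corrected for agreement expected by chance.
n = len(reference)
p_observed = (tp + tn) / n
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} PPV={ppv:.2f} "
      f"NPV={npv:.2f} F1={f1:.2f} kappa={kappa:.2f}")
```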
38895543
ChatGPT-3.5 passes Poland's medical final examination-Is it possible for ChatGPT to become a doctor in Poland?
2,024
SAGE open medicine
OBJECTIVES: ChatGPT is an advanced chatbot based on Large Language Model that has the ability to answer questions. Undoubtedly, ChatGPT is capable of transforming communication, education, and customer support; however, can it play the role of a doctor? In Poland, prior to obtaining a medical diploma, candidates must successfully pass the Medical Final Examination. METHODS: The purpose of this research was to determine how well ChatGPT performed on the Polish Medical Final Examination, which passing is required to become a doctor in Poland (an exam is considered passed if at least 56% of the tasks are answered correctly). A total of 2138 categorized Medical Final Examination questions (from 11 examination sessions held between 2013-2015 and 2021-2023) were presented to ChatGPT-3.5 from 19 to 26 May 2023. For further analysis, the questions were divided into quintiles based on difficulty and duration, as well as question types (simple A-type or complex K-type). The answers provided by ChatGPT were compared to the official answer key, reviewed for any changes resulting from the advancement of medical knowledge. RESULTS: ChatGPT correctly answered 53.4%-64.9% of questions. In 8 out of 11 exam sessions, ChatGPT achieved the scores required to successfully pass the examination (60%). The correlation between the efficacy of artificial intelligence and the level of complexity, difficulty, and length of a question was found to be negative. AI outperformed humans in one category: psychiatry (77.18% vs. 70.25%, p = 0.081). CONCLUSIONS: The performance of artificial intelligence is deemed satisfactory; however, it is observed to be markedly inferior to that of human graduates in the majority of instances. Despite its potential utility in many medical areas, ChatGPT is constrained by its inherent limitations that prevent it from entirely supplanting human expertise and knowledge.
Suwala S; Szulc P; Guzowski C; Kaminska B; Dorobiala J; Wojciechowska K; Berska M; Kubicka O; Kosturkiewicz O; Kosztulska B; Rajewska A; Junik R
21
37083166
Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI).
2,023
Clinical chemistry and laboratory medicine
OBJECTIVES: ChatGPT, a tool based on natural language processing (NLP), is on everyone's mind, and several potential applications in healthcare have already been proposed. However, since the ability of this tool to interpret laboratory test results has not yet been tested, the EFLM Working group on Artificial Intelligence (WG-AI) has set itself the task of closing this gap with a systematic approach. METHODS: WG-AI members generated 10 simulated laboratory reports of common parameters, which were then passed to ChatGPT for interpretation, according to reference intervals (RI) and units, using an optimized prompt. The results were subsequently evaluated independently by all WG-AI members with respect to relevance, correctness, helpfulness and safety. RESULTS: ChatGPT recognized all laboratory tests; it could detect whether they deviated from the RI and gave a test-by-test as well as an overall interpretation. The interpretations were rather superficial, not always correct, and only in some cases judged coherently. The magnitude of the deviation from the RI seldom played a role in the interpretation of laboratory tests, and artificial intelligence (AI) did not make any meaningful suggestion regarding follow-up diagnostics or further procedures in general. CONCLUSIONS: ChatGPT in its current form, not being specifically trained on medical data or laboratory data in particular, may only be considered a tool capable of interpreting a laboratory report on a test-by-test basis at best, but not of interpreting an overall diagnostic picture. Future generations of similar AIs with medical ground truth training data may well revolutionize current processes in healthcare, although such an implementation is not yet ready.
Cadamuro J; Cabitza F; Debeljak Z; De Bruyne S; Frans G; Perez SM; Ozdemir H; Tolios A; Carobene A; Padoan A
10
37992892
Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions.
2,024
Annales pharmaceutiques francaises
OBJECTIVES: Clinical pharmacists rely on different scientific references to ensure appropriate, safe, and cost-effective drug use. Tools based on artificial intelligence (AI) such as ChatGPT (Generative Pre-trained Transformer) could offer valuable support. The objective of this study was to assess ChatGPT's capacity to correctly respond to clinical pharmacy questions asked by healthcare professionals in our university hospital. MATERIAL AND METHODS: ChatGPT's capacity to respond correctly to the last 100 consecutive questions recorded in our clinical pharmacy database was assessed. Questions were copied from our FileMaker Pro database and pasted into the ChatGPT (March 14 version) online platform. The generated answers were then copied verbatim into an Excel file. Two blinded clinical pharmacists reviewed all the questions and the answers given by the software. In case of disagreements, a third blinded pharmacist intervened to decide. RESULTS: Documentation-related issues (n=36) and drug administration mode questions (n=30) were recorded most frequently. Among 69 applicable questions, the rate of correct answers varied from 30 to 57.1% depending on question type, with an overall rate of 44.9%. Regarding inappropriate answers (n=38), 20 were incorrect, 18 gave no answer, and 8 were incomplete, with 8 answers belonging to 2 different categories. In no case did ChatGPT provide a better answer than the pharmacists. A high risk of patient harm was likely in 26% (n=13) of the cases and risk was judged low for 28% (n=14) of the cases. In all high-risk cases, actions could have been initiated based on the provided information. The answers of ChatGPT varied over time when entered repeatedly and only three out of 12 answers were identical, showing no reproducibility to low reproducibility. CONCLUSION: In a real-world sample of 50 drug-related questions, ChatGPT answered the majority of questions wrong or partly wrong. The use of artificial intelligence applications in drug information is not possible as long as barriers like wrong content, missing references and reproducibility remain.
Fournier A; Fallet C; Sadeghipour F; Perrottet N
10
39661726
Medical language matters: impact of clinical summary composition on a generative artificial intelligence's diagnostic accuracy.
2,025
Diagnosis (Berlin, Germany)
OBJECTIVES: Evaluate the impact of problem representation (PR) characteristics on Generative Artificial Intelligence (GAI) diagnostic accuracy. METHODS: Internal medicine attendings and residents from two academic medical centers were given a clinical vignette and instructed to write a PR. Deductive content analysis described the characteristics comprising each PR. Individual PRs were input into ChatGPT-4 (OpenAI, September 2023), which was prompted to generate a ranked three-item differential. The ranked differential and the top-ranked diagnosis were scored on a 3-part scale, ranging from incorrect to partially correct to correct. Logistic regression evaluated individual PR characteristics' impact on ChatGPT accuracy. RESULTS: For a three-item differential, accuracy was associated with including fewer comorbidities (OR 0.57, p=0.010), fewer past historical items (OR 0.60, p=0.019), and more physical examination items (OR 1.66, p=0.015). For ChatGPT's ability to rank the true diagnosis as the single-best diagnosis, utilizing temporal semantic qualifiers, more semantic qualifiers overall, and adhering to a typical 3-part PR format all correlated with diagnostic accuracy: OR 3.447, p=0.046; OR 1.300, p=0.005; OR 3.577, p=0.020, respectively. CONCLUSIONS: Several distinct PR factors improved ChatGPT diagnostic accuracy. These factors have previously been associated with expertise in creating PR. Future studies should explore how clinical input qualities affect GAI diagnostic accuracy prospectively.
Skittle C; Bonifacino E; McQuade CN
10
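Note: the study above models diagnostic accuracy as a function of problem-representation features with logistic regression and reports odds ratios with p-values. Below is a small Python sketch of that kind of analysis using statsmodels; the feature names and data are fabricated for illustration only.

```python
# Illustrative only: fit a logistic regression of ChatGPT correctness (0/1) on
# problem-representation features and report odds ratios and p-values.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "n_comorbidities": rng.integers(0, 5, size=80),       # hypothetical feature
    "n_semantic_qualifiers": rng.integers(0, 8, size=80),  # hypothetical feature
    "correct": rng.integers(0, 2, size=80),                 # 1 = correct diagnosis
})

X = sm.add_constant(df[["n_comorbidities", "n_semantic_qualifiers"]].astype(float))
model = sm.Logit(df["correct"], X).fit(disp=0)

odds_ratios = np.exp(model.params)  # exponentiated coefficients are odds ratios
print(pd.DataFrame({"OR": odds_ratios, "p": model.pvalues}).round(3))
```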
38804035
Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum.
2,024
Clinical chemistry and laboratory medicine
OBJECTIVES: Laboratory medical reports are often not intuitively comprehensible to non-medical professionals. Given their recent advancements, easier accessibility and remarkable performance on medical licensing exams, patients are therefore likely to turn to artificial intelligence-based chatbots to understand their laboratory results. However, empirical studies assessing the efficacy of these chatbots in responding to real-life patient queries regarding laboratory medicine are scarce. METHODS: Thus, this investigation included 100 patient inquiries from an online health forum, specifically addressing Complete Blood Count interpretation. The aim was to evaluate the proficiency of three artificial intelligence-based chatbots (ChatGPT, Gemini and Le Chat) against the online responses from certified physicians. RESULTS: The findings revealed that the chatbots' interpretations of laboratory results were inferior to those from online medical professionals. While the chatbots exhibited a higher degree of empathetic communication, they frequently produced erroneous or overly generalized responses to complex patient questions. The appropriateness of chatbot responses ranged from 51 to 64 %, with 22 to 33 % of responses overestimating patient conditions. A notable positive aspect was the chatbots' consistent inclusion of disclaimers regarding its non-medical nature and recommendations to seek professional medical advice. CONCLUSIONS: The chatbots' interpretations of laboratory results from real patient queries highlight a dangerous dichotomy - a perceived trustworthiness potentially obscuring factual inaccuracies. Given the growing inclination towards self-diagnosis using AI platforms, further research and improvement of these chatbots is imperative to increase patients' awareness and avoid future burdens on the healthcare system.
Meyer A; Soleman A; Riese J; Streichert T
43
40113208
Comparing large language models for antibiotic prescribing in different clinical scenarios: which performs better?
2,025
Clinical microbiology and infection : the official publication of the European Society of Clinical Microbiology and Infectious Diseases
OBJECTIVES: Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios. METHODS: Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering 10 infection types. A standardized prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymized and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy. RESULTS: A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Perplexity Pro (90.0%, 54/60) and Claude 3.5 Sonnet (91.7%, 55/60). In treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to over-prescribe duration. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms. DISCUSSION: There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilization.
De Vito A; Geremia N; Bavaro DF; Seo SK; Laracy J; Mazzitelli M; Marino A; Maraolo AE; Russo A; Colpani A; Bartoletti M; Cattelan AM; Mussini C; Parisi SG; Vaira LA; Nunnari G; Madeddu G
10
39393839
Investigating the capabilities of advanced large language models in generating patient instructions and patient educational material.
2,024
European journal of hospital pharmacy : science and practice
OBJECTIVES: Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM). METHODS: A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs. RESULTS: LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts. CONCLUSION: LLMs show promise in generating patient instructions and PEM. However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.
Sridharan K; Sivaramakrishnan G
10
38046089
Brain versus bot: Distinguishing letters of recommendation authored by humans compared with artificial intelligence.
2,023
AEM education and training
OBJECTIVES: Letters of recommendation (LORs) are essential within academic medicine, affecting a number of important decisions regarding advancement, yet these letters take significant amounts of time and labor to prepare. The use of generative artificial intelligence (AI) tools, such as ChatGPT, are gaining popularity for a variety of academic writing tasks and offer an innovative solution to relieve the burden of letter writing. It is yet to be determined if ChatGPT could aid in crafting LORs, particularly in high-stakes contexts like faculty promotion. To determine the feasibility of this process and whether there is a significant difference between AI and human-authored letters, we conducted a study aimed at determining whether academic physicians can distinguish between the two. METHODS: A quasi-experimental study was conducted using a single-blind design. Academic physicians with experience in reviewing LORs were presented with LORs for promotion to associate professor, written by either humans or AI. Participants reviewed LORs and identified the authorship. Statistical analysis was performed to determine accuracy in distinguishing between human and AI-authored LORs. Additionally, the perceived quality and persuasiveness of the LORs were compared based on suspected and actual authorship. RESULTS: A total of 32 participants completed letter review. The mean accuracy of distinguishing between human- versus AI-authored LORs was 59.4%. The reviewer's certainty and time spent deliberating did not significantly impact accuracy. LORs suspected to be human-authored were rated more favorably in terms of quality and persuasiveness. A difference in gender-biased language was observed in our letters: human-authored letters contained significantly more female-associated words, while the majority of AI-authored letters tended to use more male-associated words. CONCLUSIONS: Participants were unable to reliably differentiate between human- and AI-authored LORs for promotion. AI may be able to generate LORs and relieve the burden of letter writing for academicians. New strategies, policies, and guidelines are needed to balance the benefits of AI while preserving integrity and fairness in academic promotion decisions.
Preiksaitis C; Nash C; Gottlieb M; Chan TM; Alvarez A; Landry A
10
38857454
RefAI: a GPT-powered retrieval-augmented generative tool for biomedical literature recommendation and summarization.
2,024
Journal of the American Medical Informatics Association : JAMIA
OBJECTIVES: Precise literature recommendation and summarization are crucial for biomedical professionals. While the latest iteration of generative pretrained transformer (GPT) incorporates 2 distinct modes-real-time search and pretrained model utilization-it encounters challenges in dealing with these tasks. Specifically, the real-time search can pinpoint some relevant articles but occasionally provides fabricated papers, whereas the pretrained model excels in generating well-structured summaries but struggles to cite specific sources. In response, this study introduces RefAI, an innovative retrieval-augmented generative tool designed to synergize the strengths of large language models (LLMs) while overcoming their limitations. MATERIALS AND METHODS: RefAI utilized PubMed for systematic literature retrieval, employed a novel multivariable algorithm for article recommendation, and leveraged GPT-4 turbo for summarization. Ten queries under 2 prevalent topics ("cancer immunotherapy and target therapy" and "LLMs in medicine") were chosen as use cases and 3 established counterparts (ChatGPT-4, ScholarAI, and Gemini) as our baselines. The evaluation was conducted by 10 domain experts through standard statistical analyses for performance comparison. RESULTS: The overall performance of RefAI surpassed that of the baselines across 5 evaluated dimensions-relevance and quality for literature recommendation, accuracy, comprehensiveness, and reference integration for summarization, with the majority exhibiting statistically significant improvements (P-values <.05). DISCUSSION: RefAI demonstrated substantial improvements in literature recommendation and summarization over existing tools, addressing issues like fabricated papers, metadata inaccuracies, restricted recommendations, and poor reference integration. CONCLUSION: By augmenting LLM with external resources and a novel ranking algorithm, RefAI is uniquely capable of recommending high-quality literature and generating well-structured summaries, holding the potential to meet the critical needs of biomedical professionals in navigating and synthesizing vast amounts of scientific literature.
Li Y; Zhao J; Li M; Dang Y; Yu E; Li J; Sun Z; Hussein U; Wen J; Abdelhameed AM; Mai J; Li S; Yu Y; Hu X; Yang D; Feng J; Li Z; He J; Tao W; Duan T; Lou Y; Li F; Tao C
10
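Note: the tool above pairs PubMed retrieval with GPT-4 turbo summarization. The sketch below is a generic retrieval-augmented pattern in that spirit, not RefAI's actual pipeline; the NCBI E-utilities endpoints are real, but the query, model name, and prompt are assumptions.

```python
# Generic retrieval-augmented sketch: fetch PubMed abstracts via NCBI E-utilities,
# then ask an LLM to summarize them with PMID citations.
import requests
from openai import OpenAI

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_abstracts(query: str, n: int = 3) -> str:
    ids = requests.get(f"{EUTILS}/esearch.fcgi",
                       params={"db": "pubmed", "term": query, "retmax": n,
                               "retmode": "json"}).json()["esearchresult"]["idlist"]
    return requests.get(f"{EUTILS}/efetch.fcgi",
                        params={"db": "pubmed", "id": ",".join(ids),
                                "rettype": "abstract", "retmode": "text"}).text

context = fetch_abstracts("large language models in medicine")
client = OpenAI()
summary = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user",
               "content": "Summarize these abstracts, citing each PMID:\n\n" + context}],
)
print(summary.choices[0].message.content)
```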
37945282
Heart-to-heart with ChatGPT: the impact of patients consulting AI for cardiovascular health advice.
2,023
Open heart
OBJECTIVES: The advent of conversational artificial intelligence (AI) systems employing large language models such as ChatGPT has sparked public, professional and academic debates on the capabilities of such technologies. This mixed-methods study sets out to review and systematically explore the capabilities of ChatGPT to adequately provide health advice to patients when prompted regarding four topics from the field of cardiovascular diseases. METHODS: As of 30 May 2023, 528 items on PubMed contained the term ChatGPT in their title and/or abstract, with 258 being classified as journal articles and included in our thematic state-of-the-art review. For the experimental part, we systematically developed and assessed 123 prompts across the four topics based on three classes of users and two languages. Medical and communications experts scored ChatGPT's responses according to the 4Cs of language model evaluation proposed in this article: correct, concise, comprehensive and comprehensible. RESULTS: The articles reviewed were fairly evenly distributed across discussing how ChatGPT could be used for medical publishing, in clinical practice and for education of medical personnel and/or patients. Quantitatively and qualitatively assessing the capability of ChatGPT on the 123 prompts demonstrated that, while the responses generally received above-average scores, they occupy a spectrum from the concise and correct via the absurd to what only can be described as hazardously incorrect and incomplete. Prompts formulated at higher levels of health literacy generally yielded higher-quality answers. Counterintuitively, responses in a lower-resource language were often of higher quality. CONCLUSIONS: The results emphasise the relationship between prompt and response quality and hint at potentially concerning futures in personalised medicine. The widespread use of large language models for health advice might amplify existing health inequalities and will increase the pressure on healthcare systems by providing easy access to many seemingly likely differential diagnoses and recommendations for seeing a doctor for even harmless ailments.
Lautrup AD; Hyrup T; Schneider-Kamp A; Dahl M; Lindholt JS; Schneider-Kamp P
0-1
39045939
Comparative performance of artificial intelligence models in physical medicine and rehabilitation board-level questions.
2,024
Revista da Associacao Medica Brasileira (1992)
OBJECTIVES: The aim of this study was to compare the performance of artificial intelligence models ChatGPT-3.5, ChatGPT-4, and Google Bard in answering Physical Medicine and Rehabilitation board-style questions, assessing their capabilities in medical education and potential clinical applications. METHODS: A comparative cross-sectional study was conducted using the PMR100, an example question set for the American Board of Physical Medicine and Rehabilitation Part I exam, focusing on artificial intelligence models' ability to answer and categorize questions by difficulty. The study evaluated the artificial intelligence models and analyzed them for accuracy, reliability, and alignment with difficulty levels determined by physiatrists. RESULTS: ChatGPT-4 led with a 74% success rate, followed by Bard at 66%, and ChatGPT-3.5 at 63.8%. Bard showed remarkable answer consistency, altering responses in only 1% of cases. The difficulty assessment by ChatGPT models closely matched that of physiatrists. The study highlighted nuanced differences in artificial intelligence models' performance across various Physical Medicine and Rehabilitation subfields. CONCLUSION: The study illustrates the potential of artificial intelligence in medical education and clinical settings, with ChatGPT-4 showing a slight edge in performance. It emphasizes the importance of artificial intelligence as a supportive tool for physiatrists, despite the need for careful oversight of artificial intelligence-generated responses to ensure patient safety.
Menekseoglu AK; Is EE
32
38584026
Evaluating the Potential of AI Chatbots in Treatment Decision-making for Acquired Bilateral Vocal Fold Paralysis in Adults.
2,024
Journal of voice : official journal of the Voice Foundation
OBJECTIVES: The development of artificial intelligence-powered language models, such as Chatbot Generative Pre-trained Transformer (ChatGPT) or Large Language Model Meta AI (Llama), is emerging in medicine. Patients and practitioners have full access to chatbots that may provide medical information. The aim of this study was to explore the performance and accuracy of ChatGPT and Llama in treatment decision-making for bilateral vocal fold paralysis (BVFP). METHODS: Data of 20 clinical cases, treated between 2018 and 2023, were retrospectively collected from four tertiary laryngology centers in Europe. The cases were defined as the most common or most challenging scenarios regarding BVFP treatment. The treatment proposals were discussed in their local multidisciplinary teams (MDT). Each case was presented to ChatGPT-4.0 and Llama Chat-2.0, and potential treatment strategies were requested. The Artificial Intelligence Performance Instrument (AIPI) treatment subscore was used to compare both Chatbots' performances to MDT treatment proposal. RESULTS: Most common etiology of BVFP was thyroid surgery. A form of partial arytenoidectomy with or without posterior transverse cordotomy was the MDT proposal for most cases. The accuracy of both Chatbots was very low regarding their treatment proposals, with a maximum AIPI treatment score in 5% of the cases. In most cases even harmful assertions were made, including the suggestion of vocal fold medialisation to treat patients with stridor and dyspnea. ChatGPT-4.0 performed significantly better in suggesting the correct treatment as part of the treatment proposal (50%) compared to Llama Chat-2.0 (15%). CONCLUSION: ChatGPT and Llama are judged as inaccurate in proposing correct treatment for BVFP. ChatGPT significantly outperformed Llama. Treatment decision-making for a complex condition such as BVFP is clearly beyond the Chatbot's knowledge expertise. This study highlights the complexity and heterogeneity of BVFP treatment, and the need for further guidelines dedicated to the management of BVFP.
Dronkers EAC; Geneid A; Al Yaghchi C; Lechien JR
32
40271313
Precision Oncology in Non-small Cell Lung Cancer: A Comparative Study of Contextualized ChatGPT Models.
2,025
Cureus
OBJECTIVES: The growing adoption of Large Language Models (LLMs) in medicine has raised important questions about their potential utility for clinical decision support within oncology. This study aimed to evaluate the effects of various contextualization methods on ChatGPT's ability to provide National Comprehensive Cancer Network (NCCN) guideline-aligned recommendations on managing non-small cell lung cancer (NSCLC). METHODOLOGY: GPT-4o, base GPT-4, and GPT-4 models contextualized with prompts and PDF documents were asked to identify preferred chemotherapies for twelve advanced lung cancers given molecular profiles derived from the 2024 NCCN Clinical Practice Guidelines in Oncology for NSCLC. GPT responses were subsequently compared to NCCN guidelines using readability scores and qualitative reviewer assessments of (1) recommendation of specific targeted therapy, (2) agreement with NCCN-guideline-preferred therapies, (3) recommendation of guideline non-concordant therapies, and (4) provision of supplementary information. RESULTS: The PDF+Prompt contextualized model demonstrated elevated agreement scores of 23/24 versus 17/24 for GPT-4 (P = 0.040) and 18/24 for GPT-4o (P = 0.089). No PDF+Prompt model responses contained guideline non-concordant therapies in contrast to 4/12 responses for GPT4 (P = 0.093) and 5/12 responses for GPT4o (P = 0.037). Comparison of response readability between the PDF+Prompt model and GPT-4 or GPT-4o showed a lower mean word count (both P < 0.001), Simple Measure of Gobbledygook (SMOG) score (both P < 0.001), and Gunning Fog readability score (P < 0.001 for GPT-4, P = 0.002 for GPT-4o). Prompting alone did not significantly improve agreement or reduce the rate of non-concordant therapy recommendations. CONCLUSIONS: The performance gains observed following contextualization suggest that broader applications of LLMs in oncology may exist than current literature indicates. This study provides proof of concept for the use of contextualized GPT models in oncology and showcases their accessibility. Future studies validating this application within additional cancer types or real-life patient encounters could provide an important bridge to eventual adoption.
Brown EDL; Shah HA; Donnelly BM; Ward M; Vojnic M; D'Amico RS
10
38830507
Enhancing Diagnostic Support for Chiari Malformation and Syringomyelia: A Comparative Study of Contextualized ChatGPT Models.
2,024
World neurosurgery
OBJECTIVES: The rapidly increasing adoption of large language models in medicine has drawn attention to potential applications within the field of neurosurgery. This study evaluates the effects of various contextualization methods on ChatGPT's ability to provide expert-consensus aligned recommendations on the diagnosis and management of Chiari Malformation and Syringomyelia. METHODS: Native GPT4 and GPT4 models contextualized using various strategies were asked questions revised from the 2022 Chiari and Syringomyelia Consortium International Consensus Document. ChatGPT-provided responses were then compared to consensus statements using reviewer assessments of 1) responding to the prompt, 2) agreement of ChatGPT response with consensus statements, 3) recommendation to consult with a medical professional, and 4) presence of supplementary information. Flesch-Kincaid, SMOG, word count, and Gunning-Fog readability scores were calculated for each model using the quanteda package in R. RESULTS: Relative to GPT4, all contextualized GPTs demonstrated increased agreement with consensus statements. PDF+Prompting and Prompting models provided the most elevated agreement scores of 19 of 24 and 23 of 24, respectively, versus 9 of 24 for GPT4 (p=.021, p=.001). A trend toward improved readability was observed when comparing contextualized models at large to ChatGPT4, with significant decreases in average word count (180.7 vs 382.3, p<.001) and Flesch-Kincaid Reading Ease score (11.7 vs 17.2, p=.033). CONCLUSIONS: The enhanced performance observed in response to ChatGPT4 contextualization suggests broader applications of large language models in neurosurgery than what the current literature indicates. This study provides proof of concept for the use of contextualized GPT models in neurosurgical contexts and showcases the easy accessibility of improved model performance.
Brown EDL; Ward M; Maity A; Mittler MA; Larry Lo SF; D'Amico RS
10
37494894
Diagnostic and Management Performance of ChatGPT in Obstetrics and Gynecology.
2,023
Gynecologic and obstetric investigation
OBJECTIVES: The use of artificial intelligence (AI) in clinical patient management and medical education has been advancing over time. ChatGPT was developed and trained recently, using a large quantity of textual data from the internet. Medical science is expected to be transformed by its use. The present study was conducted to evaluate the diagnostic and management performance of the ChatGPT AI model in obstetrics and gynecology. DESIGN: A cross-sectional study was conducted. PARTICIPANTS/MATERIALS, SETTING, METHODS: This study was conducted in Iran in March 2023. Medical histories and examination results of 30 cases were determined in six areas of obstetrics and gynecology. The cases were presented to a gynecologist and ChatGPT for diagnosis and management. Answers from the gynecologist and ChatGPT were compared, and the diagnostic and management performance of ChatGPT were determined. RESULTS: Ninety percent (27 of 30) of the cases in obstetrics and gynecology were correctly handled by ChatGPT. Its responses were eloquent, informed, and free of a significant number of errors or misinformation. Even when the answers provided by ChatGPT were incorrect, the responses contained a logical explanation about the case as well as information provided in the question stem. LIMITATIONS: The data used in this study were taken from the electronic book and may reflect bias in the diagnosis of ChatGPT. CONCLUSIONS: This is the first evaluation of ChatGPT's performance in diagnosis and management in the field of obstetrics and gynecology. It appears that ChatGPT has potential applications in the practice of medicine and is (currently) free and simple to use. However, several ethical considerations and limitations such as bias, validity, copyright infringement, and plagiarism need to be addressed in future studies.
Allahqoli L; Ghiasvand MM; Mazidimoradi A; Salehiniya H; Alkatout I
0-1
39372551
Based on Medicine, The Now and Future of Large Language Models.
2,024
Cellular and molecular bioengineering
OBJECTIVES: This review explores the potential applications of large language models (LLMs) such as ChatGPT, GPT-3.5, and GPT-4 in the medical field, aiming to encourage their prudent use, provide professional support, and develop accessible medical AI tools that adhere to healthcare standards. METHODS: This paper examines the impact of technologies such as OpenAI's Generative Pre-trained Transformers (GPT) series, including GPT-3.5 and GPT-4, and other large language models (LLMs) in medical education, scientific research, clinical practice, and nursing. Specifically, it includes supporting curriculum design, acting as personalized learning assistants, creating standardized simulated patient scenarios in education; assisting with writing papers, data analysis, and optimizing experimental designs in scientific research; aiding in medical imaging analysis, decision-making, patient education, and communication in clinical practice; and reducing repetitive tasks, promoting personalized care and self-care, providing psychological support, and enhancing management efficiency in nursing. RESULTS: LLMs, including ChatGPT, have demonstrated significant potential and effectiveness in the aforementioned areas, yet their deployment in healthcare settings is fraught with ethical complexities, potential lack of empathy, and risks of biased responses. CONCLUSION: Despite these challenges, significant medical advancements can be expected through the proper use of LLMs and appropriate policy guidance. Future research should focus on overcoming these barriers to ensure the effective and ethical application of LLMs in the medical field.
Su Z; Tang G; Huang R; Qiao Y; Zhang Z; Dai X
10
39610079
Application value of generative artificial intelligence in the field of stomatology.
2,024
Hua xi kou qiang yi xue za zhi = Huaxi kouqiang yixue zazhi = West China journal of stomatology
OBJECTIVES: This study aims to compare and analyze three types of generative artificial intelligence (GAI) and explore their application value and existing problems in the field of stomatology in the Chinese context. METHODS: A total of 36 questions were designed, covering all the professional areas of stomatology. The questions encompassed various aspects including medical records, professional knowledge, and translation and editing. These questions were submitted to ChatGPT4-turbo, Gemini (2024.2) and ERNIE Bot 4.0. After obtaining the answers, a blinded evaluation was conducted by three experienced oral medicine physicians using a four-point Likert scale. The value of GAI in various application scenarios was evaluated. RESULTS: Gemini scored 45, ERNIE Bot scored 38, and ChatGPT scored 33 for clinical documentation and image production. For research assistance, Gemini achieved 45, ERNIE Bot had 39, and ChatGPT scored 35. Teaching assistance capabilities were rated at 54 for ERNIE Bot, 50 for Gemini, and 48 for ChatGPT. In patient consultation and guidance, Gemini scored 78, ERNIE Bot scored 59, and ChatGPT scored 48. Overall, the total scores were 218, 190, and 164 for Gemini, ERNIE Bot, and ChatGPT, respectively. Among GAI applications, the top scoring categories were article translation and polishing (26), patient-doctor communication documentation (23), and popular science content creation (23). The lowest scoring categories were literature search and reporting (13) and image generation (12). CONCLUSIONS: In the Chinese context, the application value of GAI is the highest for Gemini, followed by ERNIE Bot and ChatGPT. GAI shows significant value in translation, patient-doctor communication, and popular science writing. However, its value in literature search, reporting, and image generation remains limited.
Ye Y; Zeng W; Chen J; Liu L
0-1
39325705
Is ChatGPT 3.5 smarter than Otolaryngology trainees? A comparison study of board style exam questions.
2,024
PloS one
OBJECTIVES: This study compares the performance of the artificial intelligence (AI) platform Chat Generative Pre-Trained Transformer (ChatGPT) to Otolaryngology trainees on board-style exam questions. METHODS: We administered a set of 30 Otolaryngology board-style questions to medical students (MS) and Otolaryngology residents (OR). 31 MSs and 17 ORs completed the questionnaire. The same test was administered to ChatGPT version 3.5, five times. Comparisons of performance were achieved using a one-way ANOVA with Tukey Post Hoc test, along with a regression analysis to explore the relationship between education level and performance. RESULTS: The average scores increased each year from MS1 to PGY5. A one-way ANOVA revealed that ChatGPT outperformed trainee years MS1, MS2, and MS3 (p = <0.001, 0.003, and 0.019, respectively). PGY4 and PGY5 otolaryngology residents outperformed ChatGPT (p = 0.033 and 0.002, respectively). For years MS4, PGY1, PGY2, and PGY3 there was no statistical difference between trainee scores and ChatGPT (p = .104, .996, and 1.000, respectively). CONCLUSION: ChatGPT can outperform lower-level medical trainees on Otolaryngology board-style exam but still lacks the ability to outperform higher-level trainees. These questions primarily test rote memorization of medical facts; in contrast, the art of practicing medicine is predicated on the synthesis of complex presentations of disease and multilayered application of knowledge of the healing process. Given that upper-level trainees outperform ChatGPT, it is unlikely that ChatGPT, in its current form will provide significant clinical utility over an Otolaryngologist.
Patel J; Robinson P; Illing E; Anthony B
0-1
38481520
Automated HEART score determination via ChatGPT: Honing a framework for iterative prompt development.
2,024
Journal of the American College of Emergency Physicians open
OBJECTIVES: This study presents a design framework to enhance the accuracy by which large language models (LLMs), like ChatGPT can extract insights from clinical notes. We highlight this framework via prompt refinement for the automated determination of HEART (History, ECG, Age, Risk factors, Troponin risk algorithm) scores in chest pain evaluation. METHODS: We developed a pipeline for LLM prompt testing, employing stochastic repeat testing and quantifying response errors relative to physician assessment. We evaluated the pipeline for automated HEART score determination across a limited set of 24 synthetic clinical notes representing four simulated patients. To assess whether iterative prompt design could improve the LLMs' ability to extract complex clinical concepts and apply rule-based logic to translate them to HEART subscores, we monitored diagnostic performance during prompt iteration. RESULTS: Validation included three iterative rounds of prompt improvement for three HEART subscores with 25 repeat trials totaling 1200 queries each for GPT-3.5 and GPT-4. For both LLM models, from initial to final prompt design, there was a decrease in the rate of responses with erroneous, non-numerical subscore answers. Accuracy of numerical responses for HEART subscores (discrete 0-2 point scale) improved for GPT-4 from the initial to final prompt iteration, decreasing from a mean error of 0.16-0.10 (95% confidence interval: 0.07-0.14) points. CONCLUSION: We established a framework for iterative prompt design in the clinical space. Although the results indicate potential for integrating LLMs in structured clinical note analysis, translation to real, large-scale clinical data with appropriate data privacy safeguards is needed.
Safranek CW; Huang T; Wright DS; Wright CX; Socrates V; Sangal RB; Iscoe M; Chartash D; Taylor RA
0-1
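Note: the HEART score study above quantifies erroneous non-numerical replies and the mean error of numeric subscores against physician assessment. The following is a minimal Python sketch of that scoring step on illustrative replies; the parsing rule (first digit from 0 to 2) and the example replies are assumptions, not the study's actual method.

```python
# Given repeated free-text replies for one HEART subscore, flag non-numeric answers
# and compute the mean absolute error of the numeric ones against a physician reference.
import re
import statistics

replies = ["Risk factor subscore: 2", "I would assign 1 point.", "Unable to determine.",
           "2", "The subscore is 2 out of 2."]
physician_score = 2  # hypothetical reference value

parsed = []
for reply in replies:
    match = re.search(r"\b[0-2]\b", reply)  # subscores lie on a discrete 0-2 scale
    parsed.append(int(match.group()) if match else None)

numeric = [s for s in parsed if s is not None]
non_numeric_rate = 1 - len(numeric) / len(parsed)
mean_abs_error = statistics.mean(abs(s - physician_score) for s in numeric)

print(f"non-numeric rate: {non_numeric_rate:.2f}, mean absolute error: {mean_abs_error:.2f}")
```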
38034065
Assessment of Artificial Intelligence Performance on the Otolaryngology Residency In-Service Exam.
2,023
OTO open
OBJECTIVES: This study seeks to determine the potential use and reliability of a large language learning model for answering questions in a sub-specialized area of medicine, specifically practice exam questions in otolaryngology-head and neck surgery and assess its current efficacy for surgical trainees and learners. STUDY DESIGN AND SETTING: All available questions from a public, paid-access question bank were manually input through ChatGPT. METHODS: Outputs from ChatGPT were compared against the benchmark of the answers and explanations from the question bank. Questions were assessed in 2 domains: accuracy and comprehensiveness of explanations. RESULTS: Overall, our study demonstrates a ChatGPT correct answer rate of 53% and a correct explanation rate of 54%. We find that with increasing difficulty of questions there is a decreasing rate of answer and explanation accuracy. CONCLUSION: Currently, artificial intelligence-driven learning platforms are not robust enough to be reliable medical education resources to assist learners in sub-specialty specific patient decision making scenarios.
Mahajan AP; Shabet CL; Smith J; Rudy SF; Kupfer RA; Bohm LA
21
39658118
Assessing the accuracy and efficiency of Chat GPT-4 Omni (GPT-4o) in biomedical statistics: Comparative study with traditional tools.
2,024
Saudi medical journal
OBJECTIVES: To assess the accuracy of ChatGPT-4 Omni (GPT-4o) in biomedical statistics. The recently introduced ChatGPT-4 Omni (GPT-4o) has the potential to analyze sophisticated and extensive datasets, challenging the expertise of statisticians using traditional statistical tools for data analysis. METHODS: This study was performed in the Department of Physiology, College of Medicine, King Saud University, Riyadh, Saudi Arabia, in May 2024. Three datasets in a raw Excel file format were imported into Statistical Package for the Social Sciences (SPSS) version 29 for data analysis. Based on this analysis, a script of 9 questions was prepared to command GPT-4 Omni, which was then used to analyze all 3 datasets. The score and the time were recorded for each result and verified after being compared to the original analysis results performed on SPSS. RESULTS: GPT-4 Omni scored 73 (85.88%) out of 85 points for all 3 datasets. All datasets took a total of 38.43 minutes to be fully analyzed. Individually, Omni scored 21/25 (84%) for the small dataset in 487.4 seconds, 20/25 (80%) for the middle dataset in 747.02 seconds and 32/35 (91.42%) for the large dataset in 1071 seconds. GPT-4 Omni produced accurate graphs and charts. CONCLUSION: ChatGPT-4 Omni scored above 80% on all 3 statistical datasets in a short period. GPT-4 Omni also produced accurate graphs and charts as commanded; however, it required explicit commands with clear instructions to avoid errors and omission of results and to achieve appropriate results in biomedical data analysis.
Meo AS; Shaikh N; Meo SA
10
37794249
ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports.
2,024
European radiology
OBJECTIVES: To assess the quality of simplified radiology reports generated with the large language model (LLM) ChatGPT and to discuss challenges and chances of ChatGPT-like LLMs for medical text simplification. METHODS: In this exploratory case study, a radiologist created three fictitious radiology reports which we simplified by prompting ChatGPT with "Explain this medical report to a child using simple language." In a questionnaire, we tasked 15 radiologists to rate the quality of the simplified radiology reports with respect to their factual correctness, completeness, and potential harm for patients. We used Likert scale analysis and inductive free-text categorization to assess the quality of the simplified reports. RESULTS: Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed relevant medical information, and potentially harmful passages were reported. CONCLUSION: While we see a need for further adaption to the medical field, the initial insights of this study indicate a tremendous potential in using LLMs like ChatGPT to improve patient-centered care in radiology and other medical domains. CLINICAL RELEVANCE STATEMENT: Patients have started to use ChatGPT to simplify and explain their medical reports, which is expected to affect patient-doctor interaction. This phenomenon raises several opportunities and challenges for clinical routine. KEY POINTS: * Patients have started to use ChatGPT to simplify their medical reports, but their quality was unknown. * In a questionnaire, most participating radiologists overall asserted good quality to radiology reports simplified with ChatGPT. However, they also highlighted a notable presence of errors, potentially leading patients to draw harmful conclusions. * Large language models such as ChatGPT have vast potential to enhance patient-centered care in radiology and other medical domains. To realize this potential while minimizing harm, they need supervision by medical experts and adaption to the medical field.
Jeblick K; Schachtner B; Dexl J; Mittermeier A; Stuber AT; Topalis J; Weber T; Wesp P; Sabel BO; Ricke J; Ingrisch M
10
37698703
Validity and reliability of an instrument evaluating the performance of intelligent chatbot: the Artificial Intelligence Performance Instrument (AIPI).
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
OBJECTIVES: To evaluate the reliability and validity of the Artificial Intelligence Performance Instrument (AIPI). METHODS: Medical records of patients consulting in otolaryngology were evaluated by physicians and ChatGPT for differential diagnosis, management, and treatment. The ChatGPT performance was rated twice using AIPI within a 7-day period to assess test-retest reliability. Internal consistency was evaluated using Cronbach's alpha. Internal validity was evaluated by comparing the AIPI scores of the clinical cases rated by ChatGPT and 2 blinded practitioners. Convergent validity was measured by comparing the AIPI score with a modified version of the Ottawa Clinical Assessment Tool (OCAT). Interrater reliability was assessed using Kendall's tau. RESULTS: Forty-five patients completed the evaluations (28 females). The AIPI Cronbach's alpha analysis suggested an adequate internal consistency (alpha = 0.754). The test-retest reliability was moderate-to-strong for items and the total score of AIPI (r(s) = 0.486, p = 0.001). The mean AIPI score of the senior otolaryngologist was significantly higher compared to the score of ChatGPT, supporting adequate internal validity (p = 0.001). Convergent validity reported a moderate and significant correlation between AIPI and modified OCAT (r(s) = 0.319; p = 0.044). The interrater reliability reported significant positive concordance between both otolaryngologists for the patient feature, diagnostic, additional examination, and treatment subscores as well as for the AIPI total score. CONCLUSIONS: AIPI is a valid and reliable instrument in assessing the performance of ChatGPT in ear, nose and throat conditions. Future studies are needed to investigate the usefulness of AIPI in medicine and surgery, and to evaluate the psychometric properties in these fields.
Lechien JR; Maniaci A; Gengler I; Hans S; Chiesa-Estomba CM; Vaira LA
32
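The psychometric quantities named in the preceding record (Cronbach's alpha for internal consistency, Spearman's rho for test-retest reliability, Kendall's tau for interrater agreement) can be computed from an item-score matrix. The following sketch uses synthetic ratings purely to show the calculations; it is not the AIPI data or the authors' analysis code.

```python
# Illustrative reliability statistics on made-up Likert-style item scores.
import numpy as np
from scipy.stats import spearmanr, kendalltau

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows = cases, columns = instrument items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores_day1 = rng.integers(1, 6, size=(45, 5)).astype(float)   # 45 cases, 5 items
scores_day7 = scores_day1 + rng.normal(0, 0.5, size=(45, 5))   # noisy re-rating

print("Cronbach's alpha:", cronbach_alpha(scores_day1))

rho, p_rho = spearmanr(scores_day1.sum(axis=1), scores_day7.sum(axis=1))
print("Test-retest Spearman rho:", rho, "p =", p_rho)

tau, p_tau = kendalltau(scores_day1.sum(axis=1), scores_day7.sum(axis=1))
print("Kendall's tau:", tau, "p =", p_tau)
```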
37263772
Performance and risks of ChatGPT used in drug information: an exploratory real-world analysis.
2,024
European journal of hospital pharmacy : science and practice
OBJECTIVES: To investigate the performance and risk associated with the usage of Chat Generative Pre-trained Transformer (ChatGPT) to answer drug-related questions. METHODS: A sample of 50 drug-related questions was consecutively collected and entered into the artificial intelligence software application ChatGPT. Answers were documented and rated in a standardised consensus process by six senior hospital pharmacists in the domains of content (correct, incomplete, false), patient management (possible, insufficient, not possible), and risk (no risk, low risk, high risk). As a reference, answers were researched in adherence to the German guideline on drug information and stratified into four categories according to the sources used. In addition, the reproducibility of ChatGPT's answers was analysed by entering three questions repeatedly at different timepoints (day 1, day 2, week 2, week 3). RESULTS: Overall, only 13 of 50 answers provided correct content and had enough information to initiate management with no risk of patient harm. The majority of answers were either false (38%, n=19) or had only partly correct content (36%, n=18), and no references were provided. A high risk of patient harm was likely in 26% (n=13) of the cases, and risk was judged low for 28% (n=14) of the cases. In all high-risk cases, actions could have been initiated based on the provided information. The answers of ChatGPT varied over time when entered repeatedly, and only three out of 12 answers were identical, showing no to low reproducibility. CONCLUSION: In a real-world sample of 50 drug-related questions, ChatGPT answered the majority of questions wrongly or only partly correctly. The use of artificial intelligence applications in drug information is not feasible as long as barriers such as incorrect content, missing references, and poor reproducibility remain.
Morath B; Chiriac U; Jaszkowski E; Deiss C; Nurnberg H; Horth K; Hoppe-Tichy T; Green K
10
40046775
Capturing pharmacists' perspectives on the value, risks, and applications of ChatGPT in pharmacy practice: A qualitative study.
2,024
Exploratory research in clinical and social pharmacy
OBJECTIVES: To investigate pharmacists' perspectives on the benefits and risks of using ChatGPT in pharmacy practice and to explore how this disruptive and ground-breaking technology can be effectively integrated. METHODS AND MATERIALS: A qualitative approach drawing data from licensed pharmacists through semi-structured interviews. RESULTS: Most participants felt ChatGPT could enhance medication compliance, use, management, safety, and adherence, support medication counseling, minimize medication errors, and streamline medication dispensing. However, when ChatGPT has limited information and relies on obsolete medication databases, it risks providing inaccurate recommendations and inadequate medication details. Also, most participants highlighted the difficulty of interpreting ambiguous patient input or drug descriptions when using the application. CONCLUSIONS: Despite its potential, the use of ChatGPT in pharmacy practice must rest on evidence-based results that offer profound insight into the AI technology.
Jairoun AA; Al-Hemyari SS; Shahwan M; Alnuaimi GR; Ibrahim N; Jaber AAS
10
39523628
Use of generative artificial intelligence (AI) in psychiatry and mental health care: a systematic review.
2,024
Acta neuropsychiatrica
OBJECTIVES: Tools based on generative artificial intelligence (AI) such as ChatGPT have the potential to transform modern society, including the field of medicine. Due to the prominent role of language in psychiatry, e.g., for diagnostic assessment and psychotherapy, these tools may be particularly useful within this medical field. Therefore, the aim of this study was to systematically review the literature on generative AI applications in psychiatry and mental health. METHODS: We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search was conducted across three databases, and the resulting articles were screened independently by two researchers. The content, themes, and findings of the articles were qualitatively assessed. RESULTS: The search and screening process resulted in the inclusion of 40 studies. The median year of publication was 2023. The themes covered in the articles were mainly mental health and well-being in general - with less emphasis on specific mental disorders (substance use disorder being the most prevalent). The majority of studies were conducted as prompt experiments, with the remaining studies comprising surveys, pilot studies, and case reports. Most studies focused on models that generate language, ChatGPT in particular. CONCLUSIONS: Generative AI in psychiatry and mental health is a nascent but quickly expanding field. The literature mainly focuses on applications of ChatGPT, and finds that generative AI performs well, but notes that it is limited by significant safety and ethical concerns. Future research should strive to enhance transparency of methods, use experimental designs, ensure clinical relevance, and involve users/patients in the design phase.
Kolding S; Lundin RM; Hansen L; Ostergaard SD
0-1
39816417
Evaluation of ChatGPT-4 Performance in Answering Patients' Questions About the Management of Type 2 Diabetes.
2,024
Sisli Etfal Hastanesi tip bulteni
OBJECTIVES: Type 2 diabetes mellitus is a disease with a rising prevalence worldwide. Person-centered treatment factors, including comorbidities and treatment goals, should be considered in determining the pharmacological treatment of type 2 diabetes. ChatGPT-4 (Generative Pre-trained Transformer), a large language model, holds potential in various fields, including medicine. We aimed to examine the reliability, quality, reproducibility, and readability of ChatGPT-4's responses to clinical scenarios about the medical treatment approach and management of type 2 diabetes patients. METHODS: ChatGPT-4's responses to 24 questions were independently graded by two endocrinologists with clinical experience in endocrinology, with disagreements resolved by a third reviewer, based on the ADA (American Diabetes Association) 2023 guidelines. The DISCERN (Quality Criteria for Consumer Health Information) measurement tool was used to evaluate the reliability and quality of the information. RESULTS: Responses to questions by ChatGPT-4 were fairly consistent across both sessions. No false or misleading information was found in any ChatGPT-4 response. In terms of reliability, most of the answers showed good (87.5%), followed by excellent (12.5%), reliability. Reading level was classified as fairly difficult to read (8.3%), difficult to read (50%), and very difficult to read (41.7%). CONCLUSION: ChatGPT-4 may have a role as an additional informative tool on medical treatment approaches for patients with type 2 diabetes.
Gokbulut P; Kuskonmaz SM; Onder CE; Taskaldiran I; Koc G
0-1
38964828
Assessment of the information provided by ChatGPT regarding exercise for patients with type 2 diabetes: a pilot study.
2,024
BMJ health & care informatics
OBJECTIVES: We assessed the feasibility of ChatGPT for patients with type 2 diabetes seeking information about exercise. METHODS: In this pilot study, two physicians with expertise in diabetes care and rehabilitative treatment in Republic of Korea discussed and determined the 14 most asked questions on exercise for managing type 2 diabetes by patients in clinical practice. Each question was inputted into ChatGPT (V.4.0), and the answers from ChatGPT were assessed. The Likert scale was calculated for each category of validity (1-4), safety (1-4) and utility (1-4) based on position statements of the American Diabetes Association and American College of Sports Medicine. RESULTS: Regarding validity, 4 of 14 ChatGPT (28.6%) responses were scored as 3, indicating accurate but incomplete information. The other 10 responses (71.4%) were scored as 4, indicating complete accuracy with complete information. Safety and utility scored 4 (no danger and completely useful) for all 14 ChatGPT responses. CONCLUSION: ChatGPT can be used as supplementary educational material for diabetic exercise. However, users should be aware that ChatGPT may provide incomplete answers to some questions on exercise for type 2 diabetes.
Chung SM; Chang MC
0-1
39583920
ChatGPT in medical writing: A game-changer or a gimmick?
2,024
Perspectives in clinical research
OpenAI's ChatGPT (Generative Pre-trained Transformer) is a chatbot that answers questions and performs writing tasks in a conversational tone. Within months of its release, multiple sectors began contemplating the varied applications of this chatbot, including medicine, education, and research, all of which are involved in medical communication and scientific publishing. Medical writers and academics use several artificial intelligence (AI) tools and software for research, literature surveys, data analyses, referencing, and writing. There are benefits to using different AI tools in medical writing. However, using chatbots for medical communications poses some major concerns, such as potential inaccuracies, data bias, security, and ethical issues. Misconceptions about these tools also limit their use. Moreover, ChatGPT can also be challenging if used incorrectly and for irrelevant tasks. If used appropriately, ChatGPT will not only upgrade the knowledge of the medical writer but also save time and energy that could be directed toward more creative and analytical areas requiring expert skill sets. This review introduces chatbots, outlines the progress in ChatGPT research, elaborates on the potential uses of ChatGPT in medical communications along with its challenges and limitations, and proposes future research perspectives. It aims to provide guidance for doctors, researchers, and medical writers on the uses of ChatGPT in medical communications.
Ahaley SS; Pandey A; Juneja SK; Gupta TS; Vijayakumar S
10
38611686
Artificial Intelligence in Medical Imaging: Analyzing the Performance of ChatGPT and Microsoft Bing in Scoliosis Detection and Cobb Angle Assessment.
2,024
Diagnostics (Basel, Switzerland)
Open-source artificial intelligence models (OSAIM) find free applications in various industries, including information technology and medicine. Their clinical potential, especially in supporting diagnosis and therapy, is the subject of increasingly intensive research. Due to the growing interest in artificial intelligence (AI) for diagnostic purposes, we conducted a study evaluating the capabilities of AI models, including ChatGPT and Microsoft Bing, in the diagnosis of single-curve scoliosis based on posturographic radiological images. Two independent neurosurgeons assessed the degree of spinal deformation, selecting 23 cases of severe single-curve scoliosis. Each posturographic image was separately implemented onto each of the mentioned platforms using a set of formulated questions, starting from 'What do you see in the image?' and ending with a request to determine the Cobb angle. In the responses, we focused on how these AI models identify and interpret spinal deformations and how accurately they recognize the direction and type of scoliosis as well as vertebral rotation. The Intraclass Correlation Coefficient (ICC) with a 'two-way' model was used to assess the consistency of Cobb angle measurements, and its confidence intervals were determined using the F test. Differences in Cobb angle measurements between human assessments and the AI ChatGPT model were analyzed using metrics such as RMSEA, MSE, MPE, MAE, RMSLE, and MAPE, allowing for a comprehensive assessment of AI model performance from various statistical perspectives. The ChatGPT model achieved 100% effectiveness in detecting scoliosis in X-ray images, while the Bing model did not detect any scoliosis. However, ChatGPT had limited effectiveness (43.5%) in assessing Cobb angles, showing significant inaccuracy and discrepancy compared to human assessments. This model also had limited accuracy in determining the direction of spinal curvature, classifying the type of scoliosis, and detecting vertebral rotation. Overall, although ChatGPT demonstrated potential in detecting scoliosis, its abilities in assessing Cobb angles and other parameters were limited and inconsistent with expert assessments. These results underscore the need for comprehensive improvement of AI algorithms, including broader training with diverse X-ray images and advanced image processing techniques, before they can be considered as auxiliary in diagnosing scoliosis by specialists.
Fabijan A; Zawadzka-Fabijan A; Fabijan R; Zakrzewski K; Nowoslawska E; Polis B
0-1
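The Cobb-angle record above lists a set of agreement metrics (MSE, RMSE, MPE, MAE, RMSLE, MAPE) between human and AI measurements. As a hedged sketch of how such metrics are typically computed, the snippet below uses invented angle values, not the study's data; the ICC mentioned in the record would normally be obtained with a dedicated reliability routine and is omitted here.

```python
# Illustrative error metrics between (synthetic) human and AI Cobb angles.
import numpy as np

human = np.array([48.0, 55.0, 62.0, 70.0, 81.0])   # placeholder expert measurements (degrees)
ai    = np.array([44.0, 60.0, 58.0, 75.0, 90.0])   # placeholder model estimates (degrees)

err = ai - human
mse   = np.mean(err ** 2)
rmse  = np.sqrt(mse)
mae   = np.mean(np.abs(err))
mpe   = np.mean(err / human) * 100          # mean percentage error (signed)
mape  = np.mean(np.abs(err) / human) * 100  # mean absolute percentage error
rmsle = np.sqrt(np.mean((np.log1p(ai) - np.log1p(human)) ** 2))

print(f"MSE={mse:.2f} RMSE={rmse:.2f} MAE={mae:.2f} "
      f"MPE={mpe:.1f}% MAPE={mape:.1f}% RMSLE={rmsle:.3f}")
```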
38138922
Artificial Intelligence in Scoliosis Classification: An Investigation of Language-Based Models.
2,023
Journal of personalized medicine
Open-source artificial intelligence models are finding free application in various industries, including computer science and medicine. Their clinical potential, especially in assisting diagnosis and therapy, is the subject of increasingly intensive research. Due to the growing interest in AI for diagnostics, we conducted a study evaluating the abilities of AI models, including ChatGPT, Microsoft Bing, and Scholar AI, in classifying single-curve scoliosis based on radiological descriptions. Fifty-six posturographic images depicting single-curve scoliosis were selected and assessed by two independent neurosurgery specialists, who classified them as mild, moderate, or severe based on Cobb angles. Subsequently, descriptions were developed that accurately characterized the degree of spinal deformation, based on the measured values of Cobb angles. These descriptions were then provided to AI language models to assess their proficiency in diagnosing spinal pathologies. The artificial intelligence models conducted classification using the provided data. Our study also focused on identifying specific sources of information and criteria applied in their decision-making algorithms, aiming for a deeper understanding of the determinants influencing AI decision processes in scoliosis classification. The classification quality of the predictions was evaluated using performance evaluation metrics such as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and balanced accuracy. Our study strongly supported our hypothesis, showing that among four AI models, ChatGPT 4 and Scholar AI Premium excelled in classifying single-curve scoliosis with perfect sensitivity and specificity. These models demonstrated unmatched rater concordance and excellent performance metrics. In comparing real and AI-generated scoliosis classifications, they showed impeccable precision in all posturographic images, indicating total accuracy (1.0, MAE = 0.0) and remarkable inter-rater agreement, with a perfect Fleiss' Kappa score. This was consistent across scoliosis cases with a Cobb's angle range of 11-92 degrees. Despite high accuracy in classification, each model used an incorrect angular range for the mild stage of scoliosis. Our findings highlight the immense potential of AI in analyzing medical data sets. However, the diversity in competencies of AI models indicates the need for their further development to more effectively meet specific needs in clinical practice.
Fabijan A; Polis B; Fabijan R; Zakrzewski K; Nowoslawska E; Zawadzka-Fabijan A
0-1
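The scoliosis-classification record above reports sensitivity, specificity, PPV, NPV, accuracy, balanced accuracy, and Fleiss' kappa. The snippet below is a generic sketch of these calculations on made-up binary labels, assuming scikit-learn and statsmodels; it is not the authors' evaluation code and the values are placeholders.

```python
# Illustrative classification metrics and Fleiss' kappa on synthetic labels.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, balanced_accuracy_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # placeholder reference labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])   # placeholder model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
print(sensitivity, specificity, ppv, npv,
      accuracy_score(y_true, y_pred), balanced_accuracy_score(y_true, y_pred))

# Fleiss' kappa across several raters (here: reference plus two hypothetical models)
ratings = np.column_stack([y_true, y_pred, y_pred])   # subjects x raters
table, _ = aggregate_raters(ratings)                  # subjects x categories counts
print("Fleiss' kappa:", fleiss_kappa(table))
```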
37830256
Beyond ChatGPT: What does GPT-4 add to healthcare? The dawn of a new era.
2,023
Cardiology journal
Over the past few years, artificial intelligence (AI) has significantly improved healthcare. Once the stuff of science fiction, AI is now widely used, even in our daily lives - often without us thinking about it. All healthcare professionals - especially executives and medical doctors - need to understand the capabilities of advanced AI tools and other breakthrough innovations. This understanding will allow them to recognize opportunities and threats emerging technologies can bring to their organizations. We hope to contribute to a meaningful public discussion about the role of this new type of AI and how our approach to healthcare and medicine can best evolve with the rapid development of this technology. Since medicine learns by example, only a few possible uses of AI in medicine are provided, which merely outline the system's capabilities. Among the examples, it is worth highlighting the roles of AI in medical notes, education, preventive programs, consultation, triage and intervention. It is believed by the authors that large language models such as chat generative pre-trained transformer (ChatGPT) are reaching a level of maturity that will soon impact clinical medicine as a whole and improve the delivery of individualized, compassionate, and scalable healthcare. It is unlikely that AI will replace physicians in the near future. The human aspects of care, including empathy, compassion, critical thinking, and complex decision-making, are invaluable in providing holistic patient care beyond diagnosis and treatment decisions. The GPT-4 has many limitations and cannot replace direct contact between an experienced physician and a patient for even the most seemingly simple consultations, not to mention the ethical and legal aspects of responsibility for diagnosis.
Wojcik S; Rulkiewicz A; Pruszczyk P; Lisik W; Pobozy M; Domienik-Karlowicz J
10
37726551
Using ChatGPT to predict the future of personalized medicine.
2,023
The pharmacogenomics journal
Personalized medicine is a novel frontier in health care that is based on each person's unique genetic makeup. It represents an exciting opportunity to improve the future of individualized health care for all. Pharmacogenomics, as the main part of personalized medicine, aims to optimize and create a more targeted treatment approach based on genetic variations in drug response. It is predicted that future treatments will be algorithm-based rather than evidence-based and will consider a patient's genetic, transcriptomic, proteomic, epigenetic, and lifestyle factors, resulting in individualized medication. A generative pretrained transformer (GPT) is an artificial intelligence (AI) tool that generates language resembling human writing, enabling users to engage in a manner that is practically identical to speaking with a human being. GPT's predictive algorithms can respond to questions that have never been addressed. Chat Generative Pretrained Transformer (ChatGPT) is an advanced AI chatbot with conversational capabilities. In the present study, questions about the future of personalized medicine and pharmacogenomics were posed to ChatGPT. ChatGPT predicted both to be promising approaches with a bright future, holding great promise for improving patient outcomes and transforming the field of medicine. However, it still has several limitations that need to be addressed.
Patrinos GP; Sarhangi N; Sarrami B; Khodayari N; Larijani B; Hasanzad M
0-1
40200464
Minimizing STOPP and Beers Criteria Risks in PIM Treatments Using PM-TOM and ChatGPT: A Case Study.
2,025
Studies in health technology and informatics
PM-TOM (Personalized Medicine-Therapy Optimization Method) is a clinical decision-support tool designed to optimize polypharmacy treatments by minimizing their adverse drug reactions (ADRs) caused by individual drugs or drug interactions (DDIs, DCIs, DFIs, DGIs), along with the risks identified by the STOPP and Beers criteria. On the other hand, AI tools like ChatGPT 4.0, trained on medical literature texts, can provide broader clinical reasoning and insights tailored to individual patient contexts. By referring to a documented deprescribing case, this study demonstrates the synergistic power of PM-TOM and ChatGPT in optimizing potentially inappropriate medication (PIM) treatments. A malnourished older woman was admitted to a deprescribing facility with recurrent falls, hypertension, ischemic heart disease, depression, osteoarthritis, osteoporosis, and GERD. She was initially prescribed acetaminophen, alendronate, omeprazole, lisinopril, metoprolol, aspirin, citalopram, and vitamin D, which were assessed as inadequate. While the discharge regimen improved some conditions by replacing alendronate with zoledronic acid and reducing some drug dosages, PM-TOM revealed that key risks, stemming primarily from omeprazole, aspirin, and citalopram, remained unaddressed. The discharge treatment was optimized with PM-TOM after considering alternative drug classes suggested by ChatGPT and elaborated in the available medical literature. In the optimized treatment, omeprazole (PPI) was replaced with famotidine (H2-blocker), citalopram (SSRI) with agomelatine (atypical antidepressant), zoledronic acid (bisphosphonate) with denosumab (RANK ligand inhibitor), aspirin (NSAID) with ticagrelor (antiplatelet), and lisinopril with benazepril (ACE inhibitor). These changes significantly reduced possible ADRs and the geriatric care criteria risks. Finally, ChatGPT validated the proposed adjustments, confirming their alignment with the guidelines and highlighting the potential for longer-term benefits. This case study illustrates how a combined use of PM-TOM and AI tools can effectively support the clinical decision-making process by optimizing polypharmacy treatments and minimizing their PIMs, major contributors to morbidity in older adults and high healthcare costs.
Kulenovic A; Lagumdzija-Kulenovic A
10
39319818
ChatGPT's Role in Improving Education Among Patients Seeking Emergency Medical Treatment.
2,024
The western journal of emergency medicine
Providing appropriate patient education during a medical encounter remains an important area for improvement across healthcare settings. Personalized resources can offer an impactful way to improve patient understanding and satisfaction during or after a healthcare visit. ChatGPT is a novel chatbot (a computer program designed to simulate conversation with humans) that has the potential to assist with care-related questions, clarify discharge instructions, help triage medical problem urgency, and could potentially be used to improve patient-clinician communication. However, due to its training methodology, ChatGPT has inherent limitations, including technical restrictions, risk of misinformation, lack of input standardization, and privacy concerns. Medicolegal liability also remains an open question for physicians interacting with this technology. Nonetheless, careful utilization of ChatGPT in clinical medicine has the potential to supplement patient education in important ways.
Halaseh FF; Yang JS; Danza CN; Halaseh R; Spiegelman L
0-1
37704461
Battle of the (Chat)Bots: Comparing Large Language Models to Practice Guidelines for Transfusion-Associated Graft-Versus-Host Disease Prevention.
2,023
Transfusion medicine reviews
Published guidelines and clinical practices vary when defining indications for irradiation of blood components for the prevention of transfusion-associated graft-versus-host disease (TA-GVHD). This study assessed irradiation indication lists generated by multiple artificial intelligence (AI) programs, or chatbots, and compared them to 2020 British Society for Haematology (BSH) practice guidelines. Four chatbots (ChatGPT-3.5, ChatGPT-4, Bard, and Bing Chat) were prompted to list the indications for irradiation to prevent TA-GVHD. Responses were graded for concordance with BSH guidelines. Chatbot response length, discrepancies, and omissions were noted. Chatbot responses differed, but all were relevant, short in length, generally more concordant than discordant with BSH guidelines, and roughly complete. They lacked several indications listed in BSH guidelines and notably differed in their irradiation eligibility criteria for fetuses and neonates. The chatbots variably listed erroneous indications for TA-GVHD prevention, such as patients receiving blood from a donor who is of a different race or ethnicity. This study demonstrates the potential use of generative AI for transfusion medicine and hematology topics but underscores the risk of chatbot medical misinformation. Further study of risk factors for TA-GVHD, as well as the applications of chatbots in transfusion medicine and hematology, is warranted.
Stephens LD; Jacobs JW; Adkins BD; Booth GS
43
37885556
Bias and Inaccuracy in AI Chatbot Ophthalmologist Recommendations.
2,023
Cureus
PURPOSE AND DESIGN: To evaluate the accuracy and bias of ophthalmologist recommendations made by three AI chatbots, namely ChatGPT 3.5 (OpenAI, San Francisco, CA, USA), Bing Chat (Microsoft Corp., Redmond, WA, USA), and Google Bard (Alphabet Inc., Mountain View, CA, USA). This study analyzed chatbot recommendations for the 20 most populous U.S. cities. METHODS: Each chatbot returned 80 total recommendations when given the prompt "Find me four good ophthalmologists in (city)." Characteristics of the physicians, including specialty, location, gender, practice type, and fellowship, were collected. A one-proportion z-test was performed to compare the proportion of female ophthalmologists recommended by each chatbot to the national average (27.2% per the Association of American Medical Colleges (AAMC)). Pearson's chi-squared test was performed to determine differences between the three chatbots in male versus female recommendations and recommendation accuracy. RESULTS: Female ophthalmologists recommended by Bing Chat (1.61%) and Bard (8.0%) were significantly less than the national proportion of 27.2% practicing female ophthalmologists (p<0.001, p<0.01, respectively). ChatGPT recommended fewer female (29.5%) than male ophthalmologists (p<0.722). ChatGPT (73.8%), Bing Chat (67.5%), and Bard (62.5%) gave high rates of inaccurate recommendations. Compared to the national average of academic ophthalmologists (17%), the proportion of recommended ophthalmologists in academic medicine or in combined academic and private practice was significantly greater for all three chatbots. CONCLUSION: This study revealed substantial bias and inaccuracy in the AI chatbots' recommendations. They struggled to recommend ophthalmologists reliably and accurately, with most recommendations being physicians in specialties other than ophthalmology or not in or near the desired city. Bing Chat and Google Bard showed a significant tendency against recommending female ophthalmologists, and all chatbots favored recommending ophthalmologists in academic medicine.
Oca MC; Meller L; Wilson K; Parikh AO; McCoy A; Chang J; Sudharshan R; Gupta S; Zhang-Nunes S
43
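The ophthalmologist-recommendation record above relies on a one-proportion z-test against the national share of female ophthalmologists (27.2%) and a chi-squared test across chatbots. The sketch below shows how such tests are typically run with statsmodels and SciPy; the counts are invented placeholders, not the study's data.

```python
# Illustrative proportion and contingency tests on placeholder counts.
from statsmodels.stats.proportion import proportions_ztest
from scipy.stats import chi2_contingency

# One-proportion z-test: e.g. 1 female recommendation out of 62 resolvable
# recommendations, compared against the national proportion of 0.272.
stat, pval = proportions_ztest(count=1, nobs=62, value=0.272)
print("one-proportion z-test:", stat, pval)

# Chi-squared test: rows = chatbots, columns = (female, male) recommendation counts.
table = [[18, 43],
         [1, 61],
         [5, 57]]
chi2, p, dof, expected = chi2_contingency(table)
print("chi-squared:", chi2, "p =", p)
```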
39347335
Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware.
2,024
Cureus
Purpose Artificial intelligence (AI) has rapidly gained popularity with the growth of ChatGPT (OpenAI, San Francisco, USA) and other large-language model chatbots, and these programs have tremendous potential to impact medicine. One important area of consequence in medicine and public health is that patients may use these programs in search of answers to medical questions. Despite the increased utilization of AI chatbots by the public, there is little research to assess the reliability of ChatGPT and alternative programs when queried for medical information. This study seeks to elucidate the accuracy and readability of AI chatbots in answering patient questions regarding urology. As vasectomy is one of the most common urologic procedures, this study investigates AI-generated responses to frequently asked vasectomy-related questions. For this study, five popular and free-to-access AI platforms were utilized to undertake this investigation. Methods Fifteen vasectomy-related questions were individually queried to five AI chatbots from November-December 2023: ChatGPT (OpenAI, San Francisco, USA), Bard (Google Inc., Mountainview, USA) Bing (Microsoft, Redmond, USA) Perplexity (Perplexity AI Inc., San Francisco, USA), and Claude (Anthropic, San Francisco, USA). Responses from each platform were graded by two attending urologists, two urology research faculty, and one urological resident physician using a Likert (1-6) scale: (1-completely inaccurate, 6-completely accurate) based on comparison to existing American Urological Association guidelines. Flesch-Kincaid Grade levels (FKGL) and Flesch Reading Ease scores (FRES) (1-100) were calculated for each response. To assess differences in Likert, FRES, and FKGL, Kruskal-Wallis tests were performed using GraphPad Prism V10.1.0 (GraphPad, San Diego, USA) with Alpha set at 0.05. Results Analysis shows that ChatGPT provided the most accurate responses across the five AI chatbots with an average score of 5.04 on the Likert scale. Subsequently, Microsoft Bing (4.91), Anthropic Claude (4.65), Google Bard (4.43), and Perplexity (4.41) followed. All five chatbots were found to score, on average, higher than 4.41 corresponding to a score of at least "somewhat accurate." Google Bard received the highest Flesch Reading Ease score (49.67) and lowest Grade level (10.1) when compared to the other chatbots. Anthropic Claude scored 46.7 on the FRES and 10.55 on the FKGL. Microsoft Bing scored 45.57 on the FRES and 11.56 on the FKGL. Perplexity scored 36.4 on the FRES and 13.29 on the FKGL. ChatGPT had the lowest FRES of 30.4 and highest FKGL of 14.2. Conclusion This study investigates the use of AI in medicine, specifically urology, and it helps to determine whether large-language model chatbots can be reliable sources of freely available medical information. All five AI chatbots on average were able to achieve at least "somewhat accurate" on a 6-point Likert scale. In terms of readability, all five AI chatbots on average had Flesch Reading Ease scores of less than 50 and were higher than a 10th-grade level. In this small-scale study, there were several significant differences identified between the readability scores of each AI chatbot. However, there were no significant differences found among their accuracies. Thus, our study suggests that major AI chatbots may perform similarly in their ability to be correct but differ in their ease of being comprehended by the general public.
Carlson JA; Cheng RZ; Lange A; Nagalakshmi N; Rabets J; Shah T; Sindhwani P
43
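The vasectomy-question record above scores readability with the Flesch Reading Ease score (FRES) and Flesch-Kincaid Grade Level (FKGL) and compares chatbots with Kruskal-Wallis tests. The snippet below is a rough sketch of those standard formulas using a crude vowel-group syllable counter (dedicated packages such as textstat count syllables more carefully), followed by a Kruskal-Wallis test on made-up per-question scores.

```python
# Hedged sketch of FRES/FKGL formulas and a Kruskal-Wallis comparison.
import re
from scipy.stats import kruskal

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of vowels; real tools use better rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps, spw = len(words) / sentences, syllables / len(words)
    fres = 206.835 - 1.015 * wps - 84.6 * spw          # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59              # Flesch-Kincaid Grade Level
    return fres, fkgl

print(readability("A vasectomy is a minor surgical procedure. It prevents pregnancy."))

# Kruskal-Wallis across chatbots, e.g. on per-question FRES values (placeholders)
chatgpt, bard, bing = [30, 28, 33], [49, 52, 47], [45, 44, 48]
print(kruskal(chatgpt, bard, bing))
```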
39469383
Evaluating Large Language Models in Dental Anesthesiology: A Comparative Analysis of ChatGPT-4, Claude 3 Opus, and Gemini 1.0 on the Japanese Dental Society of Anesthesiology Board Certification Exam.
2,024
Cureus
Purpose Large language models (LLMs) are increasingly employed across various fields, including medicine and dentistry. In the field of dental anesthesiology, LLM is expected to enhance the efficiency of information gathering, patient outcomes, and education. This study evaluates the performance of different LLMs in answering questions from the Japanese Dental Society of Anesthesiology Board Certification Examination (JDSABCE) to determine their utility in dental anesthesiology. Methods The study assessed three LLMs, ChatGPT-4 (OpenAI, San Francisco, California, United States), Gemini 1.0 (Google, Mountain View, California, United States), and Claude 3 Opus (Anthropic, San Francisco, California, United States), using multiple-choice questions from the 2020 to 2022 JDSABCE exams. Each LLM answered these questions three times. The study excluded questions involving figures or deemed inappropriate. The primary outcome was the accuracy rate of each LLM, with secondary analysis focusing on six subgroups: (1) basic physiology necessary for general anesthesia, (2) local anesthesia, (3) sedation and general anesthesia, (4) diseases and patient management methods that pose challenges in systemic management, (5) pain management, and (6) shock and cardiopulmonary resuscitation. Statistical analysis was performed using one-way ANOVA with Dunnett's multiple comparisons, with a significance threshold of p<0.05. Results ChatGPT-4 achieved a correct answer rate of 51.2% (95% CI: 42.78-60.56, p=0.003) and Claude 3 Opus 47.4% (95% CI: 43.45-51.44, p<0.001), both significantly higher than Gemini 1.0, which had a rate of 30.3% (95% CI: 26.53-34.14). In subgroup analyses, ChatGPT-4 and Claude 3 Opus demonstrated superior performance in basic physiology, sedation and general anesthesia, and systemic management challenges compared to Gemini 1.0. Notably, ChatGPT-4 excelled in questions related to systemic management (62.5%) and Claude 3 Opus in pain management (61.53%). Conclusions ChatGPT-4 and Claude 3 Opus exhibit potential for use in dental anesthesiology, outperforming Gemini 1.0. However, their current accuracy rates are insufficient for reliable clinical use. These findings have significant implications for dental anesthesiology practice and education, including educational support, clinical decision support, and continuing education. To enhance LLM utility in dental anesthesiology, it is crucial to increase the availability of high-quality information online and refine prompt engineering to better guide LLM responses.
Fujimoto M; Kuroda H; Katayama T; Yamaguchi A; Katagiri N; Kagawa K; Tsukimoto S; Nakano A; Imaizumi U; Sato-Boku A; Kishimoto N; Itamiya T; Kido K; Sanuki T
21
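The board-exam record above compares per-run accuracy of three models using one-way ANOVA with Dunnett's many-to-one comparisons. As a generic illustration with invented accuracy values (not the study's results), the sketch below uses SciPy's f_oneway and dunnett routines; scipy.stats.dunnett requires SciPy 1.11 or later.

```python
# Illustrative one-way ANOVA plus Dunnett's test against a reference model.
import numpy as np
from scipy.stats import f_oneway, dunnett

gpt4   = np.array([0.50, 0.52, 0.52])   # placeholder accuracy per exam run
claude = np.array([0.46, 0.48, 0.48])
gemini = np.array([0.29, 0.31, 0.31])   # reference (control) group

print(f_oneway(gpt4, claude, gemini))          # omnibus one-way ANOVA
print(dunnett(gpt4, claude, control=gemini))   # many-to-one comparisons vs Gemini
```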