Dataset schema (column: type, observed range in the dataset viewer):
- pmid: string, length 8
- title: string, length 3 to 289
- year: int64 (displayed range 2.02k to 2.03k; the records below span 2023 to 2025)
- journal: string, length 3 to 221
- doi: string, 1 distinct value
- mesh: string, 1 distinct value
- keywords: string, 1 distinct value
- abstract: string, length 115 to 3.67k
- authors: string, length 3 to 798
- cluster: class label, 5 classes

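A minimal sketch of loading records with this schema, assuming the rows have been exported to a JSON Lines file; the path "pubmed_llm_clusters.jsonl" is a hypothetical placeholder, not the dataset's real location.

```python
# Minimal sketch: loading rows that match the schema above.
# Assumption: the rows live in a JSON Lines export at a hypothetical path;
# adapt to the dataset's real format and location.
import pandas as pd

df = pd.read_json("pubmed_llm_clusters.jsonl", lines=True)

# Coerce types to match the schema: pmid is an 8-character string,
# year is int64, and cluster is a categorical label with 5 classes.
df["pmid"] = df["pmid"].astype(str)
df["year"] = df["year"].astype("int64")
df["cluster"] = df["cluster"].astype("category")

# Sanity checks against the observed ranges in the schema.
assert df["pmid"].str.len().eq(8).all()
assert df["cluster"].cat.categories.size <= 5

print(df[["pmid", "year", "journal", "cluster"]].head())
```
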
PMID: 38618358
Title: Generative Artificial Intelligence Performs at a Second-Year Orthopedic Resident Level.
Year: 2024
Journal: Cureus
Abstract: Introduction Artificial intelligence (AI) models using large language models (LLMs) and non-specific domains have gained attention for their innovative information processing. As AI advances, it is essential to regularly evaluate these tools' competency to maintain high standards, prevent errors or biases, and avoid flawed reasoning or misinformation that could harm patients or spread inaccuracies. Our study aimed to determine the performance of Chat Generative Pre-trained Transformer (ChatGPT) by OpenAI and Google BARD (BARD) in orthopedic surgery, assess performance based on question types, contrast performance between the different AIs, and compare AI performance to orthopedic residents. Methods We administered 757 Orthopedic In-Training Examination (OITE) questions to ChatGPT and BARD. After excluding image-related questions, the AIs answered 390 multiple-choice questions, all categorized within 10 sub-specialties (basic science, trauma, sports medicine, spine, hip and knee, pediatrics, oncology, shoulder and elbow, hand, and foot and ankle) and three taxonomy classes (recall, interpretation, and application of knowledge). Statistical analysis compared the number of questions answered correctly by each AI model, each model's performance within the categorized sub-specialties, and each model's performance against the results of orthopedic residents classified by their respective post-graduate year (PGY) level. Results BARD answered more questions correctly overall (58% vs 54%, p<0.001). ChatGPT performed better in sports medicine and basic science and worse in hand surgery, while BARD performed better in basic science (p<0.05). The AIs performed better on recall questions than on application-of-knowledge questions (p<0.05). Based on previous data, the AI ranked in the 42nd-96th percentile for post-graduate year ones (PGY1s), 27th-58th for PGY2s, 3rd-29th for PGY3s, 1st-21st for PGY4s, and 1st-17th for PGY5s. Discussion ChatGPT excelled in sports medicine but fell short in hand surgery, while both AIs performed well in the basic science sub-specialty but performed poorly on application-of-knowledge taxonomy questions. BARD performed better than ChatGPT overall. Although the AI reached the second-year PGY orthopedic resident level, it fell short of passing the American Board of Orthopedic Surgery (ABOS) examination. Its strengths in recall-based inquiries highlight its potential as an orthopedic learning and educational tool.
Authors: Lum ZC; Collins DP; Dennison S; Guntupalli L; Choudhary S; Saiz AM; Randall RL
Cluster: 32

PMID: 37484787
Title: ChatGPT's Ability to Assess Quality and Readability of Online Medical Information: Evidence From a Cross-Sectional Study.
Year: 2023
Journal: Cureus
Abstract: Introduction Artificial Intelligence (AI) platforms have gained widespread attention for their distinct ability to generate automated responses to various prompts. However, their role in assessing the quality and readability of a provided text remains unclear. Thus, the purpose of this study is to evaluate the proficiency of the conversational generative pre-trained transformer (ChatGPT) in utilizing the DISCERN tool to evaluate the quality of online content regarding shock wave therapy for erectile dysfunction. Methods Websites were generated using a Google search of "shock wave therapy for erectile dysfunction" with location filters disabled. Readability was analyzed using Readable software (Readable.com, Horsham, United Kingdom). Quality was assessed independently by three reviewers using the DISCERN tool. The same plain-text files were inputted into ChatGPT to determine whether it produced comparable metrics for readability and quality. Results The study results revealed a notable disparity between ChatGPT's readability assessment and that obtained from a reliable tool, Readable.com (p<0.05). This indicates a lack of alignment between ChatGPT's algorithm and that of established tools, such as Readable.com. Similarly, the DISCERN score generated by ChatGPT differed significantly from the scores generated manually by human evaluators (p<0.05), suggesting that ChatGPT may not be capable of accurately identifying poor-quality information sources regarding shock wave therapy as a treatment for erectile dysfunction. Conclusion ChatGPT's evaluation of the quality and readability of online text regarding shockwave therapy for erectile dysfunction differs from that of human raters and trusted tools. Therefore, ChatGPT's current capabilities are not sufficient for reliably assessing the quality and readability of textual content. Further research is needed to elucidate the role of AI in the objective evaluation of online medical content in other fields. Continued development in AI and incorporation of tools such as DISCERN into AI software may enhance the way patients navigate the web in search of high-quality medical content in the future.
Authors: Golan R; Ripps SJ; Reddy R; Loloi J; Bernstein AP; Connelly ZM; Golan NS; Ramasamy R
Cluster: 0-1

PMID: 38803743
Title: Accuracy of ChatGPT in Neurolocalization.
Year: 2024
Journal: Cureus
Abstract: Introduction ChatGPT (OpenAI Incorporated, Mission District, San Francisco, United States) is an artificial intelligence (AI) chatbot with advanced communication skills and a massive knowledge database. However, its application in medicine, specifically in neurolocalization, necessitates clinical reasoning in addition to deep neuroanatomical knowledge. This article examines ChatGPT's capabilities in neurolocalization. Methods Forty-six text-based neurolocalization case scenarios were presented to ChatGPT-3.5 from November 6th, 2023, to November 16th, 2023. Seven neurosurgeons evaluated ChatGPT's responses to these cases, utilizing a 5-point scoring system recommended by ChatGPT to score the accuracy of the responses. Results ChatGPT-3.5 achieved an accuracy score of 84.8% in generating "completely correct" and "mostly correct" responses. ANOVA analysis suggested a consistent scoring approach between the different evaluators. The mean length of the case text was 69.8 tokens (SD 20.8). Conclusion While this accuracy score is promising, it is not yet reliable for routine patient care. We recommend keeping interactions with ChatGPT concise, precise, and simple to improve response accuracy. As AI continues to evolve, it promises significant and innovative breakthroughs in medicine.
Authors: Dabbas WF; Odeibat YM; Alhazaimeh M; Hiasat MY; Alomari AA; Marji A; Samara QA; Ibrahim B; Al Arabiyat RM; Momani G
Cluster: 0-1

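The consistency check described in this record is a one-way ANOVA across evaluators. A minimal sketch with illustrative 1-5 ratings, not the study's data:

```python
# Minimal sketch: a one-way ANOVA across evaluators, as used in this
# record to check scoring consistency. Ratings are illustrative 1-5
# scores, not the study's data.
from scipy.stats import f_oneway

evaluator_scores = [
    [5, 4, 5, 3, 4],  # evaluator 1
    [4, 4, 5, 3, 5],  # evaluator 2
    [5, 5, 4, 4, 4],  # evaluator 3
]

stat, p = f_oneway(*evaluator_scores)
# A large p-value means no detectable difference between evaluators,
# i.e., a consistent scoring approach.
print(f"F = {stat:.2f}, p = {p:.3f}")
```
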
PMID: 38021639
Title: Is ChatGPT's Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
Year: 2023
Journal: Cureus
Abstract: Introduction ChatGPT is a large language model (LLM)-based chatbot that uses natural language processing to create humanlike conversational dialogue. Since its inception, it has had a significant impact on the entire global landscape, especially in sectors like finance and banking, e-commerce, education, legal, human resources (HR), and recruitment. There have been multiple ongoing controversies regarding the seamless integration of ChatGPT with the healthcare system because of concerns about its factual accuracy and its lack of experience, clarity, expertise, and, above all, empathy. Our study seeks to compare ChatGPT's knowledge and interpretative abilities with those of first-year medical students in India in the subject of medical biochemistry. Materials and methods A total of 79 questions (40 multiple-choice questions and 39 subjective questions) in medical biochemistry were set for the Phase 1, block II term examination. ChatGPT was enrolled as the 101st student in the class. The questions were entered into ChatGPT's interface and the responses were noted. The response time for the multiple-choice questions (MCQs) was also noted. The answers given by ChatGPT and the 100 students of the class were checked by two subject experts, and marks were given according to the quality of the answers. Marks obtained by the AI chatbot were compared with the marks obtained by the students. Results ChatGPT scored 140 marks out of 200, outperforming almost all the students and ranking fifth in the class. It scored very well on information-based MCQs (92%) and descriptive logical-reasoning questions (80%), whereas it performed poorly on descriptive clinical scenario-based questions (52%). In terms of time taken to respond to the MCQs, it took significantly more time to answer logical-reasoning MCQs than simple information-based MCQs (3.10+/-0.882 sec vs. 2.02+/-0.477 sec, p<0.005). Conclusions ChatGPT was able to outperform almost all the students in the subject of medical biochemistry. If the ethical issues are dealt with efficiently, these LLMs have huge potential to be used successfully by students in the teaching and learning methods of modern medicine.
Authors: Ghosh A; Maini Jindal N; Gupta VK; Bansal E; Kaur Bajwa N; Sett A
Cluster: 21

PMID: 39156271
Title: Learning the Randleman Criteria in Refractive Surgery: Utilizing ChatGPT-3.5 Versus Internet Search Engine.
Year: 2024
Journal: Cureus
Abstract: Introduction Large language models such as OpenAI's (San Francisco, CA) ChatGPT-3.5 hold immense potential to augment self-directed learning in medicine, but concerns have risen regarding its accuracy in specialized fields. This study compares ChatGPT-3.5 with an internet search engine in their ability to define the Randleman criteria and its five parameters within a self-directed learning environment. Methods Twenty-three medical students gathered information on the Randleman criteria. Each student was allocated 10 minutes to interact with ChatGPT-3.5, followed by 10 minutes to search the internet independently. Each ChatGPT-3.5 conversation, student summary, and internet reference was subsequently analyzed for accuracy, efficiency, and reliability. Results ChatGPT-3.5 provided the correct definition for 26.1% of students (6/23, 95% CI: 12.3% to 46.8%), while an independent internet search resulted in sources containing the correct definition for 100% of students (23/23, 95% CI: 87.5% to 100%, p = 0.0001). ChatGPT-3.5 incorrectly identified the Randleman criteria as a corneal ectasia staging system for 17.4% of students (4/23), fabricated a "Randleman syndrome" for 4.3% of students (1/23), and gave no definition for 52.2% of students (12/23). When a definition was given (47.8%, 11/23), a median of two of the five correct parameters was provided, along with a median of two additional falsified parameters. Conclusion The internet search engine outperformed ChatGPT-3.5 in providing accurate and reliable information on the Randleman criteria. ChatGPT-3.5 gave false information, required excessive prompting, and propagated misunderstandings. Learners should exercise discernment when using ChatGPT-3.5. Future initiatives should evaluate the implementation of prompt engineering and updated large language models.
Authors: Tuttle JJ; Moshirfar M; Garcia J; Altaf AW; Omidvarnia S; Hoopes PC
Cluster: 21

PMID: 39822447
Title: A Study of Orthopedic Patient Leaflets and Readability of AI-Generated Text in Foot and Ankle Surgery (SOLE-AI).
Year: 2024
Journal: Cureus
Abstract: Introduction The internet age has broadened the horizons of modern medicine, and the ever-increasing scope of artificial intelligence (AI) has made information about healthcare, common pathologies, and available treatment options much more accessible to the wider population. Patient autonomy relies on clear, accurate, and user-friendly information to give informed consent to an intervention. Our paper aims to outline the quality, readability, and accuracy of readily available information produced by AI relating to common foot and ankle procedures. Materials and methods A retrospective qualitative analysis of procedure-specific information relating to three common foot and ankle orthopedic procedures (ankle arthroscopy, ankle arthrodesis/fusion, and a gastrocnemius lengthening procedure) was undertaken. Patient information leaflets (PILs) created by The British Orthopaedic Foot and Ankle Society (BOFAS) were compared to ChatGPT responses for readability, quality, and accuracy of information. Four language tools were used to assess readability: the Flesch-Kincaid reading ease (FKRE) score, the Flesch-Kincaid grade level (FKGL), the Gunning fog score (GFS), and the simple measure of gobbledygook (SMOG) index. Quality and accuracy were determined using the DISCERN tool by five independent assessors. Results PILs produced by AI had significantly lower FKRE scores than those from BOFAS (40.4, SD +/-7.69, vs. 91.9, SD +/-2.24; p <= 0.0001), indicating poor readability of AI-generated text. DISCERN scoring highlighted a statistically significant improvement in accuracy and quality of human-generated information across two PILs, with a mean score of 55.06 compared to 46.8. FKGL scoring indicated that the school grade required to understand the AI responses was consistently higher than for the information leaflets, at 11.7 versus 1.1 (p <= 0.0001). The number of years spent in education required to understand the ChatGPT-produced PILs was also significantly higher on both the GFS (14.46 vs. 2.0 years; p < 0.0001) and the SMOG (11.0 vs. 3.06 years; p < 0.0001). Conclusion Despite significant advances in the implementation of AI in surgery, AI-generated PILs for common foot and ankle surgical procedures currently lack sufficient quality, depth, and readability, which risks leaving patients misinformed regarding upcoming procedures. We conclude that information from trusted professional bodies should be used to complement a clinical consultation, as there is currently insufficient evidence to support the routine implementation of AI-generated information into the consent process.
Authors: Jaques A; Abdelghafour K; Perkins O; Nuttall H; Haidar O; Johal K
Cluster: 32

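The readability indices in this record (FKRE, FKGL, GFS, SMOG) are standard formulas over sentence, word, and syllable counts. A minimal sketch using the textstat package is shown below; using textstat is an assumption on my part (the study used Readable software), and its syllable heuristics can give slightly different scores.

```python
# Minimal sketch: the four readability indices named in this record,
# computed with the textstat package (pip install textstat). The text
# is a toy example, not a study leaflet; syllable counting is heuristic,
# so scores can differ from tools such as Readable.com.
import textstat

leaflet = (
    "Ankle arthroscopy is a keyhole operation. The surgeon looks inside "
    "your ankle joint with a small camera and treats the problem."
)

print("FKRE:", textstat.flesch_reading_ease(leaflet))   # higher = easier
print("FKGL:", textstat.flesch_kincaid_grade(leaflet))  # US school grade
print("GFS: ", textstat.gunning_fog(leaflet))           # years of education
print("SMOG:", textstat.smog_index(leaflet))            # years of education
```
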
PMID: 40271334
Title: Dr. Chatbot: Investigating the Quality and Quantity of Responses Generated by Three AI Chatbots to Prompts Regarding Carpal Tunnel Syndrome.
Year: 2025
Journal: Cureus
Abstract: Introduction The objective of this study is to investigate the amount and accuracy of statements provided in answers by AI chatbots to prompts about carpal tunnel syndrome. To the authors' knowledge, this is the first study to assess the answers provided by OpenAI's ChatGPT-4o model, AMBOSS GPT, and Google Gemini to common patient-based questions regarding carpal tunnel syndrome, using UpToDate as a standard reference. Objective To determine which chatbot produces the most medically accurate responses. The authors hypothesize that the paid upgrade to ChatGPT-4o (AMBOSS GPT) will give the most accurate responses compared to the two free chatbots, ChatGPT-4o and the Google Gemini 1.5 Flash model. Main outcome measures The number of statements generated by each chatbot and the percentage of those statements that can be directly verified using exact quotations from supporting information available on UpToDate as of December 2024. Results There was a significant difference in the average number of statements provided per prompt by the three chatbots: GPT-4o produced 8.9 more statements than AMBOSS GPT (p = 0.0081916), GPT-4o produced 19.65 more statements than Gemini (p = 0.0000001), and AMBOSS GPT produced 10.75 more statements than Gemini (p < 0.0000001). There was also a significant difference in the percentage of information provided by each chatbot that could be verified: AMBOSS GPT (85.97%) vs. GPT-4o (71.76%) and Gemini (73.53%), with differences of 14.22% (p = 0.0000002) and 12.44% (p = 0.0003969), respectively. Conclusions This study demonstrated that, of the three AI chatbots (AMBOSS GPT, GPT-4o, and Google Gemini), GPT-4o produced the most information per prompt; however, AMBOSS GPT provided a larger percentage of information that could be found supported within information available on UpToDate.
Authors: Buchman ZJ; Savarino VR; Vinarski BM; Jay LF; Phrathep D; Boesler D
Cluster: 32

PMID: 37457604
Title: Advancing Artificial Intelligence for Clinical Knowledge Retrieval: A Case Study Using ChatGPT-4 and Link Retrieval Plug-In to Analyze Diabetic Ketoacidosis Guidelines.
Year: 2023
Journal: Cureus
Abstract: Introduction This case study aimed to enhance the traceability and retrieval accuracy of ChatGPT-4 in medical text by employing a step-by-step systematic approach. The focus was on retrieving clinical answers from three international guidelines on diabetic ketoacidosis (DKA). Methods A systematic methodology was developed to guide the retrieval process. One question was asked per guideline to ensure accuracy and maintain referencing. ChatGPT-4 was utilized to retrieve answers, and the 'Link Reader' plug-in was integrated to facilitate direct access to webpages containing the guidelines. Subsequently, ChatGPT-4 was employed to compile answers while providing citations to the sources. This process was iterated 30 times per question to ensure consistency. In this report, we present our observations regarding the retrieval accuracy, consistency of responses, and the challenges encountered during the process. Results Integrating ChatGPT-4 with the 'Link Reader' plug-in demonstrated notable traceability and retrieval accuracy benefits. The AI model successfully provided relevant and accurate clinical answers based on the analyzed guidelines. Despite occasional challenges with webpage access and minor memory drift, the overall performance of the integrated system was promising. The compilation of the answers was also impressive and held significant promise for further trials. Conclusion The findings of this case study contribute to the utilization of AI text-generation models as valuable tools for medical professionals and researchers. The systematic approach employed in this case study and the integration of the 'Link Reader' plug-in offer a framework for automating medical text synthesis, asking one question at a time before compilation from different sources, which has led to improving AI models' traceability and retrieval accuracy. Further advancements and refinement of AI models and integration with other software utilities hold promise for enhancing the utility and applicability of AI-generated recommendations in medicine and scientific academia. These advancements have the potential to drive significant improvements in everyday medical practice.
Authors: Hamed E; Sharif A; Eid A; Alfehaidi A; Alberry M
Cluster: 0-1

PMID: 38435177
Title: Generative Artificial Intelligence in Patient Education: ChatGPT Takes on Hypertension Questions.
Year: 2024
Journal: Cureus
Abstract: Introduction Uncontrolled hypertension significantly contributes to the development and deterioration of various medical conditions, such as myocardial infarction, chronic kidney disease, and cerebrovascular events. Despite being the most common preventable risk factor for all-cause mortality, only a fraction of affected individuals maintain their blood pressure in the desired range. In recent times, there has been a growing reliance on online platforms for medical information. While providing a convenient source of information, differentiating reliable from unreliable information can be daunting for the layperson, and false information can potentially hinder timely diagnosis and management of medical conditions. The surge in accessibility of generative artificial intelligence (GeAI) technology has led to increased use in obtaining health-related information. This has sparked debates among healthcare providers about the potential for misuse and misinformation while recognizing the role of GeAI in improving health literacy. This study aims to investigate the accuracy of AI-generated information specifically related to hypertension. Additionally, it seeks to explore the reproducibility of information provided by GeAI. Method A nonhuman-subject qualitative study was devised to evaluate the accuracy of information provided by ChatGPT regarding hypertension and its secondary complications. Frequently asked questions on hypertension were compiled by three study staff, internal medicine residents at an ACGME-accredited program, and then reviewed by a physician experienced in treating hypertension, resulting in a final set of 100 questions. Each question was posed to ChatGPT three times, once by each study staff, and the majority response was then assessed against the recommended guidelines. A board-certified internal medicine physician with over eight years of experience further reviewed the responses and categorized them into two classes based on their clinical appropriateness: appropriate (in line with clinical recommendations) and inappropriate (containing errors). Descriptive statistical analysis was employed to assess ChatGPT responses for accuracy and reproducibility. Result Initially, a pool of 130 questions was gathered, of which a final set of 100 questions was selected for the purpose of this study. When assessed against acceptable standard responses, ChatGPT responses were found to be appropriate in 92.5% of cases and inappropriate in 7.5%. Furthermore, ChatGPT had a reproducibility score of 93%, meaning that it could consistently reproduce answers that conveyed similar meanings across multiple runs. Conclusion ChatGPT showcased commendable accuracy in addressing commonly asked questions about hypertension. These results underscore the potential of GeAI in providing valuable information to patients. However, continued research and refinement are essential to evaluate further the reliability and broader applicability of ChatGPT within the medical field.
Authors: Almagazzachi A; Mustafa A; Eighaei Sedeh A; Vazquez Gonzalez AE; Polianovskaia A; Abood M; Abdelrahman A; Muyolema Arce V; Acob T; Saleem B
Cluster: 43

PMID: 39050297
Title: ChatGPT Versus National Eligibility cum Entrance Test for Postgraduate (NEET PG).
Year: 2024
Journal: Cureus
Abstract: Introduction With both suspicion and excitement, artificial intelligence tools are being integrated into nearly every aspect of human existence, including medical sciences and medical education. The newest large language model (LLM) in the class of autoregressive language models is ChatGPT. While ChatGPT's potential to revolutionize clinical practice and medical education is under investigation, further research is necessary to understand its strengths and limitations in this field comprehensively. Methods Two hundred National Eligibility cum Entrance Test for Postgraduate 2023 questions were gathered from various public education websites and individually entered into Microsoft Bing (GPT-4 Version 2.2.1). Microsoft Bing Chatbot is currently the only platform incorporating all of GPT-4's multimodal features, including image recognition. The results were subsequently analyzed. Results Out of 200 questions, ChatGPT-4 answered 129 correctly. The most tested specialties were medicine (15%), obstetrics and gynecology (15%), general surgery (14%), and pathology (10%), respectively. Conclusion This study sheds light on how well GPT-4 performs in addressing the NEET-PG entrance test. ChatGPT has potential as an adjunctive instrument within medical education and clinical settings. Its capacity to react intelligently and accurately in complicated clinical settings demonstrates its versatility.
Authors: Paul S; Govindaraj S; Jk J
Cluster: 21

PMID: 39592896
Title: Transforming Alzheimer's Digital Caregiving through Large Language Models.
Year: 2024
Journal: Current Alzheimer research
Abstract: INTRODUCTION/OBJECTIVE: Alzheimer's Disease and Related Dementias (AD/ADRD) present significant caregiving challenges, with increasing burdens on informal caregivers. This study examines the potential of AI-driven Large Language Models (LLMs) in developing digital caregiving strategies for AD/ADRD. The objectives include analyzing existing caregiving education materials (CEMs) and mobile application descriptions (MADs) and aligning key caregiving tasks with digital functions across different stages of disease progression. METHODS: We analyzed 38 CEMs from the National Library of Medicine's MedlinePlus, along with associated hyperlinked web resources, and 57 MADs focused on AD digital caregiving. Using ChatGPT 3.5, essential caregiving tasks were extracted and matched with digital functionalities suitable for each stage of AD progression, while also highlighting digital literacy requirements for caregivers. RESULTS: The analysis categorizes AD caregiving into 4 stages (Pre-Clinical, Mild, Moderate, and Severe), identifying key tasks such as behavior monitoring, daily assistance, direct supervision, and ensuring a safe environment. These tasks were supported by digital aids, including memory-enhancing apps, Global Positioning System (GPS) tracking, voice-controlled devices, and advanced GPS tracking for comprehensive care. Additionally, 6 essential digital literacy skills for AD/ADRD caregiving were identified: basic digital skills, communication, information management, safety and privacy, healthcare knowledge, and caregiver coordination, highlighting the need for tailored training. CONCLUSION: The findings advocate for an LLM-driven strategy in designing digital caregiving interventions, particularly emphasizing a novel paradigm in AD/ADRD support, offering adaptive assistance that evolves with caregivers' needs, thereby enhancing their shared decision-making and patient care capabilities.
Authors: Kim S; Han DY; Bae J
Cluster: 10

PMID: 37271011
Title: ChatGPT in medical imaging higher education.
Year: 2023
Journal: Radiography (London, England : 1995)
Abstract: INTRODUCTION: Academic integrity among radiographers and nuclear medicine technologists/scientists in both higher education and scientific writing has been challenged by advances in artificial intelligence (AI). The recent release of ChatGPT, a chatbot powered by GPT-3.5 capable of producing accurate and human-like responses to questions in real-time, has redefined the boundaries of academic and scientific writing. These boundaries require objective evaluation. METHOD: ChatGPT was tested against six subjects across the first three years of the medical radiation science undergraduate course for both exams (n = 6) and written assignment tasks (n = 3). ChatGPT submissions were marked against standardised rubrics and results compared to student cohorts. Submissions were also evaluated by Turnitin for similarity and AI scores. RESULTS: ChatGPT powered by GPT-3.5 performed below the average student performance in all written tasks, with an increasing disparity as subjects advanced. ChatGPT performed better than the average student in foundation or general subject examinations where shallow responses meet learning outcomes. For discipline-specific subjects, ChatGPT lacked the depth, breadth, and currency of insight to provide pass-level answers. CONCLUSION: ChatGPT simultaneously poses a risk to academic integrity in writing and assessment while affording a tool for enhanced learning environments. These risks and benefits are likely to be restricted to learning outcomes of lower taxonomies. Both risks and benefits are likely to be constrained by higher order taxonomies. IMPLICATIONS FOR PRACTICE: ChatGPT powered by GPT-3.5 has limited capacity to support student cheating, introduces errors and fabricated information, and is readily identified by software as AI generated. Lack of depth of insight and appropriateness for professional communication also limits capacity as a learning enhancement tool.
Authors: Currie G; Singh C; Nelson T; Nabasenja C; Al-Hayek Y; Spuur K
Cluster: 0-1

PMID: 39938135
Title: Performance evaluation of ChatGPT-4.0 and Gemini on image-based neurosurgery board practice questions: A comparative analysis.
Year: 2025
Journal: Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia
Abstract: INTRODUCTION: Artificial intelligence (AI) has gained significant attention in medicine, particularly in neurosurgery, where its potential is often discussed and occasionally feared. Large language models (LLMs), such as ChatGPT-4.0 (OpenAI) and Gemini (formerly known as Bard, Google DeepMind), have shown promise in text-based tasks but remain underexplored in image-based domains, which are essential for neurosurgery. This study evaluates the performance of ChatGPT-4.0 and Gemini on image-based neurosurgery board practice questions, focusing on their ability to interpret visual data, a critical aspect of neurosurgical decision-making. METHODS: A total of 250 image-based questions selected from two neurosurgical board review books were obtained. Each question was presented to both ChatGPT-4.0 and Gemini in its original format, including images such as MRI scans, pathology slides, and surgical visuals. The models were tasked with answering the questions, and their accuracy was determined based on the number of correct responses. RESULTS: ChatGPT-4.0 accurately answered 135/250 (54.0%) of questions, while Gemini correctly answered 24/250 (9.6%). ChatGPT accurately answered 85/129 (65.9%) of The Comprehensive Neurosurgery Board Preparation Book and 50/121 (41.3%) of the Neurosurgery Board Review book. Gemini answered 23/129 (17.8%) of The Comprehensive Neurosurgery Board Preparation Book and only 1 of 121 (0.8%) questions from Neurosurgery Board Review. When comparing ChatGPT-4.0 vs. Gemini, ChatGPT significantly outperformed Gemini in accuracy (p < 0.0001). The overall refusal rate for Gemini was 102/250 (40.8%), while ChatGPT attempted to answer all questions. CONCLUSIONS: While ChatGPT-4.0 demonstrated some capacity to interpret image-based neurosurgery board questions, both models exhibited significant limitations, particularly in processing and analyzing complex visual data. These findings emphasize the need for targeted advancements in AI to improve visual interpretation in neurosurgical education and practice.
Authors: McNulty AM; Valluri H; Gajjar AA; Custozzo A; Field NC; Paul AR
Cluster: 0-1

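For the headline comparison in this record (135/250 vs. 24/250 correct), a two-proportion test of the kind behind the reported p < 0.0001 can be sketched as follows; the choice of a chi-square test is an assumption, since the abstract does not name the exact test used.

```python
# Minimal sketch: a two-proportion comparison for the headline result in
# this record (ChatGPT-4.0: 135/250 correct vs. Gemini: 24/250). The
# abstract does not name its test; chi-square is one standard choice.
from scipy.stats import chi2_contingency

#            correct  incorrect
table = [[135, 250 - 135],   # ChatGPT-4.0
         [24, 250 - 24]]     # Gemini

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p far below 0.0001
```
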
PMID: 38173951
Title: Comparing Artificial Intelligence and Senior Residents in Oral Lesion Diagnosis: A Comparative Study.
Year: 2024
Journal: Cureus
Abstract: INTRODUCTION: Artificial intelligence (AI) is a field of computer science that seeks to build intelligent machines that can carry out tasks that usually necessitate human intelligence. AI may help dentists with a variety of dental tasks, including clinical diagnosis and treatment planning. This study aims to compare the performance of AI and oral medicine residents in diagnosing different cases and providing treatment, and to determine whether AI is reliable enough to assist them in their field of work. METHODS: The study conducted a comparative analysis of the responses from third- and fourth-year residents trained in Oral Medicine and Pathology at King Saud University, College of Dentistry. The residents were given a closed multiple-choice test consisting of 19 questions with four response options labeled A-D and one question with five response options labeled A-E. The test was administered via Google Forms, and each resident's response was stored electronically in an Excel sheet (Microsoft Corp., Redmond, WA). The residents' answers were then compared to the responses generated by three major language models: OpenAI, Stablediffusion, and PopAI. The questions were inputted into the language models in the same format as the original test, and prior to each question, a new AI chat session was created to eliminate memory retention bias. The input was done on November 19, 2023, the same day the official multiple-choice test was administered. The study had a sample size of 20 residents trained in Oral Medicine and Pathology at King Saud University, College of Dentistry, consisting of both third-year and fourth-year residents. RESULTS: The responses of three large language models (LLMs), OpenAI, Stablediffusion, and PopAI, were compared with the responses of 20 senior residents across 20 clinical cases on oral lesion diagnosis. Significant variations were observed in the responses to only two questions (10%); for the remaining questions, there were no significant differences. The median (IQR) score of the LLMs was 50.0 (45.0 to 60.0), with a minimum of 40 (for Stablediffusion) and a maximum of 70 (for OpenAI). The median (IQR) score of the senior residents was 65.0 (55.0-75.0). The highest and lowest scores of residents were 90 and 40, respectively. There was no significant difference in the percent scores of residents and LLMs (p = 0.211). The agreement level was measured using the Kappa value. The agreement among senior dental residents was observed to be weak, with a Kappa value of 0.396. In contrast, the agreement among the LLMs demonstrated a moderate level, with a Kappa value of 0.622, suggesting a more cohesive alignment in responses among the artificial intelligence models. When comparing residents' responses with those generated by the individual models (OpenAI, Stablediffusion, and PopAI), the agreement levels were consistently weak, with Kappa values of 0.402, 0.381, and 0.392, respectively. CONCLUSION: The current study reveals no significant difference in response scores; by contrast, agreement among the residents was weak, while agreement among the LLMs was moderate. Dentists should consider that AI can be very beneficial in providing diagnosis and treatment and use it to assist them.
Authors: Albagieh H; Alzeer ZO; Alasmari ON; Alkadhi AA; Naitah AN; Almasaad KF; Alshahrani TS; Alshahrani KS; Almahmoud MI
Cluster: 32

PMID: 40425883
Title: Evaluating Surgical Results in Breast Cancer with Artificial Intelligence.
Year: 2025
Journal: Aesthetic plastic surgery
Abstract: INTRODUCTION: Artificial intelligence (AI) is rapidly transforming healthcare, with increasing applications in surgical evaluation. In breast cancer surgery, achieving aesthetic symmetry is essential for patient satisfaction and emotional well-being. While human evaluation remains fundamental, AI-driven symmetry assessment promises objective alternatives. This study evaluates the performance of publicly available AI models in breast symmetry assessment and compares them with Pyolo8, a custom AI model developed by the authors. Additionally, the study explores the potential emotional impact and ethical considerations of AI-generated assessments in postoperative breast cancer patients. METHODS: Sixty-eight patients who underwent breast reconstruction were evaluated with the use of publicly available AI models and contrasted with the authors' AI model, Pyolo8. All results were evaluated by human observers. RESULTS: The ChatGPT 4o and Pyolo8 AI models showed a statistically significant moderate to strong positive correlation with human observers for postoperative assessment. Direct interaction between the AI models and patients was censored due to concerns of misinterpretation. CONCLUSIONS: Both ChatGPT and Pyolo8 showed moderate to strong correlation with humans, but ChatGPT demonstrated superior communication skills. However, AI systems may lack the subtlety and empathy required for direct patient interactions, as vulnerable postoperative patients receiving an AI-generated symmetry assessment without appropriate clinical context may experience emotional distress or misinterpret the results. Human oversight and empathetic communication remain essential to ensure quality care while AI is increasingly integrated into medicine. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Authors: Kenig N; Monton Echeverria J; Muntaner Vives A
Cluster: 32

PMID: 38451040
Title: Conformity of ChatGPT recommendations with the AUA/SUFU guideline on postprostatectomy urinary incontinence.
Year: 2024
Journal: Neurourology and urodynamics
Abstract: INTRODUCTION: Artificial intelligence (AI) shows immense potential in medicine, and Chat generative pretrained transformer (ChatGPT) has been used for different purposes in the field. However, it may not match the complexity and nuance of certain medical scenarios. This study evaluates the accuracy of ChatGPT 3.5 and 4 in providing recommendations regarding the management of postprostatectomy urinary incontinence (PPUI), considering The Incontinence After Prostate Treatment: AUA/SUFU Guideline as the best-practice benchmark. MATERIALS AND METHODS: A set of questions based on the AUA/SUFU Guideline was prepared. Queries included 10 conceptual questions and 10 case-based questions. All questions were open and entered into ChatGPT with a recommendation to limit the answer to 200 words, for greater objectivity. Responses were graded as correct (1 point), partially correct (0.5 points), or incorrect (0 points). Performances of versions 3.5 and 4 of ChatGPT were analyzed overall and separately for the conceptual and the case-based questions. RESULTS: ChatGPT 3.5 scored 11.5 out of 20 points (57.5% accuracy), while ChatGPT 4 scored 18 (90.0%; p = 0.031). In the conceptual questions, ChatGPT 3.5 provided accurate answers to six questions along with one partially correct response and three incorrect answers, for a final score of 6.5. In contrast, ChatGPT 4 provided correct answers to eight questions and partially correct answers to two questions, scoring 9.0. In the case-based questions, ChatGPT 3.5 scored 5.0, while ChatGPT 4 scored 9.0. The domains where ChatGPT performed worst were evaluation, treatment options, surgical complications, and special situations. CONCLUSION: ChatGPT 4 demonstrated superior performance compared to ChatGPT 3.5 in providing recommendations for the management of PPUI, using the AUA/SUFU Guideline as a benchmark. Continuous monitoring is essential for evaluating the development and precision of AI-generated medical information.
Authors: Pinto VBP; de Azevedo MF; Wroclawski ML; Gentile G; Jesus VLM; de Bessa Junior J; Nahas WC; Sacomani CAR; Sandhu JS; Gomes CM
Cluster: 0-1

PMID: 39956557
Title: Human versus artificial intelligence: evaluating ChatGPT's performance in conducting published systematic reviews with meta-analysis in chronic pain research.
Year: 2025
Journal: Regional anesthesia and pain medicine
Abstract: INTRODUCTION: Artificial intelligence (AI), particularly large language models like Chat Generative Pre-Trained Transformer (ChatGPT), has demonstrated potential in streamlining research methodologies. Systematic reviews and meta-analyses, often considered the pinnacle of evidence-based medicine, are inherently time-intensive and demand meticulous planning, rigorous data extraction, thorough analysis, and careful synthesis. Despite promising applications of AI, its utility in conducting systematic reviews with meta-analysis remains unclear. This study evaluated ChatGPT's accuracy in conducting key tasks of a systematic review with meta-analysis. METHODS: This validation study used data from a published meta-analysis on emotional functioning after spinal cord stimulation. ChatGPT-4o performed title/abstract screening, full-text study selection, and data pooling for this systematic review with meta-analysis. Comparisons were made against human-executed steps, which were considered the gold standard. Outcomes of interest included accuracy, sensitivity, specificity, positive predictive value, and negative predictive value for screening and full-text review tasks. We also assessed for discrepancies in pooled effect estimates and forest plot generation. RESULTS: For title and abstract screening, ChatGPT achieved an accuracy of 70.4%, sensitivity of 54.9%, and specificity of 80.1%. In the full-text screening phase, accuracy was 68.4%, sensitivity 75.6%, and specificity 66.8%. ChatGPT successfully pooled data for five forest plots, achieving 100% accuracy in calculating pooled mean differences, 95% CIs, and heterogeneity estimates (I(2) score and tau-squared values) for most outcomes, with minor discrepancies in tau-squared values (range 0.01-0.05). Forest plots showed no significant discrepancies. CONCLUSION: ChatGPT demonstrates modest to moderate accuracy in screening and study selection tasks, but performs well in data pooling and meta-analytic calculations. These findings underscore the potential of AI to augment systematic review methodologies, while also emphasizing the need for human oversight to ensure accuracy and integrity in research workflows.
Authors: Purewal A; Fautsch K; Klasova J; Hussain N; D'Souza RS
Cluster: 0-1

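The screening outcomes in this record (accuracy, sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix against the human gold standard. A minimal sketch with illustrative counts, not the study's data:

```python
# Minimal sketch: the screening metrics reported in this record, all
# derived from a 2x2 confusion matrix against the human gold standard.
# Counts below are illustrative, not taken from the study.
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall of truly eligible studies
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

print(screening_metrics(tp=45, fp=30, tn=120, fn=37))
```
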
PMID: 39963119
Title: Comparative analysis of ChatGPT and Gemini (Bard) in medical inquiry: a scoping review.
Year: 2025
Journal: Frontiers in digital health
Abstract: INTRODUCTION: Artificial intelligence and machine learning are popular interconnected technologies. AI chatbots like ChatGPT and Gemini show considerable promise in medical inquiries. This scoping review aims to assess the accuracy and response length (in characters) of ChatGPT and Gemini in medical applications. METHODS: The eligible databases were searched to find studies published in English from January 1 to October 20, 2023. The inclusion criteria consisted of studies that focused on using AI in medicine and assessed outcomes based on the accuracy and character count (length) of ChatGPT and Gemini. Data collected from the studies included the first author's name, the country where the study was conducted, the type of study design, publication year, sample size, medical specialty, and the accuracy and response length. RESULTS: The initial search identified 64 papers, with 11 meeting the inclusion criteria, involving 1,177 samples. ChatGPT showed higher accuracy in radiology (87.43% vs. Gemini's 71%) and shorter responses (907 vs. 1,428 characters). Similar trends were noted in other specialties. However, Gemini outperformed ChatGPT in emergency scenarios (87% vs. 77%) and in renal diets with low potassium and high phosphorus (79% vs. 60% and 100% vs. 77%). Statistical analysis confirms that ChatGPT has greater accuracy and shorter responses than Gemini in medical studies, with a p-value of <.001 for both metrics. CONCLUSION: This scoping review suggests that ChatGPT may demonstrate higher accuracy and provide shorter responses than Gemini in medical studies.
Authors: Fattah FH; Salih AM; Salih AM; Asaad SK; Ghafour AK; Bapir R; Abdalla BA; Othman S; Ahmed SM; Hasan SJ; Mahmood YM; Kakamad FH
Cluster: 43

PMID: 38188865
Title: Evaluation of information provided to patients by ChatGPT about chronic diseases in Spanish language.
Year: 2024
Journal: Digital health
Abstract: INTRODUCTION: Artificial intelligence has shown exponential growth in medicine. The ChatGPT language model has been highlighted as a possible source of patient information. This study evaluates the reliability and readability of ChatGPT-generated patient information on chronic diseases in Spanish. METHODS: Questions frequently asked by patients on the internet about diabetes mellitus, heart failure, rheumatoid arthritis (RA), chronic kidney disease (CKD), and systemic lupus erythematosus (SLE) were submitted to ChatGPT. Reliability was assessed by rating responses as (1) comprehensive, (2) correct but inadequate, (3) some correct and some incorrect, or (4) completely incorrect, with ratings divided between "good" (1 and 2) and "bad" (3 and 4). Readability was evaluated with the adapted Flesch and Szigriszt formulas. RESULTS: 71.67% of the answers were "good," with none qualified as "completely incorrect." Better reliability was observed for questions on diabetes and RA versus heart failure (p = 0.02). In readability, responses were "moderately difficult" (54.73, interquartile range (IQR) 51.59-58.58), with better results for CKD (median 56.1, IQR 53.5-59.1) and RA (56.4, IQR 53.7-60.7) than for heart failure responses (median 50.6, IQR 46.3-53.8). CONCLUSION: Our study suggests that ChatGPT can be a reliable source of information in Spanish for patients with chronic diseases, although reliability varies between conditions; however, the readability of its answers needs to improve before it can be recommended as a useful tool for patients.
Authors: Soto-Chavez MJ; Bustos MM; Fernandez-Avila DG; Munoz OM
Cluster: 0-1

PMID: 37596194
Title: Evaluating the performance of ChatGPT in answering questions related to pediatric urology.
Year: 2024
Journal: Journal of pediatric urology
Abstract: INTRODUCTION: Artificial intelligence is advancing in various domains, including medicine, and its progress is expected to continue in the future. OBJECTIVE: This research aimed to assess the precision and consistency of ChatGPT's responses to commonly asked inquiries related to pediatric urology. MATERIALS AND METHODS: We examined commonly posed inquiries regarding pediatric urology found on urology association websites, hospitals, and social media platforms. Additionally, we referenced the recommendation tables in the European Association of Urology's (EAU) 2022 Guidelines on Pediatric Urology, which contained robust data at the strong recommendation level. All questions were systematically presented to ChatGPT's May 23 version, and two expert urologists independently assessed and assigned scores ranging from 1 to 4 to each response. RESULTS: One hundred thirty-seven questions about pediatric urology were included in the study. 92.0% of the answers were completely correct. The completely correct rate for the questions prepared according to the strong recommendations of the EAU guideline was 93.6%. No question was answered completely wrong. The similarity rates of the answers to the repeated questions were between 93.8% and 100%. CONCLUSION: ChatGPT provided satisfactory responses to inquiries related to pediatric urology. Despite its limitations, it is foreseeable that this continuously evolving platform will occupy a crucial position in the healthcare industry.
Authors: Caglar U; Yildiz O; Meric A; Ayranci A; Gelmis M; Sarilar O; Ozgor F
Cluster: 0-1

PMID: 38749814
Title: Surgeons vs ChatGPT: Assessment and Feedback Performance Based on Real Surgical Scenarios.
Year: 2024
Journal: Journal of surgical education
Abstract: INTRODUCTION: Artificial intelligence tools are being progressively integrated into medicine and surgical education. Large language models, such as ChatGPT, could provide relevant feedback aimed at improving surgical skills. The purpose of this study is to assess ChatGPT's ability to provide feedback based on surgical scenarios. METHODS: Surgical situations were transformed into texts using a neutral narrative. Texts were evaluated by ChatGPT 4.0 and 3 surgeons (A, B, C) after a brief instruction was delivered: identify errors and provide feedback accordingly. Surgical residents were provided with each of the situations and the feedback obtained during the first stage, as written by each surgeon and ChatGPT, and were asked to assess the utility of the feedback (FCUR) and its quality (FQ). As a control measurement, an Education Expert (EE) and a Clinical Expert (CE) were asked to assess FCUR and FQ. RESULTS: Regarding residents' evaluations, 96.43% of the time the outputs provided by ChatGPT were considered useful, comparable to what surgeons B and C obtained. Assessing FQ, ChatGPT and all surgeons received similar scores. Regarding the EE's assessment, ChatGPT obtained a significantly higher FQ score than surgeons A and B (p = 0.019; p = 0.033), with a median score of 8 vs. 7 and 7.5, respectively, and no difference with respect to surgeon C (score of 8; p = 0.2). Regarding the CE's assessment, surgeon B obtained the highest FQ score, while ChatGPT received scores comparable to those of surgeons A and C. When participants were asked to identify the source of the feedback, residents, the CE, and the EE perceived ChatGPT's outputs as human-provided in 33.9%, 28.5%, and 14.3% of cases, respectively. CONCLUSION: When given brief written surgical situations, ChatGPT was able to identify errors with a detection rate comparable to that of experienced surgeons and to generate feedback that was considered useful for skill improvement in a surgical context, performing as well as surgical instructors across assessments made by general surgery residents, an experienced surgeon, and a nonsurgeon feedback expert.
Authors: Jarry Trujillo C; Vela Ulloa J; Escalona Vivas G; Grasset Escobar E; Villagran Gutierrez I; Achurra Tirado P; Varas Cohen J
Cluster: 32

PMID: 39985409
Title: ChatGPT Achieves Only Fair Agreement with ACFAS Expert Panelist Clinical Consensus Statements.
Year: 2025
Journal: Foot & ankle specialist
Abstract: INTRODUCTION: As artificial intelligence (AI) becomes increasingly integrated into medicine and surgery, its applications are expanding rapidly, from aiding clinical documentation to providing patient information. However, its role in medical decision-making remains uncertain. This study evaluates an AI language model's alignment with clinical consensus statements in foot and ankle surgery. METHODS: Clinical consensus statements from the American College of Foot and Ankle Surgeons (ACFAS; 2015-2022) were collected and rated by ChatGPT-o1 as inappropriate, neither appropriate nor inappropriate, or appropriate. Ten repetitions of the statements were entered into ChatGPT-o1 in a random order, and the model was prompted to assign a corresponding rating. The AI-generated scores were compared to the expert panel's ratings, and intra-rater analysis was performed. RESULTS: The analysis of 9 clinical consensus documents and 129 statements revealed an overall Cohen's kappa of 0.29 (95% CI: 0.12, 0.46), indicating fair alignment between expert panelists and ChatGPT. Ankle arthritis and heel pain showed the highest concordance at 100%, while flatfoot exhibited the lowest agreement at 25%, reflecting variability between ChatGPT and the expert panelists. Among the ChatGPT ratings, Cohen's kappa values ranged from 0.41 to 0.92, highlighting variability in internal reliability across topics. CONCLUSION: ChatGPT achieved overall fair agreement and demonstrated variable consistency when repetitively rating ACFAS expert panel clinical consensus statements across a variety of topics. These data reflect the need for further study of the causes, impacts, and solutions for this disparity between artificial and human intelligence. LEVEL OF EVIDENCE: Level IV: Retrospective cohort study.
Authors: Casciato DJ; Calhoun J
Cluster: 32

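Cohen's kappa, the agreement statistic in this record, corrects raw agreement for chance. A minimal sketch with illustrative ratings (not the ACFAS data), using scikit-learn:

```python
# Minimal sketch: Cohen's kappa between an expert panel and a model,
# as in this record. Ratings are illustrative, not the ACFAS data.
from sklearn.metrics import cohen_kappa_score

panel = ["appropriate", "appropriate", "inappropriate", "neither",
         "appropriate", "inappropriate", "appropriate", "neither"]
model = ["appropriate", "neither", "inappropriate", "neither",
         "appropriate", "appropriate", "appropriate", "inappropriate"]

kappa = cohen_kappa_score(panel, model)
# By the usual Landis-Koch bands, 0.21-0.40 counts as "fair" agreement.
print(f"kappa = {kappa:.2f}")
```
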
PMID: 38667587
Title: AI and Ethics: A Systematic Review of the Ethical Considerations of Large Language Model Use in Surgery Research.
Year: 2024
Journal: Healthcare (Basel, Switzerland)
Abstract: INTRODUCTION: As large language models receive greater attention in medical research, the investigation of ethical considerations is warranted. This review aims to explore surgery literature to identify ethical concerns surrounding these artificial intelligence models and evaluate how autonomy, beneficence, nonmaleficence, and justice are represented within these ethical discussions to provide insights in order to guide further research and practice. METHODS: A systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Five electronic databases were searched in October 2023. Eligible studies included surgery-related articles that focused on large language models and contained adequate ethical discussion. Study details, including specialty and ethical concerns, were collected. RESULTS: The literature search yielded 1179 articles, with 53 meeting the inclusion criteria. Plastic surgery, orthopedic surgery, and neurosurgery were the most represented surgical specialties. Autonomy was the most explicitly cited ethical principle. The most frequently discussed ethical concern was accuracy (n = 45, 84.9%), followed by bias, patient confidentiality, and responsibility. CONCLUSION: The ethical implications of using large language models in surgery are complex and evolving. The integration of these models into surgery necessitates continuous ethical discourse to ensure responsible and ethical use, balancing technological advancement with human dignity and safety.
Authors: Pressman SM; Borna S; Gomez-Cabello CA; Haider SA; Haider C; Forte AJ
Cluster: 10

PMID: 39667153
Title: In the face of confounders: Atrial fibrillation detection - Practitioners vs. ChatGPT.
Year: 2025
Journal: Journal of electrocardiology
Abstract: INTRODUCTION: Atrial fibrillation (AF) is the most common arrhythmia in clinical practice, yet interpretation concerns among healthcare providers persist. Confounding factors contribute to false-positive and false-negative AF diagnoses, leading to potential omissions. Artificial intelligence advancements show promise in electrocardiogram (ECG) interpretation. We sought to examine the diagnostic accuracy of ChatGPT-4omni (GPT-4o), equipped with image evaluation capabilities, in interpreting ECGs with confounding factors and to compare its performance to that of physicians. METHODS: Twenty ECG cases, divided into Group A (10 cases of AF or atrial flutter) and Group B (10 cases of sinus or another atrial rhythm), were crafted into multiple-choice questions. A total of 100 practitioners (25 from each of emergency medicine, internal medicine, primary care, and cardiology) were tasked with identifying the underlying rhythm. Next, GPT-4o was prompted in five separate sessions. RESULTS: GPT-4o performed inadequately, averaging 3 (+/-2) correct answers on Group A questions and 5.40 (+/-1.34) on Group B questions. Upon examining accuracy across all ECG questions, no significant difference was found between GPT-4o, internists, and primary care physicians (p = 0.952 and p = 0.852, respectively). Cardiologists outperformed the other medical disciplines and GPT-4o (p < 0.001), while emergency physicians followed in accuracy, though the comparison to GPT-4o indicated only a trend (p = 0.068). CONCLUSION: GPT-4o demonstrated suboptimal accuracy with significant under- and over-recognition of AF in ECGs with confounding factors. Despite its potential as a supportive tool for ECG interpretation, its performance did not surpass that of medical practitioners, underscoring the continued importance of human expertise in complex diagnostics.
Authors: Avidan Y; Tabachnikov V; Court OB; Khoury R; Aker A
Cluster: 10

PMID: 38329526
Title: Is generative pre-trained transformer artificial intelligence (Chat-GPT) a reliable tool for guidelines synthesis? A preliminary evaluation for biologic CRSwNP therapy.
Year: 2024
Journal: European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
Abstract: INTRODUCTION: Biologic therapies for Chronic Rhinosinusitis with Nasal Polyps (CRSwNP) have emerged as an auspicious treatment alternative. However, the ideal patient population, dosage, and treatment duration are yet to be well-defined. Moreover, biologic therapy has disadvantages, such as high costs and limited access. The proposal of a novel Artificial Intelligence (AI) algorithm offers an intriguing solution for optimizing decision-making protocols. METHODS: The AI algorithm was initially programmed to conduct a systematic literature review searching for the current primary guidelines on biologics' clinical efficacy and safety in treating CRSwNP. The review included a total of 12 studies: 6 systematic reviews, 4 expert consensus guidelines, and 2 surveys. Simultaneously, two independent human researchers conducted a literature search to compare the results. Subsequently, the AI was tasked to critically analyze the identified papers, highlighting strengths and weaknesses, thereby creating a decision-making algorithm and pyramid flow chart. RESULTS: The studies evaluated various biologics, including monoclonal antibodies targeting Interleukin-5 (IL-5), IL-4, IL-13, and Immunoglobulin E (IgE), assessing their effectiveness in different patient populations, such as those with comorbid asthma or refractory CRSwNP. Dupilumab, a monoclonal antibody targeting the IL-4 receptor alpha subunit, demonstrated significant improvement in nasal symptoms and quality of life in patients with CRSwNP in several randomized controlled trials and systematic reviews. Similarly, mepolizumab and reslizumab, which target IL-5, have also shown efficacy in reducing nasal polyp burden and improving symptoms in patients with CRSwNP, particularly those with comorbid asthma. However, additional studies are required to confirm the long-term efficacy and safety of these biologics in treating CRSwNP. CONCLUSIONS: Biologic therapies have surfaced as a promising treatment option for patients with severe or refractory CRSwNP; however, the optimal patient population, dosage, and treatment duration are yet to be defined. The application of AI in decision-making protocols and the creation of therapeutic algorithms for biologic drug selection could offer fascinating future prospects in the management of CRSwNP.
Authors: Maniaci A; Saibene AM; Calvo-Henriquez C; Vaira L; Radulesco T; Michel J; Chiesa-Estomba C; Sowerby L; Lobo Duro D; Mayo-Yanez M; Maza-Solano J; Lechien JR; La Mantia I; Cocuzza S
Cluster: 0-1

39119408
Is ChatGPT a Reliable Source of Patient Information on Asthma?
2,024
Cureus
INTRODUCTION: ChatGPT (OpenAI, San Francisco, CA, USA) is a novel artificial intelligence (AI) application that is used by millions of people, and the numbers are growing by the day. Because it has the potential to be a source of patient information, this study aimed to evaluate the ability of ChatGPT to answer frequently asked questions (FAQs) about asthma with consistent reliability, acceptability, and easy readability. METHODS: We collected 30 FAQs about asthma from the Global Initiative for Asthma website. ChatGPT was asked each question twice, by two different users, to assess consistency. The responses were evaluated by five board-certified internal medicine physicians for reliability and acceptability. The consistency of responses was determined by the differences in evaluation between the two answers to the same question. The readability of all responses was measured using the Flesch Reading Ease Scale (FRES), the Flesch-Kincaid Grade Level (FKGL), and the Simple Measure of Gobbledygook (SMOG). RESULTS: Sixty responses were collected for evaluation. Fifty-six (93.33%) of the responses were of good reliability. The average rating of the responses was 3.65 out of 4 total points. The evaluators found 78.3% (n=47) of the responses acceptable as a standalone answer for an asthmatic patient. Only two (6.67%) of the 30 questions had inconsistent answers. The average readability of all responses was 33.50+/-14.37 on the FRES, 12.79+/-2.89 on the FKGL, and 13.47+/-2.38 on the SMOG. CONCLUSION: Compared to online websites, we found that ChatGPT can be a reliable and acceptable source of information for asthma patients in terms of information quality. However, all responses were of difficult readability, and none followed the recommended readability levels. Therefore, the readability of this AI application requires improvement to be more suitable for patients.
Alabdulmohsen DM; Almahmudi MA; Alhashim JN; Almahdi MH; Alkishy EF; Almossabeh MJ; Alkhalifah SA
43
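The readability indices cited in the record above (FRES, FKGL, and SMOG) are fixed published formulas and can be reproduced directly. A minimal Python sketch follows; the vowel-group syllable counter is an assumption of this sketch (validated tools use dictionary-based counting), so scores will deviate slightly from those reported.

```python
import re
import math

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels (assumption;
    # dictionary-based counters are more accurate).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    n = len(words)
    return {
        # Flesch Reading Ease: higher = easier (roughly 0-100).
        "FRES": 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n),
        # Flesch-Kincaid Grade Level: approximate US school grade.
        "FKGL": 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59,
        # SMOG grade; formally defined for samples of >= 30 sentences.
        "SMOG": 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291,
    }

print(readability("Asthma is a chronic disease of the airways. "
                  "Inhaled corticosteroids reduce inflammation."))
```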
39329220
ChatGPT Solving Complex Kidney Transplant Cases: A Comparative Study With Human Respondents.
2,024
Clinical transplantation
INTRODUCTION: ChatGPT has shown the ability to answer clinical questions in general medicine but may be constrained by the specialized nature of kidney transplantation. Thus, it is important to explore how ChatGPT can be used in kidney transplantation and how its knowledge compares to human respondents. METHODS: We prompted ChatGPT versions 3.5, 4, and 4 Visual (4 V) with 12 multiple-choice questions related to six kidney transplant cases from 2013 to 2015 American Society of Nephrology (ASN) fellowship program quizzes. We compared the performance of ChatGPT with US nephrology fellowship program directors, nephrology fellows, and the audience of the ASN's annual Kidney Week meeting. RESULTS: Overall, ChatGPT 4 V correctly answered 10 out of 12 questions, showing a performance level comparable to nephrology fellows (group majority correctly answered 9 of 12 questions) and training program directors (11 of 12). This surpassed ChatGPT 4 (7 of 12 correct) and 3.5 (5 of 12). All three ChatGPT versions failed to correctly answer questions where the consensus among human respondents was low. CONCLUSION: Each iterative version of ChatGPT performed better than the prior version, with version 4 V achieving performance on par with nephrology fellows and training program directors. While it shows promise in understanding and answering kidney transplantation questions, ChatGPT should be seen as a complementary tool to human expertise rather than a replacement.
Mankowski MA; Jaffe IS; Xu J; Bae S; Oermann EK; Aphinyanaphongs Y; McAdams-DeMarco MA; Lonze BE; Orandi BJ; Stewart D; Levan M; Massie A; Gentry S; Segev DL
21
37426402
Transforming Medical Education: Assessing the Integration of ChatGPT Into Faculty Workflows at a Caribbean Medical School.
2,023
Cureus
INTRODUCTION: ChatGPT is a Large Language Model (LLM) which allows for natural language processing and interactions with users in a conversational style. Since its release in 2022, it has had a significant impact in many occupational fields, including medical education. We sought to gain insight into the extent and type of usage of ChatGPT at a Caribbean medical school, the American University of Antigua College of Medicine (AUA). METHODS: We administered a questionnaire to 87 full-time faculty at the school via email. We quantified and made graphical representations of the results via Qualtrics Experience Management software (QualtricsXM, Qualtrics, Provo, UT). Survey results were investigated using bar graph comparisons of absolute numbers and percentages for various categories related to ChatGPT usage, and descriptive statistics for Likert scale questions. RESULTS: We found an estimated 33% of faculty were currently using ChatGPT. There was broad acceptance of the program by those who were using it and most believed it should be an option for students. The primary task ChatGPT was being used for was multiple choice question (MCQ) generation. The primary concern faculty had was incorrect information being included in ChatGPT output. CONCLUSION: ChatGPT has been quickly adopted by a subset of the college faculty, demonstrating its growing acceptance. Given the level of approval expressed about the program, we believe ChatGPT will continue to form an important and expanding part of faculty workflows at AUA and in medical education in general.
Cross J; Robinson R; Devaraju S; Vaughans A; Hood R; Kayalackakom T; Honnavar P; Naik S; Sebastian R
0-1
38341993
Human intelligence versus Chat-GPT: who performs better in correctly classifying patients in triage?
2,024
The American journal of emergency medicine
INTRODUCTION: Chat-GPT is rapidly emerging as a promising and potentially revolutionary tool in medicine. One of its possible applications is the stratification of patients according to the severity of clinical conditions and prognosis during triage evaluation in the emergency department (ED). METHODS: Using a randomly selected sample of 30 vignettes recreated from real clinical cases, we compared the concordance in risk stratification of ED patients between healthcare personnel and Chat-GPT. Concordance was assessed with Cohen's kappa, and performance was evaluated with the area under the receiver operating characteristic curve (AUROC). Among the outcomes, we considered mortality within 72 h, the need for hospitalization, and the presence of a severe or time-dependent condition. RESULTS: The concordance in triage code assignment between triage nurses and Chat-GPT was 0.278 (unweighted Cohen's kappa; 95% confidence interval: 0.231-0.388). For all outcomes, the AUROC values were higher for the triage nurses. The most relevant difference was found in 72-h mortality, where triage nurses showed an AUROC of 0.910 (0.757-1.000) compared to only 0.669 (0.153-1.000) for Chat-GPT. CONCLUSIONS: The current level of Chat-GPT reliability is insufficient to make it a valid substitute for the expertise of triage nurses in prioritizing ED patients. Further developments are required to enhance the safety and effectiveness of AI for risk stratification of ED patients.
Zaboli A; Brigo F; Sibilio S; Mian M; Turcato G
10
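Both statistics reported in the record above are standard scikit-learn calls. A minimal sketch, assuming invented toy vignette labels (the study's data are not public): unweighted Cohen's kappa for nurse/Chat-GPT concordance, and AUROC for discrimination of a binary outcome.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Toy data: triage codes assigned by nurse vs. Chat-GPT for 10 vignettes
# (codes 1 = most urgent ... 4 = least urgent); values are illustrative.
nurse_codes = [1, 2, 2, 3, 4, 1, 3, 2, 4, 3]
gpt_codes   = [1, 3, 2, 3, 3, 2, 3, 2, 4, 4]

# Unweighted kappa treats any disagreement equally, as in the study.
print("Cohen's kappa:", cohen_kappa_score(nurse_codes, gpt_codes))

# Discrimination for a binary outcome (e.g., 72-h mortality), scoring
# assigned urgency as the predictor; AUROC expects higher score = positive
# class, so the 1..4 code is inverted (1 = most urgent).
mortality_72h = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
print("Nurse AUROC:", roc_auc_score(mortality_72h, [5 - c for c in nurse_codes]))
print("GPT AUROC:  ", roc_auc_score(mortality_72h, [5 - c for c in gpt_codes]))
```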
37589944
Complications Following Facelift and Neck Lift: Implementation and Assessment of Large Language Model and Artificial Intelligence (ChatGPT) Performance Across 16 Simulated Patient Presentations.
2,023
Aesthetic plastic surgery
INTRODUCTION: ChatGPT represents a potential resource for patient guidance and education, with the possibility for quality improvement in healthcare delivery. The present study evaluates the role of ChatGPT as an interactive patient resource, and assesses its performance in identifying, triaging, and guiding patients with concerns of postoperative complications following facelift and neck lift surgery. METHODS: Sixteen patient profiles were generated to simulate postoperative patient presentations, with complications of varying acuity and severity. ChatGPT was assessed for its accuracy in generating a differential diagnosis, soliciting a history, providing the most-likely diagnosis, the appropriate disposition, treatments/interventions to begin from home, and red-flag symptoms necessitating an urgent presentation to the emergency department. RESULTS: Overall accuracy in providing a complete differential diagnosis in response to simulated presentations was 85%, with an accuracy of 88% in identifying the most-likely diagnosis after history-taking. However, appropriate patient dispositions were suggested in only 56% of cases. Relevant home treatments/interventions were suggested with an 82% accuracy, and red-flag symptoms with a 73% accuracy. A detailed analysis, stratified according to latency of postoperative presentation (<48 h, 48 h-1 week, or >1 week), and according to acuity of complications, is presented herein. CONCLUSIONS: ChatGPT overestimated the urgency of indicated patient dispositions in 44% of cases, concerning for potential unnecessary increase in healthcare resource utilization. Imperfect performance, and the tool's tendency for overinclusion in its responses, risk increasing patient anxiety and straining physician-patient relationships. While artificial intelligence has great potential in triaging postoperative patient concerns, and improving efficiency and resource utilization, ChatGPT's performance, in its current form, demonstrates a need for further refinement before its safe and effective implementation in facial aesthetic surgical practice.
Abi-Rafeh J; Hanna S; Bassiri-Tehrani B; Kazan R; Nahai F
32
38764369
Evaluating ChatGPT's effectiveness and tendencies in Japanese internal medicine.
2,024
Journal of evaluation in clinical practice
INTRODUCTION: ChatGPT, a large-scale language model, is a notable example of AI's potential in health care. However, its effectiveness in clinical settings, especially when compared to human physicians, is not fully understood. This study evaluates ChatGPT's capabilities and limitations in answering questions for Japanese internal medicine specialists, aiming to clarify its accuracy and tendencies in both correct and incorrect responses. METHODS: We utilized ChatGPT's answers on four sets of self-training questions for internal medicine specialists in Japan from 2020 to 2023. We ran three trials for each set to evaluate its overall accuracy and performance on nonimage questions. Subsequently, we categorized the questions into two groups: those ChatGPT consistently answered correctly (Confirmed Correct Answer, CCA) and those it consistently answered incorrectly (Confirmed Incorrect Answer, CIA). For these groups, we calculated the average accuracy rates and 95% confidence intervals based on the actual performance of internal medicine physicians on each question and analyzed the statistical significance between the two groups. This process was then similarly applied to the subset of nonimage CCA and CIA questions. RESULTS: ChatGPT's overall accuracy rate was 59.05%, increasing to 65.76% for nonimage questions. 24.87% of the questions had answers that varied between correct and incorrect in the three trials. Despite surpassing the passing threshold for nonimage questions, ChatGPT's accuracy was lower than that of human specialists. There was a significant variance in accuracy between CCA and CIA groups, with ChatGPT mirroring human physician patterns in responding to different question types. CONCLUSION: This study underscores ChatGPT's potential utility and limitations in internal medicine. While effective in some aspects, its dependence on question type and context suggests that it should supplement, not replace, professional medical judgment. Further research is needed to integrate Artificial Intelligence tools like ChatGPT more effectively into specialized medical practices.
Kaneda Y; Tayuinosho A; Tomoyose R; Takita M; Hamaki T; Tanimoto T; Ozaki A
21
40147063
Artificial intelligence versus orthopedic surgeons as an orthopedic consultant in the emergency department.
2,025
Injury
INTRODUCTION: ChatGPT, a widely accessible AI program, has demonstrated potential in various healthcare applications, including emergency department (ED) triage, differential diagnosis, and patient education. However, its potential for providing recommendations to emergency department providers during orthopedic consultations has not yet been evaluated. METHODS: This study compared the performance of four board-certified orthopedic surgeons (two attendings and two trauma fellows who take independent call at the same institution) and ChatGPT-4 in responding to clinical scenarios commonly encountered in emergency departments. Five common orthopedic ED scenarios were developed (lateral malleolar ankle fractures, distal radius fractures, septic arthritis of the knee, shoulder dislocations, and Achilles tendon ruptures), each with four questions related to diagnosis, management, surgical indication, and patient counseling, totaling 20 questions. Responses were anonymized, coded, and evaluated by independent reviewers, including emergency medicine physicians, using a five-point Likert scale across five criteria: accuracy, completeness, helpfulness, specificity, and overall quality. RESULTS: When comparing the ratings of AI answers to non-AI responders, the AI answers were superior in completeness, helpfulness, specificity, and overall quality, with no difference in accuracy (p < 0.05). When considering question subtypes, including diagnosis, management, treatment, and patient counseling, AI had superior scores in helpfulness and specificity for diagnostic questions (p < 0.05). In addition, AI responses were superior in all assessed categories for the patient counseling questions (p < 0.05). When considering different clinical scenarios, AI outperformed the non-AI groups in completeness in the distal radius fracture scenario and in helpfulness in the lateral malleolus fracture scenario. In the shoulder dislocation scenario, AI responses were more complete, more helpful, and of better overall quality. AI responses were non-inferior in the remaining categories of the different scenarios. CONCLUSION: Artificial intelligence exhibited non-inferior and often superior performance in common orthopedic ED consultations compared to board-certified orthopedic surgeons. While current AI models are limited in their ability to integrate specific images and patient scenarios, our findings suggest AI can provide high-quality recommendations for generic orthopedic consultations and, with further development, will likely have an increasing role in the future.
Liu J; Segal K; Daher M; Ozolin J; Binder WD; Bergen M; McDonald CL; Owens BD; Antoci V
32
38507847
Comparison of emergency medicine specialist, cardiologist, and chat-GPT in electrocardiography assessment.
2,024
The American journal of emergency medicine
INTRODUCTION: ChatGPT, developed by OpenAI, represents the cutting edge in its field with its latest model, GPT-4. Extensive research is currently being conducted in various domains, including cardiovascular diseases, using ChatGPT. Nevertheless, there is a lack of studies addressing the proficiency of GPT-4 in diagnosing conditions based on electrocardiography (ECG) data. The goal of this study is to evaluate the diagnostic accuracy of GPT-4 when provided with ECG data and to compare its performance with that of emergency medicine specialists and cardiologists. METHODS: This study received approval from the Clinical Research Ethics Committee of Hitit University Medical Faculty on August 21, 2023 (decision no: 2023-91). Drawing on cases from the "150 ECG Cases" book, a total of 40 ECG cases were crafted into multiple-choice questions (comprising 20 everyday and 20 more challenging ECG questions). The participant pool included 12 emergency medicine specialists and 12 cardiology specialists. GPT-4 was administered the questions in a total of 12 separate sessions. The responses from the cardiology physicians, the emergency medicine physicians, and GPT-4 were evaluated separately for each of the three groups. RESULTS: On the everyday ECG questions, GPT-4 demonstrated superior performance compared to both the emergency medicine specialists and the cardiology specialists (p < 0.001, p = 0.001). On the more challenging ECG questions, while GPT-4 outperformed the emergency medicine specialists (p < 0.001), no statistically significant difference was found between GPT-4 and the cardiology specialists (p = 0.190). Upon examining accuracy across the total ECG questions, GPT-4 was found to be more successful than both the emergency medicine specialists and the cardiologists (p < 0.001, p = 0.001). CONCLUSION: Our study has shown that GPT-4 is more successful than emergency medicine specialists in evaluating both everyday and more challenging ECG questions. It performed better than cardiologists on everyday questions, but its performance aligned closely with that of the cardiologists as the difficulty of the questions increased.
Gunay S; Ozturk A; Ozerol H; Yigit Y; Erenler AK
10
39096711
The accuracy of Gemini, GPT-4, and GPT-4o in ECG analysis: A comparison with cardiologists and emergency medicine specialists.
2,024
The American journal of emergency medicine
INTRODUCTION: GPT-4, GPT-4o and Gemini advanced, which are among the well-known large language models (LLMs), have the capability to recognize and interpret visual data. When the literature is examined, there are a very limited number of studies examining the ECG performance of GPT-4. However, there is no study in the literature examining the success of Gemini and GPT-4o in ECG evaluation. The aim of our study is to evaluate the performance of GPT-4, GPT-4o, and Gemini in ECG evaluation, assess their usability in the medical field, and compare their accuracy rates in ECG interpretation with those of cardiologists and emergency medicine specialists. METHODS: The study was conducted from May 14, 2024, to June 3, 2024. The book "150 ECG Cases" served as a reference, containing two sections: daily routine ECGs and more challenging ECGs. For this study, two emergency medicine specialists selected 20 ECG cases from each section, totaling 40 cases. In the next stage, the questions were evaluated by emergency medicine specialists and cardiologists. In the subsequent phase, a diagnostic question was entered daily into GPT-4, GPT-4o, and Gemini Advanced on separate chat interfaces. In the final phase, the responses provided by cardiologists, emergency medicine specialists, GPT-4, GPT-4o, and Gemini Advanced were statistically evaluated across three categories: routine daily ECGs, more challenging ECGs, and the total number of ECGs. RESULTS: Cardiologists outperformed GPT-4, GPT-4o, and Gemini Advanced in all three groups. Emergency medicine specialists performed better than GPT-4o in routine daily ECG questions and total ECG questions (p = 0.003 and p = 0.042, respectively). When comparing GPT-4o with Gemini Advanced and GPT-4, GPT-4o performed better in total ECG questions (p = 0.027 and p < 0.001, respectively). In routine daily ECG questions, GPT-4o also outperformed Gemini Advanced (p = 0.004). Weak agreement was observed in the responses given by GPT-4 (p < 0.001, Fleiss Kappa = 0.265) and Gemini Advanced (p < 0.001, Fleiss Kappa = 0.347), while moderate agreement was observed in the responses given by GPT-4o (p < 0.001, Fleiss Kappa = 0.514). CONCLUSION: While GPT-4o shows promise, especially in more challenging ECG questions, and may have potential as an assistant for ECG evaluation, its performance in routine and overall assessments still lags behind human specialists. The limited accuracy and consistency of GPT-4 and Gemini suggest that their current use in clinical ECG interpretation is risky.
Gunay S; Ozturk A; Yigit Y
10
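Fleiss' kappa, used in the record above to quantify agreement across repeated model sessions, is available in statsmodels. A sketch on invented session data (rows are ECG questions, columns are repeated sessions; the study's actual responses are not public):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = ECG questions, columns = repeated model sessions; entries are
# the answer option chosen (0-4). Values are invented for illustration.
answers = np.array([
    [0, 0, 0],
    [1, 1, 2],
    [3, 3, 3],
    [2, 0, 2],
    [4, 4, 1],
    [0, 0, 0],
])

# aggregate_raters converts raw labels into a subjects-x-categories
# count table, the input format fleiss_kappa expects.
table, _ = aggregate_raters(answers)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```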
39882522
Evolution of artificial intelligence in healthcare: a 30-year bibliometric study.
2,024
Frontiers in medicine
INTRODUCTION: In recent years, the development of artificial intelligence (AI) technologies, including machine learning, deep learning, and large language models, has significantly supported clinical work. Concurrently, the integration of artificial intelligence with the medical field has garnered increasing attention from medical experts. This study undertakes a dynamic and longitudinal bibliometric analysis of AI publications within the healthcare sector over the past three decades to investigate the current status and trends of the fusion between medicine and artificial intelligence. METHODS: Following a search on the Web of Science, researchers retrieved all reviews and original articles concerning artificial intelligence in healthcare published between January 1993 and December 2023. The analysis employed Bibliometrix, Biblioshiny, and Microsoft Excel, incorporating the bibliometrix R package for data mining and analysis, and visualized the observed bibliometric trends. RESULTS: A total of 22,950 documents were collected in this study. From 1993 to 2023, there was a discernible upward trajectory in scientific output in this field. The United States and China emerged as primary contributors to medical artificial intelligence research, with Harvard University leading in publication volume among institutions. Notably, emerging topics such as COVID-19 and new drug discovery have expanded rapidly in recent years. Furthermore, the top five most cited papers in 2023 were all pertinent to the theme of ChatGPT. CONCLUSION: This study reveals a sustained explosive growth trend in AI technologies within the healthcare sector in recent years, with increasingly profound applications in medicine. Additionally, medical artificial intelligence research is dynamically evolving with the advent of new technologies. Moving forward, concerted efforts to bolster international collaboration and enhance comprehension and utilization of AI technologies are imperative for fostering novel innovations in healthcare.
Xie Y; Zhai Y; Lu G
10
39534227
Large language models in patient education: a scoping review of applications in medicine.
2,024
Frontiers in medicine
INTRODUCTION: Large Language Models (LLMs) are sophisticated algorithms that analyze and generate vast amounts of textual data, mimicking human communication. Notable LLMs include GPT-4o by Open AI, Claude 3.5 Sonnet by Anthropic, and Gemini by Google. This scoping review aims to synthesize the current applications and potential uses of LLMs in patient education and engagement. MATERIALS AND METHODS: Following the PRISMA-ScR checklist and methodologies by Arksey, O'Malley, and Levac, we conducted a scoping review. We searched PubMed in June 2024, using keywords and MeSH terms related to LLMs and patient education. Two authors conducted the initial screening, and discrepancies were resolved by consensus. We employed thematic analysis to address our primary research question. RESULTS: The review identified 201 studies, predominantly from the United States (58.2%). Six themes emerged: generating patient education materials, interpreting medical information, providing lifestyle recommendations, supporting customized medication use, offering perioperative care instructions, and optimizing doctor-patient interaction. LLMs were found to provide accurate responses to patient queries, enhance existing educational materials, and translate medical information into patient-friendly language. However, challenges such as readability, accuracy, and potential biases were noted. DISCUSSION: LLMs demonstrate significant potential in patient education and engagement by creating accessible educational materials, interpreting complex medical information, and enhancing communication between patients and healthcare providers. Nonetheless, issues related to the accuracy and readability of LLM-generated content, as well as ethical concerns, require further research and development. Future studies should focus on improving LLMs and ensuring content reliability while addressing ethical considerations.
Aydin S; Karabacak M; Vlachos V; Margetis K
10
39675178
Use of a large language model (LLM) for ambulance dispatch and triage.
2,025
The American journal of emergency medicine
INTRODUCTION: Large language models (LLMs) have grown in popularity in recent months and have demonstrated advanced clinical reasoning ability. Given the need to prioritize the sickest patients requesting emergency medical services (EMS), we attempted to identify if an LLM could accurately triage ambulance requests using real-world data from a major metropolitan area. METHODS: An LLM (ChatGPT 4o Mini, Open AI, San Francisco, CA, USA) with no prior task-specific training was given real ambulance requests from a major metropolitan city in the United States. Requests were batched into groups of four, and the LLM was prompted to identify which of the four patients should be prioritized. The same groupings of four requests were then shown to a panel of experienced critical care paramedics who voted on which patient should be prioritized. RESULTS: Across 98 groupings of four ambulance requests (392 total requests), the LLM agreed with the paramedic panel in most cases (76.5 %, n = 75). In groupings where the paramedic panel was unanimous in their decision (n = 48), the LLM agreed with the unanimous panel in 93.8 % of groupings (n = 45). CONCLUSIONS: Our preliminary analysis indicates LLMs may have the potential to become a useful tool for triage and resource allocation in emergency care settings, especially in cases where there is consensus among subject matter experts. Further research is needed to better understand and clarify how they may best be of service.
Shekhar AC; Kimbrell J; Saharan A; Stebel J; Ashley E; Abbott EE
10
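A minimal sketch of the batched-prioritization setup described in the record above, using the OpenAI Python client. The prompt wording and the "gpt-4o-mini" model identifier are assumptions for illustration; the study's exact prompt is not given in the abstract.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def prioritize(batch: list[str]) -> str:
    """Ask the model which of four ambulance requests to prioritize.

    Returns the model's answer (intended to be the request number).
    """
    numbered = "\n".join(f"{i + 1}. {req}" for i, req in enumerate(batch))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are assisting EMS dispatch triage."},
            {"role": "user",
             "content": ("Of the following four ambulance requests, which ONE "
                         "patient should be prioritized? Answer with the "
                         f"number only.\n{numbered}")},
        ],
    )
    return resp.choices[0].message.content.strip()
```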
38606229
Comparing the Performance of Popular Large Language Models on the National Board of Medical Examiners Sample Questions.
2,024
Cureus
INTRODUCTION: Large language models (LLMs) have transformed various domains in medicine, aiding in complex tasks and clinical decision-making, with OpenAI's GPT-4, GPT-3.5, Google's Bard, and Anthropic's Claude among the most widely used. While GPT-4 has demonstrated superior performance in some studies, comprehensive comparisons among these models remain limited. Recognizing the significance of the National Board of Medical Examiners (NBME) exams in assessing the clinical knowledge of medical students, this study aims to compare the accuracy of popular LLMs on NBME clinical subject exam sample questions. METHODS: The questions used in this study were multiple-choice questions obtained from the official NBME website and are publicly available. Questions from the NBME subject exams in medicine, pediatrics, obstetrics and gynecology, clinical neurology, ambulatory care, family medicine, psychiatry, and surgery were used to query each LLM. The responses from GPT-4, GPT-3.5, Claude, and Bard were collected in October 2023. The response by each LLM was compared to the answer provided by the NBME and checked for accuracy. Statistical analysis was performed using one-way analysis of variance (ANOVA). RESULTS: A total of 163 questions were queried by each LLM. GPT-4 scored 163/163 (100%), GPT-3.5 scored 134/163 (82.2%), Bard scored 123/163 (75.5%), and Claude scored 138/163 (84.7%). The total performance of GPT-4 was statistically superior to that of GPT-3.5, Claude, and Bard by 17.8%, 15.3%, and 24.5%, respectively. The total performance of GPT-3.5, Claude, and Bard was not significantly different. GPT-4 significantly outperformed Bard in specific subjects, including medicine, pediatrics, family medicine, and ambulatory care, and GPT-3.5 in ambulatory care and family medicine. Across all LLMs, the surgery exam had the highest average score (18.25/20), while the family medicine exam had the lowest average score (3.75/5). CONCLUSION: GPT-4's superior performance on NBME clinical subject exam sample questions underscores its potential in medical education and practice. While LLMs exhibit promise, discernment in their application is crucial, considering occasional inaccuracies. As technological advancements continue, regular reassessments and refinements are imperative to maintain their reliability and relevance in medicine.
Abbas A; Rehman MS; Rehman SS
21
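The one-way ANOVA reported in the record above compares per-question correctness across the four models and is a one-line SciPy call. A sketch with invented 0/1 correctness vectors standing in for the 163 questions:

```python
from scipy.stats import f_oneway

# Per-question correctness (1 = correct, 0 = incorrect) for each model on
# the same items; short invented vectors stand in for the real 163 questions.
gpt4   = [1, 1, 1, 1, 1, 1, 1, 1]
gpt35  = [1, 1, 0, 1, 1, 0, 1, 1]
claude = [1, 1, 1, 0, 1, 1, 0, 1]
bard   = [1, 0, 1, 1, 0, 1, 0, 1]

# One-way ANOVA across the four models, as described in the abstract.
stat, p = f_oneway(gpt4, gpt35, claude, bard)
print(f"F = {stat:.3f}, p = {p:.3f}")
```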
39234726
Zero-Shot LLMs for Named Entity Recognition: Targeting Cardiac Function Indicators in German Clinical Texts.
2,024
Studies in health technology and informatics
INTRODUCTION: Large Language Models (LLMs) like ChatGPT have become increasingly prevalent. In medicine, many potential areas arise where LLMs may offer added value. Our research focuses on the use of open-source LLM alternatives like Llama 3, Gemma, Mistral, and Mixtral to extract medical parameters from German clinical texts. We concentrate on German due to an observed gap in research for non-English tasks. OBJECTIVE: To evaluate the effectiveness of open-source LLMs in extracting medical parameters from German clinical texts, especially focusing on cardiovascular function indicators from cardiac MRI reports. METHODS: We extracted 14 cardiovascular function indicators, including left and right ventricular ejection fraction (LV-EF and RV-EF), from 497 variously formulated cardiac magnetic resonance imaging (MRI) reports. Our systematic analysis involved assessing the performance of the Llama 3, Gemma, Mistral, and Mixtral models in terms of right annotation and named entity recognition (NER) accuracy. RESULTS: The analysis confirms strong performance, with up to 95.4% right annotation and 99.8% NER accuracy across different architectures, despite the fact that these models were not explicitly fine-tuned for data extraction or the German language. CONCLUSION: The results strongly recommend using open-source LLMs for extracting medical parameters from clinical texts, including those in German, due to their high accuracy and effectiveness even without specific fine-tuning.
Plagwitz L; Neuhaus P; Yildirim K; Losch N; Varghese J; Buscher A
10
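For contrast with the zero-shot LLM approach evaluated in the record above, the same extraction target (LV-EF/RV-EF values in free-text German reports) can also be approached with a rule-based baseline. The sketch below is illustrative only and is not the paper's method; the pattern and the sample report text are assumptions.

```python
import re

# Rule-based baseline for pulling ejection fractions out of free-text
# German cardiac MRI reports; accepts optional connectors such as
# "von", "betraegt" ("betr\u00e4gt"), ":" or "=" between label and value.
EF_PATTERN = re.compile(
    r"(?P<side>LV|RV)[- ]?EF\s*(?:von|betr\u00e4gt|[:=])?\s*"
    r"(?P<value>\d{1,2}(?:[.,]\d)?)\s*%",
    re.IGNORECASE,
)

report = "Befund: LV-EF 58 %, RV-EF von 45%. Kein Perikarderguss."
for m in EF_PATTERN.finditer(report):
    # Normalize German decimal commas before converting to float.
    print(m.group("side").upper(), float(m.group("value").replace(",", ".")))
```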
37276372
New Artificial Intelligence ChatGPT Performs Poorly on the 2022 Self-assessment Study Program for Urology.
2,023
Urology practice
INTRODUCTION: Large language models have demonstrated impressive capabilities, but application to medicine remains unclear. We seek to evaluate the use of ChatGPT on the American Urological Association Self-assessment Study Program as an educational adjunct for urology trainees and practicing physicians. METHODS: One hundred fifty questions from the 2022 Self-assessment Study Program exam were screened, and those containing visual assets (n=15) were removed. The remaining items were encoded as open ended or multiple choice. ChatGPT's output was coded as correct, incorrect, or indeterminate; if indeterminate, responses were regenerated up to 2 times. Concordance, quality, and accuracy were ascertained by 3 independent researchers and reviewed by 2 physician adjudicators. A new session was started for each entry to avoid crossover learning. RESULTS: ChatGPT was correct on 36/135 (26.7%) open-ended and 38/135 (28.2%) multiple-choice questions. Indeterminate responses were generated in 40 (29.6%) and 4 (3.0%), respectively. Of the correct responses, 24/36 (66.7%) and 36/38 (94.7%) were on initial output, 8 (22.2%) and 1 (2.6%) on second output, and 4 (11.1%) and 1 (2.6%) on final output, respectively. Although regeneration decreased indeterminate responses, proportion of correct responses did not increase. For open-ended and multiple-choice questions, ChatGPT provided consistent justifications for incorrect answers and remained concordant between correct and incorrect answers. CONCLUSIONS: ChatGPT previously demonstrated promise on medical licensing exams; however, application to the 2022 Self-assessment Study Program was not demonstrated. Performance improved with multiple-choice over open-ended questions. More importantly were the persistent justifications for incorrect responses-left unchecked, utilization of ChatGPT in medicine may facilitate medical misinformation.
Huynh LM; Bonebrake BT; Schultis K; Quach A; Deibert CM
21
39936270
ChatGPT versus physician-derived answers to drug-related questions.
2,024
Danish medical journal
INTRODUCTION: Large language models have recently gained interest within the medical community. Their clinical impact is currently being investigated, with potential application in pharmaceutical counselling, which has yet to be assessed. METHODS: We performed a retrospective investigation of ChatGPT 3.5 and 4.0 in response to 49 consecutive inquiries encountered in the joint pharmaceutical counselling service of the Central and North Denmark regions. Answers were rated by comparing them with the answers generated by physicians. RESULTS: ChatGPT 3.5 and 4.0 provided answers rated better or equal in 39 (80%) and 48 (98%) cases, respectively, compared to the pharmaceutical counselling service. References did not accompany answers from ChatGPT, and ChatGPT did not elaborate on what would be considered most clinically relevant when providing multiple answers. CONCLUSIONS: In drug-related questions, ChatGPT (4.0) provided answers of a reasonably high quality. The lack of references and an occasionally limited clinical interpretation makes it less useful as a primary source of information. FUNDING: None. TRIAL REGISTRATION: Not relevant.
Helgestad OK; Hjelholt AJ; Vestergaard SV; Azuz S; Saedder EA; Overvad TF
10
38081765
ChatGPT in Iranian medical licensing examination: evaluating the diagnostic accuracy and decision-making capabilities of an AI-based model.
2,023
BMJ health & care informatics
INTRODUCTION: Large language models such as ChatGPT have gained popularity for their ability to generate comprehensive responses to human queries. In the field of medicine, ChatGPT has shown promise in applications ranging from diagnostics to decision-making. However, its performance in medical examinations and its comparison to random guessing have not been extensively studied. METHODS: This study aimed to evaluate the performance of ChatGPT in the preinternship examination, a comprehensive medical assessment for students in Iran. The examination consisted of 200 multiple-choice questions categorised into basic science evaluation, diagnosis and decision-making. GPT-4 was used, and the questions were translated to English. A statistical analysis was conducted to assess the performance of ChatGPT and also compare it with a random test group. RESULTS: The results showed that ChatGPT performed exceptionally well, with 68.5% of the questions answered correctly, significantly surpassing the pass mark of 45%. It exhibited superior performance in decision-making and successfully passed all specialties. Comparing ChatGPT to the random test group, ChatGPT's performance was significantly higher, demonstrating its ability to provide more accurate responses and reasoning. CONCLUSION: This study highlights the potential of ChatGPT in medical licensing examinations and its advantage over random guessing. However, it is important to note that ChatGPT still falls short of human physicians in terms of diagnostic accuracy and decision-making capabilities. Caution should be exercised when using ChatGPT, and its results should be verified by human experts to ensure patient safety and avoid potential errors in the medical field.
Ebrahimian M; Behnam B; Ghayebi N; Sobhrakhshankhah E
21
40365297
Optimizing theranostics chatbots with context-augmented large language models.
2,025
Theranostics
Introduction: Nuclear medicine theranostics is rapidly emerging as an interdisciplinary therapy option with multi-dimensional considerations. Healthcare professionals do not have the time to do in-depth research on every therapy option, and personalized chatbots might help to educate them. Chatbots using Large Language Models (LLMs), such as ChatGPT, are gaining interest for addressing these challenges. However, chatbot performance often falls short in specific domains, which is critical in healthcare applications. Methods: This study develops a framework that examines the use of contextual augmentation to improve the performance of medical chatbots, creating the first theranostics chatbot. Contextual augmentation involves providing additional relevant information to LLMs to improve their responses. We evaluate five state-of-the-art LLMs on questions translated into English and German. We compare answers generated with and without contextual augmentation, where the LLMs access pre-selected research papers via Retrieval Augmented Generation (RAG). We use two RAG techniques: Naive RAG and Advanced RAG. Results: A user study and an LLM-based evaluation assess answer quality across different metrics. Results show that Advanced RAG techniques considerably enhance LLM performance. Among the models, the best-performing variants are CLAUDE 3 OPUS and GPT-4O. These models consistently achieve the highest scores, indicating robust integration and utilization of contextual information. The most notable improvements between Naive RAG and Advanced RAG are observed in the GEMINI 1.5 and COMMAND R+ variants. Conclusion: This study demonstrates that contextual augmentation addresses the complexities inherent in theranostics. Despite promising results, key limitations include the biased selection of questions, focusing primarily on PRRT, and the need for comprehensive context documents. Future research should include a broader range of theranostics questions, explore additional RAG methods, and aim to compare human and LLM evaluations more directly to further enhance LLM performance.
Koller P; Clement C; van Eijk A; Seifert R; Zhang J; Prenosil G; Sathekge MM; Herrmann K; Baum R; Weber WA; Rominger A; Shi K
0-1
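A naive RAG pipeline of the kind compared in the record above can be sketched in a few lines: retrieve the context snippets most similar to the question and prepend them to the prompt. TF-IDF retrieval stands in for an embedding model here, and the document snippets are invented placeholders; the study's actual retriever, chunking, and prompts are not specified in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Pre-selected papers (placeholder snippets) form the context store.
docs = [
    "PRRT with Lu-177 DOTATATE: dosing, cycles, and renal protection ...",
    "Selection criteria for Ra-223 therapy in mCRPC ...",
    "Side-effect management after I-131 therapy ...",
]

def naive_rag_context(question: str, k: int = 2) -> str:
    """Retrieve the top-k most similar snippets and concatenate them."""
    vec = TfidfVectorizer().fit(docs + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
    top = sims.argsort()[::-1][:k]
    return "\n\n".join(docs[i] for i in top)

# The retrieved context is prepended to the question before calling the LLM.
context = naive_rag_context("How many PRRT cycles are standard?")
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: ..."
print(prompt)
```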
39430700
Evaluating the comprehension and accuracy of ChatGPT's responses to diabetes-related questions in Urdu compared to English.
2,024
Digital health
INTRODUCTION: Patients with diabetes require healthcare and information that are accurate and extensive. Large language models (LLMs) like ChatGPT offer the capacity to provide such exhaustive data. This study aimed to determine (a) the comprehensiveness of ChatGPT's responses in Urdu to diabetes-related questions and (b) the accuracy of ChatGPT's Urdu responses when compared to its English responses. METHODS: A cross-sectional observational study was conducted. Two reviewers experienced in internal medicine and endocrinology graded 53 Urdu and English responses on diabetes knowledge, lifestyle, and prevention. A senior reviewer resolved discrepancies. Responses were assessed for comprehension and accuracy, then compared to English. RESULTS: Among the Urdu responses generated, only two of 53 (3.8%) questions were graded as comprehensive, and five of 53 (9.4%) were graded as correct but inadequate. We found that 25 of 53 (47.2%) questions were graded as a mix of correct and incorrect/outdated data, the largest proportion of any grade. When comparing the accuracy of Urdu and English responses on the grading scale, no Urdu response (0.0%) was considered more accurate than its English counterpart. An overwhelming majority of Urdu responses, 49 of 53 (92.5%), were found to be less accurate than the English ones. CONCLUSION: We found that although the ability to retrieve such information about diabetes is impressive, it can merely be used as an adjunct instead of a solitary source of information. Further work must be done to optimize Urdu responses in medical contexts to approximate the boundless potential it heralds.
Faisal S; Kamran TE; Khalid R; Haider Z; Siddiqui Y; Saeed N; Imran S; Faisal R; Jabeen M
0-1
37795422
Evaluating the performance of ChatGPT-4 on the United Kingdom Medical Licensing Assessment.
2,023
Frontiers in medicine
INTRODUCTION: Recent developments in artificial intelligence large language models (LLMs), such as ChatGPT, have allowed for the understanding and generation of human-like text. Studies have found that LLMs perform well in various examinations, including law, business, and medicine. This study aims to evaluate the performance of ChatGPT in the United Kingdom Medical Licensing Assessment (UKMLA). METHODS: Two publicly available UKMLA papers consisting of 200 single-best-answer (SBA) questions were screened. Nine SBAs were omitted as they contained images that were not suitable for input. Each question was assigned a specialty based on the UKMLA content map published by the General Medical Council. A total of 191 SBAs were inputted into ChatGPT-4 over three attempts in the course of 3 weeks (once per week). RESULTS: ChatGPT scored 74.9% (143/191), 78.0% (149/191) and 75.6% (145/191) on the three attempts, respectively. The average of all three attempts was 76.3% (437/573), with a 95% confidence interval of 74.46%-78.08%. ChatGPT answered 129 SBAs correctly and 32 SBAs incorrectly on all three attempts. Across the three attempts, ChatGPT performed well in mental health (8/9 SBAs), cancer (11/14 SBAs) and cardiovascular (10/13 SBAs), and did not perform well in clinical haematology (3/7 SBAs), endocrine and metabolic (2/5 SBAs) and gastrointestinal including liver (3/10 SBAs). Regarding response consistency, ChatGPT provided consistently correct answers in 67.5% (129/191) of SBAs, consistently incorrect answers in 12.6% (24/191), and inconsistent responses in 19.9% (38/191). DISCUSSION AND CONCLUSION: This study suggests ChatGPT performs well on the UKMLA, and performance may correlate with specialty. The LLM's ability to correctly answer SBAs suggests that it could be utilised as a supplementary learning tool in medical education with appropriate medical educator supervision.
Lai UH; Wu KS; Hsu TY; Kan JKC
21
39290564
Zero-shot evaluation of ChatGPT for food named-entity recognition and linking.
2,024
Frontiers in nutrition
INTRODUCTION: Recognizing and extracting key information from textual data plays an important role in intelligent systems by maintaining up-to-date knowledge, reinforcing informed decision-making, question-answering, and more. It is especially apparent in the food domain, where critical information guides the decisions of nutritionists and clinicians. The information extraction process involves two natural language processing tasks: named entity recognition (NER) and named entity linking (NEL). With the emergence of large language models (LLMs), especially ChatGPT, many areas began incorporating its knowledge to reduce workloads or simplify tasks. In the field of food, however, we noticed an opportunity to involve ChatGPT in NER and NEL. METHODS: To assess ChatGPT's capabilities, we evaluated its two versions, ChatGPT-3.5 and ChatGPT-4, focusing on their performance across both NER and NEL tasks, emphasizing food-related data. To benchmark our results in the food domain, we also investigated its capabilities in the more broadly investigated biomedical domain. By evaluating its zero-shot capabilities, we were able to ascertain the strengths and weaknesses of the two versions of ChatGPT. RESULTS: ChatGPT shows promising results in NER compared to other models; however, when tasked with linking entities to their identifiers from semantic models, its effectiveness falls drastically. DISCUSSION: While the integration of ChatGPT holds potential across various fields, it is crucial to approach its use with caution, particularly in relying on its responses for critical decisions in food and biomedicine.
Ogrinc M; Korousic Seljak B; Eftimov T
10
38331591
[The spring of artificial intelligence: AI vs. expert for internal medicine cases].
2,024
La Revue de medecine interne
INTRODUCTION: The "Printemps de la Medecine Interne" are training days for Francophone internists. The clinical cases presented during these days are complex. This study aims to evaluate the diagnostic capabilities of non-specialized artificial intelligence (language models) ChatGPT-4 and Bard by confronting them with the puzzles of the "Printemps de la Medecine Interne". METHOD: Clinical cases from the "Printemps de la Medecine Interne" 2021 and 2022 were submitted to two language models: ChatGPT-4 and Bard. In case of a wrong answer, a second attempt was offered. We then compared the responses of human internist experts to those of artificial intelligence. RESULTS: Of the 12 clinical cases submitted, human internist experts diagnosed nine, ChatGPT-4 diagnosed three, and Bard diagnosed one. One of the cases solved by ChatGPT-4 was not solved by the internist expert. The artificial intelligence had a response time of a few seconds. CONCLUSIONS: Currently, the diagnostic skills of ChatGPT-4 and Bard are inferior to those of human experts in solving complex clinical cases but are very promising. Recently made available to the general public, they already have impressive capabilities, questioning the role of the diagnostic physician. It would be advisable to adapt the rules or subjects of future "Printemps de la Medecine Interne" so that they are not solved by a public language model.
Albaladejo A; Lorleac'h A; Allain JS
10
38985176
[What is the potential of ChatGPT for qualified patient information? : Attempt of a structured analysis on the basis of a survey regarding complementary and alternative medicine (CAM) in rheumatology].
2,025
Zeitschrift fur Rheumatologie
INTRODUCTION: The chatbot ChatGPT represents a milestone in the interaction between humans and large databases that are accessible via the internet. It facilitates the answering of complex questions by enabling a communication in everyday language. Therefore, it is a potential source of information for those who are affected by rheumatic diseases. The aim of our investigation was to find out whether ChatGPT (version 3.5) is capable of giving qualified answers regarding the application of specific methods of complementary and alternative medicine (CAM) in three rheumatic diseases: rheumatoid arthritis (RA), systemic lupus erythematosus (SLE) and granulomatosis with polyangiitis (GPA). In addition, it was investigated how the answers of the chatbot were influenced by the wording of the question. METHODS: The questioning of ChatGPT was performed in three parts. Part A consisted of an open question regarding the best way of treatment of the respective disease. In part B, the questions were directed towards possible indications for the application of CAM in general in one of the three disorders. In part C, the chatbot was asked for specific recommendations regarding one of three CAM methods: homeopathy, ayurvedic medicine and herbal medicine. Questions in parts B and C were expressed in two modifications: firstly, it was asked whether the specific CAM was applicable at all in certain rheumatic diseases. The second question asked which procedure of the respective CAM method worked best in the specific disease. The validity of the answers was checked by using the ChatGPT reliability score, a Likert scale ranging from 1 (lowest validity) to 7 (highest validity). RESULTS: The answers to the open questions of part A had the highest validity. In parts B and C, ChatGPT suggested a variety of CAM applications that lacked scientific evidence. The validity of the answers depended on the wording of the questions. If the question suggested the inclination to apply a certain CAM, the answers often lacked the information of missing evidence and were graded with lower score values. CONCLUSION: The answers of ChatGPT (version 3.5) regarding the applicability of CAM in selected rheumatic diseases are not convincingly based on scientific evidence. In addition, the wording of the questions affects the validity of the information. Currently, an uncritical application of ChatGPT as an instrument for patient information cannot be recommended.
Keysser G; Pfeil A; Reuss-Borst M; Frohne I; Schultz O; Sander O
43
38251407
Benzodiazepine Boom: Tracking Etizolam, Pyrazolam, and Flubromazepam from Pre-UK Psychoactive Act 2016 to Present Using Analytical and Social Listening Techniques.
2,024
Pharmacy (Basel, Switzerland)
INTRODUCTION: The designer benzodiazepine (DBZD) market continues to expand whilst evading regulatory controls. The widespread adoption of social media by pro-drug use communities encourages positive discussions around DBZD use/misuse, driving demand. This research addresses the evolution of three popular DBZDs, etizolam (E), flubromazepam (F), and pyrazolam (P), available on the drug market for over a decade, comparing the quantitative chemical analyses of tablet samples, purchased from the internet prior to the implementation of the Psychoactive Substances Act UK 2016, with thematic netnographic analyses of social media content. METHOD: Drug samples were purchased from the internet in early 2016. The characterisation of all drug batches was performed using UHPLC-MS and supported with ¹H NMR. In addition, netnographic studies across the platforms X (formerly Twitter) and Reddit, between 2016-2023, were conducted. The latter was supported by both manual and artificial intelligence (AI)-driven thematic analyses, using numerous.ai and ChatGPT, of social media threads and discussions. RESULTS: UHPLC-MS confirmed the expected drug in every sample, showing remarkable inter-/intra-batch variability across all batches (E = 13.8 +/- 0.6 to 24.7 +/- 0.9 mg; F = 4.0 +/- 0.2 to 23.5 +/- 0.8 mg; P = 5.2 +/- 0.2 to 11.5 +/- 0.4 mg). ¹H NMR could not confirm etizolam as a lone compound in any etizolam batch. Thematic analyses showed etizolam dominated social media discussions (59% of all posts), with 24.2% of posts involving sale/purchase and 17.8% detailing new administration trends/poly-drug use scenarios. Artificial intelligence confirmed three of the top five trends identified manually. CONCLUSIONS: The purity variability identified across all tested samples emphasises the increased potential health risks associated with DBZD consumption. We propose that the global DBZD market is exacerbated by surface web social media discussions, recorded across X and Reddit. Despite the appearance of newer analogues, these three DBZDs remain prevalent and popularised. With reported themes on harm/effects and new developments in poly-drug use trends, demand for DBZDs continues to grow, despite their potent nature and potential risk to life. It is proposed that greater control and constant live monitoring of social media user content are warranted to drive active regulation strategies and targeted, effective harm reduction strategies.
Mullin A; Scott M; Vaccaro G; Floresta G; Arillotta D; Catalani V; Corkery JM; Stair JL; Schifano F; Guirguis A
10
38575866
Bibliometric analysis of ChatGPT in medicine.
2,024
International journal of emergency medicine
INTRODUCTION: The emergence of artificial intelligence (AI) chat programs has opened two distinct paths, one enhancing interaction and another potentially replacing personal understanding. Ethical and legal concerns arise due to the rapid development of these programs. This paper investigates academic discussions on AI in medicine, analyzing the context, frequency, and reasons behind these conversations. METHODS: The study collected data from the Web of Science database on articles containing the keyword "ChatGPT" published from January to September 2023, resulting in 786 medically related journal articles. The inclusion criteria were peer-reviewed articles in English related to medicine. RESULTS: The United States led in publications (38.1%), followed by India (15.5%) and China (7.0%). Keywords such as "patient" (16.7%), "research" (12%), and "performance" (10.6%) were prevalent. The Cureus Journal of Medical Science (11.8%) had the most publications, followed by the Annals of Biomedical Engineering (8.3%). August 2023 had the highest number of publications (29.3%), with significant growth between February to March and April to May. Medical General Internal (21.0%) was the most common category, followed by Surgery (15.4%) and Radiology (7.9%). DISCUSSION: The prominence of India in ChatGPT research, despite lower research funding, indicates the platform's popularity and highlights the importance of monitoring its use for potential medical misinformation. China's interest in ChatGPT research suggests a focus on Natural Language Processing (NLP) AI applications, despite public bans on the platform. Cureus' success in publishing ChatGPT articles can be attributed to its open-access, rapid publication model. The study identifies research trends in plastic surgery, radiology, and obstetric gynecology, emphasizing the need for ethical considerations and reliability assessments in the application of ChatGPT in medical practice. CONCLUSION: ChatGPT's presence in medical literature is growing rapidly across various specialties, but concerns related to safety, privacy, and accuracy persist. More research is needed to assess its suitability for patient care and implications for non-medical use. Skepticism and thorough review of research are essential, as current studies may face retraction as more information emerges.
Gande S; Gould M; Ganti L
10
38354991
Accuracy of Online Artificial Intelligence Models in Primary Care Settings.
2,024
American journal of preventive medicine
INTRODUCTION: The importance of preventive medicine and primary care in the sphere of public health is expanding, yet a gap exists in the utilization of recommended medical services. As patients increasingly turn to online resources for supplementary advice, the role of artificial intelligence (AI) in providing accurate and reliable information has emerged. The present study aimed to assess ChatGPT-4's and Google Bard's capacity to deliver accurate recommendations in preventive medicine and primary care. METHODS: Fifty-six questions were formulated and presented to ChatGPT-4 in June 2023 and Google Bard in October 2023, and the responses were independently reviewed by two physicians, with each answer being classified as "accurate," "inaccurate," or "accurate with missing information." Disagreements were resolved by a third physician. RESULTS: Initial inter-reviewer agreement on grading was substantial (Cohen's Kappa was 0.76, 95%CI [0.61-0.90] for ChatGPT-4 and 0.89, 95%CI [0.79-0.99] for Bard). After reaching a consensus, 28.6% of ChatGPT-4-generated answers were deemed accurate, 28.6% inaccurate, and 42.8% accurate with missing information. In comparison, 53.6% of Bard-generated answers were deemed accurate, 17.8% inaccurate, and 28.6% accurate with missing information. Responses to CDC and immunization-related questions showed notable inaccuracies (80%) in both models. CONCLUSIONS: ChatGPT-4 and Bard demonstrated potential in offering accurate information in preventive care. It also brought to light the critical need for regular updates, particularly in the rapidly evolving areas of medicine. A significant proportion of the AI models' responses were deemed "accurate with missing information," emphasizing the importance of viewing AI tools as complementary resources when seeking medical information.
Kassab J; Hadi El Hajjar A; Wardrop RM 3rd; Brateanu A
43
39552949
Performance of ChatGPT in emergency medicine residency exams in Qatar: A comparative analysis with resident physicians.
2,024
Qatar medical journal
INTRODUCTION: The inclusion of artificial intelligence (AI) in the healthcare sector has transformed medical practices by introducing innovative techniques for medical education, diagnosis, and treatment strategies. In medical education, the potential of AI to enhance learning and assessment methods is being increasingly recognized. This study aims to evaluate the performance of OpenAI's Chat Generative Pre-Trained Transformer (ChatGPT) in emergency medicine (EM) residency examinations in Qatar and compare it with the performance of resident physicians. METHODS: A retrospective descriptive study with a mixed-methods design was conducted in August 2023. EM residents' examination scores were collected and compared with the performance of ChatGPT on the same examinations. The examinations consisted of multiple-choice questions (MCQs) from the same faculty responsible for Qatari Board EM examinations. ChatGPT's performance on these examinations was analyzed and compared with that of residents across various postgraduate years (PGY). RESULTS: The study included 238 emergency department residents from PGY1 to PGY4 and compared their performance with ChatGPT. ChatGPT scored consistently higher than the resident groups in all examination categories. However, a notable decline in passing rates was observed among senior residents, indicating a potential misalignment between examination performance and practical competencies. Another likely reason is the impact of the COVID-19 pandemic on their learning experience, knowledge acquisition, and consolidation. CONCLUSION: ChatGPT demonstrated significant proficiency in the theoretical knowledge of EM, outperforming resident physicians in examination settings. This finding suggests the potential of AI as a supplementary tool in medical education.
Iftikhar H; Anjum S; Bhutta ZA; Najam M; Bashir K
21
37868075
Teaching AI Ethics in Medical Education: A Scoping Review of Current Literature and Practices.
2,023
Perspectives on medical education
INTRODUCTION: The increasing use of Artificial Intelligence (AI) in medicine has raised ethical concerns, such as patient autonomy, bias, and transparency. Recent studies suggest a need for teaching AI ethics as part of medical curricula. This scoping review aimed to represent and synthesize the literature on teaching AI ethics as part of medical education. METHODS: The PRISMA-ScR guidelines and JBI methodology guided a literature search in four databases (PubMed, Embase, Scopus, and Web of Science) for the past 22 years (2000-2022). To account for the release of AI-based chat applications, such as ChatGPT, the literature search was updated to include publications until the end of June 2023. RESULTS: 1384 publications were originally identified and, after screening titles and abstracts, the full text of 87 publications was assessed. Following the assessment of the full text, 10 publications were included for further analysis. The updated literature search identified two additional relevant publications from 2023, which were included in the analysis. All 12 publications recommended teaching AI ethics in medical curricula due to the potential implications of AI in medicine. Anticipated ethical challenges such as bias were identified as the recommended basis for teaching content in addition to basic principles of medical ethics. Case-based teaching using real-world examples in interactive seminars and small groups was recommended as a teaching modality. CONCLUSION: This scoping review reveals a scarcity of literature on teaching AI ethics in medical education, with most of the available literature being recent and theoretical. These findings emphasize the importance of more empirical studies and foundational definitions of AI ethics to guide the development of teaching content and modalities. Recognizing AI's significant impact on medicine, additional research on the teaching of AI ethics in medical education is needed to best prepare medical students for future ethical challenges.
Weidener L; Fischer M
10
39454451
The utility of ChatGPT in gender-affirming mastectomy education.
2,024
Journal of plastic, reconstructive & aesthetic surgery : JPRAS
INTRODUCTION: The integration of AI such as ChatGPT in medicine has been showing promise in enhancing patient education. Gender-affirming mastectomy (GAM) is a surgical procedure designed to help individuals transition to their self-identified gender, playing a crucial role in mitigating psychological distress for many transmasculine and non-binary (TNB) patients. With increased demand and attention towards GAM, plastic and reconstructive surgeons may rely on AI-driven chatbots as an accessible, accurate, and patient-driven model for relevant details on this procedure. SPECIFIC AIM(S): This study aimed to assess the quality and readability of information provided by ChatGPT in response to frequently asked questions (FAQs) related to GAM. METHODS: Inspired by online forums and physician websites, 10 FAQs about pre- and postoperative topics were submitted to ChatGPT and assessed using validated readability score measures and expert interpretation. RESULTS: The average readability score was 16.0 +/- 1, indicating a college or graduate reading level. Mean accuracy, comprehensiveness, and danger scores were 8.8 +/- 0.5, 7.8 +/- 0.7, and 2.2 +/- 0.4, respectively. Although physicians appreciated ChatGPT's tone, respect for patient autonomy, and advice to seek professional medical and mental help, they also cited instances of generic information, misinformation, support of debated techniques, and pathologization of gender dysphoria. CONCLUSION: Even with its promise in providing accurate and comprehensive information on GAM, ChatGPT's current limitations suggest caution as a supplementary tool to physician consultation.
Snee I; Lava CX; Li KR; Corral GD
32
39224724
Assessing the Quality and Reliability of AI-Generated Responses to Common Hypertension Queries.
2,024
Cureus
INTRODUCTION: The integration of artificial intelligence (AI) in healthcare, particularly through language models like ChatGPT and ChatSonic, has gained substantial attention. This article explores the utilization of these AI models to address patient queries related to hypertension, emphasizing their potential to enhance health literacy and disease understanding. The study aims to compare the quality and reliability of responses generated by ChatGPT and ChatSonic in addressing common patient queries about hypertension and to evaluate these AI models using the Global Quality Scale (GQS) and the Modified DISCERN scale. METHODS: A virtual cross-sectional observational study was conducted over one month, starting in October 2023. Ten common patient queries regarding hypertension were presented to ChatGPT (https://chat.openai.com/) and ChatSonic (https://writesonic.com/chat), and the responses were recorded. Two internal medicine physicians assessed the responses using the GQS and the Modified DISCERN scale. Statistical analysis included Cohen's Kappa values for inter-rater agreement. RESULTS: The study evaluated responses from ChatGPT and ChatSonic for 10 patient queries. Assessors observed variations in the quality and reliability assessments between the two AI models. Cohen's Kappa values indicated minimal agreement between the evaluators for both the GQS and Modified DISCERN scale. CONCLUSIONS: This study highlights the variations in the assessment of responses generated by ChatGPT and ChatSonic for hypertension-related queries. The findings underscore the need for ongoing monitoring and fact-checking of AI-generated responses.
Vinufrancis A; Al Hussein H; Patel HV; Nizami A; Singh A; Nunez B; Abdel-Aal AM
43
40066104
Introducing AI-generated cases (AI-cases) & standardized clients (AI-SCs) in communication training for veterinary students: perceptions and adoption challenges.
2,024
Frontiers in veterinary science
INTRODUCTION: The integration of Artificial Intelligence (AI) into medical education and healthcare has grown steadily over the past several years, though its application in veterinary education and practice remains relatively underexplored. This study is among the first to introduce veterinary students to AI-generated cases (AI-cases) and AI-standardized clients (AI-SCs) for teaching and learning communication skills. The study aimed to evaluate students' beliefs and perceptions surrounding the use of AI in veterinary education, with a specific focus on communication skills training. METHODS: Conducted at Texas Tech University School of Veterinary Medicine (TTU SVM) during the Spring 2024 semester, the study included pre-clinical veterinary students (n = 237), who participated in a 90-min communication skills laboratory activity. Each class was introduced to two AI-cases and two AI-SCs, developed using OpenAI's ChatGPT-3.5. The Calgary Cambridge Guide (CCG) served as the framework for practicing communication skills. RESULTS: Although students recognized the widespread use of AI in everyday life, their familiarity with, comfort with, and application of AI in veterinary education were limited. Notably, upper-year students were more hesitant to adopt AI-based tools, particularly in communication skills training. DISCUSSION: The findings suggest that veterinary institutions should prioritize AI literacy and further explore how AI can enhance and complement communication training, veterinary education, and practice.
Artemiou E; Hooper S; Dascanio L; Schmidt M; Gilbert G
10
40061376
A bibliometric analysis of the advance of artificial intelligence in medicine.
2,025
Frontiers in medicine
INTRODUCTION: The integration of artificial intelligence (AI) into medicine has ushered in an era of unprecedented innovation, with substantial impacts on healthcare delivery and patient outcomes. Understanding the current development, primary research focuses, and key contributors in AI applications in medicine through bibliometric analysis is essential. METHODS: For this research, we utilized the Web of Science Core Collection as our main database and performed a review of literature covering the period from January 2019 to December 2023. VOSviewer and R-bibliometrix were used to conduct bibliometric analysis and network visualization, covering the number of publications, countries, journals, citations, authors, and keywords. RESULTS: A total of 1,811 publications on research for AI in medicine were released across 565 journals by 12,376 authors affiliated with 3,583 institutions from 97 countries. The United States was the foremost producer of scholarly works, significantly impacting the field. Harvard Medical School exhibited the highest publication count among all institutions. The Journal of Medical Internet Research achieved the highest H-index (19), publication count (76), and total citations (1,495). Four keyword clusters were identified, covering AI applications in digital health, COVID-19 and ChatGPT, precision medicine, and public health epidemiology. "Outcomes" and "Risk" demonstrated a notable upward trend, indicating the use of AI to engage clinicians and patients in discussing patients' health risks and foreshadowing future research focal points. CONCLUSION: Analyzing our bibliometric data allowed us to identify progress, focus areas, and emerging fields in AI for medicine, pointing to potential future research directions. Since 2019, there has been a steady rise in publications related to AI in medicine, indicating its rapid growth. In addition, we reviewed journals and significant publications to pinpoint prominent countries, institutions, and academics. Researchers will gain important insights from this study into the current landscape, collaborative frameworks, and key research topics in the field. The findings suggest directions for future research.
Lin M; Lin L; Lin L; Lin Z; Yan X
10
40034889
Evaluating the Quality and Readability of Generative Artificial Intelligence (AI) Chatbot Responses in the Management of Achilles Tendon Rupture.
2,025
Cureus
INTRODUCTION: The rise of artificial intelligence (AI), including generative chatbots like ChatGPT (OpenAI, San Francisco, CA, USA), has revolutionized many fields, including healthcare. Patients have gained the ability to prompt chatbots to generate purportedly accurate and individualized healthcare content. This study analyzed the readability and quality of answers to Achilles tendon rupture questions from six generative AI chatbots to evaluate and distinguish their potential as patient education resources. METHODS: The six AI models used were ChatGPT 3.5, ChatGPT 4, Gemini 1.0 (previously Bard; Google, Mountain View, CA, USA), Gemini 1.5 Pro, Claude (Anthropic, San Francisco, CA, USA), and Grok (xAI, Palo Alto, CA, USA), without prior prompting. Each was asked 10 common patient questions about Achilles tendon rupture, determined by five orthopaedic surgeons. The readability of generative responses was measured using Flesch-Kincaid Reading Grade Level, Gunning Fog, and SMOG (Simple Measure of Gobbledygook). The response quality was subsequently graded using the DISCERN criteria by five blinded orthopaedic surgeons. RESULTS: Gemini 1.0 produced responses with significantly better readability (closest to the average American reading level) than ChatGPT 3.5, ChatGPT 4, and Claude. Additionally, mean DISCERN scores demonstrated significantly higher quality of responses from Gemini 1.0 (63.0+/-5.1) and ChatGPT 4 (63.8+/-6.2) than from ChatGPT 3.5 (53.8+/-3.8), Claude (55.0+/-3.8), and Grok (54.2+/-4.8). However, the overall quality (question 16, DISCERN) of each model, when averaged, was graded at an above-average level (range, 3.4-4.4). DISCUSSION AND CONCLUSION: Our results indicate that generative chatbots can potentially serve as patient education resources alongside physicians. Although some models lacked sufficient content, each performed above average in overall quality. With the lowest reading grade level and among the highest DISCERN scores, Gemini 1.0 outperformed ChatGPT, Claude, and Grok, potentially emerging as the simplest and most reliable generative chatbot regarding management of Achilles tendon rupture.
Collins CE; Giammanco PA; Guirgus M; Kricfalusi M; Rice RC; Nayak R; Ruckle D; Filler R; Elsissy JG
32
40233367
Comparison of ChatGPT's Diagnostic and Management Accuracy of Foot and Ankle Bone-Related Pathologies to Orthopaedic Surgeons.
2,025
The Journal of the American Academy of Orthopaedic Surgeons
INTRODUCTION: The steep rise in the utilization of large language model chatbots, such as ChatGPT, has spilled into medicine in recent years. The newest version of ChatGPT, ChatGPT-4, has passed medical licensure examinations and, specifically in orthopaedics, has performed at the level of a postgraduate year three orthopaedic surgery resident on the Orthopaedic In-Service Training Examination question bank sets. The purpose of this study was to evaluate ChatGPT-4's diagnostic and decision-making capacity in the clinical management of bone-related injuries of the foot and ankle. METHODS: Eight bone-related foot and ankle orthopaedic cases were presented to ChatGPT-4 and subsequently evaluated by three fellowship-trained foot and ankle orthopaedic surgeons. Cases were scored using criteria on a Likert scale, with total scores ranging from 5 (lowest) to 25 (highest) across five criteria. ChatGPT-4 was referred to as "Dr. GPT," establishing a peer dynamic so that the chatbot emulated the role of an orthopaedic surgeon. RESULTS: The average score across all criteria for each case was 4.53 of 5, with an overall average sum score of 22.7 of 25 for all cases. The pathology with the highest score was the second metatarsal stress fracture (24.3), whereas the case with the lowest score was hallux rigidus (21.3). Kendall correlation analysis of interrater reliability showed variable correlation among surgeons, without statistical significance. CONCLUSION: ChatGPT-4 effectively diagnosed and provided appropriate treatment options for simple bone-related foot and ankle cases. Importantly, ChatGPT did not fabricate treatment options (ie, the hallucination phenomenon), which has been previously well-documented in the literature, notably receiving its second-highest overall average score in this criterion. ChatGPT struggled to provide comprehensive information beyond standard treatment options. Overall, ChatGPT has the potential to serve as a widely accessible resource for patients and nonorthopaedic clinicians, although limitations may exist in the delivery of comprehensive information.
Essis MD; Hartman H; Tung WS; Oh I; Peden S; Gianakos AL
32
39851791
ChatGPT, Google, or PINK? Who Provides the Most Reliable Information on Side Effects of Systemic Therapy for Early Breast Cancer?
2,024
Clinics and practice
Introduction: Survival in early breast cancer (BC) has improved significantly thanks to numerous new drugs. Nevertheless, information about the need for systemic therapy, especially chemotherapy, represents an additional stress factor for patients. A common coping strategy is searching for further information, traditionally via search engines or websites, but artificial intelligence (AI) is also increasingly being used. It is currently unclear who provides the most reliable information. Material and Methods: AI in the form of ChatGPT 3.5 and 4.0, Google, and the website of PINK, a provider of a prescription-based mobile health app for patients with BC, were compared to determine the validity of their statements on the five most common side effects of nineteen approved drugs and one drug with pending approval (Ribociclib) for the systemic treatment of BC. For this purpose, the drugs were divided into three groups: chemotherapy, targeted therapy, and endocrine therapy. The reference for the comparison was the prescribing information of the respective drug. A congruence score was calculated for the information on side effects: correct information (2 points), generally appropriate information (1 point), and otherwise no points. The information sources were then compared using a Friedman test and a Bonferroni-corrected post-hoc test. Results: In the overall comparison, ChatGPT 3.5 received the best score with a congruence of 67.5%, followed by ChatGPT 4.0 with 67.0%, PINK with 59.5%, and Google with 40.0% (p < 0.001). There were also significant differences when comparing the individual subcategories, with the best congruence achieved by PINK (73.3%, p = 0.059) in the chemotherapy category, ChatGPT 4.0 (77.5%; p < 0.001) in the targeted therapy category, and ChatGPT 3.5 (p = 0.002) in the endocrine therapy category. Conclusions: Artificial intelligence and professional online information websites provide the most reliable information on the possible side effects of the systemic treatment of early breast cancer, but congruence with the prescribing information is limited. Medical consultation should still be considered the best source of information.
Lukac S; Griewing S; Leinert E; Dayan D; Heitmeir B; Wallwiener M; Janni W; Fink V; Ebner F
0-1
38728938
Comparative analysis of ChatGPT, Gemini and emergency medicine specialist in ESI triage assessment.
2,024
The American journal of emergency medicine
INTRODUCTION: The term Artificial Intelligence (AI) was first coined in the 1950s, and the field has made significant progress up to the present day. During this period, numerous AI applications have been developed. GPT-4 and Gemini are two of the best-known of these AI models. The Emergency Severity Index (ESI) is currently one of the most commonly used triage systems for effective patient triage in the emergency department. The aim of this study is to evaluate the performance of GPT-4, Gemini, and emergency medicine specialists in ESI triage against each other and, furthermore, to contribute to the literature on the usability of these AI programs in emergency department triage. METHODS: Our study was conducted between February 1, 2024, and February 29, 2024, among emergency medicine specialists in Turkey, as well as with GPT-4 and Gemini. Ten emergency medicine specialists were included in our study; as a limitation, the participating specialists do not frequently use the ESI triage model in daily practice. In the first phase of our study, 100 case examples related to adult or trauma patients were extracted from the sample and training cases found in the ESI Implementation Handbook. In the second phase, the provided responses were categorized into three groups: correct triage, over-triage, and under-triage. In the third phase, the questions were categorized according to the correct triage responses. RESULTS: A statistically significant difference was found between the three groups in terms of correct triage, over-triage, and under-triage (p < 0.001). GPT-4 was found to have the highest correct triage rate, with an average of 70.60 (+/-3.74), while Gemini had the highest over-triage rate, with an average of 35.2 (+/-2.93) (p < 0.001). The highest under-triage rate was observed among emergency medicine specialists (32.90 (+/-11.83)). In the ESI 1-2 class, Gemini had a correct triage rate of 87.77%, GPT-4 had 85.11%, and emergency medicine specialists had 49.33%. CONCLUSION: Our study shows that both GPT-4 and Gemini can accurately triage critical and urgent patients in the ESI 1-2 groups at a high rate. Furthermore, GPT-4 was more successful in ESI triage across all patients. These results suggest that GPT-4 and Gemini could assist in the accurate ESI triage of patients in emergency departments.
Meral G; Ates S; Gunay S; Ozturk A; Kusdogan M
10
38973528
Evaluation of online chat-based artificial intelligence responses about inflammatory bowel disease and diet.
2,024
European journal of gastroenterology & hepatology
INTRODUCTION: The USA has the highest age-standardized prevalence of inflammatory bowel disease (IBD). Both genetic and environmental factors have been implicated in IBD flares, and multiple strategies are centered around avoiding dietary triggers to maintain remission. Chat-based artificial intelligence (CB-AI) has shown great potential in enhancing patient education in medicine. We evaluate the role of CB-AI in patient education on the dietary management of IBD. METHODS: Six questions evaluating important concepts in the dietary management of IBD were posed to three CB-AI models - ChatGPT, BingChat, and YouChat - three different times. All responses were graded for appropriateness and reliability by two physicians using dietary information from the Crohn's and Colitis Foundation. The responses were graded as reliably appropriate, reliably inappropriate, and unreliable. The expert assessment of the reviewing physicians was validated by the joint probability of agreement for two raters. RESULTS: ChatGPT provided reliably appropriate responses to questions on the dietary management of IBD more often than BingChat and YouChat. There were two questions to which more than one CB-AI provided unreliable responses. Each CB-AI provided examples within their responses, but the examples were not always appropriate. Whether the response was appropriate or not, the CB-AIs mentioned consulting with an expert in the field. The inter-rater reliability was 88.9%. DISCUSSION: CB-AIs have the potential to improve patient education and outcomes, but studies evaluating their appropriateness for various health conditions are sparse. Our study showed that CB-AIs have the ability to provide appropriate answers to most questions regarding the dietary management of IBD.
Naqvi HA; Delungahawatta T; Atarere JO; Bandaru SK; Barrow JB; Mattar MC
43
39810943
A comparative analysis of generative artificial intelligence responses from leading chatbots to questions about endometriosis.
2,025
AJOG global reports
INTRODUCTION: The use of generative artificial intelligence (AI) has begun to permeate most industries, including medicine, and patients will inevitably start using these large language model (LLM) chatbots as a modality for education. As healthcare information technology evolves, it is imperative to evaluate chatbots and the accuracy of the information they provide to patients and to determine if there is variability between them. OBJECTIVE: This study aimed to evaluate the accuracy and comprehensiveness of three chatbots in addressing questions related to endometriosis and determine the level of variability between them. STUDY DESIGN: Three LLMs, including ChatGPT-4 (OpenAI), Claude (Anthropic), and Bard (Google), were asked to generate answers to 10 commonly asked questions about endometriosis. The responses were qualitatively compared to current guidelines and expert opinion on endometriosis and rated on a scale by nine gynecologists. The grading scale included the following: (1) completely incorrect, (2) mostly incorrect and some correct, (3) mostly correct and some incorrect, (4) correct but inadequate, (5) correct and comprehensive. Final scores were averaged across the nine reviewers. Kendall's W and the related chi-square test were used to evaluate the reviewers' strength of agreement in ranking the LLMs' responses for each item. RESULTS: Average scores for the 10 answers for Bard, ChatGPT, and Claude were 3.69, 4.24, and 3.7, respectively. Two questions showed significant disagreement between the nine reviewers. There were no questions that any model answered comprehensively and correctly across the reviewers. The model most associated with comprehensive and correct responses was ChatGPT. Chatbots showed an improved ability to accurately answer questions about symptoms and pathophysiology over treatment and risk of recurrence. CONCLUSION: The analysis of LLMs revealed that, on average, they mainly provided correct but inadequate responses to commonly asked patient questions about endometriosis. While chatbot responses can serve as valuable supplements to information provided by licensed medical professionals, it is crucial to maintain a thorough ongoing evaluation process for outputs to provide the most comprehensive and accurate information to patients. Further research into this technology and its role in patient education and treatment is crucial as generative AI becomes more embedded in the medical field.
Cohen ND; Ho M; McIntire D; Smith K; Kho KA
43
39106968
Fact Check: Assessing the Response of ChatGPT to Alzheimer's Disease Myths.
2,024
Journal of the American Medical Directors Association
INTRODUCTION: There are many myths regarding Alzheimer's disease (AD) that have been circulated on the internet, each exhibiting varying degrees of accuracy, inaccuracy, and misinformation. Large language models, such as ChatGPT, may be a valuable tool to help assess these myths for veracity and inaccuracy; however, they can induce misinformation as well. OBJECTIVE: This study assesses ChatGPT's ability to identify and address AD myths with reliable information. METHODS: We conducted a cross-sectional study of attending geriatric medicine clinicians' evaluation of ChatGPT (GPT 4.0) responses to 16 selected AD myths. We prompted ChatGPT to express its opinion on each myth and implemented a survey using REDCap to determine the degree to which clinicians agreed with the accuracy of each of ChatGPT's explanations. We also collected their explanations of any disagreements with ChatGPT's responses. We used a 5-category Likert-type scale with scores ranging from -2 to 2 to quantify clinicians' agreement in each aspect of the evaluation. RESULTS: The clinicians (n = 10) were generally satisfied with ChatGPT's explanations across the 16 myths, with a mean (SD) score of 1.1 (+/-0.3). Most clinicians selected "Agree" or "Strongly Agree" for each statement. Some statements obtained a small number of "Disagree" responses. There were no "Strongly Disagree" responses. CONCLUSION: Most surveyed health care professionals acknowledged the potential value of ChatGPT in mitigating AD misinformation; however, the need for more refined and detailed explanations of the disease's mechanisms and treatments was highlighted.
Huang SS; Song Q; Beiting KJ; Duggan MC; Hines K; Murff H; Leung V; Powers J; Harvey TS; Malin B; Yin Z
0-1
37328321
ChatGPT and large language model (LLM) chatbots: The current state of acceptability and a proposal for guidelines on utilization in academic medicine.
2,023
Journal of pediatric urology
INTRODUCTION: There is currently no clear consensus on the standards for using large language models such as ChatGPT in academic medicine. Hence, we performed a scoping review of the available literature to understand the current state of LLM use in medicine and to provide a guideline for future utilization in academia. MATERIALS AND METHODS: A scoping review of the literature was performed through a Medline search on February 16, 2023, using a combination of keywords including artificial intelligence, machine learning, natural language processing, generative pre-trained transformer, ChatGPT, and large language model. There were no restrictions on language or date of publication. Records not pertaining to LLMs were excluded. Records pertaining to LLM chatbots and ChatGPT were identified and evaluated separately. Among these, records that suggested recommendations for ChatGPT use in academia were utilized to create guideline statements for ChatGPT and LLM use in academic medicine. RESULTS: A total of 87 records were identified. 30 records did not pertain to large language models and were excluded. 54 records underwent a full-text review for evaluation. There were 33 records related to LLM chatbots or ChatGPT. DISCUSSION: From assessing these texts, five guideline statements for LLM use were developed: (1) ChatGPT/LLMs cannot be cited as authors in scientific manuscripts; (2) if the use of ChatGPT/LLMs is considered for academic work, the author(s) should have at least a basic understanding of what ChatGPT/LLMs are; (3) do not use ChatGPT/LLMs to produce the entirety of the text in a manuscript; humans must be held accountable for the use of ChatGPT/LLMs, and content created by ChatGPT/LLMs should be meticulously verified by humans; (4) ChatGPT/LLMs may be used for editing and refining text; (5) any use of ChatGPT/LLMs should be transparent and should be clearly outlined and acknowledged in scientific manuscripts. CONCLUSION: Future authors should remain mindful of the potential impact their academic work may have on healthcare and continue to uphold the highest ethical standards and integrity when utilizing ChatGPT/LLMs.
Kim JK; Chua M; Rickard M; Lorenzo A
10
40358604
Comparing Diagnostic Accuracy of ChatGPT to Clinical Diagnosis in General Surgery Consults: A Quantitative Analysis of Disease Diagnosis.
2,025
Military medicine
INTRODUCTION: This study addressed the challenge of providing accurate and timely medical diagnostics in military health care settings with limited access to advanced diagnostic tools, such as those encountered in austere environments, remote locations, or during large-scale combat operations. The primary objective was to evaluate the utility of ChatGPT, an artificial intelligence (AI) language model, as a support tool for health care providers in clinical decision-making and early diagnosis. MATERIALS AND METHODS: The research used an observational cross-sectional cohort design and exploratory predictive techniques. The methodology involved collecting and analyzing data from clinical scenarios based on common general surgery diagnoses: acute appendicitis, acute cholecystitis, and diverticulitis. These scenarios incorporated age, gender, symptoms, vital signs, physical exam findings, laboratory values, medical and surgical histories, and current medication regimens as data inputs. All collected data were entered into a table for each diagnosis. These tables were then used for scenario creation, with scenarios written to reflect typical patient presentations for each diagnosis. Finally, each scenario was entered into ChatGPT (version 3.5) individually, and ChatGPT was asked to provide the leading diagnosis for the condition based on the provided information. The output from ChatGPT was then compared to the expected diagnosis to assess accuracy. RESULTS: A statistically significant difference between ChatGPT's diagnostic outcomes and clinical diagnoses for acute cholecystitis and diverticulitis was observed, with ChatGPT demonstrating inferior accuracy in controlled test scenarios. A secondary outcome analysis examined the relationship between specific symptoms and diagnosis; the presence of certain symptoms in incorrectly diagnosed scenarios indicates that they may adversely impact ChatGPT's diagnostic decision-making, resulting in a higher likelihood of misdiagnosis. These results highlight AI's potential as a diagnostic support tool but underscore the importance of continued research to evaluate its performance in more complex and varied clinical scenarios. CONCLUSIONS: In summary, this study evaluated the diagnostic accuracy of ChatGPT in identifying three common surgical conditions (acute appendicitis, acute cholecystitis, and diverticulitis) using comprehensive patient data, including age, gender, medical history, medications, symptoms, vital signs, physical exam findings, and basic laboratory results. The hypothesis was that ChatGPT might display slightly lower accuracy rates than clinical diagnoses made by medical providers. The statistical analysis, which included Fisher's exact test, revealed a significant difference between ChatGPT's diagnostic outcomes and clinical diagnoses, particularly in acute cholecystitis and diverticulitis cases. Therefore, we reject the null hypothesis, as the results indicated that ChatGPT's diagnostic accuracy differs significantly from clinical diagnostics in the presented scenarios. However, ChatGPT's overall high accuracy suggests that it can reliably support clinicians, especially in environments where diagnostic resources are limited, and can serve as a valuable tool in military medicine.
Meier H; McMahon R; Hout B; Randles J; Aden J; Rizzo JA
10
38555637
Evaluating the Efficacy of AI Chatbots as Tutors in Urology: A Comparative Analysis of Responses to the 2022 In-Service Assessment of the European Board of Urology.
2,024
Urologia internationalis
INTRODUCTION: This study assessed the potential of large language models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics. METHODS: Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were examined in two testing rounds, separated by 48 h, using 100 multiple-choice questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five different subtopics. The correct answer was defined as "formal accuracy" (FA), representing the designated single best answer (SBA) among four options. Alternative answers selected by the LLMs, which may not necessarily be the SBA but are still deemed correct, were labeled "extended accuracy" (EA). Their capacity to enhance the overall accuracy rate when combined with FA was examined. RESULTS: In the two rounds of testing, the FA scores were as follows: ChatGPT-3.5: 58% and 62%, ChatGPT-4: 63% and 77%, and Bing AI: 81% and 73%. The incorporation of EA did not yield a significant enhancement in overall performance. The resulting gains for ChatGPT-3.5, ChatGPT-4, and Bing AI were 7% and 5%, 5% and 2%, and 3% and 1%, respectively (p > 0.3). Within urological subtopics, the LLMs showed their best performance in Pediatrics/Congenital and were comparatively less effective in Functional/BPS/Incontinence. CONCLUSION: LLMs exhibit suboptimal urology knowledge and unsatisfactory proficiency for educational purposes. The overall accuracy did not significantly improve when combining EA with FA. The error rates remained high, ranging from 16 to 35%. Proficiency levels vary substantially across subtopics. Further development of medicine-specific LLMs is required before integration into urological training programs.
May M; Korner-Riffard K; Kollitsch L; Burger M; Brookman-May SD; Rauchenwald M; Marszalek M; Eredics K
21
39958944
Readability of Hospital Online Patient Education Materials Across Otolaryngology Specialties.
2,025
Laryngoscope investigative otolaryngology
INTRODUCTION: This study evaluates the readability of online patient education materials (OPEMs) across otolaryngology subspecialties, hospital characteristics, and national otolaryngology organizations, while assessing AI alternatives. METHODS: Hospitals from the US News Best ENT list were queried for OPEMs describing a chosen surgery per subspecialty; the American Academy of Otolaryngology-Head and Neck Surgery (AAO), American Laryngological Association (ALA), Ear, Nose, and Throat United Kingdom (ENTUK), and the Canadian Society of Otolaryngology-Head and Neck Surgery (CSOHNS) were similarly queried. Google was queried for the top 10 links from hospitals per procedure. Ownership (private/public), presence of respective otolaryngology fellowships, region, and median household income (zip code) were collected. Readability was assessed using seven indices and averaged: Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Readability (GFR), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Readability Index (CLRI), and Linsear Write Readability Formula (LWRF). AI-generated materials from ChatGPT were compared for readability, accuracy, content, and tone. Analyses were conducted between subspecialties, against national organizations and the NIH standard, and across demographic variables. RESULTS: Across 144 hospitals, OPEMs exceeded NIH readability standards, averaging an 8th-12th grade level across subspecialties. In rhinology, facial plastics, and sleep medicine, hospital OPEMs had higher readability scores than ENTUK's materials (11.4 vs. 9.1, 10.4 vs. 7.2, 11.5 vs. 9.2, respectively; all p < 0.05), but lower than AAO's (p = 0.005). ChatGPT-generated materials averaged a 6.8-grade level, demonstrating improved readability, especially with specialized prompting, compared to all hospital and organization OPEMs. CONCLUSION: OPEMs from all sources exceed the NIH readability standard. ENTUK serves as a benchmark for accessible language, while ChatGPT demonstrates the feasibility of producing more readable content. Otolaryngologists might consider using ChatGPT, with caution, to generate patient-friendly materials, and advocate for national-level improvements in patient education readability.
Warrier A; Singh RP; Haleem A; Lee A; Mothy D; Patel A; Eloy JA; Manzi B
32
39487846
Transforming emergency triage: A preliminary, scenario-based cross-sectional study comparing artificial intelligence models and clinical expertise for enhanced accuracy.
2,024
Bratislavske lekarske listy
INTRODUCTION: This study examines triage judgments in emergency settings and compares the performance of artificial intelligence models with that of healthcare professionals. It discusses the disparities in precision rates between subjective evaluations by health professionals and objective assessments by AI systems. MATERIAL AND METHOD: For the analysis of emergency triage efficacy, 50 virtual patient scenarios were created. Emergency medicine residents and other healthcare providers with triage education were tasked with categorizing triage levels for the virtual patient scenarios. The artificial intelligence systems were tasked with resolving the same scenarios. All of them were asked to use the three-color-coded triage system of the Republic of Turkey Ministry of Health. The answer keys were created by consensus of the researchers. In addition, emergency medicine specialists were asked to evaluate the acuity level of each scenario in order to perform sub-analyses. RESULTS: The study included 117 healthcare professionals, comprising 31 emergency medicine residents (26.5%), 1 paramedic (0.9%), 5 emergency health technicians (4.3%), and 80 nurses (68.4%). Google Bard AI and OpenAI ChatGPT v.3.5 were used as the artificial intelligence systems. The responses were compared with the answer key to determine each group's efficacy. As planned, the responses from healthcare professionals were analyzed individually by scenario acuity level. Emergency medicine residents and the other groups of healthcare providers had significantly higher numbers of correct answers than Google Bard and ChatGPT (n=30.7 vs n=25.5). There was no significant difference between ChatGPT and Bard for low- and high-acuity scenarios (p=0.821). CONCLUSION: AI models can examine extensive data sets and make more accurate and quicker triage judgments with sophisticated algorithms. However, in this study, we found that the triage ability of artificial intelligence is not yet on par with that of humans. A more efficient triage system can be developed by integrating artificial intelligence with human input, rather than relying solely on technology. Keywords: emergency triage, AI applications, health technology, artificial intelligence, emergency management.
Eraybar S; Dal E; Aydin MO; Begenen M
10
39025818
Effects of interacting with a large language model compared with a human coach on the clinical diagnostic process and outcomes among fourth-year medical students: study protocol for a prospective, randomised experiment using patient vignettes.
2,024
BMJ open
INTRODUCTION: Versatile large language models (LLMs) have the potential to augment diagnostic decision-making by assisting diagnosticians, thanks to their ability to engage in open-ended, natural conversations and their comprehensive knowledge access. Yet the novelty of LLMs in diagnostic decision-making introduces uncertainties regarding their impact. Clinicians unfamiliar with the use of LLMs in their professional context may rely on general attitudes towards LLMs more broadly, potentially hindering thoughtful use and critical evaluation of their input, leading to either over-reliance and lack of critical thinking or an unwillingness to use LLMs as diagnostic aids. To address these concerns, this study examines the influence on the diagnostic process and outcomes of interacting with an LLM compared with a human coach, and of prior training vs no training for interacting with either of these 'coaches'. Our findings aim to illuminate the potential benefits and risks of employing artificial intelligence (AI) in diagnostic decision-making. METHODS AND ANALYSIS: We are conducting a prospective, randomised experiment with N=158 fourth-year medical students from Charite Medical School, Berlin, Germany. Participants are asked to diagnose patient vignettes after being assigned to either a human coach or ChatGPT and after either training or no training (both between-subject factors). We are specifically collecting data on the effects of using either of these 'coaches' and of additional training on information search, number of hypotheses entertained, diagnostic accuracy and confidence. Statistical methods will include linear mixed effects models. Exploratory analyses of the interaction patterns and attitudes towards AI will also generate more generalisable knowledge about the role of AI in medicine. ETHICS AND DISSEMINATION: The Bern Cantonal Ethics Committee considered the study exempt from full ethical review (BASEC No: Req-2023-01396). All methods will be conducted in accordance with relevant guidelines and regulations. Participation is voluntary and informed consent will be obtained. Results will be published in peer-reviewed scientific medical journals. Authorship will be determined according to the International Committee of Medical Journal Editors guidelines.
Kammer JE; Hautz WE; Krummrey G; Sauter TC; Penders D; Birrenbach T; Bienefeld N
10
38665043
Evaluating ChatGPT's Utility in Medicine Guidelines Through Web Search Analysis.
2,024
The Permanente journal
INTRODUCTION: With the rise of machine learning applications in health care, shifts in medical fields that rely on precise prognostic models and pattern detection tools are anticipated in the near future. Chat Generative Pretrained Transformer (ChatGPT) is a recent machine learning innovation known for producing text that mimics human conversation. To gauge ChatGPT's capability in addressing patient inquiries, the authors set out to juxtapose it with Google Search, America's predominant search engine. Their comparison focused on: 1) the top questions related to clinical practice guidelines from the American Academy of Family Physicians by category and subject; 2) responses to these prevalent questions; and 3) the top questions that elicited a numerical reply. METHODS: Utilizing a freshly installed Google Chrome browser (version 109.0.5414.119), the authors conducted a Google web search (www.google.com) on March 4, 2023, ensuring minimal influence from personalized search algorithms. Search phrases were derived from the clinical guidelines of the American Academy of Family Physicians. The authors prompted ChatGPT with: "Search Google using the term '(refer to search terms)' and document the top four questions linked to the term." The same 25 search terms were employed. The authors cataloged the primary 4 questions and their answers for each term, resulting in 100 questions and answers. RESULTS: Of the 100 questions, 42% (42 questions) were consistent across all search terms. ChatGPT predominantly sourced from academic (38% vs 15%, p = 0.0002) and government (50% vs 39%, p = 0.12) domains, whereas Google web searches leaned toward commercial sources (32% vs 11%, p = 0.0002). Thirty-nine percent (39 questions) of the questions yielded divergent answers between the 2 platforms. Notably, 16 of the 39 distinct answers from ChatGPT lacked a numerical reply, instead advising a consultation with a medical professional for health guidance. CONCLUSION: Google Search and ChatGPT present varied questions and answers for both broad and specific queries. Both patients and doctors should exercise prudence when considering ChatGPT as a digital health adviser. It's essential for medical professionals to assist patients in accurately communicating their online discoveries and ensuing inquiries for a comprehensive discussion.
Dubin JA; Bains SS; Hameed D; Chen Z; Gaertner E; Nace J; Mont MA; Delanois RE
0-1
40289627
Evaluating Large Language Models on Aerospace Medicine Principles.
2,025
Wilderness & environmental medicine
INTRODUCTION: Large language models (LLMs) hold immense potential to serve as clinical decision-support tools for Earth-independent medical operations. However, the generation of incorrect information may be misleading or even harmful when applied to care in this setting. METHOD: To better understand this risk, this work tested two publicly available LLMs, ChatGPT-4 and Google Gemini Advanced (1.0 Ultra), as well as a custom Retrieval-Augmented Generation (RAG) LLM on factual knowledge and clinical reasoning in accordance with published material in aerospace medicine. We also evaluated the consistency of the two public LLMs when answering self-generated board-style questions. RESULTS: When queried with 857 free-response questions from Aerospace Medicine Boards Questions and Answers, ChatGPT-4 had a mean reader score from 4.23 to 5.00 (Likert scale of 1-5) across chapters, whereas Gemini Advanced and the RAG LLM scored 3.30 to 4.91 and 4.69 to 5.00, respectively. When queried with 20 multiple-choice aerospace medicine board questions provided by the American College of Preventive Medicine, ChatGPT-4 and Gemini Advanced responded correctly 70% and 55% of the time, respectively, while the RAG LLM answered 85% correctly. Despite this quantitative measure of high performance, the LLMs tested still exhibited gaps in factual knowledge that potentially could be harmful, a degree of clinical reasoning that may not pass the aerospace medicine board exam, and some inconsistency when answering self-generated questions. CONCLUSION: There is considerable promise for LLM use in autonomous medical operations in spaceflight given the anticipated continued rapid pace of development, including advancements in model training, data quality, and fine-tuning methods.
Anderson KD; Davis CA; Pickett SM; Pohlen MS
0-1
38502861
ChatGPT in medicine: prospects and challenges: a review article.
2,024
International journal of surgery (London, England)
It has been a year since the launch of Chat Generative Pre-trained Transformer (ChatGPT), a generative artificial intelligence (AI) program. The introduction of this cross-generational product initially stunned people with its incredible potential and then aroused increasing concern. In the field of medicine, researchers have extensively explored the possible applications of ChatGPT and achieved numerous satisfactory results. However, opportunities and issues always come together. Problems have also been exposed during the application of ChatGPT, requiring cautious handling, thorough consideration, and further guidelines for safe use. Here, the authors summarized the potential applications of ChatGPT in the medical field, including revolutionizing healthcare consultation, assisting patient management and treatment, transforming medical education, and facilitating clinical research. Meanwhile, the authors also enumerated researchers' concerns arising alongside its broad and satisfactory applications. As AI will inevitably permeate every aspect of modern life, the authors hope that this review can not only promote people's understanding of the potential applications of ChatGPT in the future but also remind them to be more cautious about this "Pandora's Box" in the medical field. It is necessary to establish normative guidelines for its safe use in the medical field as soon as possible.
Tan S; Xin X; Wu D
10
37761715
Enhancing Kidney Transplant Care through the Integration of Chatbot.
2,023
Healthcare (Basel, Switzerland)
Kidney transplantation is a critical treatment option for end-stage kidney disease patients, offering improved quality of life and increased survival rates. However, the complexities of kidney transplant care necessitate continuous advancements in decision making, patient communication, and operational efficiency. This article explores the potential integration of a sophisticated chatbot, an AI-powered conversational agent, to enhance kidney transplant practice and potentially improve patient outcomes. Chatbots and generative AI have shown promising applications in various domains, including healthcare, by simulating human-like interactions and generating contextually appropriate responses. Noteworthy AI models like ChatGPT by OpenAI, BingChat by Microsoft, and Bard AI by Google exhibit significant potential in supporting evidence-based research and healthcare decision making. The integration of chatbots in kidney transplant care may offer transformative possibilities. As a clinical decision support tool, it could provide healthcare professionals with real-time access to medical literature and guidelines, potentially enabling informed decision making and improved knowledge dissemination. Additionally, the chatbot has the potential to facilitate patient education by offering personalized and understandable information, addressing queries, and providing guidance on post-transplant care. Furthermore, under clinician or transplant pharmacist supervision, it has the potential to support post-transplant care and medication management by analyzing patient data, which may lead to tailored recommendations on dosages, monitoring schedules, and potential drug interactions. However, to fully ascertain its effectiveness and safety in these roles, further studies and validation are required. Its integration with existing clinical decision support systems may enhance risk stratification and treatment planning, contributing to more informed and efficient decision making in kidney transplant care. Given the importance of ethical considerations and bias mitigation in AI integration, future studies may evaluate long-term patient outcomes, cost-effectiveness, user experience, and the generalizability of chatbot recommendations. By addressing these factors and potentially leveraging AI capabilities, the integration of chatbots in kidney transplant care holds promise for potentially improving patient outcomes, enhancing decision making, and fostering the equitable and responsible use of AI in healthcare.
Garcia Valencia OA; Thongprayoon C; Jadlowiec CC; Mao SA; Miao J; Cheungpasitporn W
43
37900350
An approach for collaborative development of a federated biomedical knowledge graph-based question-answering system: Question-of-the-Month challenges.
2,023
Journal of clinical and translational science
Knowledge graphs have become a common approach for knowledge representation. Yet, the application of graph methodology is elusive due to the sheer number and complexity of knowledge sources. In addition, semantic incompatibilities hinder efforts to harmonize and integrate across these diverse sources. As part of The Biomedical Translator Consortium, we have developed a knowledge graph-based question-answering system designed to augment human reasoning and accelerate translational scientific discovery: the Translator system. We have applied the Translator system to answer biomedical questions in the context of a broad array of diseases and syndromes, including Fanconi anemia, primary ciliary dyskinesia, multiple sclerosis, and others. A variety of collaborative approaches have been used to research and develop the Translator system. One recent approach involved the establishment of a monthly "Question-of-the-Month (QotM) Challenge" series. Herein, we describe the structure of the QotM Challenge; the six challenges that have been conducted to date on drug-induced liver injury, cannabidiol toxicity, coronavirus infection, diabetes, psoriatic arthritis, and ATP1A3-related phenotypes; the scientific insights that have been gleaned during the challenges; and the technical issues that were identified over the course of the challenges and that can now be addressed to foster further development of the prototype Translator system. We close with a discussion on Large Language Models such as ChatGPT and highlight differences between those models and the Translator system.
Fecho K; Bizon C; Issabekova T; Moxon S; Thessen AE; Abdollahi S; Baranzini SE; Belhu B; Byrd WE; Chung L; Crouse A; Duby MP; Ferguson S; Foksinska A; Forero L; Friedman J; Gardner V; Glusman G; Hadlock J; Hanspers K; Hinderer E; Hobbs C; Hyde G; Huang S; Koslicki D; Mease P; Muller S; Mungall CJ; Ramsey SA; Roach J; Rubin I; Schurman SH; Shalev A; Smith B; Soman K; Stemann S; Su AI; Ta C; Watkins PB; Williams MD; Wu C; Xu CH
10
40140500
Preliminary evaluation of ChatGPT model iterations in emergency department diagnostics.
2,025
Scientific reports
Large language model chatbots such as ChatGPT have shown potential in assisting health professionals in emergency departments (EDs). However, the diagnostic accuracy of newer ChatGPT models remains unclear. This retrospective study evaluated the diagnostic performance of various ChatGPT models (GPT-3.5, GPT-4, GPT-4o, and the o1 series) in predicting diagnoses for ED patients (n = 30) and examined the impact of explicitly invoking reasoning (thoughts). Earlier models, such as GPT-3.5, demonstrated high accuracy for top-three differential diagnoses (80.0% accuracy) but underperformed in identifying leading diagnoses (47.8%) compared to newer models such as chatgpt-4o-latest (60%, p < 0.01) and o1-preview (60%, p < 0.01). Asking for thoughts to be provided significantly enhanced performance in predicting the leading diagnosis for 4o models such as 4o-2024-0513 (from 45.6% to 56.7%; p = 0.03) and 4o-mini-2024-07-18 (from 54.4% to 60.0%; p = 0.04) but had minimal impact on o1-mini and o1-preview. In challenging cases, such as pneumonia without fever, all models generally failed to predict the correct diagnosis, indicating that atypical presentations are a major limitation for ED application of current ChatGPT models.
Wang J; Shue K; Liu L; Hu G
10
40206998
Fine-Tuning Large Language Models for Specialized Use Cases.
2,025
Mayo Clinic proceedings. Digital health
Large language models (LLMs) are a type of artificial intelligence which operate by predicting and assembling sequences of words that are statistically likely to follow from a given text input. With this basic ability, LLMs are able to answer complex questions and follow extremely complex instructions. Products created using LLMs, such as ChatGPT by OpenAI and Claude by Anthropic, have gained enormous traction and user engagement and have revolutionized the way we interact with technology, bringing a new dimension to human-computer interaction. Fine-tuning is a process in which a pretrained model, such as an LLM, is further trained on a custom data set to adapt it for specialized tasks or domains. In this review, we outline some of the major methodologic approaches and techniques that can be used to fine-tune LLMs for specialized use cases and enumerate the general steps required for carrying out LLM fine-tuning. We then illustrate a few of these methodologic approaches by describing several specific use cases of fine-tuning LLMs across medical subspecialties. Finally, we close with a consideration of some of the benefits and limitations associated with fine-tuning LLMs for specialized use cases, with an emphasis on specific concerns in the field of medicine.
Anisuzzaman DM; Malins JG; Friedman PA; Attia ZI
10
37816837
The future landscape of large language models in medicine.
2,023
Communications medicine
Large language models (LLMs) are artificial intelligence (AI) tools specifically trained to process and generate text. LLMs attracted substantial public attention after OpenAI's ChatGPT was made publicly available in November 2022. LLMs can often answer questions, summarize, paraphrase and translate text on a level that is nearly indistinguishable from human capabilities. The possibility to actively interact with models like ChatGPT makes LLMs attractive tools in various fields, including medicine. While these models have the potential to democratize medical knowledge and facilitate access to healthcare, they could equally distribute misinformation and exacerbate scientific misconduct due to a lack of accountability and transparency. In this article, we provide a systematic and comprehensive overview of the potentials and limitations of LLMs in clinical practice, medical research and medical education.
Clusmann J; Kolbinger FR; Muti HS; Carrero ZI; Eckardt JN; Laleh NG; Loffler CML; Schwarzkopf SC; Unger M; Veldhuizen GP; Wagner SJ; Kather JN
10
38312244
Comparison of large language models in management advice for melanoma: Google's AI BARD, BingAI and ChatGPT.
2,024
Skin health and disease
Large language models (LLMs) are an emerging artificial intelligence (AI) technology refining research and healthcare. Their use in medicine has seen numerous recent applications. One area where LLMs have shown particular promise is in the provision of medical information and guidance to practitioners. This study aims to assess three prominent LLMs-Google's AI BARD, BingAI and ChatGPT-4-in providing management advice for melanoma by comparing their responses to current clinical guidelines and existing literature. Five questions on melanoma pathology were posed to the three LLMs. A panel of three experienced Board-certified plastic surgeons evaluated the responses for readability (Flesch Reading Ease Score, Flesch-Kincaid Grade Level and Coleman-Liau Index) and suitability (modified DISCERN score), and compared them to existing guidelines. A t-test was performed to calculate differences in mean readability and reliability scores between LLMs, with a p value <0.05 considered statistically significant. The mean readability scores of the three LLMs were similar: ChatGPT scored numerically best, with a Flesch Reading Ease Score of 35.42 (+/-21.02), a Flesch-Kincaid Grade Level of 11.98 (+/-4.49) and a Coleman-Liau Index of 12.00 (+/-5.10), but none of these differences were significant (p > 0.05). On suitability, ChatGPT's DISCERN score of 58 (+/-6.44) significantly (p = 0.04) outperformed BARD's 36.2 (+/-34.06) and did not differ significantly from BingAI's 49.8 (+/-22.28). This study demonstrates that ChatGPT marginally outperforms BARD and BingAI in providing reliable, evidence-based clinical advice, but all three still face limitations in depth and specificity. Future research should improve LLM performance by integrating specialized databases and expert knowledge to support patient-centred care.
Mu X; Lim B; Seth I; Xie Y; Cevik J; Sofiadellis F; Hunter-Smith DJ; Rozen WM
10
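The three readability indices reported in the record above are closed-form formulas over sentence, word, letter, and syllable counts. A self-contained sketch using the published coefficients; the syllable counter here is a crude vowel-run heuristic, whereas dedicated tools count syllables more accurately:

```python
import re

def counts(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(len(w) for w in words)
    # Crude syllable heuristic: count runs of vowels per word (min 1).
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return sentences, len(words), letters, syllables

def readability(text: str):
    s, w, letters, syl = counts(text)
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)   # Flesch Reading Ease
    fkg = 0.39 * (w / s) + 11.8 * (syl / w) - 15.59      # Flesch-Kincaid Grade
    # Coleman-Liau: L = letters per 100 words, S = sentences per 100 words
    cli = 0.0588 * (letters / w * 100) - 0.296 * (s / w * 100) - 15.8
    return {"FRE": round(fre, 1), "FK grade": round(fkg, 1),
            "CLI": round(cli, 1)}

print(readability("Melanoma is a serious form of skin cancer. "
                  "Early excision improves survival."))
```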
39177261
Harnessing large language models' zero-shot and few-shot learning capabilities for regulatory research.
2,024
Briefings in bioinformatics
Large language models (LLMs) are sophisticated AI-driven models trained on vast sources of natural language data. They are adept at generating responses that closely mimic human conversational patterns. One of the most notable examples is OpenAI's ChatGPT, which has been extensively used across diverse sectors. Despite their flexibility, a significant challenge arises as most users must transmit their data to the servers of companies operating these models. Utilizing ChatGPT or similar models online may inadvertently expose sensitive information to the risk of data breaches. Therefore, implementing LLMs that are open source and smaller in scale within a secure local network becomes a crucial step for organizations where ensuring data privacy and protection has the highest priority, such as regulatory agencies. As a feasibility evaluation, we implemented a series of open-source LLMs within a regulatory agency's local network and assessed their performance on specific tasks involving extracting relevant clinical pharmacology information from regulatory drug labels. Our research shows that some models work well in the context of few- or zero-shot learning, achieving performance comparable to, or even better than, that of neural network models that needed thousands of training samples. One of the models was selected to address a real-world issue of finding intrinsic factors that affect drugs' clinical exposure without any training or fine-tuning. In a dataset of over 700 000 sentences, the model showed a 78.5% accuracy rate. Our work pointed to the possibility of implementing open-source LLMs within a secure local network and using these models to perform various natural language processing tasks when large numbers of training examples are unavailable.
Meshkin H; Zirkle J; Arabidarrehdor G; Chaturbedi A; Chakravartula S; Mann J; Thrasher B; Li Z
10
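The few-shot pattern evaluated in the record above can be sketched against a small open-source model run entirely on a local machine; the model choice and the labeled example sentences are illustrative assumptions, not the agency's actual task set:

```python
# Minimal sketch of few-shot extraction with a locally hosted open-source LLM.
# pip install transformers torch; the model downloads once, then runs locally.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = (
    "Decide whether each drug-label sentence mentions an intrinsic factor "
    "(age, sex, renal or hepatic impairment) affecting drug exposure. "
    "Answer yes or no.\n"
    "Sentence: Exposure increased 2-fold in patients with severe renal "
    "impairment. -> yes\n"
    "Sentence: Store at room temperature away from light. -> no\n"
    "Sentence: No dose adjustment is needed in elderly patients. ->"
)
out = generator(prompt, max_new_tokens=3, do_sample=False)
print(out[0]["generated_text"][len(prompt):].strip())
```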
39917061
Navigating the potential and pitfalls of large language models in patient-centered medication guidance and self-decision support.
2,025
Frontiers in medicine
Large Language Models (LLMs) are transforming patient education in medication management by providing accessible information to support healthcare decision-making. Building on our recent scoping review of LLMs in patient education, this perspective examines their specific role in medication guidance. These artificial intelligence (AI)-driven tools can generate comprehensive responses about drug interactions, side effects, and emergency care protocols, potentially enhancing patient autonomy in medication decisions. However, significant challenges exist, including the risk of misinformation and the complexity of providing accurate drug information without access to individual patient data. Safety concerns are particularly acute when patients rely solely on AI-generated advice for self-medication decisions. This perspective analyzes current capabilities, examines critical limitations, and raises questions regarding the possible integration of LLMs in medication guidance. We emphasize the need for regulatory oversight to ensure these tools serve as supplements to, rather than replacements for, professional healthcare guidance.
Aydin S; Karabacak M; Vlachos V; Margetis K
10
40097720
Large language model agents can use tools to perform clinical calculations.
2,025
NPJ digital medicine
Large language models (LLMs) can answer expert-level questions in medicine but are prone to hallucinations and arithmetic errors. Early evidence suggests LLMs cannot reliably perform clinical calculations, limiting their potential integration into clinical workflows. We evaluated ChatGPT's performance across 48 medical calculation tasks, finding incorrect responses in one-third of trials (n = 212). We then assessed three forms of agentic augmentation: retrieval-augmented generation, a code interpreter tool, and a set of task-specific calculation tools (OpenMedCalc) across 10,000 trials. Models with access to task-specific tools showed the greatest improvement, with LLaMa and GPT-based models demonstrating a 5.5-fold (88% vs 16%) and 13-fold (64% vs 4.8%) reduction in incorrect responses, respectively, compared to the unimproved models. Our findings suggest that integration of machine-readable, task-specific tools may help overcome LLMs' limitations in medical calculations.
Goodell AJ; Chu SN; Rouholiman D; Chu LF
10
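The abstract above does not spell out OpenMedCalc's interface, but the general "task-specific calculation tool" pattern can be sketched as follows: a deterministic calculator (here the standard Cockcroft-Gault creatinine-clearance formula) exposed to the model through the OpenAI tool-calling API. The schema wording and model choice are assumptions:

```python
# Sketch of routing a clinical calculation to a deterministic tool instead of
# letting the LLM do arithmetic. pip install openai; needs OPENAI_API_KEY.
import json
from openai import OpenAI

def crcl_cockcroft_gault(age: int, weight_kg: float,
                         scr_mg_dl: float, female: bool) -> float:
    """Creatinine clearance (mL/min), standard Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return round(crcl * 0.85 if female else crcl, 1)

tools = [{"type": "function", "function": {
    "name": "crcl_cockcroft_gault",
    "description": "Compute creatinine clearance via Cockcroft-Gault.",
    "parameters": {"type": "object", "properties": {
        "age": {"type": "integer"}, "weight_kg": {"type": "number"},
        "scr_mg_dl": {"type": "number"}, "female": {"type": "boolean"}},
        "required": ["age", "weight_kg", "scr_mg_dl", "female"]}}}]

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content":
               "CrCl for a 70-year-old woman, 60 kg, creatinine 1.2 mg/dL?"}],
    tools=tools)
call = resp.choices[0].message.tool_calls[0]  # assumes the model used the tool
print(crcl_cockcroft_gault(**json.loads(call.function.arguments)))  # ~41.3
```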
37460753
Large language models in medicine.
2,023
Nature medicine
Large language models (LLMs) can respond to free-text queries without being specifically trained in the task in question, causing excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) chatbot produced through sophisticated fine-tuning of an LLM, and other tools are emerging through similar developmental processes. Here we outline how LLM applications such as ChatGPT are developed, and we discuss how they are being leveraged in clinical settings. We consider the strengths and limitations of LLMs and their potential to improve the efficiency and effectiveness of clinical, educational and research work in medicine. LLM chatbots have already been deployed in a range of biomedical contexts, with impressive but mixed results. This review acts as a primer for interested clinicians, who will determine if and how LLM technology is used in healthcare for the benefit of patients and practitioners.
Thirunavukarasu AJ; Ting DSJ; Elangovan K; Gutierrez L; Tan TF; Ting DSW
10
38428889
Comparison of the problem-solving performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Korean emergency medicine board examination question bank.
2,024
Medicine
Large language models (LLMs) have been deployed in diverse fields, and the potential for their application in medicine has been explored through numerous studies. This study aimed to evaluate and compare the performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Emergency Medicine Board Examination question bank in the Korean language. Of the 2353 questions in the question bank, 150 questions were randomly selected, and 27 containing figures were excluded. Questions that required abilities such as analysis, creative thinking, evaluation, and synthesis were classified as higher-order questions, and those that required only recall, memory, and factual information in response were classified as lower-order questions. The answers and explanations obtained by inputting the 123 questions into the LLMs were analyzed and compared. ChatGPT-4 (75.6%) and Bing Chat (70.7%) showed higher correct response rates than ChatGPT-3.5 (56.9%) and Bard (51.2%). ChatGPT-4 showed the highest correct response rate for the higher-order questions at 76.5%, and Bard and Bing Chat showed the highest rate for the lower-order questions at 71.4%. The appropriateness of the explanation for the answer was significantly higher for ChatGPT-4 and Bing Chat than for ChatGPT-3.5 and Bard (75.6%, 68.3%, 52.8%, and 50.4%, respectively). ChatGPT-4 and Bing Chat outperformed ChatGPT-3.5 and Bard in answering a random selection of Emergency Medicine Board Examination questions in the Korean language.
Lee GU; Hong DY; Kim SY; Kim JW; Lee YH; Park SO; Lee KR
21
38098921
Stratified Evaluation of GPT's Question Answering in Surgery Reveals Artificial Intelligence (AI) Knowledge Gaps.
2,023
Cureus
Large language models (LLMs) have broad potential applications in medicine, such as aiding with education, providing reassurance to patients, and supporting clinical decision-making. However, there is a notable gap in understanding their applicability and performance in the surgical domain and how their performance varies across specialties. This paper aims to evaluate the performance of LLMs in answering surgical questions relevant to clinical practice and to assess how this performance varies across different surgical specialties. We used the MedMCQA dataset, a large-scale multi-choice question-answer (MCQA) dataset consisting of clinical questions across all areas of medicine. We extracted the relevant 23,035 surgical questions and submitted them to the popular LLMs Generative Pre-trained Transformers (GPT)-3.5 and GPT-4 (OpenAI OpCo, LLC, San Francisco, CA). Generative Pre-trained Transformer is a large language model that can generate human-like text by predicting subsequent words in a sentence based on the context of the words that come before it. It is pre-trained on a diverse range of texts and can perform a variety of tasks, such as answering questions, without needing task-specific training. The question-answering accuracy of GPT was calculated and compared between the two models and across surgical specialties. Both GPT-3.5 and GPT-4 achieved accuracies of 53.3% and 64.4%, respectively, on surgical questions, showing a statistically significant difference in performance. When compared to their performance on the full MedMCQA dataset, the two models performed differently: GPT-4 performed worse on surgical questions than on the dataset as a whole, while GPT-3.5 showed the opposite pattern. Significant variations in accuracy were also observed across different surgical specialties, with strong performances in anatomy, vascular, and paediatric surgery and worse performances in orthopaedics, ENT, and neurosurgery. Large language models exhibit promising capabilities in addressing surgical questions, although the variability in their performance between specialties cannot be ignored. The lower performance of the latest GPT-4 model on surgical questions relative to questions across all medicine highlights the need for targeted improvements and continuous updates to ensure relevance and accuracy in surgical applications. Further research and continuous monitoring of LLM performance in surgical domains are crucial to fully harnessing their potential and mitigating the risks of misinformation.
Murphy Lonergan R; Curry J; Dhas K; Simmons BI
21
37378099
Embracing Large Language Models for Medical Applications: Opportunities and Challenges.
2,023
Cureus
Large language models (LLMs) have the potential to revolutionize the field of medicine by, among other applications, improving diagnostic accuracy and supporting clinical decision-making. However, the successful integration of LLMs in medicine requires addressing challenges and considerations specific to the medical domain. This viewpoint article provides a comprehensive overview of key aspects for the successful implementation of LLMs in medicine, including transfer learning, domain-specific fine-tuning, domain adaptation, reinforcement learning with expert input, dynamic training, interdisciplinary collaboration, education and training, evaluation metrics, clinical validation, ethical considerations, data privacy, and regulatory frameworks. By adopting a multifaceted approach and fostering interdisciplinary collaboration, LLMs can be developed, validated, and integrated into medical practice responsibly, effectively, and ethically, addressing the needs of various medical disciplines and diverse patient populations. Ultimately, this approach will ensure that LLMs enhance patient care and improve overall health outcomes for all.
Karabacak M; Margetis K
10
39001657
Enhancing Care for Older Adults and Dementia Patients With Large Language Models: Proceedings of the National Institute on Aging-Artificial Intelligence & Technology Collaboratory for Aging Research Symposium.
2,024
The journals of gerontology. Series A, Biological sciences and medical sciences
Large Language Models (LLMs) stand on the brink of reshaping the field of aging and dementia care, challenging the one-size-fits-all paradigm with their capacity for precision medicine and individualized treatment strategies. The "Large Pre-Trained Models with a Focus on AD/ADRD and Healthy Aging" symposium, organized by the National Institute on Aging and the Johns Hopkins Artificial Intelligence & Technology Collaboratory for Aging Research, served as a platform for exploring this potential. The symposium brought together diverse experts to discuss the integration of LLMs in aging and dementia care. They highlighted the roles LLMs can play in clinical decision support and predictive analytics, while also addressing critical ethical concerns including bias, privacy, and the responsible use of artificial intelligence (AI). The discussions focused on the need to balance technological advancement with ethical considerations in AI deployment. In conclusion, the symposium projected a future where LLMs not only revolutionize healthcare practices but also pose significant challenges that require careful navigation.
Abadir PM; Battle A; Walston JD; Chellappa R
10
38103973
Evaluation of ChatGPT and Google Bard Using Prompt Engineering in Cancer Screening Algorithms.
2,024
Academic radiology
Large language models (LLMs) such as ChatGPT and Bard have emerged as powerful tools in medicine, showcasing strong results in tasks such as radiology report translations and research paper drafting. While their implementation in clinical practice holds promise, their response accuracy remains variable. This study aimed to evaluate the accuracy of ChatGPT and Bard in clinical decision-making based on the American College of Radiology Appropriateness Criteria for various cancers. Both LLMs were evaluated in terms of their responses to open-ended (OE) and select-all-that-apply (SATA) prompts. Furthermore, the study incorporated prompt engineering (PE) techniques to enhance the accuracy of LLM outputs. The results revealed similar performances between ChatGPT and Bard on OE prompts, with ChatGPT exhibiting marginally higher accuracy in SATA scenarios. The introduction of PE also marginally improved LLM outputs in OE prompts but did not enhance SATA responses. The results highlight the potential of LLMs in aiding clinical decision-making processes, especially when guided by optimally engineered prompts. Future studies in diverse clinical situations are imperative to better understand the impact of LLMs in radiology.
Nguyen D; Swanson D; Newbury A; Kim YH
10
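The open-ended (OE) and select-all-that-apply (SATA) formats, plus a prompt-engineering (PE) prefix, can be sketched as below; the scenario, answer options, and prefix wording are invented for illustration and are not ACR Appropriateness Criteria text:

```python
# Sketch of OE vs. SATA prompt formats with an optional PE prefix;
# all clinical content here is invented for illustration.
scenario = "Average-risk asymptomatic adult, age 50: colorectal cancer screening."

pe_prefix = ("You are a board-certified radiologist. Answer strictly "
             "according to published appropriateness guidelines.\n")

oe_prompt = (f"{scenario}\n"
             "Which screening studies are usually appropriate? List them.")

sata_prompt = (f"{scenario}\n"
               "Select ALL usually appropriate options:\n"
               "A) Colonoscopy\nB) CT colonography\nC) Whole-body PET/CT\n"
               "D) Stool-based FIT testing\n"
               "Answer with letters only.")

for prompt in (oe_prompt, sata_prompt, pe_prefix + sata_prompt):
    print(prompt)
    print("---")
```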
37799027
ChatGPT's performance in dentistry and allergy-immunology assessments: a comparative study.
2,023
Swiss dental journal
Large language models (LLMs) such as ChatGPT have potential applications in healthcare, including dentistry. Priming, the practice of providing LLMs with initial, relevant information, is an approach to improve their output quality. This study aimed to evaluate the performance of ChatGPT 3 and ChatGPT 4 on self-assessment questions for dentistry, through the Swiss Federal Licensing Examination in Dental Medicine (SFLEDM), and allergy and clinical immunology, through the European Examination in Allergy and Clinical Immunology (EEAACI). The second objective was to assess the impact of priming on ChatGPT's performance. The SFLEDM and EEAACI multiple-choice questions from the University of Bern's Institute for Medical Education platform were administered to both ChatGPT versions, with and without priming. Performance was analyzed based on correct responses. The statistical analysis included Wilcoxon rank sum tests (alpha=0.05). The average accuracy rates in the SFLEDM and EEAACI assessments were 63.3% and 79.3%, respectively. Both ChatGPT versions performed better on EEAACI than SFLEDM, with ChatGPT 4 outperforming ChatGPT 3 across all tests. ChatGPT 3's performance exhibited a significant improvement with priming for both EEAACI (p=0.017) and SFLEDM (p=0.024) assessments. For ChatGPT 4, the priming effect was significant only in the SFLEDM assessment (p=0.038). The performance disparity between SFLEDM and EEAACI assessments underscores ChatGPT's varying proficiency across different medical domains, likely tied to the nature and amount of training data available in each field. Priming can be a tool for enhancing output, especially in earlier LLMs. Advancements from ChatGPT 3 to 4 highlight the rapid developments in LLM technology. Yet, their use in critical fields such as healthcare must remain cautious owing to LLMs' inherent limitations and risks.
Fuchs A; Trachsel T; Weiger R; Eggmann F
21
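"Priming" as defined in the record above is simply prepending relevant context before the exam item. A minimal sketch with an invented background passage and question (not SFLEDM or EEAACI content); either message list can be sent to a chat-completion endpoint:

```python
# Sketch of an unprimed vs. primed exam prompt; question and the priming
# background are invented examples.
question = ("Which local anesthetic has the longest duration of action for "
            "dental infiltration: lidocaine, articaine, or bupivacaine?")

unprimed = [{"role": "user", "content": question}]

primed = [
    {"role": "system",
     "content": ("You are sitting a dental licensing exam. Background: "
                 "bupivacaine is highly lipophilic and protein-bound, which "
                 "prolongs its duration relative to lidocaine and articaine.")},
    {"role": "user", "content": question},
]
print(unprimed, primed, sep="\n")
```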
37855948
Examining the Potential of ChatGPT on Biomedical Information Retrieval: Fact-Checking Drug-Disease Associations.
2,024
Annals of biomedical engineering
Large language models (LLMs) such as ChatGPT have recently attracted significant attention due to their impressive performance on many real-world tasks. These models have also demonstrated potential for facilitating various biomedical tasks. However, little is known of their potential in biomedical information retrieval, especially identifying drug-disease associations. This study aims to explore the potential of ChatGPT, a popular LLM, in discerning drug-disease associations. We collected 2694 true drug-disease associations and 5662 false drug-disease pairs. Our approach involved creating various prompts to instruct ChatGPT in identifying these associations. Under varying prompt designs, ChatGPT identified drug-disease associations with an accuracy of 74.6-83.5% for the true pairs and 96.2-97.6% for the false pairs. This study shows that ChatGPT has potential for identifying drug-disease associations and may serve as a helpful tool when searching for pharmacy-related information. However, the accuracy of its insights warrants comprehensive examination before its implementation in medical practice.
Gao Z; Li L; Ma S; Wang Q; Hemphill L; Xu R
10
39722188
Application of large language models in disease diagnosis and treatment.
2,025
Chinese medical journal
Large language models (LLMs) such as ChatGPT, Claude, Llama, and Qwen are emerging as transformative technologies for the diagnosis and treatment of various diseases. With their exceptional long-context reasoning capabilities, LLMs are proficient in clinically relevant tasks, particularly in medical text analysis and interactive dialogue. They can enhance diagnostic accuracy by processing vast amounts of patient data and medical literature and have demonstrated their utility in diagnosing common diseases and facilitating the identification of rare diseases by recognizing subtle patterns in symptoms and test results. Building on their image-recognition abilities, multimodal LLMs (MLLMs) show promising potential for diagnosis based on radiography, chest computed tomography (CT), electrocardiography (ECG), and common pathological images. These models can also assist in treatment planning by suggesting evidence-based interventions and improving clinical decision support systems through integrated analysis of patient records. Despite these promising developments, significant challenges persist regarding the use of LLMs in medicine, including concerns regarding algorithmic bias, the potential for hallucinations, and the need for rigorous clinical validation. Ethical considerations also underscore the importance of maintaining the function of supervision in clinical practice. This paper highlights the rapid advancements in research on the diagnostic and therapeutic applications of LLMs across different medical disciplines and emphasizes the importance of policymaking, ethical supervision, and multidisciplinary collaboration in promoting more effective and safer clinical applications of LLMs. Future directions include the integration of proprietary clinical knowledge, the investigation of open-source and customized models, and the evaluation of real-time effects in clinical diagnosis and treatment practices.
Yang X; Li T; Su Q; Liu Y; Kang C; Lyu Y; Zhao L; Nie Y; Pan Y
10
38801706
Evidence-Based Learning Strategies in Medicine Using AI.
2,024
JMIR medical education
Large language models (LLMs), like ChatGPT, are transforming the landscape of medical education. They offer a vast range of applications, such as tutoring (personalized learning), patient simulation, generation of examination questions, and streamlined access to information. The rapid advancement of medical knowledge and the need for personalized learning underscore the relevance and timeliness of exploring innovative strategies for integrating artificial intelligence (AI) into medical education. In this paper, we propose coupling evidence-based learning strategies, such as active recall and memory cues, with AI to optimize learning. These strategies include the generation of tests, mnemonics, and visual cues.
Arango-Ibanez JP; Posso-Nunez JA; Diaz-Solorzano JP; Cruz-Suarez G
21
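A prompt of the kind the paper above proposes for coupling active recall with an LLM might look like the sketch below; the study notes and instruction wording are illustrative assumptions:

```python
# Sketch of a prompt template for generating active-recall questions and a
# mnemonic from study notes; all content is an invented example.
notes = ("Beta-blockers reduce post-MI mortality; contraindicated in acute "
         "decompensated heart failure and severe bradycardia.")

prompt = ("From the study notes below, write three short-answer questions "
          "that force active recall of the key facts, then one mnemonic "
          "linking them.\n\n"
          f"Notes: {notes}")
print(prompt)
```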
37501529
Application of ChatGPT in Routine Diagnostic Pathology: Promises, Pitfalls, and Potential Future Directions.
2,024
Advances in anatomic pathology
Large Language Models are forms of artificial intelligence that use deep learning algorithms to decipher large amounts of text and exhibit strong capabilities like question answering and translation. Recently, an influx of Large Language Models has emerged in the medical and academic discussion, given their potential widespread application to improve patient care and provider workflow. One application that has gained notable recognition in the literature is ChatGPT, which is a natural language processing "chatbot" technology developed by the artificial intelligence development software company OpenAI. It learns from large amounts of text data to generate automated responses to inquiries in seconds. In health care and academia, chatbot systems like ChatGPT have gained much recognition recently, given their potential to become functional, reliable virtual assistants. However, much research is required to determine the accuracy, validity, and ethical concerns of the integration of ChatGPT and other chatbots into everyday practice. One such field where little information and research on the matter currently exists is pathology. Herein, we present a literature review of pertinent articles regarding the current status and understanding of ChatGPT and its potential application in routine diagnostic pathology. In this review, we address the promises, possible pitfalls, and future potential of this application. We provide examples of actual conversations conducted with the chatbot technology that mimic hypothetical but practical diagnostic pathology scenarios that may be encountered in routine clinical practice. On the basis of this experience, we observe that ChatGPT and other chatbots already have a remarkable ability to distill and summarize, within seconds, vast amounts of publicly available data and information to assist in laying a foundation of knowledge on a specific topic. We emphasize that, at this time, any use of such knowledge at the patient care level in clinical medicine must be carefully vetted through established sources of medical information and expertise. We suggest and anticipate that with the ever-expanding knowledge base required to reliably practice personalized, precision anatomic pathology, improved technologies like future versions of ChatGPT (and other chatbots) enabled by expanded access to reliable, diverse data, might serve as a key ally to the diagnostician. Such technology has real potential to further empower the time-honored paradigm of histopathologic diagnoses based on the integrative cognitive assessment of clinical, gross, and microscopic findings and ancillary immunohistochemical and molecular studies at a time of exploding biomedical knowledge.
Schukow C; Smith SC; Landgrebe E; Parasuraman S; Folaranmi OO; Paner GP; Amin MB
10
36589923
Rapamycin in the context of Pascal's Wager: generative pre-trained transformer perspective.
2,022
Oncoscience
Large language models utilizing transformer neural networks and other deep learning architectures demonstrated unprecedented results in many tasks previously accessible only to human intelligence. In this article, we collaborate with ChatGPT, an AI model developed by OpenAI, to speculate on the applications of Rapamycin, in the context of Pascal's Wager philosophical argument commonly utilized to justify the belief in god. In response to the query "Write an exhaustive research perspective on why taking Rapamycin may be more beneficial than not taking Rapamycin from the perspective of Pascal's wager" ChatGPT provided the pros and cons for the use of Rapamycin considering the preclinical evidence of potential life extension in animals. This article demonstrates the potential of ChatGPT to produce complex philosophical arguments and should not be used for any off-label use of Rapamycin.
Zhavoronkov A
10
38093584
The use of large language models in medicine: proceeding with caution.
2,024
Current medical research and opinion
Large language models, like ChatGPT and Bard, have potential clinical applications due to their ability to generate conversational responses and encode medical knowledge. However, their clinical adoption faces challenges including hallucinations, lack of transparency, and lack of consistency. Ethicolegal concerns surrounding patient consent, legal liability, and data privacy further complicate matters. Despite their promise, an optimistic but cautious approach is essential for the safe integration of large language models into clinical settings.
Deng J; Zubair A; Park YJ; Affan E; Zuo QK
10
37286844
Harnessing the Potential of ChatGPT in Breast Reconstruction: A Revolution in Patient Communication and Education.
2,023
Aesthetic plastic surgery
Level of Evidence IV. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Lanzano G
0-1
39470819
Comment on Evaluation of Rhinoplasty Information from ChatGPT, Gemini, and Claude.
2,024
Aesthetic plastic surgery
Level of Evidence V. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Kleebayoon A; Wiwanitkit V
0-1
39330905
Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models.
2,024
Tropical medicine and infectious disease
Malaria and Typhoid fever are prevalent diseases in tropical regions, and both are exacerbated by unclear protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improve accessibility and reduce mortality rates. Traditional diagnosis methods cannot effectively capture the complexities of these diseases due to the presence of similar symptoms. Although machine learning (ML) models offer accurate predictions, they operate as "black boxes" with non-interpretable decision-making processes, making it challenging for healthcare providers to comprehend how the conclusions are reached. This study employs explainable AI (XAI) models such as Local Interpretable Model-agnostic Explanations (LIME), and Large Language Models (LLMs) like GPT to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics by describing which symptoms had the greatest impact on the model's decisions and providing clear, understandable explanations. The models were implemented on Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest model outperformed the other tested models; in addition, important features were identified with the LIME plots, while ChatGPT 3.5 had a comparative advantage over other LLMs. The study integrates RF, LIME, and GPT into a mobile app to enhance interpretability and transparency in malaria and typhoid diagnosis. Despite its promising results, the system's performance is constrained by the quality of the dataset. Additionally, while LIME and GPT improve transparency, they may introduce complexities in real-time deployment due to computational demands and the need for internet service to maintain relevance and accuracy. The findings suggest that AI-driven diagnostic systems can significantly enhance healthcare delivery in environments with limited resources, and future works can explore the applicability of this framework to other medical conditions and datasets.
Attai K; Ekpenyong M; Amannah C; Asuquo D; Ajuga P; Obot O; Johnson E; John A; Maduka O; Akwaowo C; Uzoka FM
10
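The Random-Forest-plus-LIME pipeline described in the record above can be sketched in a few lines; the symptom features, synthetic labels, and class names are invented stand-ins for the study's dataset:

```python
# Sketch: train a Random Forest on binary symptom flags, then explain one
# prediction with LIME. pip install scikit-learn lime numpy
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["fever", "headache", "abdominal_pain", "chills"]  # hypothetical
X = rng.integers(0, 2, size=(200, 4)).astype(float)
# Hypothetical label rule: fever plus chills -> malaria-like (1), else 0.
y = ((X[:, 0] + X[:, 3]) >= 2).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["typhoid-like", "malaria-like"],
    discretize_continuous=False, mode="classification")

exp = explainer.explain_instance(X[0], rf.predict_proba, num_features=4)
print(exp.as_list())  # per-symptom weights behind this single prediction
```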
37332004
Success Through Simplicity: What Other Artificial Intelligence Applications in Medicine Should Learn from History and ChatGPT.
2,023
Annals of biomedical engineering
Many artificial intelligence (AI) algorithms have been developed for medical practice, but few have led to clinically used products. The recent hype of ChatGPT shows us that simple, user-friendly interfaces are one major factor in the applications' popularity. The majority of AI-based applications in clinical practice are still far from simple-to-use applications with user-friendly interfaces. Therefore, simplifying operations is one key to AI-based medical applications' success.
Sedaghat S
10