Dataset schema (10 columns):

  pmid      string       length 8
  title     string       length 3-289
  year      int64        2.02k-2.03k
  journal   string       length 3-221
  doi       string       1 distinct value
  mesh      string       1 distinct value
  keywords  string       1 distinct value
  abstract  string       length 115-3.67k
  authors   string       length 3-798
  cluster   class label  5 classes

Each row below lists, in order: pmid, title, year, journal, abstract, authors, cluster (doi, mesh, and keywords hold a single repeated value each and are not shown).
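A minimal loading sketch for a dataset with this schema, using the Hugging Face `datasets` library; the repository identifier is a placeholder, since the dataset's actual name is not given here.

```python
# Hypothetical loading sketch: "user/pubmed-chatgpt-abstracts" is a
# placeholder identifier, not the dataset's real repository name.
from datasets import load_dataset

ds = load_dataset("user/pubmed-chatgpt-abstracts", split="train")

row = ds[0]                       # fields follow the schema above
print(row["pmid"], row["year"], row["title"])
print(row["cluster"])             # one of the 5 cluster labels
```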
37962176
ChatGPT in urology practice: revolutionizing efficiency and patient care with generative artificial intelligence.
2024
Current opinion in urology
PURPOSE OF REVIEW: ChatGPT has emerged as a potentially useful tool for healthcare. Its role in urology is in its infancy and has much potential for research, clinical practice and for patient assistance. With this narrative review, we want to draw a picture of what is known about ChatGPT's integration in urology, alongside future promises and challenges. RECENT FINDINGS: The use of ChatGPT can ease the administrative work, helping urologists with note-taking and clinical documentation such as discharge summaries and clinical notes. It can improve patient engagement through increasing awareness and facilitating communication, as it has especially been investigated for uro-oncological diseases. Its ability to understand human emotions makes ChatGPT an empathic and thoughtful interactive tool or source for urological patients and their relatives. Currently, its role in clinical diagnosis and treatment decisions is uncertain, as concerns have been raised about misinterpretation, hallucination and out-of-date information. Moreover, a mandatory regulatory process for ChatGPT in urology is yet to be established. SUMMARY: ChatGPT has the potential to contribute to precision medicine and tailored practice by its quick, structured responses. However, this will depend on how well information can be obtained by seeking appropriate responses and asking the pertinent questions. The key lies in being able to validate the responses, regulating the information shared and avoiding misuse of the same to protect the data and patient privacy. Its successful integration into mainstream urology needs educational bodies to provide guidelines or best practice recommendations for the same.
Nedbal C; Naik N; Castellani D; Gauhar V; Geraghty R; Somani BK
0-1
38976174
Application of Artificial Intelligence in the Headache Field.
2024
Current pain and headache reports
PURPOSE OF REVIEW: Headache disorders are highly prevalent worldwide. Rapidly advancing capabilities in artificial intelligence (AI) have expanded headache-related research with the potential to solve unmet needs in the headache field. We provide an overview of AI in headache research in this article. RECENT FINDINGS: We briefly introduce machine learning models and commonly used evaluation metrics. We then review studies that have utilized AI in the field to advance diagnostic accuracy and classification, predict treatment responses, gather insights from various data sources, and forecast migraine attacks. Furthermore, given the emergence of ChatGPT, a type of large language model (LLM), and the popularity it has gained, we also discuss how LLMs could be used to advance the field. Finally, we discuss the potential pitfalls, bias, and future directions of employing AI in headache medicine. Many recent studies on headache medicine incorporated machine learning, generative AI and LLMs. A comprehensive understanding of potential pitfalls and biases is crucial to using these novel techniques with minimum harm. When used appropriately, AI has the potential to revolutionize headache medicine.
Ihara K; Dumkrieger G; Zhang P; Takizawa T; Schwedt TJ; Chiang CC
10
37729050
Large language models and the future of rheumatology: assessing impact and emerging opportunities.
2024
Current opinion in rheumatology
PURPOSE OF REVIEW: Large language models (LLMs) have grown rapidly in size and capabilities as more training data and compute power have become available. Since the release of ChatGPT in late 2022, there has been growing interest and exploration around potential applications of LLM technology. Numerous examples and pilot studies demonstrating the capabilities of these tools have emerged across several domains. For rheumatology professionals and patients, LLMs have the potential to transform current practices in medicine. RECENT FINDINGS: Recent studies have begun exploring capabilities of LLMs that can assist rheumatologists in clinical practice, research, and medical education, though applications are still emerging. In clinical settings, LLMs have shown promise in assisting healthcare professionals, enabling more personalized medicine or generating routine documentation like notes and letters. Challenges remain around integrating LLMs into clinical workflows, the accuracy of LLMs, and ensuring patient data confidentiality. In research, early experiments demonstrate LLMs can offer analysis of datasets, with quality control as a critical piece. Lastly, LLMs could supplement medical education by providing personalized learning experiences and integration into established curricula. SUMMARY: As these powerful tools continue evolving at a rapid pace, rheumatology professionals should stay informed on how they may impact the field.
Mannstadt I; Mehta B
10
38060133
Machine Learning and Artificial Intelligence Applications to Epilepsy: a Review for the Practicing Epileptologist.
2023
Current neurology and neuroscience reports
PURPOSE OF REVIEW: Machine Learning (ML) and Artificial Intelligence (AI) are data-driven techniques to translate raw data into applicable and interpretable insights that can assist in clinical decision making. Some of these tools have shown extremely promising initial results, generating both great excitement and hype. This non-technical article reviews recent developments in ML/AI in epilepsy to assist the current practicing epileptologist in understanding both the benefits and limitations of integrating ML/AI tools into their clinical practice. RECENT FINDINGS: ML/AI tools have been developed to assist clinicians in almost every clinical decision including (1) predicting future epilepsy in people at risk, (2) detecting and monitoring for seizures, (3) differentiating epilepsy from mimics, (4) using data to improve neuroanatomic localization and lateralization, and (5) tracking and predicting response to medical and surgical treatments. We also discuss practical, ethical, and equity considerations in the development and application of ML/AI tools including chatbots based on Large Language Models (e.g., ChatGPT). ML/AI tools will change how clinical medicine is practiced, but, with rare exceptions, the transferability to other centers, effectiveness, and safety of these approaches have not yet been established rigorously. In the future, ML/AI will not replace epileptologists, but epileptologists with ML/AI will replace epileptologists without ML/AI.
Kerr WT; McFarlane KN
10
38277274
Applications of artificial intelligence-enabled robots and chatbots in ophthalmology: recent advances and future trends.
2024
Current opinion in ophthalmology
PURPOSE OF REVIEW: Recent advances in artificial intelligence (AI), robotics, and chatbots have brought these technologies to the forefront of medicine, particularly ophthalmology. These technologies have been applied in diagnosis, prognosis, surgical operations, and patient-specific care in ophthalmology. It is thus both timely and pertinent to assess the existing landscape, recent advances, and trajectory of trends of AI, AI-enabled robots, and chatbots in ophthalmology. RECENT FINDINGS: Some recent developments have integrated AI enabled robotics with diagnosis, and surgical procedures in ophthalmology. More recently, large language models (LLMs) like ChatGPT have shown promise in augmenting research capabilities and diagnosing ophthalmic diseases. These developments may portend a new era of doctor-patient-machine collaboration. SUMMARY: Ophthalmology is undergoing a revolutionary change in research, clinical practice, and surgical interventions. Ophthalmic AI-enabled robotics and chatbot technologies based on LLMs are converging to create a new era of digital ophthalmology. Collectively, these developments portend a future in which conventional ophthalmic knowledge will be seamlessly integrated with AI to improve the patient experience and enhance therapeutic outcomes.
Madadi Y; Delsoz M; Khouri AS; Boland M; Grzybowski A; Yousefi S
0-1
38032442
An Introduction to Generative Artificial Intelligence in Mental Health Care: Considerations and Guidance.
2023
Current psychiatry reports
PURPOSE OF REVIEW: This paper provides an overview of generative artificial intelligence (AI) and the possible implications in the delivery of mental health care. RECENT FINDINGS: Generative AI is a powerful technology that is changing rapidly. As psychiatrists, it is important for us to understand generative AI technology and how it may impact our patients and our practice of medicine. This paper aims to build this understanding by focusing on GPT-4 and its potential impact on mental health care delivery. We first introduce key concepts and terminology describing how the technology works and various novel uses of it. We then dive into key considerations for GPT-4 and other large language models (LLMs) and wrap up with suggested future directions and initial guidance to the field.
King DR; Nanda G; Stoddard J; Dempsey A; Hergert S; Shore JH; Torous J
0-1
39450858
A comparison of drug information question responses by a drug information center and by ChatGPT.
2025
American journal of health-system pharmacy : AJHP : official journal of the American Society of Health-System Pharmacists
PURPOSE: A study was conducted to assess the accuracy and ability of Chat Generative Pre-trained Transformer (ChatGPT) to systematically respond to drug information inquiries relative to responses of a drug information center (DIC). METHODS: Ten drug information questions answered by the DIC in 2022 or 2023 were selected for analysis. Three pharmacists created new ChatGPT accounts and submitted each question to ChatGPT at the same time. Each question was submitted twice to identify consistency in responses. Two days later, the same process was conducted by a fourth pharmacist. Phase 1 of data analysis consisted of a drug information pharmacist assessing all 84 ChatGPT responses for accuracy relative to the DIC responses. In phase 2, 10 ChatGPT responses were selected to be assessed by 3 blinded reviewers. Reviewers utilized an 8-question predetermined rubric to evaluate the ChatGPT and DIC responses. RESULTS: When comparing the ChatGPT responses (n = 84) to the DIC responses, ChatGPT had an overall accuracy rate of 50%. Accuracy across the different question types varied. With regard to the overall blinded score, ChatGPT responses scored higher than the responses by the DIC according to the rubric (overall scores of 67.5% and 55.0%, respectively). The DIC responses scored higher in the categories of references mentioned and references identified. CONCLUSION: Responses generated by ChatGPT were found to be better than those created by the DIC in clarity and readability; however, the accuracy of ChatGPT responses was lacking. ChatGPT responses to drug information questions would need to be carefully reviewed for accuracy and completeness.
Triplett S; Ness-Engle GL; Behnen EM
10
38648540
How do we teach generative artificial intelligence to medical educators? Pilot of a faculty development workshop using ChatGPT.
2025
Medical teacher
PURPOSE: Artificial intelligence (AI) is already impacting the practice of medicine and it is therefore important for future healthcare professionals and medical educators to gain experience with the benefits, limitations, and applications of this technology. The purpose of this project was to develop, implement, and evaluate a faculty development workshop on generative AI using ChatGPT, to familiarise participants with AI. MATERIALS AND METHODS: A brief workshop introducing faculty to generative AI and its applications in medical education was developed for preclinical clinical skills preceptors at our institution. During the workshop, faculty were given prompts to enter into ChatGPT that were relevant to their teaching activities, including generating differential diagnoses and providing feedback on student notes. Participant feedback was collected using an anonymous survey. RESULTS: 27/36 participants completed the survey. Prior to the workshop, 15% of participants indicated having used ChatGPT, and approximately half were familiar with AI applications in medical education. Interest in using the tool increased from 43% to 65% following the workshop, yet participants expressed concerns regarding accuracy and privacy with use of ChatGPT. CONCLUSION: This brief workshop serves as a model for faculty development in AI applications in medical education. The workshop increased interest in using ChatGPT for educational purposes, and was well received.
Chadha N; Popil E; Gregory J; Armstrong-Davies L; Justin G
10
40125532
Integrating ChatGPT as a Tool in Pharmacy Practice: A Cross-Sectional Exploration Among Pharmacists in Saudi Arabia.
2025
Integrated pharmacy research & practice
PURPOSE: Artificial Intelligence (AI), especially ChatGPT, is rapidly assimilating into healthcare, providing significant advantages in pharmacy practice, such as improved clinical decision-making, patient counselling, and drug information management. The adoption of AI tools is heavily contingent upon pharmacy practitioners' knowledge, attitudes, and practices (KAP). This study sought to evaluate the knowledge and practices of pharmacists in Saudi Arabia concerning the utilization of ChatGPT in their daily activities. PATIENTS AND METHODS: A cross-sectional study was performed from May 2023 to July 2024 including pharmacists in Riyadh, Saudi Arabia. An online pre-validated KAP questionnaire was disseminated, collecting data on demographics, knowledge, attitudes, and practices about ChatGPT. Descriptive statistics and regression analyses were conducted using SPSS. RESULTS: Of 1022 respondents, 78.7% were familiar with AI in pharmacy, while 90.1% correctly identified ChatGPT as an advanced AI chatbot. Positive attitudes towards ChatGPT were reported by 64.1% of pharmacists, although only 24.3% used AI tools regularly. Significant predictors of positive attitudes and practices included academic/research roles (beta=0.7, p=0.005) and 6-10 years of experience (beta=0.9, p=0.05). Ethical concerns were raised by 64% of respondents, and 92% reported a lack of formal training. CONCLUSION: While the majority of pharmacists held positive attitudes toward ChatGPT, practical implementation remains limited due to ethical concerns and inadequate training. Addressing these barriers is essential for successful AI integration in pharmacy, supporting Saudi Arabia's Vision 2030 initiative.
Alghitran A; AlOsaimi HM; Albuluwi A; Almalki EO; Aldowayan AZ; Alharthi R; Qattan JM; Alghamdi F; AlHalabi M; Almalki NA; Alharthi A; Alshammari A; Kanan M
10
39494261
Evaluating the Accuracy of Large Language Model (ChatGPT) in Providing Information on Metastatic Breast Cancer.
2024
Advanced pharmaceutical bulletin
PURPOSE: Artificial intelligence (AI), particularly large language models like ChatGPT developed by OpenAI, has demonstrated potential in various domains, including medicine. While ChatGPT has shown the capability to pass rigorous exams like the United States Medical Licensing Examination (USMLE) Step 1, its proficiency in addressing breast cancer-related inquiries-a complex and prevalent disease-remains underexplored. This study aims to assess the accuracy and comprehensiveness of ChatGPT's responses to common breast cancer questions, addressing a critical gap in the literature and evaluating its potential in enhancing patient education and support in breast cancer management. METHODS: A curated list of 100 frequently asked breast cancer questions was compiled from Cancer.net, the National Breast Cancer Foundation, and clinical practice. These questions were input into ChatGPT, and the responses were evaluated for accuracy by two primary experts using a four-point scale. Discrepancies in scoring were resolved through additional expert review. RESULTS: Of the 100 responses, 5 were entirely inaccurate, 22 partially accurate, 42 accurate but lacking comprehensiveness, and 31 highly accurate. The majority of the responses were found to be at least partially accurate, demonstrating ChatGPT's potential in providing reliable information on breast cancer. CONCLUSION: ChatGPT shows promise as a supplementary tool for patient education on breast cancer. While generally accurate, the presence of inaccuracies underscores the need for professional oversight. The study advocates for integrating AI tools like ChatGPT in healthcare settings to support patient-provider interactions and health education, emphasizing the importance of regular updates to reflect the latest research and clinical guidelines.
Gummadi R; Dasari N; Kumar DS; Pindiprolu SKSS
0-1
37578849
Comparative Performance of ChatGPT and Bard in a Text-Based Radiology Knowledge Assessment.
2024
Canadian Association of Radiologists journal = Journal l'Association canadienne des radiologistes
PURPOSE: Bard by Google, a direct competitor to ChatGPT, was recently released. Understanding the relative performance of these different chatbots can provide important insight into their strengths and weaknesses as well as which roles they are most suited to fill. In this project, we aimed to compare the most recent version of ChatGPT, ChatGPT-4, and Bard by Google, in their ability to accurately respond to radiology board examination practice questions. METHODS: Text-based questions were collected from the 2017-2021 American College of Radiology's Diagnostic Radiology In-Training (DXIT) examinations. ChatGPT-4 and Bard were queried, and their comparative accuracies, response lengths, and response times were documented. Subspecialty-specific performance was analyzed as well. RESULTS: 318 questions were included in our analysis. ChatGPT answered significantly more accurately than Bard (87.11% vs 70.44%, P < .0001). ChatGPT's response length was significantly shorter than Bard's (935.28 +/- 440.88 characters vs 1437.52 +/- 415.91 characters, P < .0001). ChatGPT's response time was significantly longer than Bard's (26.79 +/- 3.27 seconds vs 7.55 +/- 1.88 seconds, P < .0001). ChatGPT outperformed Bard in neuroradiology (100.00% vs 86.21%, P = .03), general & physics (85.39% vs 68.54%, P < .001), nuclear medicine (80.00% vs 56.67%, P < .01), pediatric radiology (93.75% vs 68.75%, P = .03), and ultrasound (100.00% vs 63.64%, P < .001). In the remaining subspecialties, there were no significant differences between ChatGPT and Bard's performance. CONCLUSION: ChatGPT displayed superior radiology knowledge compared to Bard. While both chatbots display reasonable radiology knowledge, they should be used with conscious knowledge of their limitations and fallibility. Both chatbots provided incorrect or illogical answer explanations and did not always address the educational content of the question.
Patil NS; Huang RS; van der Pol CB; Larocque N
0-1
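The record above reports accuracy percentages for paired data (both models answered the same 318 questions); the exact statistical test is not named in the abstract. As a rough illustration under that caveat, a two-proportion z-test on the reported accuracies can be sketched as below; McNemar's test on the per-question pairs would be the more rigorous choice, but the discordant counts are not reported.

```python
# Rough sketch: two-proportion z-test on the reported accuracies
# (87.11% vs 70.44%, n = 318 each). The study's actual test is not
# named in the abstract; counts are reconstructed by rounding.
from statsmodels.stats.proportion import proportions_ztest

n = 318
correct = [round(0.8711 * n), round(0.7044 * n)]  # ChatGPT-4: 277, Bard: 224
stat, p = proportions_ztest(correct, [n, n])
print(f"z = {stat:.2f}, p = {p:.2e}")
```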
39348666
Informatics and Artificial Intelligence-Guided Assessment of the Regulatory and Translational Research Landscape of First-in-Class Oncology Drugs in the United States, 2018-2022.
2024
JCO clinical cancer informatics
PURPOSE: Cancer drug development remains a critical but challenging process that affects millions of patients and their families. Using biomedical informatics and artificial intelligence (AI) approaches, we assessed the regulatory and translational research landscape defining successful first-in-class drugs for patients with cancer. METHODS: This is a retrospective observational study of all novel first-in-class drugs approved by the US Food and Drug Administration (FDA) from 2018 to 2022, stratified by cancer versus noncancer drugs. A biomedical informatics pipeline leveraging interoperability standards and ChatGPT performed integration and analysis of public databases provided by the FDA, National Institutes of Health, and WHO. RESULTS: Between 2018 and 2022, the FDA approved a total of 247 novel drugs, of which 107 (43.3%) were first-in-class drugs involving a new biologic target. Of these first-in-class drugs, 30 (28%) treatments were indicated for patients with cancer, including 19 (63.3%) for solid tumors and the remaining 11 (36.7%) for hematologic cancers. A median of 68 publications of basic, clinical, and other relevant translational science preceded successful FDA approval of first-in-class cancer drugs, with oncology-related treatments involving fewer median years of target-based research than therapies not related to cancer (33 v 43 years; P < .05). Overall, 94.4% of first-in-class drugs had at least 25 years of target-related research papers, while 85.5% of first-in-class drugs had at least 10 years of translational research publications. CONCLUSION: Novel first-in-class cancer treatments are defined by diverse clinical indications, personalized molecular targets, dependence on expedited regulatory pathways, and translational research metrics reflecting this complex landscape. Biomedical informatics and AI provide scalable, data-driven ways to assess and even address important challenges in the drug development pipeline.
Ronquillo JG; South B; Naik P; Singh R; De Jesus M; Watt SJ; Habtezion A
0-1
38650670
The Application of ChatGPT in Medicine: A Scoping Review and Bibliometric Analysis.
2024
Journal of multidisciplinary healthcare
PURPOSE: ChatGPT has a wide range of applications in the medical field. Therefore, this review aims to define the key issues and provide a comprehensive view of the literature based on the application of ChatGPT in medicine. METHODS: This scoping review follows Arksey and O'Malley's five-stage framework. A comprehensive literature search of publications (30 November 2022 to 16 August 2023) was conducted. Six databases were searched and relevant references were systematically catalogued. Attention was focused on the general characteristics of the articles, their fields of application, and the advantages and disadvantages of using ChatGPT. Descriptive statistics and narrative synthesis methods were used for data analysis. RESULTS: Of the 3426 studies, 247 met the criteria for inclusion in this review. The majority of articles (31.17%) were from the United States. Editorials (43.32%) ranked first, followed by experimental studies (11.74%). The potential applications of ChatGPT in medicine are varied, with the largest number of studies (45.75%) exploring clinical practice, including assisting with clinical decision support and providing disease information and medical advice. This was followed by medical education (27.13%) and scientific research (16.19%). Particularly noteworthy in the discipline statistics were radiology, surgery and dentistry at the top of the list. However, ChatGPT in medicine also faces issues of data privacy, inaccuracy and plagiarism. CONCLUSION: The application of ChatGPT in medicine focuses on different disciplines and general application scenarios. ChatGPT has a paradoxical nature: it offers significant advantages, but at the same time raises great concerns about its application in healthcare settings. Therefore, it is imperative to develop theoretical frameworks that not only address its widespread use in healthcare but also facilitate a comprehensive assessment. In addition, these frameworks should contribute to the development of strict and effective guidelines and regulatory measures.
Wu J; Ma Y; Wang J; Xiao M
0-1
38866652
Evaluating ChatGPT to test its robustness as an interactive information database of radiation oncology and to assess its responses to common queries from radiotherapy patients: A single institution investigation.
2024
Cancer radiotherapie : journal de la Societe francaise de radiotherapie oncologique
PURPOSE: Commercial vendors have created artificial intelligence (AI) tools for use in all aspects of life and medicine, including radiation oncology. AI innovations will likely disrupt workflows in the field of radiation oncology. However, limited data exist on using AI-based chatbots about the quality of radiation oncology information. This study aims to assess the accuracy of ChatGPT, an AI-based chatbot, in answering patients' questions during their first visit to the radiation oncology outpatient department and to test ChatGPT's knowledge of radiation oncology. MATERIAL AND METHODS: Expert opinion was formulated using a set of ten standard questions of patients encountered in outpatient department practice. A blinded expert opinion was taken for the ten questions on common queries of patients in outpatient department visits, and the same questions were evaluated on ChatGPT version 3.5 (ChatGPT 3.5). The answers by the expert and ChatGPT were independently evaluated for accuracy by three scientific reviewers. Additionally, the extent of similarity between the answers of ChatGPT and the expert was assessed by scoring each response. Word counts and Flesch-Kincaid readability scores and grade levels were calculated for the responses obtained from the expert and ChatGPT. The answers of ChatGPT and the expert were compared on a Likert scale. As a second component of the study, we tested the technical knowledge of ChatGPT. Ten multiple choice questions were framed with increasing order of difficulty - basic, intermediate and advanced, and the responses were evaluated on ChatGPT. Statistical testing was done using SPSS version 27. RESULTS: After expert review, the accuracy of expert opinion was 100%, and ChatGPT's was 80% (8/10) for regular questions encountered in outpatient department visits. A noticeable difference was observed in the word count and readability of answers from the expert and ChatGPT. Of the ten multiple-choice questions assessing ChatGPT's radiation oncology knowledge base, ChatGPT had an accuracy rate of 90% (9 out of 10). One answer to a basic-level question was incorrect, whereas all answers to intermediate and difficult-level questions were correct. CONCLUSION: ChatGPT provides reasonably accurate information about routine questions encountered in the first outpatient department visit of the patient and also demonstrated a sound knowledge of the subject. The results of our study can inform the future development of educational tools in radiation oncology and may have implications in other medical fields. This is the first study that provides essential insight into the potentially positive capabilities of two components of ChatGPT: firstly, ChatGPT's responses to common queries of patients at OPD visits, and secondly, the assessment of ChatGPT's radiation oncology knowledge base.
Pandey VK; Munshi A; Mohanti BK; Bansal K; Rastogi K
43
38393353
An introduction to machine learning and generative artificial intelligence for otolaryngologists-head and neck surgeons: a narrative review.
2024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
PURPOSE: Despite the robust expansion of research surrounding artificial intelligence (AI) and machine learning (ML) and their applications to medicine, these methodologies often remain opaque and inaccessible to many otolaryngologists. In particular, with the increasing ubiquity of large language models (LLMs) such as ChatGPT and their potential implementation in clinical practice, clinicians may benefit from a baseline understanding of some aspects of AI. In this narrative review, we seek to clarify underlying concepts, illustrate applications to otolaryngology, and highlight future directions and limitations of these tools. METHODS: Recent literature regarding AI principles and otolaryngologic applications of ML and LLMs was reviewed via search in PubMed and Google Scholar. RESULTS: Significant recent strides have been made in otolaryngology research utilizing AI and ML, across all subspecialties, including neurotology, head and neck oncology, laryngology, rhinology, and sleep surgery. Potential applications suggested by recent publications include screening and diagnosis, predictive tools, clinical decision support, and clinical workflow improvement via LLMs. Ongoing concerns regarding AI in medicine include ethical concerns around bias and data sharing, as well as the "black box" problem and limitations in explainability. CONCLUSIONS: Potential implementations of AI in otolaryngology are rapidly expanding. While implementation in clinical practice remains theoretical for most of these tools, their potential power to influence the practice of otolaryngology is substantial.
Alter IL; Chan K; Lechien J; Rameau A
32
37334036
Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of Its Successes and Shortcomings.
2023
Ophthalmology science
PURPOSE: Foundation models are a novel type of artificial intelligence algorithms, in which models are pretrained at scale on unannotated data and fine-tuned for a myriad of downstream tasks, such as generating text. This study assessed the accuracy of ChatGPT, a large language model (LLM), in the ophthalmology question-answering space. DESIGN: Evaluation of diagnostic test or technology. PARTICIPANTS: ChatGPT is a publicly available LLM. METHODS: We tested 2 versions of ChatGPT (January 9 "legacy" and ChatGPT Plus) on 2 popular multiple choice question banks commonly used to prepare for the high-stakes Ophthalmic Knowledge Assessment Program (OKAP) examination. We generated two 260-question simulated exams from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions online question bank. We carried out logistic regression to determine the effect of the examination section, cognitive level, and difficulty index on answer accuracy. We also performed a post hoc analysis using Tukey's test to decide if there were meaningful differences between the tested subspecialties. MAIN OUTCOME MEASURES: We reported the accuracy of ChatGPT for each examination section in percentage correct by comparing ChatGPT's outputs with the answer key provided by the question banks. We presented logistic regression results with a likelihood ratio (LR) chi-square. We considered differences between examination sections statistically significant at a P value of < 0.05. RESULTS: The legacy model achieved 55.8% accuracy on the BCSC set and 42.7% on the OphthoQuestions set. With ChatGPT Plus, accuracy increased to 59.4% +/- 0.6% and 49.2% +/- 1.0%, respectively. Accuracy improved with easier questions when controlling for the examination section and cognitive level. Logistic regression analysis of the legacy model showed that the examination section (LR, 27.57; P = 0.006) followed by question difficulty (LR, 24.05; P < 0.001) were most predictive of ChatGPT's answer accuracy. Although the legacy model performed best in general medicine and worst in neuro-ophthalmology (P < 0.001) and ocular pathology (P = 0.029), similar post hoc findings were not seen with ChatGPT Plus, suggesting more consistent results across examination sections. CONCLUSION: ChatGPT has encouraging performance on a simulated OKAP examination. Specializing LLMs through domain-specific pretraining may be necessary to improve their performance in ophthalmic subspecialties. FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found after the references.
Antaki F; Touma S; Milad D; El-Khoury J; Duval R
21
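The OKAP study above models per-question correctness with logistic regression on examination section, cognitive level, and difficulty index. A minimal sketch of that setup; the dataframe is entirely hypothetical, with illustrative column names and values rather than the study's data.

```python
# Sketch of the logistic-regression setup described above. The data are
# hypothetical; "difficulty" stands in for a difficulty index
# (proportion of examinees answering correctly, so higher = easier).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "correct":    [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "cognitive":  [1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 1, 2],  # recall vs. higher order
    "difficulty": [0.8, 0.4, 0.7, 0.6, 0.3, 0.9, 0.5, 0.4, 0.6, 0.5, 0.7, 0.3],
})
model = smf.logit("correct ~ C(cognitive) + difficulty", data=df).fit()
print(model.summary())
```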
37792149
Performance evaluation of ChatGPT, GPT-4, and Bard on the official board examination of the Japan Radiology Society.
2024
Japanese journal of radiology
PURPOSE: Herein, we assessed the accuracy of large language models (LLMs) in generating responses to questions in clinical radiology practice. We compared the performance of ChatGPT, GPT-4, and Google Bard using questions from the Japan Radiology Board Examination (JRBE). MATERIALS AND METHODS: In total, 103 questions from the JRBE 2022 were used with permission from the Japan Radiological Society. These questions were categorized by pattern, required level of thinking, and topic. McNemar's test was used to compare the proportion of correct responses between the LLMs. Fisher's exact test was used to assess the performance of GPT-4 for each topic category. RESULTS: ChatGPT, GPT-4, and Google Bard correctly answered 40.8% (42 of 103), 65.0% (67 of 103), and 38.8% (40 of 103) of the questions, respectively. GPT-4 significantly outperformed ChatGPT by 24.2% (p < 0.001) and Google Bard by 26.2% (p < 0.001). In the categorical analysis by level of thinking, GPT-4 correctly answered 79.7% of the lower-order questions, which was significantly higher than ChatGPT or Google Bard (p < 0.001). The categorical analysis by question pattern revealed GPT-4's superiority over ChatGPT (67.4% vs. 46.5%, p = 0.004) and Google Bard (39.5%, p < 0.001) in the single-answer questions. The categorical analysis by topic revealed that GPT-4 outperformed ChatGPT (40%, p = 0.013) and Google Bard (26.7%, p = 0.004). No significant differences were observed between the LLMs in the categories not mentioned above. The performance of GPT-4 was significantly better in nuclear medicine (93.3%) than in diagnostic radiology (55.8%; p < 0.001). GPT-4 also performed better on lower-order questions than on higher-order questions (79.7% vs. 45.5%, p < 0.001). CONCLUSION: ChatGPT Plus, based on GPT-4, scored 65% when answering Japanese questions from the JRBE, outperforming ChatGPT and Google Bard. This highlights the potential of using LLMs to address advanced clinical questions in the field of radiology in Japan.
Toyama Y; Harigai A; Abe M; Nagano M; Kawabata M; Seki Y; Takase K
21
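The JRBE study above uses McNemar's test to compare paired accuracies on the same 103 questions. A sketch with statsmodels follows; the 2x2 cell counts are hypothetical, chosen only to be consistent with the reported totals (67/103 for GPT-4, 42/103 for ChatGPT), since the abstract does not give the discordant counts.

```python
# McNemar's test on hypothetical paired counts consistent with
# 67/103 (GPT-4) and 42/103 (ChatGPT) correct.
from statsmodels.stats.contingency_tables import mcnemar

#                   GPT-4 correct   GPT-4 wrong
# ChatGPT correct        35              7
# ChatGPT wrong          32             29
table = [[35, 7],
         [32, 29]]
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p = {result.pvalue:.4g}")
```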
38801461
[ChatGPT and the German board examination for ophthalmology: an evaluation].
2024
Die Ophthalmologie
PURPOSE: In recent years artificial intelligence (AI), as a new segment of computer science, has also become increasingly more important in medicine. The aim of this project was to investigate whether the current version of ChatGPT (ChatGPT 4.0) is able to answer open questions that could be asked in the context of a German board examination in ophthalmology. METHODS: After excluding image-based questions, 10 questions from 15 different chapters/topics were selected from the textbook 1000 questions in ophthalmology (1000 Fragen Augenheilkunde 2nd edition, 2014). ChatGPT was instructed by means of a so-called prompt to assume the role of a board certified ophthalmologist and to concentrate on the essentials when answering. A human expert with considerable expertise in the respective topic evaluated the answers regarding their correctness, relevance and internal coherence. Additionally, the overall performance was rated by school grades and it was assessed whether the answers would have been sufficient to pass the ophthalmology board examination. RESULTS: ChatGPT would have passed the board examination in 12 out of 15 topics. The overall performance, however, was limited with only 53.3% completely correct answers. While the correctness of the results in the different topics was highly variable (uveitis and lens/cataract 100%; optics and refraction 20%), the answers generally had a high thematic fit (70%) and internal coherence (71%). CONCLUSION: The fact that ChatGPT 4.0 would have passed the specialist examination in 12 out of 15 topics is remarkable considering the fact that this AI was not specifically trained for medical questions; however, there is considerable performance variability between the topics, with some serious shortcomings that currently rule out its safe use in clinical practice.
Yaici R; Cieplucha M; Bock R; Moayed F; Bechrakis NE; Berens P; Feltgen N; Friedburg D; Graf M; Guthoff R; Hoffmann EM; Hoerauf H; Hintschich C; Kohnen T; Messmer EM; Nentwich MM; Pleyer U; Schaudig U; Seitz B; Geerling G; Roth M
21
37577545
From Answers to Insights: Unveiling the Strengths and Limitations of ChatGPT and Biomedical Knowledge Graphs.
2023
Research square
PURPOSE: Large Language Models (LLMs) have shown exceptional performance in various natural language processing tasks, benefiting from their language generation capabilities and ability to acquire knowledge from unstructured text. However, in the biomedical domain, LLMs face limitations that lead to inaccurate and inconsistent answers. Knowledge Graphs (KGs) have emerged as valuable resources for organizing structured information. Biomedical Knowledge Graphs (BKGs) have gained significant attention for managing diverse and large-scale biomedical knowledge. The objective of this study is to assess and compare the capabilities of ChatGPT and existing BKGs in question-answering, biomedical knowledge discovery, and reasoning tasks within the biomedical domain. METHODS: We conducted a series of experiments to assess the performance of ChatGPT and the BKGs in various aspects of querying existing biomedical knowledge, knowledge discovery, and knowledge reasoning. Firstly, we tasked ChatGPT with answering questions sourced from the "Alternative Medicine" sub-category of Yahoo! Answers and recorded the responses. Additionally, we queried BKG to retrieve the relevant knowledge records corresponding to the questions and assessed them manually. In another experiment, we formulated a prediction scenario to assess ChatGPT's ability to suggest potential drug/dietary supplement repurposing candidates. Simultaneously, we utilized BKG to perform link prediction for the same task. The outcomes of ChatGPT and BKG were compared and analyzed. Furthermore, we evaluated ChatGPT and BKG's capabilities in establishing associations between pairs of proposed entities. This evaluation aimed to assess their reasoning abilities and the extent to which they can infer connections within the knowledge domain. RESULTS: The results indicate that ChatGPT with GPT-4.0 outperforms both GPT-3.5 and BKGs in providing existing information. However, BKGs demonstrate higher reliability in terms of information accuracy. ChatGPT exhibits limitations in performing novel discoveries and reasoning, particularly in establishing structured links between entities compared to BKGs. CONCLUSIONS: To address the limitations observed, future research should focus on integrating LLMs and BKGs to leverage the strengths of both approaches. Such integration would optimize task performance and mitigate potential risks, leading to advancements in knowledge within the biomedical field and contributing to the overall well-being of individuals.
Hou Y; Yeung J; Xu H; Su C; Wang F; Zhang R
10
39741798
Patient Support in Obstructive Sleep Apnoea by a Large Language Model - ChatGPT 4o on Answering Frequently Asked Questions on First Line Positive Airway Pressure and Second Line Hypoglossal Nerve Stimulation Therapy: A Pilot Study.
2024
Nature and science of sleep
PURPOSE: Obstructive sleep apnoea (OSA) is a common disease that benefits from early treatment and patient support in order to prevent secondary illnesses. This study assesses the capability of the large language model (LLM) ChatGPT-4o to offer patient support regarding first line positive airway pressure (PAP) and second line hypoglossal nerve stimulation (HGNS) therapy. METHODS: Seventeen questions, each regarding PAP and HGNS therapy, were posed to ChatGPT-4o. Answers were rated by experienced experts in sleep medicine on a 6-point Likert scale in the categories of medical adequacy, conciseness, coherence, and comprehensibility. Completeness of medical information and potential hazard for patients were rated using a binary system. RESULTS: Overall, ChatGPT-4o achieved reasonably high ratings in all categories. In medical adequacy, it performed significantly better on PAP questions (mean 4.9) compared to those on HGNS (mean 4.6) (p < 0.05). Scores for coherence, comprehensibility and conciseness showed similar results for both HGNS and PAP answers. Raters confirmed completeness of responses in 45 of 51 ratings (88.24%) for PAP answers and 28 of 51 ratings (54.9%) for HGNS answers. Potential hazards for patients were noted in 2 of 51 ratings (3.9%) for PAP answers and in none for HGNS answers. CONCLUSION: ChatGPT-4o has potential as a valuable patient-oriented support tool in sleep medicine therapy that can enhance subsequent face-to-face consultations with a sleep specialist. However, some substantial flaws regarding second line HGNS therapy are most likely due to recent advances in HGNS therapy and the consequent limited information available in LLM training data.
Pordzik J; Bahr-Hamm K; Huppertz T; Gouveris H; Seifen C; Blaikie A; Matthias C; Kuhn S; Eckrich J; Buhr CR
0-1
39661913
Comparative Analysis of Generative Pre-Trained Transformer Models in Oncogene-Driven Non-Small Cell Lung Cancer: Introducing the Generative Artificial Intelligence Performance Score.
2024
JCO clinical cancer informatics
PURPOSE: Precision oncology in non-small cell lung cancer (NSCLC) relies on biomarker testing for clinical decision making. Despite its importance, challenges like the lack of genomic oncology training, nonstandardized biomarker reporting, and a rapidly evolving treatment landscape hinder its practice. Generative artificial intelligence (AI), such as ChatGPT, offers promise for enhancing clinical decision support. Effective performance metrics are crucial to evaluate these models' accuracy and their propensity for producing incorrect or hallucinated information. We assessed various ChatGPT versions' ability to generate accurate next-generation sequencing reports and treatment recommendations for NSCLC, using a novel Generative AI Performance Score (G-PS), which considers accuracy, relevancy, and hallucinations. METHODS: We queried ChatGPT versions for first-line NSCLC treatment recommendations with a Food and Drug Administration-approved targeted therapy, using a zero-shot prompt approach for eight oncogenes. Responses were assessed against National Comprehensive Cancer Network (NCCN) guidelines for accuracy, relevance, and hallucinations, with G-PS calculating scores from -1 (all hallucinations) to 1 (fully NCCN-compliant recommendations). G-PS was designed as a composite measure with a base score for correct recommendations (weighted for preferred treatments) and a penalty for hallucinations. RESULTS: Analyzing 160 responses, generative pre-trained transformer (GPT)-4 outperformed GPT-3.5, showing a higher base score (90% v 60%; P < .01) and fewer hallucinations (34% v 53%; P < .01). GPT-4's overall G-PS was significantly higher (0.34 v -0.15; P < .01), indicating superior performance. CONCLUSION: This study highlights the rapid improvement of generative AI in matching treatment recommendations with biomarkers in precision oncology. Although the rate of hallucinations improved in the GPT-4 model, future generative AI use in clinical care requires high levels of accuracy with minimal to no room for hallucinations. The G-PS represents a novel metric quantifying generative AI utility in health care compared with national guidelines, with potential adaptation beyond precision oncology.
Hamilton Z; Aseem A; Chen Z; Naffakh N; Reizine NM; Weinberg F; Jain S; Kessler LG; Gadi VK; Bun C; Nguyen RH
10
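The abstract above specifies only the G-PS range (-1 for all hallucinations, 1 for fully NCCN-compliant recommendations) and its structure (a base score weighted toward preferred treatments, minus a hallucination penalty). One plausible formalization under those constraints is sketched below; the weights and functional form are assumptions, not the paper's definition.

```python
# One plausible G-PS formalization: weighted base score for correct
# recommendations minus the hallucination fraction. Weights are assumed.
def g_ps(n_preferred: int, n_other_correct: int, n_hallucinated: int,
         n_total: int, preferred_w: float = 1.0, other_w: float = 0.5) -> float:
    """Score in [-1, 1]: -1 = all hallucinated, 1 = all preferred-correct."""
    base = (preferred_w * n_preferred + other_w * n_other_correct) / n_total
    return base - n_hallucinated / n_total

print(g_ps(0, 0, 10, 10))   # -1.0: every recommendation hallucinated
print(g_ps(10, 0, 0, 10))   #  1.0: every recommendation NCCN-preferred
```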
37668790
Performance of ChatGPT in Israeli Hebrew OBGYN national residency examinations.
2023
Archives of gynecology and obstetrics
PURPOSE: Previous studies of ChatGPT performance in the field of medical examinations have reached contradictory results. Moreover, the performance of ChatGPT in languages other than English is yet to be explored. We aim to study the performance of ChatGPT in the Hebrew OBGYN 'Shlav-Alef' (Phase 1) examination. METHODS: A performance study was conducted using a consecutive sample of text-based multiple choice questions, originated from authentic Hebrew OBGYN 'Shlav-Alef' examinations in 2021-2022. We constructed 150 multiple choice questions from consecutive text-based-only original questions. We compared the performance of ChatGPT to the real-life performance of OBGYN residents who completed the tests in 2021-2022. We also compared ChatGPT's performance in Hebrew with its previously published performance on English medical tests. RESULTS: In 2021-2022, 27.8% of OBGYN residents failed the 'Shlav-Alef' examination and the mean score of the residents was 68.4. Overall, 150 authentic questions were evaluated (one examination). ChatGPT correctly answered 58 questions (38.7%) and received a failing score. The performance of ChatGPT in Hebrew was lower when compared to the actual performance of residents: 38.7% vs. 68.4%, p < .001. Compared with ChatGPT's performance on 9,091 English-language questions in the field of medicine, its performance in Hebrew was also lower (38.7% in Hebrew vs. 60.7% in English, p < .001). CONCLUSIONS: ChatGPT correctly answered fewer than 40% of Hebrew OBGYN residency examination questions. Residents cannot rely on ChatGPT for the preparation of this examination. Efforts should be made to improve ChatGPT performance in languages besides English.
Cohen A; Alter R; Lessans N; Meyer R; Brezinov Y; Levin G
21
37530687
Exploring the Role of a Large Language Model on Carpal Tunnel Syndrome Management: An Observation Study of ChatGPT.
2023
The Journal of hand surgery
PURPOSE: Recently, large language models, such as ChatGPT, have emerged as promising tools to facilitate scientific research and health care management. The present study aimed to explore the extent of knowledge possessed by ChatGPT concerning carpal tunnel syndrome (CTS), a compressive neuropathy that may lead to impaired hand function and that is frequently encountered in the field of hand surgery. METHODS: Six questions pertaining to diagnosis and management of CTS were posed to ChatGPT. The responses were subsequently analyzed and evaluated based on their accuracy, coherence, and comprehensiveness. In addition, ChatGPT was requested to provide five high-level evidence references in support of its answers. A simulated doctor-patient consultation was also conducted to assess whether ChatGPT could offer safe medical advice. RESULTS: ChatGPT supplied clinically relevant information regarding CTS, although at a relatively superficial level. In the context of doctor-patient interaction, ChatGPT suggested a diagnostic pathway that deviated from the widely accepted clinical consensus on CTS diagnosis. Nevertheless, it incorporated differential diagnoses and valuable management options for CTS. Although ChatGPT demonstrated the ability to retain and recall information from previous patient conversations, it infrequently produced pertinent references, many of which were either nonexistent or incorrect. CONCLUSIONS: ChatGPT displayed the capability to deliver validated medical information on CTS to nonmedical individuals. However, the generation of nonexistent and inaccurate references by ChatGPT presents a challenge to academic integrity. CLINICAL RELEVANCE: To increase their utility in medicine and academia, large language models must go through specialized reputable data set training and validation from experts. It is essential to note that at present, large language models cannot replace the expertise of health care professionals and may act as a supportive tool.
Seth I; Xie Y; Rodwell A; Gracias D; Bulloch G; Hunter-Smith DJ; Rozen WM
32
37566133
Will artificial intelligence chatbots replace clinical pharmacologists? An exploratory study in clinical practice.
2023
European journal of clinical pharmacology
PURPOSE: Recently, there has been a growing interest in using ChatGPT for various applications in medicine. We evaluated the usefulness of OpenAI's chatbot (GPT-4.0) for drug information activities at the Toulouse Pharmacovigilance Center. METHODS: Based on a series of 50 randomly selected questions sent to our pharmacovigilance center by healthcare professionals or patients, we compared the level of responses from the chatbot GPT-4.0 with those provided by specialists in pharmacovigilance. RESULTS: Overall, the chatbot's answers were not acceptable. Responses to inquiries regarding the assessment of drug causality were not consistently precise or clinically meaningful. CONCLUSION: The value of chatbot assistance needs to be confirmed or refuted through further studies conducted in other pharmacovigilance centers.
Montastruc F; Storck W; de Canecaude C; Victor L; Li J; Cesbron C; Zelmat Y; Barus R
10
38821410
Reliability and readability analysis of ChatGPT-4 and Google Bard as a patient information source for the most commonly applied radionuclide treatments in cancer patients.
2024
Revista espanola de medicina nuclear e imagen molecular
PURPOSE: Searching for online health information is a popular approach employed by patients to enhance their knowledge of their diseases. Recently developed AI chatbots are probably the easiest way in this regard. The purpose of the study is to analyze the reliability and readability of AI chatbot responses in terms of the most commonly applied radionuclide treatments in cancer patients. METHODS: Basic patient questions, thirty about RAI, PRRT and TARE treatments and twenty-nine about PSMA-TRT, were asked one by one to GPT-4 and Bard in January 2024. The reliability and readability of the responses were assessed using the DISCERN scale, Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL). RESULTS: The mean (SD) FKRGL scores for the responses of GPT-4 and Google Bard about RAI, PSMA-TRT, PRRT and TARE treatments were 14.57 (1.19), 14.65 (1.38), 14.25 (1.10), 14.38 (1.2) and 11.49 (1.59), 12.42 (1.71), 11.35 (1.80), 13.01 (1.97), respectively. In terms of readability, the FKRGL scores of the responses of GPT-4 and Google Bard about RAI, PSMA-TRT, PRRT and TARE treatments were above the general public reading grade level. The mean (SD) DISCERN scores assessed by a nuclear medicine physician for the responses of GPT-4 and Bard about RAI, PSMA-TRT, PRRT and TARE treatments were 47.86 (5.09), 48.48 (4.22), 46.76 (4.09), 48.33 (5.15) and 51.50 (5.64), 53.44 (5.42), 53 (6.36), 49.43 (5.32), respectively. Based on mean DISCERN scores, the reliability of the responses of GPT-4 and Google Bard about RAI, PSMA-TRT, PRRT, and TARE treatments ranged from fair to good. The inter-rater reliability correlation coefficients of DISCERN scores assessed by GPT-4, Bard and a nuclear medicine physician for the responses of GPT-4 about RAI, PSMA-TRT, PRRT and TARE treatments were 0.512 (95% CI 0.296: 0.704), 0.695 (95% CI 0.518: 0.829), 0.687 (95% CI 0.511: 0.823) and 0.649 (95% CI 0.462: 0.798), respectively (p < 0.01). The inter-rater reliability correlation coefficients of DISCERN scores assessed by GPT-4, Bard and a nuclear medicine physician for the responses of Bard about RAI, PSMA-TRT, PRRT and TARE treatments were 0.753 (95% CI 0.602: 0.863), 0.812 (95% CI 0.686: 0.899), 0.804 (95% CI 0.677: 0.894) and 0.671 (95% CI 0.489: 0.812), respectively (p < 0.01). The inter-rater reliability for the responses of Bard and GPT-4 about RAI, PSMA-TRT, PRRT and TARE treatments was moderate to good. Further, consulting the nuclear medicine physician was rarely emphasized by either GPT-4 or Google Bard, and references were included in some responses of Google Bard, but there were no references in GPT-4. CONCLUSION: Although the information provided by AI chatbots may be acceptable in medical terms, it may not be easy for the general public to read, which may prevent it from being understood. Effective prompts using 'prompt engineering' may refine the responses in a more comprehensible manner. Since radionuclide treatments are specific to nuclear medicine expertise, the nuclear medicine physician needs to be stated as a consultant in responses in order to guide patients and caregivers to obtain accurate medical advice. Referencing is significant in terms of confidence and satisfaction of patients and caregivers seeking information.
San H; Bayrakci O; Cagdas B; Serdengecti M; Alagoz E
43
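The readability metrics in the study above are the standard Flesch formulas: FRE = 206.835 - 1.015(words/sentence) - 84.6(syllables/word), and FKRGL = 0.39(words/sentence) + 11.8(syllables/word) - 15.59. A minimal sketch follows; the vowel-group syllable counter is a rough heuristic, not the dictionary-based counting that dedicated readability tools use.

```python
# Flesch Reading Ease (FRE) and Flesch-Kincaid grade level (FKRGL)
# with a rough vowel-group syllable heuristic.
import re

def syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    n_sent = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_syll = sum(syllables(w) for w in words)
    asl = len(words) / n_sent          # average sentence length
    asw = n_syll / len(words)          # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw
    fkrgl = 0.39 * asl + 11.8 * asw - 15.59
    return fre, fkrgl

print(readability("Radioiodine therapy uses radioactive iodine to treat thyroid cancer."))
```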
39831699
A randomised cross-over trial assessing the impact of AI-generated individual feedback on written online assignments for medical students.
2025
Medical teacher
PURPOSE: Self-testing has been proven to significantly improve not only simple learning outcomes, but also higher-order skills such as clinical reasoning in medical students. Previous studies have shown that self-testing was especially beneficial when it was presented with feedback, which raises the question of whether immediate, personalized feedback further encourages this effect. Therefore, we hypothesised that individual feedback has a greater effect on learning outcomes compared to generic feedback. MATERIALS AND METHODS: In a randomised cross-over trial, German medical students were invited to voluntarily answer daily key-feature questions via an app. For half of the items they received generalised feedback by an expert, while the feedback on the other half was generated immediately through ChatGPT. After the intervention, the students participated in a mandatory exit exam. RESULTS: Those participants who used the app more frequently experienced a better learning outcome compared to those who did not use it frequently, although this finding is only correlational in nature. The individual ChatGPT-generated feedback did not show a greater effect on exit exam scores compared to the expert comment (51.8 +/- 22.0% vs. 55.8 +/- 22.8%; p = 0.06). CONCLUSION: This study provides a proof of concept for personalised feedback on medical questions. Despite the promising results, improved prompting and further development of the application seem necessary to strengthen the possible impact of the personalised feedback. Our study closes a research gap and holds great potential for further use not only in medicine but also in other academic fields.
Nissen L; Rother JF; Heinemann M; Reimer LM; Jonas S; Raupach T
21
38304112
Exploring Capabilities of Large Language Models such as ChatGPT in Radiation Oncology.
2024
Advances in radiation oncology
PURPOSE: Technological progress of machine learning and natural language processing has led to the development of large language models (LLMs), capable of producing well-formed text responses and providing natural language access to knowledge. Modern conversational LLMs such as ChatGPT have shown remarkable capabilities across a variety of fields, including medicine. These models may assess even highly specialized medical knowledge within specific disciplines, such as radiation therapy. We conducted an exploratory study to examine the capabilities of ChatGPT to answer questions in radiation therapy. METHODS AND MATERIALS: A set of multiple-choice questions about clinical, physics, and biology general knowledge in radiation oncology as well as a set of open-ended questions were created. These were given as prompts to the LLM ChatGPT, and the answers were collected and analyzed. For the multiple-choice questions, we checked how many of the model's answers could be clearly assigned to one of the allowed multiple-choice answers, and determined the proportion of correct answers. For the open-ended questions, independent blinded radiation oncologists evaluated the quality of the answers regarding correctness and usefulness on a 5-point Likert scale. Furthermore, the evaluators were asked to provide suggestions for improving the quality of the answers. RESULTS: For 70 multiple-choice questions, ChatGPT gave valid answers in 66 cases (94.3%). In 60.61% of the valid answers, the selected answer was correct (50.0% of clinical questions, 78.6% of physics questions, and 58.3% of biology questions). For 25 open-ended questions, 12 answers of ChatGPT were considered "acceptable," "good," or "very good" regarding both correctness and helpfulness by all 6 participating radiation oncologists. Overall, the answers were considered "very good" in 29.3% and 28%, "good" in 28% and 29.3%, "acceptable" in 19.3% and 19.3%, "bad" in 9.3% and 9.3%, and "very bad" in 14% and 14% regarding correctness/helpfulness. CONCLUSIONS: Modern conversational LLMs such as ChatGPT can provide satisfying answers to many relevant questions in radiation therapy. As they still fall short of consistently providing correct information, it is problematic to use them for obtaining medical information. As LLMs further improve in the future, they are expected to have an increasing impact not only on general society, but also on clinical practice, including radiation oncology.
Dennstadt F; Hastings J; Putora PM; Vu E; Fischer GF; Suveg K; Glatzer M; Riggenbach E; Ha HL; Cihoric N
0-1
38717951
Using ChatGPT-4 to Analyze 24-Hour Urine Results and Generate Custom Dietary Recommendations for Nephrolithiasis.
2,024
Journal of endourology
Purpose: The increasing incidence of nephrolithiasis underscores the need for effective, accessible tools to aid urologists in preventing recurrence. Despite dietary modification's crucial role in prevention, targeted dietary counseling using 24-hour urine collections is underutilized. This study evaluates ChatGPT-4, a multimodal large language model, in analyzing urine collection results and providing custom dietary advice, exploring the potential for artificial intelligence-assisted analysis and counseling. Materials and Methods: Eleven unique prompts with synthesized 24-hour urine collection results were submitted to ChatGPT-4. The model was instructed to provide five dietary recommendations in response to the results. One prompt contained all "normal" values, with subsequent prompts introducing one abnormality each. Generated responses were assessed for accuracy, completeness, and appropriateness by two urologists, a nephrologist, and a clinical dietitian. Results: ChatGPT-4 achieved average scores of 5.2/6 for accuracy, 2.4/3 for completeness, and 2.6/3 for appropriateness. It correctly identified all "normal" values but had difficulty consistently detecting abnormalities and formulating appropriate recommendations. The model performed particularly poorly in response to calcium and citrate abnormalities and failed to address 3/10 abnormalities entirely. Conclusions: ChatGPT-4 exhibits potential in the dietary management of nephrolithiasis but requires further refinement for dependable performance. The model demonstrated the ability to generate personalized recommendations that were often accurate and complete but displayed inconsistencies in identifying and addressing urine abnormalities. Despite these limitations, with precise prompt design, physician oversight, and continued training, ChatGPT-4 can serve as a foundation for personalized medicine while also reducing administrative burden, indicating its promising role in improving the management of conditions such as nephrolithiasis.
Kiriakedis S; Duty B; Chase T; Wusirika R; Metzler I
0-1
37776392
Evaluating ChatGPT responses in the context of a 53-year-old male with a femoral neck fracture: a qualitative analysis.
2,024
European journal of orthopaedic surgery & traumatology : orthopedie traumatologie
PURPOSE: The integration of artificial intelligence (AI) tools, such as ChatGPT, in clinical medicine and medical education has gained significant attention due to their potential to support decision-making and improve patient care. However, there is a need to evaluate the benefits and limitations of these tools in specific clinical scenarios. METHODS: This study used a case study approach within the field of orthopaedic surgery. A clinical case report featuring a 53-year-old male with a femoral neck fracture was used as the basis for evaluation. ChatGPT, a large language model, was asked to respond to clinical questions related to the case. The responses generated by ChatGPT were evaluated qualitatively, considering their relevance, justification, and alignment with the responses of real clinicians. Alternative dialogue protocols were also employed to assess the impact of additional prompts and contextual information on ChatGPT responses. RESULTS: ChatGPT generally provided clinically appropriate responses to the questions posed in the clinical case report. However, the level of justification and explanation varied across the generated responses. Occasionally, clinically inappropriate responses and inconsistencies were observed in the generated responses across different dialogue protocols and on separate days. CONCLUSIONS: The findings of this study highlight both the potential and limitations of using ChatGPT in clinical practice. While ChatGPT demonstrated the ability to provide relevant clinical information, the lack of consistent justification and occasional clinically inappropriate responses raise concerns about its reliability. These results underscore the importance of careful consideration and validation when using AI tools in healthcare. Further research and clinician training are necessary to effectively integrate AI tools like ChatGPT, ensuring their safe and reliable use in clinical decision-making.
Zhou Y; Moon C; Szatkowski J; Moore D; Stevens J
32
38661379
Keeping Up With ChatGPT: Evaluating Its Recognition and Interpretation of Nuclear Medicine Images.
2,024
Clinical nuclear medicine
PURPOSE: The latest iteration of GPT-4 (generative pretrained transformer) is a large multimodal model that can integrate both text and image input, but its performance with medical images has not been systematically evaluated. We studied whether ChatGPT with GPT-4V(ision) can recognize images from common nuclear medicine examinations and interpret them. PATIENTS AND METHODS: Fifteen representative images (scintigraphy, 11; PET, 4) were submitted to ChatGPT with GPT-4V(ision), both in its Default and "Advanced Data Analysis (beta)" version. ChatGPT was asked to name the type of examination and tracer, explain the findings, and state whether there are abnormalities. ChatGPT was also asked to mark anatomical structures or pathological findings. The appropriateness of the responses was rated by 3 nuclear medicine physicians. RESULTS: The Default version identified the examination and the tracer correctly in the majority of the 15 cases (60% and 53%, respectively) and gave an "appropriate" description of the findings and abnormalities in 47% and 33% of cases, respectively. The Default version cannot manipulate images. "Advanced Data Analysis (beta)" failed in all tasks in >90% of cases. A "major" or "incompatible" inconsistency between 3 trials of the same prompt was observed in 73% of cases (Default version) and 87% of cases ("Advanced Data Analysis (beta)" version). CONCLUSIONS: Although GPT-4V(ision) demonstrates preliminary capabilities in analyzing nuclear medicine images, it exhibits significant limitations, particularly in its reliability (ie, correctness, predictability, and consistency).
Rogasch JMM; Jochens HV; Metzger G; Wetz C; Kaufmann J; Furth C; Amthauer H; Schatka I
0-1
37790756
Benchmarking ChatGPT-4 on a radiation oncology in-training exam and Red Journal Gray Zone cases: potentials and challenges for AI-assisted medical education and decision making in radiation oncology.
2,023
Frontiers in oncology
PURPOSE: The potential of large language models in medicine for education and decision-making purposes has been demonstrated as they have achieved decent scores on medical exams such as the United States Medical Licensing Exam (USMLE) and the MedQA exam. This work aims to evaluate the performance of ChatGPT-4 in the specialized field of radiation oncology. METHODS: The 38th American College of Radiology (ACR) radiation oncology in-training (TXIT) exam and the 2022 Red Journal Gray Zone cases are used to benchmark the performance of ChatGPT-4. The TXIT exam contains 300 questions covering various topics of radiation oncology. The 2022 Gray Zone collection contains 15 complex clinical cases. RESULTS: On the TXIT exam, ChatGPT-3.5 and ChatGPT-4 achieved scores of 62.05% and 78.77%, respectively, highlighting the advantage of the newer ChatGPT-4 model. Based on the TXIT exam, ChatGPT-4's strong and weak areas in radiation oncology can be identified to some extent. Specifically, ChatGPT-4 demonstrates better knowledge of statistics, CNS & eye, pediatrics, biology, and physics than of bone & soft tissue and gynecology, as per the ACR knowledge domains. Regarding clinical care paths, ChatGPT-4 performs better in diagnosis, prognosis, and toxicity than in brachytherapy and dosimetry. It lacks proficiency in the in-depth details of clinical trials. For the Gray Zone cases, ChatGPT-4 is able to suggest a personalized treatment approach for each case with high correctness and comprehensiveness. Importantly, it provides novel treatment aspects for many cases that are not suggested by any human expert. CONCLUSION: Both evaluations demonstrate the potential of ChatGPT-4 in medical education for the general public and cancer patients, as well as its potential to aid clinical decision-making, while acknowledging its limitations in certain domains. Owing to the risk of hallucination, it is essential to verify content generated by models such as ChatGPT for accuracy.
Huang Y; Gomaa A; Semrau S; Haderlein M; Lettmaier S; Weissmann T; Grigo J; Tkhayat HB; Frey B; Gaipl U; Distel L; Maier A; Fietkau R; Bert C; Putz F
21
38066714
Genetic counselors' utilization of ChatGPT in professional practice: A cross-sectional study.
2,024
American journal of medical genetics. Part A
PURPOSE: The precision medicine era has seen increased utilization of artificial intelligence (AI) in the field of genetics. We sought to explore the ways that genetic counselors (GCs) currently use the publicly accessible AI tool Chat Generative Pre-trained Transformer (ChatGPT) in their work. METHODS: GCs in North America were surveyed about how ChatGPT is used in different aspects of their work. Descriptive statistics were reported through frequencies and means. RESULTS: Of 118 GCs who completed the survey, 33.8% (40) reported using ChatGPT in their work; 47.5% (19) use it in clinical practice, 35% (14) use it in education, and 32.5% (13) use it in research. Most GCs (62.7%; 74) felt that it saves time on administrative tasks but the majority (82.2%; 97) felt that a paramount challenge was the risk of obtaining incorrect information. The majority of GCs not using ChatGPT (58.9%; 46) felt it was not necessary for their work. CONCLUSION: A considerable number of GCs in the field are using ChatGPT in different ways, but it is primarily helpful with tasks that involve writing. It has potential to streamline workflow issues encountered in clinical genetics, but practitioners need to be informed and uniformly trained about its limitations.
Ahimaz P; Bergner AL; Florido ME; Harkavy N; Bhattacharyya S
0-1
40352999
Fruits of the Professional Educator Appreciation and Recognition (PEAR) Awards: Learning what Students Value in Their Medical Educators.
2,025
Medical science educator
PURPOSE: The Professional Educator Appreciation and Recognition (PEAR) awards program was created by students at Baylor College of Medicine (BCM) in 2020 to recognize exemplary educators. We reviewed our 3-year experience of this initiative to identify characteristics of award-winning educators, and to assess the award's impact. MATERIALS AND METHODS: We reviewed the title, department, and clinical affiliation of award winners. Using ChatGPT, we qualitatively analyzed de-identified nomination narratives to identify themes of educator characteristics most valued by students. Following refinement by the study team, each award was then assigned one or more themes. Unsolicited thank-you emails were reviewed to consider the impact of the award upon recipients. RESULTS: Of 202 award recipients, most were near-peers (two-thirds were assistant professors or residents). Winners were affiliated with diverse departments and disciplines. Students most valued outstanding teaching skills (36.6%), showing support by being kind, encouraging, or approachable (26.7%), and investing time in trainees via constructive feedback (26.7%). Unsolicited thank-you emails from 20% of recipients conveyed the meaning of the award for the winning educators. CONCLUSIONS: A review of the BCM PEAR award program identified educator characteristics most valued by students, providing targets for professional development. This low-cost, high-impact initiative may enhance educator wellness and motivate professional behaviors. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s40670-024-02234-2.
Tomlinson M; Nasto K; Gosselin K; Friedman EM; Gill A; Rose S
10
39110155
Accuracy and Completeness of Large Language Models About Antibody-Drug Conjugates and Associated Ocular Adverse Effects.
2,024
Cornea
PURPOSE: The purpose of this study was to assess the accuracy and completeness of 3 large language models (LLMs) in generating information about antibody-drug conjugate (ADC)-associated ocular toxicities. METHODS: Twenty-two questions about ADCs, tisotumab vedotin, and mirvetuximab soravtansine were developed and input into ChatGPT 4.0, Bard, and LLaMa. Answers were rated by 4 ocular toxicity experts using standardized 6-point Likert scales on accuracy and completeness. ANOVA tests were conducted for comparison between the 3 subgroups, followed by pairwise t-tests. Interrater variability was assessed with Fleiss kappa tests. RESULTS: The mean accuracy score was 4.62 (SD 0.89) for ChatGPT, 4.77 (SD 0.90) for Bard, and 4.41 (SD 1.09) for LLaMA. Both ChatGPT (P = 0.03) and Bard (P = 0.003) scored significantly better for accuracy when compared with LLaMA. The mean completeness score was 4.43 (SD 0.91) for ChatGPT, 4.57 (SD 0.93) for Bard, and 4.42 (SD 0.99) for LLaMA. There were no significant differences in completeness scores between groups. Fleiss kappa assessment of interrater variability was good (0.74) for accuracy and fair (0.31) for completeness. CONCLUSIONS: All 3 LLMs had relatively high accuracy and completeness ratings, showing that LLMs are able to provide sufficient answers for niche topics of ophthalmology. Our results indicate that ChatGPT and Bard may be slightly better at providing accurate answers than LLaMA. As further research and treatment plans are developed for ADC-associated ocular toxicities, these LLMs should be reassessed to see if they provide complete and accurate answers that remain in line with current medical knowledge.
Marshall R; Xu H; Dalvin LA; Mishra K; Edalat C; Kirupaharan N; Francis JH; Berkenstock M
10
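A minimal Python sketch of the ANOVA-plus-pairwise-t-test workflow described in the abstract above (not the study's code): the rating arrays are fabricated placeholders drawn to match the reported means and SDs, and the group size of 88 (22 questions x 4 raters) is an assumption.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
chatgpt = rng.normal(4.62, 0.89, 88)  # means/SDs taken from the abstract
bard    = rng.normal(4.77, 0.90, 88)
llama   = rng.normal(4.41, 1.09, 88)

# One-way ANOVA across the three models
f_stat, p_anova = stats.f_oneway(chatgpt, bard, llama)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pairwise two-sided t-tests (unadjusted; a multiple-comparison
# correction such as Bonferroni would usually follow)
for name, pair in [("ChatGPT vs LLaMA", (chatgpt, llama)),
                   ("Bard vs LLaMA",    (bard, llama)),
                   ("ChatGPT vs Bard",  (chatgpt, bard))]:
    t, p = stats.ttest_ind(*pair)
    print(f"{name}: t={t:.2f}, p={p:.4f}")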
38615289
Geriatrics and artificial intelligence in Spain (Ger-IA project): talking to ChatGPT, a nationwide survey.
2,024
European geriatric medicine
PURPOSE: The purposes of the study were to describe the degree of agreement between geriatricians and the answers given by an AI tool (ChatGPT) to questions related to different areas of geriatrics, to study the differences between specialists and residents in geriatrics in terms of the degree of agreement with ChatGPT, and to analyse the mean scores obtained by areas of knowledge/domains. METHODS: An observational study was conducted involving 126 doctors from 41 geriatric medicine departments in Spain. Ten questions about geriatric medicine were posed to ChatGPT, and doctors evaluated the AI's answers using a Likert scale. Sociodemographic variables were included. Questions were categorized into five knowledge domains, and means and standard deviations were calculated for each. RESULTS: 130 doctors answered the questionnaire; 126 doctors (69.8% women, mean age 41.4 [9.8]) were included in the final analysis. The mean score obtained by ChatGPT was 3.1/5 [0.67]. Specialists rated ChatGPT lower than residents (3.0/5 vs. 3.3/5 points, respectively, P < 0.05). By domain, ChatGPT scored better on general/theoretical questions (M: 3.96; SD: 0.71) than on complex decisions/end-of-life situations (M: 2.50; SD: 0.76), and answers related to diagnosis/performance of complementary tests received the lowest scores (M: 2.48; SD: 0.77). CONCLUSION: Scores varied considerably depending on the area of knowledge. Questions related to theoretical aspects of challenges/the future of geriatrics obtained better scores. For complex decision-making, the appropriateness of therapeutic effort, or decisions about diagnostic tests, professionals indicated poorer performance. AI is likely to be incorporated into some areas of medicine, but it still presents important limitations, mainly in complex medical decision-making.
Rossello-Jimenez D; Docampo S; Collado Y; Cuadra-Llopart L; Riba F; Llonch-Masriera M
43
38206515
Repeatability, reproducibility, and diagnostic accuracy of a commercial large language model (ChatGPT) to perform emergency department triage using the Canadian triage and acuity scale.
2,024
CJEM
PURPOSE: The release of the ChatGPT prototype to the public in November 2022 drastically reduced the barrier to using artificial intelligence by allowing easy access to a large language model with only a simple web interface. One situation where ChatGPT could be useful is in triaging patients arriving to the emergency department. This study aimed to address the research problem: "can emergency physicians use ChatGPT to accurately triage patients using the Canadian Triage and Acuity Scale (CTAS)?". METHODS: Six unique prompts were developed independently by five emergency physicians. An automated script was used to query ChatGPT with each of the 6 prompts combined with 61 validated and previously published patient vignettes. Thirty repetitions of each combination were performed for a total of 10,980 simulated triages. RESULTS: In 99.6% of 10,980 queries, a CTAS score was returned. However, there was considerable variation in results. Repeatability (use of the same prompt repeatedly) was responsible for 21.0% of overall variation. Reproducibility (use of different prompts) was responsible for 4.0% of overall variation. The overall accuracy of ChatGPT in triaging simulated patients was 47.5%, with a 13.7% under-triage rate and a 38.7% over-triage rate. More extensively detailed text given as a prompt was associated with greater reproducibility but a minimal increase in accuracy. CONCLUSIONS: This study suggests that the current ChatGPT large language model is not sufficient for emergency physicians to triage simulated patients using the Canadian Triage and Acuity Scale due to poor repeatability and accuracy. Medical practitioners should be aware that while ChatGPT can be a valuable tool, it may lack consistency and may frequently provide false information.
Franc JM; Cheng L; Hart A; Hata R; Hertelendy A
10
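A minimal Python sketch of how the accuracy and mis-triage rates above can be computed from paired CTAS assignments (assumed definitions matching the abstract: CTAS 1 is most acute and 5 least acute, under-triage assigns a less acute level than the truth, over-triage a more acute one; the labels below are fabricated).

def triage_rates(true_ctas, pred_ctas):
    n = len(true_ctas)
    exact = sum(t == p for t, p in zip(true_ctas, pred_ctas))
    under = sum(p > t for t, p in zip(true_ctas, pred_ctas))  # rated less acute than truth
    over  = sum(p < t for t, p in zip(true_ctas, pred_ctas))  # rated more acute than truth
    return exact / n, under / n, over / n

# Toy example with fabricated labels:
acc, under, over = triage_rates([1, 2, 3, 4, 5], [1, 3, 3, 3, 5])
print(acc, under, over)  # 0.6 accuracy, 0.2 under-triage, 0.2 over-triage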
38217726
ChatGPT vs UpToDate: comparative study of usefulness and reliability of Chatbot in common clinical presentations of otorhinolaryngology-head and neck surgery.
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
PURPOSE: The use of chatbots, a form of artificial intelligence, in medicine has been increasing in recent years. UpToDate(R) is another well-known search tool, built on evidence-based knowledge and used daily by doctors worldwide. In this study, we aimed to investigate the usefulness and reliability of ChatGPT compared with UpToDate in Otorhinolaryngology and Head and Neck Surgery (ORL-HNS). MATERIALS AND METHODS: ChatGPT-3.5 and UpToDate were queried about the management of 25 common clinical case scenarios (13 male/12 female) drawn from the literature, reflecting everyday practice at the Department of Otorhinolaryngology of Ege University Faculty of Medicine. Scientific references for the management were requested for each clinical case. The accuracy of the references in the ChatGPT answers was assessed on a 0-2 scale, and the usefulness of the ChatGPT and UpToDate answers was assessed by reviewers on a 1-3 scale. UpToDate and ChatGPT-3.5 responses were compared. RESULTS: ChatGPT did not provide references for some questions, in contrast to UpToDate. ChatGPT's information was limited to 2021. UpToDate supported its answers with subheadings, tables, figures, and algorithms. The mean accuracy score of references in ChatGPT answers was 0.25 (weak/unrelated). The median (Q1-Q3) usefulness score was 1.00 (1.25-2.00) for ChatGPT and 2.63 (2.75-3.00) for UpToDate; the difference was statistically significant (p < 0.001). UpToDate was found to be more useful and reliable than ChatGPT. CONCLUSIONS: ChatGPT has the potential to help physicians find information, but our results suggest that ChatGPT needs improvement to increase the usefulness and reliability of medical evidence-based knowledge.
Karimov Z; Allahverdiyev I; Agayarov OY; Demir D; Almuradova E
32
38218659
An evaluation of AI generated literature reviews in musculoskeletal radiology.
2,024
The surgeon : journal of the Royal Colleges of Surgeons of Edinburgh and Ireland
PURPOSE: The use of artificial intelligence (AI) tools to aid in summarizing information in medicine and research has recently garnered a huge amount of interest. While tools such as ChatGPT produce convincing and natural-sounding output, the answers are sometimes incorrect. It is hoped that some of these drawbacks can be avoided by using programmes trained for a more specific scope. In this study we compared the performance of a new AI tool (the-literature.com) with the latest version of OpenAI's ChatGPT (GPT-4) in summarizing topics that the authors have significantly contributed to. METHODS: The AI tools were asked to produce a literature review on 7 topics. These were selected from research topics that the authors are intimately familiar with and have contributed to through their own publications. The outputs produced by the AI tools were graded on a 1-5 Likert scale for accuracy, comprehensiveness, and relevance by two fellowship-trained consultant radiologists. RESULTS: The-literature.com produced 3 excellent summaries, 3 very poor summaries not relevant to the prompt, and one summary that was relevant but did not include all relevant papers. All of the summaries produced by GPT-4 were relevant, but fewer relevant papers were identified. The average Likert rating was 2.88 for the-literature.com and 3.86 for GPT-4. There was good agreement between the ratings of both radiologists (ICC = 0.883). CONCLUSION: Summaries produced by AI in its current state require careful human validation. GPT-4 on average provides higher-quality summaries. Neither tool can reliably identify all relevant publications.
Jenko N; Ariyaratne S; Jeys L; Evans S; Iyengar KP; Botchu R
10
38977032
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.
2,024
Journal of educational evaluation for health professions
PURPOSE: This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) on standardized urology multiple-choice items in the United States. METHODS: In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024. RESULTS: GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.0001). GPT-4 (vs. GPT-3.5) had higher accuracy on urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 on recall (45.9% vs. 27.4%, P<0.00001) and interpretation (45.6% vs. 31.5%, P=0.0005) items, whereas the difference was not significant for the higher-complexity problem-solving items (41.8% vs. 34.5%, P=0.56). CONCLUSIONS: ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standard for the American Board of Urology's Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate on board examination items. For now, its responses should be scrutinized.
Yudovich MS; Makarova E; Hague CM; Raman JD
21
39349172
Evaluation of the reliability and readability of chatbot responses as a patient information resource for the most common PET/CT examinations.
2,025
Revista espanola de medicina nuclear e imagen molecular
PURPOSE: This study aimed to evaluate the reliability and readability of responses generated by two popular AI chatbots, 'ChatGPT-4.0' and 'Google Gemini', to potential patient questions about PET/CT scans. MATERIALS AND METHODS: Thirty potential questions for each of [(18)F]FDG and [(68)Ga]Ga-DOTA-SSTR PET/CT, and twenty-nine potential questions for [(68)Ga]Ga-PSMA PET/CT, were asked separately to ChatGPT-4 and Gemini in May 2024. The responses were evaluated for reliability and readability using the modified DISCERN (mDISCERN) scale, Flesch Reading Ease (FRE), Gunning Fog Index (GFI), and Flesch-Kincaid Reading Grade Level (FKRGL). The inter-rater reliability of mDISCERN scores provided by three raters (ChatGPT-4, Gemini, and a nuclear medicine physician) was assessed. RESULTS: The median [min-max] mDISCERN scores rated by the physician for responses about FDG, PSMA, and DOTA-SSTR PET/CT scans were 3.5 [2-4], 3 [3-4], and 3 [3-4] for ChatGPT-4, and 4 [2-5], 4 [2-5], and 3.5 [3-5] for Gemini, respectively. The mDISCERN scores assessed using ChatGPT-4 for answers about FDG, PSMA, and DOTA-SSTR PET/CT scans were 3.5 [3-5], 3 [3-4], and 3 [2-3] for ChatGPT-4, and 4 [3-5], 4 [3-5], and 4 [3-5] for Gemini, respectively. The mDISCERN scores evaluated using Gemini for responses about FDG, PSMA, and DOTA-SSTR PET/CTs were 3 [2-4], 2 [2-4], and 3 [2-4] for ChatGPT-4, and 3 [2-5], 3 [1-5], and 3 [2-5] for Gemini, respectively. The inter-rater reliability correlation coefficients of mDISCERN scores for ChatGPT-4 responses about FDG, PSMA, and DOTA-SSTR PET/CT scans were 0.629 (95% CI = 0.32-0.812), 0.707 (95% CI = 0.458-0.853), and 0.738 (95% CI = 0.519-0.866), respectively (p < 0.001). The correlation coefficients of mDISCERN scores for Gemini responses about FDG, PSMA, and DOTA-SSTR PET/CT scans were 0.824 (95% CI = 0.677-0.910), 0.881 (95% CI = 0.78-0.94), and 0.847 (95% CI = 0.719-0.922), respectively (p < 0.001). The mDISCERN scores assessed by ChatGPT-4, Gemini, and the physician showed that the chatbots' responses about all PET/CT scans had moderate to good statistical agreement according to the inter-rater reliability correlation coefficient (p < 0.001). There were statistically significant differences in all readability scores (FKRGL, GFI, and FRE) between ChatGPT-4 and Gemini responses about PET/CT scans (p < 0.001). Gemini responses were shorter and had better readability scores than ChatGPT-4 responses. CONCLUSION: There was an acceptable level of agreement between raters for the mDISCERN score, indicating agreement on the overall reliability of the responses. However, the information provided by AI chatbots cannot be easily read by the public.
Aydinbelge-Dizdar N; Dizdar K
43
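The readability indices used above follow standard published formulas; the minimal Python sketch below shows them (the syllable counter is a crude vowel-group heuristic, not the validated counters used by dedicated readability software, so scores will differ slightly).

import re

def counts(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    def syll(w):  # crude heuristic: count vowel groups, minimum one
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    syllables = sum(syll(w) for w in words)
    complex_words = sum(1 for w in words if syll(w) >= 3)
    return len(words), sentences, syllables, complex_words

def readability(text):
    w, s, syl, cx = counts(text)
    fre  = 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)  # Flesch Reading Ease
    fkgl = 0.39 * (w / s) + 11.8 * (syl / w) - 15.59     # Flesch-Kincaid Grade Level
    gfi  = 0.4 * ((w / s) + 100 * cx / w)                # Gunning Fog Index
    return fre, fkgl, gfi

print(readability("A PET/CT scan combines two imaging techniques in one session."))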
37166289
ChatGPT and Lacrimal Drainage Disorders: Performance and Scope of Improvement.
2,023
Ophthalmic plastic and reconstructive surgery
PURPOSE: This study aimed to report the performance of the large language model ChatGPT (OpenAI, San Francisco, CA, U.S.A.) in the context of lacrimal drainage disorders. METHODS: A set of prompts was constructed from questions and statements spanning common and uncommon aspects of lacrimal drainage disorders. Care was taken to avoid constructing prompts that required significant or new knowledge beyond the year 2020. Each of the prompts was presented thrice to ChatGPT. The questions covered common disorders such as primary acquired nasolacrimal duct obstruction and congenital nasolacrimal duct obstruction and their cause and management. The prompts also tested ChatGPT on certain specifics, such as the history of dacryocystorhinostomy (DCR) surgery, lacrimal pump anatomy, and human canalicular surfactants. ChatGPT was also quizzed on controversial topics such as silicone intubation and the use of mitomycin C in DCR surgery. The responses of ChatGPT were carefully analyzed for evidence-based content, specificity of the response, presence of generic text, disclaimers, factual inaccuracies, and its ability to admit mistakes and challenge incorrect premises. Three lacrimal surgeons graded the responses into three categories: correct, partially correct, and factually incorrect. RESULTS: A total of 21 prompts were presented to ChatGPT. The responses were detailed and structured according to the prompt. In response to most questions, ChatGPT provided a generic disclaimer that it could not give medical advice or a professional opinion but then answered the question in detail. Specific prompts such as "how can I perform an external DCR?" were answered with a sequential listing of all the surgical steps. However, several factual inaccuracies were noted across many ChatGPT replies. Several responses on controversial topics such as silicone intubation and mitomycin C were generic and not precisely evidence-based. ChatGPT's responses to specific questions on canalicular surfactants and idiopathic canalicular inflammatory disease were poor. Presenting variable prompts on a single topic led to responses that repeated or recycled phrases. Citations were uniformly missing across all responses. Agreement among the three observers was high (95%) in grading the responses. The responses of ChatGPT were graded as correct for only 40% of the prompts, partially correct for 35%, and outright factually incorrect for 25%. Hence, some degree of factual inaccuracy was present in 60% of the responses if the partially correct responses are included. An exciting aspect was that ChatGPT was able to admit mistakes and correct them when presented with counterarguments. It was also capable of challenging incorrect prompts and premises. CONCLUSION: The performance of ChatGPT in the context of lacrimal drainage disorders can, at best, be termed average. However, the potential of this AI chatbot to influence medicine is enormous. There is a need for it to be specifically trained and retrained for individual medical subspecialties.
Ali MJ
32
38188345
The opportunities and challenges of adopting ChatGPT in medical research.
2,023
Frontiers in medicine
PURPOSE: This study aims to investigate the opportunities and challenges of adopting ChatGPT in medical research. METHODS: A qualitative approach with focus groups was adopted in this study. A total of 62 participants, including academic researchers from different streams of medicine and eHealth, took part in this study. RESULTS: Five themes with 16 sub-themes related to the opportunities, and five themes with 12 sub-themes related to the challenges, were identified. The major opportunities include improved data collection and analysis, improved communication and accessibility, and support for researchers across multiple streams of medical research. The major challenges identified were limitations of training data leading to bias, ethical issues, technical limitations, and limitations in data collection and analysis. CONCLUSION: Although ChatGPT can be used as a potential tool in medical research, further evidence is needed to generalize its impact on different research activities.
Alsadhan A; Al-Anezi F; Almohanna A; Alnaim N; Alzahrani H; Shinawi R; AboAlsamh H; Bakhshwain A; Alenazy M; Arif W; Alyousef S; Alhamidi S; Alghamdi A; AlShrayfi N; Rubaian NB; Alanzi T; AlSahli A; Alturki R; Herzallah N
0-1
37980605
ChatGPT for the management of obstructive sleep apnea: do we have a polar star?
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
PURPOSE: This study explores the potential of the Chat-Generative Pre-Trained Transformer (Chat-GPT), a Large Language Model (LLM), in assisting healthcare professionals in the diagnosis of obstructive sleep apnea (OSA). It aims to assess the agreement between Chat-GPT's responses and those of expert otolaryngologists, shedding light on the role of AI-generated content in medical decision-making. METHODS: A prospective, cross-sectional study was conducted involving 350 otolaryngologists from 25 countries who responded to a specialized OSA survey. Chat-GPT was tasked with answering the same survey questions. Responses were assessed by super-experts and statistically analyzed for agreement. RESULTS: The study revealed that Chat-GPT and the experts shared a common answer in over 75% of cases for individual questions. However, overall consensus was achieved for only four questions. Super-expert assessments showed a moderate level of agreement, with Chat-GPT scoring slightly lower than the experts. Statistically, Chat-GPT's responses differed significantly from the experts' opinions (p = 0.0009). Sub-analysis revealed areas for improvement for Chat-GPT, particularly on questions where super-experts rated its responses lower than the expert consensus. CONCLUSIONS: Chat-GPT demonstrates potential as a valuable resource for OSA diagnosis, especially where access to specialists is limited. The study emphasizes the importance of AI-human collaboration, with Chat-GPT serving as a complementary tool rather than a replacement for medical professionals. This research contributes to the discourse in otolaryngology and encourages further exploration of AI-driven healthcare applications. While Chat-GPT exhibits a commendable level of consensus with expert responses, ongoing refinements in AI-based healthcare tools hold significant promise for the future of medicine, addressing the underdiagnosis and undertreatment of OSA and improving patient outcomes.
Mira FA; Favier V; Dos Santos Sobreira Nunes H; de Castro JV; Carsuzaa F; Meccariello G; Vicini C; De Vito A; Lechien JR; Chiesa-Estomba C; Maniaci A; Iannella G; Rojas EP; Cornejo JB; Cammaroto G
32
39613920
Diagnostic performance of ChatGPT in tibial plateau fracture in knee X-ray.
2,025
Emergency radiology
PURPOSE: Tibial plateau fractures are relatively common and require accurate diagnosis. Chat Generative Pre-Trained Transformer (ChatGPT) has emerged as a tool to improve medical diagnosis. This study aims to investigate the accuracy of this tool in diagnosing tibial plateau fractures. METHODS: A secondary analysis was performed on 111 knee radiographs from emergency department patients, with 29 fractures confirmed by computed tomography (CT) imaging. The X-rays were reviewed by a board-certified emergency physician (EP) and radiologist and then analyzed by ChatGPT-4 and ChatGPT-4o. The diagnostic performances were compared using the area under the receiver operating characteristic curve (AUC). Sensitivity, specificity, and likelihood ratios were also calculated. RESULTS: The results indicated a sensitivity and negative likelihood ratio of 58.6% (95% CI: 38.9-76.4%) and 0.4 (95% CI: 0.3-0.7) for the EP, 72.4% (95% CI: 52.7-87.2%) and 0.3 (95% CI: 0.2-0.6) for the radiologist, 27.5% (95% CI: 12.7-47.2%) and 0.7 (95% CI: 0.6-0.9) for ChatGPT-4, and 55.1% (95% CI: 35.6-73.5%) and 0.4 (95% CI: 0.3-0.7) for ChatGPT-4o. The specificity and positive likelihood ratio were 85.3% (95% CI: 75.8-92.2%) and 4.0 (95% CI: 2.1-7.3) for the EP, 76.8% (95% CI: 66.2-85.4%) and 3.1 (95% CI: 1.9-4.9) for the radiologist, 95.1% (95% CI: 87.9-98.6%) and 5.6 (95% CI: 1.8-17.3) for ChatGPT-4, and 93.9% (95% CI: 86.3-97.9%) and 9.0 (95% CI: 3.6-22.4) for ChatGPT-4o. The AUC was 0.72 (95% CI: 0.6-0.8) for the EP, 0.75 (95% CI: 0.6-0.8) for the radiologist, 0.61 (95% CI: 0.4-0.7) for ChatGPT-4, and 0.74 (95% CI: 0.6-0.8) for ChatGPT-4o. The EP and radiologist significantly outperformed ChatGPT-4 (P value = 0.02 and 0.01, respectively), whereas there was no significant difference between the EP, ChatGPT-4o, and the radiologist. CONCLUSION: ChatGPT-4o matched the physicians' performance and also had the highest specificity. Similar to the physicians, the ChatGPT chatbots were not suitable for ruling out the fracture.
Mohammadi M; Parviz S; Parvaz P; Pirmoradi MM; Afzalimoghaddam M; Mirfazaelian H
0-1
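The diagnostic statistics above follow standard definitions; a minimal Python sketch (not the study's code) with hypothetical 2x2 counts chosen to be consistent with ChatGPT-4o's reported 55.1% sensitivity and 93.9% specificity (16/29 fractures detected, 77/82 normal radiographs called normal):

def diagnostics(tp, fn, tn, fp):
    sens = tp / (tp + fn)              # sensitivity
    spec = tn / (tn + fp)              # specificity
    lr_pos = sens / (1 - spec)         # positive likelihood ratio
    lr_neg = (1 - sens) / spec         # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts; yields roughly the LR+ of 9.0 and LR- of 0.4-0.5
# reported for ChatGPT-4o above:
print(diagnostics(tp=16, fn=13, tn=77, fp=5))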
40321662
Simulation-Based Evaluation of Large Language Models for Comorbidity Detection in Sleep Medicine - a Pilot Study on ChatGPT o1 Preview.
2,025
Nature and science of sleep
PURPOSE: Timely identification of comorbidities is critical in sleep medicine, where large language models (LLMs) like ChatGPT are currently emerging as transformative tools. Here, we investigate whether the novel LLM ChatGPT o1 preview can identify individual health risks or potentially existing comorbidities from the medical data of fictitious sleep medicine patients. METHODS: We conducted a simulation-based study using 30 fictitious patients, designed to represent realistic variations in demographic and clinical parameters commonly seen in sleep medicine. Each profile included personal data (eg, body mass index, smoking status, drinking habits), blood pressure, and routine blood test results, along with a predefined sleep medicine diagnosis. Each patient profile was evaluated independently by the LLM and a sleep medicine specialist (SMS) for identification of potential comorbidities or individual health risks. Their recommendations were compared for concordance across lifestyle changes and further medical measures. RESULTS: The LLM achieved high concordance with the SMS for lifestyle modification recommendations, including 100% concordance on smoking cessation (kappa = 1; p < 0.001), 97% on alcohol reduction (kappa = 0.92; p < 0.001) and endocrinological examination (kappa = 0.92; p < 0.001) or 93% on weight loss (kappa = 0.86; p < 0.001). However, it exhibited a tendency to over-recommend further medical measures (particularly 57% concordance for cardiological examination (kappa = 0.08; p = 0.28) and 33% for gastrointestinal examination (kappa = 0.1; p = 0.22)) compared to the SMS. CONCLUSION: Despite the obvious limitation of using fictitious data, the findings suggest that LLMs like ChatGPT have the potential to complement clinical workflows in sleep medicine by identifying individual health risks and comorbidities. As LLMs continue to evolve, their integration into healthcare could redefine the approach to patient evaluation and risk stratification. Future research should contextualize the findings within broader clinical applications ideally testing locally run LLMs meeting data protection requirements.
Seifen C; Bahr-Hamm K; Gouveris H; Pordzik J; Blaikie A; Matthias C; Kuhn S; Buhr CR
0-1
39313138
Artificial Intelligence Large Language Models Address Anterior Cruciate Ligament Reconstruction: Superior Clarity and Completeness by Gemini Compared With ChatGPT-4 in Response to American Academy of Orthopaedic Surgeons Clinical Practice Guidelines.
2,024
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
PURPOSE: To assess the ability of ChatGPT-4 and Gemini to generate accurate and relevant responses to the 2022 American Academy of Orthopaedic Surgeons (AAOS) Clinical Practice Guidelines (CPG) for anterior cruciate ligament reconstruction (ACLR). METHODS: Responses from ChatGPT-4 and Gemini to prompts derived from all 15 AAOS guidelines were evaluated by 7 fellowship-trained orthopaedic sports medicine surgeons using a structured questionnaire assessing 5 key characteristics on a scale from 1 to 5. The prompts were categorized into 3 areas: diagnosis and preoperative management, surgical timing and technique, and rehabilitation and prevention. Statistical analysis included mean scoring, standard deviation, and 2-sided t tests to compare the performance between the 2 large language models (LLMs). Scores were then evaluated for inter-rater reliability (IRR). RESULTS: Overall, both LLMs performed well, with mean scores >4 for the 5 key characteristics. Gemini demonstrated superior performance in overall clarity (4.848 +/- 0.36 vs 4.743 +/- 0.481, P = .034), but all other characteristics showed nonsignificant differences (P > .05). Gemini also demonstrated superior clarity in the surgical timing and technique (P = .038) and prevention and rehabilitation (P = .044) subcategories. Additionally, Gemini had superior completeness scores in the rehabilitation and prevention subcategory (P = .044), but no statistically significant differences were found among the other subcategories. The overall IRR was found to be 0.71 (moderate). CONCLUSIONS: Both Gemini and ChatGPT-4 demonstrate an overall good ability to generate accurate and relevant responses to question prompts based on the 2022 AAOS CPG for ACLR. However, Gemini demonstrated superior clarity in multiple domains in addition to superior completeness for questions pertaining to rehabilitation and prevention. CLINICAL RELEVANCE: The current study addresses a current gap in the LLM and ACLR literature by comparing the performance of ChatGPT-4 to Gemini, which is growing in popularity with more than 300 million individual uses in May 2024 alone. Moreover, the results demonstrated superior performance of Gemini in both clarity and completeness, which are critical elements of a tool being used by patients for educational purposes. Additionally, the current study uses question prompts based on the AAOS CPG, which may be used as a method of standardization for future investigations on the performance of LLM platforms. Thus, the results of this study may be of interest to both the readership of Arthroscopy and patients.
Quinn M; Milner JD; Schmitt P; Morrissey P; Lemme N; Marcaccio S; DeFroda S; Tabaddor R; Owens BD
32
38936557
ChatGPT-4 Performs Clinical Information Retrieval Tasks Using Consistently More Trustworthy Resources Than Does Google Search for Queries Concerning the Latarjet Procedure.
2,025
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
PURPOSE: To assess the ability of ChatGPT-4, an automated chatbot powered by artificial intelligence, to answer common patient questions concerning the Latarjet procedure for patients with anterior shoulder instability, and to compare this performance with Google Search Engine. METHODS: Using previously validated methods, a Google search was first performed using the query "Latarjet." Subsequently, the top 10 frequently asked questions (FAQs) and associated sources were extracted. ChatGPT-4 was then prompted to provide the top 10 FAQs and answers concerning the procedure. This process was repeated to identify additional FAQs requiring discrete numeric answers to allow for a comparison between ChatGPT-4 and Google. Discrete numeric answers were subsequently assessed for accuracy on the basis of the clinical judgment of 2 fellowship-trained sports medicine surgeons who were blinded to search platform. RESULTS: Mean (+/- standard deviation) accuracy for numeric-based answers was 2.9 +/- 0.9 for ChatGPT-4 versus 2.5 +/- 1.4 for Google (P = .65). ChatGPT-4 derived information for answers only from academic sources, which was significantly different from Google Search Engine (P = .003), which used only 30% academic sources alongside websites from individual surgeons (50%) and larger medical practices (20%). For general FAQs, 40% of FAQs were found to be identical when comparing ChatGPT-4 and Google Search Engine. In terms of sources used to answer these questions, ChatGPT-4 again used 100% academic resources, whereas Google Search Engine used 60% academic resources, 20% surgeon personal websites, and 20% medical practices (P = .087). CONCLUSIONS: ChatGPT-4 demonstrated the ability to provide accurate and reliable information about the Latarjet procedure in response to patient queries, using multiple academic sources in all cases. This was in contrast to Google Search Engine, which more frequently used single-surgeon and large medical practice websites. Despite differences in the resources accessed to perform information retrieval tasks, the clinical relevance and accuracy of information provided did not significantly differ between ChatGPT-4 and Google Search Engine. CLINICAL RELEVANCE: Commercially available large language models (LLMs), such as ChatGPT-4, can perform diverse information retrieval tasks on demand. An important medical information retrieval application for LLMs consists of the ability to provide comprehensive, relevant, and accurate information for various use cases, such as investigating a recently diagnosed medical condition or procedure. Understanding the performance and abilities of LLMs for such use cases has important implications for deployment within health care settings.
Oeding JF; Lu AZ; Mazzucco M; Fu MC; Taylor SA; Dines DM; Warren RF; Gulotta LV; Dines JS; Kunze KN
32
39242057
ChatGPT Can Offer At Least Satisfactory Responses to Common Patient Questions Regarding Hip Arthroscopy.
2,024
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
PURPOSE: To assess the accuracy of answers provided by ChatGPT 4.0 (an advanced language model developed by OpenAI) regarding 25 common patient questions about hip arthroscopy. METHODS: ChatGPT 4.0 was presented with 25 common patient questions regarding hip arthroscopy, with no follow-up questions and no repetition. Each response was evaluated independently by 2 board-certified orthopaedic sports medicine surgeons. Responses were rated, with scores of 1, 2, 3, and 4 corresponding to "excellent response not requiring clarification," "satisfactory requiring minimal clarification," "satisfactory requiring moderate clarification," and "unsatisfactory requiring substantial clarification," respectively. RESULTS: Twenty responses were rated "excellent" and 2 responses were rated "satisfactory requiring minimal clarification" by both reviewers. Responses to the questions "What kind of anesthesia is used for hip arthroscopy?" and "What is the average age for hip arthroscopy?" were rated as "satisfactory requiring minimal clarification" by both reviewers. None of the responses were rated as "satisfactory requiring moderate clarification" or "unsatisfactory" by either reviewer. CONCLUSIONS: ChatGPT 4.0 provides at least satisfactory responses to patient questions regarding hip arthroscopy. Under the supervision of an orthopaedic sports medicine surgeon, it could be used as a supplementary tool for patient education. CLINICAL RELEVANCE: This study compared the answers of ChatGPT to patients' questions regarding hip arthroscopy with the current literature. As ChatGPT has gained popularity among patients, the study aimed to determine whether the responses that patients get from this chatbot are compatible with the up-to-date literature.
Ozbek EA; Ertan MB; Kindan P; Karaca MO; Gursoy S; Chahla J
32
38117307
Performance of artificial intelligence chatbots in sleep medicine certification board exams: ChatGPT versus Google Bard.
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
PURPOSE: To conduct a comparative performance evaluation of GPT-3.5, GPT-4 and Google Bard on self-assessment questions at the level of the American Sleep Medicine Certification Board Exam. METHODS: A total of 301 text-based single-best-answer multiple choice questions with four answer options each, across 10 categories, were included in the study and transcribed as inputs for GPT-3.5, GPT-4 and Google Bard. The first output responses generated were selected and matched for answer accuracy against the gold-standard answer provided by the American Academy of Sleep Medicine for each question. A global score of 80% or above is required of human sleep medicine specialists to pass each exam category. RESULTS: GPT-4 achieved the pass mark of 80% or above in five of the 10 exam categories, including the Normal Sleep and Variants Self-Assessment Exam (2021), Circadian Rhythm Sleep-Wake Disorders Self-Assessment Exam (2021), Insomnia Self-Assessment Exam (2022), Parasomnias Self-Assessment Exam (2022) and the Sleep-Related Movements Self-Assessment Exam (2023). GPT-4 demonstrated superior performance in all exam categories and achieved a higher overall score of 68.1% when compared with both GPT-3.5 (46.8%) and Google Bard (45.5%), a difference that was statistically significant (p value < 0.001). There was no significant difference in overall score performance between GPT-3.5 and Google Bard. CONCLUSIONS: Otolaryngologists and sleep medicine physicians have a crucial role to play, through agile and robust research, in ensuring that the next generation of AI chatbots is built safely and responsibly.
Cheong RCT; Pang KP; Unadkat S; Mcneillis V; Williamson A; Joseph J; Randhawa P; Andrews P; Paleri V
32
40209833
Large Language Model Use Cases in Health Care Research Are Redundant and Often Lack Appropriate Methodological Conduct: A Scoping Review and Call for Improved Practices.
2,025
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
PURPOSE: To describe the current use cases of large language models (LLMs) in musculoskeletal medicine and to evaluate the methodologic conduct of these investigations in order to safeguard future implementation of LLMs in clinical research and identify key areas for methodological improvement. METHODS: A comprehensive literature search was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines using the PubMed, Cochrane Library, and Embase databases to identify eligible studies. Included studies evaluated the use of LLMs within any realm of orthopaedic surgery, regardless of application in a clinical or educational setting. The Methodological Index for Non-Randomized Studies criteria were used to assess the quality of all included studies. RESULTS: In total, 114 studies published from 2022 to 2024 were identified. Extensive use case redundancy was observed, and 5 main categories of clinical applications of LLMs were identified: 48 studies (42.1%) that assessed the ability to answer patient questions, 24 studies (21.1%) that evaluated the ability to diagnose and manage medical conditions, 21 studies (18.4%) that evaluated the ability to take orthopaedic examinations, 11 studies (9.6%) that analyzed the ability to develop or evaluate patient educational materials, and 10 studies (8.8%) concerning other applications, such as generating images, generating discharge documents and clinical letters, writing scientific abstracts and manuscripts, and enhancing billing efficiency. General orthopaedics was the focus of most included studies (n = 39, 34.2%), followed by orthopaedic sports medicine (n = 18, 15.8%) and adult reconstructive surgery (n = 17, 14.9%). ChatGPT 3.5 was the most common LLM used or evaluated (n = 79, 69.2%), followed by ChatGPT 4.0 (n = 47, 41.2%). Methodological inconsistency was prevalent among studies, with 36 (31.6%) failing to disclose the exact prompts used, 64 (56.1%) failing to disclose the exact outputs generated by the LLM, and only 7 (6.1%) evaluating different prompting strategies to elicit desired outputs. No studies investigated how race or gender influenced model outputs. CONCLUSIONS: Among studies evaluating LLM health care use cases, the scope of clinical investigations was limited, with most studies showing redundant use cases. Because of infrequently reported prompting strategies, incomplete model specifications, failure to disclose exact model outputs, and limited attempts to address bias, methodological inconsistency was concerningly extensive. CLINICAL RELEVANCE: A comprehensive understanding of current LLM use cases is critical to familiarize providers with the possibilities through which this technology may be used in clinical practice. As LLM health care applications transition from research to clinical integration, model transparency and trustworthiness are critical. The results of the current study suggest that guidance is urgently needed, with focus on promoting appropriate methodological conduct practices and novel use cases to advance the field.
Kunze KN; Gerhold C; Dave U; Abunnur N; Mamonov A; Nwachukwu BU; Verma NN; Chahla J
32
40015549
An alternative approach to code, store, and regenerate 3D data in dental medicine using open-source software: A scripting-based technique.
2,025
Journal of dentistry
PURPOSE: To develop a scripting-based technique for managing three-dimensional (3D) dental data and evaluate the regenerated standard tessellation language (STL) data in terms of file size, accuracy (trueness and precision), and processing time. MATERIALS AND METHODS: Ten STL dental and maxillofacial models were obtained from various imaging technologies, including intraoral scanners, computer-aided design (CAD) software, and cone-beam computed tomography (CBCT), and saved as STL files. ChatGPT was used to generate Python scripts in Blender for mesh simplification and data compression, which were then saved as .py files. The models were regenerated from these scripts in Blender, and their accuracy was assessed using GOM Inspect software, comparing trueness and precision. Statistical analysis, including Kruskal-Wallis and Mann-Whitney tests, was conducted to evaluate differences in file sizes between the original, Python-generated, and regenerated STL files, with statistical analyses performed at a significance level of alpha = 0.05. RESULTS: The scripting-based technique was successfully utilized in ChatGPT to generate Python script code for accessing comprehensive data on STL models, utilizing Blender's scripting functionality. This approach enabled the generation, regeneration, and visualization of STL models, resulting in significantly smaller file sizes for both the Python script and regenerated STL files compared with the original STL files (p < 0.001). No significant differences in trueness were observed, with deviations ranging from 0.0 µm to 6.8 µm, and all regenerated STL models demonstrated perfect precision. Additionally, a proportional relationship was noted between the original STL file sizes and processing times. CONCLUSIONS: The scripting-based approach proved to be effective in coding, storing, and regenerating STL dental data with reduced file sizes and efficient processing times without compromising accuracy. CLINICAL SIGNIFICANCE: Various STL dental models of patients can be coded, stored, and regenerated for reuse within efficient processing times without affecting accuracy.
Elbashti ME; Paz-Cortes MM; Giovannini G; Acero-Sanz J; Abou-Ayash S; Cakmak G; Molinero-Mourelle P
32
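The study's ChatGPT-generated Blender scripts are not reproduced in the abstract, so the following is only a plausible minimal sketch of the pipeline it describes (STL import, Decimate-based mesh simplification, STL re-export), written against the Blender 3.x Python API; the file paths and decimation ratio are hypothetical, and Blender 4.x renamed the STL operators. It must be run inside Blender, e.g. "blender --background --python simplify_stl.py".

import bpy

# Start from an empty scene
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Import the STL model (hypothetical input path; Blender <= 3.x operator)
bpy.ops.import_mesh.stl(filepath="model.stl")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Simplify the mesh with a Decimate modifier (keep ~50% of the faces)
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.5
bpy.ops.object.modifier_apply(modifier=mod.name)

# Export the simplified model to a new, smaller STL file
bpy.ops.export_mesh.stl(filepath="model_simplified.stl", use_selection=True)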
40357425
Performance analysis of an emergency triage system in ophthalmology using a customized chatbot.
2,025
Digital health
PURPOSE: To evaluate the performance of a custom ChatGPT-based chatbot in triaging ophthalmic emergencies compared with trained ophthalmologists. METHODS: One hundred hypothetical ophthalmic cases were created based on actual patient data from an ophthalmic emergency department, including details such as age, symptoms and medical history. Three experienced ophthalmologists independently graded these cases using a four-tier severity scale, ranging from Grade 1 (immediate care required) to Grade 4 (non-urgent care). A customized version of ChatGPT was developed to perform the same grading task. Inter-rater agreement was measured between the chatbot and the ophthalmologists, as well as among all human graders. RESULTS: The chatbot demonstrated substantial agreement with the three ophthalmologists, achieving Cohen's kappa scores of 0.737, 0.749, and 0.751, respectively. The highest agreement was between ophthalmologist 3 and the chatbot (kappa = 0.751). Fleiss' kappa for overall agreement among all graders was 0.79, indicating substantial agreement. The Kruskal-Wallis test showed no statistically significant differences in the distribution of grades assigned by the chatbot and the ophthalmologists (p = 0.967). Bootstrap analysis revealed no significant difference in kappa values between the chatbot and human graders (p = 0.572, 95% CI -0.163 to 0.072). CONCLUSIONS: The study demonstrates that a customized chatbot can perform ophthalmic triage with a level of accuracy comparable to that of trained ophthalmologists. This suggests that AI-assisted triage could be a valuable tool in emergency departments, potentially enhancing clinical workflows and reducing waiting times while maintaining high standards of patient care.
Schumacher I; Ferro Desideri L; Buhler VMM; Sagurski N; Subhi Y; Bhardwaj G; Roth J; Anguita R
43
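The agreement statistics in this record (Cohen's kappa between rater pairs, Fleiss' kappa across all graders) can be reproduced in Python roughly as follows; the grade vectors below are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# hypothetical severity grades (1-4) for ten cases
chatbot = np.array([1, 2, 2, 3, 4, 1, 3, 2, 4, 1])
oph_1   = np.array([1, 2, 3, 3, 4, 1, 3, 2, 4, 2])
oph_2   = np.array([1, 2, 2, 3, 4, 2, 3, 2, 4, 1])
oph_3   = np.array([1, 1, 2, 3, 4, 1, 3, 2, 4, 1])

print(cohen_kappa_score(chatbot, oph_1))   # pairwise chatbot-vs-rater agreement

# Fleiss' kappa over all four graders: cases x raters -> cases x categories
ratings = np.column_stack([chatbot, oph_1, oph_2, oph_3])
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table))
```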
37536678
ChatGPT for Sample-Size Calculation in Sports Medicine and Exercise Sciences: A Cautionary Note.
2,023
International journal of sports physiology and performance
PURPOSE: To investigate the accuracy of ChatGPT (Chat Generative Pretrained Transformer), a large language model, in calculating sample size for sport-sciences and sports-medicine research studies. METHODS: We conducted an analysis of 4 published papers (ie, examples 1-4) encompassing various study designs and approaches for calculating sample size in 3 sport-science and -medicine journals, including 3 randomized controlled trials and 1 survey paper. We provided ChatGPT with all necessary data such as mean, percentage, SD, normal deviates (Z_(alpha/2) and Z_(1-beta)), and study design. The prompt from 1 example was subsequently reused to gain insights into the reproducibility of the ChatGPT response. RESULTS: ChatGPT correctly calculated the sample size for 1 randomized controlled trial but failed in the remaining 3 examples, including the incorrect identification of the formula in one example of a survey paper. After interaction with ChatGPT, the correct sample size was obtained for the survey paper. Intriguingly, when the prompt from Example 3 was reused, ChatGPT provided a completely different sample size than its initial response. CONCLUSIONS: While the use of artificial-intelligence tools holds great promise, it should be noted that it might lead to errors and inconsistencies in sample-size calculations even when the tool is fed with the necessary correct information. As artificial-intelligence technology continues to advance and learn from human feedback, there is hope for improvement in sample-size calculation and other research tasks. However, it is important for scientists to exercise caution in utilizing these tools. Future studies should assess more advanced/powerful versions of this tool (ie, ChatGPT4).
Methnani J; Latiri I; Dergaa I; Chamari K; Ben Saad H
32
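For reference, the standard two-arm comparison-of-means formula that such a sample-size check rests on is n per group = 2(Z_(alpha/2) + Z_(1-beta))² × SD² / delta². A small Python sketch with illustrative numbers, not taken from the four audited papers:

```python
import math
from scipy.stats import norm

def n_per_group(sd: float, delta: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Two-arm RCT comparing means: n = 2 * (Z_a/2 + Z_1-b)^2 * sd^2 / delta^2."""
    z_a = norm.ppf(1 - alpha / 2)   # Z_(alpha/2), ~1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # Z_(1-beta),  ~0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

print(n_per_group(sd=10, delta=5))  # illustrative inputs -> 63 per group
```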
37553552
Exploring the potential of ChatGPT as a supplementary tool for providing orthopaedic information.
2,023
Knee surgery, sports traumatology, arthroscopy : official journal of the ESSKA
PURPOSE: To investigate the potential use of large language models (LLMs) in orthopaedics by presenting queries pertinent to anterior cruciate ligament (ACL) surgery to a generative pre-trained transformer (ChatGPT, specifically its GPT-4 model of March 14th, 2023). Additionally, this study aimed to evaluate the depth of the LLM's knowledge and investigate its adaptability to different user groups. It was hypothesized that ChatGPT would be able to adapt to different target groups due to its strong language understanding and processing capabilities. METHODS: ChatGPT was presented with 20 questions and responses were requested for two distinct target audiences: patients and non-orthopaedic medical doctors. Two board-certified orthopaedic sports medicine surgeons and two expert orthopaedic sports medicine surgeons independently evaluated the responses generated by ChatGPT. Mean correctness, completeness, and adaptability to the target audiences (patients and non-orthopaedic medical doctors) were determined. A three-point response scale facilitated nuanced assessment. RESULTS: ChatGPT exhibited fair accuracy, with average correctness scores of 1.69 and 1.66 (on a scale from 0, incorrect, 1, partially correct, to 2, correct) for patients and medical doctors, respectively. Three of the 20 questions (15.0%) were deemed incorrect by any of the four orthopaedic sports medicine surgeon assessors. Moreover, overall completeness was calculated to be 1.51 and 1.64 for patients and medical doctors, respectively, while overall adaptiveness was determined to be 1.75 and 1.73 for patients and doctors, respectively. CONCLUSION: Overall, ChatGPT was successful in generating correct responses in approximately 65% of the cases related to ACL surgery. The findings of this study imply that LLMs offer potential as a supplementary tool for acquiring orthopaedic knowledge. However, although ChatGPT can provide guidance and effectively adapt to diverse target audiences, it cannot supplant the expertise of orthopaedic sports medicine surgeons in diagnostic and treatment planning endeavours due to its limited understanding of orthopaedic domains and its potential for erroneous responses. LEVEL OF EVIDENCE: V.
Kaarre J; Feldt R; Keeling LE; Dadoo S; Zsidai B; Hughes JD; Samuelsson K; Musahl V
32
37917165
Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard.
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
PURPOSE: To perform the first head-to-head comparative evaluation of patient education material for obstructive sleep apnoea generated by two artificial intelligence chatbots, ChatGPT and its primary rival Google Bard. METHODS: Fifty frequently asked questions on obstructive sleep apnoea in English were extracted from the patient information webpages of four major sleep organizations and categorized as input prompts. ChatGPT and Google Bard responses were selected and independently rated using the Patient Education Materials Assessment Tool-Printable (PEMAT-P) Auto-Scoring Form by two otolaryngologists, with a Fellowship of the Royal College of Surgeons (FRCS) and a special interest in sleep medicine and surgery. Responses were subjectively screened for any incorrect or dangerous information as a secondary outcome. The Flesch-Kincaid Calculator was used to evaluate the readability of responses for both ChatGPT and Google Bard. RESULTS: A total of 46 questions were curated and categorized into three domains: condition (n = 14), investigation (n = 9) and treatment (n = 23). Understandability scores for ChatGPT versus Google Bard on the various domains were as follows: condition 90.86% vs. 76.32% (p < 0.001); investigation 89.94% vs. 71.67% (p < 0.001); treatment 90.78% vs. 73.74% (p < 0.001). Actionability scores for ChatGPT versus Google Bard on the various domains were as follows: condition 77.14% vs. 51.43% (p < 0.001); investigation 72.22% vs. 54.44% (p = 0.05); treatment 73.04% vs. 54.78% (p = 0.002). The mean Flesch-Kincaid Grade Level for ChatGPT was 9.0 and Google Bard was 5.9. No incorrect or dangerous information was identified in any of the generated responses from both ChatGPT and Google Bard. CONCLUSION: Evaluation of ChatGPT and Google Bard patient education material for OSA indicates the former to offer superior information across several domains.
Cheong RCT; Unadkat S; Mcneillis V; Williamson A; Joseph J; Randhawa P; Andrews P; Paleri V
0-1
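The Flesch-Kincaid grade level reported in this record follows the formula FKGL = 0.39 × (words/sentence) + 11.8 × (syllables/word) − 15.59. A rough Python sketch follows; the syllable counter is a naive vowel-group heuristic, and packages such as textstat implement more careful versions.

```python
import re

def syllables(word: str) -> int:
    # naive heuristic: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

print(round(fk_grade("Sleep apnoea interrupts breathing during sleep."), 1))
```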
38925234
The Large Language Model ChatGPT-4 Exhibits Excellent Triage Capabilities and Diagnostic Performance for Patients Presenting With Various Causes of Knee Pain.
2,025
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
PURPOSE: To provide a proof-of-concept analysis of the appropriateness and performance of ChatGPT-4 to triage, synthesize differential diagnoses, and generate treatment plans concerning common presentations of knee pain. METHODS: Twenty knee complaints warranting triage and expanded scenarios were input into ChatGPT-4, with memory cleared prior to each new input to mitigate bias. For the 10 triage complaints, ChatGPT-4 was asked to generate a differential diagnosis that was graded for accuracy and suitability in comparison to a differential created by 2 orthopaedic sports medicine physicians. For the 10 clinical scenarios, ChatGPT-4 was prompted to provide treatment guidance for the patient, which was again graded. To test the higher-order capabilities of ChatGPT-4, further inquiry into these specific management recommendations was performed and graded. RESULTS: All ChatGPT-4 diagnoses were deemed appropriate within the spectrum of potential pathologies on a differential. The top diagnosis on the differential was identical between surgeons and ChatGPT-4 for 70% of scenarios, and the top diagnosis provided by the surgeon appeared as either the first or second diagnosis in 90% of scenarios. Overall, 16 of 30 diagnoses (53.3%) in the differential were identical. When provided with 10 expanded vignettes with a single diagnosis, the accuracy of ChatGPT-4 increased to 100%, with the suitability of management graded as appropriate in 90% of cases. Specific information pertaining to conservative management, surgical approaches, and related treatments was appropriate and accurate in 100% of cases. CONCLUSIONS: ChatGPT-4 provided clinically reasonable diagnoses to triage patient complaints of knee pain due to various underlying conditions that were generally consistent with differentials provided by sports medicine physicians. Diagnostic performance was enhanced when providing additional information, allowing ChatGPT-4 to reach high predictive accuracy for recommendations concerning management and treatment options. However, ChatGPT-4 may show clinically important error rates for diagnosis depending on prompting strategy and information provided; therefore, further refinements are necessary prior to implementation into clinical workflows. CLINICAL RELEVANCE: Although ChatGPT-4 is increasingly being used by patients for health information, the potential for ChatGPT-4 to serve as a clinical support tool is unclear. In this study, we found that ChatGPT-4 was frequently able to diagnose and triage knee complaints appropriately as rated by sports medicine surgeons, suggesting that it may eventually be a useful clinical support tool.
Kunze KN; Varady NH; Mazzucco M; Lu AZ; Chahla J; Martin RK; Ranawat AS; Pearle AD; Williams RJ 3rd
32
39521391
Custom Large Language Models Improve Accuracy: Comparing Retrieval Augmented Generation and Artificial Intelligence Agents to Noncustom Models for Evidence-Based Medicine.
2,025
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
PURPOSE: To show the value of custom methods, namely Retrieval Augmented Generation (RAG)-based Large Language Models (LLMs) and Agentic Augmentation, over standard LLMs in delivering accurate information using an anterior cruciate ligament (ACL) injury case. METHODS: A set of 100 questions and answers based on the 2022 AAOS ACL guidelines was curated. Closed-source (OpenAI GPT-4/GPT-3.5 and Anthropic's Claude 3) and open-source models (Llama 3 8b/70b and Mistral 8x7b) were asked questions in base form and again with AAOS guidelines embedded into a RAG system. The top-performing models were further augmented with artificial intelligence (AI) agents and reevaluated. Two fellowship-trained surgeons blindly evaluated the accuracy of the responses of each cohort. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Metric for Evaluation of Translation with Explicit Ordering (METEOR) scores were calculated to assess semantic similarity in the response. RESULTS: All noncustom LLM models started below 60% accuracy. Applying RAG improved the accuracy of every model by an average 39.7%. The highest performing model with just RAG was Meta's open-source Llama 3 70b (94%). The highest performing model with RAG and AI agents was OpenAI's GPT-4 (95%). CONCLUSIONS: RAG improved accuracy by an average of 39.7%, with the highest accuracy rate of 94% in the Meta Llama 3 70b. Incorporating AI agents into a previously RAG-augmented LLM improved ChatGPT4 accuracy rate to 95%. Thus, Agentic and RAG-augmented LLMs can be accurate liaisons of information, supporting our hypothesis. CLINICAL RELEVANCE: Despite literature surrounding the use of LLM in medicine, there has been considerable and appropriate skepticism given the variably accurate response rates. This study establishes the groundwork to identify whether custom modifications to LLMs using RAG and agentic augmentation can better deliver accurate information in orthopaedic care. With this knowledge, online medical information commonly sought in popular LLMs, such as ChatGPT, can be standardized and provide relevant online medical information to better support shared decision making between surgeon and patient.
Woo JJ; Yang AJ; Olsen RJ; Hasan SS; Nawabi DH; Nwachukwu BU; Williams RJ 3rd; Ramkumar PN
32
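A bare-bones sketch of the RAG pattern this record evaluates: chunk the guideline text, embed the chunks, retrieve the nearest ones for a question, and prepend them to the model prompt. The toy hashed bag-of-words embed() and the generate() parameter stand in for whatever embedding and LLM APIs a real system would use; none of this is the authors' implementation, and the guideline snippets are invented.

```python
import numpy as np

def embed(texts):
    # toy stand-in: hashed bag-of-words vectors; a real system would call
    # an embedding API here
    vecs = np.zeros((len(texts), 256))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            vecs[i, hash(tok) % 256] += 1.0
    return vecs

def top_k_chunks(question, chunks, chunk_vecs, k=3):
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * (np.linalg.norm(q) + 1e-9))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def rag_answer(question, chunks, chunk_vecs, generate):
    context = "\n\n".join(top_k_chunks(question, chunks, chunk_vecs))
    prompt = f"Answer using only this guideline excerpt:\n{context}\n\nQ: {question}"
    return generate(prompt)   # generate() is whatever LLM call the system uses

chunks = ["Guideline: autograft is suggested over allograft for young athletes.",
          "Guideline: delayed return to sport reduces graft failure risk."]
print(top_k_chunks("When should an athlete return to sport?", chunks, embed(chunks), k=1))
```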
38686158
Ethical and Professional Decision-Making Capabilities of Artificial Intelligence Chatbots: Evaluating ChatGPT's Professional Competencies in Medicine.
2,024
Medical science educator
PURPOSE: We examined the performance of artificial intelligence chatbots on the PREview Practice Exam, an online situational judgment test for professionalism and ethics. METHODS: We used validated methodologies to calculate scores, and used descriptive statistics, χ² tests, and Fisher's exact tests to compare scores by model and competency. RESULTS: GPT-3.5 and GPT-4 scored 6/9 (76th percentile) and 7/9 (92nd percentile), respectively, higher than the medical school applicant average of 5/9 (56th percentile). Both models answered 95+% of questions correctly. CONCLUSIONS: Chatbots outperformed the average applicant on PREview, suggesting their potential for healthcare training and decision-making and highlighting risks of online assessment delivery.
Lin JC; Kurapati SS; Younessi DN; Scott IU; Gong DA
21
38365990
A cross-sectional comparative study: ChatGPT 3.5 versus diverse levels of medical experts in the diagnosis of ENT diseases.
2,024
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
PURPOSE: With recent advances in artificial intelligence (AI), it has become crucial to thoroughly evaluate its applicability in healthcare. This study aimed to assess the accuracy of ChatGPT in diagnosing ear, nose, and throat (ENT) pathology, and comparing its performance to that of medical experts. METHODS: We conducted a cross-sectional comparative study where 32 ENT cases were presented to ChatGPT 3.5, ENT physicians, ENT residents, family medicine (FM) specialists, second-year medical students (Med2), and third-year medical students (Med3). Each participant provided three differential diagnoses. The study analyzed diagnostic accuracy rates and inter-rater agreement within and between participant groups and ChatGPT. RESULTS: The accuracy rate of ChatGPT was 70.8%, being not significantly different from ENT physicians or ENT residents. However, a significant difference in correctness rate existed between ChatGPT and FM specialists (49.8%, p < 0.001), and between ChatGPT and medical students (Med2 47.5%, p < 0.001; Med3 47%, p < 0.001). Inter-rater agreement for the differential diagnosis between ChatGPT and each participant group was either poor or fair. In 68.75% of cases, ChatGPT failed to mention the most critical diagnosis. CONCLUSIONS: ChatGPT demonstrated accuracy comparable to that of ENT physicians and ENT residents in diagnosing ENT pathology, outperforming FM specialists, Med2 and Med3. However, it showed limitations in identifying the most critical diagnosis.
Makhoul M; Melkane AE; Khoury PE; Hadi CE; Matar N
32
38760650
Evaluation of the Impact of ChatGPT on the Selection of Surgical Technique in Bariatric Surgery.
2,025
Obesity surgery
PURPOSE: With the growing interest in artificial intelligence (AI) applications in medicine, this study explores ChatGPT's potential to influence surgical technique selection in metabolic and bariatric surgery (MBS), contrasting AI recommendations with established clinical guidelines and expert consensus. MATERIALS AND METHODS: Conducting a single-center retrospective analysis, the study involved 161 patients who underwent MBS between January 2022 and December 2023. ChatGPT4 was used to analyze patient data, including demographics, pathological history, and BMI, to recommend the most suitable surgical technique. These AI recommendations were then compared with the hospital's algorithm-based decisions. RESULTS: ChatGPT recommended Roux-en-Y gastric bypass in over half of the cases. However, a significant difference was observed between AI suggestions and actual surgical techniques applied, with only a 34.16% match rate. Further analysis revealed no significant correlation between ChatGPT recommendations and the established surgical algorithm. CONCLUSION: Despite ChatGPT's ability to process and analyze large datasets, its recommendations for MBS techniques do not align closely with those determined by expert surgical teams using a high success rate algorithm. Consequently, the study concludes that ChatGPT4 should not replace expert consultation in selecting MBS techniques.
Lopez-Gonzalez R; Sanchez-Cordero S; Pujol-Gebelli J; Castellvi J
32
38527823
Quality, Accuracy, and Bias in ChatGPT-Based Summarization of Medical Abstracts.
2,024
Annals of family medicine
PURPOSE: Worldwide clinical knowledge is expanding rapidly, but physicians have sparse time to review scientific literature. Large language models (eg, Chat Generative Pretrained Transformer [ChatGPT]) might help summarize and prioritize research articles to review. However, large language models sometimes "hallucinate" incorrect information. METHODS: We evaluated ChatGPT's ability to summarize 140 peer-reviewed abstracts from 14 journals. Physicians rated the quality, accuracy, and bias of the ChatGPT summaries. We also compared human ratings of relevance to various areas of medicine to ChatGPT relevance ratings. RESULTS: ChatGPT produced summaries that were 70% shorter (mean abstract length of 2,438 characters decreased to 739 characters). Summaries were nevertheless rated as high quality (median score 90, interquartile range [IQR] 87.0-92.5; scale 0-100), high accuracy (median 92.5, IQR 89.0-95.0), and low bias (median 0, IQR 0-7.5). Serious inaccuracies and hallucinations were uncommon. Classification of the relevance of entire journals to various fields of medicine closely mirrored physician classifications (nonlinear standard error of the regression [SER] 8.6 on a scale of 0-100). However, relevance classification for individual articles was much more modest (SER 22.3). CONCLUSIONS: Summaries generated by ChatGPT were 70% shorter than mean abstract length and were characterized by high quality, high accuracy, and low bias. Conversely, ChatGPT had modest ability to classify the relevance of articles to medical specialties. We suggest that ChatGPT can help family physicians accelerate review of the scientific literature and have developed software (pyJournalWatch) to support this application. Life-critical medical decisions should remain based on full, critical, and thoughtful evaluation of the full text of research articles in context with clinical guidelines.
Hake J; Crowley M; Coy A; Shanks D; Eoff A; Kirmer-Voss K; Dhanda G; Parente DJ
10
40064613
Unveiling the power of R: a comprehensive perspective for laboratory medicine data analysis.
2,025
Clinical chemistry and laboratory medicine
R language has gained traction in laboratory medicine for its statistical power and dynamic tools like RMarkdown and RShiny. However, there is limited literature summarizing R packages and functions tailored for laboratory medicine, making it difficult for clinical laboratory workers to access these tools. Additionally, varying algorithms across R packages can lead to inconsistencies in published reports. This review addresses these challenges by providing an overview of R's evolution and its key features, followed by a summary of statistical methods implemented in R, including platform comparisons, precision verification, factor analysis, and the establishment of reference intervals (RIs). We also highlight the development and validation of predictive models using techniques such as linear and logistic regression, decision trees, random forests, support vector machines, naive Bayes, K-Nearest Neighbors, k-means clustering, and backpropagation neural networks - all implemented in R. To ensure transparency and reproducibility in research, a checklist is provided for authors publishing papers using R for data analysis in laboratory medicine. In the final section, the potential of R in big data analytics is explored, focusing on standardized reporting through RMarkdown and the creation of user-friendly data visualization platforms with RShiny. Moreover, the integration of large language models (LLMs), such as ChatGPT, is discussed for their benefits in enhancing R programming, automating reporting, and offering insights from data analysis, thus improving the efficiency and accuracy of laboratory data analysis.
Ma C; Qiu L
10
37758604
Radiology Reading Room for the Future: Harnessing the Power of Large Language Models Like ChatGPT.
2,023
Current problems in diagnostic radiology
Radiology has usually been the field of medicine at the forefront of technological advances, often the first to embrace them wholeheartedly. From digitization to cloud-based architectures, radiology has led the way in adopting the latest advances. With the advent of large language models (LLMs), and especially the unprecedented explosion of the freely available ChatGPT, the time is ripe for radiology and radiologists to find novel ways to use the technology to improve their workflow. We believe these LLMs have a key role in the radiology reading room, not only to expedite processes and simplify mundane and archaic tasks, but also to expand the knowledge base of radiologists and radiology trainees at a far faster pace. In this article, we discuss some of the ways we believe ChatGPT and similar models can be harnessed in the reading room.
Tippareddy C; Jiang S; Bera K; Ramaiya N
10
38160089
Could ChatGPT Pass the UK Radiology Fellowship Examinations?
2,024
Academic radiology
RATIONALE AND OBJECTIVES: Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence (AI) tool which utilises machine learning to generate original text resembling human language. AI models have recently demonstrated remarkable ability at analysing and solving problems, including passing professional examinations. We investigate the performance of ChatGPT on some of the UK radiology fellowship equivalent examination questions. METHODS: ChatGPT was asked to answer questions from question banks resembling the Fellowship of the Royal College of Radiologists (FRCR) examination. The entire physics part 1 question bank (203 5-part true/false questions) was answered by the GPT-4 model and answers recorded. 240 single best answer questions (SBAs) (representing the true length of the FRCR 2A examination) were answered by both GPT-3.5 and GPT-4 models. RESULTS: ChatGPT 4 answered 74.8% of part 1 true/false statements correctly. The spring 2023 passing mark of the part 1 examination was 75.5% and ChatGPT thus narrowly failed. In the 2A examination, ChatGPT 3.5 answered 50.8% SBAs correctly, while GPT-4 answered 74.2% correctly. The winter 2022 2A pass mark was 63.3% and thus GPT-4 clearly passed. CONCLUSION: AI models such as ChatGPT are able to answer the majority of questions in an FRCR style examination. It is reasonable to assume that further developments in AI will be more likely to succeed in comprehending and solving questions related to medicine, specifically clinical radiology. ADVANCES IN KNOWLEDGE: Our findings outline the unprecedented capabilities of AI, adding to the current relatively small body of literature on the subject, which in turn can play a role in medical training, evaluation and practice. This can undoubtedly have implications for radiology.
Ariyaratne S; Jenko N; Mark Davies A; Iyengar KP; Botchu R
21
37492829
The Emerging Role of Generative Artificial Intelligence in Medical Education, Research, and Practice.
2,023
Cureus
Recent breakthroughs in generative artificial intelligence (GAI) and the emergence of transformer-based large language models such as Chat Generative Pre-trained Transformer (ChatGPT) have the potential to transform healthcare education, research, and clinical practice. This article examines the current trends in using GAI models in medicine, outlining their strengths and limitations. It is imperative to develop further consensus-based guidelines to govern the appropriate use of GAI, not only in medical education but also in research, scholarship, and clinical practice.
Shoja MM; Van de Ridder JMM; Rajput V
10
37407364
Pearls and pitfalls of ChatGPT in medical oncology.
2,023
Trends in cancer
Recently, ChatGPT has drawn attention to the potential uses of artificial intelligence (AI) in academia. Here, we discuss how ChatGPT can be of value to medicine and medical oncology and the potential pitfalls that may be encountered.
Blum J; Menta AK; Zhao X; Yang VB; Gouda MA; Subbiah V
10
40055532
Red teaming ChatGPT in medicine to yield real-world insights on model behavior.
2,025
NPJ digital medicine
Red teaming, the practice of adversarially exposing unexpected or undesired model behaviors, is critical towards improving equity and accuracy of large language models, but non-model creator-affiliated red teaming is scant in healthcare. We convened teams of clinicians, medical and engineering students, and technical professionals (80 participants total) to stress-test models with real-world clinical cases and categorize inappropriate responses along axes of safety, privacy, hallucinations/accuracy, and bias. Six medically-trained reviewers re-analyzed prompt-response pairs and added qualitative annotations. Of 376 unique prompts (1504 responses), 20.1% were inappropriate (GPT-3.5: 25.8%; GPT-4.0: 16%; GPT-4.0 with Internet: 17.8%). Subsequently, we show the utility of our benchmark by testing GPT-4o, a model released after our event (20.4% inappropriate). 21.5% of responses appropriate with GPT-3.5 were inappropriate in updated models. We share insights for constructing red teaming prompts, and present our benchmark for iterative model assessments.
Chang CT; Farah H; Gui H; Rezaei SJ; Bou-Khalil C; Park YJ; Swaminathan A; Omiye JA; Kolluri A; Chaurasia A; Lozano A; Heiman A; Jia AS; Kaushal A; Jia A; Iacovelli A; Yang A; Salles A; Singhal A; Narasimhan B; Belai B; Jacobson BH; Li B; Poe CH; Sanghera C; Zheng C; Messer C; Kettud DV; Pandya D; Kaur D; Hla D; Dindoust D; Moehrle D; Ross D; Chou E; Lin E; Haredasht FN; Cheng G; Gao I; Chang J; Silberg J; Fries JA; Xu J; Jamison J; Tamaresis JS; Chen JH; Lazaro J; Banda JM; Lee JJ; Matthys KE; Steffner KR; Tian L; Pegolotti L; Srinivasan M; Manimaran M; Schwede M; Zhang M; Nguyen M; Fathzadeh M; Zhao Q; Bajra R; Khurana R; Azam R; Bartlett R; Truong ST; Fleming SL; Raj S; Behr S; Onyeka S; Muppidi S; Bandali T; Eulalio TY; Chen W; Zhou X; Ding Y; Cui Y; Tan Y; Liu Y; Shah N; Daneshjou R
10
38570021
A framework enabling LLMs into regulatory environment for transparency and trustworthiness and its application to drug labeling document.
2,024
Regulatory toxicology and pharmacology : RTP
Regulatory agencies consistently deal with extensive document reviews, ranging from product submissions to both internal and external communications. Large Language Models (LLMs) like ChatGPT can be invaluable tools for these tasks; however, they present several challenges, particularly around proprietary information, combining customized functions with specific review needs, and the transparency and explainability of the model's output. Hence, a localized and customized solution is imperative. To tackle these challenges, we formulated a framework named askFDALabel for FDA drug labeling documents, a crucial resource in the FDA drug review process. AskFDALabel operates within a secure IT environment and comprises two key modules: a semantic search and a Q&A/text-generation module. Module S is built on word embeddings to enable comprehensive semantic queries within labeling documents. Module T utilizes a tuned LLM to generate responses based on references from Module S. As a result, our framework enabled small LLMs to perform comparably to ChatGPT as a computationally inexpensive solution for regulatory applications. To conclude, through askFDALabel, we have showcased a pathway that harnesses LLMs to support agency operations within a secure environment, offering tailored functions for the needs of regulatory research.
Wu L; Xu J; Thakkar S; Gray M; Qu Y; Li D; Tong W
10
37434733
ChatGPT's potential role in non-English-speaking outpatient clinic settings.
2,023
Digital health
Researchers recently utilized ChatGPT as a tool for composing clinic letters, highlighting its ability to generate accurate and empathetic communications. Here we demonstrated the potential application of ChatGPT as a medical assistant in Mandarin Chinese-speaking outpatient clinics, aiming to improve patient satisfaction in high-patient volume settings. ChatGPT achieved an average score of 72.4% in the Chinese Medical Licensing Examination's Clinical Knowledge section, ranking within the top 20th percentile. It also demonstrated its potential for clinical communication in non-English speaking environments. Our study suggests that ChatGPT could serve as an interface between physicians and patients in Chinese-speaking outpatient settings, possibly extending to other languages. However, further optimization is required, including training on medical-specific datasets, rigorous testing, privacy compliance, integration with existing systems, user-friendly interfaces, and the development of guidelines for medical professionals. Controlled clinical trials and regulatory approval are necessary before widespread implementation. As chatbots' integration into medical practice becomes more feasible, rigorous early investigations and pilot studies can help mitigate potential risks.
Zhu Z; Ying Y; Zhu J; Wu H
0-1
37802491
ChatGPT in medical research: challenging time ahead.
2,023
The Medico-legal journal
Since its launch, ChatGPT, an artificial intelligence-powered language model tool, has generated significant attention in research writing. The use of ChatGPT in medical research can be a double-edged sword. ChatGPT can expedite the research writing process by assisting with hypothesis formulation, literature review, data analysis and manuscript writing. On the other hand, using ChatGPT raises concerns regarding the originality and authenticity of content, the precision and potential bias of the tool's output, and the potential legal issues associated with privacy, confidentiality and plagiarism. The article also calls for adherence to stringent citation guidelines and the development of regulations promoting the responsible application of AI. Despite the revolutionary capabilities of ChatGPT, the article highlights its inability to replicate human thought and the difficulties in maintaining the integrity and reliability of ChatGPT-enabled research, particularly in complex fields such as medicine and law. AI tools can be used as supplementary aids rather than primary sources of analysis in medical research writing.
Bhargava DC; Jadav D; Meshram VP; Kanchan T
10
37984563
Accuracy of ChatGPT in Common Gastrointestinal Diseases: Impact for Patients and Providers.
2,024
Clinical gastroenterology and hepatology : the official clinical practice journal of the American Gastroenterological Association
Since its release in 2022, Chat Generative Pre-Trained Transformer (ChatGPT) became the most rapidly expanding consumer software application in history,(1) and its role in medicine is underscored by its potential to enhance patient education and physician-patient communication. Previous studies in gastroenterology and hepatology have focused primarily on the earlier Generative Pre-Trained Transformer 3 (GPT-3) model, with none investigating ChatGPT's ability to generate supportive references for its responses, or its applicability as a physician educational tool.(2-6) Our study evaluated the accuracy of the more recent ChatGPT, powered by GPT-4, in addressing frequently asked questions by patients on irritable bowel syndrome (IBS), inflammatory bowel disease (IBD), colonoscopy and colorectal cancer (CRC) screening, questions on CRC screening from a physician perspective, and reference generation and suitability.
Kerbage A; Kassab J; El Dahdah J; Burke CA; Achkar JP; Rouphael C
0-1
38144348
A systematic review and meta-analysis on ChatGPT and its utilization in medical and dental research.
2,023
Heliyon
Since its release, ChatGPT has taken the world by storm with its utilization in various fields of life. This review's main goal was to offer a thorough and fact-based evaluation of ChatGPT's potential as a tool for medical and dental research, which could direct subsequent research and influence clinical practices. METHODS: Different online databases were scoured for relevant articles in accordance with the study objectives. A team of reviewers was assembled to devise a proper methodological framework for inclusion of articles and meta-analysis. RESULTS: 11 descriptive studies were considered for this review that evaluated the accuracy of ChatGPT in answering medical queries related to different domains such as systematic reviews, cancer, liver diseases, diagnostic imaging, education, and COVID-19 vaccination. The studies reported different accuracy ranges, from 18.3% to 100%, across various datasets and specialties. The meta-analysis showed an odds ratio (OR) of 2.25 and a relative risk (RR) of 1.47 with a 95% confidence interval (CI), indicating that the accuracy of ChatGPT in providing correct responses was significantly higher compared to the total responses for queries. However, significant heterogeneity was present among the studies, suggesting considerable variability in the effect sizes across the included studies. CONCLUSION: The observations indicate that ChatGPT has the ability to provide appropriate solutions to questions in the medical and dental fields, but researchers and doctors should cautiously assess its responses because they might not always be dependable. Overall, the importance of this study rests in shedding light on ChatGPT's accuracy in the medical and dental fields and emphasizing the need for additional investigation to enhance its performance.
Bagde H; Dhopte A; Alam MK; Basri R
0-1
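The pooled odds ratio and relative risk in this record are the kind of statistic computed from 2x2 tables of correct versus incorrect responses; a Python sketch with invented counts:

```python
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# invented counts: correct vs. incorrect responses in two groups
table = Table2x2(np.array([[85, 15],
                           [70, 30]]))
print(table.oddsratio, table.oddsratio_confint())   # OR with 95% CI
print(table.riskratio, table.riskratio_confint())   # RR with 95% CI
```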
39039954
ChatGPT and dermatology.
2,024
Italian journal of dermatology and venereology
Since the development of the artificial intelligence (AI), several applications have been proposed. Among these, the intersection of AI and medicine has sparked a wave of innovation, revolutionizing various aspects of healthcare delivery, diagnosis, and treatment. A review of the current literature was performed to evaluate the possible applications of ChatGPT (OpenAI, San Francisco, CA, USA) in the dermatological field. A total of 20 manuscripts were collected in the present review, reporting several potential applications of ChatGPT in dermatology, ranging from clinical practice to patients' support. The convergence of ChatGPT and dermatology represents a compelling synergy that transcends traditional boundaries of healthcare delivery. As AI continues to evolve and permeate every facet of medicine, ChatGPT stands at the forefront of innovation, empowering patients and clinicians alike with its conversational prowess and knowledge dissemination capabilities.
D'Agostino M; Feo F; Martora F; Genco L; Megna M; Cacciapuoti S; Villani A; Potestio L
0-1
40259576
Can ChatGPT detect breast cancer on mammography?
2,025
Journal of medical screening
Some noteworthy studies have questioned the use of ChatGPT, a free artificial intelligence program that has become very popular and widespread in recent times, in different branches of medicine. In this study, the success of ChatGPT in detecting breast cancer on mammography (MMG) was evaluated. The pre-treatment mammographic images of patients with a histopathological diagnosis of invasive breast carcinoma and prominent mass formation on MMG were read separately into two ChatGPT subprograms: Radiologist Report Writer (P1) and XrayGPT (P2). The programs were asked to determine mammographic breast density, tumor size, side, and quadrant, the presence of microcalcification, distortion, skin or nipple changes, and axillary lymphadenopathy (LAP), and BI-RADS score. The responses were evaluated in consensus by two experienced radiologists. Although the mass detection rate of both programs was over 60%, the success in determining breast density, tumor size and localization, microcalcification, distortion, skin or nipple changes, and axillary LAP was low. BI-RADS category agreement with readers was fair for P1 (kappa = 0.28; 0.20 < kappa ≤ 0.40) and moderate for P2 (kappa = 0.58; 0.40 < kappa ≤ 0.60). In conclusion, while the XrayGPT application can detect breast cancer with a mass appearance on MMG images better than the Radiologist Report Writer application, the success of both is low in detecting all other related features. This casts doubt over the suitability of current large language models for image analysis in breast screening.
Tekcan Sanli DE; Sanli AN; Yildirim D; Dogan I
0-1
40055086
Large Language Models in peri-implant disease: How well do they perform?
2,025
The Journal of prosthetic dentistry
STATEMENT OF PROBLEM: Artificial intelligence (AI) has gained significant recent attention and several AI applications, such as the Large Language Models (LLMs), are promising for use in clinical medicine and dentistry. Nevertheless, assessing the performance of LLMs is essential to identify potential inaccuracies or even prevent harmful outcomes. PURPOSE: The purpose of this study was to evaluate and compare the evidence-based potential of answers provided by 4 LLMs to clinical questions in the field of implant dentistry. MATERIAL AND METHODS: A total of 10 open-ended questions pertinent to prevention and treatment of peri-implant disease were posed to 4 distinct LLMs including ChatGPT 4.0, Google Gemini, Google Gemini Advanced, and Microsoft Copilot. The answers were evaluated independently by 2 periodontists against scientific evidence for comprehensiveness, scientific accuracy, clarity, and relevance. The LLMs' responses received scores ranging from 0 (minimum) to 10 (maximum) points. To assess the intra-evaluator reliability, a re-evaluation of the LLM responses was performed after 2 weeks, and Cronbach alpha and the intraclass correlation coefficient (ICC) were used (alpha=.05). RESULTS: The scores assigned by the examiners on the 2 occasions were not statistically different and each LLM received an average score. Google Gemini Advanced ranked higher than the rest of the LLMs, while Google Gemini scored worst. The difference between Google Gemini Advanced and Google Gemini was statistically significant (P=.005). CONCLUSIONS: Dental professionals need to be cautious when using LLMs to access content related to peri-implant diseases. LLMs cannot currently replace dental professionals and caution should be exercised when used in patient care.
Koidou VP; Chatzopoulos GS; Tsalikis L; Kaklamanos EG
32
37811040
ChatGPT and artificial hallucinations in stem cell research: assessing the accuracy of generated references - a preliminary study.
2,023
Annals of medicine and surgery (2012)
Stem cell research has the transformative potential to revolutionize medicine. Language models like ChatGPT, which use artificial intelligence (AI) and natural language processing, generate human-like text that can aid researchers. However, it is vital to ensure the accuracy and reliability of AI-generated references. This study assesses Chat Generative Pre-Trained Transformer (ChatGPT)'s utility in stem cell research and evaluates the accuracy of its references. Of the 86 references analyzed, 15.12% were fabricated and 9.30% were erroneous. These errors were due to limitations such as no real-time internet access and reliance on preexisting data. Artificial hallucinations were also observed, where the text seems plausible but deviates from fact. Monitoring, diverse training, and expanding knowledge cut-off can help to reduce fabricated references and hallucinations. Researchers must verify references and consider the limitations of AI models. Further research is needed to enhance the accuracy of such language models. Despite these challenges, ChatGPT has the potential to be a valuable tool for stem cell research. It can help researchers to stay up-to-date on the latest developments in the field and to find relevant information.
Sharun K; Banu SA; Pawde AM; Kumar R; Akash S; Dhama K; Pal A
10
37025739
Toxic Epidermal Necrolysis in a Critically Ill African American Woman: A Case Report Written With ChatGPT Assistance.
2,023
Cureus
Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are life-threatening spectrum diseases in which a medication triggers a mucocutaneous reaction associated with severe necrosis and loss of epidermal integrity. The disease has a high mortality rate that can be assessed by dermatology scoring scales based on an affected total body surface area (TBSA). Sloughing of <10% TBSA is considered SJS, with a mortality of 10%. Sloughing of >30% TBSA is termed TEN, with an increased mortality rate of 25% to 35%. We present a case and management of TEN that involved >30% TBSA in a critically ill African American woman. Identification of the offending agent was difficult due to complicated medication exposure throughout her multi-facility care management. This case conveys the importance of close monitoring of a critically ill patient during a clinical course involving SJS-/TEN-inducing drugs. We also discuss the potential increased risks for SJS/TEN in the African American population due to genetic or epigenetic predispositions to skin conditions. This case report also contributes to increasing skin of color representation in the current literature. Additionally, we discuss the use of Chat Generative Pre-trained Transformer (ChatGPT, OpenAI LP, OpenAI Inc., San Francisco, CA, USA) and list its benefits and errors.
Lantz R
0-1
37248403
Large language models for structured reporting in radiology: performance of GPT-4, ChatGPT-3.5, Perplexity and Bing.
2,023
La Radiologia medica
Structured reporting may improve the radiological workflow and communication among physicians. Artificial intelligence applications in medicine are growing fast. Large language models (LLMs) are recently gaining importance as valuable tools in radiology and are currently being tested for the critical task of structured reporting. We compared four LLMs models in terms of knowledge on structured reporting and templates proposal. LLMs hold a great potential for generating structured reports in radiology but additional formal validations are needed on this topic.
Mallio CA; Sertorio AC; Bernetti C; Beomonte Zobel B
10
39928039
Appropriateness and Consistency of an Online Artificial Intelligence System's Response to Common Questions Regarding Cervical Fusion.
2,025
Clinical spine surgery
STUDY DESIGN: Prospective survey study. OBJECTIVE: To address a gap that exists concerning ChatGPT's ability to respond to various types of questions regarding cervical surgery. SUMMARY OF BACKGROUND DATA: Artificial Intelligence (AI) and machine learning have been creating great change in the landscape of scientific research. Chat Generative Pre-trained Transformer (ChatGPT), an online AI language model, has emerged as a powerful tool in clinical medicine and surgery. Previous studies have demonstrated appropriate and reliable responses from ChatGPT concerning patient questions regarding total joint arthroplasty, distal radius fractures, and lumbar laminectomy. However, there is a gap that exists in examining how accurate and reliable ChatGPT responses are to common questions related to cervical surgery. MATERIALS AND METHODS: Twenty questions regarding cervical surgery were presented to the online ChatGPT-3.5 web application 3 separate times, creating 60 responses. Responses were then analyzed by 3 fellowship-trained spine surgeons across 2 institutions using a modified Global Quality Scale (1-5 rating) to evaluate accuracy and utility. Descriptive statistics were reported based on responses, and intraclass correlation coefficients were then calculated to assess the consistency of response quality. RESULTS: Out of all questions proposed to the AI platform, the average score was 3.17 (95% CI, 2.92, 3.42), with 66.7% of responses being recorded to be of at least "moderate" quality by 1 reviewer. Nine (45%) questions yielded responses that were graded at least "moderate" quality by all 3 reviewers. The test-retest reliability was poor with the intraclass correlation coefficient (ICC) calculated as 0.0941 (-0.222, 0.135). CONCLUSION: This study demonstrated that ChatGPT can answer common patient questions concerning cervical surgery with moderate quality during the majority of responses. Further research within AI is necessary to improve response quality.
Miller M; DiCiurcio WT 3rd; Meade M; Buchan L; Gleimer J; Woods B; Kepler C
32
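Test-retest reliability of the kind this record reports is an intraclass correlation coefficient; one common way to compute it in Python is pingouin's intraclass_corr. The long-format ratings below are hypothetical:

```python
import pandas as pd
import pingouin as pg

# hypothetical GQS ratings: three questions, each scored in three rounds
df = pd.DataFrame({
    "question": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "round":    ["a", "b", "c"] * 3,
    "score":    [3, 4, 2, 3, 3, 4, 2, 3, 3],
})
icc = pg.intraclass_corr(data=df, targets="question", raters="round", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```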
38531823
Chat Generative Pretraining Transformer Answers Patient-focused Questions in Cervical Spine Surgery.
2,024
Clinical spine surgery
STUDY DESIGN: Review of Chat Generative Pretraining Transformer (ChatGPT) outputs to select patient-focused questions. OBJECTIVE: We aimed to examine the quality of ChatGPT responses to cervical spine questions. BACKGROUND: Artificial intelligence and its utilization to improve patient experience across medicine is seeing remarkable growth. One such usage is patient education. For the first time on a large scale, patients can ask targeted questions and receive similarly targeted answers. Although patients may use these resources to assist in decision-making, there still exists little data regarding their accuracy, especially within orthopedic surgery and more specifically spine surgery. METHODS: We compiled 9 frequently asked questions cervical spine surgeons receive in the clinic to test ChatGPT version 3.5's ability to answer questions on a nuanced topic. Responses were reviewed by 2 independent reviewers on a Likert Scale for the accuracy of information presented (0-5 points), appropriateness in giving a specific answer (0-3 points), and readability for a layperson (0-2 points). Readability was assessed through the Flesch-Kincaid grade level analysis for the original prompt and for a second prompt asking for rephrasing at the sixth-grade reading level. RESULTS: On average, ChatGPT's responses scored a 7.1/10. Accuracy was rated on average a 4.1/5. Appropriateness was 1.8/3. Readability was a 1.2/2. Readability was determined to be at the 13.5 grade level originally and at the 11.2 grade level after prompting. CONCLUSIONS: ChatGPT has the capacity to be a powerful means for patients to gain important and specific information regarding their pathologies and surgical options. These responses are limited in their accuracy, and we additionally noted that readability is not optimal for the average patient. Despite these limitations in ChatGPT's capability to answer these nuanced questions, the technology is impressive, and surgeons should be aware patients will likely increasingly rely on it.
Subramanian T; Araghi K; Amen TB; Kaidi A; Sosa B; Shahi P; Qureshi S; Iyer S
32
38483426
Harnessing the Power of Generative AI for Clinical Summaries: Perspectives From Emergency Physicians.
2,024
Annals of emergency medicine
STUDY OBJECTIVE: The workload of clinical documentation contributes to health care costs and professional burnout. The advent of generative artificial intelligence language models presents a promising solution. The perspective of clinicians may contribute to effective and responsible implementation of such tools. This study sought to evaluate 3 uses for generative artificial intelligence for clinical documentation in pediatric emergency medicine, measuring time savings, effort reduction, and physician attitudes and identifying potential risks and barriers. METHODS: This mixed-methods study was performed with 10 pediatric emergency medicine attending physicians from a single pediatric emergency department. Participants were asked to write a supervisory note for 4 clinical scenarios, with varying levels of complexity, twice without any assistance and twice with the assistance of ChatGPT Version 4.0. Participants evaluated 2 additional ChatGPT-generated clinical summaries: a structured handoff and a visit summary for a family written at an 8th grade reading level. Finally, a semistructured interview was performed to assess physicians' perspective on the use of ChatGPT in pediatric emergency medicine. Main outcomes and measures included between-subjects comparisons of the effort and time taken to complete the supervisory note with and without ChatGPT assistance. Effort was measured using a self-reported Likert scale of 0 to 10. Physicians' scoring of and attitude toward the ChatGPT-generated summaries were measured using a 0 to 10 Likert scale and open-ended questions. Summaries were scored for completeness, accuracy, efficiency, readability, and overall satisfaction. A thematic analysis was performed to analyze the content of the open-ended questions and to identify key themes. RESULTS: ChatGPT yielded a 40% reduction in time and a 33% decrease in effort for supervisory notes in intricate cases, with no discernible effect on simpler notes. ChatGPT-generated summaries for structured handoffs and family letters were highly rated, ranging from 7.0 to 9.0 out of 10, and most participants favored their inclusion in clinical practice. However, there were several critical reservations, out of which a set of general recommendations for applying ChatGPT to clinical summaries was formulated. CONCLUSION: Pediatric emergency medicine attendings in our study perceived that ChatGPT can deliver high-quality summaries while saving time and effort in many scenarios, but not all.
Barak-Corren Y; Wolf R; Rozenblum R; Creedon JK; Lipsett SC; Lyons TW; Michelson KA; Miller KA; Shapiro DJ; Reis BY; Fine AM
10
40265240
Artificial intelligence in sleep medicine: assessing the diagnostic precision of ChatGPT-4.
2,025
Journal of clinical sleep medicine : JCSM : official publication of the American Academy of Sleep Medicine
STUDY OBJECTIVES: Large language models (LLMs) like ChatGPT-4 are emerging in medicine, including sleep medicine, where artificial intelligence (AI) is used to analyze sleep data and predict treatment outcomes. The effectiveness of LLMs in accurately diagnosing sleep disorders based on clinical history has not yet been studied. This study evaluates ChatGPT-4's diagnostic performance using clinical vignettes. METHODS: Nineteen clinical vignettes containing patient history, physical examination findings, and diagnostic tests from the Case Book of Sleep Medicine (3rd ed., 2019, AASM) were presented to ChatGPT-4. Its differential and final diagnoses were compared to reference diagnoses, with accuracy assessed by (1) the percentage of correct differentials and (2) a three-tier scoring system (no match, partial match, full match) for final diagnoses. RESULTS: The mean accuracy for differential diagnoses was 63.27% +/- 15.61% (SD), ranging from 33.33% to 100%. The mean number of AI-generated differential diagnoses matching the AASM case differential diagnoses was 2.79 +/- 0.71 (SD). For final diagnoses, ChatGPT-4 scored a total of 30 out of a possible 38, resulting in an overall accuracy of 78.95%. The model achieved a mean score of 1.58 +/- 0.61 (SD) out of 2, with 68.42% of cases achieving a full match. Performance was higher in cases with fewer differential diagnoses, whereas accuracy decreased in complex cases. CONCLUSIONS: ChatGPT-4 demonstrates promising diagnostic potential in sleep medicine, with moderate to high accuracy in identifying differential and final diagnoses. However, its variability in more complex cases calls for refinement and clinical validation.
Patel A; Cheung J
0-1
38217478
Evaluating insomnia queries from an artificial intelligence chatbot for patient education.
2,024
Journal of clinical sleep medicine : JCSM : official publication of the American Academy of Sleep Medicine
STUDY OBJECTIVES: We evaluated the accuracy of ChatGPT in addressing insomnia-related queries for patient education and assessed ChatGPT's ability to provide varied responses based on differing prompting scenarios. METHODS: Four identical sets of 20 insomnia-related queries were posed to ChatGPT. Each set differed by the context in which ChatGPT was prompted: no prompt, patient-centered, physician-centered, and with references and statistics. Responses were reviewed by 2 academic sleep surgeons, 1 academic sleep medicine physician, and 2 sleep medicine fellows across 4 domains: clinical accuracy, prompt adherence, referencing, and statistical precision, using a binary grading system. Flesch-Kincaid grade-level scores were calculated to estimate the grade level of the responses, with statistical differences between prompts analyzed via analysis of variance and Tukey's test. Interrater reliability was calculated using Fleiss's kappa. RESULTS: The study revealed significant variations in the Flesch-Kincaid grade-level scores across 4 prompts: unprompted (13.2 +/- 2.2), patient-centered (8.1 +/- 1.9), physician-centered (15.4 +/- 2.8), and with references and statistics (17.3 +/- 2.3, P < .001). Despite poor Fleiss kappa scores, indicating low interrater reliability for clinical accuracy and relevance, all evaluators agreed that the majority of ChatGPT's responses were clinically accurate, with the highest variability on Form 4. The responses were also uniformly relevant to the given prompts (100% agreement). Eighty percent of the references ChatGPT cited were verified as both real and relevant, and only 25% of cited statistics were corroborated within referenced articles. CONCLUSIONS: ChatGPT can be used to generate clinically accurate responses to insomnia-related inquiries. CITATION: Alapati R, Campbell D, Molin N, et al. Evaluating insomnia queries from an artificial intelligence chatbot for patient education. J Clin Sleep Med. 2024;20(4):583-594.
Alapati R; Campbell D; Molin N; Creighton E; Wei Z; Boon M; Huntley C
0-1
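The readability comparison in this record is a one-way ANOVA followed by Tukey's test; a Python equivalent using simulated Flesch-Kincaid scores drawn to match the reported group means (these are not the study's data):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
unprompted = rng.normal(13.2, 2.2, 20)   # simulated to match reported means/SDs
patient    = rng.normal(8.1, 1.9, 20)
physician  = rng.normal(15.4, 2.8, 20)

print(f_oneway(unprompted, patient, physician))            # one-way ANOVA
scores = np.concatenate([unprompted, patient, physician])
groups = ["unprompted"] * 20 + ["patient"] * 20 + ["physician"] * 20
print(pairwise_tukeyhsd(scores, groups))                   # Tukey HSD post hoc
```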
38723763
Text summarization with ChatGPT for drug labeling documents.
2,024
Drug discovery today
Text summarization is crucial in scientific research, drug discovery and development, regulatory review, and more. This task demands domain expertise, language proficiency, semantic prowess, and conceptual skill. The recent advent of large language models (LLMs), such as ChatGPT, offers unprecedented opportunities to automate this process. We compared ChatGPT-generated summaries with those produced by human experts using FDA drug labeling documents. The labeling contains summaries of key labeling sections, making them an ideal human benchmark to evaluate ChatGPT's summarization capabilities. Analyzing >14000 summaries, we observed that ChatGPT-generated summaries closely resembled those generated by human experts. Importantly, ChatGPT exhibited even greater similarity when summarizing drug safety information. These findings highlight ChatGPT's potential to accelerate work in critical areas, including drug safety.
Ying L; Liu Z; Fang H; Kusko R; Wu L; Harris S; Tong W
10
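One standard way to quantify how closely a model summary resembles a human one is ROUGE; the abstract does not name the exact similarity measure used, so this is an illustrative stand-in using Google's rouge-score package and invented example sentences.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
human = "Dizziness and headache were the most common adverse reactions."
model = "The most common adverse reactions were headache and dizziness."
print(scorer.score(human, model))   # precision/recall/F1 per ROUGE variant
```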
39176909
Comparing a Large Language Model with Previous Deep Learning Models on Named Entity Recognition of Adverse Drug Events.
2,024
Studies in health technology and informatics
The ability to fine-tune pre-trained deep learning models on a downstream task using a large training set significantly improves the performance of named entity recognition. Large language models are recent models based on the Transformer architecture that may be conditioned on a new task with in-context learning, by providing a series of instructions or a prompt. These models require only a few examples, an approach known as few-shot learning. Our objective was to compare the performance of named entity recognition of adverse drug events between state-of-the-art deep learning models fine-tuned on PubMed abstracts and a large language model using few-shot learning. Hussain et al's state-of-the-art model (PMID: 34422092) significantly outperformed the ChatGPT-3.5 model (F1-score: 97.6% vs 86.0%). Few-shot learning is a convenient way to perform named entity recognition when training examples are rare, but performance is still inferior to that of a deep learning model fine-tuned with several training examples. Perspectives are to evaluate few-shot prompting with GPT-4 and to perform fine-tuning on GPT-3.5.
Tiffet T; Pikaar A; Trombert-Paviot B; Jaulent MC; Bousquet C
10
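The F1-scores in the record above are the standard yardstick for named entity recognition, where a predicted entity typically counts only if it matches a gold span exactly. A minimal sketch under that strict-matching assumption (the spans are invented):

```python
def ner_prf(predicted: set, gold: set) -> tuple[float, float, float]:
    """Strict span-level precision, recall and F1 for NER.

    Entities are (start, end, label) tuples; a prediction is correct
    only if start, end and label all match a gold annotation.
    """
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented adverse-drug-event spans for one sentence.
gold = {(10, 18, "ADE"), (25, 33, "ADE"), (40, 52, "ADE")}
pred = {(10, 18, "ADE"), (25, 34, "ADE")}  # second span is off by one
print(ner_prf(pred, gold))  # (0.5, 0.333..., 0.4)
```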
38540976
Personalized Medicine Transformed: ChatGPT's Contribution to Continuous Renal Replacement Therapy Alarm Management in Intensive Care Units.
2,024
Journal of personalized medicine
The accurate interpretation of CRRT machine alarms is crucial in the intensive care setting. ChatGPT, with its advanced natural language processing capabilities, has emerged as a tool that is evolving and advancing in its ability to assist with healthcare information. This study was designed to evaluate the accuracy of the ChatGPT-3.5 and ChatGPT-4 models in addressing queries related to CRRT alarm troubleshooting. The study consisted of two rounds of ChatGPT-3.5 and ChatGPT-4 responses to 50 CRRT machine alarm questions carefully selected by two critical care nephrologists. Accuracy was determined by comparing the model responses to predetermined answer keys provided by critical care nephrologists, and consistency was determined by comparing outcomes across the two rounds. The accuracy rate of ChatGPT-3.5 was 86% and 84%, while the accuracy rate of ChatGPT-4 was 90% and 94% in the first and second rounds, respectively. The agreement between the first and second rounds of ChatGPT-3.5 was 84% with a Kappa statistic of 0.78, while the agreement of ChatGPT-4 was 92% with a Kappa statistic of 0.88. Although ChatGPT-4 tended to provide more accurate and consistent responses than ChatGPT-3.5, the differences in accuracy and agreement between the two models did not reach statistical significance. While these findings are encouraging, there is still potential for further development to achieve even greater reliability. This advancement is essential for ensuring the highest-quality patient care and safety standards in managing CRRT machine-related issues.
Sheikh MS; Thongprayoon C; Qureshi F; Suppadungsuk S; Kashani KB; Miao J; Craici IM; Cheungpasitporn W
10
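The agreement percentages and Kappa statistics reported across the two rounds above correspond to Cohen's kappa on paired binary gradings. A minimal sketch with hypothetical gradings (1 = correct answer):

```python
def cohens_kappa(round1: list[int], round2: list[int]) -> float:
    """Cohen's kappa between two rounds of binary (0/1) gradings."""
    n = len(round1)
    observed = sum(a == b for a, b in zip(round1, round2)) / n
    # Chance agreement from each round's marginal rate of 1s.
    p1, p2 = sum(round1) / n, sum(round2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical gradings for ten alarm questions, two rounds apiece.
r1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
r2 = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
print(round(cohens_kappa(r1, r2), 2))  # 0.38: 80% agreement, modest kappa
```

Note how kappa sits well below raw agreement whenever correct answers dominate, which is why the study reports both figures.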
40241839
Large language models in critical care.
2,025
Journal of intensive medicine
The advent of chat generative pre-trained transformer (ChatGPT) and large language models (LLMs) has revolutionized natural language processing (NLP). These models possess unprecedented capabilities in understanding and generating human-like language. This breakthrough holds significant promise for critical care medicine, where unstructured data and complex clinical information are abundant. Key applications of LLMs in this field include administrative support through automated documentation and patient chart summarization; clinical decision support by assisting in diagnostics and treatment planning; personalized communication to enhance patient and family understanding; and improving data quality by extracting insights from unstructured clinical notes. Despite these opportunities, challenges such as the risk of generating inaccurate or biased information ("hallucinations"), ethical considerations, and the need for clinician artificial intelligence (AI) literacy must be addressed. Integrating LLMs with traditional machine learning models - an approach known as Hybrid AI - combines the strengths of both technologies while mitigating their limitations. Careful implementation, regulatory compliance, and ongoing validation are essential to ensure that LLMs enhance patient care rather than hinder it. LLMs have the potential to transform critical care practices, but integrating them requires caution. Responsible use and thorough clinician training are crucial to fully realize their benefits.
Biesheuvel LA; Workum JD; Reuland M; van Genderen ME; Thoral P; Dongelmans D; Elbers P
10
37130009
Medical School Admissions: Focusing on Producing a Physician Workforce That Addresses the Needs of the United States.
2,023
Academic medicine : journal of the Association of American Medical Colleges
The aging population, burnout, and earlier retirement of physicians along with the static number of training positions are likely to worsen the current physician shortage. There is an urgent need to transform the process for selecting medical students. In this Invited Commentary, the authors suggest that to build the physician workforce that the United States needs for the future, academic medicine should focus on building capacity in 3 overarching areas. First, medical schools need to develop a more diverse pool of capable applicants that better matches the demographic characteristics of health care trainees with those of the population, and they need to nurture applicants with diverse career aspirations. Second, medical schools should recalibrate their student selection process, aligning criteria for admission with competencies expected of medical school graduates, whether they choose to become practicing clinicians, physician-scientists, members of the public health workforce, or policy makers. Selection criteria that overweight the results of standardized test scores should be replaced by assessments that value and predict academic capacity, adaptive learning skills, curiosity, compassion, empathy, emotional maturity, and superior communication skills. Finally, to improve the equity and effectiveness of the selection processes, medical schools should leverage innovations in data science and generative artificial intelligence platforms. The ability of ChatGPT to pass the United States Medical Licensing Examination (USMLE) demonstrates the decreasing importance of memorization in medicine in favor of critical thinking and problem-solving skills. The 2022 change in the USMLE Step 1 to pass/fail plus the exodus of several prominent medical schools from the U.S. News and World Report rankings have exposed limitations of the current selection processes. Newer approaches that use precision education systems to leverage data and technology can help address these limitations.
Prober CG; Desai SV
10
39050000
How to mitigate the risks of deployment of artificial intelligence in medicine?
2,024
Turkish journal of medical sciences
The aim of this study is to examine the risks associated with the use of artificial intelligence (AI) in medicine and to offer policy suggestions to reduce these risks and optimize the benefits of AI technology. AI is a multifaceted technology. If harnessed effectively, it has the capacity to significantly impact the future of humanity in the field of health, as well as in several other areas. However, the rapid spread of this technology also raises significant ethical, legal, and social issues. This study examines the potential dangers of AI integration in medicine by reviewing current scientific work and exploring strategies to mitigate these risks. Biases in the data sets used to build AI systems can lead to inequities in health care. Training data that narrowly represents a single demographic group can lead to biased results from AI systems for those who do not belong to that group. In addition, limited explainability and accountability in AI systems could make it difficult for healthcare professionals to understand and evaluate AI-generated diagnoses or treatment recommendations. This could jeopardize patient safety and lead to the selection of inappropriate treatments. Ensuring the security of personal health information will be critical as AI systems become more widespread. Therefore, improving patient privacy and security protocols for AI systems is imperative. The study offers suggestions for reducing the risks associated with the increasing use of AI systems in the medical sector. These include increasing AI literacy, implementing a participatory society-in-the-loop management strategy, and creating ongoing education and auditing systems. Integrating ethical principles and cultural values into the design of AI systems can help reduce healthcare disparities and improve patient care. Implementing these recommendations will ensure the efficient and equitable use of AI systems in medicine, improve the quality of healthcare services, and ensure patient safety.
Uygun Ilikhan S; Ozer M; Tanberkan H; Bozkurt V
10
39401518
[AI-supported decision-making in obstetrics - a feasibility study on the medical accuracy and reliability of ChatGPT].
2,025
Zeitschrift fur Geburtshilfe und Neonatologie
The aim of this study is to investigate the feasibility of artificial intelligence in the interpretation and application of medical guidelines to support clinical decision-making in obstetrics. ChatGPT was provided with guidelines on specific obstetric issues. Using several clinical scenarios as examples, the AI was then evaluated for its ability to make accurate diagnoses and appropriate clinical decisions. The results varied, with ChatGPT providing predominantly correct answers in some fictional scenarios but performing inadequately in others. Despite ChatGPT's ability to grasp complex medical information, the study revealed limitations in the precision and reliability of its interpretations and recommendations. These discrepancies highlight the need for careful review by healthcare professionals and underscore the importance of clear, unambiguous guideline recommendations. Furthermore, continuous technical development is required before artificial intelligence can serve as a supportive tool in clinical practice. Overall, while the use of AI in medicine shows promise, its susceptibility to error and weaknesses in interpretation mean that, to safeguard the safety and accuracy of patient care, its current use is best confined to controlled scientific settings.
Bader S; Schneider MO; Psilopatis I; Anetsberger D; Emons J; Kehl S
0-1
38886470
Multi role ChatGPT framework for transforming medical data analysis.
2,024
Scientific reports
The application of ChatGPT in the medical field has sparked debate regarding its accuracy. To address this issue, we present a Multi-Role ChatGPT Framework (MRCF), designed to improve ChatGPT's performance in medical data analysis by optimizing prompt words, integrating real-world data, and implementing quality control protocols. Compared with the singular ChatGPT model, MRCF exhibits fewer random errors, higher accuracy, and better identification of incorrect information, and it significantly outperforms traditional manual analysis in interpreting medical data. Notably, MRCF is over 600 times more time-efficient than conventional manual annotation methods and costs only one-tenth as much. Leveraging MRCF, we have established two user-friendly databases for efficient and straightforward drug repositioning analysis. This research not only enhances the accuracy and efficiency of ChatGPT in medical data science applications but also offers valuable insights for data analysis models across various professional domains.
Chen H; Zhang S; Zhang L; Geng J; Lu J; Hou C; He P; Lu X
10
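The MRCF record above centers on role-specialized prompting with quality control. The paper's actual prompts and model calls are not reproduced here, so the sketch below uses an invented call_llm stub and invented role texts purely to illustrate the multi-role pattern:

```python
def call_llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a chat-model API call (no real client)."""
    return f"[{role}] response to: {prompt[:40]}..."

def multi_role_analysis(record: str) -> dict[str, str]:
    """Illustrative three-role pipeline: extract, review, adjudicate."""
    extraction = call_llm(
        "clinical data analyst",
        f"Extract drug-disease pairs from this record:\n{record}")
    review = call_llm(
        "quality reviewer",
        f"List factual problems in this extraction:\n{extraction}")
    final = call_llm(
        "adjudicator",
        f"Apply the review\n{review}\nand return a corrected extraction.")
    return {"extraction": extraction, "review": review, "final": final}

print(multi_role_analysis("metformin was repurposed for ...")["final"])
```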
37956228
ChatGPT in Drug Discovery: A Case Study on Anticocaine Addiction Drug Development with Chatbots.
2,023
Journal of chemical information and modeling
The birth of ChatGPT, a cutting-edge language model-based chatbot developed by OpenAI, ushered in a new era in AI. However, due to potential pitfalls, its role in rigorous scientific research is not clear yet. This paper vividly showcases its innovative application within the field of drug discovery. Focused specifically on developing anticocaine addiction drugs, the study employs GPT-4 as a virtual guide, offering strategic and methodological insights to researchers working on generative models for drug candidates. The primary objective is to generate optimal drug-like molecules with desired properties. By leveraging the capabilities of ChatGPT, the study introduces a novel approach to the drug discovery process. This symbiotic partnership between AI and researchers transforms how drug development is approached. Chatbots become facilitators, steering researchers toward innovative methodologies and productive paths for creating effective drug candidates. This research sheds light on the collaborative synergy between human expertise and AI assistance, wherein ChatGPT's cognitive abilities enhance the design and development of pharmaceutical solutions. This paper not only explores the integration of advanced AI in drug discovery but also reimagines the landscape by advocating for AI-powered chatbots as trailblazers in revolutionizing therapeutic innovation.
Wang R; Feng H; Wei GW
0-1
37645039
ChatGPT in Drug Discovery: A Case Study on Anti-Cocaine Addiction Drug Development with Chatbots.
2,023
ArXiv
The birth of ChatGPT, a cutting-edge language model-based chatbot developed by OpenAI, ushered in a new era in AI. However, due to potential pitfalls, its role in rigorous scientific research is not clear yet. This paper vividly showcases its innovative application within the field of drug discovery. Focused specifically on developing anti-cocaine addiction drugs, the study employs GPT-4 as a virtual guide, offering strategic and methodological insights to researchers working on generative models for drug candidates. The primary objective is to generate optimal drug-like molecules with desired properties. By leveraging the capabilities of ChatGPT, the study introduces a novel approach to the drug discovery process. This symbiotic partnership between AI and researchers transforms how drug development is approached. Chatbots become facilitators, steering researchers towards innovative methodologies and productive paths for creating effective drug candidates. This research sheds light on the collaborative synergy between human expertise and AI assistance, wherein ChatGPT's cognitive abilities enhance the design and development of potential pharmaceutical solutions. This paper not only explores the integration of advanced AI in drug discovery but also reimagines the landscape by advocating for AI-powered chatbots as trailblazers in revolutionizing therapeutic innovation.
Wang R; Feng H; Wei GW
0-1
38472596
Artificial intelligence in academic writing and clinical pharmacy education: consequences and opportunities.
2,024
International journal of clinical pharmacy
The current academic debate on the use of artificial intelligence (AI) in research and teaching has been ongoing since the launch of ChatGPT in November 2022. It mainly focuses on ethical considerations, academic integrity, authorship and the need for new legal frameworks. Time efficiencies may allow for more critical thinking, while ease of pattern recognition across large amounts of data may promote drug discovery, better clinical decision making and guideline development with resultant consequences for patient safety. AI is also prompting a re-evaluation of the nature of learning and the purpose of education worldwide. It challenges traditional pedagogies, forcing a shift from rote learning to more critical, analytical, and creative thinking skills. Despite this opportunity to re-think education concepts for pharmacy curricula several universities around the world have banned its use. This commentary summarizes the existing debate and identifies the consequences and opportunities for clinical pharmacy research and education.
Weidmann AE
10
38811359
Attention mechanism models for precision medicine.
2,024
Briefings in bioinformatics
The development of deep learning models plays a crucial role in advancing precision medicine. These models enable personalized medical treatments and interventions based on the unique genetic, environmental and lifestyle factors of individual patients, and the promotion of precision medicine is achieved mainly through genomic data analysis, variant annotation and interpretation, pharmacogenomics research, biomarker discovery, disease typing, clinical decision support and disease mechanism interpretation. Extensive research has been conducted to address precision medicine challenges using attention mechanism models such as SAN, GAT and transformers. In particular, the recent popularity of ChatGPT has propelled the application of this model type to new heights. Therefore, I propose a Special Issue for Briefings in Bioinformatics on the topic 'Attention Mechanism Models for Precision Medicine'. This Special Issue aims to provide a comprehensive overview and presentation of innovative research on the application of graph attention mechanism models in precision medicine.
Cheng L
0-1
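The model families named in this record (SAN, GAT, transformers) share one core operation, scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, with a row-wise softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each query's scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy input: 3 tokens (e.g., variant features) with embedding size 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```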
37731643
Applications of large language models in cancer care: current evidence and future perspectives.
2,023
Frontiers in oncology
The development of large language models (LLMs) is a recent success in the field of generative artificial intelligence (AI). They are computer models able to perform a wide range of natural language processing tasks, including content generation, question answering, or language translation. In recent months, a growing number of studies aimed to assess their potential applications in the field of medicine, including cancer care. In this mini review, we described the present published evidence for using LLMs in oncology. All the available studies assessed ChatGPT, an advanced language model developed by OpenAI, alone or compared to other LLMs, such as Google Bard, Chatsonic, and Perplexity. Although ChatGPT could provide adequate information on the screening or the management of specific solid tumors, it also demonstrated a significant error rate and a tendency toward providing obsolete data. Therefore, an accurate, expert-driven verification process remains mandatory to avoid the potential for misinformation and incorrect evidence. Overall, although this new generative AI-based technology has the potential to revolutionize the field of medicine, including that of cancer care, it will be necessary to develop rules to guide the application of these tools to maximize benefits and minimize risks.
Iannantuono GM; Bracken-Clarke D; Floudas CS; Roselli M; Gulley JL; Karzai F
10
36834073
Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study.
2,023
International journal of environmental research and public health
The diagnostic accuracy of differential diagnoses generated by artificial intelligence (AI) chatbots, including the generative pretrained transformer 3 (GPT-3) chatbot (ChatGPT-3), is unknown. This study evaluated the accuracy of differential-diagnosis lists generated by ChatGPT-3 for clinical vignettes with common chief complaints. General internal medicine physicians created clinical cases, correct diagnoses, and five differential diagnoses for ten common chief complaints. The rate of correct diagnosis by ChatGPT-3 within the ten differential-diagnosis lists was 28/30 (93.3%). The rate of correct diagnosis by physicians was still superior to that by ChatGPT-3 within the five differential-diagnosis lists (98.3% vs. 83.3%, p = 0.03). The rate of correct diagnosis by physicians was also superior to that by ChatGPT-3 in the top diagnosis (93.3% vs. 53.3%, p < 0.001). The rate of consistent differential diagnoses among physicians within the ten differential-diagnosis lists generated by ChatGPT-3 was 62/88 (70.5%). In summary, this study demonstrates the high diagnostic accuracy of differential-diagnosis lists generated by ChatGPT-3 for clinical cases with common chief complaints. This suggests that AI chatbots such as ChatGPT-3 can generate a well-differentiated diagnosis list for common chief complaints. However, the order of these lists can be improved in the future.
Hirosawa T; Harada Y; Yokose M; Sakamoto T; Kawamura R; Shimizu T
10
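The accuracy figures in the record above are top-k inclusion rates: the fraction of vignettes whose correct diagnosis appears within the first k entries of the generated differential list. A minimal sketch with invented vignettes:

```python
def top_k_accuracy(ranked_lists, truths, k: int) -> float:
    """Fraction of cases whose true diagnosis is in the top k entries."""
    hits = sum(truth in ranked[:k]
               for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

# Invented differential-diagnosis lists and ground-truth diagnoses.
lists = [
    ["pneumonia", "bronchitis", "pulmonary embolism"],
    ["migraine", "tension headache", "cluster headache"],
    ["appendicitis", "gastroenteritis", "diverticulitis"],
]
truth = ["pneumonia", "cluster headache", "cholecystitis"]
print(top_k_accuracy(lists, truth, k=1))  # 0.333...: top-diagnosis rate
print(top_k_accuracy(lists, truth, k=3))  # 0.666...: within-list rate
```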
38413108
Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals.
2,024
Korean journal of radiology
The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes the different generative AI models and their potential applications in the field of medicine and explores the evolving landscape of generative adversarial networks and diffusion models since the introduction of generative AI models. These models have made valuable contributions to the field of radiology. Furthermore, this review also explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, in addition to emphasizing the role of inversion in the investigation of generative models and outlining an approach to replicate this process. We provide an overview of large language models, such as GPTs and bidirectional encoder representations from transformers (BERTs), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the innovative large language and vision assistant for biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.
Kim K; Cho K; Jang R; Kyung S; Lee S; Ham S; Choi E; Hong GS; Kim N
10
37852647
GPT-4 in Nuclear Medicine Education: Does It Outperform GPT-3.5?
2,023
Journal of nuclear medicine technology
The emergence of ChatGPT has challenged academic integrity in teaching institutions, including those providing nuclear medicine training. Although previous evaluations of ChatGPT have suggested a limited scope for academic writing, the March 2023 release of generative pretrained transformer (GPT)-4 promises enhanced capabilities that require evaluation. Methods: Examinations (final and calculation) and written assignments for nuclear medicine subjects were tested using GPT-3.5 and GPT-4. GPT-3.5 and GPT-4 responses were evaluated by Turnitin software for artificial intelligence scores, marked against standardized rubrics, and compared with the mean performance of student cohorts. Results: ChatGPT powered by GPT-3.5 performed poorly in calculation examinations (31.4%) compared with GPT-4 (59.1%). GPT-3.5 failed each of 3 written tasks (39.9%), whereas GPT-4 passed each task (56.3%). Conclusion: Although GPT-3.5 poses a minimal risk to academic integrity, GPT-4 is significantly more useful as a cheating tool, although it remains prone to hallucination and fabrication.
Currie GM
0-1
38295300
Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations.
2,024
Journal of osteopathic medicine
The emergence of generative large language model (LLM) artificial intelligence (AI) represents one of the most profound developments in healthcare in decades, with the potential to create revolutionary and seismic changes in the practice of medicine as we know it. However, significant concerns have arisen over questions of liability for bad outcomes associated with LLM AI-influenced medical decision making. Although the authors were not able to identify a case in the United States that has been adjudicated on medical malpractice in the context of LLM AI at this time, sufficient precedent exists to interpret how analogous situations might be applied to these cases when they inevitably come to trial in the future. This commentary will discuss areas of potential legal vulnerability for clinicians utilizing LLM AI through review of past case law pertaining to third-party medical guidance and review the patchwork of current regulations relating to medical malpractice liability in AI. Finally, we will propose proactive policy recommendations including creating an enforcement duty at the US Food and Drug Administration (FDA) to require algorithmic transparency, recommend reliance on peer-reviewed data and rigorous validation testing when LLMs are utilized in clinical settings, and encourage tort reform to share liability between physicians and LLM developers.
Shumway DO; Hartman HJ
10