Columns:
  pmid      string, 8 characters
  title     string, 3-289 characters
  year      int64, range 2.02k-2.03k
  journal   string, 3-221 characters
  doi       string, 1 distinct value
  mesh      string, 1 distinct value
  keywords  string, 1 distinct value
  abstract  string, 115-3.67k characters
  authors   string, 3-798 characters
  cluster   class label, 5 classes
37352529
ChatGPT encounters multiple opportunities and challenges in neurosurgery.
2023
International journal of surgery (London, England)
BACKGROUND: ChatGPT, powered by the GPT model and Transformer architecture, has demonstrated remarkable performance in the domains of medicine and healthcare, providing customized and informative responses. In our study, we investigated the potential of ChatGPT in the field of neurosurgery, focusing on its applications at the patient, neurosurgery student/resident, and neurosurgeon levels. METHOD: The authors conducted inquiries with ChatGPT from the viewpoints of patients, neurosurgery students/residents, and neurosurgeons, covering a range of topics, such as disease diagnosis, treatment options, prognosis, rehabilitation, and patient care. The authors also explored concepts related to neurosurgery, including fundamental principles and clinical aspects, as well as tools and techniques to enhance the skills of neurosurgery students/residents. Additionally, the authors examined disease-specific medical interventions and the decision-making processes involved in clinical practice. RESULTS: The authors received individual responses from ChatGPT, but they tended to be shallow and repetitive, lacking depth and personalization. Furthermore, ChatGPT may struggle to discern a patient's emotional state, hindering the establishment of rapport and the delivery of appropriate care. The language used in the medical field is influenced by technical and cultural factors, and biases in the training data can result in skewed or inaccurate responses. Additionally, ChatGPT's limitations include the inability to conduct physical examinations or interpret diagnostic images, potentially overlooking complex details and individual nuances in each patient's case. Moreover, its absence in the surgical setting limits its practical utility. CONCLUSION: Although ChatGPT is a powerful language model, it cannot substitute for the expertise and experience of trained medical professionals. It lacks the capability to perform physical examinations, make diagnoses, administer treatments, establish trust, provide emotional support, and assist in the recovery process. Moreover, the implementation of Artificial Intelligence in healthcare necessitates careful consideration of legal and ethical concerns. While recognizing the potential of ChatGPT, additional training with comprehensive data is necessary to fully maximize its capabilities.
Kuang YR; Zou MX; Niu HQ; Zheng BY; Zhang TL; Zheng BW
0-1
37389908
Reliability of Medical Information Provided by ChatGPT: Assessment Against Clinical Guidelines and Patient Information Quality Instrument.
2023
Journal of medical Internet research
BACKGROUND: ChatGPT-4 is the latest release of a novel artificial intelligence (AI) chatbot able to answer freely formulated and complex questions. In the near future, ChatGPT could become the new standard for health care professionals and patients to access medical information. However, little is known about the quality of medical information provided by the AI. OBJECTIVE: We aimed to assess the reliability of medical information provided by ChatGPT. METHODS: Medical information provided by ChatGPT-4 on the 5 hepato-pancreatico-biliary (HPB) conditions with the highest global disease burden was measured with the Ensuring Quality Information for Patients (EQIP) tool. The EQIP tool is used to measure the quality of internet-available information and consists of 36 items that are divided into 3 subsections. In addition, 5 guideline recommendations per analyzed condition were rephrased as questions and input to ChatGPT, and agreement between the guidelines and the AI answer was measured by 2 authors independently. All queries were repeated 3 times to measure the internal consistency of ChatGPT. RESULTS: Five conditions were identified (gallstone disease, pancreatitis, liver cirrhosis, pancreatic cancer, and hepatocellular carcinoma). The median EQIP score across all conditions was 16 (IQR 14.5-18) for the total of 36 items. Divided by subsection, median scores for content, identification, and structure data were 10 (IQR 9.5-12.5), 1 (IQR 1-1), and 4 (IQR 4-5), respectively. Agreement between guideline recommendations and answers provided by ChatGPT was 60% (15/25). Interrater agreement as measured by the Fleiss kappa was 0.78 (P<.001), indicating substantial agreement. Internal consistency of the answers provided by ChatGPT was 100%. CONCLUSIONS: ChatGPT provides medical information of comparable quality to available static internet information. Although currently of limited quality, large language models could become the future standard for patients and health care professionals to gather medical information.
Walker HL; Ghani S; Kuemmerli C; Nebiker CA; Muller BP; Raptis DA; Staubli SM
43
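The interrater statistic in the record above (Fleiss kappa 0.78 over the guideline-agreement judgments) can be computed from an items-by-categories count matrix. A minimal NumPy sketch of Fleiss' kappa follows; the toy ratings matrix is illustrative, not the study's actual judgments.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    counts[i, j] = number of raters who assigned item i to category j.
    Every row must sum to the same number of raters n.
    """
    N = counts.shape[0]
    n = counts.sum(axis=1)[0]                    # raters per item
    p_j = counts.sum(axis=0) / (N * n)           # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 25 items rated by 2 raters as (agrees with guideline, disagrees):
# 18 full agreements on "agrees", 4 on "disagrees", 3 split ratings.
ratings = np.array([[2, 0]] * 18 + [[0, 2]] * 4 + [[1, 1]] * 3)
print(round(fleiss_kappa(ratings), 2))
```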
38465158
Enhancing Postoperative Cochlear Implant Care With ChatGPT-4: A Study on Artificial Intelligence (AI)-Assisted Patient Education and Support.
2024
Cureus
BACKGROUND: Cochlear implantation is a critical surgical intervention for patients with severe hearing loss. Postoperative care is essential for successful rehabilitation, yet access to timely medical advice can be challenging, especially in remote or resource-limited settings. Integrating advanced artificial intelligence (AI) tools like Chat Generative Pre-trained Transformer (ChatGPT)-4 in post-surgical care could bridge the patient education and support gap. AIM: This study aimed to assess the effectiveness of ChatGPT-4 as a supplementary information resource for postoperative cochlear implant patients. The focus was on evaluating the AI chatbot's ability to provide accurate, clear, and relevant information, particularly in scenarios where access to healthcare professionals is limited. MATERIALS AND METHODS: Five common postoperative questions related to cochlear implant care were posed to ChatGPT-4. The AI chatbot's responses were analyzed for accuracy, response time, clarity, and relevance. The aim was to determine whether ChatGPT-4 could serve as a reliable source of information for patients in need, especially when patients cannot reach the hospital or their specialists at that moment. RESULTS: ChatGPT-4 provided responses aligned with current medical guidelines, demonstrating accuracy and relevance. The AI chatbot responded to each query within seconds, indicating its potential as a timely resource. Additionally, the responses were clear and understandable, making complex medical information accessible to non-medical audiences. These findings suggest that ChatGPT-4 could effectively supplement traditional patient education, providing valuable support in postoperative care. CONCLUSION: The study concluded that ChatGPT-4 has significant potential as a supportive tool for cochlear implant patients after surgery. While it cannot replace professional medical advice, ChatGPT-4 can provide immediate, accessible, and understandable information, which is particularly beneficial when specialists cannot be reached immediately. This underscores the utility of AI in enhancing patient care and supporting cochlear implantation.
Aliyeva A; Sari E; Alaskarov E; Nasirov R
32
38420978
Performance of ChatGPT in Israeli Hebrew Internal Medicine National Residency Exam.
2024
The Israel Medical Association journal : IMAJ
BACKGROUND: Completing internal medicine specialty training in Israel involves passing the Israel National Internal Medicine Exam (Shlav Aleph), a challenging multiple-choice test. Chat generative pre-trained transformer (ChatGPT) 3.5, a language model, is increasingly used for exam preparation. OBJECTIVES: To assess the ability of ChatGPT 3.5 to pass the Israel National Internal Medicine Exam in Hebrew. METHODS: Using the 2023 Shlav Aleph exam questions, ChatGPT received prompts in Hebrew. Text-based questions were analyzed after the appeals process, comparing ChatGPT's answers to the official answer key. RESULTS: ChatGPT 3.5 correctly answered 36.6% of the 133 analyzed questions, with consistent performance across topics, except for challenges in nephrology and biostatistics. CONCLUSIONS: While ChatGPT 3.5 has excelled in English medical exams, its performance on the Hebrew Shlav Aleph was suboptimal. Contributing factors include limited training data in Hebrew, translation complexities, and unique language structures. Further investigation is essential for its effective adaptation to Hebrew medical exam preparation.
Ozeri DJ; Cohen A; Bacharach N; Ukashi O; Oppenheim A
21
39911377
Evaluation of GPT-4 concordance with North American Spine Society guidelines for lumbar fusion surgery.
2025
North American Spine Society journal
BACKGROUND: Concordance with evidence-based medicine (EBM) guidelines is associated with improved clinical outcomes in spine surgery. The North American Spine Society (NASS) has published coverage guidelines on indications for lumbar fusion surgery, with a recent survey demonstrating a 60% concordance rate across its members. GPT-4 is a popular deep learning model that receives knowledge training across public databases, including those containing EBM guidelines. Prior research has explored the potential utility of artificial intelligence (AI) software for adherence to spine surgery practices and guidelines, inviting the opportunity to further investigate its application to lumbar fusion surgery with current AI models. METHODS: Seventeen well-validated clinical vignettes with specific indications for or against lumbar fusion based on NASS criteria were obtained from a prior published research study. Each case was transcribed into a standardized prompt and entered into GPT-4 to obtain a decision on whether fusion is indicated. Interquery reliability was assessed with serial identical queries using Fleiss' kappa. The majority response among serial queries was taken as the final GPT-4 decision. All queries were entered as separate strings. The investigator entering the prompts was blinded to the NASS-concordant decisions for the cases until data collection was complete. Decisions by GPT-4 and NASS guidelines were compared with chi-square analysis. RESULTS: GPT-4 responses for 15/17 (88.2%) of the clinical vignettes were in concordance with NASS EBM lumbar fusion guidelines. There was a significant association in clinical decision-making when determining indication for spine fusion surgery between GPT-4 and NASS guidelines (χ2 = 9.75; p<.01). There was substantial agreement among the sets of responses generated by GPT-4 for each clinical case (κ = 0.71; p<.001). CONCLUSIONS: There is significant concordance between GPT-4 responses and NASS EBM indications for lumbar fusion surgery. AI and deep learning models may prove to be an effective adjunct tool for clinical decision-making within modern spine surgery practices.
Khoylyan A; Salvato J; Vazquez F; Girgis M; Tang A; Chen T
32
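A sketch of the concordance test reported above. The abstract gives only the totals (15/17 concordant, χ2 = 9.75), not the 2x2 cell counts, so the split below is hypothetical; scipy's chi2_contingency stands in for whatever software the authors used, with the Yates correction disabled for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                 NASS: fusion indicated | not indicated  (hypothetical split)
table = np.array([[8, 1],    # GPT-4: fusion indicated
                  [1, 7]])   # GPT-4: fusion not indicated
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```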
40059391
The Need to Improve the Medical Subject Headings (MeSH) and the Excerpta Medica Tree (EMTREE) Thesauri to Perform Systematic Review on Oral Potentially Malignant Disorders.
2025
Journal of oral pathology & medicine : official publication of the International Association of Oral Pathologists and the American Academy of Oral Pathology
BACKGROUND: Despite recent advancements in the understanding and classification of oral potentially malignant disorders (OPMD), their terminology remains inconsistent and heterogeneous throughout the scientific literature, thus affecting evidence-based decision-making relevant to the clinical management of these disorders. Updating this classification is necessary to improve the indexing and retrieval of OPMD publications, in particular for systematic reviews and meta-analyses. METHODS: Through a critical appraisal of the Medical Subject Headings (MeSH) and Excerpta Medica Tree (EMTREE) thesauri, we assessed gaps in the indexing of OPMD literature and proposed improvements for enhanced categorisation and retrieval. RESULTS: The present study identifies inconsistencies and limitations in the classification of these disorders across the major medical databases, which may be summarized in the following findings: a) the MeSH database lacks a dedicated subject heading for "oral potentially malignant disorders"; b) EMTREE indexing is incomplete, with only 5 of the 11 recognised OPMD having corresponding terms; c) incoherent controlled vocabulary mappings hinder systematic literature retrieval. CONCLUSION: To ensure accurate evidence synthesis, the authors recommend searching both PubMed and Embase for OPMD studies. Moreover, the use of Embase's PubMed query translator and large language models, such as ChatGPT, may lead to retrieval biases due to indexing discrepancies, posing challenges for early-career researchers and students. We recommend introducing "oral potentially malignant disorders" as a standardised subject heading. Evidence-based medicine underpins clinical decision support systems, which rely on standardised clinical coding for reliable health information. Enhanced medical ontologies will facilitate structured clinical coding, ensuring interoperability and improving clinical decision support systems.
Caponio VCA; Musella G; Perez-Sayans M; Lo Muzio L; Amaral Mendes R; Lopez-Pintor RM
0-1
39941547
Performance of ChatGPT in Pediatric Audiology as Rated by Students and Experts.
2025
Journal of clinical medicine
Background: Despite the growing popularity of artificial intelligence (AI)-based systems such as ChatGPT, there is still little evidence of their effectiveness in audiology, particularly in pediatric audiology. The present study aimed to verify the performance of ChatGPT in this field, as assessed by both students and professionals, and to compare its Polish and English versions. Methods: ChatGPT was presented with 20 questions, which were posed twice, first in Polish and then in English. A group of 20 students and 16 professionals in the field of audiology and otolaryngology rated the answers on a Likert scale of 1 to 5 in terms of correctness, relevance, completeness, and linguistic accuracy. Both groups were also asked to assess the usefulness of ChatGPT as a source of information for patients, in educational settings for students, and in professional work. Results: Both students and professionals generally rated ChatGPT's responses to be satisfactory. For most of the questions, ChatGPT's responses were rated somewhat higher by the students than the professionals, although statistically significant differences were only evident for completeness and linguistic accuracy. Those who rated ChatGPT's responses more highly also rated its usefulness more highly. Conclusions: ChatGPT can possibly be used for quick information retrieval, especially by non-experts, but it lacks the depth and reliability required by professionals. The different ratings given by students and professionals, and its language dependency, indicate it works best as a supplementary tool, not as a replacement for verifiable sources, particularly in a healthcare setting.
Ratuszniak A; Gos E; Lorens A; Skarzynski PH; Skarzynski H; Jedrzejczak WW
32
37128784
Performance of ChatGPT on the Plastic Surgery Inservice Training Examination.
2023
Aesthetic surgery journal
BACKGROUND: Developed originally as a tool for resident self-evaluation, the Plastic Surgery Inservice Training Examination (PSITE) has become a standardized tool adopted by Plastic Surgery residency programs. The introduction of large language models (LLMs), such as ChatGPT (OpenAI, San Francisco, CA), has demonstrated the potential to help propel the field of Plastic Surgery. OBJECTIVES: The authors of this study wanted to assess whether or not ChatGPT could be utilized as a tool in resident education by assessing its accuracy on the PSITE. METHODS: Questions were obtained from the 2022 PSITE, which was present on the American Council of Academic Plastic Surgeons (ACAPS) website. Questions containing images or tables were carefully inspected and flagged before being inputted into ChatGPT. All responses by ChatGPT were qualified utilizing the properties of natural coherence. Responses that were found to be incorrect were divided into the following categories: logical, informational, or explicit fallacy. RESULTS: ChatGPT answered a total of 242 questions with an accuracy of 54.96%. The software incorporated logical reasoning in 88.8% of questions, internal information in 95.5% of questions, and external information in 92.1% of questions. When stratified by correct and incorrect responses, we determined that there was a statistically significant difference in ChatGPT's use of external information (P < .05). CONCLUSIONS: ChatGPT is a versatile tool that has the potential to impact resident education by providing general knowledge, clarifying information, providing case-based learning, and promoting evidence-based medicine. With advancements in LLM and artificial intelligence (AI), it is possible that ChatGPT may be an impactful tool for resident education within Plastic Surgery.
Gupta R; Herzog I; Park JB; Weisberger J; Firouzbakht P; Ocon V; Chao J; Lee ES; Mailey BA
32
39148849
Foundational model aided automatic high-throughput drug screening using self-controlled cohort study.
2024
medRxiv : the preprint server for health sciences
BACKGROUND: Developing a medicine from scratch through governmental authorization, and detecting adverse drug reactions (ADRs), have rarely been economical, expeditious, or low-risk undertakings. The availability of large-scale observational healthcare databases and the popularity of large language models offer an unparalleled opportunity to enable automatic high-throughput drug screening for both repurposing and pharmacovigilance. OBJECTIVES: To demonstrate a general workflow for automatic high-throughput drug screening with the following advantages: (i) associations between various exposures and diseases can be estimated; (ii) both repurposing and pharmacovigilance are integrated; (iii) accurate exposure length for each prescription is parsed from clinical texts; (iv) intrinsic relationships between drugs and diseases are removed jointly by bioinformatic mapping and a large language model (ChatGPT); (v) causal interpretations of incidence rate contrasts are provided. METHODS: Using a self-controlled cohort study design in which subjects serve as their own control group, we tested the intention-to-treat association between medications and the incidence of diseases. Exposure length for each prescription was determined by parsing common dosages in English free text into a structured format. The exposure period runs from the initial prescription to treatment discontinuation; an equal-length period preceding the initial treatment serves as the control period. Clinical outcomes and categories were identified using existing phenotyping algorithms. Incidence rate ratios (IRRs) were tested using uniformly most powerful (UMP) unbiased tests. RESULTS: We assessed 3,444 medications and 276 diseases in 6,613,198 patients from the Clinical Practice Research Datalink (CPRD), a UK primary care electronic health records (EHR) database spanning 1987 to 2018. Because of the built-in selection bias of self-controlled cohort studies, ingredient-disease pairs confounded by deterministic medical relationships were removed using existing mappings from RxNorm, with missing mappings resolved by querying ChatGPT. A total of 16,901 drug-disease pairs showed a significant risk reduction and can be considered candidates for repurposing, while 11,089 pairs showed a significant risk increase, where drug safety might instead be a concern. CONCLUSIONS: This work developed a data-driven, nonparametric, hypothesis-generating, and automatic high-throughput workflow that reveals the potential of natural language processing in pharmacoepidemiology. We demonstrate the paradigm on a large observational health dataset to help discover potential novel therapies and adverse drug effects. The framework of this study can be extended to other observational medical databases.
Xu S; Cobzaru R; Finkelstein SN; Welsch RE; Ng K; Middleton L
10
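With equal exposed and control person-time, as in the self-controlled design above, the UMP unbiased comparison of two Poisson counts reduces to an exact binomial test conditional on the total count. A sketch with invented incident counts:

```python
from scipy.stats import binomtest

exposed_cases, control_cases = 12, 30   # hypothetical incident counts
irr = exposed_cases / control_cases     # equal person-time, so IRR = count ratio

# Under H0 (equal rates), exposed_cases ~ Binomial(total, 0.5) given the total.
result = binomtest(exposed_cases, exposed_cases + control_cases, p=0.5)
print(f"IRR={irr:.2f}, two-sided p={result.pvalue:.4f}")
```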
39229463
Diagnostic performance of generative artificial intelligences for a series of complex case reports.
2024
Digital health
BACKGROUND: Diagnostic performance of generative artificial intelligences (AIs) using large language models (LLMs) across comprehensive medical specialties is still unknown. OBJECTIVE: We aimed to evaluate the diagnostic performance of generative AIs using LLMs in complex case series across comprehensive medical fields. METHODS: We analyzed published case reports from the American Journal of Case Reports from January 2022 to March 2023. We excluded pediatric cases and those primarily focused on management. We utilized three generative AIs to generate the top 10 differential-diagnosis (DDx) lists from case descriptions: the fourth-generation chat generative pre-trained transformer (ChatGPT-4), Google Gemini (previously Bard), and LLM Meta AI 2 (LLaMA2) chatbot. Two independent physicians assessed the inclusion of the final diagnosis in the lists generated by the AIs. RESULTS: Out of 557 consecutive case reports, 392 were included. The inclusion rates of the final diagnosis within top 10 DDx lists were 86.7% (340/392) for ChatGPT-4, 68.6% (269/392) for Google Gemini, and 54.6% (214/392) for LLaMA2 chatbot. The top diagnoses matched the final diagnoses in 54.6% (214/392) for ChatGPT-4, 31.4% (123/392) for Google Gemini, and 23.0% (90/392) for LLaMA2 chatbot. ChatGPT-4 showed higher diagnostic accuracy than Google Gemini (P < 0.001) and LLaMA2 chatbot (P < 0.001). Additionally, Google Gemini outperformed LLaMA2 chatbot within the top 10 DDx lists (P < 0.001) and as the top diagnosis (P = 0.010). CONCLUSIONS: This study demonstrated the diagnostic performance of generative AIs including ChatGPT-4, Google Gemini, and LLaMA2 chatbot. ChatGPT-4 exhibited higher diagnostic accuracy than the other platforms. These findings suggest the importance of understanding the differences in diagnostic performance among generative AIs, especially in complex case series across comprehensive medical fields, like general medicine.
Hirosawa T; Harada Y; Mizuta K; Sakamoto T; Tokumasu K; Shimizu T
10
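A sketch of the top-10 differential-diagnosis scoring used above, reduced to naive exact-string matching for illustration; in the study, two independent physicians judged whether the final diagnosis was present. The helper name score_ddx and the two toy cases are invented.

```python
def score_ddx(cases: list[tuple[list[str], str]]) -> tuple[float, float]:
    """cases = [(ranked DDx list, final diagnosis), ...].

    Returns (top-10 inclusion rate, top-1 match rate)."""
    included = sum(1 for ddx, final in cases if final in ddx)
    top = sum(1 for ddx, final in cases if ddx and ddx[0] == final)
    return included / len(cases), top / len(cases)

cases = [(["pulmonary embolism", "pneumonia"], "pulmonary embolism"),
         (["migraine", "subarachnoid hemorrhage"], "subarachnoid hemorrhage")]
print(score_ddx(cases))  # (1.0, 0.5)
```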
39094112
Patient-Representing Population's Perceptions of GPT-Generated Versus Standard Emergency Department Discharge Instructions: Randomized Blind Survey Assessment.
2024
Journal of medical Internet research
BACKGROUND: Discharge instructions are a key form of documentation and patient communication in the time of transition from the emergency department (ED) to home. Discharge instructions are time-consuming and often underprioritized, especially in the ED, leading to discharge delays and possibly impersonal patient instructions. Generative artificial intelligence and large language models (LLMs) offer promising methods of creating high-quality and personalized discharge instructions; however, there exists a gap in understanding patient perspectives of LLM-generated discharge instructions. OBJECTIVE: We aimed to assess the use of LLMs such as ChatGPT in synthesizing accurate and patient-accessible discharge instructions in the ED. METHODS: We synthesized 5 unique, fictional ED encounters to emulate real ED encounters that included a diverse set of clinician history, physical notes, and nursing notes. These were passed to GPT-4 in Azure OpenAI Service (Microsoft) to generate LLM-generated discharge instructions. Standard discharge instructions were also generated for each of the 5 unique ED encounters. All GPT-generated and standard discharge instructions were then formatted into standardized after-visit summary documents. These after-visit summaries containing either GPT-generated or standard discharge instructions were randomly and blindly administered to Amazon MTurk respondents representing patient populations through Amazon MTurk Survey Distribution. Discharge instructions were assessed based on metrics of interpretability of significance, understandability, and satisfaction. RESULTS: Our findings revealed that survey respondents' perspectives regarding GPT-generated and standard discharge instructions were significantly (P=.01) more favorable toward GPT-generated return precautions, and all other sections were considered noninferior to standard discharge instructions. Of the 156 survey respondents, GPT-generated discharge instructions were assigned favorable ratings, "agree" and "strongly agree," more frequently along the metric of interpretability of significance in discharge instruction subsections regarding diagnosis, procedures, treatment, post-ED medications or any changes to medications, and return precautions. Survey respondents found GPT-generated instructions to be more understandable when rating procedures, treatment, post-ED medications or medication changes, post-ED follow-up, and return precautions. Satisfaction with GPT-generated discharge instruction subsections was the most favorable in procedures, treatment, post-ED medications or medication changes, and return precautions. Wilcoxon rank-sum test of Likert responses revealed significant differences (P=.01) in the interpretability of significant return precautions in GPT-generated discharge instructions compared to standard discharge instructions but not for other evaluation metrics and discharge instruction subsections. CONCLUSIONS: This study demonstrates the potential for LLMs such as ChatGPT to act as a method of augmenting current documentation workflows in the ED to reduce the documentation burden of physicians. The ability of LLMs to provide tailored instructions for patients by improving readability and making instructions more applicable to patients could improve upon the methods of communication that currently exist.
Huang T; Safranek C; Socrates V; Chartash D; Wright D; Dilip M; Sangal RB; Taylor RA
10
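A sketch of the Likert comparison reported above: scipy's mannwhitneyu is the standard implementation of the Wilcoxon rank-sum test used by the authors; the ratings below are invented.

```python
from scipy.stats import mannwhitneyu  # equivalent to the Wilcoxon rank-sum test

# Hypothetical 1-5 Likert ratings for one discharge-instruction subsection
gpt_ratings = [5, 4, 4, 5, 3, 5, 4, 4]
standard_ratings = [3, 4, 3, 2, 4, 3, 3, 4]
stat, p = mannwhitneyu(gpt_ratings, standard_ratings, alternative="two-sided")
print(f"U={stat}, p={p:.3f}")
```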
40385316
Can large language models detect drug-drug interactions leading to adverse drug reactions?
2025
Therapeutic advances in drug safety
BACKGROUND: Drug-drug interactions (DDIs) are an important cause of adverse drug reactions (ADRs). Could large language models (LLMs) serve as valuable tools for pharmacovigilance specialists in detecting DDIs that lead to ADR notifications? OBJECTIVE: To compare the performance of three LLMs (ChatGPT, Gemini, and Claude) in detecting and explaining clinically significant DDIs that have led to an ADR. DESIGN: Observational cross-sectional study. METHODS: We used the French National Pharmacovigilance Database to randomly extract Individual Case Safety Reports (ICSRs) of ADRs with a DDI (positive controls) and ICSRs of ADRs without a DDI (negative controls) registered in 2022. Interaction cases were classified by difficulty level (level-1 DDIs being the easiest and level-2 DDIs the most difficult). We gave each LLM (ChatGPT, Gemini, and Claude) the same prompt and case summary. Sensitivity, specificity, and F-measure were calculated for each LLM in detecting DDIs in the case summaries. RESULTS: We assessed 82 ICSRs with DDIs and 22 ICSRs without DDIs. Among ICSRs with DDIs, 37 involved level-1 DDIs and 45 involved level-2 DDIs. Correct responses were more frequent for level-1 DDIs than for level-2 DDIs. Regardless of difficulty level, ChatGPT detected 99% of DDI cases, and Claude and Gemini each detected 95%. The percentage of correct answers to all DDI-related questions was 66% for ChatGPT, 68% for Claude, and 33% for Gemini. ChatGPT and Claude produced comparable results and outperformed Gemini in detecting the drugs involved in a DDI (F-measure 0.83-0.85 for ChatGPT and Claude vs 0.63-0.68 for Gemini). All models exhibited low specificity (ChatGPT 0.68, Claude 0.64, and Gemini 0.36) and reported nonexistent DDIs for negative controls. CONCLUSION: LLMs can detect DDIs leading to pharmacovigilance cases but cannot reliably exclude DDIs in cases without interactions. Pharmacologists remain crucial for assessing whether a DDI is implicated in an ADR.
Sicard J; Montastruc F; Achalme C; Jonville-Bera AP; Songue P; Babin M; Soeiro T; Schiro P; de Canecaude C; Barus R
10
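The case-detection metrics above can be reproduced approximately from the reported figures. The counts below are back-calculated from ChatGPT's ~99% sensitivity on 82 DDI cases and 0.68 specificity on 22 controls; note that the paper's F-measures (0.83-0.85) refer to the separate drug-identification task, which is not reproduced here.

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Sensitivity, specificity, and F1 from a 2x2 confusion matrix."""
    return {
        "sensitivity": round(tp / (tp + fn), 2),
        "specificity": round(tn / (tn + fp), 2),
        "F1": round(2 * tp / (2 * tp + fp + fn), 2),
    }

# Approximate back-calculation for ChatGPT: 81/82 DDI cases detected, 15/22
# controls correctly called negative.
print(detection_metrics(tp=81, fp=7, fn=1, tn=15))
```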
39702867
Aligning Large Language Models with Humans: A Comprehensive Survey of ChatGPT's Aptitude in Pharmacology.
2025
Drugs
BACKGROUND: Due to the lack of a comprehensive pharmacology test set, evaluating the potential and value of large language models (LLMs) in pharmacology is complex and challenging. AIMS: This study aims to provide a test set reference for assessing the application potential of both general-purpose and specialized LLMs in pharmacology. METHODS: We constructed a pharmacology test set consisting of three tasks: drug information retrieval, lead compound structure optimization, and research trend summarization and analysis. Subsequently, we compared the performance of the general-purpose LLMs GPT-3.5 and GPT-4 on this test set. RESULTS: The results indicate that GPT-3.5 and GPT-4 can effectively understand instructions for information retrieval, scheme optimization, and trend summarization in pharmacology, showing significant potential in basic pharmacology tasks, especially in areas such as drug pharmacological properties, pharmacokinetics, mode of action, and toxicity prediction. These general LLMs also effectively summarize the current challenges and future trends in this field, proving them a valuable resource for interdisciplinary pharmacology researchers. However, the limitations of ChatGPT become evident when handling tasks such as drug identification queries, drug interaction information retrieval, and drug structure simulation optimization. It struggles to provide accurate interaction information for individual or specific drugs and cannot optimize specific drugs. This lack of depth in knowledge integration and analysis limits its application in scientific research and clinical exploration. CONCLUSION: Therefore, exploring retrieval-augmented generation (RAG) or integrating proprietary knowledge bases and knowledge graphs into pharmacology-oriented ChatGPT systems would likely yield favorable results. This integration will further optimize the potential of LLMs in pharmacology.
Zhang Y; Ren S; Wang J; Lu J; Wu C; He M; Liu X; Wu R; Zhao J; Zhan C; Du D; Zhan Z; Singla RK; Shen B
10
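The conclusion above points to retrieval-augmented generation (RAG). A minimal, model-agnostic sketch of the retrieval step: pick the most relevant knowledge-base snippet by TF-IDF cosine similarity and prepend it to the question before any LLM call. The two-entry knowledge base and the query are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb = [
    "Warfarin: anticoagulant; major interactions with NSAIDs and amiodarone.",
    "Metformin: biguanide; risk of lactic acidosis with iodinated contrast.",
]
query = "Which drugs interact with warfarin?"

# Rank knowledge-base snippets against the query and keep the best match.
vec = TfidfVectorizer().fit(kb + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(kb))[0]
context = kb[sims.argmax()]

# The grounded prompt that would be handed to the LLM (call left abstract).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```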
39507462
Artificial intelligence generates proficient Spanish obstetrics and gynecology counseling templates.
2024
AJOG global reports
BACKGROUND: Effective patient counseling in obstetrics and gynecology is vital. Existing language barriers between Spanish-speaking patients and English-speaking providers may negatively impact patient understanding and adherence to medical recommendations, as language discordance between provider and patient has been associated with medication noncompliance, adverse drug events, and underuse of preventative care. Artificial intelligence large language models may be a helpful adjunct to patient care by generating counseling templates in Spanish. OBJECTIVES: The primary objective was to determine whether large language models can generate proficient counseling templates in Spanish on obstetrics and gynecology topics. Secondary objectives were to (1) compare the content, quality, and comprehensiveness of generated templates between different large language models, (2) compare the proficiency ratings among the large language model-generated templates, and (3) assess which generated templates had potential for integration into clinical practice. STUDY DESIGN: Cross-sectional study using free open-access large language models to generate counseling templates in Spanish on select obstetrics and gynecology topics. Native Spanish-speaking practicing obstetricians and gynecologists, who were blinded to the source large language model for each template, reviewed and subjectively scored each template on its content, quality, and comprehensiveness and considered it for integration into clinical practice. Proficiency ratings were calculated as a composite score of content, quality, and comprehensiveness. A score of >4 was considered proficient. Basic inferential statistics were performed. RESULTS: All artificial intelligence large language models generated proficient obstetrics and gynecology counseling templates in Spanish, with Google Bard generating the most proficient template (P<.0001) and outperforming the others in comprehensiveness (P=.03), quality (P=.04), and content (P=.01). Microsoft Bing received the lowest scores in these domains. Physicians were likely to be willing to incorporate the templates into clinical practice, with no significant discrepancy in the likelihood of integration based on the source large language model (P=.45). CONCLUSIONS: Large language models have the potential to generate proficient obstetrics and gynecology counseling templates in Spanish, which physicians would integrate into their clinical practice. Google Bard scored the highest across all attributes. There is an opportunity to use large language models to try to mitigate language barriers in health care. Future studies should assess patient satisfaction, understanding, and adherence to clinical plans following receipt of these counseling templates.
Solmonovich RL; Kouba I; Quezada O; Rodriguez-Ayala G; Rojas V; Bonilla K; Espino K; Bracero LA
0-1
38819879
Redefining Health Care Data Interoperability: Empirical Exploration of Large Language Models in Information Exchange.
2024
Journal of medical Internet research
BACKGROUND: Efficient data exchange and health care interoperability are impeded by medical records often being in nonstandardized or unstructured natural language format. Advanced language models, such as large language models (LLMs), may help overcome current challenges in information exchange. OBJECTIVE: This study aims to evaluate the capability of LLMs in transforming and transferring health care data to support interoperability. METHODS: Using data from the Medical Information Mart for Intensive Care III and UK Biobank, the study conducted 3 experiments. Experiment 1 assessed the accuracy of transforming structured laboratory results into unstructured format. Experiment 2 explored the conversion of diagnostic codes between the coding frameworks of the ICD-9-CM (International Classification of Diseases, Ninth Revision, Clinical Modification), and Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) using a traditional mapping table and a text-based approach facilitated by the LLM ChatGPT. Experiment 3 focused on extracting targeted information from unstructured records that included comprehensive clinical information (discharge notes). RESULTS: The text-based approach showed a high conversion accuracy in transforming laboratory results (experiment 1) and an enhanced consistency in diagnostic code conversion, particularly for frequently used diagnostic names, compared with the traditional mapping approach (experiment 2). In experiment 3, the LLM showed a positive predictive value of 87.2% in extracting generic drug names. CONCLUSIONS: This study highlighted the potential role of LLMs in significantly improving health care data interoperability, demonstrated by their high accuracy and efficiency in data transformation and exchange. The LLMs hold vast potential for enhancing medical data exchange without complex standardization for medical terms and data structure.
Yoon D; Han C; Kim DW; Kim S; Bae S; Ryu JA; Choi Y
10
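A sketch of the two conversion strategies compared in experiment 2 above: a traditional one-to-one mapping table, with a text-based LLM prompt as fallback. The table entries and prompt wording are illustrative, not the study's actual mapping resources, and the LLM call itself is left abstract.

```python
# Tiny illustrative ICD-9-CM -> SNOMED CT mapping table (not the study's).
ICD9_TO_SNOMED = {
    "250.00": "44054006",   # type 2 diabetes mellitus
    "401.9": "38341003",    # essential hypertension
}

def convert(icd9_code: str, diagnosis_text: str) -> str:
    if icd9_code in ICD9_TO_SNOMED:            # traditional mapping-table lookup
        return ICD9_TO_SNOMED[icd9_code]
    # Text-based fallback: hand the free-text diagnosis name to an LLM.
    return ("LLM prompt: 'Give the SNOMED CT concept ID for the diagnosis: "
            f"{diagnosis_text}'")

print(convert("401.9", "essential hypertension"))
print(convert("428.0", "congestive heart failure"))
```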
40374171
Patient Triage and Guidance in Emergency Departments Using Large Language Models: Multimetric Study.
2025
Journal of medical Internet research
BACKGROUND: Emergency departments (EDs) face significant challenges due to overcrowding, prolonged waiting times, and staff shortages, leading to increased strain on health care systems. Efficient triage systems and accurate departmental guidance are critical for alleviating these pressures. Recent advancements in large language models (LLMs), such as ChatGPT, offer potential solutions for improving patient triage and outpatient department selection in emergency settings. OBJECTIVE: The study aimed to assess the accuracy, consistency, and feasibility of GPT-4-based ChatGPT models (GPT-4o and GPT-4-Turbo) for patient triage using the Modified Early Warning Score (MEWS) and evaluate GPT-4o's ability to provide accurate outpatient department guidance based on simulated patient scenarios. METHODS: A 2-phase experimental study was conducted. In the first phase, 2 ChatGPT models (GPT-4o and GPT-4-Turbo) were evaluated for MEWS-based patient triage accuracy using 1854 simulated patient scenarios. Accuracy and consistency were assessed before and after prompt engineering. In the second phase, GPT-4o was tested for outpatient department selection accuracy using 264 scenarios sourced from the Chinese Medical Case Repository. Each scenario was independently evaluated by GPT-4o thrice. Data analyses included Wilcoxon tests, Kendall correlation coefficients, and logistic regression analyses. RESULTS: In the first phase, ChatGPT's triage accuracy, based on MEWS, improved following prompt engineering. Interestingly, GPT-4-Turbo outperformed GPT-4o. GPT-4-Turbo achieved an accuracy of 100% compared to GPT-4o's accuracy of 96.2%, despite GPT-4o initially showing better performance prior to prompt engineering. This finding suggests that GPT-4-Turbo may be more adaptable to prompt optimization. In the second phase, GPT-4o, with superior performance on emotional responsiveness compared to GPT-4-Turbo, demonstrated an overall guidance accuracy of 92.63% (95% CI 90.34%-94.93%), with the highest accuracy in internal medicine (93.51%, 95% CI 90.85%-96.17%) and the lowest in general surgery (91.46%, 95% CI 86.50%-96.43%). CONCLUSIONS: ChatGPT demonstrated promising capability for supporting patient triage and outpatient guidance in EDs. GPT-4-Turbo showed greater adaptability to prompt engineering, whereas GPT-4o exhibited superior responsiveness and emotional interaction, which are essential for patient-facing tasks. Future studies should explore real-world implementation and address the identified limitations to enhance ChatGPT's clinical integration.
Wang C; Wang F; Li S; Ren QW; Tan X; Fu Y; Liu D; Qian G; Cao Y; Yin R; Li K
10
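The triage ground truth above is the Modified Early Warning Score (MEWS). A sketch following the commonly cited Subbe et al. (2001) bands; institutions vary these cutoffs, so treat them as illustrative rather than as the study's exact scoring.

```python
def mews(resp_rate: int, heart_rate: int, sys_bp: int,
         temp_c: float, avpu: str) -> int:
    """MEWS from five bedside observations (one common published variant)."""
    score = (2 if resp_rate < 9 else
             0 if resp_rate <= 14 else
             1 if resp_rate <= 20 else
             2 if resp_rate <= 29 else 3)
    score += (2 if heart_rate < 40 else
              1 if heart_rate <= 50 else
              0 if heart_rate <= 100 else
              1 if heart_rate <= 110 else
              2 if heart_rate <= 129 else 3)
    score += (3 if sys_bp <= 70 else
              2 if sys_bp <= 80 else
              1 if sys_bp <= 100 else
              0 if sys_bp <= 199 else 2)
    score += 2 if temp_c < 35.0 else (0 if temp_c < 38.5 else 2)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

# Example: RR 24, HR 118, SBP 92, T 38.7 C, responds to voice -> 2+2+1+2+1 = 8
print(mews(24, 118, 92, 38.7, "voice"))
```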
38432929
Performance of a Large Language Model on Japanese Emergency Medicine Board Certification Examinations.
2024
Journal of Nippon Medical School = Nippon Ika Daigaku zasshi
BACKGROUND: Emergency physicians need a broad range of knowledge and skills to address critical medical, traumatic, and environmental conditions. Artificial intelligence (AI), including large language models (LLMs), has potential applications in healthcare settings; however, the performance of LLMs in emergency medicine remains unclear. METHODS: To evaluate the reliability of information provided by ChatGPT, an LLM was given the questions set by the Japanese Association of Acute Medicine in its board certification examinations over a period of 5 years (2018-2022) and programmed to answer them twice. Statistical analysis was used to assess agreement of the two responses. RESULTS: The LLM successfully answered 465 of the 475 text-based questions, achieving an overall correct response rate of 62.3%. For questions without images, the rate of correct answers was 65.9%. For questions with images that were not explained to the LLM, the rate of correct answers was only 52.0%. The annual rates of correct answers to questions without images ranged from 56.3% to 78.8%. Accuracy was better for scenario-based questions (69.1%) than for stand-alone questions (62.1%). Agreement between the two responses was substantial (kappa = 0.70). Factual error accounted for 82% of the incorrectly answered questions. CONCLUSION: An LLM performed satisfactorily on an emergency medicine board certification examination in Japanese and without images. However, factual errors in the responses highlight the need for physician oversight when using LLMs.
Igarashi Y; Nakahara K; Norii T; Miyake N; Tagami T; Yokobori S
21
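A sketch of the run-to-run consistency check described above: each question was answered twice, and agreement between the two answer sets was summarized with a kappa statistic (the abstract reports kappa = 0.70). The answer strings below are toy multiple-choice picks.

```python
from sklearn.metrics import cohen_kappa_score

# Two runs of the same exam: the model's picked options for ten questions.
run1 = ["a", "c", "b", "d", "a", "e", "c", "b", "d", "a"]
run2 = ["a", "c", "b", "a", "a", "e", "c", "c", "d", "a"]
print(round(cohen_kappa_score(run1, run2), 2))
```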
39137031
Educational Utility of Clinical Vignettes Generated in Japanese by ChatGPT-4: Mixed Methods Study.
2024
JMIR medical education
BACKGROUND: Evaluating the accuracy and educational utility of artificial intelligence-generated medical cases, especially those produced by large language models such as ChatGPT-4 (developed by OpenAI), is crucial yet underexplored. OBJECTIVE: This study aimed to assess the educational utility of ChatGPT-4-generated clinical vignettes and their applicability in educational settings. METHODS: Using a convergent mixed methods design, a web-based survey was conducted from January 8 to 28, 2024, to evaluate 18 medical cases generated by ChatGPT-4 in Japanese. In the survey, 6 main question items were used to evaluate the quality of the generated clinical vignettes and their educational utility, which are information quality, information accuracy, educational usefulness, clinical match, terminology accuracy (TA), and diagnosis difficulty. Feedback was solicited from physicians specializing in general internal medicine or general medicine and experienced in medical education. Chi-square and Mann-Whitney U tests were performed to identify differences among cases, and linear regression was used to examine trends associated with physicians' experience. Thematic analysis of qualitative feedback was performed to identify areas for improvement and confirm the educational utility of the cases. RESULTS: Of the 73 invited participants, 71 (97%) responded. The respondents, primarily male (64/71, 90%), spanned a broad range of practice years (from 1976 to 2017) and represented diverse hospital sizes throughout Japan. The majority deemed the information quality (mean 0.77, 95% CI 0.75-0.79) and information accuracy (mean 0.68, 95% CI 0.65-0.71) to be satisfactory, with these responses being based on binary data. The average scores assigned were 3.55 (95% CI 3.49-3.60) for educational usefulness, 3.70 (95% CI 3.65-3.75) for clinical match, 3.49 (95% CI 3.44-3.55) for TA, and 2.34 (95% CI 2.28-2.40) for diagnosis difficulty, based on a 5-point Likert scale. Statistical analysis showed significant variability in content quality and relevance across the cases (P<.001 after Bonferroni correction). Participants suggested improvements in generating physical findings, using natural language, and enhancing medical TA. The thematic analysis highlighted the need for clearer documentation, clinical information consistency, content relevance, and patient-centered case presentations. CONCLUSIONS: ChatGPT-4-generated medical cases written in Japanese possess considerable potential as resources in medical education, with recognized adequacy in quality and accuracy. Nevertheless, there is a notable need for enhancements in the precision and realism of case details. This study emphasizes ChatGPT-4's value as an adjunctive educational tool in the medical field, requiring expert oversight for optimal application.
Takahashi H; Shikino K; Kondo T; Komori A; Yamada Y; Saita M; Naito T
0-1
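A sketch of the multiple-comparison step mentioned above (significance across cases after Bonferroni correction); the per-case p values are invented, standing in for the study's 18-case comparisons.

```python
from statsmodels.stats.multitest import multipletests

raw_p = [0.0004, 0.003, 0.02, 0.04, 0.2, 0.7]   # hypothetical per-case tests
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(list(zip([round(p, 4) for p in p_adj], list(reject))))
```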
39823287
Performance of ChatGPT-4o in the diagnostic workup of fever among returning travellers requiring hospitalization: a validation study.
2025
Journal of travel medicine
BACKGROUND: Febrile illness in returned travellers presents a diagnostic challenge in non-endemic settings. Chat generative pretrained transformer (ChatGPT) has the potential to assist in medical tasks, yet its diagnostic performance in clinical settings has rarely been evaluated. We conducted a validation assessment of ChatGPT-4o's performance in the workup of fever in returning travellers. METHODS: We retrieved the medical records of returning travellers hospitalized with fever during 2009-2024. Their clinical scenarios at time of presentation to the emergency department were prompted to ChatGPT-4o, using a detailed uniform format. The model was further prompted with four consistent questions concerning the differential diagnosis and recommended workup. To avoid training, we kept the model blinded to the final diagnosis. Our primary outcome was ChatGPT-4o's success rates in predicting the final diagnosis when requested to specify the top three differential diagnoses. Secondary outcomes were success rates when prompted to specify the single most likely diagnosis, and all necessary diagnostics. We also assessed ChatGPT-4o as a predicting tool for malaria and qualitatively evaluated its failures. RESULTS: ChatGPT-4o predicted the final diagnosis in 68% [95% confidence interval (CI) 59-77%], 78% (95% CI 69-85%) and 83% (95% CI 74-89%) of the 114 cases, when prompted to specify the most likely diagnosis, top three diagnoses and all possible diagnoses, respectively. ChatGPT-4o showed a sensitivity of 100% (95% CI 93-100%) and a specificity of 94% (95% CI 85-98%) for predicting malaria. The model failed to provide the final diagnosis in 18% (20/114) of cases, primarily by failing to predict globally endemic infections (16/21, 76%). CONCLUSIONS: ChatGPT-4o demonstrated high diagnostic accuracy when prompted with real-life scenarios of febrile returning travellers presenting to the emergency department, especially for malaria. Model training is expected to yield an improved performance and facilitate diagnostic decision-making in the field.
Yelin D; Shirin N; Harris I; Peretz Y; Yahav D; Schwartz E; Leshem E; Margalit I
10
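A sketch of the accuracy intervals reported above, using the Wilson score method (the abstract does not state which binomial interval the authors used); 89/114 approximates the 78% top-3 figure.

```python
from statsmodels.stats.proportion import proportion_confint

successes, n = 89, 114          # ~78% of 114 cases with the final diagnosis in the top 3
low, high = proportion_confint(successes, n, alpha=0.05, method="wilson")
print(f"{successes / n:.0%} (95% CI {low:.0%}-{high:.0%})")
```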
39974103
The Clinical Value of ChatGPT for Epilepsy Presurgical Decision Making: Systematic Evaluation on Seizure Semiology Interpretation.
2025
medRxiv : the preprint server for health sciences
BACKGROUND: For patients with drug-resistant focal epilepsy (DRE), surgical resection of the epileptogenic zone (EZ) is an effective treatment to control seizures. Accurate localization of the EZ is crucial and is typically achieved through comprehensive presurgical approaches such as seizure semiology interpretation, electroencephalography (EEG), magnetic resonance imaging (MRI), and intracranial EEG (iEEG). However, interpreting seizure semiology poses challenges because it relies heavily on expert knowledge and is often based on inconsistent and incoherent descriptions, leading to variability and potential limitations in presurgical evaluation. To overcome these challenges, advanced technologies like large language models (LLMs)-with ChatGPT being a notable example-offer valuable tools for analyzing complex textual information, making them well-suited to interpret detailed seizure semiology descriptions and assist in accurately localizing the EZ. OBJECTIVE: This study evaluates the clinical value of ChatGPT in interpreting seizure semiology to localize EZs in presurgical assessments for patients with focal epilepsy and compares its performance with epileptologists. METHODS: Two data cohorts were compiled: a publicly sourced cohort consisting of 852 semiology-EZ pairs from 193 peer-reviewed journal publications and a private cohort of 184 semiology-EZ pairs collected from Far Eastern Memorial Hospital (FEMH) in Taiwan. ChatGPT was evaluated to predict the most likely EZ locations using two prompt methods: zero-shot prompting (ZSP) and few-shot prompting (FSP). To compare ChatGPT's performance, eight epileptologists were recruited to participate in an online survey to interpret 100 randomly selected semiology records. The responses from ChatGPT and the epileptologists were compared using three metrics: regional sensitivity (RSens), weighted sensitivity (WSens), and net positive inference rate (NPIR). RESULTS: In the publicly sourced cohort, ChatGPT demonstrated high RSens reliability, achieving 80-90% for the frontal and temporal lobes, 20-40% for the parietal lobe, occipital lobe, and insular cortex, and only 3% for the cingulate cortex. The WSens, which accounts for biased data distribution, consistently exceeded 67%, while the mean NPIR remained around 0. These evaluation results based on the private FEMH cohort are consistent with those from the publicly sourced cohort. A group t-test with 1000 bootstrap samples revealed that ChatGPT-4 significantly outperformed epileptologists in RSens for commonly represented EZs, such as the frontal and temporal lobes (p < 0.001). Additionally, ChatGPT-4 demonstrated superior overall performance in WSens (p < 0.001). However, no significant differences were observed between ChatGPT and the epileptologists in NPIR, highlighting comparable performance in this metric. CONCLUSIONS: ChatGPT demonstrated clinical value as a tool to assist the decision-making in the epilepsy preoperative workup. With ongoing advancements in LLMs, it is anticipated that the reliability and accuracy of LLMs will continue to improve in the future.
Luo Y; Jiao M; Fotedar N; Ding JE; Karakis I; Rao VR; Asmar M; Xian X; Aboud O; Wen Y; Lin JJ; Hung FM; Sun H; Rosenow F; Liu F
10
40354107
Clinical Value of ChatGPT for Epilepsy Presurgical Decision-Making: Systematic Evaluation of Seizure Semiology Interpretation.
2025
Journal of medical Internet research
BACKGROUND: For patients with drug-resistant focal epilepsy, surgical resection of the epileptogenic zone (EZ) is an effective treatment to control seizures. Accurate localization of the EZ is crucial and is typically achieved through comprehensive presurgical approaches such as seizure semiology interpretation, electroencephalography (EEG), magnetic resonance imaging (MRI), and intracranial EEG (iEEG). However, interpreting seizure semiology is challenging because it heavily relies on expert knowledge. The semiologies are often inconsistent and incoherent, leading to variability and potential limitations in presurgical evaluation. To overcome these challenges, advanced technologies like large language models (LLMs)-with ChatGPT being a notable example-offer valuable tools for analyzing complex textual information, making them well-suited to interpret detailed seizure semiology descriptions and accurately localize the EZ. OBJECTIVE: This study evaluates the clinical value of ChatGPT for interpreting seizure semiology to localize EZs in presurgical assessments for patients with focal epilepsy and compares its performance with that of epileptologists. METHODS: We compiled 2 data cohorts: a publicly sourced cohort of 852 semiology-EZ pairs from 193 peer-reviewed journal publications and a private cohort of 184 semiology-EZ pairs collected from Far Eastern Memorial Hospital (FEMH) in Taiwan. ChatGPT was evaluated to predict the most likely EZ locations using 2 prompt methods: zero-shot prompting (ZSP) and few-shot prompting (FSP). To compare the performance of ChatGPT, 8 epileptologists were recruited to participate in an online survey to interpret 100 randomly selected semiology records. The responses from ChatGPT and epileptologists were compared using 3 metrics: regional sensitivity (RSens), weighted sensitivity (WSens), and net positive inference rate (NPIR). RESULTS: In the publicly sourced cohort, ChatGPT demonstrated high RSens reliability, achieving 80% to 90% for the frontal and temporal lobes; 20% to 40% for the parietal lobe, occipital lobe, and insular cortex; and only 3% for the cingulate cortex. The WSens, which accounts for biased data distribution, consistently exceeded 67%, while the mean NPIR remained around 0. These evaluation results based on the private FEMH cohort are consistent with those from the publicly sourced cohort. A group t test with 1000 bootstrap samples revealed that ChatGPT-4 significantly outperformed epileptologists in RSens for the most frequently implicated EZs, such as the frontal and temporal lobes (P<.001). Additionally, ChatGPT-4 demonstrated superior overall performance in WSens (P<.001). However, no significant differences were observed between ChatGPT and the epileptologists in NPIR, highlighting comparable performance in this metric. CONCLUSIONS: ChatGPT demonstrated clinical value as a tool to assist decision-making during epilepsy preoperative workups. With ongoing advancements in LLMs, their reliability and accuracy are anticipated to improve.
Luo Y; Jiao M; Fotedar N; Ding JE; Karakis I; Rao VR; Asmar M; Xian X; Aboud O; Wen Y; Lin JJ; Hung FM; Sun H; Rosenow F; Liu F
10
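The two records above report regional sensitivity (RSens) and weighted sensitivity (WSens) without defining WSens precisely; one plausible reading is a frequency-weighted average of per-region sensitivities. A sketch over made-up (correct, total) counts per region:

```python
# region: (correctly localized cases, actual cases) -- fabricated counts
region_counts = {
    "frontal": (170, 200), "temporal": (270, 300), "parietal": (30, 100),
    "occipital": (25, 80), "insular": (20, 60), "cingulate": (1, 30),
}

total = sum(n for _, n in region_counts.values())
rsens = {r: hit / n for r, (hit, n) in region_counts.items()}   # per-region sensitivity
wsens = sum((n / total) * rsens[r] for r, (_, n) in region_counts.items())
print({r: round(s, 2) for r, s in rsens.items()}, round(wsens, 2))
```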
39427271
Chasing sleep physicians: ChatGPT-4o on the interpretation of polysomnographic results.
2025
European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery
BACKGROUND: From a healthcare professional's perspective, the use of ChatGPT (OpenAI), a large language model (LLM), offers huge potential as a practical and economical digital assistant. However, ChatGPT has not yet been evaluated for the interpretation of polysomnographic results in patients with suspected obstructive sleep apnea (OSA). AIMS/OBJECTIVES: To evaluate the agreement in polysomnographic result interpretation between ChatGPT-4o and a board-certified sleep physician, and to shed light on the role of ChatGPT-4o in medical decision-making in sleep medicine. MATERIAL AND METHODS: For this proof-of-concept study, 40 comprehensive patient profiles were designed to represent a broad and typical spectrum of cases, ensuring a balanced distribution of demographics and clinical characteristics. After various prompts were tested, one prompt was used for the initial diagnosis of OSA and another for patients with positive airway pressure (PAP) therapy intolerance. Each polysomnographic result was independently evaluated by ChatGPT-4o and a board-certified sleep physician. Diagnosis and therapy suggestions were analyzed for agreement. RESULTS: ChatGPT-4o and the sleep physician showed 97% (29/30) concordance in the diagnosis of the simple cases. For the same cases, the two assessment instances showed 100% (30/30) concordance regarding therapy suggestions. For cases with intolerance of positive airway pressure (PAP) treatment, ChatGPT-4o and the sleep physician showed 70% (7/10) concordance in diagnosis and 44% (22/50) concordance in therapy suggestions. CONCLUSION AND SIGNIFICANCE: Precise prompting improves the output of ChatGPT-4o and yields sleep physician-like interpretation of polysomnographic results. Although ChatGPT shows some shortcomings in offering treatment advice, our results provide evidence for AI-assisted automation and economization of polysomnographic interpretation by LLMs. Further research should explore data protection issues and demonstrate reproducibility with real patient data on a larger scale.
Seifen C; Huppertz T; Gouveris H; Bahr-Hamm K; Pordzik J; Eckrich J; Smith H; Kelsey T; Blaikie A; Matthias C; Kuhn S; Buhr CR
0-1
38185435
A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing.
2024
Current problems in cardiology
BACKGROUND: Generative artificial intelligence (AI) tools have experienced rapid development over the last decade and are gaining increasing popularity as assistive models in academic writing. However, the ability of AI to generate reliable and accurate research articles is a topic of debate. Major scientific journals have issued policies regarding the contribution of AI tools to scientific writing. METHODS: We conducted a review of the author and peer reviewer guidelines of the top 25 Cardiology and Cardiovascular Medicine journals as per the 2023 SCImago rankings. Data were obtained through reviewing journal websites and directly emailing the editorial offices. Descriptive data regarding journal characteristics were coded in SPSS. Subgroup analyses of the journal guidelines were conducted based on the publishing company policies. RESULTS: Our analysis revealed that all scientific journals in our study permitted the documented use of AI in scientific writing with certain limitations, as per ICMJE recommendations. We found that AI tools cannot be credited with authorship or used for image generation, and that all authors are required to assume full responsibility for their submitted and published work. The use of generative AI tools in the peer review process is strictly prohibited. CONCLUSION: Guidelines regarding the use of generative AI in scientific writing are standardized, detailed, and unanimously followed by all journals in our study, according to the recommendations set forth by international forums. It is imperative to ensure that these policies are carefully followed and updated to maintain scientific integrity.
Inam M; Sheikh S; Minhas AMK; Vaughan EM; Krittanawong C; Samad Z; Lavie CJ; Khoja A; D'Cruze M; Slipczuk L; Alarakhiya F; Naseem A; Haider AH; Virani SS
10
38446539
Leveraging Generative AI Tools to Support the Development of Digital Solutions in Health Care Research: Case Study.
2,024
JMIR human factors
BACKGROUND: Generative artificial intelligence has the potential to revolutionize health technology product development by improving coding quality, efficiency, documentation, quality assessment and review, and troubleshooting. OBJECTIVE: This paper explores the application of a commercially available generative artificial intelligence tool (ChatGPT) to the development of a digital health behavior change intervention designed to support patient engagement in a commercial digital diabetes prevention program. METHODS: We examined the capacity, advantages, and limitations of ChatGPT to support digital product idea conceptualization, intervention content development, and the software engineering process, including software requirement generation, software design, and code production. In total, 11 evaluators, each with at least 10 years of experience in fields of study ranging from medicine and implementation science to computer science, participated in the output review process (ChatGPT vs human-generated output). All had familiarity or prior exposure to the original personalized automatic messaging system intervention. The evaluators rated the ChatGPT-produced outputs in terms of understandability, usability, novelty, relevance, completeness, and efficiency. RESULTS: Most metrics received positive scores. We identified that ChatGPT can (1) support developers to achieve high-quality products faster and (2) facilitate nontechnical communication and system understanding between technical and nontechnical team members around the development goal of rapid and easy-to-build computational solutions for medical technologies. CONCLUSIONS: ChatGPT can serve as a usable facilitator for researchers engaging in the software development life cycle, from product conceptualization to feature identification and user story development to code generation. TRIAL REGISTRATION: ClinicalTrials.gov NCT04049500; https://clinicaltrials.gov/ct2/show/NCT04049500.
Rodriguez DV; Lawrence K; Gonzalez J; Brandfield-Harvey B; Xu L; Tasneem S; Levine DL; Mann D
0-1
39948214
Evaluation of the Performance of Three Large Language Models in Clinical Decision Support: A Comparative Study Based on Actual Cases.
2,025
Journal of medical systems
BACKGROUND: Generative large language models (LLMs) are increasingly integrated into the medical field. However, their actual efficacy in clinical decision-making remains partially unexplored. This study aimed to assess the performance of three LLMs, ChatGPT-4, Gemini, and Med-Go, in the domain of professional medicine when confronted with actual clinical cases. METHODS: This study involved 134 clinical cases spanning nine medical disciplines. Each LLM was required to provide suggestions for diagnosis, diagnostic criteria, differential diagnosis, examination, and treatment for every case. Responses were scored by two experts using a predefined rubric. RESULTS: In overall performance, Med-Go achieved the highest median score (37.5, IQR 31.9-41.5), while Gemini recorded the lowest (33.0, IQR 25.5-36.6), a statistically significant difference among the three LLMs (p < 0.001). Analysis revealed that responses related to differential diagnosis were the weakest, while those pertaining to treatment recommendations were the strongest. Med-Go displayed notable performance advantages in gastroenterology, nephrology, and neurology. CONCLUSIONS: The findings show that all three LLMs achieved over 60% of the maximum possible score, indicating their potential applicability in clinical practice. However, inaccuracies that could lead to adverse decisions underscore the need for caution in their application. Med-Go's superior performance highlights the benefits of incorporating specialized medical knowledge into LLM training. It is anticipated that further development and refinement of medical LLMs will enhance their precision and safety in clinical use.
Wang X; Ye H; Zhang S; Yang M; Wang X
10
39396402
PresRecRF: Herbal prescription recommendation via the representation fusion of large TCM semantics and molecular knowledge.
2,024
Phytomedicine : international journal of phytotherapy and phytopharmacology
BACKGROUND: Herbal prescription recommendation (HPR) is a hotspot in research on clinical intelligent decision support. Recently, plentiful HPR models based on deep neural networks have been proposed. Owing to insufficient data in HPR modeling, e.g., a lack of molecular knowledge, TCM theory, and herbal dosage, existing models suffer from challenges such as limited prediction precision and are far from real-world clinics. PURPOSE: To address these problems, we proposed a novel herbal prescription recommendation model with the representation fusion of large TCM semantics and molecular knowledge (termed PresRecRF). STUDY DESIGN AND METHODS: PresRecRF comprises three key modules. The representation learning module consists of two key components: a molecular knowledge representation component, integrating molecular knowledge into the herb-symptom-protein knowledge graph to enhance representations for herbs and symptoms; and a TCM knowledge representation component, leveraging BERT and ChatGPT to acquire TCM knowledge-enriched semantic representations. We introduced a representation fusion module to effectively merge molecular and TCM semantic representations. In the herb recommendation module, a multi-task objective loss is implemented to predict both herbs and dosages simultaneously. RESULTS: The experimental results on two clinical datasets show that PresRecRF achieves optimal performance. Further analyses of ablation, hyper-parameters, and case studies indicate the effectiveness and reliability of the proposed model, suggesting that it can support precision medicine and treatment recommendations. CONCLUSION: The entire process of the proposed PresRecRF model closely mirrors the actual diagnosis and treatment procedures carried out by doctors, making it better suited to real clinical scenarios. The source code of PresRecRF is available at https://github.com/2020MEAI/PresRecRF.
Yang K; Dong X; Zhang S; Yu H; Zhong L; Zhang L; Zhao H; Hou Y; Song X; Zhou X
10
39718328
Artificial Intelligence, the ChatGPT Large Language Model: Assessing the Accuracy of Responses to the Gynaecological Endoscopic Surgical Education and Assessment (GESEA) Level 1-2 knowledge tests.
2,024
Facts, views & vision in ObGyn
BACKGROUND: In 2022, OpenAI launched ChatGPT 3.5, which is now widely used in medical education, training, and research. Despite its valuable use for the generation of information, concerns persist about its authenticity and accuracy. Its undisclosed information source and outdated dataset pose risks of misinformation. Although it is widely used, AI-generated text inaccuracies raise doubts about its reliability. The ethical use of such technologies is crucial to uphold scientific accuracy in research. OBJECTIVE: This study aimed to assess the accuracy of ChatGPT in completing GESEA tests 1 and 2. MATERIALS AND METHODS: The 100 multiple-choice theoretical questions from GESEA certifications 1 and 2 were presented to ChatGPT, requesting the selection of the correct answer along with an explanation. Expert gynaecologists evaluated and graded the explanations for accuracy. MAIN OUTCOME MEASURES: ChatGPT showed a 59% accuracy in responses, with 64% providing comprehensive explanations. It performed better in GESEA Level 1 (64% accuracy) than in GESEA Level 2 (54% accuracy) questions. CONCLUSIONS: ChatGPT is a versatile tool in medicine and research, offering knowledge and information and promoting evidence-based practice. Despite its widespread use, its accuracy has not yet been validated. This study found a 59% correct response rate, highlighting the need for accuracy validation and ethical use considerations. Future research should investigate ChatGPT's truthfulness in subspecialty fields such as gynaecologic oncology and compare different versions of the chatbot for continuous improvement. WHAT IS NEW? Artificial intelligence (AI) has great potential in scientific research. However, the validity of its outputs remains unverified. This study aims to evaluate the accuracy of responses generated by ChatGPT to enhance the critical use of this tool.
Pavone M; Palmieri L; Bizzarri N; Rosati A; Campolo F; Innocenzi C; Taliento C; Restaino S; Catena U; Vizzielli G; Akladios C; Ianieri MM; Marescaux J; Campo R; Fanfani F; Scambia G
43
40405178
Challenging cases of hyponatremia incorrectly interpreted by ChatGPT.
2,025
BMC medical education
BACKGROUND: In clinical medicine, the assessment of hyponatremia is frequently required but is also known as a source of major diagnostic errors, substantial mismanagement, and iatrogenic morbidity. Because artificial intelligence techniques are efficient in analyzing complex problems, their use may possibly overcome current assessment limitations. There is no literature concerning the use of the Chat Generative Pre-trained Transformer (ChatGPT-3.5) for evaluating difficult hyponatremia cases. Because of their interesting pathophysiology, hyponatremia cases are often used in medical education for students to evaluate patients, with students increasingly using artificial intelligence as a diagnostic tool. To evaluate this possibility, four challenging, previously published hyponatremia cases were presented to the free ChatGPT-3.5 for diagnosis and treatment suggestions. METHODS: We used four challenging hyponatremia cases that were evaluated by 46 physicians in Canada, the Netherlands, South Africa, Taiwan, and the USA, and published previously. These four cases were presented twice to the free ChatGPT, version 3.5, in December 2023 and again in September 2024, with the request to recommend diagnosis and therapy. Responses by ChatGPT were compared with those of the clinicians. RESULTS: Cases 1 and 3 have a single cause of hyponatremia. Cases 2 and 4 have two contributing hyponatremia features. Neither ChatGPT in 2023 nor the 46 clinicians in the original publication recognized the most crucial cause of hyponatremia, with major therapeutic consequences, in all four cases. In 2024, ChatGPT properly diagnosed and suggested adequate management in one case. Concurrent Addison's disease was correctly recognized in case 1 by ChatGPT in 2023 and 2024, whereas 81% of the clinicians missed this diagnosis. No proper therapeutic recommendations were given by ChatGPT in 2023 in any of the four cases, but in one case adequate advice was given by ChatGPT in 2024. The 46 clinicians recommended inadequate therapy in 65%, 57%, 2%, and 76% of cases 1 to 4, respectively. CONCLUSION: Our study currently does not support the use of the free version of ChatGPT-3.5 in difficult hyponatremia cases, but a small improvement was observed after ten months with the same ChatGPT-3.5 version. Patients, health professionals, medical educators, and students should be aware of the shortcomings of diagnosis and therapy suggestions by ChatGPT.
Berend K; Duits A; Gans ROB
10
40013072
Medication counseling for OTC drugs using customized ChatGPT-4: Comparison with ChatGPT-3.5 and ChatGPT-4o.
2,025
Digital health
BACKGROUND: In Japan, consumers can purchase most over-the-counter (OTC) drugs without pharmacist guidance. Recently, generative artificial intelligence (AI) has become increasingly popular. Therefore, medical professionals need to consider the use of generative AI by consumers for medication counseling. We have previously reported responses in Japanese from ChatGPT-3.5 to 264 questions regarding whether each of 22 OTC drugs can be taken under 12 typical patient conditions. The proportion of responses that satisfied the criteria of 1) accuracy, 2) relevance, and 3) reliability with respect to package insert instructions was 20.8%. In November 2023, GPTs were launched, enabling us to construct a customized ChatGPT, using natural language. In the present study, we compared performance in providing medication guidance among a newly customized GPT, the latest non-customized version ChatGPT-4o, and the previous version, ChatGPT-3.5. The aim was to determine whether the customization and version update of ChatGPT improved performance and to evaluate its potential usefulness. METHODS: We configured customized ChatGPT-4 by executing five instructions in Japanese and uploaded the text of package inserts for 22 OTC drugs as knowledge. We asked the same 264 questions as in our previous study. RESULTS: With the customized ChatGPT-4, the percentages of responses that satisfied the criteria of accuracy, relevance, and reliability were 93.2%, 100%, and 60.2%, respectively. Additionally, 56.1% of responses satisfied all three criteria, 2.7-fold higher compared with ChatGPT-3.5 and 1.3-fold higher compared with ChatGPT-4o. CONCLUSION: The performance of our customized GPT far exceeded that of ChatGPT-3.5. In particular, the proportion of appropriate responses to the questions using brand names was significantly improved. ChatGPT can be customized by providing drug package insert information and using appropriate prompt engineering, potentially offering helpful tools in clinical pharmacy.
Kiyomiya K; Aomori T; Ohtani H
10
39754097
Comparison of AI applications and anesthesiologist's anesthesia method choices.
2,025
BMC anesthesiology
BACKGROUND: In medicine, artificial intelligence has begun to be utilized in nearly every domain, from medical devices to the interpretation of imaging studies. There is still a need for more experience and more studies related to the comprehensive use of AI in medicine. The aim of the present study is to evaluate the ability of AI to make decisions regarding anesthesia methods and to compare the most popular AI programs from this perspective. METHODS: The study included orthopedic patients over 18 years of age scheduled for limb surgery within a 1-month period. Patients classified as ASA I-III who were evaluated in the anesthesia clinic during the preoperative period were included in the study. The anesthesia method preferred by the anesthesiologist during the operation and the patient's demographic data, comorbidities, medications, and surgical history were recorded. The collected patient data were then presented as patient scenarios to the free versions of the ChatGPT, Copilot, and Gemini applications by a different anesthesiologist, who did not perform the operation. RESULTS: Over the course of 1 month, a total of 72 patients were enrolled in the study. Both the anesthesia specialists and the Gemini application chose spinal anesthesia for the same patient in 68.5% of cases, a rate higher than that of the other AI applications. For patients taking medication, the Gemini application presented choices that were highly compatible (85.7%) with the anesthesiologists' preferences. CONCLUSION: AI cannot fully master the guidelines and the exceptional and specific cases that arise in the course of medical treatment. Thus, we believe that AI can serve as a valuable assistant rather than replacing doctors.
Celik E; Turgut MA; Aydogan M; Kilinc M; Toktas I; Akelma H
0-1
39099569
The potential of ChatGPT in medicine: an example analysis of nephrology specialty exams in Poland.
2,024
Clinical kidney journal
BACKGROUND: In November 2022, OpenAI released a chatbot named ChatGPT, a product capable of processing natural language to create human-like conversational dialogue. It has generated a lot of interest, including from the scientific community and the medical science community. Recent publications have shown that ChatGPT can correctly answer questions from medical exams such as the United States Medical Licensing Examination and other specialty exams. To date, there have been no studies in which ChatGPT has been tested on specialty questions in the field of nephrology anywhere in the world. METHODS: Using the ChatGPT-3.5 and -4.0 algorithms in this comparative cross-sectional study, we analysed 1560 single-answer questions from the national specialty exam in nephrology from 2017 to 2023 that were available in the Polish Medical Examination Center's question database along with answer keys. RESULTS: Of the 1556 questions posed to ChatGPT-4.0, correct answers were obtained with an accuracy of 69.84%, compared with ChatGPT-3.5 (45.70%, P = .0001) and with the top results of medical doctors (85.73%, P = .0001). ChatGPT-4.0 exceeded the required ≥60% pass rate in 11 of the 13 tests and scored higher than the average of the human exam results. CONCLUSION: ChatGPT-3.5 was not spectacularly successful in nephrology exams. The ChatGPT-4.0 algorithm was able to pass most of the analysed nephrology specialty exams. New generations of ChatGPT achieve similar results to humans. The best results of humans are better than those of ChatGPT-4.0.
Nicikowski J; Szczepanski M; Miedziaszczyk M; Kudlinski B
21
37190006
May Artificial Intelligence Influence Future Pediatric Research?-The Case of ChatGPT.
2,023
Children (Basel, Switzerland)
BACKGROUND: In recent months, there has been growing interest in the potential of artificial intelligence (AI) to revolutionize various aspects of medicine, including research, education, and clinical practice. ChatGPT represents a leading AI language model, with possible unpredictable effects on the quality of future medical research, including clinical decision-making, medical education, drug development, and better research outcomes. AIM AND METHODS: In this interview with ChatGPT, we explore the potential impact of AI on future pediatric research. Our discussion covers a range of topics, including the potential positive effects of AI, such as improved clinical decision-making, enhanced medical education, faster drug development, and better research outcomes. We also examine potential negative effects, such as bias and fairness concerns, safety and security issues, overreliance on technology, and ethical considerations. CONCLUSIONS: While AI continues to advance, it is crucial to remain vigilant about the possible risks and limitations of these technologies and to consider the implications of these technologies and their use in the medical field. The development of AI language models represents a significant advancement in the field of artificial intelligence and has the potential to revolutionize daily clinical practice in every branch of medicine, both surgical and clinical. Ethical and social implications must also be considered to ensure that these technologies are used in a responsible and beneficial manner.
Corsello A; Santangelo A
10
39578313
Leveraging ChatGPT for Enhanced Aesthetic Evaluations in Minimally Invasive Facial Procedures.
2,025
Aesthetic plastic surgery
BACKGROUND: In recent years, the application of AI technologies like ChatGPT has gained traction in the field of plastic surgery. AI models can analyze pre- and post-treatment images to offer insights into the effectiveness of cosmetic procedures. This technological advancement enables rapid, objective evaluations that can complement traditional assessment methods, providing a more comprehensive understanding of treatment outcomes. OBJECTIVE: The study aimed to comprehensively assess the effectiveness of a custom ChatGPT model, "Face Rating and Review AI," in facial feature evaluation in minimally invasive aesthetic procedures, particularly before and after Botox treatments. METHOD: An analysis was conducted on the Web of Science (WoS) database, identifying 79 articles published between 2023 and 2024 on ChatGPT in the field of plastic surgery from various countries. A dataset of 23 patients from Kaggle, including pre- and post-Botox images, was used. The custom ChatGPT model, "Face Rating & Review AI," was used to assess facial features based on objective parameters such as the golden ratio, symmetry, proportion, side angles, skin condition, and overall harmony, as well as subjective parameters like personality, temperament, and social attraction. RESULT: The WoS search found 79 articles on ChatGPT in plastic surgery from 27 countries, with most publications originating from the USA, Australia, and Italy. The objective and subjective parameters were analyzed using a paired t-test, and all facial features showed low p-values (<0.05). Higher mean scores on features such as the golden ratio (mean = 5.86, SD = 0.69), skin condition (mean = 3.78, SD = 0.73), and personality (mean = 5.0, SD = 0.79) indicate positive shifts after the treatment. CONCLUSION: The custom ChatGPT model "Face Rating and Review AI" is a valuable tool for assessing facial features in Botox treatments. It effectively evaluates objective and subjective attributes, aiding clinical decision-making. However, ethical considerations highlight the need for diverse datasets in future research to improve accuracy and inclusivity. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Ali R; Cui H
32
38838389
Empowering gynaecologists with Artificial Intelligence: Tailoring surgical solutions for fibroids.
2,024
European journal of obstetrics, gynecology, and reproductive biology
BACKGROUND: In recent years, the integration of artificial intelligence (AI) into various fields of medicine, including gynaecology, has shown promising potential. The surgical treatment of fibroids is myomectomy when uterine preservation and fertility are the primary aims. AI usage begins with the involvement of an LLM (large language model) from the point when a patient visits a gynecologist: from identifying signs and symptoms to reaching a diagnosis, providing treatment plans, and patient counseling. OBJECTIVE: To compare the use of AI (ChatGPT versus Google Bard) in the surgical management of fibroids. STUDY DESIGN: Identifying the patient's problems using LLMs like ChatGPT and Google Bard and giving a treatment option in 8 clinical scenarios of fibroids. Data entry was done using M.S. Excel and was statistically analyzed using the Statistical Package for Social Sciences (SPSS Version 26) for M.S. Windows 2010. All results were presented in tabular form. Data were analyzed using nonparametric tests (chi-square test or Fisher exact test); p values < 0.05 were considered statistically significant. The sensitivity of both techniques was calculated. Cohen's kappa was used to assess the degree of agreement. RESULTS: We found that on the first attempt, ChatGPT gave general answers in 62.5% of cases and specific answers in 37.5% of cases. ChatGPT showed improved sensitivity on successive prompts, from 37.5% to 62.5% by the third prompt. Google Bard could not identify the clinical question in 50% of cases and gave incorrect answers in 12.5% of cases (p = 0.04). Google Bard showed the same sensitivity of 25% on all prompts. CONCLUSION: AI helps to reduce the time to diagnose and plan a treatment strategy for fibroids and acts as a powerful tool in the hands of a gynecologist. However, the usage of AI by patients for self-treatment is to be avoided; AI should be used only for education and counseling about fibroids.
Sinha R; Raina R; Bag M; Rupa B
0-1
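The preceding record measures ChatGPT/Google Bard agreement with Cohen's kappa. As a minimal illustrative sketch of that statistic (not code from the study; the two-rater labels below are hypothetical), the unweighted kappa can be computed from first principles:

```python
# Minimal sketch: unweighted Cohen's kappa for two raters over the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: two raters grading 8 responses correct (C) / incorrect (I).
print(round(cohens_kappa(list("CCICCIIC"), list("CCICCICC")), 3))  # 0.714
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; values around 0.7 are commonly read as substantial agreement.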
38592758
Evaluating ChatGPT-4's Diagnostic Accuracy: Impact of Visual Data Integration.
2,024
JMIR medical informatics
BACKGROUND: In the evolving field of health care, multimodal generative artificial intelligence (AI) systems, such as ChatGPT-4 with vision (ChatGPT-4V), represent a significant advancement, as they integrate visual data with text data. This integration has the potential to revolutionize clinical diagnostics by offering more comprehensive analysis capabilities. However, the impact on diagnostic accuracy of using image data to augment ChatGPT-4 remains unclear. OBJECTIVE: This study aims to assess the impact of adding image data on ChatGPT-4's diagnostic accuracy and provide insights into how image data integration can enhance the accuracy of multimodal AI in medical diagnostics. Specifically, this study endeavored to compare the diagnostic accuracy between ChatGPT-4V, which processed both text and image data, and its counterpart, ChatGPT-4, which only uses text data. METHODS: We identified a total of 557 case reports published in the American Journal of Case Reports from January 2022 to March 2023. After excluding cases that were nondiagnostic, pediatric, and lacking image data, we included 363 case descriptions with their final diagnoses and associated images. We compared the diagnostic accuracy of ChatGPT-4V and ChatGPT-4 without vision based on their ability to include the final diagnoses within differential diagnosis lists. Two independent physicians evaluated their accuracy, with a third resolving any discrepancies, ensuring a rigorous and objective analysis. RESULTS: The integration of image data into ChatGPT-4V did not significantly enhance diagnostic accuracy, showing that final diagnoses were included in the top 10 differential diagnosis lists at a rate of 85.1% (n=309), comparable to the rate of 87.9% (n=319) for the text-only version (P=.33). Notably, ChatGPT-4V's performance in correctly identifying the top diagnosis was inferior, at 44.4% (n=161), compared with 55.9% (n=203) for the text-only version (P=.002, χ² test). Additionally, ChatGPT-4's self-reports showed that image data accounted for 30% of the weight in developing the differential diagnosis lists in more than half of cases. CONCLUSIONS: Our findings reveal that currently, ChatGPT-4V predominantly relies on textual data, limiting its ability to fully use the diagnostic potential of visual information. This study underscores the need for further development of multimodal generative AI systems to effectively integrate and use clinical image data. Enhancing the diagnostic performance of such AI systems through improved multimodal data integration could significantly benefit patient care by providing more accurate and comprehensive diagnostic insights. Future research should focus on overcoming these limitations, paving the way for the practical application of advanced AI in medicine.
Hirosawa T; Harada Y; Tokumasu K; Ito T; Suzuki T; Shimizu T
10
38775367
Evaluation of ChatGPT as a Tool for Answering Clinical Questions in Pharmacy Practice.
2,024
Journal of pharmacy practice
Background: In the healthcare field, there has been a growing interest in using artificial intelligence (AI)-powered tools to assist healthcare professionals, including pharmacists, in their daily tasks. Objectives: To provide commentary and insight into the potential of generative AI language models such as ChatGPT as a tool for answering practice-based, clinical questions and the challenges that need to be addressed before implementation in pharmacy practice settings. Methods: To assess ChatGPT, pharmacy-based questions were prompted to ChatGPT (Version 3.5; free version) and responses were recorded. Question types included 6 drug information questions, 6 enhanced prompt drug information questions, 5 patient case questions, 5 calculations questions, and 10 drug knowledge questions (e.g., top 200 drugs). After all responses were collected, ChatGPT responses were assessed for appropriateness. Results: ChatGPT responses were generated from 32 questions in 5 categories and evaluated on a total of 44 possible points. Among all ChatGPT responses and categories, the overall score was 21 of 44 points (47.73%). ChatGPT scored higher in the pharmacy calculation (100%), drug information (83%), and top 200 drugs (80%) categories and lower in the drug information enhanced prompt (33%) and patient case (20%) categories. Conclusion: This study suggests that ChatGPT has limited success as a tool to answer pharmacy-based questions. ChatGPT scored higher in calculation and multiple-choice questions but scored lower in drug information and patient case questions, generating misleading or fictional answers and citations.
Munir F; Gehres A; Wai D; Song L
10
39719573
Assessing the accuracy and quality of artificial intelligence (AI) chatbot-generated responses in making patient-specific drug-therapy and healthcare-related decisions.
2,024
BMC medical informatics and decision making
BACKGROUND: Interactive artificial intelligence tools such as ChatGPT have gained popularity, yet little is known about their reliability as a reference tool for healthcare-related information for healthcare providers and trainees. The objective of this study was to assess the consistency, quality, and accuracy of the responses generated by ChatGPT to healthcare-related inquiries. METHODS: A total of 18 open-ended questions, including six questions in each of three defined clinical areas (2 each to address "what", "why", and "how", respectively), were submitted to ChatGPT v3.5 based on real-world usage experience. The experiment was conducted in duplicate using 2 computers. Five investigators independently ranked each response using a 4-point scale to rate the quality of the bot's responses. The Delphi method was used to compare each investigator's score with the goal of reaching at least 80% consistency. The accuracy of the responses was checked using established professional references and resources. When the responses were in question, the bot was asked to provide the reference material it used so that the investigators could determine the accuracy and quality. The investigators determined the consistency, accuracy, and quality by establishing a consensus. RESULTS: The speech pattern and length of the responses were consistent within the same user but different between users. Occasionally, ChatGPT provided 2 completely different responses to the same question. Overall, ChatGPT provided more accurate responses (8 out of 12) to the "what" questions, with less reliable performance on the "why" and "how" questions. We identified errors in calculation, unit of measurement, and misuse of protocols by ChatGPT. Some of these errors could result in clinical decisions leading to harm. We also identified citations and references shown by ChatGPT that did not exist in the literature. CONCLUSIONS: ChatGPT is not ready to take on the coaching role for either healthcare learners or healthcare professionals. The lack of consistency in the responses to the same question is problematic for both learners and decision-makers. The intrinsic assumptions made by the chatbot could lead to erroneous clinical decisions. The unreliability in providing valid references is a serious flaw in using ChatGPT to drive clinical decision making.
Shiferaw MW; Zheng T; Winter A; Mike LA; Chan LN
43
38401366
Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam.
2,024
International journal of nursing studies
BACKGROUND: This study investigates the integration of an artificial intelligence tool, specifically ChatGPT, into nursing education, addressing its effectiveness in exam preparation and self-assessment. OBJECTIVE: This study aims to evaluate the performance of ChatGPT, one of the most promising artificial intelligence-driven linguistic understanding tools, in answering question banks for nursing licensing examination preparation. It further analyzes question characteristics that might impact the accuracy of ChatGPT-generated answers and examines its reliability through human expert reviews. DESIGN: Cross-sectional survey comparing ChatGPT-generated answers and their explanations. SETTING: 400 questions from Taiwan's 2022 Nursing Licensing Exam. METHODS: The study analyzed 400 questions from five distinct subjects of Taiwan's 2022 Nursing Licensing Exam using the ChatGPT model, which provided answers and in-depth explanations for each question. The impact of various question characteristics, such as type and cognitive level, on the accuracy of the ChatGPT-generated responses was assessed using logistic regression analysis. Additionally, human experts evaluated the explanations for each question, comparing them with the ChatGPT-generated answers to determine consistency. RESULTS: ChatGPT exhibited an overall accuracy of 80.75 % on Taiwan's National Nursing Exam, which passes the exam. The accuracy of ChatGPT-generated answers diverged significantly across test subjects, demonstrating a hierarchy ranging from General Medicine at 88.75 %, Medical-Surgical Nursing at 80.0 %, Psychology and Community Nursing at 70.0 %, Obstetrics and Gynecology Nursing at 67.5 %, down to Basic Nursing at 63.0 %. ChatGPT had a higher probability of producing incorrect responses for questions with certain characteristics, notably those with clinical vignettes [odds ratio 2.19, 95 % confidence interval 1.24-3.87, P = 0.007] and complex multiple-choice questions [odds ratio 2.37, 95 % confidence interval 1.00-5.60, P = 0.049]. Furthermore, 14.25 % of ChatGPT-generated answers were inconsistent with their explanations, leading to a reduction in the overall accuracy to 74 %. CONCLUSIONS: This study reveals ChatGPT's capabilities and limitations in nursing exam preparation, underscoring its potential as an auxiliary educational tool. It highlights the model's varied performance across different question types and notable inconsistencies between its answers and explanations. The study contributes significantly to the understanding of artificial intelligence in learning environments, guiding the future development of more effective and reliable artificial intelligence-based educational technologies. TWEETABLE ABSTRACT: New study reveals ChatGPT's potential and challenges in nursing education: Achieves 80.75 % accuracy in exam prep but faces hurdles with complex questions and logical consistency. #AIinNursing #AIinEducation #NursingExams #ChatGPT.
Su MC; Lin LE; Lin LH; Chen YC
21
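The nursing licensing exam record above reports odds ratios with 95% confidence intervals from logistic regression. As a simplified, hypothetical sketch (a single-predictor 2x2 table rather than the study's multivariable model, with made-up counts), the same quantities can be computed as:

```python
# Simplified sketch: odds ratio and Wald 95% CI from a 2x2 table.
# Counts below are hypothetical, not taken from the study.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: incorrect/correct answers with the feature (e.g., vignette);
    c, d: incorrect/correct answers without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical error counts on vignette vs non-vignette questions.
or_, lo, hi = odds_ratio_ci(a=40, b=60, c=37, d=163)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR 2.94, 95% CI 1.72-5.02
```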
37662036
The utility of ChatGPT in the assessment of literature on the prevention of migraine: an observational, qualitative study.
2,023
Frontiers in neurology
BACKGROUND: It is not known how large language models, such as ChatGPT, can be applied toward the assessment of the efficacy of medications, including in the prevention of migraine, and how they might support those claims with existing medical evidence. METHODS: We queried ChatGPT-3.5 on the efficacy of 47 medications for the prevention of migraine and then asked it to give citations in support of its assessments. ChatGPT's evaluations were then compared to the medications' FDA approval status for this indication as well as the American Academy of Neurology (AAN) 2012 evidence-based guidelines for the prevention of migraine. The citations ChatGPT generated for these evaluations were then assessed to see if they were real papers and if they were relevant to the query. RESULTS: ChatGPT affirmed that the 14 medications that have either received FDA approval for the prevention of migraine or carry AAN Grade A/B evidence were effective for migraine. Its assessments of the other 33 medications were unreliable, including suggesting possible efficacy for four medications that have never been used for the prevention of migraine. Critically, only 33/115 (29%) of the papers ChatGPT cited were real, while 76/115 (66%) were "hallucinated" (not real papers) and 6/115 (5%) shared the names of real papers but did not have real citations. CONCLUSION: While ChatGPT produced tailored answers on the efficacy of the queried medications, the results were unreliable and inaccurate because of the overwhelming volume of "hallucinated" articles it generated and cited.
Moskatel LS; Zhang N
10
38348835
Performance of ChatGPT as an AI-assisted decision support tool in medicine: a proof-of-concept study for interpreting symptoms and management of common cardiac conditions (AMSTELHEART-2).
2,024
Acta cardiologica
BACKGROUND: It is thought that ChatGPT, an advanced language model developed by OpenAI, may in the future serve as an AI-assisted decision support tool in medicine. OBJECTIVE: To evaluate the accuracy of ChatGPT's recommendations on medical questions related to common cardiac symptoms or conditions. METHODS: We tested ChatGPT's ability to address medical questions in two ways. First, we assessed its accuracy in correctly answering cardiovascular trivia questions (n = 50), based on quizzes for medical professionals. Second, we entered 20 clinical case vignettes into the ChatGPT platform and evaluated its accuracy compared to expert opinion and clinical course. Lastly, we compared the latest research version (v3.5; 27 September 2023) with a prior version (v3.5; 30 January 2023) to evaluate improvement over time. RESULTS: We found that the latest version of ChatGPT correctly answered 92% of the trivia questions, with slight variation in accuracy across the domains coronary artery disease (100%), pulmonary and venous thrombotic embolism (100%), atrial fibrillation (90%), heart failure (90%), and cardiovascular risk management (80%). In the 20 case vignettes, ChatGPT's response matched the actual advice given in 17 (85%) of the cases. Straightforward patient-to-physician questions were all answered correctly (10/10). In more complex cases, where physicians (general practitioners) asked other physicians (cardiologists) for assistance or decision support, ChatGPT was correct in 70% of cases, and otherwise provided incomplete, inconclusive, or inappropriate recommendations when compared with expert consultation. ChatGPT showed significant improvement over time: the January version correctly answered 74% (vs 92%) of trivia questions (p = 0.031) and a mere 50% of complex cases. CONCLUSIONS: Our study suggests that ChatGPT has potential as an AI-assisted decision support tool in medicine, particularly for straightforward, low-complexity medical questions, but further research is needed to fully evaluate its potential.
Harskamp RE; De Clercq L
0-1
40332991
Comparing Artificial Intelligence-Generated and Clinician-Created Personalized Self-Management Guidance for Patients With Knee Osteoarthritis: Blinded Observational Study.
2,025
Journal of medical Internet research
BACKGROUND: Knee osteoarthritis is a prevalent, chronic musculoskeletal disorder that impairs mobility and quality of life. Personalized patient education aims to improve self-management and adherence; yet, its delivery is often limited by time constraints, clinician workload, and the heterogeneity of patient needs. Recent advances in large language models offer potential solutions. GPT-4 (OpenAI), distinguished by its long-context reasoning and adoption in clinical artificial intelligence research, emerged as a leading candidate for personalized health communication. However, its application in generating condition-specific educational guidance remains underexplored, and concerns about misinformation, personalization limits, and ethical oversight remain. OBJECTIVE: We evaluated GPT-4's ability to generate individualized self-management guidance for patients with knee osteoarthritis in comparison with clinician-created content. METHODS: This 2-phase, double-blind, observational study used data from 50 patients previously enrolled in a registered randomized trial. In phase 1, 2 orthopedic clinicians each generated personalized education materials for 25 patient profiles using anonymized clinical data, including history, symptoms, and lifestyle. In phase 2, the same datasets were processed by GPT-4 using standardized prompts. All content was anonymized and evaluated by 2 independent, blinded clinical experts using validated scoring systems. Evaluation criteria included efficiency, readability (Flesch-Kincaid, Gunning Fog, Coleman-Liau, and Simple Measure of Gobbledygook), accuracy, personalization, comprehensiveness, and safety. Disagreements between reviewers were resolved through consensus or third-party adjudication. RESULTS: GPT-4 outperformed clinicians in content generation speed (530.03 vs 37.29 words per min, P<.001). Readability was better on the Flesch-Kincaid (mean 11.56, SD 1.08 vs mean 12.67, SD 0.95), Gunning Fog (mean 12.47, SD 1.36 vs mean 14.56, SD 0.93), and Simple Measure of Gobbledygook (mean 13.33, SD 1.00 vs mean 13.81, SD 0.69) indices (all P<.001), though GPT-4 scored slightly higher on the Coleman-Liau Index (mean 15.90, SD 1.03 vs mean 15.15, SD 0.91). GPT-4 also outperformed clinicians in accuracy (mean 5.31, SD 1.73 vs mean 4.76, SD 1.10; P=.05), personalization (mean 54.32, SD 6.21 vs mean 33.20, SD 5.40; P<.001), comprehensiveness (mean 51.74, SD 6.47 vs mean 35.26, SD 6.66; P<.001), and safety (median 61, IQR 58-66 vs median 50, IQR 47-55.25; P<.001). CONCLUSIONS: GPT-4 could generate personalized self-management guidance for knee osteoarthritis with greater efficiency, accuracy, personalization, comprehensiveness, and safety than clinician-generated content, as assessed using standardized, guideline-aligned evaluation frameworks. These findings underscore the potential of large language models to support scalable, high-quality patient education in chronic disease management. The observed lexical complexity suggests the need to refine outputs for populations with limited health literacy. As an exploratory, single-center study, these results warrant confirmation in larger, multicenter cohorts with diverse demographic profiles. Future implementation should be guided by ethical and operational safeguards, including data privacy, transparency, and the delineation of clinical responsibility. Hybrid models integrating artificial intelligence-generated content with clinician oversight may offer a pragmatic path forward.
Du K; Li A; Zuo QH; Zhang CY; Guo R; Chen P; Du WS; Li SM
32
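The knee osteoarthritis record above compares readability using the Flesch-Kincaid grade level, among other indices. As a rough sketch of that index (the standard formula is 0.39 x words-per-sentence + 11.8 x syllables-per-word - 15.59; the syllable counter below is a crude heuristic, not the validated tooling such a study would use):

```python
# Rough sketch: Flesch-Kincaid grade level with a heuristic syllable counter.
import re

def count_syllables(word):
    # Count vowel groups; treat a typical final silent 'e' as non-syllabic.
    n = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

sample = ("Gentle daily exercise can reduce knee pain. "
          "Ask your clinician before changing any medication.")
print(round(fk_grade(sample), 1))
```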
37725411
Assessment of Resident and AI Chatbot Performance on the University of Toronto Family Medicine Residency Progress Test: Comparative Study.
2,023
JMIR medical education
BACKGROUND: Large language model (LLM)-based chatbots are evolving at an unprecedented pace with the release of ChatGPT, specifically GPT-3.5, and its successor, GPT-4. Their capabilities in general-purpose tasks and language generation have advanced to the point of performing excellently on various educational examination benchmarks, including medical knowledge tests. Comparing the performance of these 2 LLM models to that of Family Medicine residents on a multiple-choice medical knowledge test can provide insights into their potential as medical education tools. OBJECTIVE: This study aimed to quantitatively and qualitatively compare the performance of GPT-3.5, GPT-4, and Family Medicine residents in a multiple-choice medical knowledge test appropriate for the level of a Family Medicine resident. METHODS: An official University of Toronto Department of Family and Community Medicine Progress Test consisting of multiple-choice questions was inputted into GPT-3.5 and GPT-4. The artificial intelligence chatbots' responses were manually reviewed to determine the selected answer, response length, response time, provision of a rationale for the outputted response, and the root cause of all incorrect responses (classified into arithmetic, logical, and information errors). The performance of the artificial intelligence chatbots was compared against a cohort of Family Medicine residents who concurrently attempted the test. RESULTS: GPT-4 performed significantly better compared to GPT-3.5 (difference 25.0%, 95% CI 16.3%-32.8%; McNemar test: P<.001); it correctly answered 89/108 (82.4%) questions, while GPT-3.5 answered 62/108 (57.4%) questions correctly. Further, GPT-4 scored higher across all 11 categories of Family Medicine knowledge. In 86.1% (n=93) of the responses, GPT-4 provided a rationale for why other multiple-choice options were not chosen, compared to the 16.7% (n=18) achieved by GPT-3.5. Qualitatively, for both GPT-3.5 and GPT-4 responses, logical errors were the most common, while arithmetic errors were the least common. The average performance of Family Medicine residents was 56.9% (95% CI 56.2%-57.6%). The performance of GPT-3.5 was similar to that of the average Family Medicine resident (P=.16), while the performance of GPT-4 exceeded that of the top-performing Family Medicine resident (P<.001). CONCLUSIONS: GPT-4 significantly outperforms both GPT-3.5 and Family Medicine residents on a multiple-choice medical knowledge test designed for Family Medicine residents. GPT-4 provides a logical rationale for its response choice, ruling out other answer choices efficiently and with concise justification. Its high degree of accuracy and advanced reasoning capabilities facilitate its potential applications in medical education, including the creation of exam questions and scenarios as well as serving as a resource for medical knowledge or information on community services.
Huang RS; Lu KJQ; Meaney C; Kemppainen J; Punnett A; Leung FH
21
38744501
Evaluating the diagnostic performance of a large language model-powered chatbot for providing immunohistochemistry recommendations in dermatopathology.
2,024
Journal of cutaneous pathology
BACKGROUND: Large language model (LLM)-powered chatbots such as ChatGPT have numerous applications. However, their effectiveness in dermatopathology has not been formally evaluated. Dermatopathological cases often require immunohistochemical workup. Here, we evaluate the performance of a chatbot in providing diagnostically useful information on immunohistochemistry relating to dermatological diseases. METHODS: We queried a commonly used chatbot for the immunophenotypes of 51 cutaneous diseases, including a diverse variety of epidermal, adnexal, hematolymphoid, and soft tissue entities. We requested it to provide references for each diagnosis. All tests were repeated, compiled, quantified, and then compared with established literature standards. RESULTS: Clustering analysis demonstrated that recommendations correlated with tumor type, suggesting chatbots can supply appropriate panels. However, a significant portion of recommendations were factually incorrect (13.9%). Citations were rarely clinically useful (24.5%). Many were confabulated (27.2%). Prompt responses for cutaneous adnexal lesions tended to be less accurate, while literature references were less useful. Reference retrieval performance was associated with the number of PubMed entries per entity. CONCLUSIONS: This foundational study suggests that LLM-powered chatbots may be useful for generating immunohistochemical panels for dermatologic diagnoses. However, specific performance capabilities and biases must be considered. In addition, extreme caution is advised regarding the tendency to fabricate material. Future models intentionally fine-tuned to augment diagnostic medicine may prove to be valuable.
McCrary MR; Galambus J; Chen WS
10
38717811
ChatGPT as a Tool for Medical Education and Clinical Decision-Making on the Wards: Case Study.
2,024
JMIR formative research
BACKGROUND: Large language models (LLMs) are computational artificial intelligence systems with advanced natural language processing capabilities that have recently been popularized among health care students and educators due to their ability to provide real-time access to a vast amount of medical knowledge. The adoption of LLM technology into medical education and training has varied, and little empirical evidence exists to support its use in clinical teaching environments. OBJECTIVE: The aim of the study is to identify and qualitatively evaluate potential use cases and limitations of LLM technology for real-time ward-based educational contexts. METHODS: A brief, single-site exploratory evaluation of the publicly available ChatGPT-3.5 (OpenAI) was conducted by implementing the tool into the daily attending rounds of a general internal medicine inpatient service at a large urban academic medical center. ChatGPT was integrated into rounds via both structured and organic use, using the web-based "chatbot" style interface to interact with the LLM through conversational free-text and discrete queries. A qualitative approach using phenomenological inquiry was used to identify key insights related to the use of ChatGPT through analysis of ChatGPT conversation logs and associated shorthand notes from the clinical sessions. RESULTS: Identified use cases for ChatGPT integration included addressing medical knowledge gaps through discrete medical knowledge inquiries, building differential diagnoses and engaging dual-process thinking, challenging medical axioms, using cognitive aids to support acute care decision-making, and improving complex care management by facilitating conversations with subspecialties. Potential additional uses included engaging in difficult conversations with patients, exploring ethical challenges and general medical ethics teaching, personal continuing medical education resources, developing ward-based teaching tools, supporting and automating clinical documentation, and supporting productivity and task management. LLM biases, misinformation, ethics, and health equity were identified as areas of concern and potential limitations to clinical and training use. A code of conduct on ethical and appropriate use was also developed to guide team usage on the wards. CONCLUSIONS: Overall, ChatGPT offers a novel tool to enhance ward-based learning through rapid information querying, second-order content exploration, and engaged team discussion regarding generated responses. More research is needed to fully understand contexts for educational use, particularly regarding the risks and limitations of the tool in clinical settings and its impacts on trainee development.
Skryd A; Lawrence K
10
39729356
Large Language Models in Worldwide Medical Exams: Platform Development and Comprehensive Analysis.
2,024
Journal of medical Internet research
BACKGROUND: Large language models (LLMs) are increasingly integrated into medical education, with transformative potential for learning and assessment. However, their performance across diverse medical exams globally has remained underexplored. OBJECTIVE: This study aims to introduce MedExamLLM, a comprehensive platform designed to systematically evaluate the performance of LLMs on medical exams worldwide. Specifically, the platform seeks to (1) compile and curate performance data for diverse LLMs on worldwide medical exams; (2) analyze trends and disparities in LLM capabilities across geographic regions, languages, and contexts; and (3) provide a resource for researchers, educators, and developers to explore and advance the integration of artificial intelligence in medical education. METHODS: A systematic search was conducted on April 25, 2024, in the PubMed database to identify relevant publications. Inclusion criteria encompassed peer-reviewed, English-language, original research articles that evaluated at least one LLM on medical exams. Exclusion criteria included review articles, non-English publications, preprints, and studies without relevant data on LLM performance. The screening process for candidate publications was independently conducted by 2 researchers to ensure accuracy and reliability. Data, including exam information, data process information, model performance, data availability, and references, were manually curated, standardized, and organized. These curated data were integrated into the MedExamLLM platform, enabling its functionality to visualize and analyze LLM performance across geographic, linguistic, and exam characteristics. The web platform was developed with a focus on accessibility, interactivity, and scalability to support continuous data updates and user engagement. RESULTS: A total of 193 articles were included for final analysis. MedExamLLM comprised information for 16 LLMs on 198 medical exams conducted in 28 countries across 15 languages from the year 2009 to the year 2023. The United States accounted for the highest number of medical exams and related publications, with English being the dominant language used in these exams. The Generative Pretrained Transformer (GPT) series models, especially GPT-4, demonstrated superior performance, achieving pass rates significantly higher than other LLMs. The analysis revealed significant variability in the capabilities of LLMs across different geographic and linguistic contexts. CONCLUSIONS: MedExamLLM is an open-source, freely accessible, and publicly available online platform providing comprehensive performance evaluation information and evidence knowledge about LLMs on medical exams around the world. The MedExamLLM platform serves as a valuable resource for educators, researchers, and developers in the fields of clinical medicine and artificial intelligence. By synthesizing evidence on LLM capabilities, the platform provides valuable insights to support the integration of artificial intelligence into medical education. Limitations include potential biases in the data source and the exclusion of non-English literature. Future research should address these gaps and explore methods to enhance LLM performance in diverse contexts.
Zong H; Wu R; Cha J; Wang J; Wu E; Li J; Zhou Y; Zhang C; Feng W; Shen B
21
39546795
Examining the Role of Large Language Models in Orthopedics: Systematic Review.
2,024
Journal of medical Internet research
BACKGROUND: Large language models (LLMs) can understand natural language and generate corresponding text, images, and even videos based on prompts, which holds great potential in medical scenarios. Orthopedics is a significant branch of medicine, and orthopedic diseases contribute to a significant socioeconomic burden, which could be alleviated by the application of LLMs. Several pioneers in orthopedics have conducted research on LLMs across various subspecialties to explore their performance in addressing different issues. However, there are currently few reviews and summaries of these studies, and a systematic summary of existing research is absent. OBJECTIVE: The objective of this review was to comprehensively summarize research findings on the application of LLMs in the field of orthopedics and explore the potential opportunities and challenges. METHODS: PubMed, Embase, and Cochrane Library databases were searched from January 1, 2014, to February 22, 2024, with the language limited to English. The terms, which included variants of "large language model," "generative artificial intelligence," "ChatGPT," and "orthopaedics," were divided into 2 categories: large language model and orthopedics. After completing the search, the study selection process was conducted according to the inclusion and exclusion criteria. The quality of the included studies was assessed using the revised Cochrane risk-of-bias tool for randomized trials and CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) guidance. Data extraction and synthesis were conducted after the quality assessment. RESULTS: A total of 68 studies were selected. The application of LLMs in orthopedics involved the fields of clinical practice, education, research, and management. Of these 68 studies, 47 (69%) focused on clinical practice, 12 (18%) addressed orthopedic education, 8 (12%) were related to scientific research, and 1 (1%) pertained to the field of management. Of the 68 studies, only 8 (12%) recruited patients, and only 1 (1%) was a high-quality randomized controlled trial. ChatGPT was the most commonly mentioned LLM tool. There was considerable heterogeneity in the definition, measurement, and evaluation of the LLMs' performance across the different studies. For diagnostic tasks alone, the accuracy ranged from 55% to 93%. When performing disease classification tasks, ChatGPT with GPT-4's accuracy ranged from 2% to 100%. With regard to answering questions in orthopedic examinations, the scores ranged from 45% to 73.6% due to differences in models and test selections. CONCLUSIONS: LLMs cannot replace orthopedic professionals in the short term. However, using LLMs as copilots could be a potential approach to effectively enhance work efficiency at present. More high-quality clinical trials are needed in the future, aiming to identify optimal applications of LLMs and advance orthopedics toward higher efficiency and precision.
Zhang C; Liu S; Zhou X; Zhou S; Tian Y; Wang S; Xu N; Li W
32
38952020
Data Set and Benchmark (MedGPTEval) to Evaluate Responses From Large Language Models in Medicine: Evaluation Development and Validation.
2,024
JMIR medical informatics
BACKGROUND: Large language models (LLMs) have achieved great progress in natural language processing tasks and demonstrated the potential for use in clinical applications. Despite their capabilities, LLMs in the medical domain are prone to generating hallucinations (not fully reliable responses). Hallucinations in LLMs' responses create substantial risks, potentially threatening patients' physical safety. Thus, to detect and prevent this safety risk, it is essential to evaluate LLMs in the medical domain and to build a systematic evaluation framework. OBJECTIVE: We developed a comprehensive evaluation system, MedGPTEval, composed of criteria, medical data sets in Chinese, and publicly available benchmarks. METHODS: First, a set of evaluation criteria was designed based on a comprehensive literature review. Second, existing candidate criteria were optimized by using a Delphi method with 5 experts in medicine and engineering. Third, 3 clinical experts designed medical data sets to interact with LLMs. Finally, benchmarking experiments were conducted on the data sets. The responses generated by chatbots based on LLMs were recorded for blind evaluations by 5 licensed medical experts. The evaluation criteria that were obtained covered medical professional capabilities, social comprehensive capabilities, contextual capabilities, and computational robustness, with 16 detailed indicators. The medical data sets include 27 medical dialogues and 7 case reports in Chinese. Three chatbots were evaluated: ChatGPT by OpenAI; ERNIE Bot by Baidu, Inc; and Doctor PuJiang (Dr PJ) by Shanghai Artificial Intelligence Laboratory. RESULTS: Dr PJ outperformed ChatGPT and ERNIE Bot in the multiple-turn medical dialogues and case report scenarios. Dr PJ also outperformed ChatGPT in semantic consistency rate and complete error rate, indicating better robustness. However, Dr PJ had slightly lower scores in medical professional capabilities compared with ChatGPT in the multiple-turn dialogue scenario. CONCLUSIONS: MedGPTEval provides comprehensive criteria to evaluate chatbots by LLMs in the medical domain, open-source data sets, and benchmarks assessing 3 LLMs. Experimental results demonstrate that Dr PJ outperforms ChatGPT and ERNIE Bot in social and professional contexts. Therefore, such an assessment system can be easily adopted by researchers in this community to augment an open-source data set.
Xu J; Lu L; Peng X; Pang J; Ding J; Yang L; Song H; Li K; Sun X; Zhang S
10
38875696
Triage Performance Across Large Language Models, ChatGPT, and Untrained Doctors in Emergency Medicine: Comparative Study.
2,024
Journal of medical Internet research
BACKGROUND: Large language models (LLMs) have demonstrated impressive performance in various medical domains, prompting an exploration of their potential utility within the high-demand setting of emergency department (ED) triage. This study evaluated the triage proficiency of different LLMs and ChatGPT, an LLM-based chatbot, compared to professionally trained ED staff and untrained personnel. We further explored whether LLM responses could guide untrained staff in effective triage. OBJECTIVE: This study aimed to assess the efficacy of LLMs and the associated product ChatGPT in ED triage compared to personnel of varying training status and to investigate if the models' responses can enhance the triage proficiency of untrained personnel. METHODS: A total of 124 anonymized case vignettes were triaged by untrained doctors; different versions of currently available LLMs; ChatGPT; and professionally trained raters, who subsequently agreed on a consensus set according to the Manchester Triage System (MTS). The prototypical vignettes were adapted from cases at a tertiary ED in Germany. The main outcome was the level of agreement between raters' MTS level assignments, measured via quadratic-weighted Cohen kappa. The extent of over- and undertriage was also determined. Notably, instances of ChatGPT were prompted using zero-shot approaches without extensive background information on the MTS. The tested LLMs included raw GPT-4, Llama 3 70B, Gemini 1.5, and Mixtral 8x7b. RESULTS: GPT-4-based ChatGPT and untrained doctors showed substantial agreement with the consensus triage of professional raters (kappa=mean 0.67, SD 0.037 and kappa=mean 0.68, SD 0.056, respectively), significantly exceeding the performance of GPT-3.5-based ChatGPT (kappa=mean 0.54, SD 0.024; P<.001). When untrained doctors used this LLM for second-opinion triage, there was a slight but statistically nonsignificant performance increase (kappa=mean 0.70, SD 0.047; P=.97). Other tested LLMs performed similarly to or worse than GPT-4-based ChatGPT or showed odd triaging behavior with the parameters used. LLMs and ChatGPT models tended toward overtriage, whereas untrained doctors undertriaged. CONCLUSIONS: While LLMs and the LLM-based product ChatGPT do not yet match professionally trained raters, their best models' triage proficiency equals that of untrained ED doctors. In its current form, LLMs or ChatGPT thus did not demonstrate gold-standard performance in ED triage and, in the setting of this study, failed to significantly improve untrained doctors' triage when used as decision support. Notable performance enhancements in newer LLM versions over older ones hint at future improvements with further technological development and specific training.
Masanneck L; Schmidt L; Seifert A; Kolsche T; Huntemann N; Jansen R; Mehsin M; Bernhard M; Meuth SG; Bohm L; Pawlitzki M
10
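The study's main outcome, agreement on Manchester Triage System levels measured by quadratic-weighted Cohen kappa, can be sketched as follows. The vignette assignments below are invented; only the metric call reflects the method named in the abstract.

```python
from sklearn.metrics import cohen_kappa_score

# MTS levels 1-5 assigned to the same vignettes by the professional
# consensus and by a model (values are illustrative).
consensus = [1, 2, 3, 3, 4, 5, 2, 1]
model     = [1, 2, 2, 3, 5, 5, 3, 1]

# Quadratic weights penalize assignments far from the consensus more
# heavily than near misses.
kappa = cohen_kappa_score(consensus, model, weights="quadratic")
print(f"quadratic-weighted kappa = {kappa:.2f}")
```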
37665620
Artificial Intelligence in Medical Education: Comparative Analysis of ChatGPT, Bing, and Medical Students in Germany.
2,023
JMIR medical education
BACKGROUND: Large language models (LLMs) have demonstrated significant potential in diverse domains, including medicine. Nonetheless, there is a scarcity of studies examining their performance in medical examinations, especially those conducted in languages other than English, and in direct comparison with medical students. Analyzing the performance of LLMs in state medical examinations can provide insights into their capabilities and limitations and evaluate their potential role in medical education and examination preparation. OBJECTIVE: This study aimed to assess and compare the performance of 3 LLMs, GPT-4, Bing, and GPT-3.5-Turbo, in the German Medical State Examinations of 2022 and to evaluate their performance relative to that of medical students. METHODS: The LLMs were assessed on a total of 630 questions from the spring and fall German Medical State Examinations of 2022. The performance was evaluated with and without media-related questions. Statistical analyses included 1-way ANOVA and independent samples t tests for pairwise comparisons. The relative strength of the LLMs in comparison with that of the students was also evaluated. RESULTS: GPT-4 achieved the highest overall performance, correctly answering 88.1% of questions, closely followed by Bing (86.0%) and GPT-3.5-Turbo (65.7%). The students had an average correct answer rate of 74.6%. Both GPT-4 and Bing significantly outperformed the students in both examinations. When media questions were excluded, Bing achieved the highest performance of 90.7%, closely followed by GPT-4 (90.4%), while GPT-3.5-Turbo lagged (68.2%). There was a significant decline in the performance of GPT-4 and Bing in the fall 2022 examination, which was attributed to a higher proportion of media-related questions and a potential increase in question difficulty. CONCLUSIONS: LLMs, particularly GPT-4 and Bing, demonstrate potential as valuable tools in medical education and for pretesting examination questions. Their high performance, even relative to that of medical students, indicates promising avenues for further development and integration into the educational and clinical landscape.
Roos J; Kasapovic A; Jansen T; Kaczmarczyk R
21
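The reported statistics, a 1-way ANOVA across the 3 models plus pairwise independent-samples t tests, can be reproduced in outline with scipy. The per-question correctness scores below are illustrative, not the study's data.

```python
from scipy import stats

# Illustrative per-question correctness (1 = correct) for the 3 models;
# the actual study covers 630 state-examination questions.
gpt4  = [1, 1, 1, 0, 1, 1, 1, 1]
bing  = [1, 1, 0, 1, 1, 1, 1, 0]
gpt35 = [1, 0, 0, 1, 0, 1, 1, 0]

f_stat, p_anova = stats.f_oneway(gpt4, bing, gpt35)   # 1-way ANOVA
t_stat, p_pair  = stats.ttest_ind(gpt4, gpt35)        # pairwise t test
print(p_anova, p_pair)
```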
40229614
Evaluating the Efficacy of Large Language Models in Generating Medical Documentation: A Comparative Study of ChatGPT-4, ChatGPT-4o, and Claude.
2,025
Aesthetic plastic surgery
BACKGROUND: Large language models (LLMs) have demonstrated transformative potential in health care. They can enhance clinical and academic medicine by facilitating accurate diagnoses, interpreting laboratory results, and automating documentation processes. This study evaluates the efficacy of LLMs in generating surgical operation reports and discharge summaries, focusing on accuracy, efficiency, and quality. METHODS: This study assessed the effectiveness of three leading LLMs (ChatGPT-4.0, ChatGPT-4o, and Claude) using six prompts and analyzing their responses for readability and output quality, validated by plastic surgeons. Readability was measured with the Flesch-Kincaid, Flesch reading ease scores, and Coleman-Liau Index, while reliability was evaluated using the DISCERN score. A paired two-tailed t test (significance threshold p<0.05) compared these metrics, as well as the time taken to generate operation reports and discharge summaries, against the authors' results. RESULTS: Table 3 shows statistically significant differences in readability between ChatGPT-4o and Claude across all metrics, while ChatGPT-4 and Claude differ significantly in the Flesch reading ease and Coleman-Liau indices. Table 6 reveals extremely low p-values across BL, IS, and MM for all models, with Claude consistently outperforming both ChatGPT-4 and ChatGPT-4o. Additionally, Claude generated documents the fastest, completing tasks in approximately 10 to 14 s. These results suggest that Claude not only excels in readability but also demonstrates superior reliability and speed, making it an efficient choice for practical applications. CONCLUSION: The study highlights the importance of selecting appropriate LLMs for clinical use. Integrating these LLMs can streamline healthcare documentation, improve efficiency, and enhance patient outcomes through clearer communication and more accurate medical reports. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Lim B; Seth I; Maxwell M; Cuomo R; Ross RJ; Rozen WM
10
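The readability indices named in the methods can all be computed with the textstat package. A minimal sketch, with an invented sample sentence standing in for a generated discharge summary:

```python
import textstat  # pip install textstat

summary = ("The patient underwent an uncomplicated procedure and was "
           "discharged with oral analgesia and wound-care instructions.")

print(textstat.flesch_reading_ease(summary))    # higher = easier to read
print(textstat.flesch_kincaid_grade(summary))   # US school-grade level
print(textstat.coleman_liau_index(summary))     # grade level from letters
```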
40305085
Accuracy of Large Language Models When Answering Clinical Research Questions: Systematic Review and Network Meta-Analysis.
2,025
Journal of medical Internet research
BACKGROUND: Large language models (LLMs) have flourished and gradually become an important research and application direction in the medical field. However, due to the high degree of specialization, complexity, and specificity of medicine, which results in extremely high accuracy requirements, controversy remains about whether LLMs can be used in the medical field. More studies have evaluated the performance of various types of LLMs in medicine, but the conclusions are inconsistent. OBJECTIVE: This study uses a network meta-analysis (NMA) to assess the accuracy of LLMs when answering clinical research questions to provide high-level evidence-based evidence for its future development and application in the medical field. METHODS: In this systematic review and NMA, we searched PubMed, Embase, Web of Science, and Scopus from inception until October 14, 2024. Studies on the accuracy of LLMs when answering clinical research questions were included and screened by reading published reports. The systematic review and NMA were conducted to compare the accuracy of different LLMs when answering clinical research questions, including objective questions, open-ended questions, top 1 diagnosis, top 3 diagnosis, top 5 diagnosis, and triage and classification. The NMA was performed using Bayesian frequency theory methods. Indirect intercomparisons between programs were performed using a grading scale. A larger surface under the cumulative ranking curve (SUCRA) value indicates a higher ranking of the corresponding LLM accuracy. RESULTS: The systematic review and NMA examined 168 articles encompassing 35,896 questions and 3063 clinical cases. Of the 168 studies, 40 (23.8%) were considered to have a low risk of bias, 128 (76.2%) had a moderate risk, and none were rated as having a high risk. ChatGPT-4o (SUCRA=0.9207) demonstrated strong performance in terms of accuracy for objective questions, followed by Aeyeconsult (SUCRA=0.9187) and ChatGPT-4 (SUCRA=0.8087). ChatGPT-4 (SUCRA=0.8708) excelled at answering open-ended questions. In terms of accuracy for top 1 diagnosis and top 3 diagnosis of clinical cases, human experts (SUCRA=0.9001 and SUCRA=0.7126, respectively) ranked the highest, while Claude 3 Opus (SUCRA=0.9672) performed well at the top 5 diagnosis. Gemini (SUCRA=0.9649) had the highest rated SUCRA value for accuracy in the area of triage and classification. CONCLUSIONS: Our study indicates that ChatGPT-4o has an advantage when answering objective questions. For open-ended questions, ChatGPT-4 may be more credible. Humans are more accurate at the top 1 diagnosis and top 3 diagnosis. Claude 3 Opus performs better at the top 5 diagnosis, while for triage and classification, Gemini is more advantageous. This analysis offers valuable insights for clinicians and medical practitioners, empowering them to effectively leverage LLMs for improved decision-making in learning, diagnosis, and management of various clinical scenarios. TRIAL REGISTRATION: PROSPERO CRD42024558245; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024558245.
Wang L; Li J; Zhuang B; Huang S; Fang M; Wang C; Li W; Zhang M; Gong S
10
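A SUCRA value summarizes a model's ranking distribution in an NMA: it is the sum of its cumulative probabilities of occupying ranks 1 through a-1, divided by a-1, so 1.0 means certainly best and 0 means certainly worst. A worked sketch with made-up rank probabilities:

```python
import numpy as np

# P(model occupies rank r) for a = 4 competing models (illustrative).
rank_probs = np.array([0.70, 0.20, 0.07, 0.03])

a = len(rank_probs)
cumulative = np.cumsum(rank_probs)[:-1]   # P(rank <= r) for r = 1..a-1
sucra = cumulative.sum() / (a - 1)
print(f"SUCRA = {sucra:.4f}")             # here 0.8567
```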
38343631
Almanac - Retrieval-Augmented Language Models for Clinical Medicine.
2,024
NEJM AI
BACKGROUND: Large language models (LLMs) have recently shown impressive zero-shot capabilities, whereby they can use auxiliary data, without the availability of task-specific training examples, to complete a variety of natural language tasks, such as summarization, dialogue generation, and question answering. However, despite many promising applications of LLMs in clinical medicine, adoption of these models has been limited by their tendency to generate incorrect and sometimes even harmful statements. METHODS: We tasked a panel of eight board-certified clinicians and two health care practitioners with evaluating Almanac, an LLM framework augmented with retrieval capabilities from curated medical resources for medical guideline and treatment recommendations. The panel compared responses from Almanac and standard LLMs (ChatGPT-4, Bing, and Bard) on a novel data set of 314 clinical questions spanning nine medical specialties. RESULTS: Almanac showed a significant improvement in performance compared with the standard LLMs across axes of factuality, completeness, user preference, and adversarial safety. CONCLUSIONS: Our results show the potential for LLMs with access to domain-specific corpora to be effective in clinical decision-making. The findings also underscore the importance of carefully testing LLMs before deployment to mitigate their shortcomings. (Funded by the National Institutes of Health, National Heart, Lung, and Blood Institute.).
Zakka C; Shad R; Chaurasia A; Dalal AR; Kim JL; Moor M; Fong R; Phillips C; Alexander K; Ashley E; Boyd J; Boyd K; Hirsch K; Langlotz C; Lee R; Melia J; Nelson J; Sallam K; Tullis S; Vogelsong MA; Cunningham JP; Hiesinger W
10
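A minimal retrieval-augmented sketch in the spirit of Almanac, though not the authors' implementation: embed a curated corpus, retrieve the passage most similar to a clinical question, and prepend it to the generator's prompt. The corpus snippets and question are invented; the sentence-transformers calls are standard.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [
    "Guideline: first-line therapy for uncomplicated hypertension ...",
    "Guideline: anticoagulation thresholds in atrial fibrillation ...",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

question = "When is anticoagulation indicated in atrial fibrillation?"
q_emb = model.encode(question, convert_to_tensor=True)
best = util.cos_sim(q_emb, corpus_emb).argmax().item()

# The retrieved passage grounds the prompt sent to the generator LLM.
prompt = f"Context:\n{corpus[best]}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```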
40289855
Evaluation of Six Large Language Models for Clinical Decision Support: Application in Transfusion Decision-making for RhD Blood-type Patients.
2,025
Annals of laboratory medicine
BACKGROUND: Large language models (LLMs) have the potential for clinical decision support; however, their use in specific tasks, such as determining the RhD blood type for transfusion, remains underexplored. Therefore, we evaluated the accuracy of six LLMs in addressing RhD blood type-related issues in Korean healthcare. METHODS: Fifteen multiple-choice and true/false questions, based on real-world transfusion scenarios and reviewed by specialists, were developed. The questions were administered twice to six LLMs (Clova X, Gemini 1.0, Gemini 1.5, ChatGPT-3.5, GPT-4.0, and GPT-4o) in both Korean and English. Results were compared against the performance of 22 transfusion medicine experts. For particularly challenging questions, prompt engineering was applied, and the questions were reevaluated. RESULTS: GPT-4o demonstrated the highest accuracy rate in Korean (0.6), with significant differences compared with those of Clova X and Gemini (P <0.05). In English, the results were similar across all models. The transfusion experts achieved a higher accuracy rate (0.8). Among the five questions subjected to prompt engineering, only GPT-4o correctly responded to one, whereas the other models failed. All LLM models changed their responses or did not respond when the same question was repeated. CONCLUSIONS: GPT-4o showed the best overall performance among the models tested and may be beneficial in RhD blood product transfusion decision-making. However, its performance suggests that it may serve best in a supportive role rather than as a primary decision-making tool.
Lee JK; Choi S; Park S; Hwang SH; Cho D
10
40217905
Thyro-GenAI: A Chatbot Using Retrieval-Augmented Generative Models for Personalized Thyroid Disease Management.
2,025
Journal of clinical medicine
Background: Large language models (LLMs) have the potential to enhance information processing and clinical reasoning in the healthcare industry but are hindered by inaccuracies and hallucinations. The retrieval-augmented generation (RAG) technique may address these problems by integrating external knowledge sources. Methods: We developed a RAG-based chatbot called Thyro-GenAI by integrating a database of textbooks and guidelines with an LLM. Thyro-GenAI and three service LLMs (OpenAI's ChatGPT-4o, Perplexity AI's ChatGPT-4o, and Anthropic's Claude 3.5 Sonnet) were asked personalized clinical questions about thyroid disease. Three thyroid specialists assessed the quality of the generated responses and references without being blinded, which allowed them to interact with different chatbot interfaces. Results: Thyro-GenAI achieved the highest inverse-weighted mean rank for overall response quality. The overall inverse-weighted mean rankings for Thyro-GenAI, ChatGPT, Perplexity, and Claude were 3.0, 2.3, 2.8, and 1.9, respectively. Thyro-GenAI also achieved the second-highest inverse-weighted mean rank for overall reference quality. The overall inverse-weighted mean rankings for Thyro-GenAI, ChatGPT, Perplexity, and Claude were 3.1, 2.3, 3.2, and 1.8, respectively. Conclusions: Thyro-GenAI produced patient-specific clinical reasoning output based on a vector database, with fewer hallucinations and more reliability, compared to service LLMs. This emphasis on evidence-based responses ensures its safety and validity, addressing a critical limitation of existing LLMs. By integrating RAG with LLMs, it has the potential to support frontline clinical decision-making, especially helping first-line physicians by offering reliable decision support while managing thyroid disease patients.
Shin M; Song J; Kim MG; Yu HW; Choe EK; Chai YJ
10
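The abstract does not define "inverse-weighted mean rank." One plausible construction, an assumption on our part rather than the authors' definition, reverses each rank among the 4 chatbots so that rank 1 earns 4 points and rank 4 earns 1, then averages over questions and raters, which would yield scores in the observed 1-4 range:

```python
import numpy as np

# Illustrative ranks (1 = best of 4) awarded to one chatbot across items;
# the reversal below is our assumed reading of "inverse-weighted".
n_systems = 4
ranks = np.array([1, 2, 1, 3, 1, 2])
score = (n_systems + 1 - ranks).mean()
print(score)   # higher = better, maximum 4.0
```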
40295957
Utilizing Large language models to select literature for meta-analysis shows workload reduction while maintaining a similar recall level as manual curation.
2,025
BMC medical research methodology
BACKGROUND: Large language models (LLMs) like ChatGPT showed great potential in aiding medical research. A heavy workload in filtering records is needed during the research process of evidence-based medicine, especially meta-analysis. However, few studies tried to use LLMs to help screen records in meta-analysis. OBJECTIVE: In this research, we aimed to explore the possibility of incorporating multiple LLMs to facilitate the screening step based on the title and abstract of records during meta-analysis. METHODS: Various LLMs were evaluated, including GPT-3.5, GPT-4, Deepseek-R1-Distill, Qwen-2.5, Phi-4, Llama-3.1, Gemma-2, and Claude-2. To assess our strategy, we selected three meta-analyses from the literature, together with a glioma meta-analysis embedded in the study, as additional validation. For the automatic selection of records from curated meta-analyses, a four-step strategy called LARS-GPT was developed, consisting of (1) criteria selection and single-prompt (prompt with one criterion) creation, (2) best combination identification, (3) combined-prompt (prompt with one or more criteria) creation, and (4) request sending and answer summary. Recall, workload reduction, precision, and F1 score were calculated to assess the performance of LARS-GPT. RESULTS: Performance varied between the different single-prompts, with a mean recall of 0.800. Based on these single-prompts, we were able to find combinations with better performance than the pre-set threshold. Finally, with a best combination of criteria identified, LARS-GPT showed a 40.1% workload reduction on average with a recall greater than 0.9. CONCLUSIONS: We show here the groundbreaking finding that automatic selection of literature for meta-analysis is possible with LLMs. We provide it here as a pipeline, LARS-GPT, which achieved a substantial workload reduction while maintaining the pre-set recall.
Cai X; Geng Y; Du Y; Westerman B; Wang D; Ma C; Vallejo JJG
10
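The four reported metrics all follow from the confusion counts of a screening run. A sketch with illustrative counts, chosen so that recall exceeds 0.9 and workload reduction lands near the reported 40%:

```python
# Illustrative confusion counts for a title/abstract screening run.
tp, fn = 28, 2      # relevant records kept vs. wrongly excluded
tn, fp = 398, 572   # irrelevant records excluded vs. wrongly kept

recall    = tp / (tp + fn)
precision = tp / (tp + fp)
f1        = 2 * precision * recall / (precision + recall)
# Workload reduction: share of records the reviewer no longer screens,
# i.e., everything the pipeline excluded.
workload_reduction = (tn + fn) / (tp + fp + tn + fn)

print(recall, precision, f1, workload_reduction)
```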
40072530
[Integration of large language models into the clinic: Revolution in analysing and processing patient data to increase efficiency and quality in radiology].
2,025
Radiologie (Heidelberg, Germany)
BACKGROUND: Large Language Models (LLMs) like ChatGPT, Llama, and Claude are transforming healthcare by interpreting complex text, extracting information, and providing guideline-based support. Radiology, with its high patient volume and digital workflows, is an ideal field for LLM integration. OBJECTIVE: Assessment of the potential of LLMs to enhance efficiency, standardization, and decision support in radiology, while addressing ethical and regulatory challenges. MATERIAL AND METHODS: Pilot studies at Freiburg and Basel university hospitals evaluated local LLM systems for tasks like prior report summarization and guideline-driven reporting. Integration with Picture Archiving and Communication System (PACS) and Electronic Health Record (EHR) systems was achieved via Digital Imaging and Communications in Medicine (DICOM) and Fast Healthcare Interoperability Resources (FHIR) standards. Metrics included time savings, compliance with the European Union (EU) Artificial Intelligence (AI) Act, and user acceptance. RESULTS: LLMs demonstrate significant potential as a support tool for radiologists in clinical practice by reducing reporting times, automating routine tasks, and ensuring consistent, high-quality results. They also support interdisciplinary workflows (e.g., tumor boards) and meet data protection requirements when locally implemented. DISCUSSION: Local LLM systems are feasible and beneficial in radiology, enhancing efficiency and diagnostic quality. Future work should refine transparency, expand applications, and ensure LLMs complement medical expertise while adhering to ethical and legal standards.
Arnold P; Henkel M; Bamberg F; Kotter E
10
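The prior-report-summarization workflow mentioned above hinges on pulling reports out of the EHR over FHIR. A minimal sketch under assumptions: the endpoint URL and patient ID are hypothetical, while the resource type, `conclusion` field, and search parameters follow standard FHIR REST conventions.

```python
import requests

# Hypothetical FHIR endpoint; DiagnosticReport is searched by patient,
# newest first, limited to the 5 most recent reports.
base = "https://fhir.example-hospital.org"
resp = requests.get(
    f"{base}/DiagnosticReport",
    params={"patient": "Patient/123", "_sort": "-date", "_count": 5},
    timeout=10,
)
bundle = resp.json()
prior_reports = [
    entry["resource"].get("conclusion", "")
    for entry in bundle.get("entry", [])
]
# `prior_reports` would then be passed to a locally hosted LLM for
# summarization, keeping patient data inside the hospital network.
print(prior_reports)
```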
40055694
A systematic review of large language model (LLM) evaluations in clinical medicine.
2,025
BMC medical informatics and decision making
BACKGROUND: Large Language Models (LLMs), advanced AI tools based on transformer architectures, demonstrate significant potential in clinical medicine by enhancing decision support, diagnostics, and medical education. However, their integration into clinical workflows requires rigorous evaluation to ensure reliability, safety, and ethical alignment. OBJECTIVE: This systematic review examines the evaluation parameters and methodologies applied to LLMs in clinical medicine, highlighting their capabilities, limitations, and application trends. METHODS: A comprehensive review of the literature was conducted across PubMed, Scopus, Web of Science, IEEE Xplore, and arXiv databases, encompassing both peer-reviewed and preprint studies. Studies were screened against predefined inclusion and exclusion criteria to identify original research evaluating LLM performance in medical contexts. RESULTS: The results reveal a growing interest in leveraging LLM tools in clinical settings, with 761 studies meeting the inclusion criteria. While general-domain LLMs, particularly ChatGPT and GPT-4, dominated evaluations (93.55%), medical-domain LLMs accounted for only 6.45%. Accuracy emerged as the most commonly assessed parameter (21.78%). Despite these advancements, the evidence base highlights certain limitations and biases across the included studies, emphasizing the need for careful interpretation and robust evaluation frameworks. CONCLUSIONS: The exponential growth in LLM research underscores their transformative potential in healthcare. However, addressing challenges such as ethical risks, evaluation variability, and underrepresentation of critical specialties will be essential. Future efforts should prioritize standardized frameworks to ensure safe, effective, and equitable LLM integration in clinical practice.
Shool S; Adimi S; Saboori Amleshi R; Bitaraf E; Golpira R; Tara M
10
39591396
Evaluation of a Large Language Model on the American Academy of Pediatrics' PREP Emergency Medicine Question Bank.
2,024
Pediatric emergency care
BACKGROUND: Large language models (LLMs), including ChatGPT (Chat Generative Pretrained Transformer), a popular, publicly available LLM, represent an important innovation in the application of artificial intelligence. These systems generate relevant content by identifying patterns in large text datasets based on user input across various topics. We sought to evaluate the performance of ChatGPT in practice test questions designed to assess knowledge competency for pediatric emergency medicine (PEM). METHODS: We evaluated the performance of ChatGPT using a popular question bank for PEM board certification published between 2022 and 2024. Clinicians assessed performance of ChatGPT by inputting prompts and recording the software's responses, asking each question over 3 separate iterations. We calculated correct answer percentages (defined as correct in at least 2/3 iterations) and assessed for agreement between the iterations using Fleiss' kappa. RESULTS: We included 215 questions over the 3 study years. ChatGPT responded correctly to 161 of the PREP EM questions over the 3 years (74.5%; 95% confidence interval, 68.5%-80.5%), which was similar within each study year (75.0%, 71.8%, and 77.8% for study years 2022, 2023, and 2024, respectively). Among correct responses, most were answered correctly on all 3 iterations (137/161, 85.1%). Performance varied by topic, with the highest scores in research and medical specialties and lower in procedures and toxicology. Fleiss' kappa across the 3 iterations was 0.71, indicating substantial agreement. CONCLUSION: ChatGPT provided correct answers to PEM questions in three-quarters of cases, above the recommended passing minimum of 65% set by the question publisher. Responses by ChatGPT included detailed explanations, suggesting potential use for medical education. We identified limitations in specific topics and image interpretation. These results demonstrate opportunities for LLMs to enhance both the education and clinical practice of PEM.
Ramgopal S; Varma S; Gorski JK; Kester KM; Shieh A; Suresh S
21
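The agreement analysis treats ChatGPT's 3 iterations as raters and each answer choice as a category, then applies Fleiss' kappa. A sketch with invented answer picks, using statsmodels:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = questions, columns = ChatGPT's 3 iterations (illustrative picks).
answers = np.array([
    ["A", "A", "A"],
    ["B", "B", "C"],
    ["D", "D", "D"],
    ["C", "C", "C"],
])

# Convert raw picks to a questions x answer-choice count table, then
# compute Fleiss' kappa over it.
table, _ = aggregate_raters(answers)
print(fleiss_kappa(table))
```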
38051578
ChatGPT Versus Consultants: Blinded Evaluation on Answering Otorhinolaryngology Case-Based Questions.
2,023
JMIR medical education
BACKGROUND: Large language models (LLMs), such as ChatGPT (Open AI), are increasingly used in medicine and supplement standard search engines as information sources. This leads to more "consultations" of LLMs about personal medical symptoms. OBJECTIVE: This study aims to evaluate ChatGPT's performance in answering clinical case-based questions in otorhinolaryngology (ORL) in comparison to ORL consultants' answers. METHODS: We used 41 case-based questions from established ORL study books and past German state examinations for doctors. The questions were answered by both ORL consultants and ChatGPT 3. ORL consultants rated all responses, except their own, on medical adequacy, conciseness, coherence, and comprehensibility using a 6-point Likert scale. They also identified (in a blinded setting) if the answer was created by an ORL consultant or ChatGPT. Additionally, the character count was compared. Due to the rapidly evolving pace of technology, a comparison between responses generated by ChatGPT 3 and ChatGPT 4 was included to give an insight into the evolving potential of LLMs. RESULTS: Ratings in all categories were significantly higher for ORL consultants (P<.001). Although inferior to the scores of the ORL consultants, ChatGPT's scores were relatively higher in semantic categories (conciseness, coherence, and comprehensibility) compared to medical adequacy. ORL consultants identified ChatGPT as the source correctly in 98.4% (121/123) of cases. ChatGPT's answers had a significantly higher character count compared to ORL consultants (P<.001). Comparison between responses generated by ChatGPT 3 and ChatGPT 4 showed a slight improvement in medical accuracy as well as a better coherence of the answers provided. By contrast, neither conciseness (P=.06) nor comprehensibility (P=.08) improved significantly despite a significant 52.5% increase in mean character count (from 964 to 1470; P<.001). CONCLUSIONS: While ChatGPT provided longer answers to medical problems, medical adequacy and conciseness were significantly lower compared to ORL consultants' answers. LLMs have potential as augmentative tools for medical care, but their "consultation" for medical problems carries a high risk of misinformation as their high semantic quality may mask contextual deficits.
Buhr CR; Smith H; Huppertz T; Bahr-Hamm K; Matthias C; Blaikie A; Kelsey T; Kuhn S; Eckrich J
0-1
39019566
Development and evaluation of a large language model of ophthalmology in Chinese.
2,024
The British journal of ophthalmology
BACKGROUND: Large language models (LLMs), such as ChatGPT, have considerable implications for various medical applications. However, ChatGPT's training primarily draws from English-centric internet data and is not tailored explicitly to the medical domain. Thus, an ophthalmic LLM in Chinese is clinically essential for both healthcare providers and patients in mainland China. METHODS: We developed an LLM of ophthalmology (MOPH) using Chinese corpora and evaluated its performance in three clinical scenarios: ophthalmic board exams in Chinese, answering evidence-based medicine-oriented ophthalmic questions and diagnostic accuracy for clinical vignettes. Additionally, we compared MOPH's performance to that of human doctors. RESULTS: In the ophthalmic exam, MOPH's average score closely aligned with the mean score of trainees (64.7 (range 62-68) vs 66.2 (range 50-92), p=0.817), and it scored above 60 in all seven mock exams. In answering ophthalmic questions, 83.3% (25/30) of MOPH's responses adhered to Chinese guidelines (Likert scale 4-5). Only 6.7% (2/30, Likert scale 1-2) and 10% (3/30, Likert scale 3) of responses were rated as 'poor or very poor' or 'potentially misinterpretable inaccuracies' by reviewers. In diagnostic accuracy, the rate of correct diagnosis by ophthalmologists was higher than that of MOPH (96.1% vs 81.1%), but the difference was not statistically significant (p>0.05). CONCLUSION: This study demonstrated the promising performance of MOPH, a Chinese-specific ophthalmic LLM, in diverse clinical scenarios. MOPH has potential real-world applications in Chinese-language ophthalmology settings.
Zheng C; Ye H; Guo J; Yang J; Fei P; Yuan Y; Huang D; Huang Y; Peng J; Xie X; Xie M; Zhao P; Chen L; Zhang M
21
37083633
Trialling a Large Language Model (ChatGPT) in General Practice With the Applied Knowledge Test: Observational Study Demonstrating Opportunities and Limitations in Primary Care.
2,023
JMIR medical education
BACKGROUND: Large language models exhibiting human-level performance in specialized tasks are emerging; examples include Generative Pretrained Transformer 3.5, which underlies the processing of ChatGPT. Rigorous trials are required to understand the capabilities of emerging technology, so that innovation can be directed to benefit patients and practitioners. OBJECTIVE: Here, we evaluated the strengths and weaknesses of ChatGPT in primary care using the Membership of the Royal College of General Practitioners Applied Knowledge Test (AKT) as a medium. METHODS: AKT questions were sourced from a web-based question bank and 2 AKT practice papers. In total, 674 unique AKT questions were inputted to ChatGPT, with the model's answers recorded and compared to correct answers provided by the Royal College of General Practitioners. Each question was inputted twice in separate ChatGPT sessions, with answers on repeated trials compared to gauge consistency. Subject difficulty was gauged by referring to examiners' reports from 2018 to 2022. Novel explanations from ChatGPT-defined as information provided that was not inputted within the question or multiple answer choices-were recorded. Performance was analyzed with respect to subject, difficulty, question source, and novel model outputs to explore ChatGPT's strengths and weaknesses. RESULTS: Average overall performance of ChatGPT was 60.17%, which is below the mean passing mark in the last 2 years (70.42%). Accuracy differed between sources (P=.04 and .06). ChatGPT's performance varied with subject category (P=.02 and .02), but variation did not correlate with difficulty (Spearman rho=-0.241 and -0.238; P=.19 and .20). The proclivity of ChatGPT to provide novel explanations did not affect accuracy (P>.99 and .23). CONCLUSIONS: Large language models are approaching human expert-level performance, although further development is required to match the performance of qualified primary care physicians in the AKT. Validated high-performance models may serve as assistants or autonomous clinical tools to ameliorate the general practice workforce crisis.
Thirunavukarasu AJ; Hassan R; Mahmood S; Sanghera R; Barzangi K; El Mukashfi M; Shah S
21
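The difficulty analysis above correlates per-subject accuracy with examiner-reported difficulty via Spearman's rho. A sketch with invented per-subject values:

```python
from scipy.stats import spearmanr

# Illustrative per-subject values: ChatGPT accuracy vs. subject
# difficulty gauged from examiners' reports (higher = harder).
accuracy   = [0.72, 0.55, 0.61, 0.48, 0.66]
difficulty = [0.40, 0.52, 0.47, 0.60, 0.45]

rho, p = spearmanr(accuracy, difficulty)
print(f"rho = {rho:.3f}, P = {p:.2f}")
```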
37337531
Evaluating the limits of AI in medical specialisation: ChatGPT's performance on the UK Neurology Specialty Certificate Examination.
2,023
BMJ neurology open
BACKGROUND: Large language models such as ChatGPT have demonstrated potential as innovative tools for medical education and practice, with studies showing their ability to perform at or near the passing threshold in general medical examinations and standardised admission tests. However, no studies have assessed their performance in the UK medical education context, particularly at a specialty level, and specifically in the field of neurology and neuroscience. METHODS: We evaluated the performance of ChatGPT in higher specialty training for neurology and neuroscience using 69 questions from the Pool-Specialty Certificate Examination (SCE) Neurology Web Questions bank. The dataset primarily focused on neurology (80%). The questions spanned subtopics such as symptoms and signs, diagnosis, interpretation and management with some questions addressing specific patient populations. The performance of ChatGPT 3.5 Legacy, ChatGPT 3.5 Default and ChatGPT-4 models was evaluated and compared. RESULTS: ChatGPT 3.5 Legacy and ChatGPT 3.5 Default displayed overall accuracies of 42% and 57%, respectively, falling short of the passing threshold of 58% for the 2022 SCE neurology examination. ChatGPT-4, on the other hand, achieved the highest accuracy of 64%, surpassing the passing threshold and outperforming its predecessors across disciplines and subtopics. CONCLUSIONS: The advancements in ChatGPT-4's performance compared with its predecessors demonstrate the potential for artificial intelligence (AI) models in specialised medical education and practice. However, our findings also highlight the need for ongoing development and collaboration between AI developers and medical experts to ensure the models' relevance and reliability in the rapidly evolving field of medicine.
Giannos P
21
38261378
Assessing ChatGPT's Mastery of Bloom's Taxonomy Using Psychosomatic Medicine Exam Questions: Mixed-Methods Study.
2,024
Journal of medical Internet research
BACKGROUND: Large language models such as GPT-4 (Generative Pre-trained Transformer 4) are being increasingly used in medicine and medical education. However, these models are prone to "hallucinations" (ie, outputs that seem convincing while being factually incorrect). It is currently unknown how these errors by large language models relate to the different cognitive levels defined in Bloom's taxonomy. OBJECTIVE: This study aims to explore how GPT-4 performs in terms of Bloom's taxonomy using psychosomatic medicine exam questions. METHODS: We used a large data set of psychosomatic medicine multiple-choice questions (N=307) with real-world results derived from medical school exams. GPT-4 answered the multiple-choice questions using 2 distinct prompt versions: detailed and short. The answers were analyzed using a quantitative approach and a qualitative approach. Focusing on incorrectly answered questions, we categorized reasoning errors according to the hierarchical framework of Bloom's taxonomy. RESULTS: GPT-4's performance in answering exam questions yielded a high success rate: 93% (284/307) for the detailed prompt and 91% (278/307) for the short prompt. Questions answered correctly by GPT-4 had a statistically significant higher difficulty than questions answered incorrectly (P=.002 for the detailed prompt and P<.001 for the short prompt). Independent of the prompt, GPT-4's lowest exam performance was 78.9% (15/19), thereby always surpassing the "pass" threshold. Our qualitative analysis of incorrect answers, based on Bloom's taxonomy, showed that errors were primarily in the "remember" (29/68) and "understand" (23/68) cognitive levels; specific issues arose in recalling details, understanding conceptual relationships, and adhering to standardized guidelines. CONCLUSIONS: GPT-4 demonstrated a remarkable success rate when confronted with psychosomatic medicine multiple-choice exam questions, aligning with previous findings. When evaluated through Bloom's taxonomy, our data revealed that GPT-4 occasionally ignored specific facts (remember), provided illogical reasoning (understand), or failed to apply concepts to a new situation (apply). These errors, which were confidently presented, could be attributed to inherent model biases and the tendency to generate outputs that maximize likelihood.
Herrmann-Werner A; Festl-Wietek T; Holderried F; Herschbach L; Griewatz J; Masters K; Zipfel S; Mahling M
21
40053752
Detecting Artificial Intelligence-Generated Versus Human-Written Medical Student Essays: Semirandomized Controlled Study.
2,025
JMIR medical education
BACKGROUND: Large language models, exemplified by ChatGPT, have reached a level of sophistication that makes distinguishing between human- and artificial intelligence (AI)-generated texts increasingly challenging. This has raised concerns in academia, particularly in medicine, where the accuracy and authenticity of written work are paramount. OBJECTIVE: This semirandomized controlled study aims to examine the ability of 2 blinded expert groups with different levels of content familiarity-medical professionals and humanities scholars with expertise in textual analysis-to distinguish between longer scientific texts in German written by medical students and those generated by ChatGPT. Additionally, the study sought to analyze the reasoning behind their identification choices, particularly the role of content familiarity and linguistic features. METHODS: Between May and August 2023, a total of 35 experts (medical: n=22; humanities: n=13) were each presented with 2 pairs of texts on different medical topics. Each pair had similar content and structure: 1 text was written by a medical student, and the other was generated by ChatGPT (version 3.5, March 2023). Experts were asked to identify the AI-generated text and justify their choice. These justifications were analyzed through a multistage, interdisciplinary qualitative analysis to identify relevant textual features. Before unblinding, experts rated each text on 6 characteristics: linguistic fluency and spelling/grammatical accuracy, scientific quality, logical coherence, expression of knowledge limitations, formulation of future research questions, and citation quality. Univariate tests and multivariate logistic regression analyses were used to examine associations between participants' characteristics, their stated reasons for author identification, and the likelihood of correctly determining a text's authorship. RESULTS: Overall, in 48 out of 69 (70%) decision rounds, participants accurately identified the AI-generated texts, with minimal difference between groups (medical: 31/43, 72%; humanities: 17/26, 65%; odds ratio [OR] 1.37, 95% CI 0.5-3.9). While content errors had little impact on identification accuracy, stylistic features-particularly redundancy (OR 6.90, 95% CI 1.01-47.1), repetition (OR 8.05, 95% CI 1.25-51.7), and thread/coherence (OR 6.62, 95% CI 1.25-35.2)-played a crucial role in participants' decisions to identify a text as AI-generated. CONCLUSIONS: The findings suggest that both medical and humanities experts were able to identify ChatGPT-generated texts in medical contexts, with their decisions largely based on linguistic attributes. The accuracy of identification appears to be independent of experts' familiarity with the text content. As the decision-making process primarily relies on linguistic attributes-such as stylistic features and text coherence-further quasi-experimental studies using texts from other academic disciplines should be conducted to determine whether instructions based on these features can enhance lecturers' ability to distinguish between student-authored and AI-generated work.
Doru B; Maier C; Busse JS; Lucke T; Schonhoff J; Enax-Krumova E; Hessler S; Berger M; Tokic M
10
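The reported odds ratios come from a logistic regression of correct identification on the cited textual features, with ORs obtained by exponentiating the coefficients. A sketch on synthetic data; the predictor names, effect sizes, and seed are all invented, only n=69 decision rounds echoes the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 69  # decision rounds, as in the study; data below are synthetic
# Binary indicators: did the expert cite redundancy / repetition /
# coherence as a reason for their call?
X = rng.integers(0, 2, size=(n, 3))
logits = -0.5 + X @ np.array([1.9, 2.1, 1.9])
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))  # 1 = correct identification

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(fit.params)   # OR = exp(coefficient)
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(odds_ratios, ci, sep="\n")
```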
37099373
Performance of ChatGPT on UK Standardized Admission Tests: Insights From the BMAT, TMUA, LNAT, and TSA Examinations.
2,023
JMIR medical education
BACKGROUND: Large language models, such as ChatGPT by OpenAI, have demonstrated potential in various applications, including medical education. Previous studies have assessed ChatGPT's performance in university or professional settings. However, the model's potential in the context of standardized admission tests remains unexplored. OBJECTIVE: This study evaluated ChatGPT's performance on standardized admission tests in the United Kingdom, including the BioMedical Admissions Test (BMAT), Test of Mathematics for University Admission (TMUA), Law National Aptitude Test (LNAT), and Thinking Skills Assessment (TSA), to understand its potential as an innovative tool for education and test preparation. METHODS: Recent public resources (2019-2022) were used to compile a data set of 509 questions from the BMAT, TMUA, LNAT, and TSA covering diverse topics in aptitude, scientific knowledge and applications, mathematical thinking and reasoning, critical thinking, problem-solving, reading comprehension, and logical reasoning. This evaluation assessed ChatGPT's performance using the legacy GPT-3.5 model, focusing on multiple-choice questions for consistency. The model's performance was analyzed based on question difficulty, the proportion of correct responses when aggregating exams from all years, and a comparison of test scores between papers of the same exam using binomial distribution and paired-sample (2-tailed) t tests. RESULTS: The proportion of correct responses was significantly lower than incorrect ones in BMAT section 2 (P<.001) and TMUA paper 1 (P<.001) and paper 2 (P<.001). No significant differences were observed in BMAT section 1 (P=.2), TSA section 1 (P=.7), or LNAT papers 1 and 2, section A (P=.3). ChatGPT performed better in BMAT section 1 than section 2 (P=.047), with a maximum candidate ranking of 73% compared to a minimum of 1%. In the TMUA, it engaged with questions but had limited accuracy and no performance difference between papers (P=.6), with candidate rankings below 10%. In the LNAT, it demonstrated moderate success, especially in paper 2's questions; however, student performance data were unavailable. TSA performance varied across years with generally moderate results and fluctuating candidate rankings. Similar trends were observed for easy to moderate difficulty questions (BMAT section 1, P=.3; BMAT section 2, P=.04; TMUA paper 1, P<.001; TMUA paper 2, P=.003; TSA section 1, P=.8; and LNAT papers 1 and 2, section A, P>.99) and hard to challenging ones (BMAT section 1, P=.7; BMAT section 2, P<.001; TMUA paper 1, P=.007; TMUA paper 2, P<.001; TSA section 1, P=.3; and LNAT papers 1 and 2, section A, P=.2). CONCLUSIONS: ChatGPT shows promise as a supplementary tool for subject areas and test formats that assess aptitude, problem-solving and critical thinking, and reading comprehension. However, its limitations in areas such as scientific and mathematical knowledge and applications highlight the need for continuous development and integration with conventional learning strategies in order to fully harness its potential.
Giannos P; Delardas O
21
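Comparing the proportion of correct responses against incorrect ones, as this study does per section, amounts to a binomial test against a null of 0.5. A sketch with an invented section size:

```python
from scipy.stats import binomtest

# Illustrative: 22 correct answers out of 60 questions in one section;
# p=0.5 corresponds to equal correct and incorrect proportions.
result = binomtest(22, 60, p=0.5, alternative="two-sided")
print(result.pvalue)
```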
38153785
Differentiating ChatGPT-Generated and Human-Written Medical Texts: Quantitative Study.
2,023
JMIR medical education
BACKGROUND: Large language models, such as ChatGPT, are capable of generating grammatically perfect and human-like text content, and a large number of ChatGPT-generated texts have appeared on the internet. However, medical texts, such as clinical notes and diagnoses, require rigorous validation, and erroneous medical content generated by ChatGPT could potentially lead to disinformation that poses significant harm to health care and the general public. OBJECTIVE: This study is among the first on responsible artificial intelligence-generated content in medicine. We focus on analyzing the differences between medical texts written by human experts and those generated by ChatGPT and designing machine learning workflows to effectively detect and differentiate medical texts generated by ChatGPT. METHODS: We first constructed a suite of data sets containing medical texts written by human experts and generated by ChatGPT. We analyzed the linguistic features of these 2 types of content and uncovered differences in vocabulary, parts-of-speech, dependency, sentiment, perplexity, and other aspects. Finally, we designed and implemented machine learning methods to detect medical text generated by ChatGPT. The data and code used in this paper are published on GitHub. RESULTS: Medical texts written by humans were more concrete, more diverse, and typically contained more useful information, while medical texts generated by ChatGPT paid more attention to fluency and logic and usually expressed general terminologies rather than effective information specific to the context of the problem. A bidirectional encoder representations from transformers-based model effectively detected medical texts generated by ChatGPT, and the F(1) score exceeded 95%. CONCLUSIONS: Although text generated by ChatGPT is grammatically perfect and human-like, the linguistic characteristics of generated medical texts were different from those written by human experts. Medical text generated by ChatGPT could be effectively detected by the proposed machine learning algorithms. This study provides a pathway toward trustworthy and accountable use of large language models in medicine.
Liao W; Liu Z; Dai H; Xu S; Wu Z; Zhang Y; Huang X; Zhu D; Cai H; Li Q; Liu T; Li X
10
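The paper's detector is BERT based; as a deliberately simplified stand-in, the sketch below trains a TF-IDF plus logistic-regression classifier to separate human-written from ChatGPT-generated medical texts. The training texts and labels are dummies, far too few for a real model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Patient presents with acute right lower quadrant pain and fever.",
    "In general, it is important to consider a range of possible causes.",
    "Exam shows focal tenderness; labs reveal mild leukocytosis.",
    "Overall, symptoms may vary and consulting a professional is advised.",
]
labels = [0, 1, 0, 1]  # 0 = human-written, 1 = ChatGPT-generated

# Word and bigram TF-IDF features feed a linear classifier, echoing the
# paper's finding that generated text differs in vocabulary and style.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
detector.fit(texts, labels)
print(detector.predict(["It is advisable to seek medical attention."]))
```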
39322838
The Potential of Chat-Based Artificial Intelligence Models in Differentiating Between Keloid and Hypertrophic Scars: A Pilot Study.
2,024
Aesthetic plastic surgery
BACKGROUND: Lasting scars such as keloids and hypertrophic scars adversely affect a patient's quality of life. However, these scars are frequently underdiagnosed because of the complexity of the current diagnostic criteria and classification systems. This study aimed to explore the application of Large Language Models (LLMs) such as ChatGPT in diagnosing scar conditions and to propose a more accessible and straightforward diagnostic approach. METHODS: In this study, five artificial intelligence (AI) chatbots, including ChatGPT-4 (GPT-4), Bing Chat (Precise, Balanced, and Creative modes), and Bard, were evaluated for their ability to interpret clinical scar images using a standardized set of prompts. Thirty mock images of various scar types were analyzed, and each chatbot was queried five times to assess the diagnostic accuracy. RESULTS: GPT-4 had a significantly higher accuracy rate in diagnosing scars than Bing Chat. The overall accuracy rates of GPT-4 and Bing Chat were 36.0% and 22.0%, respectively (P = 0.027), with GPT-4 showing better performance in terms of specificity for keloids (0.6 vs. 0.006) and hypertrophic scars (0.72 vs. 0.0) than Bing Chat. CONCLUSIONS: Although currently available LLMs show potential for use in scar diagnostics, the current technology is still under development and is not yet sufficient for clinical application standards, highlighting the need for further advancements in AI for more accurate medical diagnostics. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online instructions to authors www.springer.com/00266 .
Shiraishi M; Miyamoto S; Takeishi H; Kurita D; Furuse K; Ohba J; Moriwaki Y; Fujisawa K; Okazaki M
0-1
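The per-class specificity figures reported above can be derived from the chatbots' diagnoses against ground-truth scar labels by binarizing one class at a time. A sketch with invented labels:

```python
from sklearn.metrics import confusion_matrix

truth = ["keloid", "keloid", "hypertrophic", "other", "hypertrophic"]
pred  = ["keloid", "other",  "hypertrophic", "other", "keloid"]

# Specificity for one class = TN / (TN + FP), treating that class as
# "positive" and every other diagnosis as "negative".
positive = "keloid"
y_true = [int(t == positive) for t in truth]
y_pred = [int(p == positive) for p in pred]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("keloid specificity:", tn / (tn + fp))
```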
39307579
ChatGPT vs. sleep disorder specialist responses to common sleep queries: Ratings by experts and laypeople.
2,024
Sleep health
BACKGROUND: Many individuals use the Internet, including generative artificial intelligence like ChatGPT, for sleep-related information before consulting medical professionals. This study compared responses from sleep disorder specialists and ChatGPT to common sleep queries, with experts and laypersons evaluating the responses' accuracy and clarity. METHODS: We assessed responses from sleep medicine specialists and ChatGPT-4 to 140 sleep-related questions from the Korean Sleep Research Society's website. In a blinded study design, sleep disorder experts and laypersons rated the medical helpfulness, emotional supportiveness, and sentence comprehensibility of the responses on a 1-5 scale. RESULTS: Laypersons rated ChatGPT higher for medical helpfulness (3.79 +/- 0.90 vs. 3.44 +/- 0.99, p < .001), emotional supportiveness (3.48 +/- 0.79 vs. 3.12 +/- 0.98, p < .001), and sentence comprehensibility (4.24 +/- 0.79 vs. 4.14 +/- 0.96, p = .028). Experts also rated ChatGPT higher for emotional supportiveness (3.33 +/- 0.62 vs. 3.01 +/- 0.67, p < .001) but preferred specialists' responses for sentence comprehensibility (4.15 +/- 0.74 vs. 3.94 +/- 0.90, p < .001). When it comes to medical helpfulness, the experts rated the specialists' answers slightly higher than the laypersons did (3.70 +/- 0.84 vs. 3.63 +/- 0.87, p = .109). Experts slightly preferred specialist responses overall (56.0%), while laypersons favored ChatGPT (54.3%; p < .001). ChatGPT's responses were significantly longer (186.76 +/- 39.04 vs. 113.16 +/- 95.77 words, p < .001). DISCUSSION: Generative artificial intelligence like ChatGPT may help disseminate sleep-related medical information online. Laypersons appear to prefer ChatGPT's detailed, emotionally supportive responses over those from sleep disorder specialists.
Kim J; Lee SY; Kim JH; Shin DH; Oh EH; Kim JA; Cho JW
0-1
38602313
Importance of Patient History in Artificial Intelligence-Assisted Medical Diagnosis: Comparison Study.
2,024
JMIR medical education
BACKGROUND: Medical history contributes approximately 80% to a diagnosis, although physical examinations and laboratory investigations increase a physician's confidence in the medical diagnosis. The concept of artificial intelligence (AI) was first proposed more than 70 years ago. Recently, its role in various fields of medicine has grown remarkably. However, no studies have evaluated the importance of patient history in AI-assisted medical diagnosis. OBJECTIVE: This study explored the contribution of patient history to AI-assisted medical diagnoses and assessed the accuracy of ChatGPT in reaching a clinical diagnosis based on the medical history provided. METHODS: Using clinical vignettes of 30 cases identified in The BMJ, we evaluated the accuracy of diagnoses generated by ChatGPT. We compared the diagnoses made by ChatGPT based solely on medical history with the correct diagnoses. We also compared the diagnoses made by ChatGPT after incorporating additional physical examination findings and laboratory data alongside history with the correct diagnoses. RESULTS: ChatGPT accurately diagnosed 76.6% (23/30) of the cases with only the medical history, consistent with previous research targeting physicians. We also found that this rate was 93.3% (28/30) when additional information was included. CONCLUSIONS: Although adding additional information improves diagnostic accuracy, patient history remains a significant factor in AI-assisted medical diagnosis. Thus, when using AI in medical diagnosis, it is crucial to include pertinent and correct patient histories for an accurate diagnosis. Our findings emphasize the continued significance of patient history in clinical diagnoses in this age and highlight the need for its integration into AI-assisted medical diagnosis systems.
Fukuzawa F; Yanagita Y; Yokokawa D; Uchida S; Yamashita S; Li Y; Shikino K; Tsukamoto T; Noda K; Uehara T; Ikusaka M
10
38526538
Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study.
2,024
JMIR formative research
BACKGROUND: Medical students may increasingly use large language models (LLMs) in their learning. ChatGPT is an LLM at the forefront of this new development in medical education with the capacity to respond to multidisciplinary questions. OBJECTIVE: The aim of this study was to evaluate the ability of ChatGPT 3.5 to complete the Indian undergraduate medical examination in the subject of community medicine. We further compared ChatGPT scores with the scores obtained by the students. METHODS: The study was conducted at a publicly funded medical college in Hyderabad, India. The study was based on the internal assessment examination conducted in January 2023 for students in the Bachelor of Medicine and Bachelor of Surgery Final Year-Part I program; the examination of focus included 40 questions (divided between two papers) from the community medicine subject syllabus. Each paper had three sections with different weightage of marks for each section: section one had two long essay-type questions worth 15 marks each, section two had 8 short essay-type questions worth 5 marks each, and section three had 10 short-answer questions worth 3 marks each. The same questions were administered as prompts to ChatGPT 3.5 and the responses were recorded. Apart from scoring ChatGPT responses, two independent evaluators explored the responses to each question to further analyze their quality with regard to three subdomains: relevancy, coherence, and completeness. Each question was scored in these subdomains on a Likert scale of 1-5. The average of the two evaluators was taken as the subdomain score of the question. The proportion of questions with a score of at least 50% of the maximum score (5) in each subdomain was calculated. RESULTS: ChatGPT 3.5 scored 72.3% on paper 1 and 61% on paper 2. The mean score of the 94 students was 43% on paper 1 and 45% on paper 2. The responses of ChatGPT 3.5 were also rated to be satisfactorily relevant, coherent, and complete for most of the questions (>80%). CONCLUSIONS: ChatGPT 3.5 appears to have substantial and sufficient knowledge to understand and answer the Indian medical undergraduate examination in the subject of community medicine. ChatGPT may be introduced to students to enable the self-directed learning of community medicine in pilot mode. However, faculty oversight will be required as ChatGPT is still in the initial stages of development, and thus its potential and reliability of medical content from the Indian context need to be further explored comprehensively.
Gandhi AP; Joesph FK; Rajagopal V; Aparnavi P; Katkuri S; Dayama S; Satapathy P; Khatib MN; Gaidhane S; Zahiruddin QS; Behera A
21
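The subdomain scoring described in the record above (two independent raters, 1-5 Likert scores per subdomain, rater averaging, and a >50%-of-maximum threshold) is straightforward to reproduce. The following Python sketch uses entirely hypothetical ratings, since the study's raw scores are not part of this record.

# Minimal sketch of the two-rater Likert subdomain scoring, with
# hypothetical ratings; each tuple is (evaluator 1, evaluator 2) for one
# question, the two scores are averaged, and the proportion of questions
# scoring more than 50% of the maximum (i.e., >2.5 on a 1-5 scale) is
# reported per subdomain.
MAX_SCORE = 5
ratings = {
    "relevancy":    [(5, 4), (4, 4), (3, 2), (5, 5)],
    "coherence":    [(5, 5), (4, 3), (2, 2), (4, 4)],
    "completeness": [(4, 4), (3, 3), (5, 4), (2, 3)],
}
for subdomain, pairs in ratings.items():
    means = [(a + b) / 2 for a, b in pairs]
    above = sum(m > MAX_SCORE / 2 for m in means) / len(means)
    print(f"{subdomain}: proportion above 50% of max = {above:.0%}")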
39712564
Comparative evaluation of artificial intelligence systems' accuracy in providing medical drug dosages: A methodological study.
2,024
World journal of methodology
BACKGROUND: Medication errors, especially in dosage calculation, pose risks in healthcare. Artificial intelligence (AI) systems like ChatGPT and Google Bard may help reduce errors, but their accuracy in providing medication information remains to be evaluated. AIM: To evaluate the accuracy of AI systems (ChatGPT 3.5, ChatGPT 4, Google Bard) in providing drug dosage information per Harrison's Principles of Internal Medicine. METHODS: A set of natural language queries mimicking real-world medical dosage inquiries was presented to the AI systems. Responses were analyzed using a 3-point Likert scale. The analysis, conducted with Python and its libraries, focused on basic statistics, overall system accuracy, and disease-specific and organ system accuracies. RESULTS: ChatGPT 4 outperformed the other systems, showing the highest rate of correct responses (83.77%) and the best overall weighted accuracy (0.6775). Disease-specific accuracy varied notably across systems, with some diseases being accurately recognized, while others demonstrated significant discrepancies. Organ system accuracy also showed variable results, underscoring system-specific strengths and weaknesses. CONCLUSION: ChatGPT 4 demonstrates superior reliability in medical dosage information, yet variations across diseases emphasize the need for ongoing improvements. These results highlight AI's potential in aiding healthcare professionals, urging continuous development for dependable accuracy in critical medical situations.
Ramasubramanian S; Balaji S; Kannan T; Jeyaraman N; Sharma S; Migliorini F; Balasubramaniam S; Jeyaraman M
10
39257533
Unlocking the potential of advanced large language models in medication review and reconciliation: A proof-of-concept investigation.
2,024
Exploratory research in clinical and social pharmacy
BACKGROUND: Medication review and reconciliation is essential for optimizing drug therapy and minimizing medication errors. Large language models (LLMs) have recently been shown to possess numerous potential applications in the healthcare field due to their capabilities in deductive, abductive, and logical reasoning. The present study assessed the abilities of LLMs in medication review and medication reconciliation processes. METHODS: Four LLMs were prompted with appropriate queries related to dosing regimen errors, drug-drug interactions, therapeutic drug monitoring, and genomics-based decision-making. The veracity of the LLM outputs was verified against validated sources using pre-validated criteria (accuracy, relevancy, risk management, hallucination mitigation, and citations and guidelines). The impacts of the erroneous responses on patient safety were categorized as either major or minor. RESULTS: In the assessment of four LLMs regarding dosing regimen errors, drug-drug interactions, and suggestions for dosing regimen adjustments based on therapeutic drug monitoring and genomics-based individualization of drug therapy, responses were generally consistent across prompts with no clear pattern in response quality among the LLMs. For identification of dosage regimen errors, ChatGPT performed well overall, except for the query related to simvastatin. In terms of potential drug-drug interactions, all LLMs recognized interactions with warfarin but missed the interaction between metoprolol and verapamil. Regarding dosage modifications based on therapeutic drug monitoring, Claude-Instant provided appropriate suggestions for two scenarios and nearly appropriate suggestions for the other two. Similarly, for genomics-based decision-making, Claude-Instant offered satisfactory responses for four scenarios, followed by Gemini for three. Notably, Gemini stood out by providing references to guidelines or citations even without prompting, demonstrating a commitment to accuracy and reliability in its responses. Minor impacts were noted in identifying appropriate dosing regimens and therapeutic drug monitoring, while major impacts were found in identifying drug interactions and making pharmacogenomic-based therapeutic decisions. CONCLUSION: Advanced LLMs hold significant promise in revolutionizing the medication review and reconciliation process in healthcare. Diverse impacts on patient safety were observed. Integrating and validating LLMs within electronic health records and prescription systems is essential to harness their full potential and enhance patient safety and care quality.
Sridharan K; Sivaramakrishnan G
10
38287940
Evaluating machine learning-enabled and multimodal data-driven exercise prescriptions for mental health: a randomized controlled trial protocol.
2,024
Frontiers in psychiatry
BACKGROUND: Mental illnesses represent a significant global health challenge, affecting millions with far-reaching social and economic impacts. Traditional exercise prescriptions for mental health often adopt a one-size-fits-all approach, which overlooks individual variations in mental and physical health. Recent advancements in artificial intelligence (AI) offer an opportunity to tailor these interventions more effectively. OBJECTIVE: This study aims to develop and evaluate a multimodal data-driven AI system for personalized exercise prescriptions, targeting individuals with mental illnesses. By leveraging AI, the study seeks to overcome the limitations of conventional exercise regimens and improve adherence and mental health outcomes. METHODS: The study is conducted in two phases. Initially, 1,000 participants will be recruited for AI model training and testing, with 800 forming the training set, augmented by 9,200 simulated samples generated by ChatGPT, and 200 as the testing set. Data annotation will be performed by experienced physicians from the Department of Mental Health at Guangdong Second Provincial General Hospital. Subsequently, a randomized controlled trial (RCT) with 40 participants will be conducted to compare the AI-driven exercise prescriptions against standard care. Assessments will be scheduled at 6, 12, and 18 months to evaluate cognitive, physical, and psychological outcomes. EXPECTED OUTCOMES: The AI-driven system is expected to demonstrate greater effectiveness in improving mental health outcomes compared to standard exercise prescriptions. Personalized exercise regimens, informed by comprehensive data analysis, are anticipated to enhance participant adherence and overall mental well-being. These outcomes could signify a paradigm shift in exercise prescription for mental health, paving the way for more personalized and effective treatment modalities. REGISTRATION AND ETHICAL APPROVAL: This study was approved by the Human Experimental Ethics Inspection of Guangzhou Sport University, and its registration is under review by ChiCTR.
Tan M; Xiao Y; Jing F; Xie Y; Lu S; Xiang M; Ren H
0-1
37629452
Comparing Meta-Analyses with ChatGPT in the Evaluation of the Effectiveness and Tolerance of Systemic Therapies in Moderate-to-Severe Plaque Psoriasis.
2,023
Journal of clinical medicine
BACKGROUND: Meta-analyses (MAs) and network meta-analyses (NMAs) are high-quality studies for assessing drug efficacy, but they are time-consuming and may be affected by biases. The capacity of artificial intelligence to aggregate huge amounts of information is emerging as particularly interesting for processing the volume of information needed to generate MAs. In this study, we analyzed whether the chatbot ChatGPT is able to summarize information in a useful fashion for providers and patients in a way that matches up with the results of MAs/NMAs. METHODS: We included 16 studies (13 NMAs and 3 MAs) that evaluate biologics (n = 6) and both biologic and systemic treatment (n = 10) for moderate-to-severe psoriasis, published between January 2021 and May 2023. RESULTS: The conclusions of the MAs/NMAs were compared to ChatGPT's answers to queries about the molecules evaluated in the selected MAs/NMAs. Regarding drug safety, agreement between ChatGPT's answers and the results of the MAs/NMAs was essentially random. Regarding efficacy, ChatGPT reached the same conclusion as 5 out of the 16 studies (four out of four studies when three molecules were compared), gave acceptable answers in 7 out of 16 studies, and was inconclusive in 4 out of 16 studies. CONCLUSIONS: ChatGPT can generate conclusions that are similar to MAs when the efficacy of fewer drugs is compared but is still unable to summarize information in a way that matches up to the results of MAs/NMAs when more than three molecules are compared.
Lam Hoai XL; Simonart T
0-1
39184635
The Role of Artificial Intelligence in the Primary Prevention of Common Musculoskeletal Diseases.
2,024
Cureus
BACKGROUND: Musculoskeletal disorders (MSDs) are a leading cause of disability worldwide, with a growing burden across all demographics. With advancements in technology, conversational artificial intelligence (AI) platforms such as ChatGPT (OpenAI, San Francisco, CA) have become instrumental in disseminating health information. This study evaluated the effectiveness of ChatGPT versions 3.5 and 4 in delivering primary prevention information for common MSDs, emphasizing that the study is focused on prevention and not on diagnosis. METHODS: This mixed-methods study employed the CLEAR tool to assess the quality of responses from ChatGPT versions in terms of completeness, lack of false information, evidence support, appropriateness, and relevance. Responses were evaluated independently by two expert raters in a blinded manner. Statistical analyses included Wilcoxon signed-rank tests and paired samples t-tests to compare the performance across versions. RESULTS: ChatGPT-3.5 and ChatGPT-4 effectively provided primary prevention information, with overall performance ranging from satisfactory to excellent. Responses for low back pain, fractures, knee osteoarthritis, neck pain, and gout received excellent scores from both versions. Additionally, ChatGPT-4 was better than ChatGPT-3.5 in terms of completeness (p = 0.015), appropriateness (p = 0.007), and relevance (p = 0.036), and ChatGPT-4 performed better across most medical conditions (p = 0.010). CONCLUSIONS: ChatGPT versions 3.5 and 4 are effective tools for disseminating primary prevention information for common MSDs, with ChatGPT-4 showing superior performance. This study underscores the potential of AI in enhancing public health strategies through reliable and accessible health communication. Advanced models such as ChatGPT-4 can effectively contribute to the primary prevention of MSDs by delivering high-quality health information, highlighting the role of AIs in addressing the global burden of chronic diseases. It is important to note that these AI tools are intended for preventive education purposes only and not for diagnostic use. Continuous improvements are necessary to fully harness the potential of AI in preventive medicine. Future studies should explore other AI platforms, languages, and secondary and tertiary prevention measures to maximize the utility of AIs in global health contexts.
Yilmaz Muluk S; Olcucu N
32
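The version comparison in the record above rests on paired tests over per-condition quality scores. A minimal SciPy sketch, assuming hypothetical score vectors (the study's data are not reproduced in this record):

from scipy.stats import wilcoxon, ttest_rel

# Hypothetical paired CLEAR totals, one entry per medical condition,
# scored once for each ChatGPT version on the same conditions.
gpt35 = [20, 22, 19, 24, 21, 23, 18, 22]
gpt4  = [23, 24, 21, 25, 22, 24, 20, 24]

w_stat, w_p = wilcoxon(gpt35, gpt4)   # nonparametric paired test
t_stat, t_p = ttest_rel(gpt35, gpt4)  # paired-samples t test
print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.4f}")
print(f"Paired t: t={t_stat:.2f}, p={t_p:.4f}")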
36909565
Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model.
2,023
Research square
BACKGROUND: Natural language processing models such as ChatGPT can generate text-based content and are poised to become a major information source in medicine and beyond. The accuracy and completeness of ChatGPT for medical queries is not known. METHODS: Thirty-three physicians across 17 specialties generated 284 medical questions that they subjectively classified as easy, medium, or hard with either binary (yes/no) or descriptive answers. The physicians then graded ChatGPT-generated answers to these questions for accuracy (6-point Likert scale; range 1 - completely incorrect to 6 - completely correct) and completeness (3-point Likert scale; range 1 - incomplete to 3 - complete plus additional context). Scores were summarized with descriptive statistics and compared using Mann-Whitney U or Kruskal-Wallis testing. RESULTS: Across all questions (n=284), median accuracy score was 5.5 (between almost completely and completely correct) with mean score of 4.8 (between mostly and almost completely correct). Median completeness score was 3 (complete and comprehensive) with mean score of 2.5. For questions rated easy, medium, and hard, median accuracy scores were 6, 5.5, and 5 (mean 5.0, 4.7, and 4.6; p=0.05). Accuracy scores for binary and descriptive questions were similar (median 6 vs. 5; mean 4.9 vs. 4.7; p=0.07). Of 36 questions with scores of 1-2, 34 were re-queried/re-graded 8-17 days later with substantial improvement (median 2 vs. 4; p<0.01). CONCLUSIONS: ChatGPT generated largely accurate information to diverse medical queries as judged by academic physician specialists although with important limitations. Further research and model development are needed to correct inaccuracies and for validation.
Johnson D; Goodman R; Patrinely J; Stone C; Zimmerman E; Donald R; Chang S; Berkowitz S; Finn A; Jahangir E; Scoville E; Reese T; Friedman D; Bastarache J; van der Heijden Y; Wright J; Carter N; Alexander M; Choe J; Chastain C; Zic J; Horst S; Turker I; Agarwal R; Osmundson E; Idrees K; Kieman C; Padmanabhan C; Bailey C; Schlegel C; Chambless L; Gibson M; Osterman T; Wheless L
0-1
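The comparisons in the record above (accuracy across three difficulty levels; binary versus descriptive questions) map onto standard nonparametric tests. A minimal SciPy sketch with hypothetical 6-point accuracy scores:

from scipy.stats import kruskal, mannwhitneyu

# Hypothetical 6-point Likert accuracy scores grouped by difficulty.
easy, medium, hard = [6, 6, 5, 6, 5, 6], [5, 6, 5, 4, 6, 5], [5, 4, 5, 6, 4, 5]
h, p = kruskal(easy, medium, hard)  # three or more independent groups
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.3f}")

# Hypothetical scores for binary vs. descriptive questions.
binary, descriptive = [6, 5, 6, 6, 5, 6], [5, 5, 6, 4, 5, 5]
u, p = mannwhitneyu(binary, descriptive, alternative="two-sided")
print(f"Mann-Whitney: U={u:.1f}, p={p:.3f}")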
39156049
Evaluating cognitive performance: Traditional methods vs. ChatGPT.
2,024
Digital health
BACKGROUND: NLP models like ChatGPT promise to revolutionize text-based content delivery, particularly in medicine. Yet, doubts remain about ChatGPT's ability to reliably support evaluations of cognitive performance, warranting further investigation into its accuracy and comprehensiveness in this area. METHOD: A cohort of 60 cognitively normal individuals and 30 stroke survivors underwent a comprehensive evaluation, covering memory, numerical processing, verbal fluency, and abstract thinking. Healthcare professionals and NLP models GPT-3.5 and GPT-4 conducted evaluations following established standards. Scores were compared, and efforts were made to refine scoring protocols and interaction methods to enhance ChatGPT's potential in these evaluations. RESULT: Within the cohort of healthy participants, the utilization of GPT-3.5 revealed significant disparities in memory evaluation compared to both physician-led assessments and those conducted utilizing GPT-4 (P < 0.001). Furthermore, within the domain of memory evaluation, GPT-3.5 exhibited discrepancies in 8 out of 21 specific measures when compared to assessments conducted by physicians (P < 0.05). Additionally, GPT-3.5 demonstrated statistically significant deviations from physician assessments in speech evaluation (P = 0.009). Among participants with a history of stroke, GPT-3.5 exhibited differences solely in verbal assessment compared to physician-led evaluations (P = 0.002). Notably, through the implementation of optimized scoring methodologies and refinement of interaction protocols, partial mitigation of these disparities was achieved. CONCLUSION: ChatGPT can produce evaluation outcomes comparable to traditional methods. Despite differences from physician evaluations, refinement of scoring algorithms and interaction protocols has improved alignment. ChatGPT performs well even in populations with specific conditions like stroke, suggesting its versatility. GPT-4 yields results closer to physician ratings, indicating potential for further enhancement. These findings highlight ChatGPT's importance as a supplementary tool, offering new avenues for information gathering in medical fields and guiding its ongoing development and application.
Fei X; Tang Y; Zhang J; Zhou Z; Yamamoto I; Zhang Y
0-1
39305476
Current application of ChatGPT in undergraduate nuclear medicine education: Taking Chongqing Medical University as an example.
2,025
Medical teacher
BACKGROUND: Nuclear Medicine (NM), as an inherently interdisciplinary field, integrates diverse scientific principles and advanced imaging techniques. The advent of ChatGPT, a large language model, opens new avenues for medical educational innovation. With its advanced natural language processing abilities and complex algorithms, ChatGPT holds the potential to substantially enrich medical education, particularly in NM. OBJECTIVE: To investigate the current application of ChatGPT in undergraduate Nuclear Medicine Education (NME). METHODS: Employing a mixed-methods sequential explanatory design, the research investigates the current status of NME, the use of ChatGPT and the attitude towards ChatGPT among teachers and students in the Second Clinical College of Chongqing Medical University. RESULTS: The investigation yields several salient findings: (1) Students and educators in NM face numerous challenges in the learning process; (2) ChatGPT is found to possess significant applicability and potential benefits in NME; (3) There is a pronounced inclination among respondents to adopt ChatGPT, with a keen interest in its diverse applications within the educational sphere. CONCLUSION: ChatGPT has been utilized to address the difficulties faced by undergraduates at Chongqing Medical University in NME, and has been applied in various aspects to assist learning. The findings of this survey may offer some insights into how ChatGPT can be integrated into practical medical education.
Deng A; Chen W; Dai J; Jiang L; Chen Y; Chen Y; Jiang J; Rao M
0-1
40139476
Can a Large Language Model Interpret Data in the Electronic Health Record to Infer Minimum Clinically Important Difference Achievement of Knee Osteoarthritis Outcome Score-Joint Replacement Score Following Total Knee Arthroplasty?
2,025
The Journal of arthroplasty
BACKGROUND: Obtaining total knee arthroplasty patient-reported outcomes for quality assessment is costly and difficult. We asked whether a large language model (LLM) could interpret electronic health record notes to differentiate patients attaining a 1-year minimum clinically important difference (MCID) for the Knee Osteoarthritis Outcome Score-Joint Replacement (KOOS-JR) from those who did not. We also investigated whether sufficient information to infer MCID achievement exists in the chart by having a blinded orthopaedic surgeon make the same determination. METHODS: In this retrospective case-control study, we selected 40 total knee arthroplasty patients who achieved 1-year KOOS-JR MCID and 40 who did not. Orthopaedic, emergency medicine, and primary care notes from zero to six months preoperatively and nine to 15 months postoperatively were deidentified. ChatGPT 3.5 (ChatGPT) interpreted these notes to determine whether the patient improved after surgery. A blinded orthopaedic surgeon classified these patients using all chart information. The sensitivity, specificity, and accuracy of ChatGPT and the surgeon's responses were calculated. RESULTS: ChatGPT classified 78 of 80 cases with 97% sensitivity, but only 33% specificity. The surgeon's assessment had 90% sensitivity and 63% specificity. Given the equal distribution of patients meeting or not meeting MCID, ChatGPT's accuracy was 65%. The surgeon's was 76%. CONCLUSIONS: ChatGPT's assessment of KOOS-JR MCID attainment had 97% sensitivity, but only 33% specificity. False positives were commonly due to the LLM not having access to, or not properly interpreting, signs of problems in the chart. This was an initial evaluation of the current ability of a general-purpose LLM to evaluate patient outcomes based on information in chart notes. An orthopaedic surgeon's assessment of the full chart suggests an opportunity to improve on this baseline performance, possibly enabling quality monitoring and identification of best practices across a large health care system. Additional work is needed to optimize model performance and confirm the utility of this approach.
Zalikha AK; Hong TS; Small EA; Constant M; Harris AHS; Giori NJ
32
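The sensitivity, specificity, and accuracy figures in the record above follow directly from a 2x2 confusion matrix. A minimal Python sketch, with hypothetical cell counts chosen to be roughly consistent with the reported 97%/33%/65% (the paper's exact counts are not given in this record):

# Hypothetical counts: 40 patients achieved the KOOS-JR MCID, 40 did not.
tp, fn = 39, 1    # MCID achievers judged improved / not improved
tn, fp = 13, 27   # non-achievers judged not improved / improved

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"accuracy={accuracy:.0%}")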
39635018
Application of ChatGPT-4 to oculomics: a cost-effective osteoporosis risk assessment to enhance management as a proof-of-principles model in 3PM.
2,024
The EPMA journal
BACKGROUND: Oculomics is an emerging medical field that focuses on the study of the eye to detect and understand systemic diseases. ChatGPT-4 is a highly advanced AI model with multimodal capabilities, allowing it to process text and statistical data. Osteoporosis is a chronic condition presenting asymptomatically but leading to fractures if untreated. Current diagnostic methods like dual X-ray absorptiometry (DXA) are costly and involve radiation exposure. This study aims to develop a cost-effective osteoporosis risk prediction tool using ophthalmological data and ChatGPT-4 based on oculomics, aligning with predictive, preventive, and personalized medicine (3PM) principles. WORKING HYPOTHESIS AND METHODS: We hypothesize that leveraging ophthalmological data (oculomics) combined with AI-driven regression models developed by ChatGPT-4 can significantly improve the predictive accuracy for osteoporosis risk. This integration will facilitate earlier detection, enable more effective preventive strategies, and support personalized treatment plans tailored to individual patients. We utilized DXA and ophthalmological data from the Korea National Health and Nutrition Examination Survey to develop and validate osteopenia and osteoporosis prediction models. Ophthalmological and demographic data were integrated into logistic regression analyses, facilitated by ChatGPT-4, to create prediction formulas. These models were then converted into calculator software through automated coding by ChatGPT-4. RESULTS: ChatGPT-4 automatically developed prediction models whose key predictors of osteoporosis and osteopenia included age, gender, weight, and specific ophthalmological conditions such as cataracts and early age-related macular degeneration, and it successfully implemented a risk calculator tool. The oculomics-based models outperformed traditional methods, with area under the curve of the receiver operating characteristic values of 0.785 for osteopenia and 0.866 for osteoporosis in the validation set. The calculator demonstrated high sensitivity and specificity, providing a reliable tool for early osteoporosis screening. CONCLUSIONS AND EXPERT RECOMMENDATIONS IN THE FRAMEWORK OF 3PM: This study illustrates the value of integrating ophthalmological data into multi-level diagnostics for osteoporosis, significantly improving the accuracy of health risk assessment and the identification of at-risk individuals. Aligned with the principles of 3PM, this approach fosters earlier detection and enables the development of individualized patient profiles, facilitating personalized and targeted treatment strategies. This study also highlights the potential of AI, specifically ChatGPT-4, in developing accessible, cost-effective, and radiation-free screening tools for advancing 3PM in clinical practice. Our findings emphasize the importance of a holistic approach, incorporating comprehensive health indices and interdisciplinary collaboration, to deliver personalized management plans. Preventive strategies should focus on lifestyle modifications and targeted interventions to enhance bone health, thereby preventing the progression of osteoporosis and contributing to overall patient well-being. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13167-024-00378-0.
Choi JY; Han E; Yoo TK
0-1
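The modelling pipeline described in the record above (logistic regression on demographic and ophthalmological predictors, evaluated by ROC AUC) can be sketched with scikit-learn. Everything below is synthetic; the variable names and effect sizes are illustrative assumptions, not the study's formulas.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
age      = rng.normal(60, 10, n)
sex      = rng.integers(0, 2, n)   # 0 = male, 1 = female
weight   = rng.normal(65, 12, n)   # kg
cataract = rng.integers(0, 2, n)   # hypothetical eye finding (0/1)
X = np.column_stack([age, sex, weight, cataract])

# Synthetic outcome loosely driven by age and cataract status.
logit = 0.08 * (age - 60) + 0.9 * cataract - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC = {auc:.3f}")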
38976865
ChatGPT With GPT-4 Outperforms Emergency Department Physicians in Diagnostic Accuracy: Retrospective Analysis.
2,024
Journal of medical Internet research
BACKGROUND: OpenAI's ChatGPT is a pioneering artificial intelligence (AI) in the field of natural language processing, and it holds significant potential in medicine for providing treatment advice. Additionally, recent studies have demonstrated promising results using ChatGPT for emergency medicine triage. However, its diagnostic accuracy in the emergency department (ED) has not yet been evaluated. OBJECTIVE: This study compares the diagnostic accuracy of ChatGPT with GPT-3.5 and GPT-4 and primary treating resident physicians in an ED setting. METHODS: Among 100 adults admitted to our ED in January 2023 with internal medicine issues, the diagnostic accuracy was assessed by comparing the diagnoses made by ED resident physicians and those made by ChatGPT with GPT-3.5 or GPT-4 against the final hospital discharge diagnosis, using a point system for grading accuracy. RESULTS: The study enrolled 100 patients with a median age of 72 (IQR 58.5-82.0) years who were admitted to our internal medicine ED primarily for cardiovascular, endocrine, gastrointestinal, or infectious diseases. GPT-4 outperformed both GPT-3.5 (P<.001) and ED resident physicians (P=.01) in diagnostic accuracy for internal medicine emergencies. Furthermore, across various disease subgroups, GPT-4 consistently outperformed GPT-3.5 and resident physicians. It demonstrated significant superiority in cardiovascular (GPT-4 vs ED physicians: P=.03) and endocrine or gastrointestinal diseases (GPT-4 vs GPT-3.5: P=.01). However, in other categories, the differences were not statistically significant. CONCLUSIONS: In this study, which compared the diagnostic accuracy of GPT-3.5, GPT-4, and ED resident physicians against a discharge diagnosis gold standard, GPT-4 outperformed both the resident physicians and its predecessor, GPT-3.5. Despite the retrospective design of the study and its limited sample size, the results underscore the potential of AI as a supportive diagnostic tool in ED settings.
Hoppe JM; Auer MK; Struven A; Massberg S; Stremmel C
10
39064053
Assessing the Accuracy of Artificial Intelligence Models in Scoliosis Classification and Suggested Therapeutic Approaches.
2,024
Journal of clinical medicine
Background: Open-source artificial intelligence models (OSAIMs) are increasingly being applied in various fields, including IT and medicine, offering promising solutions for diagnostic and therapeutic interventions. In response to the growing interest in AI for clinical diagnostics, we evaluated several OSAIMs-such as ChatGPT 4, Microsoft Copilot, Gemini, PopAi, You Chat, Claude, and the specialized PMC-LLaMA 13B-assessing their abilities to classify scoliosis severity and recommend treatments based on radiological descriptions from AP radiographs. Methods: Our study employed a two-stage methodology, where descriptions of single-curve scoliosis were analyzed by AI models following their evaluation by two independent neurosurgeons. Statistical analysis involved the Shapiro-Wilk test for normality, with non-normal distributions described using medians and interquartile ranges. Inter-rater reliability was assessed using Fleiss' kappa, and performance metrics, like accuracy, sensitivity, specificity, and F1 scores, were used to evaluate the AI systems' classification accuracy. Results: The analysis indicated that although some AI systems, like ChatGPT 4, Copilot, and PopAi, accurately reflected the recommended Cobb angle ranges for disease severity and treatment, others, such as Gemini and Claude, required further calibration. Particularly, PMC-LLaMA 13B expanded the classification range for moderate scoliosis, potentially influencing clinical decisions and delaying interventions. Conclusions: These findings highlight the need for the continuous refinement of AI models to enhance their clinical applicability.
Fabijan A; Zawadzka-Fabijan A; Fabijan R; Zakrzewski K; Nowoslawska E; Polis B
0-1
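Fleiss' kappa, used in the record above for inter-rater reliability, is available in statsmodels. A minimal sketch with hypothetical severity labels from three raters:

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = cases, columns = raters; 0 = mild, 1 = moderate, 2 = severe.
labels = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [2, 2, 2],
    [1, 2, 1],
    [0, 0, 0],
    [2, 2, 1],
])
table, _ = aggregate_raters(labels)  # case-by-category count table
print(f"Fleiss' kappa = {fleiss_kappa(table, method='fleiss'):.3f}")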
37987870
"ChatGPT, Can You Help Me Save My Child's Life?" - Diagnostic Accuracy and Supportive Capabilities to Lay Rescuers by ChatGPT in Prehospital Basic Life Support and Paediatric Advanced Life Support Cases - An In-silico Analysis.
2,023
Journal of medical systems
BACKGROUND: Paediatric emergencies are challenging for healthcare workers, first aiders, and parents waiting for emergency medical services to arrive. With the expected rise of virtual assistants, people will likely seek help from such digital AI tools, especially in regions lacking emergency medical services. Large Language Models like ChatGPT proved effective in providing health-related information and are competent in medical exams but are questioned regarding patient safety. Currently, there is no information on ChatGPT's performance in supporting parents in paediatric emergencies requiring help from emergency medical services. This study aimed to test 20 paediatric and two basic life support case vignettes for ChatGPT and GPT-4 performance and safety in children. METHODS: We provided the cases three times each to two models, ChatGPT and GPT-4, and assessed the diagnostic accuracy, emergency call advice, and the validity of advice given to parents. RESULTS: Both models recognized the emergency in the cases, except for septic shock and pulmonary embolism, and identified the correct diagnosis in 94%. However, ChatGPT/GPT-4 reliably advised to call emergency services only in 12 of 22 cases (54%), gave correct first aid instructions in 9 cases (45%) and incorrectly advised advanced life support techniques to parents in 3 of 22 cases (13.6%). CONCLUSION: Considering these results of the recent ChatGPT versions, the validity, reliability and thus safety of ChatGPT/GPT-4 as an emergency support tool is questionable. However, whether humans would perform better in the same situation is uncertain. Moreover, other studies have shown that human emergency call operators are also inaccurate, partly with worse performance than ChatGPT/GPT-4 in our study. However, one of the main limitations of the study is that we used prototypical cases, and the management may differ from urban to rural areas and between different countries, indicating the need for further evaluation of the context sensitivity and adaptability of the model. Nevertheless, ChatGPT and the new versions under development may be promising tools for assisting lay first responders, operators, and professionals in diagnosing a paediatric emergency. TRIAL REGISTRATION: Not applicable.
Bushuven S; Bentele M; Bentele S; Gerber B; Bansbach J; Ganter J; Trifunovic-Koenig M; Ranisch R
10
39445873
Comparing Provider and ChatGPT Responses to Breast Reconstruction Patient Questions in the Electronic Health Record.
2,024
Annals of plastic surgery
BACKGROUND: Patient-directed Electronic Health Record (EHR) messaging is used as an adjunct to enhance patient-physician interactions but further burdens the physician. There is a need for clear electronic patient communication in all aspects of medicine, including plastic surgery. We can potentially utilize innovative communication tools like ChatGPT. This study assesses ChatGPT's effectiveness in answering breast reconstruction queries, comparing its accuracy, empathy, and readability with healthcare providers' responses. METHODS: Ten deidentified questions regarding breast reconstruction were extracted from electronic messages. They were presented to ChatGPT3, ChatGPT4, plastic surgeons, and advanced practice providers for response. ChatGPT3 and ChatGPT4 were also prompted to give brief responses. Using 1-5 Likert scoring, accuracy and empathy were graded by 2 plastic surgeons and medical students, respectively. Readability was measured using Flesch Reading Ease. Grades were compared using 2-tailed t tests. RESULTS: Combined provider responses had better Flesch Reading Ease scores compared to all combined chatbot responses (53.3 +/- 13.3 vs 36.0 +/- 11.6, P < 0.001) and combined brief chatbot responses (53.3 +/- 13.3 vs 34.7 +/- 12.8, P < 0.001). Empathy scores were higher in all combined chatbot than in those from combined providers (2.9 +/- 0.8 vs 2.0 +/- 0.9, P < 0.001). There were no statistically significant differences in accuracy between combined providers and all combined chatbot responses (4.3 +/- 0.9 vs 4.5 +/- 0.6, P = 0.170) or combined brief chatbot responses (4.3 +/- 0.9 vs 4.6 +/- 0.6, P = 0.128). CONCLUSIONS: Amid the time constraints and complexities of plastic surgery decision making, our study underscores ChatGPT's potential to enhance patient communication. ChatGPT excels in empathy and accuracy, yet its readability presents limitations that should be addressed.
Soroudi D; Gozali A; Knox JA; Parmeshwar N; Sadjadi R; Wilson JC; Lee SA; Piper ML
32
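The readability metric in the record above, Flesch Reading Ease, is implemented in the third-party textstat package (pip install textstat); higher scores mean easier text. A minimal sketch with invented sample responses:

import textstat

provider_reply = "You may shower two days after surgery. Keep the incision dry."
chatbot_reply = ("Postoperatively, patients are typically permitted to shower "
                 "48 hours following the procedure, provided the incision "
                 "remains adequately protected from moisture exposure.")

# Shorter words and sentences yield higher (easier) scores.
print(textstat.flesch_reading_ease(provider_reply))
print(textstat.flesch_reading_ease(chatbot_reply))
# With per-response score lists, a two-tailed t test (e.g.,
# scipy.stats.ttest_ind) would reproduce the group comparison above.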
39896176
Exploring potential drug-drug interactions in discharge prescriptions: ChatGPT's effectiveness in assessing those interactions.
2,025
Exploratory research in clinical and social pharmacy
BACKGROUND: Potential drug-drug interactions (pDDIs) pose substantial risks in clinical practice, leading to increased morbidity, mortality, and healthcare costs. Tools like the Micromedex drug-drug interaction checker are commonly used to screen for pDDIs, yet emerging AI models, such as ChatGPT, offer the potential for supplementary pDDI prediction. However, the accuracy and reliability of these AI tools in a clinical context remain largely untested. OBJECTIVE: This study evaluates pDDIs in discharge prescriptions for medical ward patients and assesses ChatGPT-4.0's effectiveness in predicting these interactions compared to the Micromedex drug-drug interaction checker. METHOD: A cross-sectional study was conducted over three months with 301 discharged patients. pDDIs were identified using the Micromedex drug-drug interaction checker, detailing each interaction's occurrence, severity, onset, and documentation. ChatGPT-4.0 predictions were then analyzed against Micromedex data. Binary logistic regression analysis was applied to assess the influence of predictor variables on the occurrence of pDDIs. RESULTS: In total, 1551 drugs were prescribed to 301 patients, averaging 5.15 per patient. pDDIs were detected in 60.13% of patients, averaging 3.17 pDDIs per patient. ChatGPT-4.0 accurately identified pDDIs (100% for occurrence) but had limited accuracy for severity (37.3%) and moderate accuracy for onset (65.2%). The most frequent major interaction was between Cefuroxime Axetil and Pantoprazole Sodium. Polypharmacy significantly increased the risk of pDDIs (OR: 3.960, p < 0.001). CONCLUSION: pDDIs are prevalent in internal medicine discharge prescriptions, with polypharmacy heightening the risk. While ChatGPT-4.0 accurately identifies pDDI occurrence, its limitations in predicting severity, onset, and documentation underscore healthcare professionals' need for careful oversight.
Thapa RB; Karki S; Shrestha S
10
39864953
Classifying Unstructured Text in Electronic Health Records for Mental Health Prediction Models: Large Language Model Evaluation Study.
2,025
JMIR medical informatics
BACKGROUND: Prediction models have demonstrated a range of applications across medicine, including using electronic health record (EHR) data to identify hospital readmission and mortality risk. Large language models (LLMs) can transform unstructured EHR text into structured features, which can then be integrated into statistical prediction models, ensuring that the results are both clinically meaningful and interpretable. OBJECTIVE: This study aims to compare the classification decisions made by clinical experts with those generated by a state-of-the-art LLM, using terms extracted from a large EHR data set of individuals with mental health disorders seen in emergency departments (EDs). METHODS: Using a dataset from the EHR systems of more than 50 health care provider organizations in the United States from 2016 to 2021, we extracted all clinical terms that appeared in at least 1000 records of individuals admitted to the ED for a mental health-related problem from a source population of over 6 million ED episodes. Two experienced mental health clinicians (one medically trained psychiatrist and one clinical psychologist) reached consensus on the classification of EHR terms and diagnostic codes into categories. We evaluated an LLM's agreement with clinical judgment across three classification tasks as follows: (1) classify terms into "mental health" or "physical health", (2) classify mental health terms into 1 of 42 prespecified categories, and (3) classify physical health terms into 1 of 19 prespecified broad categories. RESULTS: There was high agreement between the LLM and clinical experts when categorizing 4553 terms as "mental health" or "physical health" (kappa=0.77, 95% CI 0.75-0.80). However, there was still considerable variability in LLM-clinician agreement on the classification of mental health terms (kappa=0.62, 95% CI 0.59-0.66) and physical health terms (kappa=0.69, 95% CI 0.67-0.70). CONCLUSIONS: The LLM displayed high agreement with clinical experts when classifying EHR terms into certain mental health or physical health term categories. However, agreement with clinical experts varied considerably within both sets of mental and physical health term categories. Importantly, the use of LLMs presents an alternative to manual human coding, presenting great potential to create interpretable features for prediction models.
Cardamone NC; Olfson M; Schmutte T; Ungar L; Liu T; Cullen SW; Williams NJ; Marcus SC
10
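The LLM-clinician agreement in the record above is Cohen's kappa over paired labels. A minimal scikit-learn sketch with hypothetical term classifications (the reported confidence intervals would additionally require bootstrapping):

from sklearn.metrics import cohen_kappa_score

clinician = ["mental", "physical", "mental", "mental",
             "physical", "physical", "mental"]
llm       = ["mental", "physical", "mental", "physical",
             "physical", "physical", "mental"]
print(f"kappa = {cohen_kappa_score(clinician, llm):.2f}")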
38470459
Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study.
2,024
JMIR medical education
BACKGROUND: Previous research applying large language models (LLMs) to medicine was focused on text-based information. Recently, multimodal variants of LLMs acquired the capability of recognizing images. OBJECTIVE: We aim to evaluate the image recognition capability of generative pretrained transformer (GPT)-4V, a recent multimodal LLM developed by OpenAI, in the medical field by testing how visual information affects its performance to answer questions in the 117th Japanese National Medical Licensing Examination. METHODS: We focused on 108 questions that had 1 or more images as part of a question and presented GPT-4V with the same questions under two conditions: (1) with both the question text and associated images and (2) with the question text only. We then compared the difference in accuracy between the 2 conditions using the exact McNemar test. RESULTS: Among the 108 questions with images, GPT-4V's accuracy was 68% (73/108) when presented with images and 72% (78/108) when presented without images (P=.36). For the 2 question categories, clinical and general, the accuracies with and those without images were 71% (70/98) versus 78% (76/98; P=.21) and 30% (3/10) versus 20% (2/10; P≥.99), respectively. CONCLUSIONS: The additional information from the images did not significantly improve the performance of GPT-4V in the Japanese National Medical Licensing Examination.
Nakao T; Miki S; Nakamura Y; Kikuchi T; Nomura Y; Hanaoka S; Yoshikawa T; Abe O
21
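The exact McNemar test in the record above compares paired correctness on the same 108 questions. A minimal statsmodels sketch; the marginal totals match the reported 73/108 and 78/108, but the split of the discordant cells is an assumption, so the resulting p value is illustrative only:

from statsmodels.stats.contingency_tables import mcnemar

# Rows: correct WITH images (yes/no); columns: correct WITHOUT images.
table = [[68, 5],   # 68 + 5 = 73 correct with images
         [10, 25]]  # 68 + 10 = 78 correct without images
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"exact McNemar p = {result.pvalue:.3f}")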
40164490
Evaluating ChatGPT for converting clinic letters into patient-friendly language.
2,025
BJGP open
BACKGROUND: Previous research has shown that communication with patients in language they understand leads to greater comprehension of treatment and diagnoses but can be time-consuming for clinicians. AIM: Here we sought to investigate the utility of ChatGPT to translate clinic letters into language patients understood, without loss of clinical information, and to assess what impact this had on patients' understanding of letter content. DESIGN & SETTING: Single-blinded quantitative study using objective and subjective analysis of language complexity. METHOD: Twenty-three clinic letters were provided by consultants across 8 specialties. Letters were inputted into ChatGPT with a prompt intended to improve understanding for patients. Patient representatives were then asked to rate their understanding of the content of letters. RESULTS: Translation of letters by ChatGPT resulted in no loss of clinical information, but did result in a significant increase in understanding and satisfaction, and a decrease in the need to obtain medical help to translate the letter contents, among patient representatives compared with clinician-written originals. CONCLUSION: Overall, we conclude that ChatGPT can be used to translate clinic letters into patient-friendly language without loss of clinical content, and that these letters are preferred by patients.
Cork S; Hopcroft K
10
39255030
Prompt Engineering Paradigms for Medical Applications: Scoping Review.
2,024
Journal of medical Internet research
BACKGROUND: Prompt engineering, focusing on crafting effective prompts to large language models (LLMs), has garnered attention for its capability to harness the potential of LLMs. This is even more crucial in the medical domain due to its specialized terminology and technical language. Clinical natural language processing applications must navigate complex language and ensure privacy compliance. Prompt engineering offers a novel approach by designing tailored prompts to guide models in exploiting clinically relevant information from complex medical texts. Despite its promise, the efficacy of prompt engineering in the medical domain remains to be fully explored. OBJECTIVE: The aim of the study is to review research efforts and technical approaches in prompt engineering for medical applications as well as provide an overview of opportunities and challenges for clinical practice. METHODS: Databases indexing the fields of medicine, computer science, and medical informatics were queried in order to identify relevant published papers. Since prompt engineering is an emerging field, preprint databases were also considered. Multiple data items were extracted, such as the prompt paradigm, the involved LLMs, the languages of the study, the domain of the topic, the baselines, and several learning, design, and architecture strategies specific to prompt engineering. We include studies that apply prompt engineering-based methods to the medical domain, published between 2022 and 2024, and covering multiple prompt paradigms such as prompt learning (PL), prompt tuning (PT), and prompt design (PD). RESULTS: We included 114 recent prompt engineering studies. Among the 3 prompt paradigms, we have observed that PD is the most prevalent (78 papers). In 12 papers, PD, PL, and PT terms were used interchangeably. While ChatGPT is the most commonly used LLM, we have identified 7 studies using this LLM on a sensitive clinical data set. Chain-of-thought, present in 17 studies, emerges as the most frequent PD technique. While PL and PT papers typically provide a baseline for evaluating prompt-based approaches, 61% (48/78) of the PD studies do not report any nonprompt-related baseline. Finally, we individually examine each of the key prompt engineering-specific items reported across papers and find that many studies neglect to explicitly mention them, posing a challenge for advancing prompt engineering research. CONCLUSIONS: In addition to reporting on trends and the scientific landscape of prompt engineering, we provide reporting guidelines for future studies to help advance research in the medical field. We also disclose tables and figures summarizing medical prompt engineering papers available and hope that future contributions will leverage these existing works to better advance the field.
Zaghir J; Naguib M; Bjelogrlic M; Neveol A; Tannier X; Lovis C
10
38239905
ChatGPT is not ready yet for use in providing mental health assessment and interventions.
2,023
Frontiers in psychiatry
BACKGROUND: Psychiatry is a specialized field of medicine that focuses on the diagnosis, treatment, and prevention of mental health disorders. With advancements in technology and the rise of artificial intelligence (AI), there has been a growing interest in exploring the potential of AI language model systems, such as Chat Generative Pre-training Transformer (ChatGPT), to assist in the field of psychiatry. OBJECTIVE: Our study aimed to evaluate the effectiveness, reliability, and safety of ChatGPT in assisting patients with mental health problems, and to assess its potential as a collaborative tool for mental health professionals through a simulated interaction with three distinct imaginary patients. METHODS: Three imaginary patient scenarios (cases A, B, and C) were created, representing different mental health problems. All three patients present with, and seek to eliminate, the same chief complaint (i.e., difficulty falling asleep and waking up frequently during the night in the last 2 weeks). ChatGPT was engaged as a virtual psychiatric assistant to provide responses and treatment recommendations. RESULTS: In case A, the recommendations were relatively appropriate (albeit non-specific), and could potentially be beneficial for both users and clinicians. However, as the complexity of the clinical cases increased (cases B and C), the information and recommendations generated by ChatGPT became inappropriate, even dangerous, and the limitations of the program became more glaring. The main strengths of ChatGPT lie in its ability to provide quick responses to user queries and to simulate empathy. One notable limitation is ChatGPT's inability to interact with users to collect further information relevant to the diagnosis and management of a patient's clinical condition. Another serious limitation is ChatGPT's inability to use critical thinking and clinical judgment to guide a patient's management. CONCLUSION: As of July 2023, ChatGPT failed to give appropriate simple medical advice in certain clinical scenarios. This supports the view that the quality of ChatGPT-generated content is still far from being a guide for users and professionals to provide accurate mental health information. It remains, therefore, premature to conclude on the usefulness and safety of ChatGPT in mental health practice.
Dergaa I; Fekih-Romdhane F; Hallit S; Loch AA; Glenn JM; Fessi MS; Ben Aissa M; Souissi N; Guelmami N; Swed S; El Omri A; Bragazzi NL; Ben Saad H
0-1
37040823
Using a Google Web Search Analysis to Assess the Utility of ChatGPT in Total Joint Arthroplasty.
2,023
The Journal of arthroplasty
BACKGROUND: Rapid technological advancements have laid the foundations for the use of artificial intelligence in medicine. The promise of machine learning (ML) lies in its potential ability to improve treatment decision making, predict adverse outcomes, and streamline the management of perioperative healthcare. In an increasingly consumer-focused health care model, unprecedented access to information may extend to patients using ChatGPT to gain insight into medical questions. The main objective of our study was to replicate a patient's internet search in order to assess the appropriateness of ChatGPT, a novel machine learning tool released in 2022 that provides dialogue responses to queries, in comparison to Google Web Search, the most widely used search engine in the United States today, as a resource for patients for online health information. For the 2 different search engines, we compared i) the most frequently asked questions (FAQs) associated with total knee arthroplasty (TKA) and total hip arthroplasty (THA) by question type and topic; ii) the answers to the most frequently asked questions; as well as iii) the FAQs yielding a numerical response. METHODS: A Google web search was performed with the following search terms: "total knee replacement" and "total hip replacement." These terms were individually entered and the first 10 FAQs were extracted along with the source of the associated website for each question. The following statements were inputted into ChatGPT: 1) "Perform a google search with the search term 'total knee replacement' and record the 10 most FAQs related to the search term" as well as 2) "Perform a google search with the search term 'total hip replacement' and record the 10 most FAQs related to the search term." A Google web search was repeated with the same search terms to identify the first 10 FAQs that included a numerical response for both "total knee replacement" and "total hip replacement." These questions were then inputted into ChatGPT and the questions and answers were recorded. RESULTS: There were 5 of 20 (25%) questions that were similar when performing a Google web search and a search of ChatGPT for all search terms. Of the 20 questions asked for the Google Web Search, 13 of 20 were provided by commercial websites. For ChatGPT, 15 of 20 (75%) questions were answered by government websites, with the most frequent one being PubMed. In terms of numerical questions, 11 of 20 (55%) of the most frequently asked questions provided different responses between a Google web search and ChatGPT. CONCLUSION: A comparison of the FAQs by a Google web search with attempted replication by ChatGPT revealed heterogeneous questions and responses for open and discrete questions. ChatGPT should, for now, be regarded only as a potential resource for patients, one that needs further corroboration until its ability to provide credible information is verified and shown to be concordant with the goals of the physician and the patient alike.
Dubin JA; Bains SS; Chen Z; Hameed D; Nace J; Mont MA; Delanois RE
32
39230947
Evaluating the Capabilities of Generative AI Tools in Understanding Medical Papers: Qualitative Study.
2,024
JMIR medical informatics
BACKGROUND: Reading medical papers is a challenging and time-consuming task for doctors, especially when the papers are long and complex. A tool that can help doctors efficiently process and understand medical papers is needed. OBJECTIVE: This study aims to critically assess and compare the comprehension capabilities of large language models (LLMs) in accurately and efficiently understanding medical research papers using the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist, which provides a standardized framework for evaluating key elements of observational studies. METHODS: This is a methodological study that aims to evaluate the comprehension capabilities of new generative artificial intelligence tools on medical papers. A novel benchmark pipeline processed 50 medical research papers from PubMed, comparing the answers of 6 LLMs (GPT-3.5-Turbo, GPT-4-0613, GPT-4-1106, PaLM 2, Claude v1, and Gemini Pro) to the benchmark established by expert medical professors. Fifteen questions, derived from the STROBE checklist, assessed LLMs' understanding of different sections of a research paper. RESULTS: LLMs exhibited varying performance, with GPT-3.5-Turbo achieving the highest percentage of correct answers (n=3916, 66.9%), followed by GPT-4-1106 (n=3837, 65.6%), PaLM 2 (n=3632, 62.1%), Claude v1 (n=2887, 58.3%), Gemini Pro (n=2878, 49.2%), and GPT-4-0613 (n=2580, 44.1%). Statistical analysis revealed statistically significant differences between LLMs (P<.001), with older models showing inconsistent performance compared to newer versions. LLMs showcased distinct performances for each question across different parts of a scholarly paper, with certain models like PaLM 2 and GPT-3.5 showing remarkable versatility and depth in understanding. CONCLUSIONS: This study is the first to evaluate the performance of different LLMs in understanding medical papers using the retrieval augmented generation method. The findings highlight the potential of LLMs to enhance medical research by improving efficiency and facilitating evidence-based decision-making. Further research is needed to address limitations such as the influence of question formats, potential biases, and the rapid evolution of LLM models.
Akyon SH; Akyon FC; Camyar AS; Hizli F; Sari T; Hizli S
10
38148925
The ability of artificial intelligence tools to formulate orthopaedic clinical decisions in comparison to human clinicians: An analysis of ChatGPT 3.5, ChatGPT 4, and Bard.
2,024
Journal of orthopaedics
BACKGROUND: Recent advancements in artificial intelligence (AI) have sparked interest in its integration into clinical medicine and education. This study evaluates the performance of three AI tools compared to human clinicians in addressing complex orthopaedic decisions in real-world clinical cases. QUESTIONS/PURPOSES: To evaluate the ability of commonly used AI tools to formulate orthopaedic clinical decisions in comparison to human clinicians. PATIENTS AND METHODS: The study used OrthoBullets Cases, a publicly available clinical cases collaboration platform where surgeons from around the world choose treatment options based on peer-reviewed standardised treatment polls. The clinical cases cover various orthopaedic categories. Three AI tools, (ChatGPT 3.5, ChatGPT 4, and Bard), were evaluated. Uniform prompts were used to input case information including questions relating to the case, and the AI tools' responses were analysed for alignment with the most popular response, within 10%, and within 20% of the most popular human responses. RESULTS: In total, 8 clinical categories comprising of 97 questions were analysed. ChatGPT 4 demonstrated the highest proportion of most popular responses (proportion of most popular response: ChatGPT 4 68.0%, ChatGPT 3.5 40.2%, Bard 45.4%, P value < 0.001), outperforming other AI tools. AI tools performed poorer in questions that were considered controversial (where disagreement occurred in human responses). Inter-tool agreement, as evaluated using Cohen's kappa coefficient, ranged from 0.201 (ChatGPT 4 vs. Bard) to 0.634 (ChatGPT 3.5 vs. Bard). However, AI tool responses varied widely, reflecting a need for consistency in real-world clinical applications. CONCLUSIONS: While AI tools demonstrated potential use in educational contexts, their integration into clinical decision-making requires caution due to inconsistent responses and deviations from peer consensus. Future research should focus on specialised clinical AI tool development to maximise utility in clinical decision-making. LEVEL OF EVIDENCE: IV.
Agharia S; Szatkowski J; Fraval A; Stevens J; Zhou Y
32
40209205
Large Language Models in Biochemistry Education: Comparative Evaluation of Performance.
2,025
JMIR medical education
BACKGROUND: Recent advancements in artificial intelligence (AI), particularly in large language models (LLMs), have started a new era of innovation across various fields, with medicine at the forefront of this technological revolution. Many studies indicated that at the current level of development, LLMs can pass different board exams. However, the ability to answer specific subject-related questions requires validation. OBJECTIVE: The objective of this study was to conduct a comprehensive analysis comparing the performance of advanced LLM chatbots-Claude (Anthropic), GPT-4 (OpenAI), Gemini (Google), and Copilot (Microsoft)-against the academic results of medical students in the medical biochemistry course. METHODS: We used 200 USMLE (United States Medical Licensing Examination)-style multiple-choice questions (MCQs) selected from the course exam database. They encompassed various complexity levels and were distributed across 23 distinctive topics. The questions with tables and images were not included in the study. The results of 5 successive attempts by Claude 3.5 Sonnet, GPT-4-1106, Gemini 1.5 Flash, and Copilot to answer this questionnaire set were evaluated based on accuracy in August 2024. Statistica 13.5.0.17 (TIBCO Software Inc) was used to analyze the data's basic statistics. Considering the binary nature of the data, the chi-square test was used to compare results among the different chatbots, with a statistical significance level of P<.05. RESULTS: On average, the selected chatbots correctly answered 81.1% (SD 12.8%) of the questions, surpassing the students' performance by 8.3% (P=.02). In this study, Claude showed the best performance in biochemistry MCQs, correctly answering 92.5% (185/200) of questions, followed by GPT-4 (170/200, 85%), Gemini (157/200, 78.5%), and Copilot (128/200, 64%). The chatbots demonstrated the best results in the following 4 topics: eicosanoids (mean 100%, SD 0%), bioenergetics and electron transport chain (mean 96.4%, SD 7.2%), hexose monophosphate pathway (mean 91.7%, SD 16.7%), and ketone bodies (mean 93.8%, SD 12.5%). The Pearson chi-square test indicated a statistically significant association between the answers of all 4 chatbots (P<.001 to P<.04). CONCLUSIONS: Our study suggests that different AI models may have unique strengths in specific medical fields, which could be leveraged for targeted support in biochemistry courses. This performance highlights the potential of AI in medical education and assessment.
Bolgova O; Shypilova I; Mavrych V
21
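The chi-square analysis in the record above can be reproduced from the correct/incorrect counts the abstract reports for each chatbot (out of 200 MCQs). A minimal SciPy sketch:

from scipy.stats import chi2_contingency

#           correct, incorrect (out of 200 MCQs, per the abstract)
counts = [[185, 15],   # Claude 3.5 Sonnet
          [170, 30],   # GPT-4
          [157, 43],   # Gemini 1.5 Flash
          [128, 72]]   # Copilot
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")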
40001164
Quality assurance and validity of AI-generated single best answer questions.
2,025
BMC medical education
BACKGROUND: Recent advancements in generative artificial intelligence (AI) have opened new avenues in educational methodologies, particularly in medical education. This study seeks to assess whether generative AI might be useful in addressing the depletion of assessment question banks, a challenge intensified during the Covid era due to the prevalence of open-book examinations, and to augment the pool of formative assessment opportunities available to students. While many recent publications have sought to ascertain whether AI can achieve a passing standard in existing examinations, this study investigates the potential for AI to generate the exam itself. This research utilized a commercially available AI large language model (LLM), OpenAI GPT-4, to generate 220 single best answer (SBA) questions, adhering to Medical Schools Council Assessment Alliance guidelines and a selection of Learning Outcomes (LOs) of the Scottish Graduate-Entry Medicine (ScotGEM) program. All questions were assessed by an expert panel for accuracy and quality. A total of 50 AI-generated and 50 human-authored questions were used to create two 50-item formative SBA examinations for Year 1 and Year 2 ScotGEM students. Each exam, delivered via the Speedwell eSystem, comprised 25 AI-generated and 25 human-authored questions presented in random order. Students completed the online, closed-book exams on personal devices under exam conditions that reflected summative examinations. The performance of both AI-generated and human-authored questions was evaluated, focusing on facility and discrimination index as key metrics. RESULTS: The screening process revealed that 69% of AI-generated SBAs were fit for inclusion in the examinations with little or no modifications required. Modifications, when necessary, were predominantly due to reasons such as the inclusion of "all of the above" options, usage of American English spellings, and non-alphabetized answer choices. 31% of questions were rejected for inclusion in the examinations, due to factual inaccuracies and non-alignment with students' learning. When included in an examination, post hoc statistical analysis indicated no significant difference in performance between the AI- and human-authored questions in terms of facility and discrimination index. DISCUSSION AND CONCLUSION: The outcomes of this study suggest that AI LLMs can generate SBA questions that are in line with best-practice guidelines and specific LOs. However, a robust quality assurance process is necessary to ensure that erroneous questions are identified and rejected. The insights gained from this research provide a foundation for further investigation into refining AI prompts, aiming for a more reliable generation of curriculum-aligned questions. LLMs show significant potential in supplementing traditional methods of question generation in medical education. This approach offers a viable solution to rapidly replenish and diversify assessment resources in medical curricula, marking a step forward in the intersection of AI and education.
Ahmed A; Kerr E; O'Malley A
21
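The preceding record evaluates exam items by facility and discrimination index. A minimal sketch of both metrics on simulated 0/1 response data; the conventional upper/lower 27% group split is an assumption, as the paper does not state which variant it used:

```python
# Facility = proportion of candidates answering an item correctly.
# Discrimination index = upper-group facility minus lower-group facility.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(100, 50))   # 100 candidates x 50 items, 1 = correct

totals = scores.sum(axis=1)                   # total score per candidate
order = np.argsort(totals)
k = int(round(0.27 * len(totals)))            # assumed upper/lower 27% groups
lower, upper = order[:k], order[-k:]

facility = scores.mean(axis=0)                # per-item proportion correct
discrimination = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)
print(facility[:5].round(2), discrimination[:5].round(2))
```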
39730155
ChatGPT (GPT-4) versus doctors on complex cases of the Swedish family medicine specialist examination: an observational comparative study.
2,024
BMJ open
BACKGROUND: Recent breakthroughs in artificial intelligence research include the development of generative pretrained transformers (GPT). ChatGPT has been shown to perform well when answering several sets of medical multiple-choice questions. However, it has not been tested for writing free-text assessments of complex cases in primary care. OBJECTIVES: To compare the performance of ChatGPT, version GPT-4, with that of real doctors. DESIGN AND SETTING: A blinded observational comparative study conducted in the Swedish primary care setting. Responses from GPT-4 and real doctors to cases from the Swedish family medicine specialist examination were scored by blinded reviewers, and the scores were compared. PARTICIPANTS: Anonymous responses from the Swedish family medicine specialist examination 2017-2022 were used. OUTCOME MEASURES: Primary: the mean difference in scores between GPT-4's responses and randomly selected responses by human doctors, as well as between GPT-4's responses and top-tier responses by human doctors. Secondary: the correlation between differences in response length and response score; the intraclass correlation coefficient between reviewers; and the percentage of maximum score achieved by each group in different subject categories. RESULTS: The mean scores were 6.0, 7.2 and 4.5 for randomly selected doctor responses, top-tier doctor responses and GPT-4 responses, respectively, on a 10-point scale. The scores for the random doctor responses were, on average, 1.6 points higher than those of GPT-4 (p<0.001, 95% CI 0.9 to 2.2) and the top-tier doctor scores were, on average, 2.7 points higher than those of GPT-4 (p<0.001, 95% CI 2.2 to 3.3). Following the release of GPT-4o, the experiment was repeated, although this time with only a single reviewer scoring the answers. In this follow-up, random doctor responses were scored 0.7 points higher than those of GPT-4o (p=0.044). CONCLUSION: In complex primary care cases, GPT-4 performs worse than human doctors taking the family medicine specialist examination. Future GPT-based chatbots may perform better, but comprehensive evaluations are needed before implementing chatbots for medical decision support in primary care.
Arvidsson R; Gunnarsson R; Entezarjou A; Sundemo D; Wikberg C
21
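The primary outcome in the record above is a mean score difference with a 95% CI. A minimal sketch of a Welch-style t interval for such a difference; the score arrays below are hypothetical stand-ins, since the examination responses are not public:

```python
# Mean difference between two score groups with a Welch 95% CI.
# The data are simulated; only the method mirrors the record above.
import numpy as np
from scipy import stats

doctor = np.array([6.5, 5.0, 7.0, 6.0, 5.5, 6.5, 7.5, 5.0])
gpt4 = np.array([4.0, 5.0, 4.5, 3.5, 5.5, 4.0, 5.0, 4.5])

v1 = doctor.var(ddof=1) / len(doctor)
v2 = gpt4.var(ddof=1) / len(gpt4)
diff = doctor.mean() - gpt4.mean()
se = np.sqrt(v1 + v2)
# Welch-Satterthwaite degrees of freedom
df = (v1 + v2) ** 2 / (v1**2 / (len(doctor) - 1) + v2**2 / (len(gpt4) - 1))
t_crit = stats.t.ppf(0.975, df)
print(f"mean difference = {diff:.2f}, "
      f"95% CI [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")
```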
39504445
ChatGPT-4 Omni Performance in USMLE Disciplines and Clinical Skills: Comparative Analysis.
2,024
JMIR medical education
BACKGROUND: Recent studies, including those by the National Board of Medical Examiners, have highlighted the remarkable capabilities of recent large language models (LLMs) such as ChatGPT in passing the United States Medical Licensing Examination (USMLE). However, there is a gap in detailed analysis of LLM performance in specific medical content areas, thus limiting an assessment of their potential utility in medical education. OBJECTIVE: This study aimed to assess and compare the accuracy of successive ChatGPT versions (GPT-3.5, GPT-4, and GPT-4 Omni) in USMLE disciplines, clinical clerkships, and the clinical skills of diagnostics and management. METHODS: This study used 750 clinical vignette-based multiple-choice questions to characterize the performance of successive ChatGPT versions (ChatGPT 3.5 [GPT-3.5], ChatGPT 4 [GPT-4], and ChatGPT 4 Omni [GPT-4o]) across USMLE disciplines, clinical clerkships, and in clinical skills (diagnostics and management). Accuracy was assessed using a standardized protocol, with statistical analyses conducted to compare the models' performances. RESULTS: GPT-4o achieved the highest accuracy across 750 multiple-choice questions at 90.4%, outperforming GPT-4 and GPT-3.5, which scored 81.1% and 60.0%, respectively. GPT-4o's highest performances were in social sciences (95.5%), behavioral and neuroscience (94.2%), and pharmacology (93.2%). In clinical skills, GPT-4o's diagnostic accuracy was 92.7% and management accuracy was 88.8%, significantly higher than its predecessors. Notably, both GPT-4o and GPT-4 significantly outperformed the medical student average accuracy of 59.3% (95% CI 58.3-60.3). CONCLUSIONS: GPT-4o's performance in USMLE disciplines, clinical clerkships, and clinical skills indicates substantial improvements over its predecessors, suggesting significant potential for the use of this technology as an educational aid for medical students. These findings underscore the need for careful consideration when integrating LLMs into medical education, emphasizing the importance of structured curricula to guide their appropriate use and the need for ongoing critical analyses to ensure their reliability and effectiveness.
Bicknell BT; Butler D; Whalen S; Ricks J; Dixon CJ; Clark AB; Spaedy O; Skelton A; Edupuganti N; Dzubinski L; Tate H; Dyess G; Lindeman B; Lehmann LS
21
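The record above contrasts GPT-4o's 90.4% accuracy on 750 items with a medical-student average of 59.3%. A minimal sketch of testing such a gap with a two-proportion z-test; giving the students the same 750-item denominator is an assumption for illustration only, not the study's method:

```python
# Two-proportion z-test on correct-answer counts.
# GPT-4o: ~0.904 * 750 = 678; students: ~0.593 * 750 = 445 (assumed denominator).
from statsmodels.stats.proportion import proportions_ztest

counts = [678, 445]
nobs = [750, 750]

z_stat, p = proportions_ztest(counts, nobs)
print(f"z = {z_stat:.2f}, p = {p:.3g}")
```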
39496149
Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study.
2,024
Journal of medical Internet research
BACKGROUND: Recent surveys indicate that 48% of consumers actively use generative artificial intelligence (AI) for health-related inquiries. Despite widespread adoption and the potential to improve health care access, scant research examines the performance of AI chatbot responses regarding emergency care advice. OBJECTIVE: We assessed the quality of AI chatbot responses to common emergency care questions. We sought to determine qualitative differences in responses from 4 free-access AI chatbots, for 10 different serious and benign emergency conditions. METHODS: We created 10 emergency care questions that we fed into the free-access versions of ChatGPT 3.5 (OpenAI), Google Bard, Bing AI Chat (Microsoft), and Claude AI (Anthropic) on November 26, 2023. Each response was graded by 5 board-certified emergency medicine (EM) faculty across 8 domains: percentage accuracy, presence of dangerous information, factual accuracy, clarity, completeness, understandability, source reliability, and source relevancy. We determined the correct, complete response to the 10 questions from reputable and scholarly emergency medical references; these were compiled by an EM resident physician. For the readability of the chatbot responses, we used the Flesch-Kincaid Grade Level of each response from the readability statistics embedded in Microsoft Word. Differences between chatbots were determined by the chi-square test. RESULTS: Each of the 4 chatbots' responses to the 10 clinical questions was scored across 8 domains by 5 EM faculty, yielding 400 assessments per chatbot. Together, the 4 chatbots had the best performance in clarity and understandability (both 85%), intermediate performance in accuracy and completeness (both 50%), and poor performance (10%) for source relevance and reliability (mostly unreported). Chatbots contained dangerous information in 5% to 35% of responses, with no statistical difference between chatbots on this metric (P=.24). ChatGPT, Google Bard, and Claude AI had similar performances across 6 out of 8 domains. Only Bing AI performed better with more identified or relevant sources (40%; the others had 0%-10%). The Flesch-Kincaid Grade Level was 7.7-8.9 for all chatbots except ChatGPT, at 10.8; all were too advanced for average emergency patients. Responses included both dangerous (eg, starting cardiopulmonary resuscitation with no pulse check) and generally inappropriate advice (eg, loosening the collar to improve breathing without evidence of airway compromise). CONCLUSIONS: AI chatbots, though ubiquitous, have significant deficiencies in EM patient advice, despite relatively consistent performance. Information for when to seek urgent or emergent care is frequently incomplete and inaccurate, and patients may be unaware of misinformation. Sources are not generally provided. Patients who use AI to guide health care decisions assume potential risks. AI chatbots for health should be subject to further research, refinement, and regulation. We strongly recommend proper medical consultation to prevent potential adverse outcomes.
Yau JY; Saadat S; Hsu E; Murphy LS; Roh JS; Suchard J; Tapia A; Wiechmann W; Langdorf MI
43
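The record above grades response readability with the Flesch-Kincaid Grade Level from Microsoft Word. A minimal sketch of the underlying formula, 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59; the vowel-group syllable counter is a crude assumption and will not match Word's internal counter exactly:

```python
# Flesch-Kincaid Grade Level from scratch, with a naive syllable heuristic.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels (assumption; not Word's method).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(fkgl("Call emergency services if the person has no pulse."), 1))
```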
39382347
Registered Nurses' Attitudes Towards ChatGPT and Self-Directed Learning: A Cross-Sectional Study.
2,024
Journal of advanced nursing
BACKGROUND: Self-directed, lifelong learning is essential for nurses' competence in complex healthcare environments, which are characterised by rapid advancements in medicine and technology and nursing shortages. Previous studies have demonstrated that ChatGPT technology fosters self-directed learning by motivating users to engage with it. OBJECTIVES: To explore the relationships amongst socio-demographic data, attitudes towards ChatGPT use, and self-directed learning amongst registered nurses in Taiwan. METHODS: A cross-sectional study design with an online survey was adopted. Registered nurses from various healthcare settings were recruited through Facebook and LINE, a widely used messaging application in East Asia, reaching over 1000 nurses across five distinct online groups. An online survey was used to collect data, including socio-demographic characteristics, attitudes towards ChatGPT use, and a self-directed learning scale. Data were analysed using descriptive statistical methods, t-tests, Pearson's correlation, one-way analysis of variance, and multiple linear regression analysis. RESULTS: Amongst the 330 participants, 50.6% worked in hospitals, 51.8% had more than 15 years of work experience, and 78.2% did not hold supervisory positions. Of the participants, 46.7% had used ChatGPT. For all nurses, work experience and awareness of ChatGPT statistically significantly predicted self-directed learning, explaining 32.0% of the variance. For those familiar with ChatGPT, work experience in nursing and the technological/social influence of ChatGPT statistically significantly predicted self-directed learning, explaining 35.3% of the variance. CONCLUSIONS: Work experience in nursing provides critical opportunities for professional development and training. Therefore, ChatGPT-supported self-directed learning should be customised for degrees of experience to optimise continuous education. IMPLICATIONS FOR NURSING MANAGEMENT AND HEALTH POLICY: This study explores nurses' diverse use of and attitudes towards ChatGPT for self-directed learning. It suggests that administrators customise support and training when incorporating ChatGPT into professional development, accounting for nurses' varied experiences to enhance learning outcomes. PATIENT OR PUBLIC CONTRIBUTION: No patient or public contribution. REPORTING METHOD: This study adhered to the relevant cross-sectional STROBE guidelines.
Chang LC; Wang YN; Lin HL; Liao LL
10
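The record above uses multiple linear regression to predict self-directed learning from work experience and attitudes toward ChatGPT. A minimal sketch with statsmodels on simulated data; the variable names and effect sizes are assumptions, not the study's instrument:

```python
# OLS regression of a self-directed learning score on two predictors.
# All data are simulated to mirror the described analysis, not reproduce it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 330
experience = rng.uniform(1, 30, n)        # years in nursing (assumed range)
attitude = rng.normal(3.5, 0.8, n)        # attitude toward ChatGPT (assumed 1-5 scale)
sdl = 2 + 0.05 * experience + 0.4 * attitude + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([experience, attitude]))
model = sm.OLS(sdl, X).fit()
print(f"R-squared = {model.rsquared:.3f}, coefficients = {model.params.round(3)}")
```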
40229613
Man Versus Machine: A Comparative Study of Human and ChatGPT-Generated Abstracts in Plastic Surgery Research.
2,025
Aesthetic plastic surgery
BACKGROUND: Since its 2022 release, ChatGPT has gained recognition for its potential to expedite time-consuming writing tasks like scientific writing. Well-written scientific abstracts are essential for clear and efficient communication of research findings. This study aims to explore ChatGPT-4's capability to produce well-crafted abstracts. METHODS: Ten plastic surgery articles lacking abstracts were retrieved from PubMed and uploaded to ChatGPT, each with a prompt to generate one abstract. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) were calculated for all abstracts. Additionally, three physician evaluators blindly assessed the ten original and ten ChatGPT-generated abstracts using a 5-point Likert scale. Results were compared and analyzed using descriptive statistics with mean and standard deviation (SD). RESULTS: The original abstracts averaged an FKGL of 14.1 (SD 2.9) and an FRES of 25.2 (SD 14.2), while ChatGPT-generated abstracts had scores of 15.6 (SD 2.4) and 15.4 (SD 13.1), respectively. Collectively, evaluators correctly identified two-thirds of the ChatGPT-generated abstracts, yet preferred the ChatGPT abstracts 90% of the time. On average, the evaluators found the ChatGPT abstracts to be more "well written" (4.23 vs. 3.50, p<0.001) and "clear and concise" (4.30 vs. 3.53, p<0.001) compared to the original abstracts. CONCLUSIONS: Despite a slightly higher reading level, evaluators generally preferred ChatGPT abstracts, which received higher ratings overall. These findings suggest ChatGPT holds promise in expediting the creation of high-quality scientific abstracts, potentially enhancing efficiency in research and scientific writing tasks. However, due to its exploratory nature, this study calls for additional research to validate these promising findings. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Pressman SM; Garcia JP; Borna S; Gomez-Cabello CA; Haider SA; Haider CR; Forte AJ
10
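The final record compares mean Likert ratings between ChatGPT-generated and original abstracts (e.g., 4.23 vs. 3.50 for "well written"). A minimal sketch of such a comparison with an independent-samples t-test; the individual ratings are simulated, since the study reports only means and SDs:

```python
# Welch's t-test on simulated 5-point Likert ratings for two abstract groups.
# Means/SDs loosely follow the reported values; the raw ratings are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
chatgpt = np.clip(rng.normal(4.23, 0.6, 30), 1, 5)    # simulated ratings
original = np.clip(rng.normal(3.50, 0.7, 30), 1, 5)

t_stat, p = stats.ttest_ind(chatgpt, original, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p:.4g}")
```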