| pmid | title | year | journal | doi | mesh | keywords | abstract | authors | cluster |
|---|---|---|---|---|---|---|---|---|---|
| 37561097 | Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation. | 2023 | JMIR medical education | | | |
BACKGROUND: Since OpenAI released ChatGPT, with its strong capability in handling natural tasks and its user-friendly interface, it has garnered significant attention. OBJECTIVE: A prospective analysis is required to evaluate the accuracy and appropriateness of medication consultation responses generated by ChatGPT. METHODS: A prospective cross-sectional study was conducted by the pharmacy department of a medical center in Taiwan. The test data set comprised retrospective medication consultation questions collected from February 1, 2023, to February 28, 2023, along with common questions about drug-herb interactions. Two distinct sets of questions were tested: real-world medication consultation questions and common questions about interactions between traditional Chinese and Western medicines. We used the conventional double-review mechanism. The appropriateness of each response from ChatGPT was assessed by 2 experienced pharmacists. In the event of a discrepancy between the assessments, a third pharmacist stepped in to make the final decision. RESULTS: Of 293 real-world medication consultation questions, a random selection of 80 was used to evaluate ChatGPT's performance. ChatGPT exhibited a higher appropriateness rate in responding to public medication consultation questions compared to those asked by health care providers in a hospital setting (31/51, 61% vs 20/51, 39%; P=.01). CONCLUSIONS: The findings from this study suggest that ChatGPT could potentially be used for answering basic medication consultation questions. Our analysis of the erroneous information allowed us to identify potential medical risks associated with certain questions; this problem deserves our close attention.
| Hsu HY; Hsu KC; Hou SY; Wu CL; Hsieh YW; Cheng YD | 10 |
| 37456381 | Snakebite Advice and Counseling From Artificial Intelligence: An Acute Venomous Snakebite Consultation With ChatGPT. | 2023 | Cureus | | | |
BACKGROUND: Snakebites, particularly from venomous species, present a significant global public health challenge. Access to accurate and timely information regarding snakebite prevention, recognition, and management is crucial for minimizing morbidity and mortality. Artificial intelligence (AI) language models, such as ChatGPT (Chat Generative Pre-trained Transformer), have the potential to revolutionize the dissemination of medical information and improve patient education and satisfaction. METHODS: This study aimed to explore the utility of ChatGPT, an advanced language model, in simulating acute venomous snakebite consultations. Nine hypothetical questions based on comprehensive snakebite management guidelines were posed to ChatGPT, and the responses were evaluated by clinical toxicologists and emergency medicine physicians. RESULTS: ChatGPT provided accurate and informative responses related to the immediate management of snakebites, the urgency of seeking medical attention, symptoms, and health issues following venomous snakebites, the role of antivenom, misconceptions about snakebites, recovery, pain management, and prevention strategies. The model highlighted the importance of seeking professional medical care and adhering to healthcare practitioners' advice. However, some limitations were identified, including outdated knowledge, lack of personalization, and inability to consider regional variations and individual characteristics. CONCLUSION: ChatGPT demonstrated proficiency in generating intelligible and well-informed responses related to venomous snakebites. It offers accessible and real-time advice, making it a valuable resource for preliminary information, education, and triage support in remote or underserved areas. While acknowledging its limitations, such as the need for up-to-date information and personalized advice, ChatGPT can serve as a supplementary source of information to complement professional medical consultation and enhance patient education. Future research should focus on addressing the identified limitations and establishing region-specific guidelines for snakebite management.
| Altamimi I; Altamimi A; Alhumimidi AS; Altamimi A; Temsah MH | 0-1 |
| 40393017 | Assessing ChatGPT's Capability as a New Age Standardized Patient: Qualitative Study. | 2025 | JMIR medical education | | | |
BACKGROUND: Standardized patients (SPs) have been crucial in medical education, offering realistic patient interactions to students. Despite their benefits, SP training is resource-intensive and access can be limited. Advances in artificial intelligence (AI), particularly with large language models such as ChatGPT, present new opportunities for virtual SPs, potentially addressing these limitations. OBJECTIVES: This study aims to assess medical students' perceptions and experiences of using ChatGPT as an SP and to evaluate ChatGPT's effectiveness in performing as a virtual SP in a medical school setting. METHODS: This qualitative study, approved by the American University of Antigua Institutional Review Board, involved 9 students (5 females and 4 males, aged 22-48 years) from the American University of Antigua College of Medicine. Students were observed during a live role-play, interacting with ChatGPT as an SP using a predetermined prompt. A structured 15-question survey was administered before and after the interaction. Thematic analysis was conducted on the transcribed and coded responses, with inductive category formation. RESULTS: Thematic analysis identified key themes preinteraction including technology limitations (eg, prompt engineering difficulties), learning efficacy (eg, potential for personalized learning and reduced interview stress), verisimilitude (eg, absence of visual cues), and trust (eg, concerns about AI accuracy). Postinteraction, students noted improvements in prompt engineering, some alignment issues (eg, limited responses on sensitive topics), maintained learning efficacy (eg, convenience and repetition), and continued verisimilitude challenges (eg, lack of empathy and nonverbal cues). No significant trust issues were reported postinteraction. Despite some limitations, students found ChatGPT as a valuable supplement to traditional SPs, enhancing practice flexibility and diagnostic skills. CONCLUSIONS: ChatGPT can effectively augment traditional SPs in medical education, offering accessible, flexible practice opportunities. However, it cannot fully replace human SPs due to limitations in verisimilitude and prompt engineering challenges. Integrating prompt engineering into medical curricula and continuous advancements in AI are recommended to enhance the use of virtual SPs.
| Cross J; Kayalackakom T; Robinson RE; Vaughans A; Sebastian R; Hood R; Lewis C; Devaraju S; Honnavar P; Naik S; Joseph J; Anand N; Mohammed A; Johnson A; Cohen E; Adeniji T; Nnenna Nnaji A; George JE | 10 |
| 37371718 | PRISMA Systematic Literature Review, including with Meta-Analysis vs. Chatbot/GPT (AI) regarding Current Scientific Data on the Main Effects of the Calf Blood Deproteinized Hemoderivative Medicine (Actovegin) in Ischemic Stroke. | 2023 | Biomedicines | | | |
BACKGROUND: Stroke is a significant public health problem and a leading cause of death and long-term disability worldwide. Several treatments for ischemic stroke have been developed, but these treatments have limited effectiveness. One potential treatment for this condition is Actovegin((R))/AODEJIN, a calf blood deproteinized hemodialysate/ultrafiltrate that has been shown to have pleiotropic/multifactorial and possibly multimodal effects. The actual actions of this medicine are thought to be mediated by its ability to reduce oxidative stress, inflammation, and apoptosis and to enhance neuronal survival and plasticity. METHODS: To obtain the most up-to-date information on the effects of Actovegin((R))/AODEJIN in ischemic stroke, we systematically reviewed the literature published in the last two years. This review builds upon our previous systematic literature review published in 2020, which used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method to search for and select related articles over almost two decades, between 1 January 2001 and 31 December 2019. Additionally, we compared the results of our PRISMA search (human intelligence-based) with those obtained from an interrogation of a GPT-based chatbot (ChatGPT) in order to ensure comprehensive coverage of potentially relevant studies. RESULTS: Our updated review found limited new evidence on the use of Actovegin((R))/AODEJIN in ischemic stroke, although the number of articles on this subject consistently increased compared to that from our initial systematic literature review. Specifically, we found five articles up to 2020 and eight more until December 2022. While these studies suggest that Actovegin((R))/AODEJIN may have neuroprotective effects in ischemic stroke, further clinical trials are needed to confirm these findings. Consequently, we performed a funnel analysis to evaluate the potential for publication bias. DISCUSSION: Our funnel analysis showed no evidence of publication bias, suggesting that the limited number of studies identified was not due to publication bias but rather due to a lack of research in this area. However, there are limitations when using ChatGPT, particularly in distinguishing between truth and falsehood and determining the appropriateness of interpolation. Nevertheless, AI can provide valuable support in conducting PRISMA-type systematic literature reviews, including meta-analyses. CONCLUSIONS: The limited number of studies identified in our review highlights the need for additional research in this area, especially as no available therapeutic agents are capable of curing central nervous system lesions. Any contribution, including that of Actovegin (with consideration of a positive balance between benefits and risks), is worthy of further study and periodic reappraisal. The evolving advancements in AI may play a role in the near future.
| Anghelescu A; Firan FC; Onose G; Munteanu C; Trandafir AI; Ciobanu I; Gheorghita S; Ciobanu V | 10 |
| 38449683 | Exploring the proficiency of ChatGPT-4: An evaluation of its performance in the Taiwan advanced medical licensing examination. | 2024 | Digital health | | | |
BACKGROUND: Taiwan is well-known for its quality healthcare system. The country's medical licensing exams offer a way to evaluate ChatGPT's medical proficiency. METHODS: We analyzed exam data from February 2022, July 2022, February 2023, and July 2023. Each exam included four papers with 80 single-choice questions, grouped as descriptive or picture-based. We used ChatGPT-4 for evaluation. Incorrect answers prompted a "chain of thought" approach. Accuracy rates were calculated as percentages. RESULTS: ChatGPT-4's accuracy in medical exams ranged from 63.75% to 93.75% (February 2022-July 2023). The highest accuracy (93.75%) was in February 2022's Medicine Exam (3). Subjects with the highest rates of incorrect answers were ophthalmology (28.95%), breast surgery (27.27%), plastic surgery (26.67%), orthopedics (25.00%), and general surgery (24.59%). With "chain of thought" (CoT) prompting, accuracy ranged from 0.00% to 88.89%, and the final overall accuracy rate ranged from 90% to 98%. CONCLUSION: ChatGPT-4 succeeded in Taiwan's medical licensing exams. With the "chain of thought" prompt, it improved accuracy to over 90%.
| Lin SY; Chan PK; Hsu WH; Kao CH | 21 |
| 39118469 | Assessing the Ability of a Large Language Model to Score Free-Text Medical Student Clinical Notes: Quantitative Study. | 2024 | JMIR medical education | | | |
BACKGROUND: Teaching medical students the skills required to acquire, interpret, apply, and communicate clinical information is an integral part of medical education. A crucial aspect of this process involves providing students with feedback regarding the quality of their free-text clinical notes. OBJECTIVE: The goal of this study was to assess the ability of ChatGPT 3.5, a large language model, to score medical students' free-text history and physical notes. METHODS: This is a single-institution, retrospective study. Standardized patients learned a prespecified clinical case and, acting as the patient, interacted with medical students. Each student wrote a free-text history and physical note of their interaction. The students' notes were scored independently by the standardized patients and ChatGPT using a prespecified scoring rubric that consisted of 85 case elements. The measure of accuracy was percent correct. RESULTS: The study population consisted of 168 first-year medical students. There was a total of 14,280 scores. The ChatGPT incorrect scoring rate was 1.0%, and the standardized patient incorrect scoring rate was 7.2%. The ChatGPT error rate was 86% lower than the standardized patient error rate. The ChatGPT mean incorrect scoring rate of 12 (SD 11) was significantly lower than the standardized patient mean incorrect scoring rate of 85 (SD 74; P=.002). CONCLUSIONS: ChatGPT demonstrated a significantly lower error rate compared to standardized patients. This is the first study to assess the ability of a generative pretrained transformer (GPT) program to score medical students' standardized patient-based free-text clinical notes. It is expected that, in the near future, large language models will provide real-time feedback to practicing physicians regarding their free-text notes. GPT artificial intelligence programs represent an important advance in medical education and medical practice.
| Burke HB; Hoang A; Lopreiato JO; King H; Hemmer P; Montgomery M; Gagarin V | 21 |
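The contrast above (1.0% vs 7.2% incorrect out of 14,280 rubric-element scores) can be illustrated with a simple test of two proportions. This is not the study's analysis, which compared per-student means; the counts below are back-calculated approximations from the abstract.

```python
# Illustrative comparison of two error proportions with a chi-square test.
# Counts are approximated from the abstract's percentages, not source data.
from scipy.stats import chi2_contingency

total = 14_280
gpt_errors = round(0.010 * total)   # ~1.0% incorrect (ChatGPT)
sp_errors = round(0.072 * total)    # ~7.2% incorrect (standardized patients)

table = [[gpt_errors, total - gpt_errors],
         [sp_errors, total - sp_errors]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```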
| 39388702 | Navigating Nephrology's Decline Through a GPT-4 Analysis of Internal Medicine Specialties in the United States: Qualitative Study. | 2024 | JMIR medical education | | | |
BACKGROUND: The 2024 Nephrology fellowship match data show the declining interest in nephrology in the United States, with an 11% drop in candidates and a mere 66% (321/488) of positions filled. OBJECTIVE: The study aims to discern the factors influencing this trend using ChatGPT, a leading chatbot model, for insights into the comparative appeal of nephrology versus other internal medicine specialties. METHODS: Using the GPT-4 model, the study compared nephrology with 13 other internal medicine specialties, evaluating each on 7 criteria including intellectual complexity, work-life balance, procedural involvement, research opportunities, patient relationships, career demand, and financial compensation. Each criterion was assigned scores from 1 to 10, with the cumulative score determining the ranking. The approach included counteracting potential bias by instructing GPT-4 to favor other specialties over nephrology in reverse scenarios. RESULTS: GPT-4 ranked nephrology only above sleep medicine. While nephrology scored higher than hospice and palliative medicine, it fell short in key criteria such as work-life balance, patient relationships, and career demand. When examining the percentage of filled positions in the 2024 appointment year match, nephrology's filled rate was 66%, only higher than the 45% (155/348) filled rate of geriatric medicine. Nephrology's score decreased by 4%-14% in 5 criteria including intellectual challenge and complexity, procedural involvement, career opportunity and demand, research and academic opportunities, and financial compensation. CONCLUSIONS: ChatGPT does not favor nephrology over most internal medicine specialties, highlighting its diminishing appeal as a career choice. This trend raises significant concerns, especially considering the overall physician shortage, and prompts a reevaluation of factors affecting specialty choice among medical residents.
| Miao J; Thongprayoon C; Garcia Valencia O; Craici IM; Cheungpasitporn W | 10 |
| 39050145 | Assessing accuracy of ChatGPT in response to questions from day to day pharmaceutical care in hospitals. | 2024 | Exploratory research in clinical and social pharmacy | | | |
BACKGROUND: The advent of Large Language Models (LLMs) such as ChatGPT introduces opportunities within the medical field. Nonetheless, use of LLM poses a risk when healthcare practitioners and patients present clinical questions to these programs without a comprehensive understanding of its suitability for clinical contexts. OBJECTIVE: The objective of this study was to assess ChatGPT's ability to generate appropriate responses to clinical questions that hospital pharmacists could encounter during routine patient care. METHODS: Thirty questions from 10 different domains within clinical pharmacy were collected during routine care. Questions were presented to ChatGPT in a standardized format, including patients' age, sex, drug name, dose, and indication. Subsequently, relevant information regarding specific cases were provided, and the prompt was concluded with the query "what would a hospital pharmacist do?". The impact on accuracy was assessed for each domain by modifying personification to "what would you do?", presenting the question in Dutch, and regenerating the primary question. All responses were independently evaluated by two senior hospital pharmacists, focusing on the availability of an advice, accuracy and concordance. RESULTS: In 77% of questions, ChatGPT provided an advice in response to the question. For these responses, accuracy and concordance were determined. Accuracy was correct and complete for 26% of responses, correct but incomplete for 22% of responses, partially correct and partially incorrect for 30% of responses and completely incorrect for 22% of responses. The reproducibility was poor, with merely 10% of responses remaining consistent upon regeneration of the primary question. CONCLUSIONS: While concordance of responses was excellent, the accuracy and reproducibility were poor. With the described method, ChatGPT should not be used to address questions encountered by hospital pharmacists during their shifts. However, it is important to acknowledge the limitations of our methodology, including potential biases, which may have influenced the findings.
| van Nuland M; Lobbezoo AH; van de Garde EMW; Herbrink M; van Heijl I; Bognar T; Houwen JPA; Dekens M; Wannet D; Egberts T; van der Linden PD | 10 |
| 39430693 | Assessing AI efficacy in medical knowledge tests: A study using Taiwan's internal medicine exam questions from 2020 to 2023. | 2024 | Digital health | | | |
BACKGROUND: The aim of this study is to evaluate the ability of generative artificial intelligence (AI) models to handle specialized medical knowledge and problem-solving in a formal examination context. METHODS: This research utilized internal medicine exam questions provided by the Taiwan Internal Medicine Society from 2020 to 2023, testing three AI models: GPT-4o, Claude_3.5 Sonnet, and Gemini Advanced models. Rejected queries for Gemini Advanced were translated into French for resubmission. Performance was assessed using IBM SPSS Statistics 26, with accuracy percentages calculated and statistical analyses such as Pearson correlation and analysis of variance (ANOVA) performed to gauge AI efficacy. RESULTS: GPT-4o's top annual score was 86.25 in 2022, with an average of 81.97. Claude_3.5 Sonnet reached a peak score of 88.13 in 2021 and 2022, averaging 84.85, while Gemini Advanced lagged with an average score of 69.84. In specific specialties, Claude_3.5 Sonnet scored highest in Psychiatry (100%) and Nephrology (97.26%), with GPT-4o performing similarly well in Hematology & oncology (97.10%) and Nephrology (94.52%). Gemini's best scores were in Psychiatry (86.96%) and Hematology & Oncology (82.76%). Gemini Advanced models struggled with Neurology, scoring below 60%. Additionally, all models performed better on text-based questions than on image-based ones, without significant differences. Claude 3 Opus scored highest on COVID-19-related questions at 89.29%, followed by GPT-4o at 75.00% and Gemini Advanced at 67.86%. CONCLUSIONS: AI models showed varied proficiency across medical specialties and question types. GPT-4o demonstrated higher image-based correction rates. Claude_3.5 Sonnet generally and consistently outperformed others, highlighting significant potential for AI in assisting medical education.
| Lin SY; Hsu YY; Ju SW; Yeh PC; Hsu WH; Kao CH | 21 |
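The study above summarizes each model's performance as an accuracy percentage and compares models with ANOVA. A minimal sketch of that style of comparison follows; the item-level scores and the way models are grouped are placeholders, not the exam data.

```python
# Illustrative per-model accuracy comparison with a one-way ANOVA
# (made-up item-level results, not the study's data).
import numpy as np
from scipy import stats

# 1 = answered correctly, 0 = incorrect, one array per model (hypothetical)
results = {
    "GPT-4o":            np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]),
    "Claude_3.5_Sonnet": np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1]),
    "Gemini_Advanced":   np.array([1, 0, 0, 1, 1, 0, 1, 0, 1, 1]),
}

for name, scores in results.items():
    print(f"{name}: accuracy = {scores.mean():.1%}")

# One-way ANOVA across the models' item-level scores
f_stat, p_value = stats.f_oneway(*results.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```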
| 38797622 | ChatGPT performance on radiation technologist and therapist entry to practice exams. | 2024 | Journal of medical imaging and radiation sciences | | | |
BACKGROUND: The aim of this study was to describe the proficiency of ChatGPT (GPT-4) on certification style exams from the Canadian Association of Medical Radiation Technologists (CAMRT), and describe its performance across multiple exam attempts. METHODS: ChatGPT was prompted with questions from CAMRT practice exams in the disciplines of radiological technology, magnetic resonance (MRI), nuclear medicine and radiation therapy (87-98 questions each). ChatGPT attempted each exam five times. Exam performance was evaluated using descriptive statistics, stratified by discipline and question type (knowledge, application, critical thinking). Light's Kappa was used to assess agreement in answers across attempts. RESULTS: Using a passing grade of 65 %, ChatGPT passed the radiological technology exam only once (20 %), MRI all five times (100 %), nuclear medicine three times (60 %), and radiation therapy all five times (100 %). ChatGPT's performance was best on knowledge questions across all disciplines except radiation therapy. It performed worst on critical thinking questions. Agreement in ChatGPT's responses across attempts was substantial within the disciplines of radiological technology, MRI, and nuclear medicine, and almost perfect for radiation therapy. CONCLUSION: ChatGPT (GPT-4) was able to pass certification style exams for radiation technologists and therapists, but its performance varied between disciplines. The algorithm demonstrated substantial to almost perfect agreement in the responses it provided across multiple exam attempts. Future research evaluating ChatGPT's performance on standardized tests should consider using repeated measures.
| Duggan R; Tsuruda KM | 21 |
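Light's kappa, used above to gauge consistency across ChatGPT's five exam attempts, is the mean of the pairwise Cohen's kappas between attempts. A small sketch with invented answer strings:

```python
# Light's kappa = average Cohen's kappa over all pairs of raters (here, attempts).
# The answer letters are fabricated for illustration.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

attempts = [  # ChatGPT's selected answers to the same questions, one list per attempt
    ["A", "C", "B", "D", "A", "B"],
    ["A", "C", "B", "D", "C", "B"],
    ["A", "C", "B", "B", "A", "B"],
]

pairwise = [cohen_kappa_score(a, b) for a, b in combinations(attempts, 2)]
lights_kappa = sum(pairwise) / len(pairwise)
print(f"Light's kappa across attempts: {lights_kappa:.2f}")
```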
| 37549788 | Can ChatGPT pass the thoracic surgery exam? | 2023 | The American journal of the medical sciences | | | |
BACKGROUND: The capabilities of ChatGPT in academic environments and on medical exams are being discovered more and more every day. In this study, we tested the success of ChatGPT on Turkish-language thoracic surgery exam questions. METHODS: ChatGPT was provided with a total of 105 questions divided into seven distinct groups, each of which contained 15 questions. Along with the success of the students, the success of the ChatGPT-3.5 and ChatGPT-4 architectures in answering the questions correctly was analyzed. RESULTS: The overall mean score of students was 12.50 +/- 1.20, corresponding to 83.33%. ChatGPT-3.5 surpassed the students' mean score of 12.50, answering an average of 13.57 +/- 0.49 questions correctly, while ChatGPT-4 answered 14 +/- 0.76 questions correctly (83.33%, 90.48%, and 93.33%, respectively). CONCLUSIONS: When the results of this study and other similar studies in the literature are evaluated together, ChatGPT, which was developed for general purposes, can also produce successful results in a specific field of medicine. AI-powered applications are becoming more and more useful and valuable in providing academic knowledge.
| Gencer A; Aydin S | 21 |
| 40211256 | Evaluating the agreement between ChatGPT-4 and validated questionnaires in screening for anxiety and depression in college students: a cross-sectional study. | 2025 | BMC psychiatry | | | |
BACKGROUND: The Chat Generative Pre-trained Transformer (ChatGPT), an artificial intelligence-based web application, has demonstrated substantial potential across various knowledge domains, particularly in medicine. This cross-sectional study assessed the validity and possible usefulness of ChatGPT-4 in assessing anxiety and depression by comparing two questionnaires. METHODS: This study tasked ChatGPT-4 with generating a structured interview questionnaire based on the validated Patient Health Questionnaire-9 (PHQ-9) and Generalized Anxiety Disorder Scale-7 (GAD-7). These new measures were referred to as GPT-PHQ-9 and GPT-GAD-7. This study utilized Spearman correlation analysis, intra-class correlation coefficients (ICC), Youden's index, receiver operating characteristic (ROC) curves, and Bland-Altman plots to evaluate the consistency between scores from the ChatGPT-4 adapted questionnaires and those from the validated questionnaires. RESULTS: A total of 200 college students participated. Cronbach's alpha indicated acceptable reliability for both GPT-PHQ-9 (alpha = 0.75) and GPT-GAD-7 (alpha = 0.76). ICC values were 0.80 for PHQ-9 and 0.70 for GAD-7. Spearman's correlation showed moderate associations with PHQ-9 (rho = 0.63) and GAD-7 (rho = 0.68). ROC curve analysis revealed optimal cutoffs of 9.5 for depressive symptoms and 6.5 for anxiety symptoms, both with high sensitivity and specificity. CONCLUSIONS: The questionnaires adapted by ChatGPT-4 demonstrated good consistency with the validated questionnaires. Future studies should investigate the usefulness of the ChatGPT-designed questionnaires in different populations.
| Liu J; Gu J; Tong M; Yue Y; Qiu Y; Zeng L; Yu Y; Yang F; Zhao S | 0-1 |
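The agreement and cutoff analyses named above (Spearman correlation, ROC with Youden's index) can be sketched as follows. The score arrays are synthetic, and the PHQ-9 >= 10 "caseness" threshold is an assumption made for illustration only.

```python
# Minimal sketch: correlation between a GPT-adapted scale and the validated
# scale, plus a Youden-index cutoff from a ROC curve (synthetic data).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
phq9 = rng.integers(0, 28, size=200)             # validated PHQ-9 totals (synthetic)
gpt_phq9 = phq9 + rng.normal(0, 3, size=200)     # GPT-adapted totals (synthetic)
depressed = (phq9 >= 10).astype(int)             # assumed reference label

rho, p = spearmanr(phq9, gpt_phq9)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")

fpr, tpr, thresholds = roc_curve(depressed, gpt_phq9)
youden_j = tpr - fpr
best_cutoff = thresholds[youden_j.argmax()]
print(f"Optimal GPT-PHQ-9 cutoff by Youden's index: {best_cutoff:.1f}")
```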
| 40393915 | Evaluating the Agreement Between ChatGPT-4 and Validated Mental Health Scales in Older Adults: A Cross-Sectional Study. | 2025 | The American journal of geriatric psychiatry : official journal of the American Association for Geriatric Psychiatry | | | |
BACKGROUND: The chat generative pretrained transformer (ChatGPT), an artificial intelligence-based web application, has demonstrated substantial potential across various knowledge domains, particularly in medicine. This cross-sectional study assessed the validity and possible usefulness of ChatGPT-4 in assessing mental health by comparing several scales. METHODS: A cross-sectional study recruited 127 older adults (>/=60 years old) from December 2023 to October 2024 in Wuhan. Scenarios contextualized to daily life were adapted using ChatGPT-4 to adapt six scales (PHQ-9, GAD-7, K10, PSS-4, ULS-6, WHO-5). The level of agreement between the ChatGPT-4 adapted scales and the traditional scales was compared using Spearman correlation, Cronbach's alpha coefficient, intraclass correlation coefficient (ICC), and Bland-Altman analysis. RESULTS: The ChatGPT-adapted questionnaires showed moderate to strong correlations with traditionally validated measures of anxiety, depression, psychological distress, perceived stress, loneliness, and well-being. Significant positive correlations were observed for total scores, including PHQ-9 (rho = 0.61), GAD-7 (rho = 0.66), K10 (rho = 0.75), PSS-4 (rho = 0.71), and ULS-6 (rho = 0.73), with slightly weaker correlation for WHO-5 (rho = 0.41). Reliability analyses yielded Cronbach's alpha values ranging from 0.66 to 0.91 and ICCs ranging from 0.47 to 0.81, confirming strong internal consistency and test-retest reliability. CONCLUSIONS: Moderate to high correlations were found between the adapted ChatGPT-4 questionnaire and the traditional scale, indicating that it shows promise as a supplemental mental health assessment tool. However, further research is needed to explore its broader applicability.
| Liu J; Gu J; Tong M; Yue Y; Qiu Y; Zeng L; Yu Y; Yang F | 0-1 |
| 39139744 | Understanding How ChatGPT May Become a Clinical Administrative Tool Through an Investigation on the Ability to Answer Common Patient Questions Concerning Ulnar Collateral Ligament Injuries. | 2024 | Orthopaedic journal of sports medicine | | | |
BACKGROUND: The consumer availability and automated response functions of chat generator pretrained transformer (ChatGPT-4), a large language model, poise this application to be utilized for patient health queries and may have a role in serving as an adjunct to minimize administrative and clinical burden. PURPOSE: To evaluate the ability of ChatGPT-4 to respond to patient inquiries concerning ulnar collateral ligament (UCL) injuries and compare these results with the performance of Google. STUDY DESIGN: Cross-sectional study. METHODS: Google Web Search was used as a benchmark, as it is the most widely used search engine worldwide and the only search engine that generates frequently asked questions (FAQs) when prompted with a query, allowing comparisons through a systematic approach. The query "ulnar collateral ligament reconstruction" was entered into Google, and the top 10 FAQs, answers, and their sources were recorded. ChatGPT-4 was prompted to perform a Google search of FAQs with the same query and to record the sources of answers for comparison. This process was again replicated to obtain 10 new questions requiring numeric instead of open-ended responses. Finally, responses were graded independently for clinical accuracy (grade 0 = inaccurate, grade 1 = somewhat accurate, grade 2 = accurate) by 2 fellowship-trained sports medicine surgeons (D.W.A, J.S.D.) blinded to the search engine and answer source. RESULTS: ChatGPT-4 used a greater proportion of academic sources than Google to provide answers to the top 10 FAQs, although this was not statistically significant (90% vs 50%; P = .14). In terms of question overlap, 40% of the most common questions on Google and ChatGPT-4 were the same. When comparing FAQs with numeric responses, 20% of answers were completely overlapping, 30% demonstrated partial overlap, and the remaining 50% did not demonstrate any overlap. All sources used by ChatGPT-4 to answer these FAQs were academic, while only 20% of sources used by Google were academic (P = .0007). The remaining Google sources included social media (40%), medical practices (20%), single-surgeon websites (10%), and commercial websites (10%). The mean (+/- standard deviation) accuracy for answers given by ChatGPT-4 was significantly greater compared with Google for the top 10 FAQs (1.9 +/- 0.2 vs 1.2 +/- 0.6; P = .001) and top 10 questions with numeric answers (1.8 +/- 0.4 vs 1 +/- 0.8; P = .013). CONCLUSION: ChatGPT-4 is capable of providing responses with clinically relevant content concerning UCL injuries and reconstruction. ChatGPT-4 utilized a greater proportion of academic websites to provide responses to FAQs representative of patient inquiries compared with Google Web Search and provided significantly more accurate answers. Moving forward, ChatGPT has the potential to be used as a clinical adjunct when answering queries about UCL injuries and reconstruction, but further validation is warranted before integrated or autonomous use in clinical settings.
| Varady NH; Lu AZ; Mazzucco M; Dines JS; Altchek DW; Williams RJ 3rd; Kunze KN | 32 |
| 37609022 | ChatGPT-4: Transforming Medical Education and Addressing Clinical Exposure Challenges in the Post-pandemic Era. | 2023 | Indian journal of orthopaedics | | | |
BACKGROUND: The COVID-19 pandemic has affected medical education, constraining clinical exposure and posing unprecedented challenges for students and junior doctors. This research explores the potential of artificial intelligence (AI), specifically the ChatGPT-4 language model, to transform medical education and address the deficiencies in clinical exposure during the post-pandemic era. RESEARCH QUESTIONS/PURPOSE: What is the potential of AI large language models in delivering safe and coherent medical advice to junior doctors for clinical orthopaedic scenarios? PATIENTS AND METHODS: A series of diverse orthopaedic questions was presented to ChatGPT-4, from general medicine to highly specialised fields. The questions were based on a variety of common orthopaedic presentations including neck of femur fracture, compartment syndrome, pulmonary embolism, and a motor vehicle accident. A validated questionnaire (Likert Scale) was implemented to evaluate the answers produced by ChatGPT-4. RESULTS: Our results indicate that ChatGPT-4 exhibits exceptional proficiency in delivering accurate and coherent medical advice. Its intuitive interface, accessibility, and sophisticated algorithm render it an ideal supplementary tool for medical students and junior doctors. Despite certain limitations, such as its inability to fully address highly specialised areas, this study highlights the potential of AI and ChatGPT-4 to revolutionise medical education and fill the clinical exposure void generated by the pandemic. Future research should concentrate on the practical application of ChatGPT-4 in real-world medical environments and its integration with other emerging technologies to optimise its influence on the education and training of healthcare professionals. CONCLUSIONS: ChatGPT-4's integration into orthopaedic education and practice can mitigate pandemic-related experience gaps, promoting self-directed, personalised learning and decision-making support for interns and residents. Future advancements may address limitations to enhance healthcare professionals' learning and expertise. LEVEL OF EVIDENCE: Level III evidence-observational study.
| Lower K; Seth I; Lim B; Seth N | 32 |
| 39923067 | AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination. | 2025 | BMC medical education | | | |
BACKGROUND: The creation of high-quality multiple-choice questions (MCQs) is essential for medical education assessments but is resource-intensive and time-consuming when done by human experts. Large language models (LLMs) like ChatGPT-4o offer a promising alternative, but their efficacy remains unclear, particularly in high-stakes exams. OBJECTIVE: This study aimed to evaluate the quality and psychometric properties of ChatGPT-4o-generated MCQs compared to human-created MCQs in a high-stakes medical licensing exam. METHODS: A prospective cohort study was conducted among medical doctors preparing for the Primary Examination on Emergency Medicine (PEEM) organised by the Hong Kong College of Emergency Medicine in August 2024. Participants attempted two sets of 100 MCQs-one AI-generated and one human-generated. Expert reviewers assessed MCQs for factual correctness, relevance, difficulty, alignment with Bloom's taxonomy (remember, understand, apply and analyse), and item writing flaws. Psychometric analyses were performed, including difficulty and discrimination indices and KR-20 reliability. Candidate performance and time efficiency were also evaluated. RESULTS: Among 24 participants, AI-generated MCQs were easier (mean difficulty index = 0.78 +/- 0.22 vs. 0.69 +/- 0.23, p < 0.01) but showed similar discrimination indices to human MCQs (mean = 0.22 +/- 0.23 vs. 0.26 +/- 0.26). Agreement was moderate (ICC = 0.62, p = 0.01, 95% CI: 0.12-0.84). Expert reviews identified more factual inaccuracies (6% vs. 4%), irrelevance (6% vs. 0%), and inappropriate difficulty levels (14% vs. 1%) in AI MCQs. AI questions primarily tested lower-order cognitive skills, while human MCQs better assessed higher-order skills (chi(2) = 14.27, p = 0.003). AI significantly reduced time spent on question generation (24.5 vs. 96 person-hours). CONCLUSION: ChatGPT-4o demonstrates the potential for efficiently generating MCQs but lacks the depth needed for complex assessments. Human review remains essential to ensure quality. Combining AI efficiency with expert oversight could optimise question creation for high-stakes exams, offering a scalable model for medical education that balances time efficiency and content quality.
| Law AK; So J; Lui CT; Choi YF; Cheung KH; Kei-Ching Hung K; Graham CA | 21 |
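The psychometric terms above (difficulty index, discrimination index, KR-20 reliability) reduce to simple calculations on a candidates-by-items matrix of 0/1 responses. A hedged sketch on random demo data, using an upper/lower 27% split for the discrimination index:

```python
# Item statistics from a binary response matrix (rows = candidates, cols = MCQs).
# The matrix is random demo data, not the study's responses.
import numpy as np

rng = np.random.default_rng(1)
responses = (rng.random((24, 100)) < 0.75).astype(int)   # 24 candidates, 100 items

difficulty = responses.mean(axis=0)        # proportion answering each item correctly

totals = responses.sum(axis=1)
order = np.argsort(totals)
n_group = max(1, round(0.27 * responses.shape[0]))        # upper/lower 27% groups
low, high = responses[order[:n_group]], responses[order[-n_group:]]
discrimination = high.mean(axis=0) - low.mean(axis=0)

k = responses.shape[1]
p, q = difficulty, 1 - difficulty
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / totals.var())  # KR-20 reliability
print(f"mean difficulty = {difficulty.mean():.2f}, "
      f"mean discrimination = {discrimination.mean():.2f}, KR-20 = {kr20:.2f}")
```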
| 38686550 | Exploring the Performance of ChatGPT Versions 3.5, 4, and 4 With Vision in the Chilean Medical Licensing Examination: Observational Study. | 2024 | JMIR medical education | | | |
BACKGROUND: The deployment of OpenAI's ChatGPT-3.5 and its subsequent versions, ChatGPT-4 and ChatGPT-4 With Vision (4V; also known as "GPT-4 Turbo With Vision"), has notably influenced the medical field. Having demonstrated remarkable performance in medical examinations globally, these models show potential for educational applications. However, their effectiveness in non-English contexts, particularly in Chile's medical licensing examinations-a critical step for medical practitioners in Chile-is less explored. This gap highlights the need to evaluate ChatGPT's adaptability to diverse linguistic and cultural contexts. OBJECTIVE: This study aims to evaluate the performance of ChatGPT versions 3.5, 4, and 4V in the EUNACOM (Examen Unico Nacional de Conocimientos de Medicina), a major medical examination in Chile. METHODS: Three official practice drills (540 questions) from the University of Chile, mirroring the EUNACOM's structure and difficulty, were used to test ChatGPT versions 3.5, 4, and 4V. The 3 ChatGPT versions were provided 3 attempts for each drill. Responses to questions during each attempt were systematically categorized and analyzed to assess their accuracy rate. RESULTS: All versions of ChatGPT passed the EUNACOM drills. Specifically, versions 4 and 4V outperformed version 3.5, achieving average accuracy rates of 79.32% and 78.83%, respectively, compared to 57.53% for version 3.5 (P<.001). Version 4V, however, did not outperform version 4 (P=.73), despite the additional visual capabilities. We also evaluated ChatGPT's performance in different medical areas of the EUNACOM and found that versions 4 and 4V consistently outperformed version 3.5. Across the different medical areas, version 3.5 displayed the highest accuracy in psychiatry (69.84%), while versions 4 and 4V achieved the highest accuracy in surgery (90.00% and 86.11%, respectively). Versions 3.5 and 4 had the lowest performance in internal medicine (52.74% and 75.62%, respectively), while version 4V had the lowest performance in public health (74.07%). CONCLUSIONS: This study reveals ChatGPT's ability to pass the EUNACOM, with distinct proficiencies across versions 3.5, 4, and 4V. Notably, advancements in artificial intelligence (AI) have not significantly led to enhancements in performance on image-based questions. The variations in proficiency across medical fields suggest the need for more nuanced AI training. Additionally, the study underscores the importance of exploring innovative approaches to using AI to augment human cognition and enhance the learning process. Such advancements have the potential to significantly influence medical education, fostering not only knowledge acquisition but also the development of critical thinking and problem-solving skills among health care professionals.
| Rojas M; Rojas M; Burgess V; Toro-Perez J; Salehi S | 21 |
| 37812468 | ChatGPT-Generated Differential Diagnosis Lists for Complex Case-Derived Clinical Vignettes: Diagnostic Accuracy Evaluation. | 2023 | JMIR medical informatics | | | |
BACKGROUND: The diagnostic accuracy of differential diagnoses generated by artificial intelligence chatbots, including ChatGPT models, for complex clinical vignettes derived from general internal medicine (GIM) department case reports is unknown. OBJECTIVE: This study aims to evaluate the accuracy of the differential diagnosis lists generated by both third-generation ChatGPT (ChatGPT-3.5) and fourth-generation ChatGPT (ChatGPT-4) by using case vignettes from case reports published by the Department of GIM of Dokkyo Medical University Hospital, Japan. METHODS: We searched PubMed for case reports. Upon identification, physicians selected diagnostic cases, determined the final diagnosis, and displayed them into clinical vignettes. Physicians typed the determined text with the clinical vignettes in the ChatGPT-3.5 and ChatGPT-4 prompts to generate the top 10 differential diagnoses. The ChatGPT models were not specially trained or further reinforced for this task. Three GIM physicians from other medical institutions created differential diagnosis lists by reading the same clinical vignettes. We measured the rate of correct diagnosis within the top 10 differential diagnosis lists, top 5 differential diagnosis lists, and the top diagnosis. RESULTS: In total, 52 case reports were analyzed. The rates of correct diagnosis by ChatGPT-4 within the top 10 differential diagnosis lists, top 5 differential diagnosis lists, and top diagnosis were 83% (43/52), 81% (42/52), and 60% (31/52), respectively. The rates of correct diagnosis by ChatGPT-3.5 within the top 10 differential diagnosis lists, top 5 differential diagnosis lists, and top diagnosis were 73% (38/52), 65% (34/52), and 42% (22/52), respectively. The rates of correct diagnosis by ChatGPT-4 were comparable to those by physicians within the top 10 (43/52, 83% vs 39/52, 75%, respectively; P=.47) and within the top 5 (42/52, 81% vs 35/52, 67%, respectively; P=.18) differential diagnosis lists and top diagnosis (31/52, 60% vs 26/52, 50%, respectively; P=.43) although the difference was not significant. The ChatGPT models' diagnostic accuracy did not significantly vary based on open access status or the publication date (before 2011 vs 2022). CONCLUSIONS: This study demonstrates the potential diagnostic accuracy of differential diagnosis lists generated using ChatGPT-3.5 and ChatGPT-4 for complex clinical vignettes from case reports published by the GIM department. The rate of correct diagnoses within the top 10 and top 5 differential diagnosis lists generated by ChatGPT-4 exceeds 80%. Although derived from a limited data set of case reports from a single department, our findings highlight the potential utility of ChatGPT-4 as a supplementary tool for physicians, particularly for those affiliated with the GIM department. Further investigations should explore the diagnostic accuracy of ChatGPT by using distinct case materials beyond its training data. Such efforts will provide a comprehensive insight into the role of artificial intelligence in enhancing clinical decision-making.
| Hirosawa T; Kawamura R; Harada Y; Mizuta K; Tokumasu K; Kaji Y; Suzuki T; Shimizu T | 10 |
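The headline metric above, the rate of correct diagnoses within the top 1, 5, or 10 differentials, is a top-k accuracy. A minimal sketch with placeholder cases:

```python
# Top-k accuracy over differential diagnosis lists (placeholder cases, not study data).
def top_k_accuracy(cases, k):
    """cases: list of (final_diagnosis, ranked_differential_list) tuples."""
    hits = sum(1 for final, ranked in cases if final in ranked[:k])
    return hits / len(cases)

cases = [
    ("giant cell arteritis", ["polymyalgia rheumatica", "giant cell arteritis", "tension headache"]),
    ("pulmonary embolism", ["pneumonia", "acute coronary syndrome", "pulmonary embolism"]),
    ("adrenal insufficiency", ["sepsis", "hypothyroidism", "depression"]),
]

for k in (1, 5, 10):
    print(f"top-{k} accuracy: {top_k_accuracy(cases, k):.0%}")
```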
| 39319150 | Assessing the ChatGPT aptitude: A competent and effective Dermatology doctor? | 2024 | Heliyon | | | |
BACKGROUND: The efficacy and adeptness of ChatGPT-3.5 and ChatGPT-4.0 in the precise diagnosis and management of conditions like atopic dermatitis (AD) and autoimmune blistering skin diseases (AIBD) remain to be elucidated. This study therefore examined the accuracy and effectiveness of ChatGPT responses related to the understanding, therapies, and specific cases of these two conditions. METHOD: First, the responses provided by the ChatGPT models to a set of 50 questionnaires were evaluated by five distinct dermatologists, with a third-party reviewer adjudicating any disagreements. The comparative analysis weighed the evaluative efficacy of both ChatGPT-3.5 and ChatGPT-4.0 against the diagnostic abilities exhibited by three distinct cohorts of qualified clinical professionals. An examination was then conducted to assess the diagnostic proficiency of ChatGPT-3.5 and ChatGPT-4.0 in diagnosing specific instances of autoimmune blistering skin diseases. RESULTS: In generating responses related to fundamental knowledge about AD, both versions of ChatGPT, despite their lack of specialized training on medical databases, exhibited a commendable capacity to yield answers showing a substantial degree of concurrence with evidence-based medical information, and ChatGPT-4.0 performed beyond ChatGPT-3.5. However, it is crucial to emphasize that ChatGPT-4.0 did not show the ability to offer answers surpassing those provided by associate senior and senior medical professionals. In the assessment designed to determine the models' proficiency in recognizing particular types of AIBD, both ChatGPT-4.0 and ChatGPT-3.5 proved inadequate at providing responses that were both precise and accurate for each individual presentation of this skin condition. CONCLUSION: Both ChatGPT-3.5 and ChatGPT-4.0 are satisfactory for addressing fundamental inquiries related to atopic dermatitis; however, they prove insufficient for diagnosing AIBD. ChatGPT still has a considerable journey ahead before achieving utility in the professional medical domain.
| Lian C; Yuan X; Chokkakula S; Wang G; Song B; Wang Z; Fan G; Yin C | 0-1 |
| 39241674 | Accuracy assessment of ChatGPT responses to frequently asked questions regarding anterior cruciate ligament surgery. | 2024 | The Knee | | | |
BACKGROUND: The emergence of artificial intelligence (AI) has allowed users to have access to large sources of information in a chat-like manner. We therefore sought to evaluate the accuracy of ChatGPT-4's responses to the 10 most frequently asked patient questions (FAQs) regarding anterior cruciate ligament (ACL) surgery. METHODS: A list of the top 10 FAQs pertaining to ACL surgery was created after conducting a search through all Sports Medicine Fellowship Institutions listed on the Arthroscopy Association of North America (AANA) and American Orthopaedic Society of Sports Medicine (AOSSM) websites. A Likert scale was used to grade response accuracy by two sports medicine fellowship-trained surgeons. Cohen's kappa was used to assess inter-rater agreement. Reproducibility of the responses over time was also assessed. RESULTS: Five of the 10 responses received a 'completely accurate' grade from both fellowship-trained surgeons, with three additional replies receiving a 'completely accurate' status from at least one. Moreover, inter-rater reliability accuracy assessment revealed a moderate agreement between fellowship-trained attending physicians (weighted kappa = 0.57, 95% confidence interval 0.15-0.99). Additionally, 80% of the responses were reproducible over time. CONCLUSION: ChatGPT can be considered an accurate additional tool to answer general patient questions regarding ACL surgery. Nonetheless, patient-surgeon interaction should not be deferred and must continue to be the driving force for information retrieval. Thus, the general recommendation is to address any questions in the presence of a qualified specialist.
| Villarreal-Espinosa JB; Berreta RS; Allende F; Garcia JR; Ayala S; Familiari F; Chahla J | 32 |
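Inter-rater agreement on Likert-style accuracy grades, as described above, is commonly summarized with a weighted Cohen's kappa. A small sketch with fabricated grades on a 0-2 scale:

```python
# Weighted Cohen's kappa between two raters' accuracy grades (fabricated grades).
from sklearn.metrics import cohen_kappa_score

rater_1 = [2, 2, 1, 2, 0, 2, 1, 2, 2, 1]
rater_2 = [2, 1, 1, 2, 1, 2, 1, 2, 2, 2]

kappa = cohen_kappa_score(rater_1, rater_2, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")
```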
| 37424120 | Pediatrics in Artificial Intelligence Era: A Systematic Review on Challenges, Opportunities, and Explainability. | 2023 | Indian pediatrics | | | |
BACKGROUND: The emergence of artificial intelligence (AI) tools such as ChatGPT and Bard is disrupting a broad swathe of fields, including medicine. In pediatric medicine, AI is also increasingly being used across multiple subspecialties. However, the practical application of AI still faces a number of key challenges. Consequently, there is a requirement for a concise overview of the roles of AI across the multiple domains of pediatric medicine, which the current study seeks to address. AIM: To systematically assess the challenges, opportunities, and explainability of AI in pediatric medicine. METHODOLOGY: A systematic search was carried out on peer-reviewed databases, PubMed Central, Europe PubMed Central, and grey literature using search terms related to machine learning (ML) and AI for the years 2016 to 2022 in the English language. A total of 210 articles were retrieved that were screened with PRISMA for abstract, year, language, context, and proximal relevance to research aims. A thematic analysis was carried out to extract findings from the included studies. RESULTS: Twenty articles were selected for data abstraction and analysis, with three consistent themes emerging from these articles. In particular, eleven articles address the current state-of-the-art application of AI in diagnosing and predicting health conditions such as behavioral and mental health, cancer, syndromic and metabolic diseases. Five articles highlight the specific challenges of AI deployment in pediatric medicines: data security, handling, authentication, and validation. Four articles set out future opportunities for AI to be adapted: the incorporation of Big Data, cloud computing, precision medicine, and clinical decision support systems. These studies collectively critically evaluate the potential of AI in overcoming current barriers to adoption. CONCLUSION: AI is proving disruptive within pediatric medicine and is presently associated with challenges, opportunities, and the need for explainability. AI should be viewed as a tool to enhance and support clinical decision-making rather than a substitute for human judgement and expertise. Future research should consequently focus on obtaining comprehensive data to ensure the generalizability of research findings.
| Balla Y; Tirunagari S; Windridge D | 10 |
| 38116306 | ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students. | 2023 | Journal of multidisciplinary healthcare | | | |
BACKGROUND: The emergence of Chat-Generative Pre-trained Transformer (ChatGPT) by OpenAI has revolutionized AI technology, demonstrating significant potential in healthcare and pharmaceutical education, yet its real-world applicability in clinical training warrants further investigation. METHODS: A cross-sectional study was conducted between April and May 2023 to assess PharmD students' perceptions, concerns, and experiences regarding the integration of ChatGPT into clinical pharmacy education. The study utilized a convenient sampling method through online platforms and involved a questionnaire with sections on demographics, perceived benefits, concerns, and experience with ChatGPT. Statistical analysis was performed using SPSS, including descriptive and inferential analyses. RESULTS: The findings of the study involving 211 PharmD students revealed that the majority of participants were male (77.3%), and had prior experience with artificial intelligence (68.2%). Over two-thirds were aware of ChatGPT. Most students (n= 139, 65.9%) perceived potential benefits in using ChatGPT for various clinical tasks, with concerns including over-reliance, accuracy, and ethical considerations. Adoption of ChatGPT in clinical training varied, with some students not using it at all, while others utilized it for tasks like evaluating drug-drug interactions and developing care plans. Previous users tended to have higher perceived benefits and lower concerns, but the differences were not statistically significant. CONCLUSION: Utilizing ChatGPT in clinical training offers opportunities, but students' lack of trust in it for clinical decisions highlights the need for collaborative human-ChatGPT decision-making. It should complement healthcare professionals' expertise and be used strategically to compensate for human limitations. Further research is essential to optimize ChatGPT's effective integration.
| Zawiah M; Al-Ashwal FY; Gharaibeh L; Abu Farha R; Alzoubi KH; Abu Hammour K; Qasim QA; Abrah F | 10 |
| 39515421 | Evaluating a generative artificial intelligence accuracy in providing medication instructions from smartphone images. | 2025 | Journal of the American Pharmacists Association : JAPhA | | | |
BACKGROUND: The Food and Drug Administration mandates patient labeling materials like the Medication Guide (MG) and Instructions for Use (IFU) to support appropriate medication use. However, challenges such as low health literacy and difficulties navigating these materials may lead to incorrect medication usage, resulting in therapy failure or adverse outcomes. The rise of generative AI presents an opportunity to provide scalable, personalized patient education through image recognition and text generation. OBJECTIVE: This study aimed to evaluate the accuracy and safety of medication instructions generated by ChatGPT based on user-provided drug images, compared to the manufacturer's standard instructions. METHODS: Images of 12 medications requiring multiple steps for administration were uploaded to ChatGPT's image recognition function. ChatGPT's responses were compared to the official IFU and MG using text classifiers, Count Vectorization (CountVec), and Term Frequency-Inverse Document Frequency (TF-IDF). The clinical accuracy was further evaluated by independent pharmacists to determine if ChatGPT responses were valid for patient instruction. RESULTS: ChatGPT correctly identified all medications and generated patient instructions. CountVec outperformed TF-IDF in text similarity analysis, with an average similarity score of 76%. However, clinical evaluation revealed significant gaps in the instructions, particularly for complex administration routes, where ChatGPT's guidance lacked essential details, leading to lower clinical accuracy scores. CONCLUSION: While ChatGPT shows promise in generating patient-friendly medication instructions, its effectiveness varies based on the complexity of the medication. The findings underscore the need for further refinement and clinical oversight to ensure the safety and accuracy of AI-generated medical guidance, particularly for medications with complex administration processes.
| Yassin Y; Nguyen T; Panchal K; Getchell K; Aungst T | 10 |
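The text-similarity step described above, comparing a ChatGPT-generated instruction against official IFU/MG wording with CountVectorizer and TF-IDF features, can be sketched as a cosine-similarity comparison. The two snippets below are invented examples, not actual labeling text.

```python
# Compare a generated instruction with reference text using bag-of-words and
# TF-IDF representations plus cosine similarity (invented snippets).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chatgpt_text = "Remove the cap, prime the inhaler, exhale fully, then inhale the dose deeply."
ifu_text = "Before first use, prime the inhaler. Breathe out fully, then breathe in the dose deeply and hold."

for name, vectorizer in [("CountVec", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    matrix = vectorizer.fit_transform([chatgpt_text, ifu_text])
    score = cosine_similarity(matrix[0], matrix[1])[0, 0]
    print(f"{name} similarity: {score:.2f}")
```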
| 39322837 | Advancement of Generative Pre-trained Transformer Chatbots in Answering Clinical Questions in the Practical Rhinoplasty Guideline. | 2025 | Aesthetic plastic surgery | | | |
BACKGROUND: The Generative Pre-trained Transformer (GPT) series, which includes ChatGPT, is an artificial large language model that provides human-like text dialogue. This study aimed to evaluate the performance of artificial intelligence chatbots in answering clinical questions based on practical rhinoplasty guidelines. METHODS: Clinical questions (CQs) developed from the guidelines were used as question sources. For each question, we asked GPT-4 and GPT-3.5 (ChatGPT), developed by OpenAI, to provide answers for the CQs, Policy Level, Aggregate Evidence Quality, Level of Confidence in Evidence, and References. We compared the performance of the two types of artificial intelligence (AI) chatbots. RESULTS: A total of 10 questions were included in the final analysis, and the AI chatbots correctly answered 90.0% of these. GPT-4 demonstrated a lower accuracy rate than GPT-3.5 in answering CQs, although without statistically significant difference (86.0% vs. 94.0%; p = 0.05), whereas GPT-4 showed significantly higher accuracy for the level of confidence in Evidence than GPT-3.5 (52.0% vs. 28.0%; p < 0.01). No statistical differences were observed in Policy Level, Aggregate Evidence Quality, and Reference Match. In addition, GPT-4 rated significantly higher in presenting existing references than GPT-3.5 (36.9% vs. 24.1%; p = 0.01). CONCLUSIONS: The overall performance of GPT-4 was similar to that of GPT-3.5. However, GPT-4 provided existing references at a higher rate than GPT-3.5. GPT-4 has the potential to provide a more accurate reference in professional fields, including rhinoplasty. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
| Shiraishi M; Tsuruda S; Tomioka Y; Chang J; Hori A; Ishii S; Fujinaka R; Ando T; Ohba J; Okazaki M | 32 |
| 40116759 | Performance of Plug-In Augmented ChatGPT and Its Ability to Quantify Uncertainty: Simulation Study on the German Medical Board Examination. | 2025 | JMIR medical education | | | |
BACKGROUND: The GPT-4 is a large language model (LLM) trained and fine-tuned on an extensive dataset. After the public release of its predecessor in November 2022, the use of LLMs has seen a significant spike in interest, and a multitude of potential use cases have been proposed. In parallel, however, important limitations have been outlined. Particularly, current LLMs encounter limitations, especially in symbolic representation and accessing contemporary data. The recent version of GPT-4, alongside newly released plugin features, has been introduced to mitigate some of these limitations. OBJECTIVE: Before this background, this work aims to investigate the performance of GPT-3.5, GPT-4, GPT-4 with plugins, and GPT-4 with plugins using pretranslated English text on the German medical board examination. Recognizing the critical importance of quantifying uncertainty for LLM applications in medicine, we furthermore assess this ability and develop a new metric termed "confidence accuracy" to evaluate it. METHODS: We used GPT-3.5, GPT-4, GPT-4 with plugins, and GPT-4 with plugins and translation to answer questions from the German medical board examination. Additionally, we conducted an analysis to assess how the models justify their answers, the accuracy of their responses, and the error structure of their answers. Bootstrapping and CIs were used to evaluate the statistical significance of our findings. RESULTS: This study demonstrated that available GPT models, as LLM examples, exceeded the minimum competency threshold established by the German medical board for medical students to obtain board certification to practice medicine. Moreover, the models could assess the uncertainty in their responses, albeit exhibiting overconfidence. Additionally, this work unraveled certain justification and reasoning structures that emerge when GPT generates answers. CONCLUSIONS: The high performance of GPTs in answering medical questions positions it well for applications in academia and, potentially, clinical practice. Its capability to quantify uncertainty in answers suggests it could be a valuable artificial intelligence agent within the clinical decision-making loop. Nevertheless, significant challenges must be addressed before artificial intelligence agents can be robustly and safely implemented in the medical domain.
|
Madrid J; Diehl P; Selig M; Rolauffs B; Hans FP; Busch HJ; Scheef T; Benning L
| 21
|
|||
38037784
|
Exploring artificial intelligence in the Nigerian medical educational space: An online cross-sectional study of perceptions, risks and benefits among students and lecturers from ten universities.
| 2,023
|
The Nigerian postgraduate medical journal
|
BACKGROUND: The impact of artificial intelligence (AI) has been compared to that of the Internet and printing, evoking both apprehension and anticipation in an uncertain world. OBJECTIVE: This study aimed to explore the perceptions of medical students and faculty members from ten universities across Nigeria regarding AI. METHODS: Using Google Forms and WhatsApp, a cross-sectional online survey was administered to clinical year medical students and their lecturers from ten medical schools representing all the six geopolitical zones of Nigeria. RESULTS: The survey received 1003 responses, of which 708 (70.7%) were from students and 294 (29.3%) were from lecturers. Both groups displayed an average level of knowledge, with students (Median:4, range -5 to 12) significantly outperforming lecturers (Median:3, range -5 to 15). Social media (61.2%) was the most common form of first contact with AI. Participants demonstrated a favourable attitude towards AI, with a median score of 6.8 out of 10. Grammar checkers (62.3%) were the most commonly reported AI tool used, while ChatGPT (43.6%) was the most frequently mentioned dedicated AI tool. Students were significantly more likely than lecturers to have used AI tools in the past but <5% of both groups had received prior AI training. Excitement about the potential of AI slightly outweighed concerns regarding future risks. A significantly higher proportion of students compared to lecturers believed that AI could dehumanise health care (70.6% vs. 60.8%), render physicians redundant (57.6% vs. 34.7%), diminish physicians' skills (79.3% vs. 71.3%) and ultimately harm patients (28.6% vs. 20.6%). CONCLUSION: The simultaneous fascination and apprehension with AI observed among both lecturers and students in our study mirrors the global trend. This finding was particularly evident in students who, despite possessing greater knowledge of AI compared to their lecturers, did not exhibit a corresponding reduction in their fear of AI.
|
Oluwadiya KS; Adeoti AO; Agodirin SO; Nottidge TE; Usman MI; Gali MB; Onyemaechi NO; Ramat AM; Adedire A; Zakari LY
| 10
|
|||
39883487
|
Using Large Language Models to Detect and Understand Drug Discontinuation Events in Web-Based Forums: Development and Validation Study.
| 2,025
|
Journal of medical Internet research
|
BACKGROUND: The implementation of large language models (LLMs), such as BART (Bidirectional and Auto-Regressive Transformers) and GPT-4, has revolutionized the extraction of insights from unstructured text. These advancements have expanded into health care, allowing analysis of social media for public health insights. However, the detection of drug discontinuation events (DDEs) remains underexplored. Identifying DDEs is crucial for understanding medication adherence and patient outcomes. OBJECTIVE: The aim of this study is to provide a flexible framework for investigating various clinical research questions in data-sparse environments. We provide an example of the utility of this framework by identifying DDEs and their root causes in an open-source web-based forum, MedHelp, and by releasing the first open-source DDE datasets to aid further research in this domain. METHODS: We used several LLMs, including GPT-4 Turbo, GPT-4o, DeBERTa (Decoding-Enhanced Bidirectional Encoder Representations from Transformer with Disentangled Attention), and BART, among others, to detect and determine the root causes of DDEs in user comments posted on MedHelp. Our study design included the use of zero-shot classification, which allows these models to make predictions without task-specific training. We split user comments into sentences and applied different classification strategies to assess the performance of these models in identifying DDEs and their root causes. RESULTS: Among the selected models, GPT-4o performed the best at determining the root causes of DDEs, predicting only 12.9% of root causes incorrectly (hamming loss). Among the open-source models tested, BART demonstrated the best performance in detecting DDEs, achieving an F(1)-score of 0.86, a false positive rate of 2.8%, and a false negative rate of 6.5%, all without any fine-tuning. The dataset included 10.7% (107/1000) DDEs, emphasizing the models' robustness in an imbalanced data context. CONCLUSIONS: This study demonstrated the effectiveness of open- and closed-source LLMs, such as GPT-4o and BART, for detecting DDEs and their root causes from publicly accessible data through zero-shot classification. The robust and scalable framework we propose can aid researchers in addressing data-sparse clinical research questions. The launch of open-access DDE datasets has the potential to stimulate further research and novel discoveries in this field.
|
Trevena W; Zhong X; Alvarado M; Semenov A; Oktay A; Devlin D; Gohil AY; Chittimouju SH
| 10
|
|||
39913914
|
ChatGPT-4 Performance on German Continuing Medical Education-Friend or Foe (Trick or Treat)? Protocol for a Randomized Controlled Trial.
| 2,025
|
JMIR research protocols
|
BACKGROUND: The increasing development and spread of artificial and assistive intelligence is opening up new areas of application not only in applied medicine but also in related fields such as continuing medical education (CME), which is part of the mandatory training program for medical doctors in Germany. This study aimed to determine whether medical laypersons can successfully conduct training courses specifically for physicians with the help of a large language model (LLM) such as ChatGPT-4. This study aims to qualitatively and quantitatively investigate the impact of using artificial intelligence (AI; specifically ChatGPT) on the acquisition of credit points in German postgraduate medical education. OBJECTIVE: Using this approach, we wanted to test further possible applications of AI in the postgraduate medical education setting and obtain results for practical use. Depending on the results, the potential influence of LLMs such as ChatGPT-4 on CME will be discussed, for example, as part of a SWOT (strengths, weaknesses, opportunities, threats) analysis. METHODS: We designed a randomized controlled trial, in which adult high school students attempt to solve CME tests across six medical specialties in three study arms in total with 18 CME training courses per study arm under different interventional conditions with varying amounts of permitted use of ChatGPT-4. Sample size calculation was performed including guess probability (20% correct answers, SD=40%; confidence level of 1-alpha=.95/alpha=.05; test power of 1-beta=.95; P<.05). The study was registered at open scientific framework. RESULTS: As of October 2024, the acquisition of data and students to participate in the trial is ongoing. Upon analysis of our acquired data, we predict our findings to be ready for publication as soon as early 2025. CONCLUSIONS: We aim to prove that the advances in AI, especially LLMs such as ChatGPT-4 have considerable effects on medical laypersons' ability to successfully pass CME tests. The implications that this holds on how the concept of continuous medical education requires reevaluation are yet to be contemplated. TRIAL REGISTRATION: OSF Registries 10.17605/OSF.IO/MZNUF; https://osf.io/mznuf. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/63887.
|
Burisch C; Bellary A; Breuckmann F; Ehlers J; Thal SC; Sellmann T; Godde D
| 21
|
|||
39966152
|
Assessing the Informational Value of Large Language Models Responses in Aesthetic Surgery: A Comparative Analysis with Expert Opinions.
| 2,025
|
Aesthetic plastic surgery
|
BACKGROUND: The increasing popularity of Large Language Models (LLMs) in various healthcare settings has raised questions about their ability to provide accurate and reliable information. This study aimed to evaluate the informational value of Large Language Models responses in aesthetic plastic surgery by comparing them with the opinions of experienced surgeons. METHODS: Thirty patients undergoing three common aesthetic procedures-dermal fillers, botulinum toxin injections, and aesthetic blepharoplasty-were selected. The most frequently asked questions by these patients were recorded and submitted to ChatGpt 3.5 and Google Bard v.1.53. The answers provided by the Large Language Models were then evaluated by 13 experienced aesthetic plastic surgeons on a Likert scale for accessibility, accuracy, and overall usefulness. RESULTS: The overall ratings of the chatbot responses were moderate, with surgeons generally finding them to be accurate and clear. However, the lack of transparency regarding the sources of the information provided by the LLMs made it impossible to fully evaluate their credibility. CONCLUSIONS: While chatbots have the potential to provide patients with convenient access to information about aesthetic plastic surgery, their current limitations in terms of transparency and comprehensiveness warrant caution in their use as a primary source of information. Further research is needed to develop more robust and reliable LLMs for healthcare applications. LEVEL OF EVIDENCE I: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
|
Grippaudo FR; Jeri M; Pezzella M; Orlando M; Ribuffo D
| 32
|
|||
39752214
|
ChatGPT's Attitude, Knowledge, and Clinical Application in Geriatrics Practice and Education: Exploratory Observational Study.
| 2,025
|
JMIR formative research
|
BACKGROUND: The increasing use of ChatGPT in clinical practice and medical education necessitates the evaluation of its reliability, particularly in geriatrics. OBJECTIVE: This study aimed to evaluate ChatGPT's trustworthiness in geriatrics through 3 distinct approaches: evaluating ChatGPT's geriatrics attitude, knowledge, and clinical application with 2 vignettes of geriatric syndromes (polypharmacy and falls). METHODS: We used the validated University of California, Los Angeles, geriatrics attitude and knowledge instruments to evaluate ChatGPT's geriatrics attitude and knowledge and compare its performance with that of medical students, residents, and geriatrics fellows from reported results in the literature. We also evaluated ChatGPT's application to 2 vignettes of geriatric syndromes (polypharmacy and falls). RESULTS: The mean total score on geriatrics attitude of ChatGPT was significantly lower than that of trainees (medical students, internal medicine residents, and geriatric medicine fellows; 2.7 vs 3.7 on a scale from 1-5; 1=strongly disagree; 5=strongly agree). The mean subscore on positive geriatrics attitude of ChatGPT was higher than that of the trainees (medical students, internal medicine residents, and neurologists; 4.1 vs 3.7 on a scale from 1 to 5 where a higher score means a more positive attitude toward older adults). The mean subscore on negative geriatrics attitude of ChatGPT was lower than that of the trainees and neurologists (1.8 vs 2.8 on a scale from 1 to 5 where a lower subscore means a less negative attitude toward aging). On the University of California, Los Angeles geriatrics knowledge test, ChatGPT outperformed all medical students, internal medicine residents, and geriatric medicine fellows from validated studies (14.7 vs 11.3 with a score range of -18 to +18 where +18 means that all questions were answered correctly). Regarding the polypharmacy vignette, ChatGPT not only demonstrated solid knowledge of potentially inappropriate medications but also accurately identified 7 common potentially inappropriate medications and 5 drug-drug and 3 drug-disease interactions. However, ChatGPT missed 5 drug-disease and 1 drug-drug interaction and produced 2 hallucinations. Regarding the fall vignette, ChatGPT answered 3 of 5 pretests correctly and 2 of 5 pretests partially correctly, identified 6 categories of fall risks, followed fall guidelines correctly, listed 6 key physical examinations, and recommended 6 categories of fall prevention methods. CONCLUSIONS: This study suggests that ChatGPT can be a valuable supplemental tool in geriatrics, offering reliable information with less age bias, robust geriatrics knowledge, and comprehensive recommendations for managing 2 common geriatric syndromes (polypharmacy and falls) that are consistent with evidence from guidelines, systematic reviews, and other types of studies. ChatGPT's potential as an educational and clinical resource could significantly benefit trainees, health care providers, and laypeople. Further research using GPT-4o, larger geriatrics question sets, and more geriatric syndromes is needed to expand and confirm these findings before adopting ChatGPT widely for geriatrics education and practice.
|
Cheng HY
| 43
|
|||
38882956
|
Does the Information Quality of ChatGPT Meet the Requirements of Orthopedics and Trauma Surgery?
| 2,024
|
Cureus
|
BACKGROUND: The integration of artificial intelligence (AI) in medicine, particularly through AI-based language models like ChatGPT, offers a promising avenue for enhancing patient education and healthcare delivery. This study aims to evaluate the quality of medical information provided by Chat Generative Pre-trained Transformer (ChatGPT) regarding common orthopedic and trauma surgical procedures, assess its limitations, and explore its potential as a supplementary source for patient education. METHODS: Using the GPT-3.5-Turbo version of ChatGPT, simulated patient information was generated for 20 orthopedic and trauma surgical procedures. The study utilized standardized information forms as a reference for evaluating ChatGPT's responses. The accuracy and quality of the provided information were assessed using a modified DISCERN instrument, and a global medical assessment was conducted to categorize the information's usefulness and reliability. RESULTS: ChatGPT mentioned an average of 47% of relevant keywords across procedures, with a variance in the mention rate between 30.5% and 68.6%. The average modified DISCERN (mDISCERN) score was 2.4 out of 5, indicating a moderate to low quality of information. None of the ChatGPT-generated fact sheets were rated as "very useful," with 45% deemed "somewhat useful," 35% "not useful," and 20% classified as "dangerous." A positive correlation was found between higher mDISCERN scores and better physician ratings, suggesting that information quality directly impacts perceived utility. CONCLUSION: While AI-based language models like ChatGPT hold significant promise for medical education and patient care, the current quality of information provided in the field of orthopedics and trauma surgery is suboptimal. Further development and refinement of AI sources and algorithms are necessary to improve the accuracy and reliability of medical information. This study underscores the need for ongoing research and development in AI applications in healthcare, emphasizing the critical role of accurate, high-quality information in patient education and informed consent processes.
|
Kasapovic A; Ali T; Babasiz M; Bojko J; Gathen M; Kaczmarczyk R; Roos J
| 32
|
|||
38506920
|
Incorporating ChatGPT in Medical Informatics Education: Mixed Methods Study on Student Perceptions and Experiential Integration Proposals.
| 2,024
|
JMIR medical education
|
BACKGROUND: The integration of artificial intelligence (AI) technologies, such as ChatGPT, in the educational landscape has the potential to enhance the learning experience of medical informatics students and prepare them for using AI in professional settings. The incorporation of AI in classes aims to develop critical thinking by encouraging students to interact with ChatGPT and critically analyze the responses generated by the chatbot. This approach also helps students develop important skills in the field of biomedical and health informatics to enhance their interaction with AI tools. OBJECTIVE: The aim of the study is to explore the perceptions of students regarding the use of ChatGPT as a learning tool in their educational context and provide professors with examples of prompts for incorporating ChatGPT into their teaching and learning activities, thereby enhancing the educational experience for students in medical informatics courses. METHODS: This study used a mixed methods approach to gain insights from students regarding the use of ChatGPT in education. To accomplish this, a structured questionnaire was applied to evaluate students' familiarity with ChatGPT, gauge their perceptions of its use, and understand their attitudes toward its use in academic and learning tasks. Learning outcomes of 2 courses were analyzed to propose ChatGPT's incorporation in master's programs in medicine and medical informatics. RESULTS: The majority of students expressed satisfaction with the use of ChatGPT in education, finding it beneficial for various purposes, including generating academic content, brainstorming ideas, and rewriting text. While some participants raised concerns about potential biases and the need for informed use, the overall perception was positive. Additionally, the study proposed integrating ChatGPT into 2 specific courses in the master's programs in medicine and medical informatics. The incorporation of ChatGPT was envisioned to enhance student learning experiences and assist in project planning, programming code generation, examination preparation, workflow exploration, and technical interview preparation, thus advancing medical informatics education. In medical teaching, it will be used as an assistant for simplifying the explanation of concepts and solving complex problems, as well as for generating clinical narratives and patient simulators. CONCLUSIONS: The study's valuable insights into medical faculty students' perspectives and integration proposals for ChatGPT serve as an informative guide for professors aiming to enhance medical informatics education. The research delves into the potential of ChatGPT, emphasizes the necessity of collaboration in academic environments, identifies subject areas with discernible benefits, and underscores its transformative role in fostering innovative and engaging learning experiences. The envisaged proposals hold promise in empowering future health care professionals to work in the rapidly evolving era of digital health care.
|
Magalhaes Araujo S; Cruz-Correia R
| 10
|
|||
40106227
|
Performance of ChatGPT-4 on Taiwanese Traditional Chinese Medicine Licensing Examinations: Cross-Sectional Study.
| 2,025
|
JMIR medical education
|
BACKGROUND: The integration of artificial intelligence (AI), notably ChatGPT, into medical education, has shown promising results in various medical fields. Nevertheless, its efficacy in traditional Chinese medicine (TCM) examinations remains understudied. OBJECTIVE: This study aims to (1) assess the performance of ChatGPT on the TCM licensing examination in Taiwan and (2) evaluate the model's explainability in answering TCM-related questions to determine its suitability as a TCM learning tool. METHODS: We used the GPT-4 model to respond to 480 questions from the 2022 TCM licensing examination. This study compared the performance of the model against that of licensed TCM doctors using 2 approaches, namely direct answer selection and provision of explanations before answer selection. The accuracy and consistency of AI-generated responses were analyzed. Moreover, a breakdown of question characteristics was performed based on the cognitive level, depth of knowledge, types of questions, vignette style, and polarity of questions. RESULTS: ChatGPT achieved an overall accuracy of 43.9%, which was lower than that of 2 human participants (70% and 78.4%). The analysis did not reveal a significant correlation between the accuracy of the model and the characteristics of the questions. An in-depth examination indicated that errors predominantly resulted from a misunderstanding of TCM concepts (55.3%), emphasizing the limitations of the model with regard to its TCM knowledge base and reasoning capability. CONCLUSIONS: Although ChatGPT shows promise as an educational tool, its current performance on TCM licensing examinations is lacking. This highlights the need for enhancing AI models with specialized TCM training and suggests a cautious approach to utilizing AI for TCM education. Future research should focus on model improvement and the development of tailored educational applications to support TCM learning.
|
Tseng LW; Lu YC; Tseng LC; Chen YC; Chen HY
| 21
|
|||
39760952
|
Artificial Intelligence in Physical Therapy: Evaluating ChatGPT's Role in Clinical Decision Support for Musculoskeletal Care.
| 2,025
|
Annals of biomedical engineering
|
BACKGROUND: The integration of artificial intelligence into medicine has attracted increasing attention in recent years. ChatGPT has emerged as a promising tool for delivering evidence-based recommendations in various clinical domains. However, the application of ChatGPT to physical therapy for musculoskeletal conditions has yet to be investigated. METHODS: Thirty clinical questions related to spinal, lower extremity, and upper extremity conditions were quired to ChatGPT-4. Responses were assessed for accuracy against clinical practice guidelines by two reviewers. Intra- and inter-rater reliability were measured using Fleiss' kappa (k). RESULTS: ChatGPT's responses were consistent with CPG recommendations for 80% of the questions. Performance was highest for upper extremity conditions (100%) and lowest for spinal conditions (60%), with a moderate performance for lower extremity conditions (87%). Intra-rater reliability was good (k = 0.698 and k = 0.631 for the two reviewers), and inter-rater reliability was very good (k = 0.847). CONCLUSION: ChatGPT demonstrates promise as a supplementary decision-making support tool for physical therapy, with good accuracy and reliability in aligning with clinical practice guideline recommendations. Further research is needed to evaluate its performance across broader scenarios and refine its clinical applicability.
|
Hao J; Yao Z; Tang Y; Remis A; Wu K; Yu X
| 32
|
|||
39013110
|
ChatGPT vs Medical Professional: Analyzing Responses to Laboratory Medicine Questions on Social Media.
| 2,024
|
Clinical chemistry
|
BACKGROUND: The integration of ChatGPT, a large language model (LLM) developed by OpenAI, into healthcare has sparked significant interest due to its potential to enhance patient care and medical education. With the increasing trend of patients accessing laboratory results online, there is a pressing need to evaluate the effectiveness of ChatGPT in providing accurate laboratory medicine information. Our study evaluates ChatGPT's effectiveness in addressing patient questions in this area, comparing its performance with that of medical professionals on social media. METHODS: This study sourced patient questions and medical professional responses from Reddit and Quora, comparing them with responses generated by ChatGPT versions 3.5 and 4.0. Experienced laboratory medicine professionals evaluated the responses for quality and preference. Evaluation results were further analyzed using R software. RESULTS: The study analyzed 49 questions, with evaluators reviewing responses from both medical professionals and ChatGPT. ChatGPT's responses were preferred by 75.9% of evaluators and generally received higher ratings for quality. They were noted for their comprehensive and accurate information, whereas responses from medical professionals were valued for their conciseness. The interrater agreement was fair, indicating some subjectivity but a consistent preference for ChatGPT's detailed responses. CONCLUSIONS: ChatGPT demonstrates potential as an effective tool for addressing queries in laboratory medicine, often surpassing medical professionals in response quality. These results support the need for further research to confirm ChatGPT's utility and explore its integration into healthcare settings.
|
Girton MR; Greene DN; Messerlian G; Keren DF; Yu M
| 0-1
|
|||
39844924
|
Large Language Models in Healthcare: A Bibliometric Analysis and Examination of Research Trends.
| 2,025
|
Journal of multidisciplinary healthcare
|
BACKGROUND: The integration of large language models (LLMs) in healthcare has generated significant interest due to their potential to improve diagnostic accuracy, personalization of treatment, and patient care efficiency. OBJECTIVE: This study aims to conduct a comprehensive bibliometric analysis to identify current research trends, main themes and future directions regarding applications in the healthcare sector. METHODS: A systematic scan of publications until 08.05.2024 was carried out from an important database such as Web of Science.Using bibliometric tools such as VOSviewer and CiteSpace, we analyzed data covering publication counts, citation analysis, co-authorship, co- occurrence of keywords and thematic development to map the intellectual landscape and collaborative networks in this field. RESULTS: The analysis included more than 500 articles published between 2021 and 2024. The United States, Germany and the United Kingdom were the top contributors to this field. The study highlights that neural network applications in diagnostic imaging, natural language processing for clinical documentation, and patient data in the field of general internal medicine, radiology, medical informatics, health care services, surgery, oncology, ophthalmology, neurology, orthopedics and psychiatry have seen significant growth in publications over the past two years. Keyword trend analysis revealed emerging sub-themes such as clinical research, artificial intelligence, ChatGPT, education, natural language processing, clinical management, virtual reality, chatbot, indicating a shift towards addressing the broader implications of LLM application in healthcare. CONCLUSION: The use of LLM in healthcare is an expanding field with significant academic and clinical interest. This bibliometric analysis not only maps the current state of the research, but also identifies important areas that require further research and development. Continued advances in this field are expected to significantly impact future healthcare applications, with a focus on increasing the accuracy and personalization of patient care through advanced data analytics.
|
Gencer G; Gencer K
| 10
|
|||
38414074
|
Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology - a recent scoping review.
| 2,024
|
Diagnostic pathology
|
BACKGROUND: The integration of large language models (LLMs) like ChatGPT in diagnostic medicine, with a focus on digital pathology, has garnered significant attention. However, understanding the challenges and barriers associated with the use of LLMs in this context is crucial for their successful implementation. METHODS: A scoping review was conducted to explore the challenges and barriers of using LLMs, in diagnostic medicine with a focus on digital pathology. A comprehensive search was conducted using electronic databases, including PubMed and Google Scholar, for relevant articles published within the past four years. The selected articles were critically analyzed to identify and summarize the challenges and barriers reported in the literature. RESULTS: The scoping review identified several challenges and barriers associated with the use of LLMs in diagnostic medicine. These included limitations in contextual understanding and interpretability, biases in training data, ethical considerations, impact on healthcare professionals, and regulatory concerns. Contextual understanding and interpretability challenges arise due to the lack of true understanding of medical concepts and lack of these models being explicitly trained on medical records selected by trained professionals, and the black-box nature of LLMs. Biases in training data pose a risk of perpetuating disparities and inaccuracies in diagnoses. Ethical considerations include patient privacy, data security, and responsible AI use. The integration of LLMs may impact healthcare professionals' autonomy and decision-making abilities. Regulatory concerns surround the need for guidelines and frameworks to ensure safe and ethical implementation. CONCLUSION: The scoping review highlights the challenges and barriers of using LLMs in diagnostic medicine with a focus on digital pathology. Understanding these challenges is essential for addressing the limitations and developing strategies to overcome barriers. It is critical for health professionals to be involved in the selection of data and fine tuning of the models. Further research, validation, and collaboration between AI developers, healthcare professionals, and regulatory bodies are necessary to ensure the responsible and effective integration of LLMs in diagnostic medicine.
|
Ullah E; Parwani A; Baig MM; Singh R
| 10
|
|||
39851275
|
The Potential Clinical Utility of the Customized Large Language Model in Gastroenterology: A Pilot Study.
| 2,024
|
Bioengineering (Basel, Switzerland)
|
Background: The large language model (LLM) has the potential to be applied to clinical practice. However, there has been scarce study on this in the field of gastroenterology. Aim: This study explores the potential clinical utility of two LLMs in the field of gastroenterology: a customized GPT model and a conventional GPT-4o, an advanced LLM capable of retrieval-augmented generation (RAG). Method: We established a customized GPT with the BM25 algorithm using Open AI's GPT-4o model, which allows it to produce responses in the context of specific documents including textbooks of internal medicine (in English) and gastroenterology (in Korean). Also, we prepared a conventional ChatGPT 4o (accessed on 16 October 2024) access. The benchmark (written in Korean) consisted of 15 clinical questions developed by four clinical experts, representing typical questions for medical students. The two LLMs, a gastroenterology fellow, and an expert gastroenterologist were tested to assess their performance. Results: While the customized LLM correctly answered 8 out of 15 questions, the fellow answered 10 correctly. When the standardized Korean medical terms were replaced with English terminology, the LLM's performance improved, answering two additional knowledge-based questions correctly, matching the fellow's score. However, judgment-based questions remained a challenge for the model. Even with the implementation of 'Chain of Thought' prompt engineering, the customized GPT did not achieve improved reasoning. Conventional GPT-4o achieved the highest score among the AI models (14/15). Although both models performed slightly below the expert gastroenterologist's level (15/15), they show promising potential for clinical applications (scores comparable with or higher than that of the gastroenterology fellow). Conclusions: LLMs could be utilized to assist with specialized tasks such as patient counseling. However, RAG capabilities by enabling real-time retrieval of external data not included in the training dataset, appear essential for managing complex, specialized content, and clinician oversight will remain crucial to ensure safe and effective use in clinical practice.
|
Gong EJ; Bang CS; Lee JJ; Park J; Kim E; Kim S; Kimm M; Choi SH
| 21
|
|||
39593074
|
Qualitative metrics from the biomedical literature for evaluating large language models in clinical decision-making: a narrative review.
| 2,024
|
BMC medical informatics and decision making
|
BACKGROUND: The large language models (LLMs), most notably ChatGPT, released since November 30, 2022, have prompted shifting attention to their use in medicine, particularly for supporting clinical decision-making. However, there is little consensus in the medical community on how LLM performance in clinical contexts should be evaluated. METHODS: We performed a literature review of PubMed to identify publications between December 1, 2022, and April 1, 2024, that discussed assessments of LLM-generated diagnoses or treatment plans. RESULTS: We selected 108 relevant articles from PubMed for analysis. The most frequently used LLMs were GPT-3.5, GPT-4, Bard, LLaMa/Alpaca-based models, and Bing Chat. The five most frequently used criteria for scoring LLM outputs were "accuracy", "completeness", "appropriateness", "insight", and "consistency". CONCLUSIONS: The most frequently used criteria for defining high-quality LLMs have been consistently selected by researchers over the past 1.5 years. We identified a high degree of variation in how studies reported their findings and assessed LLM performance. Standardized reporting of qualitative evaluation metrics that assess the quality of LLM outputs can be developed to facilitate research studies on LLMs in healthcare.
|
Ho CN; Tian T; Ayers AT; Aaron RE; Phillips V; Wolf RM; Mathioudakis N; Dai T; Klonoff DC
| 10
|
|||
40348690
|
Can Gpt-4o Accurately Diagnose Trauma X-Rays? A Comparative Study with Expert Evaluations.
| 2,025
|
The Journal of emergency medicine
|
BACKGROUND: The latest artificial intelligence (AI) model, GPT-4o, introduced by OpenAI, can process visual data, presenting a novel opportunity for radiographic evaluation in trauma patients. OBJECTIVE: This study aimed to assess the efficacy of GPT-4o in interpreting radiographs for traumatic bone pathologies and to compare its performance with that of emergency medicine and orthopedic specialists. METHODS: The study involved 10 emergency medicine specialists, 10 orthopedic specialists, and the GPT-4o AI model, evaluating 25 cases of traumatic bone pathologies of the upper and lower extremities selected from the Radiopaedia website. Participants were asked to identify fractures or dislocations in the radiographs within 45 minutes. GPT-4o was instructed to perform the same task in 10 different chat sessions. RESULTS: Emergency medicine specialists and orthopedic specialists demonstrated an average accuracy of 82.8% and 87.2%, respectively, in radiograph interpretation. In contrast, GPT-4o achieved an accuracy of only 11.2%. Statistical analysis revealed significant differences among the three groups (p < 0.001), with GPT-4o performing significantly worse than both groups of specialists. CONCLUSION: GPT-4o's ability to interpret radiographs of traumatic bone pathologies is currently limited and significantly inferior to that of trained specialists. These findings underscore the ongoing need for human expertise in trauma diagnosis and highlight the challenges of applying AI to complex medical imaging tasks.
|
Ozturk A; Gunay S; Ates S; Yigit Yavuz Yigit Y
| 0-1
|
|||
39509695
|
Applications and Concerns of ChatGPT and Other Conversational Large Language Models in Health Care: Systematic Review.
| 2,024
|
Journal of medical Internet research
|
BACKGROUND: The launch of ChatGPT (OpenAI) in November 2022 attracted public attention and academic interest to large language models (LLMs), facilitating the emergence of many other innovative LLMs. These LLMs have been applied in various fields, including health care. Numerous studies have since been conducted regarding how to use state-of-the-art LLMs in health-related scenarios. OBJECTIVE: This review aims to summarize applications of and concerns regarding conversational LLMs in health care and provide an agenda for future research in this field. METHODS: We used PubMed, ACM, and the IEEE digital libraries as primary sources for this review. We followed the guidance of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) to screen and select peer-reviewed research articles that (1) were related to health care applications and conversational LLMs and (2) were published before September 1, 2023, the date when we started paper collection. We investigated these papers and classified them according to their applications and concerns. RESULTS: Our search initially identified 820 papers according to targeted keywords, out of which 65 (7.9%) papers met our criteria and were included in the review. The most popular conversational LLM was ChatGPT (60/65, 92% of papers), followed by Bard (Google LLC; 1/65, 2% of papers), LLaMA (Meta; 1/65, 2% of papers), and other LLMs (6/65, 9% papers). These papers were classified into four categories of applications: (1) summarization, (2) medical knowledge inquiry, (3) prediction (eg, diagnosis, treatment recommendation, and drug synergy), and (4) administration (eg, documentation and information collection), and four categories of concerns: (1) reliability (eg, training data quality, accuracy, interpretability, and consistency in responses), (2) bias, (3) privacy, and (4) public acceptability. There were 49 (75%) papers using LLMs for either summarization or medical knowledge inquiry, or both, and there are 58 (89%) papers expressing concerns about either reliability or bias, or both. We found that conversational LLMs exhibited promising results in summarization and providing general medical knowledge to patients with a relatively high accuracy. However, conversational LLMs such as ChatGPT are not always able to provide reliable answers to complex health-related tasks (eg, diagnosis) that require specialized domain expertise. While bias or privacy issues are often noted as concerns, no experiments in our reviewed papers thoughtfully examined how conversational LLMs lead to these issues in health care research. CONCLUSIONS: Future studies should focus on improving the reliability of LLM applications in complex health-related tasks, as well as investigating the mechanisms of how LLM applications bring bias and privacy issues. Considering the vast accessibility of LLMs, legal, social, and technical efforts are all needed to address concerns about LLMs to promote, improve, and regularize the application of LLMs in health care.
|
Wang L; Wan Z; Ni C; Song Q; Li Y; Clayton E; Malin B; Yin Z
| 10
|
|||
39402411
|
Comparing Scoring Consistency of Large Language Models with Faculty for Formative Assessments in Medical Education.
| 2,025
|
Journal of general internal medicine
|
BACKGROUND: The Liaison Committee on Medical Education requires that medical students receive individualized feedback on their self-directed learning skills. Pre-clinical students are asked to complete multiple spaced critical appraisal assignments. However, the individual feedback requires significant faculty time. As large language models (LLMs) can score and generate feedback, we explored their use in grading formative assessments through validity and feasibility lenses. OBJECTIVE: To explore the consistency and feasibility of using an LLM to assess and provide feedback for formative assessments in undergraduate medical education. DESIGN AND PARTICIPANTS: This was a cross-sectional study of pre-clinical students' critical appraisal assignments at University of Illinois College of Medicine (UICOM) during the 2022-2023 academic year. INTERVENTION: An initial sample of ten assignments was used to develop a prompt. For each student entry, the de-identified assignment and prompt were provided to ChatGPT 3.5, and its scoring was compared to the existing faculty grade. MAIN MEASURES: Differences in scoring of individual items between ChatGPT and faculty were assessed. Scoring consistency using inter-rater reliability (IRR) was calculated as percent exact agreement. Chi-squared test was used to determine if there were significant differences in scores. Psychometric characteristics including internal-consistency reliability, area under precision-recall curve (AUCPR), and cost were studied. KEY RESULTS: In this cross-sectional study, 111 pre-clinical students' faculty graded assignments were compared with that of ChatGPT and the scoring of individual items was comparable. The overall agreement between ChatGPT and faculty was 67% (OR = 2.53, P < 0.001); mean AUCPR was 0.69 (range 0.61-0.76). Internal-consistency reliability of ChatGPT was 0.64 and its use resulted in a fivefold reduction in faculty time, and potential savings of 150 faculty hours. CONCLUSIONS: This study of psychometric characteristics of ChatGPT demonstrates the potential role for LLMs to assist faculty in assessing and providing feedback for formative assignments.
|
Sreedhar R; Chang L; Gangopadhyaya A; Shiels PW; Loza J; Chi E; Gabel E; Park YS
| 21
|
|||
39507012
|
Performance of ChatGPT on prehospital acute ischemic stroke and large vessel occlusion (LVO) stroke screening.
| 2,024
|
Digital health
|
BACKGROUND: The management of acute ischemic stroke (AIS) is time-sensitive, yet prehospital delays remain prevalent. The application of large language models (LLMs) for medical text analysis may play a potential role in clinical decision support. We assess the performance of LLMs on prehospital AIS and large vessel occlusion (LVO) stroke screening. METHODS: This retrospective study sourced cases from the electronic medical record database of the emergency department (ED) at Maoming People's Hospital, encompassing patients who presented to the ED between June and November 2023. We evaluate the diagnostic accuracy of GPT-3.5 and GPT-4 for the detection of AIS and LVO stroke by comparing the sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and positive likelihood ratio and AUC of both LLMs. The neurological reasoning of LLMs was rated on a five-point Likert scale for factual correctness and the occurrence of errors. RESULT: On 400 records from 400 patients (mean age, 70.0 years +/- 12.5 [SD]; 273 male), GPT-4 outperformed GPT-3.5 in AIS screening (AUC 0.75 (0.65-0.84) vs 0.59 (0.50-0.69), P = 0.015) and LVO identification (AUC 0.71 (0.65-0.77) vs 0.60 (0.53-0.66), P < 0.001). GPT-4 achieved higher accuracy than GPT-3.5 in screening of AIS (89.3% [95% CI: 85.8, 91.9] vs 86.5% [95% CI: 82.8, 89.5]) and LVO stroke identification (67.0% [95% CI: 62.3%, 71.4%] vs 47.3% [95% CI: 42.4%, 52.2%]). In neurological reasoning, GPT-4 had higher Likert scale scores for factual correctness (4.24 vs 3.62), with a lower rate of error (6.8% vs 24.8%) than GPT-3.5 (all P < 0.001). CONCLUSIONS: The result demonstrates that LLMs possess diagnostic capability in the prehospital identification of ischemic stroke, with the ability to exhibit neurologically informed reasoning processes. Notably, GPT-4 outperforms GPT-3.5 in the recognition of AIS and LVO stroke, achieving results comparable to prehospital scales. LLMs are supposed to become a promising supportive decision-making tool for EMS practitioners in screening prehospital stroke.
|
Wang X; Ye S; Feng J; Feng K; Yang H; Li H
| 10
|
|||
39731895
|
Evaluating LLM-based generative AI tools in emergency triage: A comparative study of ChatGPT Plus, Copilot Pro, and triage nurses.
| 2,025
|
The American journal of emergency medicine
|
BACKGROUND: The number of emergency department (ED) visits has been on steady increase globally. Artificial Intelligence (AI) technologies, including Large Language Model (LLMs)-based generative AI models, have shown promise in improving triage accuracy. This study evaluates the performance of ChatGPT and Copilot in triage at a high-volume urban hospital, hypothesizing that these tools can match trained physicians' accuracy and reduce human bias amidst ED crowding challenges. METHODS: This single-center, prospective observational study was conducted in an urban ED over one week. Adult patients were enrolled through random 24-h intervals. Exclusions included minors, trauma cases, and incomplete data. Triage nurses assessed patients while an emergency medicine (EM) physician documented clinical vignettes and assigned emergency severity index (ESI) levels. These vignettes were then introduced to ChatGPT and Copilot for comparison with the triage nurse's decision. RESULTS: The overall triage accuracy was 65.2 % for nurses, 66.5 % for ChatGPT, and 61.8 % for Copilot, with no significant difference (p = 0.000). Moderate agreement was observed between the EM physician and ChatGPT, triage nurses, and Copilot (Cohen's Kappa = 0.537, 0.477, and 0.472, respectively). In recognizing high-acuity patients, ChatGPT and Copilot outperformed triage nurses (87.8 % and 85.7 % versus 32.7 %, respectively). Compared to ChatGPT and Copilot, nurses significantly under-triaged patients (p < 0.05). The analysis of predictive performance for ChatGPT, Copilot, and triage nurses demonstrated varying discrimination abilities across ESI levels, all of which were statistically significant (p < 0.05). ChatGPT and Copilot exhibited consistent accuracy across age, gender, and admission time, whereas triage nurses were more likely to mistriage patients under 45 years old. CONCLUSION: ChatGPT and Copilot outperform traditional nurse triage in identifying high-acuity patients, but real-time ED capacity data is crucial to prevent overcrowding and ensure high-quality of emergency care.
|
Arslan B; Nuhoglu C; Satici MO; Altinbilek E
| 10
|
|||
38915174
|
Evaluation of ChatGPT-Generated Differential Diagnosis for Common Diseases With Atypical Presentation: Descriptive Research.
| 2,024
|
JMIR medical education
|
BACKGROUND: The persistence of diagnostic errors, despite advances in medical knowledge and diagnostics, highlights the importance of understanding atypical disease presentations and their contribution to mortality and morbidity. Artificial intelligence (AI), particularly generative pre-trained transformers like GPT-4, holds promise for improving diagnostic accuracy, but requires further exploration in handling atypical presentations. OBJECTIVE: This study aimed to assess the diagnostic accuracy of ChatGPT in generating differential diagnoses for atypical presentations of common diseases, with a focus on the model's reliance on patient history during the diagnostic process. METHODS: We used 25 clinical vignettes from the Journal of Generalist Medicine characterizing atypical manifestations of common diseases. Two general medicine physicians categorized the cases based on atypicality. ChatGPT was then used to generate differential diagnoses based on the clinical information provided. The concordance between AI-generated and final diagnoses was measured, with a focus on the top-ranked disease (top 1) and the top 5 differential diagnoses (top 5). RESULTS: ChatGPT's diagnostic accuracy decreased with an increase in atypical presentation. For category 1 (C1) cases, the concordance rates were 17% (n=1) for the top 1 and 67% (n=4) for the top 5. Categories 3 (C3) and 4 (C4) showed a 0% concordance for top 1 and markedly lower rates for the top 5, indicating difficulties in handling highly atypical cases. The chi2 test revealed no significant difference in the top 1 differential diagnosis accuracy between less atypical (C1+C2) and more atypical (C3+C4) groups (chi(2)1=2.07; n=25; P=.13). However, a significant difference was found in the top 5 analyses, with less atypical cases showing higher accuracy (chi(2)1=4.01; n=25; P=.048). CONCLUSIONS: ChatGPT-4 demonstrates potential as an auxiliary tool for diagnosing typical and mildly atypical presentations of common diseases. However, its performance declines with greater atypicality. The study findings underscore the need for AI systems to encompass a broader range of linguistic capabilities, cultural understanding, and diverse clinical scenarios to improve diagnostic utility in real-world settings.
|
Shikino K; Shimizu T; Otsuka Y; Tago M; Takahashi H; Watari T; Sasaki Y; Iizuka G; Tamura H; Nakashima K; Kunitomo K; Suzuki M; Aoyama S; Kosaka S; Kawahigashi T; Matsumoto T; Orihara F; Morikawa T; Nishizawa T; Hoshina Y; Yamamoto Y; Matsuo Y; Unoki Y; Kimura H; Tokushima M; Watanuki S; Saito T; Otsuka F; Tokuda Y
| 10
|
|||
38329802
|
Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study.
| 2,024
|
JMIR medical education
|
BACKGROUND: The potential of artificial intelligence (AI)-based large language models, such as ChatGPT, has gained significant attention in the medical field. This enthusiasm is driven not only by recent breakthroughs and improved accessibility, but also by the prospect of democratizing medical knowledge and promoting equitable health care. However, the performance of ChatGPT is substantially influenced by the input language, and given the growing public trust in this AI tool compared to that in traditional sources of information, investigating its medical accuracy across different languages is of particular importance. OBJECTIVE: This study aimed to compare the performance of GPT-3.5 and GPT-4 with that of medical students on the written German medical licensing examination. METHODS: To assess GPT-3.5's and GPT-4's medical proficiency, we used 937 original multiple-choice questions from 3 written German medical licensing examinations in October 2021, April 2022, and October 2022. RESULTS: GPT-4 achieved an average score of 85% and ranked in the 92.8th, 99.5th, and 92.6th percentiles among medical students who took the same examinations in October 2021, April 2022, and October 2022, respectively. This represents a substantial improvement of 27% compared to GPT-3.5, which only passed 1 out of the 3 examinations. While GPT-3.5 performed well in psychiatry questions, GPT-4 exhibited strengths in internal medicine and surgery but showed weakness in academic research. CONCLUSIONS: The study results highlight ChatGPT's remarkable improvement from moderate (GPT-3.5) to high competency (GPT-4) in answering medical licensing examination questions in German. While GPT-4's predecessor (GPT-3.5) was imprecise and inconsistent, it demonstrates considerable potential to improve medical education and patient care, provided that medically trained users critically evaluate its results. As the replacement of search engines by AI tools seems possible in the future, further studies with nonprofessional questions are needed to assess the safety and accuracy of ChatGPT for the general population.
|
Meyer A; Riese J; Streichert T
| 21
|
|||
40158187
|
Generating learning guides for medical education with LLMs and statistical analysis of test results.
| 2,025
|
BMC medical education
|
BACKGROUND: The Progress Test Medizin (PTM) is a formative test for medical students issued twice a year by the Charite-Universitatsmedizin Berlin. The PTM provides a numerical feedback based on a global view of the strengths and weaknesses of students. This feedback can benefit from more fine-grained information, pinpointing the topics where students need to improve, as well as advice on what they should learn in light of their results. The scale of the PTM, taken by more than 10,000 participants every academic semester, makes it necessary to automate this task. METHODS: We have developed a seven-step approach based on large language models and statistical analysis to fulfil the purpose of this study. Firstly, a large language model (ChatGPT 4.0) identified keywords in the form of MeSH terms from all 200 questions of one PTM run. These keywords were checked against the list of medical terms included in the Medical Subject Headings (MeSH) thesaurus published by the National Library of Medicine (NLM). Meanwhile, answer patterns of PTM questions were also analysed to find empirical relationships between questions. With this information, we obtained series of questions related to specific MeSH terms and used them to develop a framework that allowed us to assess the performance of PTM participants and compose personalized feedback structured around a curated list of medical topics. RESULTS: We used data from a past PTM to simulate the generation of personalized feedback for 1,401 test participants, thereby producing specific information about their knowledge regarding a number of topics ranging from 34 to 243. Substantial knowledge gaps were found in 14.67% to 21.76% of rated learning topics, depending on the benchmarking set considered. CONCLUSION: We designed and tested a method to generate student feedback covering up to 243 medical topics defined by MeSH terms. The feedback generated with data from students in later stages of their studies was more detailed, as they tend to face more questions matching their knowledge level.
|
Rosello Atanet I; Tomova M; Sieg M; Sehy V; Mader P; Marz M
| 21
|
|||
40008118
|
Evaluating the Use of Artificial Intelligence as a Study Tool for Preclinical Medical School Exams.
| 2,025
|
Journal of medical education and curricular development
|
BACKGROUND: The purpose of this 2024 study was to determine if there is an association between the usage of artificial intelligence (AI) to study and exam scores of medical students in the preclinical phase of their schooling. METHODS: We created and distributed a survey via an unbiased third-party to students in the class of 2027 at the Kirk Kerkorian School of Medicine at UNLV to evaluate students AI use to study for their preclinical system-based exams. Students were categorized into two groups, those that use AI to study and those who do not. Two-sample t-tests were run to compare the mean exam scores of both groups on six different organ system exams as well as the cumulative final exam score for each group. The group that did use AI was further asked about which AI tools they use and how exactly they use these tools to study for preclinical examinations. RESULTS: The results of the study showed that there is no statistically significant difference in exam scores between students who use AI for study purposes and students who do not. It was also found that most AI users studied with ChatGPT. The most common way users studied was by using AI to simplify and clarify topics they did not understand. CONCLUSIONS: Based on the results of this study, we concluded that usage of AI programs for students for medical examinations did not yield a positive or negative effect on students' organ system-based exam scores.
|
Sakelaris PG; Novotny KV; Borvick MS; Lagasca GG; Simanton EG
| 10
|
|||
39221376
|
Comparison of the Performance of Artificial Intelligence Versus Medical Professionals in the Polish Final Medical Examination.
| 2,024
|
Cureus
|
BACKGROUND: The rapid development of artificial intelligence (AI) technologies like OpenAI's Generative Pretrained Transformer (GPT), particularly ChatGPT, has shown promising applications in various fields, including medicine. This study evaluates ChatGPT's performance on the Polish Final Medical Examination (LEK), comparing its efficacy to that of human test-takers. METHODS: The study analyzed ChatGPT's ability to answer 196 multiple-choice questions from the spring 2021 LEK. Questions were categorized into "clinical cases" and "other" general medical knowledge, and then divided according to medical fields. Two versions of ChatGPT (3.5 and 4.0) were tested. Statistical analyses, including Pearson's chi(2) test, and Mann-Whitney U test, were conducted to compare the AI's performance and confidence levels. RESULTS: ChatGPT 3.5 correctly answered 50.51% of the questions, while ChatGPT 4.0 answered 77.55% correctly, surpassing the 56% passing threshold. Version 3.5 showed significantly higher confidence in correct answers, whereas version 4.0 maintained consistent confidence regardless of answer accuracy. No significant differences in performance were observed across different medical fields. CONCLUSIONS: ChatGPT 4.0 demonstrated the ability to pass the LEK, indicating substantial potential for AI in medical education and assessment. Future improvements in AI models, such as the anticipated ChatGPT 5.0, may enhance further performance, potentially equaling or surpassing human test-takers.
|
Jaworski A; Jasinski D; Jaworski W; Hop A; Janek A; Slawinska B; Konieczniak L; Rzepka M; Jung M; Syslo O; Jarzabek V; Blecha Z; Harazinski K; Jasinska N
| 21
|
|||
39042876
|
Appraisal of ChatGPT's Aptitude for Medical Education: Comparative Analysis With Third-Year Medical Students in a Pulmonology Examination.
| 2,024
|
JMIR medical education
|
BACKGROUND: The rapid evolution of ChatGPT has generated substantial interest and led to extensive discussions in both public and academic domains, particularly in the context of medical education. OBJECTIVE: This study aimed to evaluate ChatGPT's performance in a pulmonology examination through a comparative analysis with that of third-year medical students. METHODS: In this cross-sectional study, we conducted a comparative analysis with 2 distinct groups. The first group comprised 244 third-year medical students who had previously taken our institution's 2020 pulmonology examination, which was conducted in French. The second group involved ChatGPT-3.5 in 2 separate sets of conversations: without contextualization (V1) and with contextualization (V2). In both V1 and V2, ChatGPT received the same set of questions administered to the students. RESULTS: V1 demonstrated exceptional proficiency in radiology, microbiology, and thoracic surgery, surpassing the majority of medical students in these domains. However, it faced challenges in pathology, pharmacology, and clinical pneumology. In contrast, V2 consistently delivered more accurate responses across various question categories, regardless of the specialization. ChatGPT exhibited suboptimal performance in multiple choice questions compared to medical students. V2 excelled in responding to structured open-ended questions. Both ChatGPT conversations, particularly V2, outperformed students in addressing questions of low and intermediate difficulty. Interestingly, students showcased enhanced proficiency when confronted with highly challenging questions. V1 fell short of passing the examination. Conversely, V2 successfully achieved examination success, outperforming 139 (62.1%) medical students. CONCLUSIONS: While ChatGPT has access to a comprehensive web-based data set, its performance closely mirrors that of an average medical student. Outcomes are influenced by question format, item complexity, and contextual nuances. The model faces challenges in medical contexts requiring information synthesis, advanced analytical aptitude, and clinical judgment, as well as in non-English language assessments and when confronted with data outside mainstream internet sources.
|
Cherif H; Moussa C; Missaoui AM; Salouage I; Mokaddem S; Dhahri B
| 21
|
|||
39348189
|
Accuracy of a Commercial Large Language Model (ChatGPT) to Perform Disaster Triage of Simulated Patients Using the Simple Triage and Rapid Treatment (START) Protocol: Gage Repeatability and Reproducibility Study.
| 2,024
|
Journal of medical Internet research
|
BACKGROUND: The release of ChatGPT (OpenAI) in November 2022 drastically reduced the barrier to using artificial intelligence by allowing a simple web-based text interface to a large language model (LLM). One use case where ChatGPT could be useful is in triaging patients at the site of a disaster using the Simple Triage and Rapid Treatment (START) protocol. However, LLMs experience several common errors, including hallucinations (also called confabulations) and prompt dependency. OBJECTIVE: This study addresses the research problem: "Can ChatGPT adequately triage simulated disaster patients using the START protocol?" by measuring three outcomes: repeatability, reproducibility, and accuracy. METHODS: Nine prompts were developed by 5 disaster medicine physicians. A Python script queried ChatGPT Version 4 for each prompt combined with 391 validated simulated patient vignettes. Ten repetitions of each combination were performed for a total of 35,190 simulated triages. A reference standard START triage code for each simulated case was assigned by 2 disaster medicine specialists (JMF and MV), with a third specialist (LC) added if the first two did not agree. Results were evaluated using a gage repeatability and reproducibility study (gage R&R). Repeatability was defined as variation due to repeated use of the same prompt. Reproducibility was defined as variation due to the use of different prompts on the same patient vignette. Accuracy was defined as agreement with the reference standard. RESULTS: Although 35,102 (99.7%) queries returned a valid START score, there was considerable variability. Repeatability (use of the same prompt repeatedly) was 14% of the overall variation. Reproducibility (use of different prompts) was 4.1% of the overall variation. The accuracy of ChatGPT for START was 63.9% with a 32.9% overtriage rate and a 3.1% undertriage rate. Accuracy varied by prompt with a maximum of 71.8% and a minimum of 46.7%. CONCLUSIONS: This study indicates that ChatGPT version 4 is insufficient to triage simulated disaster patients via the START protocol. It demonstrated suboptimal repeatability and reproducibility. The overall accuracy of triage was only 63.9%. Health care professionals are advised to exercise caution while using commercial LLMs for vital medical determinations, given that these tools may commonly produce inaccurate data, colloquially referred to as hallucinations or confabulations. Artificial intelligence-guided tools should undergo rigorous statistical evaluation, using methods such as gage R&R, before implementation in clinical settings.
|
Franc JM; Hertelendy AJ; Cheng L; Hata R; Verde M
| 10
|
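As an aside on the triage metrics cited in the record above, the following minimal Python sketch shows one way accuracy, overtriage, and undertriage rates of this kind could be tallied against a reference-standard START code. It is an illustrative reconstruction under stated assumptions, not the authors' gage R&R pipeline; the category ordering, function name, and sample data are hypothetical.

```python
# Illustrative only: tallying accuracy, overtriage, and undertriage against a
# reference-standard START code. The urgency ordering below is a simplification
# (handling of "black"/expectant cases needs a clinical definition in practice).
START_ORDER = {"green": 0, "yellow": 1, "red": 2, "black": 3}  # hypothetical ordering

def triage_error_rates(predicted, reference):
    """predicted/reference: equal-length lists of START color codes for the same cases."""
    assert len(predicted) == len(reference)
    correct = over = under = 0
    for p, r in zip(predicted, reference):
        if p == r:
            correct += 1
        elif START_ORDER[p] > START_ORDER[r]:
            over += 1   # more urgent category than the reference standard
        else:
            under += 1  # less urgent category than the reference standard
    n = len(reference)
    return correct / n, over / n, under / n

# Made-up example data, unrelated to the study's vignettes
pred = ["red", "yellow", "green", "red", "yellow"]
ref  = ["red", "green", "green", "yellow", "yellow"]
acc, overtriage, undertriage = triage_error_rates(pred, ref)
print(f"accuracy={acc:.1%}, overtriage={overtriage:.1%}, undertriage={undertriage:.1%}")
```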
|||
38055323
|
Performance Comparison of ChatGPT-4 and Japanese Medical Residents in the General Medicine In-Training Examination: Comparison Study.
| 2,023
|
JMIR medical education
|
BACKGROUND: The reliability of GPT-4, a state-of-the-art expansive language model specializing in clinical reasoning and medical knowledge, remains largely unverified across non-English languages. OBJECTIVE: This study aims to compare fundamental clinical competencies between Japanese residents and GPT-4 by using the General Medicine In-Training Examination (GM-ITE). METHODS: We used the GPT-4 model provided by OpenAI and the GM-ITE examination questions for the years 2020, 2021, and 2022 to conduct a comparative analysis. This analysis focused on evaluating the performance of individuals who were concluding their second year of residency in comparison to that of GPT-4. Given the current abilities of GPT-4, our study included only single-choice exam questions, excluding those involving audio, video, or image data. The assessment included 4 categories: general theory (professionalism and medical interviewing), symptomatology and clinical reasoning, physical examinations and clinical procedures, and specific diseases. Additionally, we categorized the questions into 7 specialty fields and 3 levels of difficulty, which were determined based on residents' correct response rates. RESULTS: Upon examination of 137 GM-ITE questions in Japanese, GPT-4 scores were significantly higher than the mean scores of residents (residents: 55.8%, GPT-4: 70.1%; P<.001). In terms of specific disciplines, GPT-4 scored 23.5 points higher in the "specific diseases," 30.9 points higher in "obstetrics and gynecology," and 26.1 points higher in "internal medicine." In contrast, GPT-4 scores in "medical interviewing and professionalism," "general practice," and "psychiatry" were lower than those of the residents, although this discrepancy was not statistically significant. Upon analyzing scores based on question difficulty, GPT-4 scores were 17.2 points lower for easy problems (P=.007) but were 25.4 and 24.4 points higher for normal and difficult problems, respectively (P<.001). In year-on-year comparisons, GPT-4 scores were 21.7 and 21.5 points higher in the 2020 (P=.01) and 2022 (P=.003) examinations, respectively, but only 3.5 points higher in the 2021 examinations (no significant difference). CONCLUSIONS: In the Japanese language, GPT-4 also outperformed the average medical residents in the GM-ITE test, originally designed for them. Specifically, GPT-4 demonstrated a tendency to score higher on difficult questions with low resident correct response rates and those demanding a more comprehensive understanding of diseases. However, GPT-4 scored comparatively lower on questions that residents could readily answer, such as those testing attitudes toward patients and professionalism, as well as those necessitating an understanding of context and communication. These findings highlight the strengths and limitations of artificial intelligence applications in medical education and practice.
|
Watari T; Takagi S; Sakaguchi K; Nishizaki Y; Shimizu T; Yamamoto Y; Tokuda Y
| 21
|
|||
38295466
|
Assessing the precision of artificial intelligence in ED triage decisions: Insights from a study with ChatGPT.
| 2,024
|
The American journal of emergency medicine
|
BACKGROUND: The rise in emergency department presentations globally poses challenges for efficient patient management. To address this, various strategies aim to expedite patient management. Artificial intelligence's (AI) consistent performance and rapid data interpretation extend its healthcare applications, especially in emergencies. The introduction of a robust AI tool like ChatGPT, based on GPT-4 developed by OpenAI, can benefit patients and healthcare professionals by improving the speed and accuracy of resource allocation. This study examines ChatGPT's capability to predict triage outcomes based on local emergency department rules. METHODS: This study is a single-center prospective observational study. The study population consists of all patients who presented to the emergency department with any symptoms and agreed to participate. The study was conducted on three non-consecutive days for a total of 72 h. Patients' chief complaints, vital parameters, medical history and the area to which they were directed by the triage team in the emergency department were recorded. Concurrently, an emergency medicine physician inputted the same data into previously trained GPT-4, according to local rules. According to this data, the triage decisions made by GPT-4 were recorded. In the same process, an emergency medicine specialist determined where the patient should be directed based on the data collected, and this decision was considered the gold standard. Accuracy rates and reliability for directing patients to specific areas by the triage team and GPT-4 were evaluated using Cohen's kappa test. Furthermore, the accuracy of the patient triage process performed by the triage team and GPT-4 was assessed by receiver operating characteristic (ROC) analysis. Statistical analysis considered a value of p < 0.05 as significant. RESULTS: The study was carried out on 758 patients. Among the participants, 416 (54.9%) were male and 342 (45.1%) were female. Evaluating the primary endpoints of our study (the agreement between the triage team's decisions, GPT-4's decisions in emergency department triage, and the gold standard), we observed almost perfect agreement both between the triage team and the gold standard and between GPT-4 and the gold standard (Cohen's Kappa 0.893 and 0.899, respectively; p < 0.001 for each). CONCLUSION: Our findings suggest that GPT-4 possesses outstanding predictive skills in triaging patients in an emergency setting. GPT-4 can serve as an effective tool to support the triage process.
|
Pasli S; Sahin AS; Beser MF; Topcuoglu H; Yadigaroglu M; Imamoglu M
| 10
|
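For readers unfamiliar with the agreement statistic reported in the record above, the short sketch below shows one standard way to compute Cohen's kappa between two sets of triage decisions. It assumes scikit-learn is available and uses invented labels, not the study's data.

```python
# Minimal illustration of Cohen's kappa for two raters assigning triage zones.
# The labels are made up; scikit-learn provides the kappa implementation.
from sklearn.metrics import cohen_kappa_score

gold_standard = ["green", "yellow", "red", "yellow", "green", "red"]
gpt4_decision = ["green", "yellow", "red", "green", "green", "red"]

kappa = cohen_kappa_score(gold_standard, gpt4_decision)
print(f"Cohen's kappa = {kappa:.3f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```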
|||
38472675
|
Enhanced Artificial Intelligence Strategies in Renal Oncology: Iterative Optimization and Comparative Analysis of GPT 3.5 Versus 4.0.
| 2,024
|
Annals of surgical oncology
|
BACKGROUND: The rise of artificial intelligence (AI) in medicine has revealed the potential of ChatGPT as a pivotal tool in medical diagnosis and treatment. This study assesses the efficacy of ChatGPT versions 3.5 and 4.0 in addressing renal cell carcinoma (RCC) clinical inquiries. Notably, fine-tuning and iterative optimization of the model corrected ChatGPT's limitations in this area. METHODS: In our study, 80 RCC-related clinical questions from urology experts were posed three times to both ChatGPT 3.5 and ChatGPT 4.0, seeking binary (yes/no) responses. We then statistically analyzed the answers. Finally, we fine-tuned the GPT-3.5 Turbo model using these questions, and assessed its training outcomes. RESULTS: We found that the average accuracy rates of answers provided by ChatGPT versions 3.5 and 4.0 were 67.08% and 77.50%, respectively. ChatGPT 4.0 outperformed ChatGPT 3.5, with a higher accuracy rate in responses (p < 0.05). By counting the number of correct responses to the 80 questions, we then found that although ChatGPT 4.0 performed better (p < 0.05), both versions were subject to instability in answering. Finally, by fine-tuning the GPT-3.5 Turbo model, we found that the correct rate of responses to these questions could be stabilized at 93.75%. Iterative optimization of the model can result in 100% response accuracy. CONCLUSION: We compared ChatGPT versions 3.5 and 4.0 in addressing clinical RCC questions, identifying their limitations. By applying the GPT-3.5 Turbo fine-tuned model iterative training method, we enhanced AI strategies in renal oncology. This approach is set to enhance ChatGPT's database and clinical guidance capabilities, optimizing AI in this field.
|
Liang R; Zhao A; Peng L; Xu X; Zhong J; Wu F; Yi F; Zhang S; Wu S; Hou J
| 0-1
|
|||
39610054
|
Impact of artificial intelligence in managing musculoskeletal pathologies in physiatry: a qualitative observational study evaluating the potential use of ChatGPT versus Copilot for patient information and clinical advice on low back pain.
| 2,025
|
Journal of Yeungnam medical science
|
BACKGROUND: The self-management of low back pain (LBP) through patient information interventions offers significant benefits in terms of cost, reduced work absenteeism, and overall healthcare utilization. Using a large language model (LLM), such as ChatGPT (OpenAI) or Copilot (Microsoft), could potentially enhance these outcomes further. Thus, it is important to evaluate the LLMs ChatGPT and Copilot in providing medical advice for LBP and assessing the impact of clinical context on the quality of responses. METHODS: This was a qualitative comparative observational study. It was conducted within the Department of Physical Medicine and Rehabilitation, University of Montreal in Montreal, QC, Canada. ChatGPT and Copilot were used to answer 27 common questions related to LBP, with and without a specific clinical context. The responses were evaluated by physiatrists for validity, safety, and usefulness using a 4-point Likert scale (4, most favorable). RESULTS: Both ChatGPT and Copilot demonstrated good performance across all measures. Validity scores were 3.33 for ChatGPT and 3.18 for Copilot, safety scores were 3.19 for ChatGPT and 3.13 for Copilot, and usefulness scores were 3.60 for ChatGPT and 3.57 for Copilot. The inclusion of clinical context did not significantly change the results. CONCLUSION: LLMs, such as ChatGPT and Copilot, can provide reliable medical advice on LBP, irrespective of the detailed clinical context, supporting their potential to aid in patient self-management.
|
Ah-Yan C; Boissonnault E; Boudier-Reveret M; Mares C
| 32
|
|||
38180782
|
Pure Wisdom or Potemkin Villages? A Comparison of ChatGPT 3.5 and ChatGPT 4 on USMLE Step 3 Style Questions: Quantitative Analysis.
| 2,024
|
JMIR medical education
|
BACKGROUND: The United States Medical Licensing Examination (USMLE) has been critical in medical education since 1992, testing various aspects of a medical student's knowledge and skills through different steps, based on their training level. Artificial intelligence (AI) tools, including chatbots like ChatGPT, are emerging technologies with potential applications in medicine. However, comprehensive studies analyzing ChatGPT's performance on USMLE Step 3 in large-scale scenarios and comparing different versions of ChatGPT are limited. OBJECTIVE: This paper aimed to analyze ChatGPT's performance on USMLE Step 3 practice test questions to better elucidate the strengths and weaknesses of AI use in medical education and deduce evidence-based strategies to counteract AI cheating. METHODS: A total of 2069 USMLE Step 3 practice questions were extracted from the AMBOSS study platform. After excluding 229 image-based questions, a total of 1840 text-based questions were further categorized and entered into ChatGPT 3.5, while a subset of 229 questions was entered into ChatGPT 4. Responses were recorded, and the accuracy of ChatGPT answers as well as its performance in different test question categories and for different difficulty levels were compared between both versions. RESULTS: Overall, ChatGPT 4 demonstrated a statistically significant superior performance compared to ChatGPT 3.5, achieving an accuracy of 84.7% (194/229) and 56.9% (1047/1840), respectively. A noteworthy correlation was observed between the length of test questions and the performance of ChatGPT 3.5 (rho=-0.069; P=.003), which was absent in ChatGPT 4 (P=.87). Additionally, the difficulty of test questions, as categorized by AMBOSS hammer ratings, showed a statistically significant correlation with performance for both ChatGPT versions, with rho=-0.289 for ChatGPT 3.5 and rho=-0.344 for ChatGPT 4. ChatGPT 4 surpassed ChatGPT 3.5 in all levels of test question difficulty, except for the 2 highest difficulty tiers (4 and 5 hammers), where statistical significance was not reached. CONCLUSIONS: In this study, ChatGPT 4 demonstrated remarkable proficiency in taking the USMLE Step 3, with an accuracy rate of 84.7% (194/229), outshining ChatGPT 3.5 with an accuracy rate of 56.9% (1047/1840). Although ChatGPT 4 performed exceptionally, it encountered difficulties in questions requiring the application of theoretical concepts, particularly in cardiology and neurology. These insights are pivotal for the development of examination strategies that are resilient to AI and underline the promising role of AI in the realm of medical education and diagnostics.
|
Knoedler L; Alfertshofer M; Knoedler S; Hoch CC; Funk PF; Cotofana S; Maheta B; Frank K; Brebant V; Prantl L; Lamby P
| 21
|
|||
38823007
|
Enhancing readability of USFDA patient communications through large language models: a proof-of-concept study.
| 2,024
|
Expert review of clinical pharmacology
|
BACKGROUND: The US Food and Drug Administration (USFDA) communicates new drug safety concerns through drug safety communications (DSCs) and medication guides (MGs), which often challenge patients with average reading abilities due to their complexity. This study assesses whether large language models (LLMs) can enhance the readability of these materials. METHODS: We analyzed the latest DSCs and MGs, using ChatGPT 4.0 and Gemini to simplify them to a sixth-grade reading level. Outputs were evaluated for readability, technical accuracy, and content inclusiveness. RESULTS: Original materials were difficult to read (DSCs grade level 13, MGs 22). LLMs significantly improved readability, reducing the grade levels to more accessible reading levels (single prompt - DSCs: ChatGPT 4.0 10.1, Gemini 8; MGs: ChatGPT 4.0 7.1, Gemini 6.5; multiple prompts - DSCs: ChatGPT 4.0 10.3, Gemini 7.5; MGs: ChatGPT 4.0 8, Gemini 6.8). LLM outputs retained technical accuracy and key messages. CONCLUSION: LLMs can significantly simplify complex health-related information, making it more accessible to patients. Future research should extend these findings to other languages and patient groups in real-world settings.
|
Sridharan K; Sivaramakrishnan G
| 10
|
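The grade levels reported in the record above are readability scores of the Flesch-Kincaid kind. As a rough illustration of how such a grade level is derived, the sketch below implements the standard Flesch-Kincaid grade formula with a crude vowel-group syllable heuristic; published studies typically rely on validated readability tools rather than this approximation, and the sample sentence is invented.

```python
# Rough illustration of the Flesch-Kincaid grade-level formula:
#   0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
# The syllable counter is a naive heuristic for demonstration only.
import re

def count_syllables(word):
    # Count groups of consecutive vowels as syllables (crude approximation).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

sample = "Do not stop taking this medicine without first talking to your doctor."
print(round(fk_grade(sample), 1))  # approximate grade level of the sample sentence
```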
|||
40093710
|
Advancing personalized medicine in digital health: The role of artificial intelligence in enhancing clinical interpretation of 24-h ambulatory blood pressure monitoring.
| 2,025
|
Digital health
|
Background: The use of artificial intelligence (AI) for interpreting ambulatory blood pressure monitoring (ABPM) data is gaining traction in clinical practice. Evaluating the accuracy of AI models, like ChatGPT 4.0, in clinical settings can inform their integration into healthcare processes. However, limited research has been conducted to validate the performance of such models against expert interpretations in real-world clinical scenarios. Methods: A total of 53 ABPM records from Mayo Clinic, Minnesota, were analyzed. ChatGPT 4.0's interpretations were compared with consensus results from two experienced nephrologists, based on the American College of Cardiology/American Heart Association (ACC/AHA) guidelines. The study assessed ChatGPT's accuracy and reliability over two rounds of testing, with a three-month interval between rounds. Results: ChatGPT achieved an accuracy of 87% for identifying hypertension, 89% for nocturnal hypertension, 81% for nocturnal dipping, and 94% for abnormal heart rate. ChatGPT correctly identified all conditions in 60% of ABPM records. The percentage agreement between the first and second round of ChatGPT's analysis was 81% in identifying hypertension, 85% in nocturnal hypertension, 89% in nocturnal dipping, and 94% in abnormal heart rate. There was no significant difference in accuracy between the first and second round (all p > 0.05). The Kappa statistic was 0.63 for identifying hypertension, 0.66 for nocturnal hypertension, 0.76 for nocturnal dipping, and 0.70 for abnormal heart rate. Conclusions: ChatGPT 4.0 demonstrates potential as a reliable tool for interpreting 24-h ABPM data, achieving substantial agreement with expert nephrologists. These findings underscore the potential for AI integration into hypertension management workflows, while highlighting the need for further validation in larger, diverse cohorts.
|
Alam SF; Thongprayoon C; Miao J; Pham JH; Sheikh MS; Garcia Valencia OA; Schwartz GL; Craici IM; Gonzalez Suarez ML; Cheungpasitporn W
| 10
|
|||
38180787
|
Artificial Intelligence in Medicine: Cross-Sectional Study Among Medical Students on Application, Education, and Ethical Aspects.
| 2,024
|
JMIR medical education
|
BACKGROUND: The use of artificial intelligence (AI) in medicine not only directly impacts the medical profession but is also increasingly associated with various potential ethical aspects. In addition, the expanding use of AI and AI-based applications such as ChatGPT demands a corresponding shift in medical education to adequately prepare future practitioners for the effective use of these tools and address the associated ethical challenges they present. OBJECTIVE: This study aims to explore how medical students from Germany, Austria, and Switzerland perceive the use of AI in medicine and the teaching of AI and AI ethics in medical education in accordance with their use of AI-based chat applications, such as ChatGPT. METHODS: This cross-sectional study, conducted from June 15 to July 15, 2023, surveyed medical students across Germany, Austria, and Switzerland using a web-based survey. This study aimed to assess students' perceptions of AI in medicine and the integration of AI and AI ethics into medical education. The survey, which included 53 items across 6 sections, was developed and pretested. Data analysis used descriptive statistics (median, mode, IQR, total number, and percentages) and either the chi-square or Mann-Whitney U tests, as appropriate. RESULTS: Surveying 487 medical students across Germany, Austria, and Switzerland revealed limited formal education on AI or AI ethics within medical curricula, although 38.8% (189/487) had prior experience with AI-based chat applications, such as ChatGPT. Despite varied prior exposures, 71.7% (349/487) anticipated a positive impact of AI on medicine. There was widespread consensus (385/487, 74.9%) on the need for AI and AI ethics instruction in medical education, although the current offerings were deemed inadequate. Regarding the AI ethics education content, all proposed topics were rated as highly relevant. CONCLUSIONS: This study revealed a pronounced discrepancy between the use of AI-based (chat) applications, such as ChatGPT, among medical students in Germany, Austria, and Switzerland and the teaching of AI in medical education. To adequately prepare future medical professionals, there is an urgent need to integrate the teaching of AI and AI ethics into the medical curricula.
|
Weidener L; Fischer M
| 10
|
|||
38049066
|
Effectiveness of ChatGPT in clinical pharmacy and the role of artificial intelligence in medication therapy management.
| 2,024
|
Journal of the American Pharmacists Association : JAPhA
|
BACKGROUND: The use of artificial intelligence (AI) to optimize medication therapy management (MTM) in identifying drug interactions may potentially improve MTM efficiency. ChatGPT, an AI language model, may be applied to identify medication interventions by integrating patient and drug databases. ChatGPT has been shown to be effective in other areas of clinical medicine, from diagnosis to patient management. However, ChatGPT's ability to manage MTM-related activities remains largely unknown. OBJECTIVES: To evaluate the effectiveness of ChatGPT in MTM services in simple, complex, and very complex cases to understand AI contributions in MTM. METHODS: Two clinical pharmacists rated and validated the difficulty of patient cases as simple, complex, or very complex. ChatGPT's response to the cases was assessed based on 3 criteria: the ability to identify drug interactions, precision in recommending alternatives, and appropriateness in devising management plans. Two clinical pharmacists validated the accuracy of ChatGPT's responses and compared them to actual answers for each complexity level. RESULTS: ChatGPT 4.0 accurately solved 39 out of 39 (100%) patient cases. ChatGPT successfully identified drug interactions, provided therapy recommendations, and formulated general management plans, but it did not recommend specific dosages. Results suggest it can assist pharmacists in formulating MTM plans to improve overall efficiency. CONCLUSION: The application of ChatGPT in MTM has the potential to enhance patient safety and involvement, lower healthcare costs, and assist healthcare providers in medication management and identifying drug interactions. Future pharmacists can utilize AI models such as ChatGPT to improve patient care. The future of the pharmacy profession will depend on how the field responds to the changing need for patient care optimized by AI and automation.
|
Roosan D; Padua P; Khan R; Khan H; Verzosa C; Wu Y
| 10
|
|||
38789962
|
Evaluating the accuracy of Chat Generative Pre-trained Transformer version 4 (ChatGPT-4) responses to United States Food and Drug Administration (FDA) frequently asked questions about dental amalgam.
| 2,024
|
BMC oral health
|
BACKGROUND: The use of artificial intelligence in the field of health sciences is becoming widespread. It is known that patients benefit from artificial intelligence applications on various health issues, especially after the pandemic period. One of the most important issues in this regard is the accuracy of the information provided by artificial intelligence applications. OBJECTIVE: The purpose of this study was to direct the frequently asked questions about dental amalgam, as determined by the United States Food and Drug Administration (FDA), which is one of these information resources, to Chat Generative Pre-trained Transformer version 4 (ChatGPT-4) and to compare the content of the answers given by the application with the answers of the FDA. METHODS: The questions were directed to ChatGPT-4 on May 8th and May 16th, 2023, and the responses were recorded and compared at the word and meaning levels using ChatGPT. The answers from the FDA webpage were also recorded. The responses were compared for content similarity in "Main Idea", "Quality Analysis", "Common Ideas", and "Inconsistent Ideas" between ChatGPT-4's responses and FDA's responses. RESULTS: ChatGPT-4 provided similar responses at one-week intervals. In comparison with FDA guidance, it provided answers with similar information content to frequently asked questions. However, although there were some similarities in the general aspects of the recommendation regarding amalgam removal in the question, the two texts are not the same, and they offered different perspectives on the replacement of fillings. CONCLUSIONS: The findings of this study indicate that ChatGPT-4, an artificial intelligence-based application, encompasses current and accurate information regarding dental amalgam and its removal, providing it to individuals seeking access to such information. Nevertheless, we believe that numerous studies are required to assess the validity and reliability of ChatGPT-4 across diverse subjects.
|
Buldur M; Sezer B
| 32
|
|||
40235859
|
Evaluating the accuracy of ChatGPT in delivering patient instructions for medications: an exploratory case study.
| 2,025
|
Frontiers in artificial intelligence
|
BACKGROUND: The use of ChatGPT in healthcare is still in its early stages; however, it has the potential to become a cornerstone in modern healthcare systems. This study aims to assess the accuracy of ChatGPT's output compared with that of CareNotes in providing patient instructions for three medications: tirzepatide, citalopram, and apixaban. METHODS: An exploratory case study was conducted using a published questionnaire to evaluate ChatGPT-generated reports against patient instructions from CareNotes. The evaluation focused on the completeness and correctness of the reports, as well as their potential to cause harm or lead to poor medication adherence. The evaluation was conducted by four pharmacy experts and 33 PharmD interns. RESULTS: The evaluators indicated that the ChatGPT reports for tirzepatide, citalopram, and apixaban were correct but lacked completeness. Additionally, ChatGPT reports have the potential to cause harm and may negatively affect medication adherence. CONCLUSION: Although ChatGPT demonstrated promising results, particularly in terms of correctness, it cannot yet be considered a reliable standalone source of patient drug information.
|
Abanmy NO; Al-Ghreimil N; Alsabhan JF; Al-Baity H; Aljadeed R
| 10
|
|||
40023616
|
Can ChatGPT-4 perform as a competent physician based on the Chinese critical care examination?
| 2,025
|
Journal of critical care
|
BACKGROUND: The use of ChatGPT in medical applications is of increasing interest. However, its efficacy in critical care medicine remains uncertain. This study aims to assess ChatGPT-4's performance in critical care examination, providing insights into its potential as a tool for clinical decision-making. METHODS: A dataset from the Chinese Health Professional Technical Qualification Examination for Critical Care Medicine, covering four components (fundamental knowledge, specialized knowledge, professional practical skills, and related medical knowledge), was utilized. ChatGPT-4 answered 600 questions, which were evaluated by critical care experts using a standardized rubric. RESULTS: ChatGPT-4 achieved a 73.5% success rate, surpassing the 60% passing threshold in four components, with the highest accuracy in fundamental knowledge (81.94%). ChatGPT-4 performed significantly better on single-choice questions than on multiple-choice questions (76.72% vs. 51.32%, p < 0.001), while no significant difference was observed between case-based and non-case-based questions. CONCLUSION: ChatGPT demonstrated notable strengths in critical care examination, highlighting its potential for supporting clinical decision-making, information retrieval, and medical education. However, caution is required regarding its potential to generate inaccurate responses. Its application in critical care must therefore be carefully supervised by medical professionals to ensure both the accuracy of the information and patient safety.
|
Wang X; Tang J; Feng Y; Tang C; Wang X
| 21
|
|||
37860071
|
Applications of ChatGPT and Large Language Models in Medicine and Health Care: Benefits and Pitfalls.
| 2,023
|
Federal practitioner : for the health care professionals of the VA, DoD, and PHS
|
BACKGROUND: The use of large language models like ChatGPT is becoming increasingly popular in health care settings. These artificial intelligence models are trained on vast amounts of data and can be used for various tasks, such as language translation, summarization, and answering questions. OBSERVATIONS: Large language models have the potential to revolutionize the industry by assisting medical professionals with administrative tasks, improving diagnostic accuracy, and engaging patients. However, pitfalls exist, such as its inability to distinguish between real and fake information and the need to comply with privacy, security, and transparency principles. CONCLUSIONS: Careful consideration is needed to ensure the responsible and ethical use of large language models in medicine and health care. "The development of [artificial intelligence] is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other." (Bill Gates)
|
Borkowski AA; Jakey CE; Mastorides SM; Kraus AL; Vidyarthi G; Viswanadhan N; Lezama JL
| 10
|
|||
38691128
|
[ChatGPT for use in technology-enhanced learning in anesthesiology and emergency medicine and potential clinical application of AI language models: Between hype and reality around artificial intelligence in medical use].
| 2,024
|
Die Anaesthesiologie
|
BACKGROUND: The utilization of AI language models in education and academia is currently a subject of research, and applications in clinical settings are also being tested. Studies conducted by various research groups have demonstrated that language models can answer questions related to medical board examinations, and there are potential applications of these models in medical education as well. RESEARCH QUESTION: This study aims to investigate the extent to which current-version language models prove effective for addressing medical inquiries, their potential utility in medical education, and the challenges that still exist in the functioning of AI language models. METHOD: The program ChatGPT, based on GPT 3.5, had to answer 1025 questions from the second part (M2) of the medical board examination. The study examined whether errors occurred and, if so, what types of errors occurred. Additionally, the language model was asked to generate essays on the learning objectives outlined in the standard curriculum for specialist training in anesthesiology and the supplementary qualification in emergency medicine. These essays were analyzed afterwards and checked for errors and anomalies. RESULTS: The findings indicated that ChatGPT was able to correctly answer the questions with an accuracy rate exceeding 69%, even when the questions included references to visual aids. This represented an improvement in the accuracy of answering board examination questions compared to a study conducted in March; however, when it came to generating essays, a high error rate was observed. DISCUSSION: Considering the current pace of ongoing improvements in AI language models, widespread clinical implementation, especially in emergency departments as well as emergency and intensive care medicine with the assistance of medical trainees, is a plausible scenario. These models can provide insights to support medical professionals in their work, without relying solely on the language model. Although the use of these models in education holds promise, it currently requires a significant amount of supervision. Due to hallucinations caused by inadequate training environments for the language model, the generated texts might deviate from the current state of scientific knowledge. Direct deployment in patient care settings without permanent physician supervision does not yet appear to be achievable.
|
Humbsch P; Horn E; Bohm K; Gintrowicz R
| 21
|
|||
40432855
|
Performance of Artificial Intelligence in Addressing Questions Regarding the Management of Pediatric Supracondylar Humerus Fractures.
| 2,025
|
Journal of the Pediatric Orthopaedic Society of North America
|
BACKGROUND: The vast accessibility of artificial intelligence (AI) has enabled its utilization in medicine to improve patient education, augment patient-physician communications, support research efforts, and enhance medical student education. However, there is significant concern that these models may provide responses that are incorrect, biased, or lacking in the required nuance and complexity of best practice clinical decision-making. Currently, there is a paucity of literature comparing the quality and reliability of AI-generated responses. The purpose of this study was to assess the ability of ChatGPT and Gemini to generate responses to the 2022 American Academy of Orthopaedic Surgeons' (AAOS) current practice guidelines on pediatric supracondylar humerus fractures. We hypothesized that both ChatGPT and Gemini would demonstrate high-quality, evidence-based responses with no significant difference between the models across evaluation criteria. METHODS: The responses from ChatGPT and Gemini to prompts based on the 14 AAOS guidelines were evaluated by seven fellowship-trained pediatric orthopaedic surgeons using a questionnaire to assess five key characteristics on a scale from 1 to 5. The prompts were categorized into nonoperative or preoperative management and diagnosis, surgical timing and technique, and rehabilitation and prevention. Statistical analysis included mean scoring, standard deviation, and two-sided t-tests to compare the performance between ChatGPT and Gemini. Scores were then evaluated for inter-rater reliability. RESULTS: ChatGPT and Gemini demonstrated consistent performance across the criteria, with high mean scores across all criteria except for evidence-based responses. Mean scores were highest for clarity (ChatGPT: 3.745 +/- 0.237, Gemini: 4.388 +/- 0.154) and lowest for evidence-based responses (ChatGPT: 1.816 +/- 0.181, Gemini: 3.765 +/- 0.229). There were notable statistically significant differences across all criteria, with Gemini having higher mean scores in each criterion (P < .001). Gemini achieved statistically higher ratings in the relevance (P = .03) and evidence-based (P < .001) criteria. Both large language models (LLMs) performed comparably in the accuracy, clarity, and completeness criteria (P > .05). CONCLUSIONS: ChatGPT and Gemini produced responses aligned with the 2022 AAOS current guideline practices for pediatric supracondylar humerus fractures. Gemini outperformed ChatGPT across all criteria, with the greatest difference in scores seen in the evidence-based category. This study emphasizes the potential for LLMs, particularly Gemini, to provide pertinent clinical information for managing pediatric supracondylar humerus fractures.
KEY CONCEPTS: (1) The accessibility of artificial intelligence has enabled its utilization in medicine to improve patient education, support research efforts, enhance medical student education, and augment patient-physician communications. (2) There is a significant concern that artificial intelligence may provide responses that are incorrect, biased, or lacking in the required nuance and complexity of best practice clinical decision-making. (3) There is a paucity of literature comparing the quality and reliability of AI-generated responses regarding management of pediatric supracondylar humerus fractures. (4) In our study, both ChatGPT and Gemini produced responses that were well aligned with the AAOS current guideline practices for pediatric supracondylar humerus fractures; however, Gemini outperformed ChatGPT across all criteria, with the greatest difference in scores seen in the evidence-based category. LEVEL OF EVIDENCE: Level II.
|
Milner JD; Quinn MS; Schmitt P; Knebel A; Henstenburg J; Nasreddine A; Boulos AR; Schiller JR; Eberson CP; Cruz AI Jr
| 32
|
|||
40318209
|
Advancements in large language model accuracy for answering physical medicine and rehabilitation board review questions.
| 2,025
|
PM & R : the journal of injury, function, and rehabilitation
|
BACKGROUND: There have been significant advances in machine learning and artificial intelligence technology over the past few years, leading to the release of large language models (LLMs) such as ChatGPT. There are many potential applications for LLMs in health care, but it is critical to first determine how accurate LLMs are before putting them into practice. No studies have evaluated the accuracy and precision of LLMs in responding to questions related to the field of physical medicine and rehabilitation (PM&R). OBJECTIVE: To determine the accuracy and precision of two OpenAI LLMs (GPT-3.5, released in November 2022, and GPT-4o, released in May 2024) in answering questions related to PM&R knowledge. DESIGN: Cross-sectional study. Both LLMs were tested on the same 744 PM&R knowledge questions that covered all aspects of the field (general rehabilitation, stroke, traumatic brain injury, spinal cord injury, musculoskeletal medicine, pain medicine, electrodiagnostic medicine, pediatric rehabilitation, prosthetics and orthotics, rheumatology, and pharmacology). Each LLM was tested three times on the same question set to assess for precision. SETTING: N/A. PATIENTS: N/A. INTERVENTIONS: N/A. MAIN OUTCOME MEASURE: Percentage of correctly answered questions. RESULTS: For three runs of the 744-question set, GPT-3.5 answered 56.3%, 56.5%, and 56.9% of the questions correctly. For three runs of the same question set, GPT-4o answered 83.6%, 84%, and 84.1% of the questions correctly. GPT-4o outperformed GPT-3.5 in all subcategories of PM&R questions. CONCLUSIONS: LLM technology is rapidly advancing, with the more recent GPT-4o model performing much better on PM&R knowledge questions compared to GPT-3.5. There is potential for LLMs in augmenting clinical practice, medical training, and patient education. However, the technology has limitations and physicians should remain cautious in using it in practice at this time.
|
Bitterman J; D'Angelo A; Holachek A; Eubanks JE
| 21
|
|||
39378442
|
Assessment of ChatGPT-4 in Family Medicine Board Examinations Using Advanced AI Learning and Analytical Methods: Observational Study.
| 2,024
|
JMIR medical education
|
BACKGROUND: This research explores the capabilities of ChatGPT-4 in passing the American Board of Family Medicine (ABFM) Certification Examination. Addressing a gap in existing literature, where earlier artificial intelligence (AI) models showed limitations in medical board examinations, this study evaluates the enhanced features and potential of ChatGPT-4, especially in document analysis and information synthesis. OBJECTIVE: The primary goal is to assess whether ChatGPT-4, when provided with extensive preparation resources and when using sophisticated data analysis, can achieve a score equal to or above the passing threshold for the Family Medicine Board Examinations. METHODS: In this study, ChatGPT-4 was embedded in a specialized subenvironment, "AI Family Medicine Board Exam Taker," designed to closely mimic the conditions of the ABFM Certification Examination. This subenvironment enabled the AI to access and analyze a range of relevant study materials, including a primary medical textbook and supplementary web-based resources. The AI was presented with a series of ABFM-type examination questions, reflecting the breadth and complexity typical of the examination. Emphasis was placed on assessing the AI's ability to interpret and respond to these questions accurately, leveraging its advanced data processing and analysis capabilities within this controlled subenvironment. RESULTS: In our study, ChatGPT-4's performance was quantitatively assessed on 300 practice ABFM examination questions. The AI achieved a correct response rate of 88.67% (95% CI 85.08%-92.25%) for the Custom Robot version and 87.33% (95% CI 83.57%-91.10%) for the Regular version. Statistical analysis, including the McNemar test (P=.45), indicated no significant difference in accuracy between the 2 versions. In addition, the chi-square test for error-type distribution (P=.32) revealed no significant variation in the pattern of errors across versions. These results highlight ChatGPT-4's capacity for high-level performance and consistency in responding to complex medical examination questions under controlled conditions. CONCLUSIONS: The study demonstrates that ChatGPT-4, particularly when equipped with specialized preparation and when operating in a tailored subenvironment, shows promising potential in handling the intricacies of medical board examinations. While its performance is comparable with the expected standards for passing the ABFM Certification Examination, further enhancements in AI technology and tailored training methods could push these capabilities to new heights. This exploration opens avenues for integrating AI tools such as ChatGPT-4 in medical education and assessment, emphasizing the importance of continuous advancement and specialized training in medical applications of AI.
|
Goodings AJ; Kajitani S; Chhor A; Albakri A; Pastrak M; Kodancha M; Ives R; Lee YB; Kajitani K
| 21
|
|||
39238293
|
ChatGPT in Clinical Medicine, Urology and Academia: A Review.
| 2,024
|
Archivos espanoles de urologia
|
BACKGROUND: This study aims to provide a comprehensive overview of the current literature on the utilisation of ChatGPT in the fields of clinical medicine, urology, and academic medicine, while also addressing the associated ethical challenges and potential risks. METHODS: This narrative review conducted an extensive search of the PubMed and MEDLINE databases, covering the period from January 2022 to January 2024. The search phrases employed were "urologic surgery" in conjunction with "artificial intelligence", "machine learning", "neural network", "ChatGPT", "urology", and "medicine". The initial studies were chosen from the screened research to examine the possible interaction between those entities. Research utilising animal models was excluded. RESULTS: ChatGPT has demonstrated its usefulness in clinical settings by producing precise clinical correspondence, discharge summaries, and medical records, thereby assisting in these laborious tasks, especially with the latest iterations of ChatGPT. Furthermore, patients can access essential medical information by inquiring with ChatGPT. Nevertheless, there are multiple concerns regarding the correctness of the system, including allegations of falsified data and references. These issues emphasise the importance of having a doctor oversee the final result to guarantee patient safety. ChatGPT shows potential in academic medicine for generating drafts and organising datasets. However, the presence of guidelines and plagiarism-detection technologies is necessary to mitigate the risks of plagiarism and the use of faked data when using it for academic purposes. CONCLUSIONS: ChatGPT should be utilised as a supplementary tool by urologists and academicians. However, it is now advisable to have human oversight to guarantee patient safety, uphold academic integrity, and maintain transparency.
|
Tzelves L; Kapriniotis K; Feretzakis G; Katsimperis S; Manolitsis I; Juliebo-Jones P; Pietropaolo A; Tonyali S; Bellos T; Somani B
| 0-1
|
|||
38928668
|
Google Bard and ChatGPT in Orthopedics: Which Is the Better Doctor in Sports Medicine and Pediatric Orthopedics? The Role of AI in Patient Education.
| 2,024
|
Diagnostics (Basel, Switzerland)
|
BACKGROUND: This study evaluates the potential of ChatGPT and Google Bard as educational tools for patients in orthopedics, focusing on sports medicine and pediatric orthopedics. The aim is to compare the quality of responses provided by these natural language processing (NLP) models, addressing concerns about the potential dissemination of incorrect medical information. METHODS: Ten ACL- and flat foot-related questions from a Google search were presented to ChatGPT-3.5 and Google Bard. Expert orthopedic surgeons rated the responses using the Global Quality Score (GQS). The study minimized bias by clearing chat history before each question, maintaining respondent anonymity and employing statistical analysis to compare response quality. RESULTS: ChatGPT-3.5 and Google Bard yielded good-quality responses, with average scores of 4.1 +/- 0.7 and 4 +/- 0.78, respectively, for sports medicine. For pediatric orthopedics, Google Bard scored 3.5 +/- 1, while the average score for responses generated by ChatGPT was 3.8 +/- 0.83. In both cases, no statistically significant difference was found between the platforms (p = 0.6787, p = 0.3092). Despite ChatGPT's responses being considered more readable, both platforms showed promise for AI-driven patient education, with no reported misinformation. CONCLUSIONS: ChatGPT and Google Bard demonstrate significant potential as supplementary patient education resources in orthopedics. However, improvements are needed for increased reliability. The study underscores the evolving role of AI in orthopedics and calls for continued research to ensure a conscientious integration of AI in healthcare education.
|
Giorgino R; Alessandri-Bonetti M; Del Re M; Verdoni F; Peretti GM; Mangiavini L
| 32
|
|||
38638404
|
Evaluating ChatGPT's efficacy in assessing the safety of non-prescription medications and supplements in patients with kidney disease.
| 2,024
|
Digital health
|
BACKGROUND: This study investigated the efficacy of ChatGPT-3.5 and ChatGPT-4 in assessing drug safety for patients with kidney diseases, comparing their performance to Micromedex, a well-established drug information source. Despite the perception of non-prescription medications and supplements as safe, risks exist, especially for those with kidney issues. The study's goal was to evaluate ChatGPT's versions for their potential in clinical decision-making regarding kidney disease patients. METHOD: The research involved analyzing 124 common non-prescription medications and supplements using ChatGPT-3.5 and ChatGPT-4 with queries about their safety for people with kidney disease. The AI responses were categorized as "generally safe," "potentially harmful," or "unknown toxicity." Simultaneously, these medications and supplements were assessed in Micromedex using similar categories, allowing for a comparison of the concordance between the two resources. RESULTS: Micromedex identified 85 (68.5%) medications as generally safe, 35 (28.2%) as potentially harmful, and 4 (3.2%) of unknown toxicity. ChatGPT-3.5 identified 89 (71.8%) as generally safe, 11 (8.9%) as potentially harmful, and 24 (19.3%) of unknown toxicity. GPT-4 identified 82 (66.1%) as generally safe, 29 (23.4%) as potentially harmful, and 13 (10.5%) of unknown toxicity. The overall agreement between Micromedex and ChatGPT-3.5 was 64.5% and ChatGPT-4 demonstrated a higher agreement at 81.4%. Notably, ChatGPT-3.5's suboptimal performance was primarily influenced by a lower concordance rate among supplements, standing at 60.3%. This discrepancy could be attributed to the limited data on supplements within ChatGPT-3.5, with supplements constituting 80% of medications identified as unknown. CONCLUSION: ChatGPT's capabilities in evaluating the safety of non-prescription drugs and supplements for kidney disease patients are modest compared to established drug information resources. Neither ChatGPT-3.5 nor ChatGPT-4 can be currently recommended as reliable drug information sources for this demographic. The results highlight the need for further improvements in the model's accuracy and reliability in the medical domain.
|
Sheikh MS; Barreto EF; Miao J; Thongprayoon C; Gregoire JR; Dreesman B; Erickson SB; Craici IM; Cheungpasitporn W
| 10
|
|||
40083047
|
While GPT-3.5 is unable to pass the Physician Licensing Exam in Taiwan, GPT-4 successfully meets the criteria.
| 2,025
|
Journal of the Chinese Medical Association : JCMA
|
BACKGROUND: This study investigates the performance of ChatGPT-3.5 and ChatGPT-4 in answering medical questions from Taiwan's Physician Licensing Exam, ranging from basic medical knowledge to specialized clinical topics. It aims to understand these artificial intelligence (AI) models' capabilities in a non-English context, specifically traditional Chinese. METHODS: The study incorporated questions from the Taiwan Physician Licensing Exam in 2022, excluding image-based queries. Each question was manually input into ChatGPT, and responses were compared with official answers from Taiwan's Ministry of Examination. Differences across specialties and question types were assessed using the Kruskal-Wallis and Fisher's exact tests. RESULTS: ChatGPT-3.5 achieved an average accuracy of 67.7% in basic medical sciences and 53.2% in clinical medicine. Meanwhile, ChatGPT-4 significantly outperformed ChatGPT-3.5, with average accuracies of 91.9% and 90.7%, respectively. ChatGPT-3.5 scored above 60.0% in seven out of 10 basic medical science subjects and three of 14 clinical subjects, while ChatGPT-4 scored above 60.0% in every subject. The type of question did not significantly affect accuracy rates. CONCLUSION: ChatGPT-3.5 showed proficiency in basic medical sciences but was less reliable in clinical medicine, whereas ChatGPT-4 demonstrated strong capabilities in both areas. However, their proficiency varied across different specialties. The type of question had minimal impact on performance. This study highlights the potential of AI models in medical education and in non-English-language examinations, as well as the need for cautious and informed implementation in educational settings due to variability across specialties.
|
Chen TA; Lin KC; Lin MH; Chang HT; Chen YC; Chen TJ
| 21
|
|||
40273640
|
ChatGPT-supported patient triage with voice commands in the emergency department: A prospective multicenter study.
| 2,025
|
The American journal of emergency medicine
|
BACKGROUND: Triage aims to prioritize patients according to their medical urgency by accurately evaluating their clinical conditions, managing waiting times efficiently, and improving the overall effectiveness of emergency care. This study aims to assess ChatGPT's performance in patient triage across four emergency departments with varying dynamics and to provide a detailed analysis of its strengths and weaknesses. METHODS: In this multicenter, prospective study, we compared the triage decisions made by ChatGPT-4o and the triage personnel with the gold standard decisions determined by an emergency medicine (EM) specialist. In the hospitals where we conducted the study, triage teams routinely direct patients to the appropriate ED areas based on the Emergency Severity Index (ESI) system and the hospital's local triage protocols. During the study period, the triage team collected patient data, including chief complaints, comorbidities, and vital signs, and used this information to make the initial triage decisions. An independent physician simultaneously entered the same data into ChatGPT using voice commands. At the same time, an EM specialist, present in the triage room throughout the study period, reviewed the same patient data and determined the gold standard triage decisions, strictly adhering to both the hospital's local protocols and the ESI system. Before initiating the study, we customized ChatGPT for each hospital by designing prompts that incorporated both the general principles of the ESI triage system and the specific triage rules of each hospital. The model's overall, hospital-based, and area-based performance was evaluated, with Cohen's Kappa, F1 score, and performance analyses conducted. RESULTS: This study included 6657 patients. The overall agreement between triage personnel and GPT-4o with the gold standard was nearly perfect (Cohen's kappa = 0.782 and 0.833, respectively). The overall F1 score was 0.863 for the triage team, while GPT-4 achieved an F1 score of 0.897, demonstrating superior performance. ROC curve analysis showed the lowest performance in the yellow zone of a tertiary hospital (AUC = 0.75) and in the red zone of another tertiary hospital (AUC = 0.78). However, overall, AUC values greater than 0.90 were observed, indicating high accuracy. CONCLUSION: ChatGPT generally outperformed triage personnel in patient triage across emergency departments with varying conditions, demonstrating high agreement with the gold standard decision. However, in tertiary hospitals, its performance was relatively lower in triaging patients with more complex symptoms, particularly those requiring triage to the yellow and red zones.
|
Pasli S; Yadigaroglu M; Kirimli EN; Beser MF; Unutmaz I; Ayhan AO; Karakurt B; Sahin AS; Hicyilmaz HI; Imamoglu M
| 10
|
|||
38596832
|
Using ChatGPT in Psychiatry to Design Script Concordance Tests in Undergraduate Medical Education: Mixed Methods Study.
| 2,024
|
JMIR medical education
|
BACKGROUND: Undergraduate medical studies represent a wide range of learning opportunities served in the form of various teaching-learning modalities for medical learners. A clinical scenario is frequently used as a modality, followed by multiple-choice and open-ended questions among other learning and teaching methods. As such, script concordance tests (SCTs) can be used to promote a higher level of clinical reasoning. Recent technological developments have made generative artificial intelligence (AI)-based systems such as ChatGPT (OpenAI) available to assist clinician-educators in creating instructional materials. OBJECTIVE: The main objective of this project is to explore how SCTs generated by ChatGPT compared to SCTs produced by clinical experts on 3 major elements: the scenario (stem), clinical questions, and expert opinion. METHODS: This mixed method study evaluated 3 ChatGPT-generated SCTs with 3 expert-created SCTs using a predefined framework. Clinician-educators as well as resident doctors in psychiatry involved in undergraduate medical education in Quebec, Canada, evaluated via a web-based survey the 6 SCTs on 3 criteria: the scenario, clinical questions, and expert opinion. They were also asked to describe the strengths and weaknesses of the SCTs. RESULTS: A total of 102 respondents assessed the SCTs. There were no significant distinctions between the 2 types of SCTs concerning the scenario (P=.84), clinical questions (P=.99), and expert opinion (P=.07), as interpreted by the respondents. Indeed, respondents struggled to differentiate between ChatGPT- and expert-generated SCTs. ChatGPT showcased promise in expediting SCT design, aligning well with Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition criteria, albeit with a tendency toward caricatured scenarios and simplistic content. CONCLUSIONS: This study is the first to concentrate on the design of SCTs supported by AI in a period where medicine is changing swiftly and where AI-generated technologies are expanding much faster. This study suggests that ChatGPT can be a valuable tool in creating educational materials, and further validation is essential to ensure educational efficacy and accuracy.
|
Hudon A; Kiepura B; Pelletier M; Phan V
| 10
|
|||
38328046
|
A Comprehensive Evaluation of Large Language Models in Mining Gene Interactions and Pathway Knowledge.
| 2,024
|
bioRxiv : the preprint server for biology
|
BACKGROUND: Understanding complex biological pathways, including gene-gene interactions and gene regulatory networks, is critical for exploring disease mechanisms and drug development. Manual literature curation of biological pathways is useful but cannot keep up with the exponential growth of the literature. Large-scale language models (LLMs), notable for their vast parameter sizes and comprehensive training on extensive text corpora, have great potential in automated text mining of biological pathways. METHOD: This study assesses the effectiveness of 21 LLMs, including both API-based models and open-source models. The evaluation focused on two key aspects: gene regulatory relations (specifically, 'activation', 'inhibition', and 'phosphorylation') and KEGG pathway component recognition. The performance of these models was analyzed using statistical metrics such as precision, recall, F1 scores, and the Jaccard similarity index. RESULTS: Our results indicated a significant disparity in model performance. Among the API-based models, ChatGPT-4 and Claude-Pro showed superior performance, with F1 scores of 0.4448 and 0.4386 for gene regulatory relation prediction, and Jaccard similarity indices of 0.2778 and 0.2657 for KEGG pathway prediction, respectively. Open-source models lagged behind their API-based counterparts; Falcon-180b-chat and llama1-7b led with the highest performance in gene regulatory relations (F1 of 0.2787 and 0.1923, respectively) and KEGG pathway recognition (Jaccard similarity index of 0.2237 and 0.2207, respectively). CONCLUSION: LLMs are valuable in biomedical research, especially in gene network analysis and pathway mapping. However, their effectiveness varies, necessitating careful model selection. This work also provided a case study and insight into using LLMs as knowledge graphs.
|
Azam M; Chen Y; Arowolo MO; Liu H; Popescu M; Xu D
| 10
|
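An illustrative calculation of the two metrics used above: F1 from precision and recall for relation extraction, and the Jaccard similarity index for pathway gene-set overlap (the counts and gene names are made up, not taken from the paper):

# Toy counts for predicted gene regulatory relations
tp, fp, fn = 40, 55, 45
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# Jaccard similarity between a predicted and a reference KEGG pathway gene set
predicted = {"TP53", "MDM2", "CDKN1A", "BAX"}
reference = {"TP53", "MDM2", "CDKN1A", "ATM", "CHEK2"}
jaccard = len(predicted & reference) / len(predicted | reference)
print(f"F1 = {f1:.4f}, Jaccard = {jaccard:.4f}")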
|||
40274037
|
Leveraging natural language processing to elucidate real-world clinical decision-making paradigms: A proof of concept study.
| 2,025
|
Journal of biomedical informatics
|
BACKGROUND: Understanding how clinicians arrive at decisions in actual practice settings is vital for advancing personalized, evidence-based care. However, systematic analysis of qualitative decision data poses challenges. METHODS: We analyzed transcribed interviews with Hebrew-speaking clinicians on decision processes using natural language processing (NLP). Word frequency analysis characterized terminology use, while large language models (ChatGPT from OpenAI and Gemini by Google) identified potential cognitive paradigms. RESULTS: Word frequency analysis of clinician interviews identified experience and knowledge as most influential on decision-making. NLP tentatively recognized heuristics-based reasoning grounded in past cases and intuition as dominant cognitive paradigms. Elements of shared decision-making through individualizing care with patients and families were also observed. Limited Hebrew clinical language resources required developing preliminary lexicons and dynamically adjusting stopwords. Findings also provided preliminary support for heuristics guiding clinical judgment while highlighting needs for broader sampling and enhanced analytical frameworks. CONCLUSIONS: This study represents the first use of integrated qualitative and computational methods to systematically elucidate clinical decision-making. Findings supported experience-based heuristics guiding cognition. With methodological enhancements, similar analyses could transform global understanding of tailored care delivery. Standardizing interdisciplinary collaborations on developing NLP tools and analytical frameworks may advance equitable, evidence-based healthcare by elucidating real-world clinical reasoning processes across diverse populations and settings.
|
Alon Y; Naimi E; Levin C; Videl H; Saban M
| 0-1
|
|||
37830257
|
Reshaping medical education: Performance of ChatGPT on a PES medical examination.
| 2,024
|
Cardiology journal
|
BACKGROUND: We are currently experiencing a third digital revolution driven by artificial intelligence (AI), and the emergence of new chat generative pre-trained transformer (ChatGPT) represents a significant technological advancement with profound implications for global society, especially in the field of education. METHODS: The aim of this study was to see how well ChatGPT performed on medical school exams and to highlight how it might change medical education and practice. Recently, OpenAI's ChatGPT (OpenAI, San Francisco; GPT-4 May 24 Version) was put to the test against a significant Polish medical specialization licensing exam (PES), and the results are in. The version of ChatGPT-4 used in this study was the most up-to-date model at the time of publication (GPT-4). ChatGPT answered questions from June 28, 2023, to June 30, 2023. RESULTS: ChatGPT demonstrates notable advancements in natural language processing models on the tasks of medical question answering. In June 2023, the performance of ChatGPT was assessed based on its ability to answer a set of 120 questions, where it achieved a correct response rate of 67.1%, accurately responding to 80 questions. CONCLUSIONS: ChatGPT may be used as an assistance tool in medical education. While ChatGPT can serve as a valuable tool in medical education, it cannot fully replace human expertise and knowledge due to its inherent limitations.
|
Wojcik S; Rulkiewicz A; Pruszczyk P; Lisik W; Pobozy M; Domienik-Karlowicz J
| 21
|
|||
38564282
|
A Comparative Study of Large Language Models, Human Experts, and Expert-Edited Large Language Models to Neuro-Ophthalmology Questions.
| 2,025
|
Journal of neuro-ophthalmology : the official journal of the North American Neuro-Ophthalmology Society
|
BACKGROUND: While large language models (LLMs) are increasingly used in medicine, their effectiveness compared with human experts remains unclear. This study evaluates the quality and empathy of Expert + AI, human experts, and LLM responses in neuro-ophthalmology. METHODS: This randomized, masked, multicenter cross-sectional study was conducted from June to July 2023. We randomly assigned 21 neuro-ophthalmology questions to 13 experts. Each expert provided an answer and then edited a ChatGPT-4-generated response, timing both tasks. In addition, 5 LLMs (ChatGPT-3.5, ChatGPT-4, Claude 2, Bing, Bard) generated responses. Anonymized and randomized responses from Expert + AI, human experts, and LLMs were evaluated by the remaining 12 experts. The main outcome was the mean score for quality and empathy, rated on a 1-5 scale. RESULTS: Significant differences existed between response types for both quality and empathy ( P < 0.0001, P < 0.0001). For quality, Expert + AI (4.16 +/- 0.81) performed the best, followed by GPT-4 (4.04 +/- 0.92), GPT-3.5 (3.99 +/- 0.87), Claude (3.6 +/- 1.09), Expert (3.56 +/- 1.01), Bard (3.5 +/- 1.15), and Bing (3.04 +/- 1.12). For empathy, Expert + AI (3.63 +/- 0.87) had the highest score, followed by GPT-4 (3.6 +/- 0.88), Bard (3.54 +/- 0.89), GPT-3.5 (3.5 +/- 0.83), Bing (3.27 +/- 1.03), Expert (3.26 +/- 1.08), and Claude (3.11 +/- 0.78). For quality ( P < 0.0001) and empathy ( P = 0.002), Expert + AI performed better than Expert. Time taken for expert-created and expert-edited LLM responses was similar ( P = 0.75). CONCLUSIONS: Expert-edited LLM responses had the highest expert-determined ratings of quality and empathy warranting further exploration of their potential benefits in clinical settings.
|
Tailor PD; Dalvin LA; Starr MR; Tajfirouz DA; Chodnicki KD; Brodsky MC; Mansukhani SA; Moss HE; Lai KE; Ko MW; Mackay DD; Di Nome MA; Dumitrascu OM; Pless ML; Eggenberger ER; Chen JJ
| 0-1
|
|||
39910559
|
AI-based medical ethics education: examining the potential of large language models as a tool for virtue cultivation.
| 2,025
|
BMC medical education
|
BACKGROUND: With artificial intelligence (AI) increasingly revolutionising medicine, this study critically evaluates the integration of large language models (LLMs), known for advanced text processing and generation capabilities, in medical ethics education, focusing on promoting virtue. Positing LLMs as central to mimicking nuanced human communication, it examines their use in medical education and the ethicality of embedding AI in such contexts. METHOD: Using a hybrid approach that combines principlist and non-principlist methodologies, we position LLMs as exemplars and advisors. RESULTS: We discuss the imperative for including AI ethics in medical curricula and its utility as an educational tool, identify the lack of educational resources in medical ethics education, and advocate for future LLMs to mitigate this problem as a "second-best" tool. We also emphasise the critical importance of instilling virtue in medical ethics education and illustrate how LLMs can effectively impart moral knowledge and model virtue cultivation. We address expected counter-arguments to using LLMs in this area and explain their profound potential to enrich medical ethics education, including facilitating the acquisition of moral knowledge and developing ethically grounded practitioners. CONCLUSIONS: The study involved a comprehensive exploration of the function of LLMs in medical ethics education, positing that tools such as ChatGPT can profoundly enhance the learning experience in the future. This is achieved through tailored, interactive educational encounters while addressing the ethical nuances of their use in educational settings.
|
Okamoto S; Kataoka M; Itano M; Sawai T
| 10
|
|||
37750374
|
GPTZero Performance in Identifying Artificial Intelligence-Generated Medical Texts: A Preliminary Study.
| 2,023
|
Journal of Korean medical science
|
BACKGROUND: With emergence of chatbots to help authors with scientific writings, editors should have tools to identify artificial intelligence-generated texts. GPTZero is among the first websites that has sought media attention claiming to differentiate machine-generated from human-written texts. METHODS: Using 20 text pieces generated by ChatGPT in response to arbitrary questions on various topics in medicine and 30 pieces chosen from previously published medical articles, the performance of GPTZero was assessed. RESULTS: GPTZero had a sensitivity of 0.65 (95% confidence interval, 0.41-0.85); specificity, 0.90 (0.73-0.98); accuracy, 0.80 (0.66-0.90); and positive and negative likelihood ratios, 6.5 (2.1-19.9) and 0.4 (0.2-0.7), respectively. CONCLUSION: GPTZero has a low false-positive (classifying a human-written text as machine-generated) and a high false-negative rate (classifying a machine-generated text as human-written).
|
Habibzadeh F
| 10
|
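The diagnostic statistics above follow directly from the 20 machine-generated and 30 human-written texts; a short back-calculation (counts inferred from the reported rates) reproduces the point estimates:

tp, fn = 13, 7    # machine-generated texts flagged / missed (13/20 -> sensitivity 0.65)
tn, fp = 27, 3    # human-written texts classed as human / as AI (27/30 -> specificity 0.90)

sensitivity = tp / (tp + fn)                   # 0.65
specificity = tn / (tn + fp)                   # 0.90
accuracy = (tp + tn) / (tp + fn + tn + fp)     # 0.80
lr_positive = sensitivity / (1 - specificity)  # 6.5
lr_negative = (1 - sensitivity) / specificity  # ~0.39, reported as 0.4
print(sensitivity, specificity, accuracy, lr_positive, lr_negative)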
|||
38970829
|
Comparative outcomes of AI-assisted ChatGPT and face-to-face consultations in infertility patients: a cross-sectional study.
| 2,024
|
Postgraduate medical journal
|
BACKGROUND: With the advent of artificial intelligence (AI) in healthcare, digital platforms like ChatGPT offer innovative alternatives to traditional medical consultations. This study seeks to understand the comparative outcomes of AI-assisted ChatGPT consultations and conventional face-to-face interactions among infertility patients. METHODS: A cross-sectional study was conducted involving 120 infertility patients, split evenly between those consulting via ChatGPT and traditional face-to-face methods. The primary outcomes assessed were patient satisfaction, understanding, and consultation duration. Secondary outcomes included demographic information, clinical history, and subsequent actions post-consultation. RESULTS: While both consultation methods had a median age of 34 years, patients using ChatGPT reported significantly higher satisfaction levels (median 4 out of 5) compared to face-to-face consultations (median 3 out of 5; p < 0.001). The ChatGPT group also experienced shorter consultation durations, with a median difference of 12.5 minutes (p < 0.001). However, understanding, demographic distributions, and subsequent actions post-consultation were comparable between the two groups. CONCLUSIONS: AI-assisted ChatGPT consultations offer a promising alternative to traditional face-to-face consultations in assisted reproductive medicine. While patient satisfaction was higher and consultation durations were shorter with ChatGPT, further studies are required to understand the long-term implications and clinical outcomes associated with AI-driven medical consultations. Key Messages What is already known on this topic: Artificial intelligence (AI) applications, such as ChatGPT, have shown potential in various healthcare settings, including primary care and mental health support. Infertility is a significant global health issue that requires extensive consultations, often facing challenges such as long waiting times and varied patient satisfaction. Previous studies suggest that AI can offer personalized care and immediate feedback, but its efficacy compared with traditional consultations in reproductive medicine was not well-studied. What this study adds: This study demonstrates that AI-assisted ChatGPT consultations result in significantly higher patient satisfaction and shorter consultation durations compared with traditional face-to-face consultations among infertility patients. Both consultation methods were comparable in terms of patient understanding, demographic distributions, and subsequent actions postconsultation. How this study might affect research, practice, or policy: The findings suggest that AI-driven consultations could serve as an effective and efficient alternative to traditional methods, potentially reducing consultation times and improving patient satisfaction in reproductive medicine. Further research could explore the long-term impacts and broader applications of AI in clinical settings, influencing future healthcare practices and policies toward integrating AI technologies.
|
Cheng S; Xiao Y; Liu L; Sun X
| 0-1
|
|||
39933171
|
Smart Pharmaceutical Monitoring System With Personalized Medication Schedules and Self-Management Programs for Patients With Diabetes: Development and Evaluation Study.
| 2,025
|
Journal of medical Internet research
|
BACKGROUND: With the climbing incidence of type 2 diabetes, the health care system is under pressure to manage patients with this condition properly. Particularly, pharmacological therapy constitutes the most fundamental means of controlling blood glucose levels and preventing the progression of complications. However, its effectiveness is often hindered by factors such as treatment complexity, polypharmacy, and poor patient adherence. Artificial intelligence and digital technologies are reaching all aspects of medicine and health care, but their application and evaluation in diabetes research remain limited. OBJECTIVE: This study aims to develop and establish a stand-alone diabetes management service system designed to enhance self-management support for patients, as well as to assess its performance with experienced health care professionals. METHODS: The Diabetes Universal Medication Schedule (DUMS) system is grounded in official medicine instructions and evidence-based data to establish medication constraints and drug-drug interaction profiles. Individualized medication schedules and self-management programs were generated based on patient-specific conditions and needs, using an app framework to build patient-side contact pathways. The system's ability to provide medication guidance and health management was assessed by senior health care professionals using a 5-point Likert scale across 3 groups: outputs generated by the system (DUMS group), outputs refined by pharmacists (intervention group), and outputs generated by ChatGPT-4 (GPT-4 group). RESULTS: We constructed a cloud-based drug information management system loaded with 475 diabetes treatment-related medications; 684 medication constraints; and 12,351 drug-drug interactions and theoretical supports. The generated personalized medication plan and self-management program included recommended dosing times, disease education, dietary considerations, and lifestyle recommendations to help patients with diabetes achieve correct medication use and active disease management. Reliability analysis demonstrated that the DUMS group outperformed the GPT-4 group in medication schedule accuracy and safety, as well as comprehensiveness and richness of the self-management program (P<.001). The intervention group outperformed the DUMS and GPT-4 groups on all indicator scores. CONCLUSIONS: DUMS's treatment monitoring service can provide reliable self-management support for patients with diabetes. ChatGPT-4, powered by artificial intelligence, can act as a collaborative assistant to health care professionals in clinical contexts, although its performance still requires further training and optimization.
|
Xiao J; Li M; Cai R; Huang H; Yu H; Huang L; Li J; Yu T; Zhang J; Cheng S
| 10
|
|||
37115245
|
[Artificial intelligence: How will ChatGPT and other AI applications change our everyday medical practice?].
| 2,023
|
Medizinische Klinik, Intensivmedizin und Notfallmedizin
|
BACKGROUND: With the free provision of the chat robot "ChatGPT" by the company OpenAI in November 2022, an application of artificial intelligence (AI) became tangible for everyone. OBJECTIVES: An explanation of the basic functionality of large language models (LLM) is given, followed by a presentation of application options of ChatGPT in medicine, and an outlook and discussion of possible dangers of AI applications. METHODS: Problem solving with ChatGPT using concrete examples. Analysis and discussion of the available scientific literature. RESULTS: There has been a significant increase in the use of AI applications in scientific work, especially in scientific writing. Wide application of LLM in writing medical documentation is conceivable. Technical functionality allows the use of AI applications as a diagnostic support system. There is a risk of spreading and entrenching inaccuracies and bias through application of LLM. Regulation of this new technology is pending. CONCLUSION: AI applications such as ChatGPT have the potential to permanently change everyday medical practice. An examination of this technology and evaluation of opportunities and risks is warranted.
|
Sonntagbauer M; Haar M; Kluge S
| 10
|
|||
37770637
|
Exploring the Potential of ChatGPT-4 in Responding to Common Questions About Abdominoplasty: An AI-Based Case Study of a Plastic Surgery Consultation.
| 2,024
|
Aesthetic plastic surgery
|
BACKGROUND: With the increasing integration of artificial intelligence (AI) in health care, AI chatbots like ChatGPT-4 are being used to deliver health information. OBJECTIVES: This study aimed to assess the capability of ChatGPT-4 in answering common questions related to abdominoplasty, evaluating its potential as an adjunctive tool in patient education and preoperative consultation. METHODS: A variety of common questions about abdominoplasty were submitted to ChatGPT-4. These questions were sourced from a question list provided by the American Society of Plastic Surgery to ensure their relevance and comprehensiveness. An experienced plastic surgeon meticulously evaluated the responses generated by ChatGPT-4 in terms of informational depth, response articulation, and competency to determine the proficiency of the AI in providing patient-centered information. RESULTS: The study showed that ChatGPT-4 can give clear answers, making it useful for answering common queries. However, it struggled with personalized advice and sometimes provided incorrect or outdated references. Overall, ChatGPT-4 can effectively share abdominoplasty information, which may help patients better understand the procedure. Despite these positive findings, the AI needs more refinement, especially in providing personalized and accurate information, to fully meet patient education needs in plastic surgery. CONCLUSIONS: Although ChatGPT-4 shows promise as a resource for patient education, continuous improvements and rigorous checks are essential for its beneficial integration into healthcare settings. The study emphasizes the need for further research, particularly focused on improving the personalization and accuracy of AI responses. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
|
Li W; Chen J; Chen F; Liang J; Yu H
| 32
|
|||
37615692
|
[Large language models such as ChatGPT and GPT-4 for patient-centered care in radiology].
| 2,023
|
Radiologie (Heidelberg, Germany)
|
BACKGROUND: With the introduction of ChatGPT in late November 2022, large language models based on artificial intelligence have gained worldwide recognition. These language models are trained on vast amounts of data, enabling them to process complex tasks in seconds and provide detailed, high-level text-based responses. OBJECTIVE: To provide an overview of the most widely discussed large language models, ChatGPT and GPT‑4, with a focus on potential applications for patient-centered radiology. MATERIALS AND METHODS: A PubMed search of both large language models was performed using the terms "ChatGPT" and "GPT-4", with subjective selection and completion in the form of a narrative review. RESULTS: The generic nature of language models holds great promise for radiology, enabling both patients and referrers to facilitate understanding of radiological findings, overcome language barriers, and improve the quality of informed consent discussions. This could represent a significant step towards patient-centered or person-centered radiology. CONCLUSION: Large language models represent a promising tool for improving the communication of findings, interdisciplinary collaboration, and workflow in radiology. However, important privacy issues and the reliable applicability of these models in medicine remain to be addressed.
|
Fink MA
| 10
|
|||
39622707
|
Evaluating AI Competence in Specialized Medicine: Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain.
| 2,024
|
JMIR medical education
|
BACKGROUND: With the rapid advancement of artificial intelligence (AI) in various fields, evaluating its application in specialized medical contexts becomes crucial. ChatGPT, a large language model developed by OpenAI, has shown potential in diverse applications, including medicine. OBJECTIVE: This study aims to compare the performance of ChatGPT with that of attending neurologists in a real neurology specialist examination conducted in the Valencian Community, Spain, assessing the AI's capabilities and limitations in medical knowledge. METHODS: We conducted a comparative analysis using the 2022 neurology specialist examination results from 120 neurologists and responses generated by ChatGPT versions 3.5 and 4. The examination consisted of 80 multiple-choice questions, with a focus on clinical neurology and health legislation. Questions were classified according to Bloom's Taxonomy. Statistical analysis of performance, including the kappa coefficient for response consistency, was performed. RESULTS: Human participants exhibited a median score of 5.91 (IQR: 4.93-6.76), with 32 neurologists failing to pass. ChatGPT-3.5 ranked 116th out of 122, answering 54.5% of questions correctly (score 3.94). ChatGPT-4 showed marked improvement, ranking 17th with 81.8% of correct answers (score 7.57), surpassing several human specialists. No significant variations were observed in performance on lower-order versus higher-order questions. Additionally, ChatGPT-4 demonstrated increased interrater reliability, as reflected by a higher kappa coefficient of 0.73, compared to ChatGPT-3.5's coefficient of 0.69. CONCLUSIONS: This study underscores the evolving capabilities of AI in medical knowledge assessment, particularly in specialized fields. ChatGPT-4's performance, outperforming the median score of human participants in a rigorous neurology examination, represents a significant milestone in AI development, suggesting its potential as an effective tool in specialized medical education and assessment.
|
Ros-Arlanzon P; Perez-Sempere A
| 21
|
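A rough sketch of the scoring and consistency analysis described above: percent correct on a multiple-choice set, rescaled to a 0-10 exam score, plus Cohen's kappa between two runs of the same model (all answer strings below are invented; the real examination had 80 questions):

from sklearn.metrics import cohen_kappa_score

key   = ["a", "c", "b", "d", "a", "b", "c", "d"]   # answer key (truncated example)
run_1 = ["a", "c", "b", "b", "a", "b", "c", "a"]   # model answers, first run
run_2 = ["a", "c", "b", "d", "a", "c", "c", "a"]   # model answers, repeat run

pct_correct = 100 * sum(r == k for r, k in zip(run_1, key)) / len(key)
score_0_10 = pct_correct / 10                      # rescaled to the exam's 0-10 scale
consistency = cohen_kappa_score(run_1, run_2)      # agreement between the two runs
print(f"{pct_correct:.1f}% correct, score {score_0_10:.2f}, kappa {consistency:.2f}")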
|||
39868292
|
CARDBiomedBench: A Benchmark for Evaluating Large Language Model Performance in Biomedical Research: A novel question-and-answer benchmark designed to assess Large Language Models' comprehension of biomedical research, piloted on Neurodegenerative Diseases.
| 2,025
|
bioRxiv : the preprint server for biology
|
BACKGROUND: Biomedical research requires sophisticated understanding and reasoning across multiple specializations. While large language models (LLMs) show promise in scientific applications, their capability to safely and accurately support complex biomedical research remains uncertain. METHODS: We present CARDBiomedBench, a novel question-and-answer benchmark for evaluating LLMs in biomedical research. For our pilot implementation, we focus on neurodegenerative diseases (NDDs), a domain requiring integration of genetic, molecular, and clinical knowledge. The benchmark combines expert-annotated question-answer (Q/A) pairs with semi-automated data augmentation, drawing from authoritative public resources including drug development data, genome-wide association studies (GWAS), and Summary-data based Mendelian Randomization (SMR) analyses. We evaluated seven private and open-source LLMs across ten biological categories and nine reasoning skills, using novel metrics to assess both response quality and safety. RESULTS: Our benchmark comprises over 68,000 Q/A pairs, enabling robust evaluation of LLM performance. Current state-of-the-art models show significant limitations: models like Claude-3.5-Sonnet demonstrate excessive caution (Response Quality Rate: 25% [95% CI: 25% +/- 1], Safety Rate: 76% +/- 1), while others like ChatGPT-4o exhibit both poor accuracy and unsafe behavior (Response Quality Rate: 37% +/- 1, Safety Rate: 31% +/- 1). These findings reveal fundamental gaps in LLMs' ability to handle complex biomedical information. CONCLUSION: CARDBiomedBench establishes a rigorous standard for assessing LLM capabilities in biomedical research. Our pilot evaluation in the NDD domain reveals critical limitations in current models' ability to safely and accurately process complex scientific information. Future iterations will expand to other biomedical domains, supporting the development of more reliable AI systems for accelerating scientific discovery.
|
Bianchi O; Willey M; Alvarado CX; Danek B; Khani M; Kuznetsov N; Dadu A; Shah S; Koretsky MJ; Makarious MB; Weller C; Levine KS; Kim S; Jarreau P; Vitale D; Marsan E; Iwaki H; Leonard H; Bandres-Ciga S; Singleton AB; Nalls MA; Mokhtari S; Khashabi D; Faghri F
| 10
|
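A minimal normal-approximation sketch of the confidence-interval half-widths ("+/- 1") attached to the benchmark rates above; the sample size and success count are illustrative, not CARDBiomedBench values:

import math

n, successes = 10_000, 2_500   # hypothetical graded responses and "quality" responses
rate = successes / n
half_width = 1.96 * math.sqrt(rate * (1 - rate) / n)   # 95% CI half-width
print(f"Response Quality Rate = {rate:.0%} +/- {half_width:.1%}")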
|||
39289734
|
Comparative performance analysis of large language models: ChatGPT-3.5, ChatGPT-4 and Google Gemini in glucocorticoid-induced osteoporosis.
| 2,024
|
Journal of orthopaedic surgery and research
|
BACKGROUND: The use of large language models (LLMs) in medicine can help physicians improve the quality and effectiveness of health care by increasing the efficiency of medical information management, patient care, medical research, and clinical decision-making. METHODS: We collected 34 frequently asked questions about glucocorticoid-induced osteoporosis (GIOP), covering topics related to the disease's clinical manifestations, pathogenesis, diagnosis, treatment, prevention, and risk factors. We also generated 25 questions based on the 2022 American College of Rheumatology Guideline for the Prevention and Treatment of Glucocorticoid-Induced Osteoporosis (2022 ACR-GIOP Guideline). Each question was posed to each LLM (ChatGPT-3.5, ChatGPT-4, and Google Gemini), and three senior orthopedic surgeons independently rated each response on a scale of 1 to 4 points. A total score (TS) > 9 indicated 'good' responses, 6 <= TS <= 9 indicated 'moderate' responses, and TS < 6 indicated 'poor' responses. RESULTS: In response to the general questions related to GIOP and the 2022 ACR-GIOP Guidelines, Google Gemini provided more concise answers than the other LLMs. In terms of pathogenesis, ChatGPT-4 had significantly higher total scores (TSs) than ChatGPT-3.5. The TSs for answering questions related to the 2022 ACR-GIOP Guideline by ChatGPT-4 were significantly higher than those for Google Gemini. ChatGPT-3.5 and ChatGPT-4 had significantly higher self-corrected TSs than pre-corrected TSs, while Google Gemini's self-corrected responses were not significantly different from its original responses. CONCLUSIONS: Our study showed that Google Gemini provides more concise and intuitive responses than ChatGPT-3.5 and ChatGPT-4. ChatGPT-4 performed significantly better than ChatGPT-3.5 and Google Gemini in terms of answering general questions about GIOP and the 2022 ACR-GIOP Guidelines. ChatGPT-3.5 and ChatGPT-4 self-corrected better than Google Gemini.
|
Tong L; Zhang C; Liu R; Yang J; Sun Z
| 0-1
|
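A small sketch of the total-score banding described above: three raters each assign 1-4 points, and the summed total score (TS) is banded as good, moderate, or poor (the example ratings are invented):

def band_total_score(ratings):
    """Sum three 1-4 ratings and band the total score (TS)."""
    ts = sum(ratings)        # TS ranges from 3 to 12
    if ts > 9:
        return ts, "good"
    if ts >= 6:
        return ts, "moderate"
    return ts, "poor"

print(band_total_score([4, 3, 4]))   # (11, 'good')
print(band_total_score([2, 2, 1]))   # (5, 'poor')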
|||
38794965
|
Evaluating the Accuracy, Comprehensiveness, and Validity of ChatGPT Compared to Evidence-Based Sources Regarding Common Surgical Conditions: Surgeons' Perspectives.
| 2,025
|
The American surgeon
|
BACKGROUND: This study aims to assess the accuracy, comprehensiveness, and validity of ChatGPT compared to evidence-based sources regarding the diagnosis and management of common surgical conditions by surveying the perceptions of U.S. board-certified practicing surgeons. METHODS: An anonymous cross-sectional survey was distributed to U.S. practicing surgeons from June 2023 to March 2024. The survey comprised 94 multiple-choice questions evaluating diagnostic and management information for five common surgical conditions from evidence-based sources or generated by ChatGPT. Statistical analysis included descriptive statistics and paired-sample t-tests. RESULTS: Participating surgeons were primarily aged 40-50 years (43%), male (86%), White (57%), and had 5-10 years or >15 years of experience (86%). The majority of surgeons had no prior experience with ChatGPT in surgical practice (86%). For material discussing both acute cholecystitis and upper gastrointestinal hemorrhage, evidence-based sources were rated as significantly more comprehensive (3.57 (+/-.535) vs 2.00 (+/-1.16), P = .025) (4.14 (+/-.69) vs 2.43 (+/-.98), P < .001) and valid (3.71 (+/-.488) vs 2.86 (+/-1.07), P = .045) (3.71 (+/-.76) vs 2.71 (+/-.95), P = .038) than ChatGPT. However, there was no significant difference in accuracy between the two sources (3.71 vs 3.29, P = .289) (3.57 vs 2.71, P = .111). CONCLUSION: Surveyed U.S. board-certified practicing surgeons rated evidence-based sources as significantly more comprehensive and valid compared to ChatGPT across the majority of surveyed surgical conditions. However, there was no significant difference in accuracy between the sources across the majority of surveyed conditions. While ChatGPT may offer potential benefits in surgical practice, further refinement and validation are necessary to enhance its utility and acceptance among surgeons.
|
Nasef H; Patel H; Amin Q; Baum S; Ratnasekera A; Ang D; Havron WS; Nakayama D; Elkbuli A
| 32
|
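A minimal paired-samples t-test sketch in the spirit of the analysis above, comparing ratings of the same items from the two sources (the Likert values below are invented, not survey data):

from scipy.stats import ttest_rel

evidence_based = [4, 4, 3, 4, 3, 4, 4]   # hypothetical 1-5 ratings of evidence-based material
chatgpt        = [2, 3, 1, 2, 3, 2, 1]   # hypothetical ratings of ChatGPT material, same items
t_stat, p_value = ttest_rel(evidence_based, chatgpt)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")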
|||
37115365
|
Exploring the Potential of GPT-4 in Biomedical Engineering: The Dawn of a New Era.
| 2,023
|
Annals of biomedical engineering
|
Biomedical engineering is a relatively young interdisciplinary field based on engineering, biology, and medicine. Of note, the rapid progress of artificial intelligence (AI)-based technologies has made a significant impact on the biomedical engineering field and continues to bring innovations and breakthroughs. Recently, ChatGPT, an AI chatbot developed by OpenAI, has gained tremendous attention due to its powerful natural language generation and understanding ability. In this study, we explored the potential of GPT-4 in eight branches of biomedical engineering, including medical imaging, medical devices, bioinformatics, biomaterials, biomechanics, gene and cell engineering, tissue engineering, and neural engineering. Our results show that the application of GPT-4 will bring new opportunities for the development of this field.
|
Cheng K; Guo Q; He Y; Lu Y; Gu S; Wu H
| 10
|
|||
38306900
|
PubMed and beyond: biomedical literature search in the age of artificial intelligence.
| 2,024
|
EBioMedicine
|
Biomedical research yields vast information, much of which is only accessible through the literature. Consequently, literature search is crucial for healthcare and biomedicine. Recent improvements in artificial intelligence (AI) have expanded functionality beyond keywords, but they might be unfamiliar to clinicians and researchers. In response, we present an overview of over 30 literature search tools tailored to common biomedical use cases, aiming at helping readers efficiently fulfill their information needs. We first discuss recent improvements and continued challenges of the widely used PubMed. Then, we describe AI-based literature search tools catering to five specific information needs: 1. Evidence-based medicine. 2. Precision medicine and genomics. 3. Searching by meaning, including questions. 4. Finding related articles with literature recommendation. 5. Discovering hidden associations through literature mining. Finally, we discuss the impacts of recent developments of large language models such as ChatGPT on biomedical information seeking.
|
Jin Q; Leaman R; Lu Z
| 10
|
|||
39518681
|
Harnessing the Power of ChatGPT in Cardiovascular Medicine: Innovations, Challenges, and Future Directions.
| 2,024
|
Journal of clinical medicine
|
Cardiovascular diseases remain the leading cause of morbidity and mortality globally, posing significant challenges to public health. The rapid evolution of artificial intelligence (AI), particularly with large language models such as ChatGPT, has introduced transformative possibilities in cardiovascular medicine. This review examines ChatGPT's broad applications in enhancing clinical decision-making-covering symptom analysis, risk assessment, and differential diagnosis; advancing medical education for both healthcare professionals and patients; and supporting research and academic communication. Key challenges associated with ChatGPT, including potential inaccuracies, ethical considerations, data privacy concerns, and inherent biases, are discussed. Future directions emphasize improving training data quality, developing specialized models, refining AI technology, and establishing regulatory frameworks to enhance ChatGPT's clinical utility and mitigate associated risks. As cardiovascular medicine embraces AI, ChatGPT stands out as a powerful tool with substantial potential to improve therapeutic outcomes, elevate care quality, and advance research innovation. Fully understanding and harnessing this potential is essential for the future of cardiovascular health.
|
Leon M; Ruaengsri C; Pelletier G; Bethencourt D; Shibata M; Flores MQ; Shudo Y
| 10
|
|||
37973369
|
Assessing the performance of ChatGPT in bioethics: a large language model's moral compass in medicine.
| 2,024
|
Journal of medical ethics
|
Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education yet has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethical questions were identified from the widely utilised question banks UWorld and AMBOSS. Accuracy, bioethical categories, difficulty levels, specialty data, error analysis and character count were analysed. We found that ChatGPT had an accuracy of 59.6%, with greater accuracy in topics surrounding death and patient-physician relationships and performed poorly on questions pertaining to informed consent. Of all the specialties, it performed best in paediatrics. Yet, certain specialties and bioethical categories were under-represented. Among the errors made, it tended towards content errors and application errors. There were no significant associations between character count and accuracy. Nevertheless, this investigation contributes to the ongoing dialogue on artificial intelligence's (AI) role in healthcare and medical education, advocating for further research to fully understand AI systems' capabilities and constraints in the nuanced field of medical bioethics.
|
Chen J; Cadiente A; Kasselman LJ; Pilkington B
| 21
|
|||
39306288
|
Evaluating Performance of ChatGPT on MKSAP Cardiology Board Review Questions.
| 2,024
|
International journal of cardiology
|
Chat Generative Pretrained Transformer (ChatGPT) is a natural language processing tool created by OpenAI. Much of the discussion regarding artificial intelligence (AI) in medicine concerns the ability of language models to enhance medical practice, improve efficiency, and decrease errors. The objective of this study was to analyze the ability of ChatGPT to answer board-style cardiovascular medicine questions by using the Medical Knowledge Self-Assessment Program (MKSAP). The study evaluated the performance of ChatGPT (versions 3.5 and 4), alongside internal medicine residents and internal medicine and cardiology attendings, in answering 98 multiple-choice questions (MCQs) from the Cardiovascular Medicine Chapter of MKSAP. ChatGPT-4 demonstrated an accuracy of 74.5 %, comparable to internal medicine (IM) intern (63.3 %), senior resident (63.3 %), internal medicine attending physician (62.2 %), and ChatGPT-3.5 (64.3 %) but significantly lower than cardiology attending physician (85.7 %). Subcategory analysis revealed no statistical difference between ChatGPT and physicians, except in valvular heart disease, where the cardiology attending outperformed ChatGPT-3.5 (p = 0.031), and in heart failure, where ChatGPT-4 outperformed the senior resident (p = 0.046). While ChatGPT shows promise in certain subcategories, in order to establish AI as a reliable educational tool for medical professionals, ChatGPT's performance will likely need to surpass the accuracy of instructors, ideally achieving a near-perfect score on posed questions.
|
Milutinovic S; Petrovic M; Begosh-Mayne D; Lopez-Mattei J; Chazal RA; Wood MJ; Escarcega RO
| 21
|
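A rough sketch of comparing two answerers' accuracy on the same 98 questions, with counts back-calculated from the reported percentages and Fisher's exact test; this unpaired test is only an approximation, and a paired test such as McNemar's would better suit per-question data:

from scipy.stats import fisher_exact

n = 98
gpt4_correct = round(0.745 * n)          # 73 of 98
attending_correct = round(0.857 * n)     # 84 of 98
table = [[gpt4_correct, n - gpt4_correct],
         [attending_correct, n - attending_correct]]
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}")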
|||
37295794
|
Addition of dexamethasone to prolong peripheral nerve blocks: a ChatGPT-created narrative review.
| 2,024
|
Regional anesthesia and pain medicine
|
Chat Generative Pre-trained Transformer (ChatGPT), an artificial intelligence chatbot, produces detailed responses and human-like coherent answers, and has been used in clinical and academic medicine. To evaluate its accuracy in regional anesthesia topics, we produced a ChatGPT review on the addition of dexamethasone to prolong peripheral nerve blocks. A group of experts in regional anesthesia and pain medicine were invited to help shape the topic to be studied, refine the questions entered into the ChatGPT program, vet the manuscript for accuracy, and create a commentary on the article. Although ChatGPT produced an adequate summary of the topic for a general medical or lay audience, the review it created appeared inadequate for a subspecialty audience such as the expert authors. Major concerns raised by the authors included the poor search methodology, poor organization/lack of flow, inaccuracies/omissions of text or references, and lack of novelty. At this time, we do not believe ChatGPT is able to replace human experts; it is extremely limited in providing original, creative solutions/ideas and in interpreting data for a subspecialty medical review article.
|
Wu CL; Cho B; Gabriel R; Hurley R; Liu J; Mariano ER; Mathur V; Memtsoudis SG; Grant MC
| 0-1
|
|||
38156230
|
All aboard the ChatGPT steamroller: Top 10 ways to make artificial intelligence work for healthcare professionals.
| 2,023
|
Antimicrobial stewardship & healthcare epidemiology : ASHE
|
Chat Generative Pre-trained Transformer (ChatGPT), the flagship generative artificial intelligence (AI) chatbot by OpenAI, is transforming many things in medicine, from healthcare and research to medical education. It is anticipated to integrate in many aspects of the medical industry, and we should brace for this inevitability and use it to our advantage. Here are proposed ways you can use ChatGPT in medicine with some specific use cases in antimicrobial stewardship and hospital epidemiology.
|
Non LR
| 10
|
|||
39259341
|
Analysis of Responses of GPT-4 V to the Japanese National Clinical Engineer Licensing Examination.
| 2,024
|
Journal of medical systems
|
Chat Generative Pretrained Transformer (ChatGPT; OpenAI) is a state-of-the-art large language model that can simulate human-like conversations based on user input. We evaluated the performance of GPT-4 V in the Japanese National Clinical Engineer Licensing Examination using 2,155 questions from 2012 to 2023. The average correct answer rate for all questions was 86.0%. In particular, clinical medicine, basic medicine, medical materials, biological properties, and mechanical engineering achieved a correct response rate of >/= 90%. Conversely, medical device safety management, electrical and electronic engineering, and extracorporeal circulation obtained low correct answer rates ranging from 64.8% to 76.5%. The correct answer rates for questions that included figures/tables, required numerical calculation, figure/table intersection calculation, and knowledge of Japanese Industrial Standards were 55.2%, 85.8%, 64.2% and 31.0%, respectively. The reason for the low correct answer rates is that ChatGPT lacked recognition of the images and knowledge of standards and laws. This study concludes that careful attention is required when using ChatGPT because several of its explanations lack the correct description.
|
Ishida K; Arisaka N; Fujii K
| 21
|
|||
37252576
|
A Case Study Demonstrating Applications of ChatGPT in the Clinical Management of Treatment-Resistant Schizophrenia.
| 2,023
|
Cureus
|
Chat Generative Pre-trained Transformer, also known as ChatGPT, is a new artificial intelligence (AI) program that responds to user inquiry with discourse resembling human language. The range of ChatGPT capabilities caught the interest of the medical world after it demonstrated its ability to pass medical boards examinations. In this case report, we present the clinical treatment of a 22-year-old male diagnosed with treatment-resistant schizophrenia (TRS) and compare the medical management suggested by ChatGPT to current standards of care in order to assess the program's ability to identify the disorder, evaluate potential medical and psychiatric work-up, and develop a treatment plan addressing the distinct nuances of our patient. In our inquiry with ChatGPT, we found that it can accurately identify our patient as having TRS and order appropriate tests to methodically rule out alternative causes of acute psychosis. Furthermore, the AI program suggests pharmacologic treatment options including clozapine with adjuvant medications, and nonpharmacologic treatment options including electroconvulsive therapy (ECT), repetitive transcranial magnetic stimulation (rTMS), and psychotherapy which align with current standards of care. Lastly, ChatGPT provides a comprehensive list of side effects associated with antipsychotics and mood stabilizers used to treat TRS. We found both potential for and limitations in the clinical application of ChatGPT to assist in the assessment and management of complex medical conditions. Overall, ChatGPT may serve as a powerful tool to organize medical data in a meaningful and palatable format for medical professionals to reference during patient care.
|
Galido PV; Butala S; Chakerian M; Agustines D
| 0-1
|
|||
37965416
|
Ectopic Pituitary Neuroendocrine Tumor: A Case Report Written With the Help of ChatGPT.
| 2,023
|
Cureus
|
ChatGPT (Chat Generative Pre-trained Transformer) is a chat robot program developed by OpenAI (OpenAI Incorporated, Mission District, San Francisco, USA). It is a natural-language processing tool driven by artificial intelligence technology. It can complete tasks such as writing emails, video scripts, copywriting, translation, coding, and writing papers. At present, several studies have been conducted on the use of this tool for academic writing, so we report a rare case of a growth hormone-secreting suprasellar ectopic pituitary neuroendocrine tumor (PitNET) in a 53-year-old female patient with facial changes and hypertrophy of her hands and feet for six years. When asked questions on unusual topics, ChatGPT can provide answers but not specific ones, and, as with most machine learning solutions, verifying the correctness of its information is a problem. Question asked of ChatGPT: Summarize ectopic pituitary neuroendocrine tumor. "Suprasellar heterotopic pituitary neuroendocrine tumors refer to tumors formed by the heterotopic growth of pituitary neuroendocrine tumor cells in the sellar region other than the pituitary glands. This tumor is common in middle-aged women, and it often manifests as headache, visual impairment, and endocrine dysfunction. Diagnosis is usually made by combining history, physical examination, blood tests, and imaging. Treatment is primarily performed through surgical resection of the tumor, radiotherapy, and drug therapy, and the prognosis is mostly good." The content in quotation marks is generated by ChatGPT.
|
Li J; Zhong J; Li Z; Xiao Y; Wang S
| 10
|
|||
37465809
|
When Precision Meets Penmanship: ChatGPT and Surgery Documentation.
| 2,023
|
Cureus
|
ChatGPT (Chatbot Generative Pre-Trained Transformer) is an artificial intelligence with several potential applications in the field of medicine. As a large language model, it is particularly good at generating text. This study investigates the use of ChatGPT in constructing operation notes for laparoscopic appendicectomy, one of the most common surgical procedures in the UK. We prompted ChatGPT-4, the latest generation of ChatGPT, to produce operation notes for laparoscopic appendicectomy, which were then evaluated against 'Getting It Right First Time' (GIRFT) recommendations. GIRFT is an organisation that has collaborated with the National Health Service (NHS) to improve surgical documentation guidelines. Excluding certain items documented elsewhere in patient records, the generated notes were assessed against 30 key points in GIRFT recommendations. This process was repeated three times to obtain an average score. Our results showed that ChatGPT generated operation notes in seconds, with an average coverage of 78.8% (23.66 out of 30 points) of the GIRFT guidelines, surpassing average compliance with similar guidelines from the Royal College of Surgeons (RCS). However, the quality of ChatGPT's output was found to be dependent on the quality of the prompt, highlighting the need for verification of the generated content. Additionally, secure integration with electronic health records is required before ChatGPT can be adopted into the NHS.
|
Robinson A; Aggarwal S Jr
| 32
|