Dataset schema (column: type, observed range):
pmid - string, length 8
title - string, length 3 to 289 characters
year - int64 (records span 2023 to 2025)
journal - string, length 3 to 221 characters
doi - string, 1 distinct value
mesh - string, 1 distinct value
keywords - string, 1 distinct value
abstract - string, length 115 to 3.67k characters
authors - string, length 3 to 798 characters
cluster - class label, 5 classes

Each record below lists its fields in this order: pmid, title, year, journal, abstract, authors, cluster.
38618446
Performance of ChatGPT vs. HuggingChat on OB-GYN Topics.
2024
Cureus
Background While large language models show potential as beneficial tools in medicine, their reliability, especially in the realm of obstetrics and gynecology (OB-GYN), is not fully comprehended. This study seeks to measure and contrast the performance of ChatGPT and HuggingChat in addressing OB-GYN-related medical examination questions, offering insights into their effectiveness in this specialized field. Methods ChatGPT and HuggingChat were subjected to two standardized multiple-choice question banks: Test 1, developed by the National Board of Medical Examiners (NBME), and Test 2, gathered from the Association of Professors of Gynecology & Obstetrics (APGO) Web-Based Interactive Self-Evaluation (uWISE). Responses were analyzed and compared for correctness. Results The two-proportion z-test revealed no statistically significant difference in performance between ChatGPT and HuggingChat on both medical examinations. For Test 1, ChatGPT scored 90%, while HuggingChat scored 85% (p = 0.6). For Test 2, ChatGPT correctly answered 70% of questions, while HuggingChat correctly answered 62% of questions (p = 0.4). Conclusion Awareness of the strengths and weaknesses of artificial intelligence allows for the proper and effective use of its knowledge. Our findings indicate that there is no statistically significant difference in performance between ChatGPT and HuggingChat in addressing medical inquiries. Nonetheless, both platforms demonstrate considerable promise for applications within the medical domain.
Kirshteyn G; Golan R; Chaet M
43
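The record above (PMID 38618446) compares ChatGPT and HuggingChat with a two-proportion z-test (Test 1: 90% vs. 85%, p = 0.6). Below is a minimal sketch of that kind of comparison; the counts 18/20 and 17/20 are hypothetical, chosen only because they reproduce the reported percentages, and this is illustrative rather than the authors' actual analysis.

```python
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns the z statistic and two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                      # two-sided normal tail probability
    return z, p_value

# Hypothetical counts consistent with the reported 90% vs. 85% on Test 1.
z, p = two_proportion_ztest(18, 20, 17, 20)
print(f"z = {z:.2f}, p = {p:.2f}")  # small samples yield a non-significant p, as reported
```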
39421095
Comparison of Gemini Advanced and ChatGPT 4.0's Performances on the Ophthalmology Resident Ophthalmic Knowledge Assessment Program (OKAP) Examination Review Question Banks.
2024
Cureus
Background With advancements in natural language processing, tools such as Chat Generative Pre-Trained Transformers (ChatGPT) version 4.0 and Google Bard's Gemini Advanced are being increasingly evaluated for their potential in various medical applications. The objective of this study was to systematically assess the performance of these large language models (LLMs) on both image and non-image-based questions within the specialized field of Ophthalmology. We used a review question bank for the Ophthalmic Knowledge Assessment Program (OKAP), used nationally by ophthalmology residents to prepare for the Ophthalmology Board Exam, to assess the accuracy and performance of ChatGPT and Gemini Advanced. Methodology A total of 260 randomly generated multiple-choice questions from the OphthoQuestions question bank were run through ChatGPT and Gemini Advanced. A simulated 260-question OKAP examination was created at random from the bank. Question-specific data were analyzed, including overall percent correct, subspecialty accuracy, whether the question was "high yield," difficulty (1-4), and question type (e.g., image, text). To compare the performance of ChatGPT-4 and Gemini across question difficulty, we used the standard deviation of user answer choices to determine question difficulty. A statistical analysis was conducted in Google Sheets using two-tailed t-tests with unequal variance to compare the performance of ChatGPT-4.0 and Google's Gemini Advanced across various question types, subspecialties, and difficulty levels. Results In total, 259 of the 260 questions were included in the study, as one question used a video that no version of ChatGPT could interpret as of May 1, 2024. For text-only questions, ChatGPT-4.0 correctly answered 57.14% (148/259, p < 0.018), and Gemini Advanced correctly answered 46.72% (121/259, p < 0.018). Both versions answered most questions without a prompt and would have received a below-average score on the OKAP. Moreover, there were 27 questions utilizing a secondary prompt in ChatGPT-4.0 compared to 67 questions in Gemini Advanced. ChatGPT-4.0 answered 68.99% of easier questions (<2 on a scale from 1-4) and 44.96% of harder questions (>2 on a scale from 1-4) correctly. By comparison, Gemini Advanced answered 49.61% of easier questions and 44.19% of harder questions correctly. There was a statistically significant difference in accuracy between ChatGPT-4.0 and Gemini Advanced for easy (p < 0.0015) but not for hard (p < 0.55) questions. For image-only questions, ChatGPT-4.0 correctly answered 39.58% (19/48, p < 0.013), and Gemini Advanced correctly answered 33.33% (16/48, p < 0.022), with no statistically significant difference in accuracy between the two models (p < 0.530). A comparison between text-only and image-based questions demonstrated a statistically significant difference in accuracy for both ChatGPT-4.0 (p < 0.013) and Gemini Advanced (p < 0.022). Conclusions This study provides evidence that ChatGPT-4.0 performs better than Gemini Advanced on OKAP-style exams in the context of ophthalmic multiple-choice questions. This may indicate greater utility for ChatGPT in ophthalmic medical education. While these models show promise within medical education, caution should be used, as a more detailed evaluation of reliability is needed.
Gill GS; Tsai J; Moxam J; Sanghvi HA; Gupta S
21
39878409
Use of ChatGPT Large Language Models to Extract Details of Recommendations for Additional Imaging From Free-Text Impressions of Radiology Reports.
2025
AJR. American journal of roentgenology
BACKGROUND. Automated extraction of actionable details of recommendations for additional imaging (RAIs) from radiology reports could facilitate tracking and timely completion of clinically necessary RAIs and thereby potentially reduce diagnostic delays. OBJECTIVE. The purpose of the study was to assess the performance of large language models (LLMs) in extracting actionable details of RAIs from radiology reports. METHODS. This retrospective single-center study evaluated reports of diagnostic radiology examinations performed across modalities and care settings within five subspecialties (abdominal imaging, musculoskeletal imaging, neuroradiology, nuclear medicine, thoracic imaging) in August 2023. Of reports identified by a previously validated natural language processing algorithm to contain an RAI, 250 were randomly selected; 231 of these reports were confirmed to contain an RAI on manual review and formed the study sample. Twenty-five reports were used to engineer a prompt instructing an LLM, when inputted in a report impression containing an RAI, to extract details about the modality, body part, time frame, and rationale of the RAI; the remaining 206 reports were used for testing the prompt in combination with GPT-3.5 and GPT-4. A 4th-year medical student and radiologist from the relevant subspecialty independently classified the LLM outputs as correct versus incorrect for extracting the four actionable details of RAIs in comparison with the report impressions; a third reviewer assisted in resolving discrepancies. Extraction accuracy was summarized and compared between LLMs using consensus assessments. RESULTS. For GPT-3.5 and GPT-4, the two reviewers agreed about classification of LLM output as correct versus incorrect with respect to report impressions for 95.6% and 94.2% for RAI modality, 89.3% and 88.3% for RAI body part, 96.1% and 95.1% for RAI time frame, and 89.8% and 88.8% for RAI rationale, respectively. GPT-4 was more accurate than GPT-3.5 in extracting RAI modality (94.2% [194/206] vs 85.4% [176/206], p < .001), RAI body part (86.9% [179/206] vs 77.2% [159/206], p = .004), and RAI time frame (99.0% [204/206] vs 95.6% [197/206], p = .02). Both LLMs had accuracy of 91.7% (189/206) for extracting RAI rationale. CONCLUSION. LLMs were used to extract actionable details of RAIs from free-text impression sections of radiology reports; GPT-4 outperformed GPT-3.5. CLINICAL IMPACT. The technique could represent an innovative method to facilitate timely completion of clinically necessary radiologist recommendations.
Li KW; Lacson R; Guenette JP; DiPiro PJ; Burk KS; Kapoor N; Salah F; Khorasani R
10
36946005
Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma.
2023
Clinical and molecular hepatology
BACKGROUND/AIMS: Patients with cirrhosis and hepatocellular carcinoma (HCC) require extensive and personalized care to improve outcomes. ChatGPT (Generative Pre-trained Transformer), a large language model, holds the potential to provide professional yet patient-friendly support. We aimed to examine the accuracy and reproducibility of ChatGPT in answering questions regarding knowledge, management, and emotional support for cirrhosis and HCC. METHODS: ChatGPT's responses to 164 questions were independently graded by two transplant hepatologists and resolved by a third reviewer. The performance of ChatGPT was also assessed using two published questionnaires and 26 questions formulated from the quality measures of cirrhosis management. Finally, its emotional support capacity was tested. RESULTS: We showed that ChatGPT regurgitated extensive knowledge of cirrhosis (79.1% correct) and HCC (74.0% correct), but only small proportions (47.3% in cirrhosis, 41.1% in HCC) were labeled as comprehensive. The performance was better in basic knowledge, lifestyle, and treatment than in the domains of diagnosis and preventive medicine. For the quality measures, the model answered 76.9% of questions correctly but failed to specify decision-making cut-offs and treatment durations. ChatGPT lacked knowledge of regional guidelines variations, such as HCC screening criteria. However, it provided practical and multifaceted advice to patients and caregivers regarding the next steps and adjusting to a new diagnosis. CONCLUSION: We analyzed the areas of robustness and limitations of ChatGPT's responses on the management of cirrhosis and HCC and relevant emotional support. ChatGPT may have a role as an adjunct informational tool for patients and physicians to improve outcomes.
Yeo YH; Samaan JS; Ng WH; Ting PS; Trivedi H; Vipani A; Ayoub W; Yang JD; Liran O; Spiegel B; Kuo A
0-1
38509182
ChatGPT-3.5 and Bing Chat in ophthalmology: an updated evaluation of performance, readability, and informative sources.
2024
Eye (London, England)
BACKGROUND/OBJECTIVES: Experimental investigation. Bing Chat's (Microsoft) integration with ChatGPT-4 (OpenAI) has conferred the capability of accessing online data past 2021. We investigate its performance against ChatGPT-3.5 on a multiple-choice question ophthalmology exam. SUBJECTS/METHODS: In August 2023, ChatGPT-3.5 and Bing Chat were evaluated against 913 questions derived from the Academy's Basic and Clinical Science Collection (BCSC). For each response, the sub-topic, performance, Simple Measure of Gobbledygook readability score (measuring years of required education to understand a given passage), and cited resources were collected. The primary outcomes were the comparative scores between models and, qualitatively, the resources referenced by Bing Chat. Secondary outcomes included performance stratified by response readability, question type (explicit or situational), and BCSC sub-topic. RESULTS: Across 913 questions, ChatGPT-3.5 scored 59.69% [95% CI 56.45, 62.94], while Bing Chat scored 73.60% [95% CI 70.69, 76.52]. Both models performed significantly better on explicit than on clinical reasoning questions. Both models performed better on general medicine questions than on ophthalmology subsections. Bing Chat referenced 927 online entities and provided at least one citation for 836 of the 913 questions. The use of more reliable (peer-reviewed) sources was associated with a higher likelihood of a correct response. The most-cited resources were eyewiki.aao.org, aao.org, wikipedia.org, and ncbi.nlm.nih.gov. Bing Chat showed significantly better readability than ChatGPT-3.5, averaging a reading level of grade 11.4 [95% CI 7.14, 15.7] versus 12.4 [95% CI 8.77, 16.1], respectively (p-value < 0.0001, rho = 0.25). CONCLUSIONS: The online access, improved readability, and citation feature of Bing Chat confer additional utility for ophthalmology learners. We recommend critical appraisal of cited sources during response interpretation.
Tao BK; Hua N; Milkovich J; Micieli JA
21
40218132
Can ChatGPT Help General Practitioners Become Acquainted with Conversations About Dying? A Simulated Single-Case Study.
2025
Healthcare (Basel, Switzerland)
Background/Objectives: General practitioners (GPs) should be able to initiate open conversations about death with their patients. It is hypothesized that a change in attitude regarding discussions of death with patients may be accomplished through doctors' training, particularly with the use of artificial intelligence (AI). This study aimed to evaluate whether OpenAI's ChatGPT can simulate a medical communication scenario involving a GP consulting a patient who is dying at home. Methods: ChatGPT-4o was prompted to generate a medical communication scenario in which a GP consults with a patient dying at home. ChatGPT-4o was instructed to follow seven predefined steps from an evidence-based model for discussing dying with patients and their family caregivers. The output was assessed by comparing each step of the conversation to the model's recommendations. Results: ChatGPT-4o created a seven-step scenario based on the initial prompt and addressed almost all intended recommendations. However, two points were not addressed: ChatGPT-4o did not use terms like "dying", "passing away", or "death", although the concept was present from the beginning of the conversation with the patient. Additionally, cultural and religious backgrounds related to dying and death were not discussed. Conclusions: ChatGPT-4o can be used as a supportive tool for introducing GPs to the language and sequencing of speech acts that form a successful foundation for meaningful, sensitive conversations about dying, without requiring advanced technical resources and without placing any burden on real patients.
Prazeres F
0-1
40004661
Language Artificial Intelligence Models as Pioneers in Diagnostic Medicine? A Retrospective Analysis on Real-Time Patients.
2025
Journal of clinical medicine
Background/Objectives: GPT-3.5 and GPT-4 have shown promise in assisting healthcare professionals with clinical questions. However, their performance in real-time clinical scenarios remains underexplored. This study aims to evaluate their precision and reliability compared to board-certified emergency department attendings, highlighting their potential in improving patient care. We hypothesized that board-certified emergency department attendings at Maimonides Medical Center exhibit higher accuracy and reliability than GPT-3.5 and GPT-4 in generating differentials based on history and physical examination for patients presenting to the emergency department. Methods: Real-time patient data from Maimonides Medical Center's emergency department, collected from 1 January 2023 to 1 March 2023, were analyzed. Demographic details, symptoms, medical history, and discharge diagnoses recorded by emergency room attendings were examined. AI algorithms (ChatGPT-3.5 and GPT-4) generated differential diagnoses, which were compared with those by attending physicians. Accuracy was determined by comparing each rater's diagnoses with the gold standard discharge diagnosis, calculating the proportion of correctly identified cases. Precision was assessed using Cohen's kappa coefficient and the Intraclass Correlation Coefficient to measure agreement between raters. Results: The mean age of patients was 49.12 years, with 57.3% males and 42.7% females. Chief complaints included fever/sepsis (24.7%), gastrointestinal issues (17.7%), and cardiovascular problems (16.4%). Diagnostic accuracy against discharge diagnoses was highest for ChatGPT-4 (85.5%), followed by ChatGPT-3.5 (84.6%) and ED attendings (83%). Cohen's kappa demonstrated moderate agreement (0.7) between AI models, with lower agreement observed for ED attendings. Stratified analysis revealed higher accuracy for gastrointestinal complaints with ChatGPT-4 (87.5%) and cardiovascular complaints with ChatGPT-3.5 (81.34%). Conclusions: Our study demonstrates that ChatGPT-4 and GPT-3.5 exhibit comparable diagnostic accuracy to board-certified emergency department attendings, highlighting their potential to aid decision-making in dynamic clinical settings. The stratified analysis revealed comparable reliability and precision of the AI chatbots for cardiovascular complaints, which represent a significant proportion of the high-risk patients presenting to the emergency department, and provided targeted insights into rater performance within specific medical domains. This study contributes to integrating AI models into medical practice, enhancing efficiency and effectiveness in clinical decision-making. Further research is warranted to explore broader applications of AI in healthcare.
Naeem A; Khan O; Baqir SM; Jana K; Shankar P; Kaur A; Zaaya M; Sajid F; Mohsin F; Boadla MR; Oo A; Wong V; Noor M; Sandhu SPS; Slobodyanuk K; Shetty V; Tokayer AZ
10
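The record above (PMID 40004661) measures inter-rater agreement with Cohen's kappa (about 0.7 between the two GPT models). Below is a minimal sketch of that agreement computation with scikit-learn; the per-case labels are made up for illustration and are not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-case ratings (1 = differential matched the discharge diagnosis, 0 = it did not).
gpt4_hits  = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
gpt35_hits = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(gpt4_hits, gpt35_hits)          # chance-corrected agreement between raters
accuracy_gpt4 = sum(gpt4_hits) / len(gpt4_hits)           # proportion agreeing with the gold standard
print(f"kappa = {kappa:.2f}, GPT-4 accuracy = {accuracy_gpt4:.2f}")
```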
39201195
Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions.
2024
Healthcare (Basel, Switzerland)
BACKGROUND/OBJECTIVES: The use of artificial intelligence (AI) in education is dynamically growing, and models such as ChatGPT show potential in enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 questions with one correct answer per question, is administered in Polish, and assesses students' comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 handles questions included in this exam. METHODS: This study considered 980 questions from five examination sessions of the Medical Final Examination conducted by the Medical Examination Center in the years 2022-2024. The analysis included the field of medicine, the difficulty index of the questions, and their type, namely theoretical versus case-study questions. RESULTS: The average correct answer rate achieved by ChatGPT for the five examination sessions hovered around 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), while the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04). Questions for which ChatGPT-3.5 provided incorrect answers had a lower (p < 0.001) percentage of correct responses. The type of questions analyzed did not significantly affect the correctness of the answers (p = 0.46). CONCLUSIONS: This study indicates that ChatGPT-3.5 can be an effective tool for assisting in passing the final medical exam, but the results should be interpreted cautiously. It is recommended to further verify the correctness of the answers using various AI tools.
Siebielec J; Ordak M; Oskroba A; Dworakowska A; Bujalska-Zadrozny M
21
39035269
Evaluating GPT-4V's performance in the Japanese national dental examination: A challenge explored.
2024
Journal of dental sciences
BACKGROUND/PURPOSE: Rapid advancements in AI technology have led to significant interest in its application across various fields, including medicine and dentistry. This study aimed to assess the capabilities of ChatGPT-4V with image recognition in answering image-based questions from the Japanese National Dental Examination (JNDE) to explore its potential as an educational support tool for dental students. MATERIALS AND METHODS: The dataset used questions from the JNDE, which was conducted in January 2023, with a focus on image-related queries. ChatGPT-4V was utilized, and standardized prompts, question texts, and images were input. Data and statistical analyses were conducted using Qlik Sense(R) and GraphPad Prism. RESULTS: The overall correct response rate of ChatGPT-4V for image-based JNDE questions was 35.0 %. The correct response rates were 57.1 % for compulsory questions, 43.6 % for general questions, and 28.6 % for clinical practical questions. In specialties like Dental Anesthesiology and Endodontics, ChatGPT-4V achieved correct response rates above 70 %, while response rates for Orthodontics and Oral Surgery were lower. A higher number of images in questions was correlated with lower accuracy, suggesting an impact of the number of images on correct and incorrect responses. CONCLUSION: While innovative, ChatGPT-4V's image recognition feature exhibited limitations, especially in handling image-intensive and complex clinical practical questions, and is not yet fully suitable as an educational support tool for dental students at its current stage. Further technological refinement and re-evaluation with a broader dataset are recommended.
Morishita M; Fukuda H; Muraoka K; Nakamura T; Hayashi M; Yoshioka I; Ono K; Awano S
0-1
38771247
The Use of Generative AI for Scientific Literature Searches for Systematic Reviews: ChatGPT and Microsoft Bing AI Performance Evaluation.
2024
JMIR medical informatics
BACKGROUND: A large language model is a type of artificial intelligence (AI) model that opens up great possibilities for health care practice, research, and education, although scholars have emphasized the need to proactively address the issue of unvalidated and inaccurate information regarding its use. One of the best-known large language models is ChatGPT (OpenAI). It is believed to be of great help to medical research, as it facilitates more efficient data set analysis, code generation, and literature review, allowing researchers to focus on experimental design as well as drug discovery and development. OBJECTIVE: This study aims to explore the potential of ChatGPT as a real-time literature search tool for systematic reviews and clinical decision support systems, to enhance their efficiency and accuracy in health care settings. METHODS: The search results of a published systematic review by human experts on the treatment of Peyronie disease were selected as a benchmark, and the literature search formula of the study was applied to ChatGPT and Microsoft Bing AI as a comparison to human researchers. Peyronie disease typically presents with discomfort, curvature, or deformity of the penis in association with palpable plaques and erectile dysfunction. To evaluate the quality of individual studies derived from AI answers, we created a structured rating system based on bibliographic information related to the publications. We classified its answers into 4 grades if the title existed: A, B, C, and F. No grade was given for a fake title or no answer. RESULTS: From ChatGPT, 7 (0.5%) out of 1287 identified studies were directly relevant, whereas Bing AI resulted in 19 (40%) relevant studies out of 48, compared to the human benchmark of 24 studies. In the qualitative evaluation, ChatGPT had 7 grade A, 18 grade B, 167 grade C, and 211 grade F studies, and Bing AI had 19 grade A and 28 grade C studies. CONCLUSIONS: This is the first study to compare AI and conventional human systematic review methods as a real-time literature collection tool for evidence-based medicine. The results suggest that the use of ChatGPT as a tool for real-time evidence generation is not yet accurate and feasible. Therefore, researchers should be cautious about using such AI. The limitations of this study using the generative pre-trained transformer model are that the search for research topics was not diverse and that it did not prevent the hallucination of generative AI. However, this study will serve as a standard for future studies by providing an index to verify the reliability and consistency of generative AI from a user's point of view. If the reliability and consistency of AI literature search services are verified, then the use of these technologies will help medical research greatly.
Gwon YN; Kim JH; Chung HS; Jung EJ; Chun J; Lee S; Shim SR
10
38898239
Can AI Answer My Questions? Utilizing Artificial Intelligence in the Perioperative Assessment for Abdominoplasty Patients.
2024
Aesthetic plastic surgery
BACKGROUND: Abdominoplasty is a common operation, used for a range of cosmetic and functional issues, often in the context of divarication of recti, significant weight loss, and after pregnancy. Despite this, patient-surgeon communication gaps can hinder informed decision-making. The integration of large language models (LLMs) in healthcare offers potential for enhancing patient information. This study evaluated the feasibility of using LLMs for answering perioperative queries. METHODS: This study assessed the efficacy of four leading LLMs-OpenAI's ChatGPT-3.5, Anthropic's Claude, Google's Gemini, and Bing's CoPilot-using fifteen unique prompts. All outputs were evaluated using the Flesch-Kincaid, Flesch Reading Ease score, and Coleman-Liau index for readability assessment. The DISCERN score and a Likert scale were utilized to evaluate quality. Scores were assigned by two plastic surgical residents and then reviewed and discussed until a consensus was reached by five plastic surgeon specialists. RESULTS: ChatGPT-3.5 required the highest level for comprehension, followed by Gemini, Claude, then CoPilot. Claude provided the most appropriate and actionable advice. In terms of patient-friendliness, CoPilot outperformed the rest, enhancing engagement and information comprehensiveness. ChatGPT-3.5 and Gemini offered adequate, though unremarkable, advice, employing more professional language. CoPilot uniquely included visual aids and was the only model to use hyperlinks, although they were not very helpful and acceptable, and it faced limitations in responding to certain queries. CONCLUSION: ChatGPT-3.5, Gemini, Claude, and Bing's CoPilot showcased differences in readability and reliability. LLMs offer unique advantages for patient care but require careful selection. Future research should integrate LLM strengths and address weaknesses for optimal patient education. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Lim B; Seth I; Cuomo R; Kenney PS; Ross RJ; Sofiadellis F; Pentangelo P; Ceccaroni A; Alfano C; Rozen WM
32
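The record above (PMID 38898239) scores LLM outputs with the Flesch-Kincaid, Flesch Reading Ease, and Coleman-Liau indices. Below is a small sketch of how such readability scoring can be reproduced with the third-party textstat package; the package choice and the sample text are assumptions, as the abstract does not name the authors' tooling.

```python
import textstat  # pip install textstat

# Hypothetical LLM answer to a perioperative abdominoplasty question.
response = (
    "After an abdominoplasty you should avoid heavy lifting for about six weeks "
    "and keep the incision clean and dry while it heals."
)

print("Flesch Reading Ease :", textstat.flesch_reading_ease(response))
print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(response))
print("Coleman-Liau index  :", textstat.coleman_liau_index(response))
```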
37509379
Enhancing Triage Efficiency and Accuracy in Emergency Rooms for Patients with Metastatic Prostate Cancer: A Retrospective Analysis of Artificial Intelligence-Assisted Triage Using ChatGPT 4.0.
2023
Cancers
BACKGROUND: Accurate and efficient triage is crucial for prioritizing care and managing resources in emergency rooms. This study investigates the effectiveness of ChatGPT, an advanced artificial intelligence system, in assisting health providers with decision-making for patients presenting with metastatic prostate cancer, focusing on the potential to improve both patient outcomes and resource allocation. METHODS: Clinical data from patients with metastatic prostate cancer who presented to the emergency room between 1 May 2022 and 30 April 2023 were retrospectively collected. The primary outcome was the sensitivity and specificity of ChatGPT in determining whether a patient required admission or discharge. The secondary outcomes included the agreement between ChatGPT and emergency medicine physicians, the comprehensiveness of diagnoses, the accuracy of treatment plans proposed by both parties, and the length of medical decision making. RESULTS: Of the 147 patients screened, 56 met the inclusion criteria. ChatGPT had a sensitivity of 95.7% in determining admission and a specificity of 18.2% in discharging patients. In 87.5% of cases, ChatGPT made the same primary diagnoses as physicians, with more accurate terminology use (42.9% vs. 21.4%, p = 0.02) and more comprehensive diagnostic lists (median number of diagnoses: 3 vs. 2, p < 0.001). Emergency Severity Index scores calculated by ChatGPT were not associated with admission (p = 0.12), hospital stay length (p = 0.91) or ICU admission (p = 0.54). Despite shorter mean word count (169 +/- 66 vs. 272 +/- 105, p < 0.001), ChatGPT was more likely to give additional treatment recommendations than physicians (94.3% vs. 73.5%, p < 0.001). CONCLUSIONS: Our hypothesis-generating data demonstrated that ChatGPT is associated with a high sensitivity in determining the admission of patients with metastatic prostate cancer in the emergency room. It also provides accurate and comprehensive diagnoses. These findings suggest that ChatGPT has the potential to assist health providers in improving patient triage in emergency settings, and may enhance both efficiency and quality of care provided by the physicians.
Gebrael G; Sahu KK; Chigarira B; Tripathi N; Mathew Thomas V; Sayegh N; Maughan BL; Agarwal N; Swami U; Li H
10
40206348
Artificial intelligence-large language models (AI-LLMs) for reliable and accurate cardiotocography (CTG) interpretation in obstetric practice.
2025
Computational and structural biotechnology journal
BACKGROUND: Accurate cardiotocography (CTG) interpretation is vital for the monitoring of fetal well-being during pregnancy and labor. Advanced artificial intelligence (AI) tools such as AI-large language models (AI-LLMs) may enhance the accuracy of CTG interpretation, but their potential has not been extensively evaluated. OBJECTIVE: This study aimed to assess the performance of three AI-LLMs (ChatGPT-4o, Gemini Advanced, and Copilot) in CTG image interpretation, compare their results to those of junior (JHDs) and senior human doctors (SHDs), and evaluate their reliability in clinical decision-making. STUDY DESIGN: Seven CTG images were interpreted by the three AI-LLMs, five SHDs, and five JHDs, with the evaluations scored by five blinded maternal-fetal medicine experts using a Likert scale for five parameters (relevance, clarity, depth, focus, and coherence). The homogeneity of the expert ratings and group performances were statistically compared. RESULTS: ChatGPT-4o scored 77.86, outperforming the Gemini Advanced (57.14), Copilot (47.29), and JHDs (61.57). Its performance closely approached that of the SHDs (80.43), with no statistically significant difference between the two (p > 0.05). ChatGPT-4o excelled in the depth parameter and was only marginally inferior to the SHDs regarding the other parameters. CONCLUSION: ChatGPT-4o demonstrated superior performance among the AI-LLMs, surpassed JHDs in CTG interpretation, and closely matched the performance level of SHDs. AI-LLMs, particularly ChatGPT-4o, are promising tools for assisting obstetricians, improving diagnostic accuracy, and enhancing obstetric patient care.
Gumilar KE; Wardhana MP; Akbar MIA; Putra AS; Banjarnahor DPP; Mulyana RS; Fatati I; Yu ZY; Hsu YC; Dachlan EG; Lu CH; Liao LN; Tan M
0-1
39848078
Large language models vs human for classifying clinical documents.
2025
International journal of medical informatics
BACKGROUND: Accurate classification of medical records is crucial for clinical documentation, particularly when using the 10th revision of the International Classification of Diseases (ICD-10) coding system. The use of machine learning algorithms and Systematized Nomenclature of Medicine (SNOMED) mapping has shown promise in performing these classifications. However, challenges remain, particularly in reducing false negatives, where certain diagnoses are not correctly identified by either approach. OBJECTIVE: This study explores the potential of leveraging advanced large language models to improve the accuracy of ICD-10 classifications in challenging cases of medical records where machine learning and SNOMED mapping fail. METHODS: We evaluated the performance of ChatGPT 3.5 and ChatGPT 4 in classifying ICD-10 codes from discharge summaries within selected records of the Medical Information Mart for Intensive Care (MIMIC) IV dataset. These records comprised 802 discharge summaries identified as false negatives by both machine learning and SNOMED mapping methods, showing their challenging case. Each summary was assessed by ChatGPT 3.5 and 4 using a classification prompt, and the results were compared to human coder evaluations. Five human coders, with a combined experience of over 30 years, independently classified a stratified sample of 100 summaries to validate ChatGPT's performance. RESULTS: ChatGPT 4 demonstrated significantly improved consistency over ChatGPT 3.5, with matching results between runs ranging from 86% to 89%, compared to 57% to 67% for ChatGPT 3.5. The classification accuracy of ChatGPT 4 was variable across different ICD-10 codes. Overall, human coders performed better than ChatGPT. However, ChatGPT matched the median performance of human coders, achieving an accuracy rate of 22%. CONCLUSION: This study underscores the potential of integrating advanced language models with clinical coding processes to improve documentation accuracy. ChatGPT 4 demonstrated improved consistency and comparable performance to median human coders, achieving 22% accuracy in challenging cases. Combining ChatGPT with methods like SNOMED mapping could further enhance clinical coding accuracy, particularly for complex scenarios.
Mustafa A; Naseem U; Rahimi Azghadi M
10
38989848
Assessing GPT-4's Performance in Delivering Medical Advice: Comparative Analysis With Human Experts.
2024
JMIR medical education
BACKGROUND: Accurate medical advice is paramount in ensuring optimal patient care, and misinformation can lead to misguided decisions with potentially detrimental health outcomes. The emergence of large language models (LLMs) such as OpenAI's GPT-4 has spurred interest in their potential health care applications, particularly in automated medical consultation. Yet, rigorous investigations comparing their performance to human experts remain sparse. OBJECTIVE: This study aims to compare the medical accuracy of GPT-4 with human experts in providing medical advice using real-world user-generated queries, with a specific focus on cardiology. It also sought to analyze the performance of GPT-4 and human experts in specific question categories, including drug or medication information and preliminary diagnoses. METHODS: We collected 251 pairs of cardiology-specific questions from general users and answers from human experts via an internet portal. GPT-4 was tasked with generating responses to the same questions. Three independent cardiologists (SL, JHK, and JJC) evaluated the answers provided by both human experts and GPT-4. Using a computer interface, each evaluator compared the pairs and determined which answer was superior, and they quantitatively measured the clarity and complexity of the questions as well as the accuracy and appropriateness of the responses, applying a 3-tiered grading scale (low, medium, and high). Furthermore, a linguistic analysis was conducted to compare the length and vocabulary diversity of the responses using word count and type-token ratio. RESULTS: GPT-4 and human experts displayed comparable efficacy in medical accuracy ("GPT-4 is better" at 132/251, 52.6% vs "Human expert is better" at 119/251, 47.4%). In accuracy level categorization, humans had more high-accuracy responses than GPT-4 (50/237, 21.1% vs 30/238, 12.6%) but also a greater proportion of low-accuracy responses (11/237, 4.6% vs 1/238, 0.4%; P=.001). GPT-4 responses were generally longer and used a less diverse vocabulary than those of human experts, potentially enhancing their comprehensibility for general users (sentence count: mean 10.9, SD 4.2 vs mean 5.9, SD 3.7; P<.001; type-token ratio: mean 0.69, SD 0.07 vs mean 0.79, SD 0.09; P<.001). Nevertheless, human experts outperformed GPT-4 in specific question categories, notably those related to drug or medication information and preliminary diagnoses. These findings highlight the limitations of GPT-4 in providing advice based on clinical experience. CONCLUSIONS: GPT-4 has shown promising potential in automated medical consultation, with comparable medical accuracy to human experts. However, challenges remain particularly in the realm of nuanced clinical judgment. Future improvements in LLMs may require the integration of specific clinical reasoning pathways and regulatory oversight for safe use. Further research is needed to understand the full potential of LLMs across various medical specialties and conditions.
Jo E; Song S; Kim JH; Lim S; Kim JH; Cha JJ; Kim YM; Joo HJ
0-1
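The record above (PMID 38989848) compares GPT-4 and human experts on word count and type-token ratio (unique word types divided by total word tokens). Below is a minimal sketch of that metric on a toy sentence; the tokenizer and example text are illustrative assumptions, not the study's linguistic pipeline.

```python
import re

def type_token_ratio(text: str) -> float:
    """Unique word types divided by total word tokens (lowercased)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

answer = "Take the medication with food, and take it at the same time every day."
tokens = re.findall(r"[a-z']+", answer.lower())
print(f"word count = {len(tokens)}, type-token ratio = {type_token_ratio(answer):.2f}")
```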
40264428
Efficient spheroid morphology assessment with a ChatGPT data analyst: implications for cell therapy.
2025
BioTechniques
BACKGROUND: Adipose-derived stem cells (ADSCs) exhibit promising potential for the treatment of various diseases, including osteoarthritis. Spheroids derived from ADSCs are a viable treatment option with enhanced anti-inflammatory effects and tissue repair capabilities. OBJECTIVE: SphereRing((R)) is a rotating donut-shaped tube that efficiently produces large quantities of spheroids. However, accurately measuring spheroid size for spheroid quality assessment is challenging. This study aimed to develop an automated method for measuring spheroid size using deep learning through the ChatGPT Data Analyst for image recognition and processing. METHOD: The area, perimeter, and circularity of spheroids generated with the SphereRing system were analyzed using ChatGPT Data Analyst and ImageJ. Measurement accuracy was validated using Bland-Altman analysis and scatter plot correlation coefficients. RESULTS: ChatGPT Data Analyst was consistent with ImageJ for all parameters. Bland-Altman plots demonstrated strong agreement; most data points were within the 95% limits. CONCLUSION: The ChatGPT Data Analyst provides a reliable and efficient alternative for assessing spheroid quality. This method reduces human error and improves reproducibility to enhance spheroid quality control. Thus, this method has potential applications in regenerative medicine.
Sakamoto T; Koma H; Kuwano A; Horie T; Fuku A; Kitajima H; Nakamura Y; Tanida I; Nakade Y; Hirata H; Tachi Y; Sunami H; Sakamoto D; Yamada S; Yamamoto N; Shimizu Y; Ishigaki Y; Ichiseki T; Kaneuji A; Osawa S; Kawahara N
10
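The record above (PMID 40264428) validates ChatGPT Data Analyst against ImageJ using Bland-Altman analysis. Below is a minimal sketch of the underlying computation (mean difference and 95% limits of agreement) on hypothetical paired area measurements; the numbers are invented for illustration.

```python
import numpy as np

# Hypothetical paired spheroid-area measurements (arbitrary units) from the two methods.
imagej  = np.array([102.0, 98.5, 110.2, 95.1, 101.7, 99.8])
chatgpt = np.array([101.4, 99.0, 109.5, 96.0, 102.3, 99.1])

diff = chatgpt - imagej
bias = diff.mean()                                   # mean difference between methods
sd = diff.std(ddof=1)                                # sample SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:.2f}, 95% limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")
# Points falling within the limits of agreement indicate close agreement between the methods.
```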
38420977
Advancing Medical Practice with Artificial Intelligence: ChatGPT in Healthcare.
2024
The Israel Medical Association journal : IMAJ
BACKGROUND: Advancements in artificial intelligence (AI) and natural language processing (NLP) have led to the development of language models such as ChatGPT. These models have the potential to transform healthcare and medical research. However, understanding their applications and limitations is essential. OBJECTIVES: To present a view of ChatGPT research and to critically assess ChatGPT's role in medical writing and clinical environments. METHODS: We performed a literature review via the PubMed search engine from 20 November 2022, to 23 April 2023. The search terms included ChatGPT, OpenAI, and large language models. We included studies that focused on ChatGPT, explored its use or implications in medicine, and were original research articles. The selected studies were analyzed considering study design, NLP tasks, main findings, and limitations. RESULTS: Our study included 27 articles that examined ChatGPT's performance in various tasks and medical fields. These studies covered knowledge assessment, writing, and analysis tasks. While ChatGPT was found to be useful in tasks such as generating research ideas, aiding clinical reasoning, and streamlining workflows, limitations were also identified. These limitations included inaccuracies, inconsistencies, fictitious information, and limited knowledge, highlighting the need for further improvements. CONCLUSIONS: The review underscores ChatGPT's potential in various medical applications. Yet, it also points to limitations that require careful human oversight and responsible use to improve patient care, education, and decision-making.
Tessler I; Wolfovitz A; Livneh N; Gecel NA; Sorin V; Barash Y; Konen E; Klang E
10
38391252
Is ChatGPT knowledgeable of acute coronary syndromes and pertinent European Society of Cardiology Guidelines?
2024
Minerva cardiology and angiology
BACKGROUND: Advancements in artificial intelligence are being seen in multiple fields, including medicine, and this trend is likely to continue going forward. We aimed to analyze the accuracy and reproducibility of ChatGPT answers about acute coronary syndromes (ACS). METHODS: The questions asked to ChatGPT were prepared in two categories: a list of frequently asked questions (FAQs) created from inquiries asked by the public, and a scientific question list prepared using the 2023 European Society of Cardiology (ESC) Guidelines for the management of ACS and ESC Clinical Practice Guidelines. Accuracy and reproducibility of ChatGPT responses about ACS were evaluated by two cardiologists with ten years of experience using the Global Quality Score (GQS). RESULTS: Eventually, 72 FAQs related to ACS met the study inclusion criteria. In total, 65 (90.3%) ChatGPT answers scored GQS 5, which indicated the highest accuracy and proficiency. None of the ChatGPT responses to FAQs about ACS scored GQS 1. In addition, the highest accuracy and reliability of ChatGPT answers was obtained for the prevention and lifestyle section, with GQS 5 for 19 (95%) answers and GQS 4 for 1 (5%) answer. In contrast, accuracy and proficiency of ChatGPT answers were lowest for the treatment and management section. Moreover, 68 (88.3%) ChatGPT responses to guideline-based questions scored GQS 5. Reproducibility of ChatGPT answers was 94.4% for FAQs and 90.9% for ESC guidelines questions. CONCLUSIONS: This study shows for the first time that ChatGPT can give accurate and sufficient responses to more than 90% of FAQs about ACS. In addition, the proficiency and correctness of ChatGPT answers to questions based on ESC guidelines were also substantial.
Gurbuz DC; Varis E
0-1
38907651
Evaluation of a Large Language Model's Ability to Assist in an Orthopedic Hand Clinic.
2024
Hand (New York, N.Y.)
BACKGROUND: Advancements in artificial intelligence technology, such as OpenAI's large language model, ChatGPT, could transform medicine through applications in a clinical setting. This study aimed to assess the utility of ChatGPT as a clinical assistant in an orthopedic hand clinic. METHODS: Nine clinical vignettes, describing various common and uncommon hand pathologies, were constructed and reviewed by 4 fellowship-trained orthopedic hand surgeons and an orthopedic resident. ChatGPT was given these vignettes and asked to generate a differential diagnosis, potential workup plan, and provide treatment options for its top differential. Responses were graded for accuracy and the overall utility scored on a 5-point Likert scale. RESULTS: The diagnostic accuracy of ChatGPT was 7 out of 9 cases, indicating an overall accuracy rate of 78%. ChatGPT was less reliable with more complex pathologies and failed to identify an intentionally incorrect presentation. ChatGPT received a score of 3.8 +/- 1.4 for correct diagnosis, 3.4 +/- 1.4 for helpfulness in guiding patient management, 4.1 +/- 1.0 for appropriate workup for the actual diagnosis, 4.3 +/- 0.8 for an appropriate recommended treatment plan for the diagnosis, and 4.4 +/- 0.8 for the helpfulness of treatment options in managing patients. CONCLUSION: ChatGPT was successful in diagnosing most of the conditions; however, the overall utility of its advice was variable. While it performed well in recommending treatments, it faced difficulties in providing appropriate diagnoses for uncommon pathologies. In addition, it failed to identify an obvious error in presenting pathology.
Kotzur T; Singh A; Parker J; Peterson B; Sager B; Rose R; Corley F; Brady C
32
39817149
ChatGPT is an Unreliable Source of Peer-Reviewed Information for Common Total Knee and Hip Arthroplasty Patient Questions.
2025
Advances in orthopedics
Background: Advances in artificial intelligence (AI), machine learning, and publicly accessible language model tools such as ChatGPT-3.5 continue to shape the landscape of modern medicine and patient education. ChatGPT's open access (OA), instant, human-sounding interface capable of carrying discussion on myriad topics makes it a potentially useful resource for patients seeking medical advice. As it pertains to orthopedic surgery, ChatGPT may become a source to answer common preoperative questions regarding total knee arthroplasty (TKA) and total hip arthroplasty (THA). Since ChatGPT can utilize the peer-reviewed literature to source its responses, this study seeks to characterize the validity of its responses to common TKA and THA questions and characterize the peer-reviewed literature that it uses to formulate its responses. Methods: Preoperative TKA and THA questions were formulated by fellowship-trained adult reconstruction surgeons based on common questions posed by patients in the clinical setting. Questions were inputted into ChatGPT with the initial request of using solely the peer-reviewed literature to generate its responses. The validity of each response was rated on a Likert scale by the fellowship-trained surgeons, and the sources utilized were characterized in terms of accuracy of comparison to existing publications, publication date, study design, level of evidence, journal of publication, journal impact factor based on the clarivate analytics factor tool, journal OA status, and whether the journal is based in the United States. Results: A total of 109 sources were cited by ChatGPT in its answers to 17 questions regarding TKA procedures and 16 THA procedures. Thirty-nine sources (36%) were deemed accurate or able to be directly traced to an existing publication. Of these, seven (18%) were identified as duplicates, yielding a total of 32 unique sources that were identified as accurate and further characterized. The most common characteristics of these sources included dates of publication between 2011 and 2015 (10), publication in The Journal of Bone and Joint Surgery (13), journal impact factors between 5.1 and 10.0 (17), internationally based journals (17), and journals that are not OA (28). The most common study designs were retrospective cohort studies and case series (seven each). The level of evidence was broadly distributed between Levels I, III, and IV (seven each). The averages for the Likert scales for medical accuracy and completeness were 4.4/6 and 1.92/3, respectively. Conclusions: Investigation into ChatGPT's response quality and use of peer-reviewed sources when prompted with archetypal pre-TKA and pre-THA questions found ChatGPT to provide mostly reliable responses based on fellowship-trained orthopedic surgeon review of 4.4/6 for accuracy and 1.92/3 for completeness despite a 64.22% rate of citing inaccurate references. This study suggests that until ChatGPT is proven to be a reliable source of valid information and references, patients must exercise extreme caution in directing their pre-TKA and THA questions to this medium.
Schwartzman JD; Shaath MK; Kerr MS; Green CC; Haidukewych GJ
32
37750052
Evaluating the Sensitivity, Specificity, and Accuracy of ChatGPT-3.5, ChatGPT-4, Bing AI, and Bard Against Conventional Drug-Drug Interactions Clinical Tools.
2023
Drug, healthcare and patient safety
BACKGROUND: AI platforms are equipped with advanced algorithms that have the potential to offer a wide range of applications in healthcare services. However, information about the accuracy of AI chatbots against conventional drug-drug interaction tools is limited. This study aimed to assess the sensitivity, specificity, and accuracy of ChatGPT-3.5, ChatGPT-4, Bing AI, and Bard in predicting drug-drug interactions. METHODS: AI-based chatbots (ie, ChatGPT-3.5, ChatGPT-4, Microsoft Bing AI, and Google Bard) were compared for their abilities to detect clinically relevant DDIs for 255 drug pairs. Descriptive statistics, such as specificity, sensitivity, accuracy, negative predictive value (NPV), and positive predictive value (PPV), were calculated for each tool. RESULTS: When a subscription tool was used as a reference, the specificity ranged from a low of 0.372 (ChatGPT-3.5) to a high of 0.769 (Microsoft Bing AI). Also, Microsoft Bing AI had the highest performance with an accuracy score of 0.788, with ChatGPT-3.5 having the lowest accuracy rate of 0.469. There was an overall improvement in performance for all the programs when the reference tool switched to a free DDI source, but still, ChatGPT-3.5 had the lowest specificity (0.392) and accuracy (0.525), and Microsoft Bing AI demonstrated the highest specificity (0.892) and accuracy (0.890). When assessing the consistency of accuracy across two different drug classes, ChatGPT-3.5 and ChatGPT-4 showed the highest variability in accuracy. In addition, ChatGPT-3.5, ChatGPT-4, and Bard exhibited the highest fluctuations in specificity when analyzing two medications belonging to the same drug class. CONCLUSION: Bing AI had the highest accuracy and specificity, outperforming Google's Bard, ChatGPT-3.5, and ChatGPT-4. The findings highlight the significant potential these AI tools hold in transforming patient care. While the current AI platforms evaluated are not without limitations, their ability to quickly analyze potentially significant interactions with good sensitivity suggests a promising step towards improved patient safety.
Al-Ashwal FY; Zawiah M; Gharaibeh L; Abu-Farha R; Bitar AN
10
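The record above (PMID 37750052) summarizes each chatbot with sensitivity, specificity, accuracy, PPV, and NPV against a reference interaction checker. Below is a minimal sketch of those confusion-matrix metrics; the counts are hypothetical (chosen only to sum to the study's 255 drug pairs) and do not reproduce the reported results.

```python
def dx_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard confusion-matrix metrics for a binary interaction classifier."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical chatbot calls vs. a reference DDI tool on 255 drug pairs.
print(dx_metrics(tp=140, fp=30, fn=15, tn=70))
```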
37549499
Performance and exploration of ChatGPT in medical examination, records and education in Chinese: Pave the way for medical AI.
2023
International journal of medical informatics
BACKGROUND: Although chat generative pre-trained transformer (ChatGPT) has made several successful attempts in the medical field, most notably in answering medical questions in English, no studies have evaluated ChatGPT's performance in a Chinese context for a medical task. OBJECTIVE: The aim of this study was to evaluate ChatGPT's ability to understand medical knowledge in Chinese, as well as its potential to serve as an electronic health infrastructure for medical development, by evaluating its performance in medical examinations, records, and education. METHOD: The Chinese (CNMLE) and English (ENMLE) datasets of the China National Medical Licensing Examination and the Chinese dataset (NEEPM) of the China National Entrance Examination for Postgraduate Clinical Medicine Comprehensive Ability were used to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4). We assessed answer accuracy, verbal fluency, and the classification of incorrect responses owing to hallucinations on multiple occasions. In addition, we tested ChatGPT's performance on discharge summaries and group learning in a Chinese context on a small scale. RESULTS: The accuracy of GPT-3.5 in CNMLE, ENMLE, and NEEPM was 56% (56/100), 76% (76/100), and 62% (62/100), respectively, compared to that of GPT-4, which was of 84% (84/100), 86% (86/100), and 82% (82/100). The verbal fluency of all the ChatGPT responses exceeded 95%. Among the GPT-3.5 incorrect responses, the proportions of open-domain hallucinations were 66 % (29/44), 54 % (14/24), and 63 % (24/38), whereas close-domain hallucinations accounted for 34 % (15/44), 46 % (14/24), and 37 % (14/38), respectively. By contrast, GPT-4 open-domain hallucinations accounted for 56% (9/16), 43% (6/14), and 83% (15/18), while close-domain hallucinations accounted for 44% (7/16), 57% (8/14), and 17% (3/18), respectively. In the discharge summary, ChatGPT demonstrated logical coherence, however GPT-3.5 could not fulfill the quality requirements, while GPT-4 met the qualification of 60% (6/10). In group learning, the verbal fluency and interaction satisfaction with ChatGPT were 100% (10/10). CONCLUSION: ChatGPT based on GPT-4 is at par with Chinese medical practitioners who passed the CNMLE and at the standard required for admission to clinical medical graduate programs in China. The GPT-4 shows promising potential for discharge summarization and group learning. Additionally, it shows high verbal fluency, resulting in a positive human-computer interaction experience. GPT-4 significantly improves multiple capabilities and reduces hallucinations compared to the previous GPT-3.5 model, with a particular leap forward in the Chinese comprehension capability of medical tasks. Artificial intelligence (AI) systems face the challenges of hallucinations, legal risks, and ethical issues. However, we discovered ChatGPT's potential to promote medical development as an electronic health infrastructure, paving the way for Medical AI to become necessary.
Wang H; Wu W; Dou Z; He L; Yang L
21
39137416
Evaluation of Generative Language Models in Personalizing Medical Information: Instrument Validation Study.
2024
JMIR AI
BACKGROUND: Although uncertainties exist regarding implementation, artificial intelligence-driven generative language models (GLMs) have enormous potential in medicine. Deployment of GLMs could improve patient comprehension of clinical texts and improve low health literacy. OBJECTIVE: The goal of this study is to evaluate the potential of ChatGPT-3.5 and GPT-4 to tailor the complexity of medical information to patient-specific input education level, which is crucial if it is to serve as a tool in addressing low health literacy. METHODS: Input templates related to 2 prevalent chronic diseases-type II diabetes and hypertension-were designed. Each clinical vignette was adjusted for hypothetical patient education levels to evaluate output personalization. To assess the success of a GLM (GPT-3.5 and GPT-4) in tailoring output writing, the readability of pre- and posttransformation outputs was quantified using the Flesch reading ease score (FKRE) and the Flesch-Kincaid grade level (FKGL). RESULTS: Responses (n=80) were generated using GPT-3.5 and GPT-4 across 2 clinical vignettes. For GPT-3.5, FKRE means were 57.75 (SD 4.75), 51.28 (SD 5.14), 32.28 (SD 4.52), and 28.31 (SD 5.22) for 6th grade, 8th grade, high school, and bachelor's, respectively; FKGL mean scores were 9.08 (SD 0.90), 10.27 (SD 1.06), 13.4 (SD 0.80), and 13.74 (SD 1.18). GPT-3.5 only aligned with the prespecified education levels at the bachelor's degree level. Conversely, GPT-4's FKRE mean scores were 74.54 (SD 2.6), 71.25 (SD 4.96), 47.61 (SD 6.13), and 13.71 (SD 5.77), with FKGL mean scores of 6.3 (SD 0.73), 6.7 (SD 1.11), 11.09 (SD 1.26), and 17.03 (SD 1.11) for the same respective education levels. GPT-4 met the target readability for all groups except the 6th-grade FKRE average. Both GLMs produced outputs with statistically significant differences between mean FKRE and FKGL across input education levels (FKRE: 6th grade P<.001; 8th grade P<.001; high school P<.001; bachelor's P=.003; FKGL: 6th grade P=.001; 8th grade P<.001; high school P<.001; bachelor's P<.001). CONCLUSIONS: GLMs can change the structure and readability of medical text outputs according to input-specified education. However, GLMs categorize input education designation into 3 broad tiers of output readability: easy (6th and 8th grade), medium (high school), and difficult (bachelor's degree). This is the first result to suggest that there are broader boundaries in the success of GLMs in output text simplification. Future research must establish how GLMs can reliably personalize medical texts to prespecified education levels to enable a broader impact on health care literacy.
Spina A; Andalib S; Flores D; Vermani R; Halaseh FF; Nelson AM
10
38874270
ChatGPT in Urology: Bridging Knowledge and Practice for Tomorrow's Healthcare, a Comprehensive Review.
2024
Journal of endourology
Background: Among emerging AI technologies, Chat-Generative Pre-Trained Transformer (ChatGPT) emerges as a notable language model, uniquely developed through artificial intelligence research. Its proven versatility across various domains, from language translation to healthcare data processing, underscores its promise within medical documentation, diagnostics, research, and education. The current comprehensive review aimed to investigate the utility of ChatGPT in urology education and practice and to highlight its potential limitations. Methods: The authors conducted a comprehensive literature review of the use of ChatGPT and its applications in urology education, research, and practice. Through a systematic review of the literature, with a search strategy using databases, such as PubMed and Embase, we analyzed the advantages and limitations of using ChatGPT in urology and evaluated its potential impact. Results: A total of 78 records were eligible for inclusion. The benefits of ChatGPT were frequently cited across various contexts. In educational/academic benefits mentioned in 21 records (87.5%), ChatGPT showed the ability to assist urologists by offering precise information and responding to inquiries derived from patient data analysis, thereby supporting decision making; in 18 records (75%), advantages comprised personalized medicine, predictive capabilities for disease risks and outcomes, streamlining clinical workflows and improved diagnostics. Nevertheless, apprehensions were expressed regarding potential misinformation, underscoring the necessity for human supervision to guarantee patient safety and address ethical concerns. Conclusion: The potential applications of ChatGPT hold the capacity to bring about transformative changes in urology education, research, and practice. AI technology can serve as a useful tool to augment human intelligence; however, it is essential to use it in a responsible and ethical manner.
Solano C; Tarazona N; Angarita GP; Medina AA; Ruiz S; Pedroza VM; Traxer O
0-1
39373234
Vaccination hesitancy: agreement between WHO and ChatGPT-4.0 or Gemini Advanced.
2,025
Annali di igiene : medicina preventiva e di comunita
BACKGROUND: An increasing number of individuals use online Artificial Intelligence (AI)-based chatbots to retrieve information on health-related topics. This study aims to evaluate the accuracy of the currently most commonly used advanced chatbots - ChatGPT-4.0 and Google Gemini Advanced - in answering vaccine-related questions. METHODS: We compared the answers provided by the World Health Organization (WHO) to 38 open questions on vaccination myths and misconceptions with the answers created by ChatGPT-4.0 and Gemini Advanced. Responses were considered "appropriate" if the information provided was coherent and not in contrast to current WHO recommendations or to drug regulatory indications. RESULTS AND CONCLUSIONS: The rate of agreement between WHO answers and ChatGPT-4.0 or Gemini Advanced was very high, as both provided 36 (94.7%) appropriate responses. The few discrepancies between WHO and AI-chatbot answers could not be considered "harmful", and both chatbots often invited the user to check reliable sources, such as the CDC or WHO websites, or to contact a local healthcare professional. In their current versions, both AI-chatbots may already be powerful instruments to support traditional communication tools in primary prevention, with the potential to improve health literacy and medication adherence and to reduce vaccine hesitancy and concerns. Given the rapid evolution of AI-based systems, further studies are strongly needed to monitor their accuracy and reliability over time.
Fiore M; Bianconi A; Acuti Martellucci C; Rosso A; Zauli E; Flacco ME; Manzoli L
43
38065864
Evaluating cluster analysis techniques in ChatGPT versus R-language with visualizations of author collaborations and keyword cooccurrences on articles in the Journal of Medicine (Baltimore) 2023: Bibliometric analysis.
2,023
Medicine
BACKGROUND: Analyses of author collaborations and keyword co-occurrences are frequently used in bibliographic research. However, no studies have introduced a straightforward yet effective approach, such as utilizing ChatGPT with Code Interpreter (ChatGPT_CI) or the R language, for creating cluster-oriented networks. This research aims to compare cluster analysis methods in ChatGPT_CI and R, visualize country-specific author collaborations, and then demonstrate the most effective approach. METHODS: The research focused on articles and review pieces from Medicine (Baltimore) published in 2023. By August 20, 2023, we had gathered metadata for 1976 articles using the Web of Science core collections. The efficiency and effectiveness of cluster displays between ChatGPT_CI and R were compared by evaluating their time consumption. The best method was then employed to present a series of visualizations of country-specific author collaborations, rooted in social network and cluster analyses. Visualization techniques incorporating network charts, chord diagrams, circle bar plots, circle packing plots, heat dendrograms, dendrograms, and word clouds were demonstrated. We further highlighted the research profiles of 2 prolific authors using timeline visuals. RESULTS: The research findings include that (1) the most active contributors were China, Nanjing Medical University (China), the Medical School Department, and Dr Chou from Taiwan when considering countries, institutions, departments, and individual authors, respectively; (2) the highest cited articles originated from Medicine (Baltimore) accounting for 4.53%: New England Journal of Medicine, PLOS ONE, LANCET, and The Journal of the American Medical Association, with respective contributions of 3.25%, 2.7%, 2.52%, and 1.54%; (3) visual cluster analysis in R proved to be more efficient and effective than ChatGPT_CI, reducing the time taken from 1 hour to just 3 minutes; (4) 7 cluster-focused networks were crafted using R on a custom platform; and (5) the research trajectories of 2 prominent authors (Dr Brin from the United States and Dr Chow from Taiwan) and articles themes in Medicine 2023 were depicted using timeline visuals. CONCLUSIONS: This research highlighted the efficient and effective methods for conducting cluster analyses of author collaborations using R. For future related studies, such as keyword co-occurrence analysis, R is recommended as a viable alternative for bibliographic research.
Cheng YZ; Lai TH; Chien TW; Chou W
10
38182023
Do ChatGPT and Google differ in answers to commonly asked patient questions regarding total shoulder and total elbow arthroplasty?
2,024
Journal of shoulder and elbow surgery
BACKGROUND: Artificial intelligence (AI) and large language models (LLMs) offer a new potential resource for patient education. The answers by Chat Generative Pre-Trained Transformer (ChatGPT), an LLM AI text bot, to frequently asked questions (FAQs) were compared to answers provided by a contemporary Google search to determine the reliability of information provided by these sources for patient education in upper extremity arthroplasty. METHODS: "Total shoulder arthroplasty" (TSA) and "total elbow arthroplasty" (TEA) were entered into Google Search and ChatGPT 3.0 to determine the ten most common FAQs. On Google, the FAQs were obtained through the "people also ask" section, while ChatGPT was asked to provide the ten most common FAQs. Each question, answer, and reference(s) cited were recorded. A modified version of the Rothwell system was used to categorize questions into 10 subtopics: special activities, timeline of recovery, restrictions, technical details, cost, indications/management, risks and complications, pain, longevity, and evaluation of surgery. Each reference was categorized into the following groups: commercial, academic, medical practice, single surgeon personal, or social media. Questions for TSA and TEA were combined for analysis and compared between Google and ChatGPT with a 2-sample z-test for proportions. RESULTS: Overall, most questions were related to procedural indications or management (17.5%). There were no significant differences between Google and ChatGPT across question categories. The majority of references were from academic websites (65%). ChatGPT produced a greater number of academic references compared to Google (80% vs. 50%; P = .047), while Google more commonly provided medical practice references (25% vs. 0%; P = .017). CONCLUSION: In conjunction with patient-physician discussions, AI LLMs may provide a reliable resource for patients. By providing information based on academic references, these tools have the potential to improve health literacy and shared decision-making for patients searching for information about TSA and TEA. CLINICAL SIGNIFICANCE: With the rising prevalence of AI programs, it is essential to understand how these applications affect patient education in medicine.
Tharakan S; Klein B; Bartlett L; Atlas A; Parada SA; Cohn RM
32
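The record above compares reference-type proportions with a 2-sample z-test for proportions. Below is a hedged sketch of that test using statsmodels; the counts 16/20 vs 10/20 are assumed values consistent with the reported 80% vs 50%, since the abstract does not state the exact denominators.

```python
from statsmodels.stats.proportion import proportions_ztest

# Academic references produced by each source (assumed counts consistent with
# the reported 80% vs 50%; the abstract does not give the exact denominators).
counts = [16, 10]   # ChatGPT, Google
nobs = [20, 20]

z_stat, p_value = proportions_ztest(counts, nobs)
# Under these assumed counts, p lands close to the reported .047.
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```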
39592492
Performance of Artificial Intelligence Chatbots in Answering Clinical Questions on Japanese Practical Guidelines for Implant-based Breast Reconstruction.
2,025
Aesthetic plastic surgery
BACKGROUND: Artificial intelligence (AI) chatbots, including ChatGPT-4 (GPT-4) and Grok-1 (Grok), have been shown to be potentially useful in several medical fields, but have not been examined in plastic and aesthetic surgery. The aim of this study is to evaluate the responses of these AI chatbots for clinical questions (CQs) related to the guidelines for implant-based breast reconstruction (IBBR) published by the Japan Society of Plastic and Reconstructive Surgery (JSPRS) in 2021. METHODS: CQs in the JSPRS guidelines were used as question sources. Responses from two AI chatbots, GPT-4 and Grok, were evaluated for accuracy, informativeness, and readability by five Japanese Board-certified breast reconstruction specialists and five Japanese clinical fellows of plastic surgery. RESULTS: GPT-4 outperformed Grok significantly in terms of accuracy (p < 0.001), informativeness (p < 0.001), and readability (p < 0.001) when evaluated by plastic surgery fellows. Compared to the original guidelines, Grok scored significantly lower in all three areas (all p < 0.001). The accuracy of GPT-4 was rated to be significantly higher based on scores given by plastic surgery fellows compared to those of breast reconstruction specialists (p = 0.012), whereas there was no significant difference between these scores for Grok. CONCLUSIONS: The study suggests that GPT-4 has the potential to assist in interpreting and applying clinical guidelines for IBBR but importantly there is still a risk that AI chatbots can misinform. Further studies are needed to understand the broader role of current and future AI chatbots in breast reconstruction surgery. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine Ratings, please refer to Table of Contents or online Instructions to Authors www.springer.com/00266 .
Shiraishi M; Sowa Y; Tomita K; Terao Y; Satake T; Muto M; Morita Y; Higai S; Toyohara Y; Kurokawa Y; Sunaga A; Okazaki M
32
38693697
Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care.
2,024
JMIR medical education
BACKGROUND: Artificial intelligence (AI) chatbots, such as ChatGPT-4, have shown immense potential for application across various aspects of medicine, including medical education, clinical practice, and research. OBJECTIVE: This study aimed to evaluate the performance of ChatGPT-4 in the 2023 Taiwan Audiologist Qualification Examination, thereby preliminarily exploring the potential utility of AI chatbots in the fields of audiology and hearing care services. METHODS: ChatGPT-4 was tasked to provide answers and reasoning for the 2023 Taiwan Audiologist Qualification Examination. The examination encompassed six subjects: (1) basic auditory science, (2) behavioral audiology, (3) electrophysiological audiology, (4) principles and practice of hearing devices, (5) health and rehabilitation of the auditory and balance systems, and (6) auditory and speech communication disorders (including professional ethics). Each subject included 50 multiple-choice questions, with the exception of behavioral audiology, which had 49 questions, amounting to a total of 299 questions. RESULTS: The correct answer rates across the 6 subjects were as follows: 88% for basic auditory science, 63% for behavioral audiology, 58% for electrophysiological audiology, 72% for principles and practice of hearing devices, 80% for health and rehabilitation of the auditory and balance systems, and 86% for auditory and speech communication disorders (including professional ethics). The overall accuracy rate for the 299 questions was 75%, which surpasses the examination's passing criteria of an average 60% accuracy rate across all subjects. A comprehensive review of ChatGPT-4's responses indicated that incorrect answers were predominantly due to information errors. CONCLUSIONS: ChatGPT-4 demonstrated a robust performance in the Taiwan Audiologist Qualification Examination, showcasing effective logical reasoning skills. Our results suggest that with enhanced information accuracy, ChatGPT-4's performance could be further improved. This study indicates significant potential for the application of AI chatbots in audiology and hearing care services.
Wang S; Mo C; Chen Y; Dai X; Wang H; Shen X
32
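The record above reports per-subject accuracies and an overall 75% accuracy across 299 questions. The short sketch below is only a rough consistency check: the per-subject correct counts are back-calculated from the rounded percentages given in the abstract, not taken from the study's raw data.

```python
# Per-subject (accuracy, number of questions) as stated in the abstract above.
subjects = {
    "basic auditory science":           (0.88, 50),
    "behavioral audiology":             (0.63, 49),
    "electrophysiological audiology":   (0.58, 50),
    "hearing devices":                  (0.72, 50),
    "auditory/balance rehabilitation":  (0.80, 50),
    "communication disorders & ethics": (0.86, 50),
}

# Back-calculate correct counts from rounded percentages (approximate only).
correct = sum(round(acc * n) for acc, n in subjects.values())
total = sum(n for _, n in subjects.values())
print(f"{correct}/{total} = {correct / total:.1%}")  # about 223/299, i.e. the reported ~75%
```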
38184368
Assessing the clinical reasoning of ChatGPT for mechanical thrombectomy in patients with stroke.
2,024
Journal of neurointerventional surgery
BACKGROUND: Artificial intelligence (AI) has become a promising tool in medicine. ChatGPT, a large language model AI Chatbot, shows promise in supporting clinical practice. We assess the potential of ChatGPT as a clinical reasoning tool for mechanical thrombectomy in patients with stroke. METHODS: An internal validation of the abilities of ChatGPT was first performed using artificially created patient scenarios before assessment of real patient scenarios from the medical center's stroke database. All patients with large vessel occlusions who underwent mechanical thrombectomy at Tulane Medical Center between January 1, 2022 and December 31, 2022 were included in the study. The performance of ChatGPT in evaluating which patients should undergo mechanical thrombectomy was compared with the decisions made by board-certified stroke neurologists and neurointerventionalists. The interpretation skills, clinical reasoning, and accuracy of ChatGPT were analyzed. RESULTS: 102 patients with large vessel occlusions underwent mechanical thrombectomy. ChatGPT agreed with the physician's decision whether or not to pursue thrombectomy in 54.3% of the cases. ChatGPT had mistakes in 8.8% of the cases, consisting of mathematics, logic, and misinterpretation errors. In the internal validation phase, ChatGPT was able to provide nuanced clinical reasoning and was able to perform multi-step thinking, although with an increased rate of making mistakes. CONCLUSION: ChatGPT shows promise in clinical reasoning, including the ability to factor a patient's underlying comorbidities when considering mechanical thrombectomy. However, ChatGPT is prone to errors as well and should not be relied on as a sole decision-making tool in its present form, but it has potential to assist clinicians with more efficient work flow.
Chen TC; Couldwell MW; Singer J; Singer A; Koduri L; Kaminski E; Nguyen K; Multala E; Dumont AS; Wang A
0-1
40191936
The role of artificial intelligence in cardiovascular research: Fear less and live bolder.
2,025
European journal of clinical investigation
BACKGROUND: Artificial intelligence (AI) has captured the attention of everyone, including cardiovascular (CV) clinicians and scientists. Moving beyond philosophical debates, modern cardiology cannot overlook AI's growing influence but must actively explore its potential applications in clinical practice and research methodology. METHODS AND RESULTS: AI offers exciting possibilities for advancing CV medicine by uncovering disease heterogeneity, integrating complex multimodal data, and enhancing treatment strategies. In this review, we discuss the innovative applications of AI in cardiac electrophysiology, imaging, angiography, biomarkers, and genomic data, as well as emerging tools like face recognition and speech analysis. Furthermore, we focus on the expanding role of machine learning (ML) in predicting CV risk and outcomes, outlining a roadmap for the implementation of AI in CV care delivery. While the future of AI holds great promise, technical limitations and ethical challenges remain significant barriers to its widespread clinical adoption. CONCLUSIONS: Addressing these issues through the development of high-quality standards and involving key stakeholders will be essential for AI to transform cardiovascular care safely and effectively.
Scuricini A; Ramoni D; Liberale L; Montecucco F; Carbone F
10
38528129
Exploring the Unknown: Evaluating ChatGPT's Performance in Uncovering Novel Aspects of Plastic Surgery and Identifying Areas for Future Innovation.
2,024
Aesthetic plastic surgery
BACKGROUND: Artificial intelligence (AI) has emerged as a powerful tool in various medical fields, including plastic surgery. This study aims to evaluate the performance of ChatGPT, an AI language model, in elucidating historical aspects of plastic surgery and identifying potential avenues for innovation. METHODS: A comprehensive analysis of ChatGPT's responses to a diverse range of plastic surgery-related inquiries was performed. The quality of the AI-generated responses was assessed based on their relevance, accuracy, and novelty. Additionally, the study examined the AI's ability to recognize gaps in existing knowledge and propose innovative solutions. ChatGPT's responses were analysed by specialist plastic surgeons with extensive research experience, and quantitatively analysed with a Likert scale. RESULTS: ChatGPT demonstrated a high degree of proficiency in addressing a wide array of plastic surgery-related topics. The AI-generated responses were found to be relevant and accurate in most cases. However, it demonstrated convergent thinking and failed to generate genuinely novel ideas to revolutionize plastic surgery. Instead, it suggested currently popular trends that demonstrate great potential for further advancements. Some of the references presented were also erroneous as they cannot be validated against the existing literature. CONCLUSION: Although ChatGPT requires major improvements, this study highlights its potential as an effective tool for uncovering novel aspects of plastic surgery and identifying areas for future innovation. By leveraging the capabilities of AI language models, plastic surgeons may drive advancements in the field. Further studies are needed to cautiously explore the integration of AI-driven insights into clinical practice and to evaluate their impact on patient outcomes. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Lim B; Seth I; Xie Y; Kenney PS; Cuomo R; Rozen WM
32
37713254
The Potential of ChatGPT as a Self-Diagnostic Tool in Common Orthopedic Diseases: Exploratory Study.
2,023
Journal of medical Internet research
BACKGROUND: Artificial intelligence (AI) has gained tremendous popularity recently, especially the use of natural language processing (NLP). ChatGPT is a state-of-the-art chatbot capable of creating natural conversations using NLP. The use of AI in medicine can have a tremendous impact on health care delivery. Although some studies have evaluated ChatGPT's accuracy in self-diagnosis, there is no research regarding its precision and the degree to which it recommends medical consultations. OBJECTIVE: The aim of this study was to evaluate ChatGPT's ability to accurately and precisely self-diagnose common orthopedic diseases, as well as the degree of recommendation it provides for medical consultations. METHODS: Over a 5-day course, each of the study authors submitted the same questions to ChatGPT. The conditions evaluated were carpal tunnel syndrome (CTS), cervical myelopathy (CM), lumbar spinal stenosis (LSS), knee osteoarthritis (KOA), and hip osteoarthritis (HOA). Answers were categorized as either correct, partially correct, incorrect, or a differential diagnosis. The percentage of correct answers and reproducibility were calculated. The reproducibility between days and raters were calculated using the Fleiss kappa coefficient. Answers that recommended that the patient seek medical attention were recategorized according to the strength of the recommendation as defined by the study. RESULTS: The ratios of correct answers were 25/25, 1/25, 24/25, 16/25, and 17/25 for CTS, CM, LSS, KOA, and HOA, respectively. The ratios of incorrect answers were 23/25 for CM and 0/25 for all other conditions. The reproducibility between days was 1.0, 0.15, 0.7, 0.6, and 0.6 for CTS, CM, LSS, KOA, and HOA, respectively. The reproducibility between raters was 1.0, 0.1, 0.64, -0.12, and 0.04 for CTS, CM, LSS, KOA, and HOA, respectively. Among the answers recommending medical attention, the phrases "essential," "recommended," "best," and "important" were used. Specifically, "essential" occurred in 4 out of 125, "recommended" in 12 out of 125, "best" in 6 out of 125, and "important" in 94 out of 125 answers. Additionally, 7 out of the 125 answers did not include a recommendation to seek medical attention. CONCLUSIONS: The accuracy and reproducibility of ChatGPT to self-diagnose five common orthopedic conditions were inconsistent. The accuracy could potentially be improved by adding symptoms that could easily identify a specific location. Only a few answers were accompanied by a strong recommendation to seek medical attention according to our study standards. Although ChatGPT could serve as a potential first step in accessing care, we found variability in accurate self-diagnosis. Given the risk of harm with self-diagnosis without medical follow-up, it would be prudent for an NLP to include clear language alerting patients to seek expert medical opinions. We hope to shed further light on the use of AI in a future clinical study.
Kuroiwa T; Sarcon A; Ibara T; Yamada E; Yamamoto A; Tsukamoto K; Fujita K
32
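The record above reports reproducibility as Fleiss kappa coefficients across days and raters. A hedged sketch of that computation with statsmodels follows; the answer categories and the small ratings matrix are illustrative assumptions, not the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Illustrative ratings only: rows = repeated submissions of a question,
# columns = raters; 0 = incorrect, 1 = partially correct, 2 = correct,
# 3 = differential diagnosis.
ratings = np.array([
    [2, 2, 2],
    [2, 2, 1],
    [0, 0, 0],
    [3, 2, 3],
    [2, 2, 2],
])

table, _ = aggregate_raters(ratings)   # subjects x categories count table
print(f"Fleiss kappa = {fleiss_kappa(table):.2f}")
```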
37707884
The Potential and Concerns of Using AI in Scientific Research: ChatGPT Performance Evaluation.
2,023
JMIR medical education
BACKGROUND: Artificial intelligence (AI) has many applications in various aspects of our daily life, including health, criminal, education, civil, business, and liability law. One aspect of AI that has gained significant attention is natural language processing (NLP), which refers to the ability of computers to understand and generate human language. OBJECTIVE: This study aims to examine the potential for, and concerns of, using AI in scientific research. For this purpose, research articles were generated with ChatGPT, and the quality of the generated reports was analyzed to assess the application's impact on the research framework, data analysis, and the literature review. The study also explored concerns around ownership and the integrity of research when using AI-generated text. METHODS: A total of 4 articles were generated using ChatGPT, and thereafter evaluated by 23 reviewers. The researchers developed an evaluation form to assess the quality of the articles generated. Additionally, 50 abstracts were generated using ChatGPT and their quality was evaluated. The data were subjected to ANOVA, and thematic analysis was applied to the qualitative feedback provided by the reviewers. RESULTS: When using detailed prompts and providing the context of the study, ChatGPT could generate high-quality research that could be published in high-impact journals. However, ChatGPT had a minor impact on developing the research framework and data analysis. The primary area needing improvement was the development of the literature review. Moreover, reviewers expressed concerns around ownership and the integrity of the research when using AI-generated text. Nonetheless, ChatGPT has a strong potential to increase human productivity in research and can be used in academic writing. CONCLUSIONS: AI-generated text has the potential to improve the quality of high-impact research articles. The findings of this study suggest that decision makers and researchers should focus more on the methodology part of the research, which includes research design, developing research tools, and analyzing data in depth, to draw strong theoretical and practical implications, thereby establishing a revolution in scientific research in the era of AI. The practical implications of this study can be used in different fields such as medical education to deliver materials that develop the basic competencies of both medical students and faculty members.
Khlaif ZN; Mousa A; Hattab MK; Itmazi J; Hassan AA; Sanmugam M; Ayyoub A
10
39318412
Appraisal of ChatGPT's responses to common patient questions regarding Tommy John surgery.
2,024
Shoulder & elbow
BACKGROUND: Artificial intelligence (AI) has progressed at a fast pace. ChatGPT, a rapidly expanding AI platform, has several growing applications in medicine and patient care. However, its ability to provide high-quality answers to patient questions about orthopedic procedures such as Tommy John surgery is unknown. Our objective is to evaluate the quality of information provided by ChatGPT 3.5 and 4.0 in response to patient questions regarding Tommy John surgery. METHODS: Twenty-five patient questions regarding Tommy John surgery were posed to ChatGPT 3.5 and 4.0. Readability was assessed via the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Score, Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index, and Automated Readability Index. The quality of each response was graded using a 5-point Likert scale. RESULTS: ChatGPT generated information at an educational level that greatly exceeded the recommended level. ChatGPT 4.0 produced slightly better responses to common questions regarding Tommy John surgery, with fewer inaccuracies than ChatGPT 3.5. CONCLUSION: Although ChatGPT can provide accurate information regarding Tommy John surgery, its responses may not be easily comprehended by the average patient. As AI platforms become more accessible to the public, patients must be aware of their limitations.
Shaari AL; Fano AN; Anakwenze O; Klifto C
32
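The record above scores responses with six readability measures. The sketch below shows one way such scores could be computed, assuming the third-party textstat package; the abstract does not say which tool the authors actually used, and the sample response text is invented.

```python
import textstat

# Invented example of a patient-facing response about Tommy John surgery.
response = (
    "Tommy John surgery rebuilds the ulnar collateral ligament of the elbow "
    "using a tendon graft. Most athletes need twelve to eighteen months of "
    "rehabilitation before returning to competitive throwing."
)

scores = {
    "Flesch Reading Ease":        textstat.flesch_reading_ease(response),
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(response),
    "Gunning Fog":                textstat.gunning_fog(response),
    "SMOG":                       textstat.smog_index(response),
    "Coleman-Liau":               textstat.coleman_liau_index(response),
    "Automated Readability":      textstat.automated_readability_index(response),
}
for name, value in scores.items():
    print(f"{name}: {value:.1f}")
```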
38133911
Medical Student Experiences and Perceptions of ChatGPT and Artificial Intelligence: Cross-Sectional Study.
2,023
JMIR medical education
BACKGROUND: Artificial intelligence (AI) has the potential to revolutionize the way medicine is learned, taught, and practiced, and medical education must prepare learners for these inevitable changes. Academic medicine has, however, been slow to embrace recent AI advances. Since its launch in November 2022, ChatGPT has emerged as a fast and user-friendly large language model that can assist health care professionals, medical educators, students, trainees, and patients. While many studies focus on the technology's capabilities, potential, and risks, there is a gap in studying the perspective of end users. OBJECTIVE: The aim of this study was to gauge the experiences and perspectives of graduating medical students on ChatGPT and AI in their training and future careers. METHODS: A cross-sectional web-based survey of recently graduated medical students was conducted in an international academic medical center between May 5, 2023, and June 13, 2023. Descriptive statistics were used to tabulate variable frequencies. RESULTS: Of 325 applicants to the residency programs, 265 completed the survey (an 81.5% response rate). The vast majority of respondents denied using ChatGPT in medical school, with 20.4% (n=54) using it to help complete written assessments and only 9.4% using the technology in their clinical work (n=25). More students planned to use it during residency, primarily for exploring new medical topics and research (n=168, 63.4%) and exam preparation (n=151, 57%). Male students were significantly more likely to believe that AI will improve diagnostic accuracy (n=47, 51.7% vs n=69, 39.7%; P=.001), reduce medical error (n=53, 58.2% vs n=71, 40.8%; P=.002), and improve patient care (n=60, 65.9% vs n=95, 54.6%; P=.007). Previous experience with AI was significantly associated with positive AI perception in terms of improving patient care, decreasing medical errors and misdiagnoses, and increasing the accuracy of diagnoses (P=.001, P<.001, P=.008, respectively). CONCLUSIONS: The surveyed medical students had minimal formal and informal experience with AI tools and limited perceptions of the potential uses of AI in health care but had overall positive views of ChatGPT and AI and were optimistic about the future of AI in medical education and health care. Structured curricula and formal policies and guidelines are needed to adequately prepare medical learners for the forthcoming integration of AI in medicine.
Alkhaaldi SMI; Kassab CH; Dimassi Z; Oyoun Alsoud L; Al Fahim M; Al Hageh C; Ibrahim H
10
38708385
Evaluating Artificial Intelligence's Role in Teaching the Reporting and Interpretation of Computed Tomographic Angiography for Preoperative Planning of the Deep Inferior Epigastric Artery Perforator Flap.
2,024
JPRAS open
BACKGROUND: Artificial intelligence (AI) has the potential to transform preoperative planning for breast reconstruction by enhancing the efficiency, accuracy, and reliability of radiology reporting through automatic interpretation and perforator identification. Large language models (LLMs) have recently advanced significantly in medicine. This study aimed to evaluate the proficiency of contemporary LLMs in interpreting computed tomography angiography (CTA) scans for deep inferior epigastric perforator (DIEP) flap preoperative planning. METHODS: Four prominent LLMs, ChatGPT-4, BARD, Perplexity, and BingAI, answered six questions on CTA scan reporting. A panel of expert plastic surgeons with extensive experience in breast reconstruction assessed the responses using a Likert scale. In addition, the responses' readability was evaluated using the Flesch Reading Ease score, the Flesch-Kincaid Grade Level, and the Coleman-Liau Index. The DISCERN score was utilized to determine the responses' suitability. Statistical significance was identified through a t-test, and P-values < 0.05 were considered significant. RESULTS: BingAI provided the most accurate and useful responses to prompts, followed by Perplexity, ChatGPT, and then BARD. BingAI had the greatest Flesch Reading Ease (34.7+/-5.5) and DISCERN (60.5+/-3.9) scores. Perplexity had higher Flesch-Kincaid Grade Level (20.5+/-2.7) and Coleman-Liau Index (17.8+/-1.6) scores than the other LLMs. CONCLUSION: LLMs exhibit limitations in their ability to report CTA scans for preoperative planning of breast reconstruction, yet the rapid advancements in technology hint at a promising future. AI stands poised to enhance the education of CTA reporting and aid preoperative planning. In the future, AI technology could provide automatic CTA interpretation, enhancing the efficiency, accuracy, and reliability of CTA reports.
Lim B; Cevik J; Seth I; Sofiadellis F; Ross RJ; Rozen WM; Cuomo R
32
37863479
The Role of Artificial Intelligence in Surgery: What do General Surgery Residents Think?
2,024
The American surgeon
BACKGROUND: Artificial intelligence (AI) holds significant potential in medical education and patient care, but its rapid emergence presents ethical and practical challenges. This study explored the perspectives of surgical residents on AI's role in medicine. METHODS: We performed a cross-sectional study surveying general surgery residents at a university-affiliated teaching hospital about their views on AI in medicine and surgical training. The survey covered demographics, residents' understanding of AI, its integration into medical practice, and use of AI tools like ChatGPT. The survey design was inspired by a recent national survey and underwent pretesting before deployment. RESULTS: Of the 31 participants surveyed, 24% identified diagnostics as AI's top application, 12% favored its use in identifying anatomical structures in surgeries, and 20% endorsed AI integration into EMRs for predictive models. Attitudes toward AI varied based on its intended application: 77.41% expressed concern about AI making life decisions and 70.97% felt excited about its application for repetitive tasks. A significant 67.74% believed AI could enhance the understanding of medical knowledge. Perception of AI integration varied with AI familiarity (P = .01), with more knowledgeable respondents expressing more positivity. Moreover, familiarity influenced the perceived academic use of ChatGPT (P = .039) and attitudes toward AI in operating rooms (P = .032). Conclusion: This study provides insights into surgery residents' perceptions of AI in medical practice and training. These findings can inform future research, shape policy decisions, and guide AI development, promoting a harmonious collaboration between AI and surgeons to improve both training and patient care.
St John A; Cooper L; Kavic SM
10
38912098
AI-Generated Graduate Medical Education Content for Total Joint Arthroplasty: Comparing ChatGPT Against Orthopaedic Fellows.
2,024
Arthroplasty today
BACKGROUND: Artificial intelligence (AI) in medicine has primarily focused on diagnosing and treating diseases and assisting in the development of academic scholarly work. This study aimed to evaluate a new use of AI in orthopaedics: content generation for professional medical education. Quality, accuracy, and time were compared between content created by ChatGPT and orthopaedic surgery clinical fellows. METHODS: ChatGPT and 3 orthopaedic adult reconstruction fellows were tasked with creating educational summaries of 5 total joint arthroplasty-related topics. Responses were evaluated across 5 domains by 4 blinded reviewers from different institutions who are all current or former total joint arthroplasty fellowship directors or national arthroplasty board review course directors. RESULTS: ChatGPT created better orthopaedic content than the fellows when mean aggregate scores for all 5 topics and domains were compared (P ≤ .001). The only domain in which fellows outperformed ChatGPT was the integration of key points and references (P = .006). ChatGPT outperformed the fellows in response time, averaging 16.6 seconds vs the fellows' 94 minutes per prompt (P = .002). CONCLUSIONS: The current findings, demonstrating efficient and accurate content generation, underscore ChatGPT's potential as an adjunctive tool to enhance orthopaedic arthroplasty graduate medical education. Future studies are warranted to explore AI's role further and optimize its utility in augmenting the educational development of arthroplasty trainees.
DeCook R; Muffly BT; Mahmood S; Holland CT; Ayeni AM; Ast MP; Bolognese MP; Guild GN 3rd; Sheth NP; Pean CA; Premkumar A
32
40384805
Exploring Filipino Medical Students' Attitudes and Perceptions of Artificial Intelligence in Medical Education: A Mixed-Methods Study.
2,024
MedEdPublish (2016)
BACKGROUND: Artificial intelligence (AI) is emerging as one of the most revolutionary technologies shaping the educational system utilized by this generation of learners globally. AI enables opportunities for innovative learning experiences while helping teachers devise teaching strategies through automation and intelligent tutoring systems. The integration of AI into medical education has potential for advancing health management frameworks and elevating the quality of patient care. However, developing countries, including the Philippines, face issues with equitable AI use. Furthermore, medical educators struggle to learn AI, which makes teaching its use challenging. To address this, the current study aims to investigate the current perceptions of medical students on the role of AI in medical education and the practice of medicine. METHODS: The study utilized a mixed-methods approach to quantitatively and qualitatively assess medical students' current attitudes toward and perceptions of AI. Quantitative assessment was done via survey and qualitative analysis via focus group discussion. Participants comprised 20 medical students from the College of Medicine, University of the Philippines Manila. RESULTS: Analysis of the attitudes and perceptions of Filipino medical students on AI showed that participants had a baseline understanding and awareness of AI but lacked opportunities to apply it in their medical studies and in clinical practice. The majority of participants recognized its advantages in medical education but had reservations about its overall application in a clinical setting. CONCLUSIONS: The results of this investigation can direct future studies that aim to guide educators on the emerging role of AI in medical practice and the healthcare system and on its effect on physicians-in-training under contemporary medical educational practices. Findings from our study revealed key focal points which need to be sufficiently addressed in order to better equip medical students with the knowledge, tools, and skills needed to utilize and integrate AI into their education and eventual practice as healthcare professionals.
Falcon RMG; Alcazar RMU; Babaran HG; Caragay BDB; Corpuz CAA; Kho MVS; Perez ACN; Isip-Tan ITC
10
37654681
Evaluation of Artificial Intelligence-generated Responses to Common Plastic Surgery Questions.
2,023
Plastic and reconstructive surgery. Global open
BACKGROUND: Artificial intelligence (AI) is increasingly used to answer questions, yet the accuracy and validity of current tools are uncertain. In contrast to internet queries, AI generates summary responses as definitive. The internet is rife with inaccuracies, and plastic surgery management guidelines evolve, making verifiable information important. METHODS: We posed 10 questions about breast implant-associated illness, anaplastic large lymphoma, and squamous carcinoma to Bing, using the "more balanced" option, and to ChatGPT. Answers were reviewed by two plastic surgeons for accuracy and fidelity to information on the Food and Drug Administration (FDA) and American Society of Plastic Surgeons (ASPS) websites. We also presented 10 multiple-choice questions from the 2022 plastic surgery in-service examination to Bing, using the "more precise" option, and ChatGPT. Questions were repeated three times over consecutive weeks, and answers were evaluated for accuracy and stability. RESULTS: Compared with answers from the FDA and ASPS, Bing and ChatGPT were accurate. Bing answered 10 of the 30 multiple-choice questions correctly, nine incorrectly, and did not answer 11. ChatGPT correctly answered 16 and incorrectly answered 14. In both parts, responses from Bing were shorter, less detailed, and referred to verified and unverified sources; ChatGPT did not provide citations. CONCLUSIONS: These AI tools provided accurate information from the FDA and ASPS websites, but neither consistently answered questions requiring nuanced decision-making correctly. Advances in applications to plastic surgery will require algorithms that selectively identify, evaluate, and exclude information to enhance the accuracy, precision, validity, reliability, and utility of AI-generated responses.
Copeland-Halperin LR; O'Brien L; Copeland M
32
40423076
A Talk with ChatGPT: The Role of Artificial Intelligence in Shaping the Future of Cardiology and Electrophysiology.
2,025
Journal of personalized medicine
Background: Artificial intelligence (AI) is poised to significantly impact the future of cardiology and electrophysiology, offering new tools to interpret complex datasets, improve diagnosis, optimize clinical workflows, and personalize therapy. ChatGPT-4o, a leading AI-based language model, exemplifies the transformative potential of AI in clinical research, medical education, and patient care. Aim and Methods: In this paper, we present an exploratory dialogue with ChatGPT to assess the role of AI in shaping the future of cardiology, with a particular focus on arrhythmia management and cardiac electrophysiology. Topics discussed include AI applications in ECG interpretation, arrhythmia detection, procedural guidance during ablation, and risk stratification for sudden cardiac death. We also examine the risks associated with AI use, including overreliance, interpretability challenges, data bias, and generalizability. Conclusions: The integration of AI into cardiovascular care offers the potential to enhance diagnostic accuracy, tailor interventions, and support decision-making. However, the adoption of AI must be carefully balanced with clinical expertise and ethical considerations. By fostering collaboration between clinicians and AI developers, it is possible to guide the development of reliable, transparent, and effective tools that will shape the future of personalized cardiology and electrophysiology.
Cersosimo A; Zito E; Pierucci N; Matteucci A; La Fazia VM
10
40046930
Artificial intelligence in healthcare education: evaluating the accuracy of ChatGPT, Copilot, and Google Gemini in cardiovascular pharmacology.
2,025
Frontiers in medicine
BACKGROUND: Artificial intelligence (AI) is revolutionizing medical education; however, its limitations remain underexplored. This study evaluated the accuracy of three generative AI tools-ChatGPT-4, Copilot, and Google Gemini-in answering multiple-choice questions (MCQ) and short-answer questions (SAQ) related to cardiovascular pharmacology, a key subject in healthcare education. METHODS: Using free versions of each AI tool, we administered 45 MCQs and 30 SAQs across three difficulty levels: easy, intermediate, and advanced. AI-generated answers were reviewed by three pharmacology experts. The accuracy of MCQ responses was recorded as correct or incorrect, while SAQ responses were rated on a 1-5 scale based on relevance, completeness, and correctness. RESULTS: ChatGPT, Copilot, and Gemini demonstrated high accuracy scores in easy and intermediate MCQs (87-100%). While all AI models showed a decline in performance on the advanced MCQ section, only Copilot (53% accuracy) and Gemini (20% accuracy) had significantly lower scores compared to their performance on easy-intermediate levels. SAQ evaluations revealed high accuracy scores for ChatGPT (overall 4.7 +/- 0.3) and Copilot (overall 4.5 +/- 0.4) across all difficulty levels, with no significant differences between the two tools. In contrast, Gemini's SAQ performance was markedly lower across all levels (overall 3.3 +/- 1.0). CONCLUSION: ChatGPT-4 demonstrates the highest accuracy in addressing both MCQ and SAQ cardiovascular pharmacology questions, regardless of difficulty level. Copilot ranks second after ChatGPT, while Google Gemini shows significant limitations in handling complex MCQs and providing accurate responses to SAQ-type questions in this field. These findings can guide the ongoing refinement of AI tools for specialized medical education.
Salman IM; Ameer OZ; Khanfar MA; Hsieh YH
21
38106923
Medicine and Pharmacy Students' Knowledge, Attitudes, and Practice regarding Artificial Intelligence Programs: Jordan and West Bank of Palestine.
2,023
Advances in medical education and practice
BACKGROUND: Artificial intelligence (AI) programs generate responses to input text, showcasing their innovative capabilities in education and demonstrating various potential benefits, particularly in the field of medical education. The current knowledge of health profession students about AI programs has still not been assessed in Jordan and the West Bank of Palestine (WBP). AIM: This study aimed to assess students' awareness and practice of AI programs in medicine and pharmacy in Jordan and the WBP. METHODS: This study was in the form of an observational, cross-sectional survey. A questionnaire was electronically distributed among students of medicine and pharmacy at An-Najah National University (WBP), Al-Isra University (Jordan), and Al-Balqa Applied University (Jordan). The questionnaire consisted of three main categories: sociodemographic characteristics of the participants, practice of AI programs, and perceptions of AI programs, including ChatGPT. RESULTS: A total of 321 students responded to the distributed questionnaire, and 261 participants (81.3%) stated that they had heard about AI programs. In addition, 135 participants had used AI programs before (42.1%), while less than half the participants used them in their university studies (44.2%): for drug information (44.5%), homework (38.9%), and writing research articles (39.3%). There was significantly (48.3%, P<0.005) more conviction in the use of AI programs for writing research articles among pharmacy students from Palestine compared to Jordan. Lastly, there was significantly more (53.8%, P<0.05) AI program use among medicine students than pharmacy students. CONCLUSION: While most medicine and pharmacy students had heard about AI programs, only a small proportion of the participants had used them in their medical study. In addition, attitudes and practice related to AI programs in their education differs between medicine and pharmacy students and between WBP and Jordan.
Mosleh R; Jarrar Q; Jarrar Y; Tazkarji M; Hawash M
10
38649959
Artificial intelligence and medical education: application in classroom instruction and student assessment using a pharmacology & therapeutics case study.
2,024
BMC medical education
BACKGROUND: Artificial intelligence (AI) tools are designed to create or generate content from their trained parameters using an online conversational interface. AI has opened new avenues in redefining the role boundaries of teachers and learners and has the potential to impact the teaching-learning process. METHODS: In this descriptive proof-of-concept cross-sectional study, we explored the application of three generative AI tools to the drug treatment of hypertension theme to generate: (1) specific learning outcomes (SLOs); (2) test items (A-type and case-cluster MCQs; SAQs; OSPEs); (3) test standard-setting parameters for medical students. RESULTS: Analysis of AI-generated output showed profound homology but divergence in quality and responsiveness to refining search queries. The SLOs identified key domains of antihypertensive pharmacology and therapeutics relevant to stages of the medical program, stated with appropriate action verbs as per Bloom's taxonomy. Test items often had clinical vignettes aligned with the key domain stated in search queries. Some A-type MCQ test items had construction defects, multiple correct answers, and dubious appropriateness to the learner's stage. ChatGPT generated explanations for test items, enhancing their usefulness for supporting learners' self-study. Integrated case-cluster items had focused clinical case description vignettes, integration across disciplines, and targeted higher levels of competencies. The response of AI tools on standard-setting varied. Individual questions for each SAQ clinical scenario were mostly open-ended. The AI-generated OSPE test items were appropriate for the learner's stage and identified relevant pharmacotherapeutic issues. The model answers supplied for both SAQs and OSPEs can aid course instructors in planning classroom lessons, identifying suitable instructional methods, and establishing rubrics for grading, and can serve learners as a study guide. Key lessons learnt for improving AI-generated test item quality are outlined. CONCLUSIONS: AI tools are useful adjuncts to plan instructional methods, identify themes for test blueprinting, generate test items, and guide test standard-setting appropriate to learners' stage in the medical program. However, experts need to review the content validity of AI-generated output. We expect AIs to influence the medical education landscape to empower learners and to align competencies with curriculum implementation. AI literacy is an essential competency for health professionals.
Sridharan K; Sequeira RP
21
38728687
The Role of Large Language Models in Transforming Emergency Medicine: Scoping Review.
2,024
JMIR medical informatics
BACKGROUND: Artificial intelligence (AI), more specifically large language models (LLMs), holds significant potential in revolutionizing emergency care delivery by optimizing clinical workflows and enhancing the quality of decision-making. Although enthusiasm for integrating LLMs into emergency medicine (EM) is growing, the existing literature is characterized by a disparate collection of individual studies, conceptual analyses, and preliminary implementations. Given these complexities and gaps in understanding, a cohesive framework is needed to comprehend the existing body of knowledge on the application of LLMs in EM. OBJECTIVE: Given the absence of a comprehensive framework for exploring the roles of LLMs in EM, this scoping review aims to systematically map the existing literature on LLMs' potential applications within EM and identify directions for future research. Addressing this gap will allow for informed advancements in the field. METHODS: Using PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) criteria, we searched Ovid MEDLINE, Embase, Web of Science, and Google Scholar for papers published between January 2018 and August 2023 that discussed LLMs' use in EM. We excluded other forms of AI. A total of 1994 unique titles and abstracts were screened, and each full-text paper was independently reviewed by 2 authors. Data were abstracted independently, and 5 authors performed a collaborative quantitative and qualitative synthesis of the data. RESULTS: A total of 43 papers were included. Studies were predominantly from 2022 to 2023 and conducted in the United States and China. We uncovered four major themes: (1) clinical decision-making and support was highlighted as a pivotal area, with LLMs playing a substantial role in enhancing patient care, notably through their application in real-time triage, allowing early recognition of patient urgency; (2) efficiency, workflow, and information management demonstrated the capacity of LLMs to significantly boost operational efficiency, particularly through the automation of patient record synthesis, which could reduce administrative burden and enhance patient-centric care; (3) risks, ethics, and transparency were identified as areas of concern, especially regarding the reliability of LLMs' outputs, and specific studies highlighted the challenges of ensuring unbiased decision-making amidst potentially flawed training data sets, stressing the importance of thorough validation and ethical oversight; and (4) education and communication possibilities included LLMs' capacity to enrich medical training, such as through using simulated patient interactions that enhance communication skills. CONCLUSIONS: LLMs have the potential to fundamentally transform EM, enhancing clinical decision-making, optimizing workflows, and improving patient outcomes. This review sets the stage for future advancements by identifying key research areas: prospective validation of LLM applications, establishing standards for responsible use, understanding provider and patient perceptions, and improving physicians' AI literacy. Effective integration of LLMs into EM will require collaborative efforts and thorough evaluation to ensure these technologies can be safely and effectively applied.
Preiksaitis C; Ashenburg N; Bunney G; Chu A; Kabeer R; Riley F; Ribeira R; Rose C
10
39352738
"Doctor ChatGPT, Can You Help Me?" The Patient's Perspective: Cross-Sectional Study.
2,024
Journal of medical Internet research
BACKGROUND: Artificial intelligence and the language models derived from it, such as ChatGPT, offer immense possibilities, particularly in the field of medicine. It is already evident that ChatGPT can provide adequate and, in some cases, expert-level responses to health-related queries and advice for patients. However, it is currently unknown how patients perceive these capabilities, whether they can derive benefit from them, and whether potential risks, such as harmful suggestions, are detected by patients. OBJECTIVE: This study aims to clarify whether patients can get useful and safe health care advice from an artificial intelligence chatbot assistant. METHODS: This cross-sectional study was conducted using 100 publicly available health-related questions from 5 medical specialties (trauma, general surgery, otolaryngology, pediatrics, and internal medicine) from a web-based platform for patients. Responses generated by ChatGPT-4.0 and by an expert panel (EP) of experienced physicians from the aforementioned web-based platform were packed into 10 sets consisting of 10 questions each. The blinded evaluation was carried out by patients regarding empathy and usefulness (assessed through the question: "Would this answer have helped you?") on a scale from 1 to 5. As a control, evaluation was also performed by 3 physicians in each respective medical specialty, who were additionally asked about the potential harm of the response and its correctness. RESULTS: In total, 200 sets of questions were submitted by 64 patients (mean 45.7, SD 15.9 years; 29/64, 45.3% male), resulting in 2000 evaluated answers of ChatGPT and the EP each. ChatGPT scored higher in terms of empathy (4.18 vs 2.7; P<.001) and usefulness (4.04 vs 2.98; P<.001). Subanalysis revealed a small bias in terms of levels of empathy given by women in comparison with men (4.46 vs 4.14; P=.049). Ratings of ChatGPT were high regardless of the participant's age. The same highly significant results were observed in the evaluation of the respective specialist physicians. ChatGPT outperformed significantly in correctness (4.51 vs 3.55; P<.001). Specialists rated the usefulness (3.93 vs 4.59) and correctness (4.62 vs 3.84) significantly lower in potentially harmful responses from ChatGPT (P<.001). This was not the case among patients. CONCLUSIONS: The results indicate that ChatGPT is capable of supporting patients in health-related queries better than physicians, at least in terms of written advice through a web-based platform. In this study, ChatGPT's responses had a lower percentage of potentially harmful advice than the web-based EP. However, it is crucial to note that this finding is based on a specific study design and may not generalize to all health care settings. Alarmingly, patients are not able to independently recognize these potential dangers.
Armbruster J; Bussmann F; Rothhaas C; Titze N; Grutzner PA; Freischmidt H
43
39814698
Performance of Artificial Intelligence Chatbots on Ultrasound Examinations: Cross-Sectional Comparative Analysis.
2,025
JMIR medical informatics
BACKGROUND: Artificial intelligence chatbots are being increasingly used for medical inquiries, particularly in the field of ultrasound medicine. However, their performance varies and is influenced by factors such as language, question type, and topic. OBJECTIVE: This study aimed to evaluate the performance of ChatGPT and ERNIE Bot in answering ultrasound-related medical examination questions, providing insights for users and developers. METHODS: We curated 554 questions from ultrasound medicine examinations, covering various question types and topics. The questions were posed in both English and Chinese. Objective questions were scored based on accuracy rates, whereas subjective questions were rated by 5 experienced doctors using a Likert scale. The data were analyzed in Excel. RESULTS: Of the 554 questions included in this study, single-choice questions comprised the largest share (354/554, 64%), followed by short answers (69/554, 12%) and noun explanations (63/554, 11%). The accuracy rates for objective questions ranged from 8.33% to 80%, with true or false questions scoring highest. Subjective questions received acceptability rates ranging from 47.62% to 75.36%. ERNIE Bot was superior to ChatGPT in many aspects (P<.05). Both models showed a performance decline in English, but ERNIE Bot's decline was less significant. The models performed better in terms of basic knowledge, ultrasound methods, and diseases than in terms of ultrasound signs and diagnosis. CONCLUSIONS: Chatbots can provide valuable ultrasound-related answers, but performance differs by model and is influenced by language, question type, and topic. In general, ERNIE Bot outperforms ChatGPT. Users and developers should understand model performance characteristics and select appropriate models for different questions and languages to optimize chatbot use.
Zhang Y; Lu X; Luo Y; Zhu Y; Ling W
21
40387311
The Performance of AI in Dermatology Exams: The Exam Success and Limits of ChatGPT.
2,025
Journal of cosmetic dermatology
BACKGROUND: Artificial intelligence holds significant potential in dermatology. OBJECTIVES: This study aimed to explore the potential and limitations of artificial intelligence applications in dermatology education by evaluating ChatGPT's performance on questions from the dermatology residency exam. METHOD: In this study, the dermatology residency exam results for ChatGPT versions 3.5 and 4.0 were compared with those of resident doctors across various seniority levels. Dermatology resident doctors were categorized into four seniority levels based on their education, and a total of 100 questions-25 multiple-choice questions for each seniority level-were included in the exam. The same questions were also administered to ChatGPT versions 3.5 and 4.0, and the scores were analyzed statistically. RESULTS: ChatGPT 3.5 performed poorly, especially when compared to senior residents. Second (p = 0.038), third (p = 0.041), and fourth-year senior resident physicians (p = 0.020) scored significantly higher than ChatGPT 3.5. ChatGPT 4.0 showed similar performance compared to first- and third-year senior resident physicians, but performed worse in comparison to second (p = 0.037) and fourth-year senior resident physicians (p = 0.029). Both versions scored lower as seniority and exam difficulty increased. ChatGPT 3.5 passed the first and second-year exams but failed the third and fourth-year exams. ChatGPT 4.0 passed the first, second, and third-year exams but failed the fourth-year exam. These findings suggest that ChatGPT was not on par with senior resident physicians, particularly on topics requiring advanced knowledge; however, version 4.0 proved to be more effective than version 3.5. CONCLUSION: In the future, as ChatGPT's language support and knowledge of medicine improve, it can be used more effectively in educational processes.
Gocer Gurok N; Ozturk S
21
40393042
Comparison of ChatGPT and Internet Research for Clinical Research and Decision-Making in Occupational Medicine: Randomized Controlled Trial.
2,025
JMIR formative research
BACKGROUND: Artificial intelligence is becoming a part of daily life and the medical field. Generative artificial intelligence models, such as GPT-4 and ChatGPT, are experiencing a surge in popularity due to their enhanced performance and reliability. However, the application of these models in specialized domains, such as occupational medicine, remains largely unexplored. OBJECTIVE: This study aims to assess the potential suitability of a generative large language model, such as ChatGPT, as a support tool for medical research and even clinical decisions in occupational medicine in Germany. METHODS: In this randomized controlled study, the usability of ChatGPT for medical research and clinical decision-making was investigated using a web application developed for this purpose. Eligibility criteria were being a physician or medical student. Participants (N=56) were asked to work on 3 cases of occupational lung diseases and answer case-related questions. They were allocated into 2 groups via a coin toss weighted for the proportion of physicians in each group. One group researched the cases using an integrated chat application similar to ChatGPT based on the latest GPT-4-Turbo model, while the other used their usual research methods, such as Google, Amboss, or DocCheck. The primary outcome was case performance based on correct answers, while secondary outcomes included changes in specific question accuracy and self-assessed occupational medicine expertise before and after case processing. Group assignment was not traditionally blinded, as the chat window indicated membership; participants only knew the study examined web-based research, not group specifics. RESULTS: Participants in the ChatGPT group (n=27) showed better performance in specific research, for example, for potentially hazardous substances or activities (eg, case 1: 2.5 hazardous substances that cause pleural changes identified in the ChatGPT group versus 1.8 in the own-research group; P=.01; Cohen r=-0.38), and their self-assessed specialist knowledge improved (from 3.9 to 3.4 in the ChatGPT group vs from 3.5 to 3.4 in the own-research group; German school grades range from 1=very good to 6=unsatisfactory; P=.047). However, clinical decisions, for example, whether an occupational disease report should be filed, were more often made correctly as a result of the participants' own research (n=29; eg, case 1: Should an occupational disease report be filed? Yes for 7 participants in the ChatGPT group vs 14 in the own-research group; P=.007; odds ratio 6.00, 95% CI 1.54-23.36). CONCLUSIONS: ChatGPT can be a useful tool for targeted medical research, even for rather specific questions in occupational medicine regarding occupational diseases. However, clinical decisions should currently only be supported, not made, by the large language model. Future systems should be critically assessed, even if the initial results are promising.
Weuthen FA; Otte N; Krabbe H; Kraus T; Krabbe J
0-1
38546736
Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study.
2,024
JMIR medical education
BACKGROUND: Artificial intelligence models can learn from medical literature and clinical cases and generate answers that rival human experts. However, challenges remain in the analysis of complex data containing images and diagrams. OBJECTIVE: This study aims to assess the answering capabilities and accuracy of ChatGPT-4 Vision (GPT-4V) for a set of 100 questions, including image-based questions, from the 2023 otolaryngology board certification examination. METHODS: Answers to 100 questions from the 2023 otolaryngology board certification examination, including image-based questions, were generated using GPT-4V. The accuracy rate was evaluated using different prompts, and the presence of images, clinical area of the questions, and variations in the answer content were examined. RESULTS: The accuracy rate for text-only input was, on average, 24.7% but improved to 47.3% with the addition of English translation and prompts (P<.001). The average nonresponse rate for text-only input was 46.3%; this decreased to 2.7% with the addition of English translation and prompts (P<.001). The accuracy rate was lower for image-based questions than for text-only questions across all types of input, with a relatively high nonresponse rate. General questions and questions from the fields of head and neck allergies and nasal allergies had relatively high accuracy rates, which increased with the addition of translation and prompts. In terms of content, questions related to anatomy had the highest accuracy rate. For all content types, the addition of translation and prompts increased the accuracy rate. For image-based questions specifically, the average correct answer rate with text-only input was 30.4%, and that with text-plus-image input was 41.3% (P=.02). CONCLUSIONS: Examination of artificial intelligence's answering capabilities for the otolaryngology board certification examination improves our understanding of its potential and limitations in this field. Although improvement was noted with the addition of translation and prompts, the accuracy rate for image-based questions was lower than that for text-based questions, suggesting room for improvement in GPT-4V at this stage. Furthermore, text-plus-image input achieved a higher rate of correct answers on image-based questions. Our findings imply the usefulness and potential of GPT-4V in medicine; however, future consideration of safe use methods is needed.
Noda M; Ueno T; Koshu R; Takaso Y; Shimada MD; Saito C; Sugimoto H; Fushiki H; Ito M; Nomura A; Yoshizaki T
0-1
38935937
Assessing Generative Pretrained Transformers (GPT) in Clinical Decision-Making: Comparative Analysis of GPT-3.5 and GPT-4.
2,024
Journal of medical Internet research
BACKGROUND: Artificial intelligence, particularly chatbot systems, is becoming an instrumental tool in health care, aiding clinical decision-making and patient engagement. OBJECTIVE: This study aims to analyze the performance of ChatGPT-3.5 and ChatGPT-4 in addressing complex clinical and ethical dilemmas, and to illustrate their potential role in health care decision-making while comparing seniors' and residents' ratings, and specific question types. METHODS: A total of 4 specialized physicians formulated 176 real-world clinical questions. A total of 8 senior physicians and residents assessed responses from GPT-3.5 and GPT-4 on a 1-5 scale across 5 categories: accuracy, relevance, clarity, utility, and comprehensiveness. Evaluations were conducted within internal medicine, emergency medicine, and ethics. Comparisons were made globally, between seniors and residents, and across classifications. RESULTS: Both GPT models received high mean scores (4.4, SD 0.8 for GPT-4 and 4.1, SD 1.0 for GPT-3.5). GPT-4 outperformed GPT-3.5 across all rating dimensions, with seniors consistently rating responses higher than residents for both models. Specifically, seniors rated GPT-4 as more beneficial and complete (mean 4.6 vs 4.0 and 4.6 vs 4.1, respectively; P<.001), and GPT-3.5 similarly (mean 4.1 vs 3.7 and 3.9 vs 3.5, respectively; P<.001). Ethical queries received the highest ratings for both models, with mean scores reflecting consistency across accuracy and completeness criteria. Distinctions among question types were significant, particularly for the GPT-4 mean scores in completeness across emergency, internal, and ethical questions (4.2, SD 1.0; 4.3, SD 0.8; and 4.5, SD 0.7, respectively; P<.001), and for GPT-3.5's accuracy, beneficial, and completeness dimensions. CONCLUSIONS: ChatGPT's potential to assist physicians with medical issues is promising, with prospects to enhance diagnostics, treatments, and ethics. While integration into clinical workflows may be valuable, it must complement, not replace, human expertise. Continued research is essential to ensure safe and effective implementation in clinical environments.
Lahat A; Sharif K; Zoabi N; Shneor Patt Y; Sharif Y; Fisher L; Shani U; Arow M; Levin R; Klang E
43
39147878
A pilot feasibility study comparing large language models in extracting key information from ICU patient text records from an Irish population.
2,024
Intensive care medicine experimental
BACKGROUND: Artificial intelligence, through improved data management and automated summarisation, has the potential to enhance intensive care unit (ICU) care. Large language models (LLMs) can interrogate and summarise large volumes of medical notes to create succinct discharge summaries. In this study, we aim to investigate the potential of LLMs to accurately and concisely synthesise ICU discharge summaries. METHODS: Anonymised clinical notes from ICU admissions were used to train and validate a prompting structure in three separate LLMs (ChatGPT, GPT-4 API and Llama 2) to generate concise clinical summaries. Summaries were adjudicated by staff intensivists on ability to identify and appropriately order a pre-defined list of important clinical events as well as readability, organisation, succinctness, and overall rank. RESULTS: In the development phase, text from five ICU episodes was used to develop a series of prompts to best capture clinical summaries. In the testing phase, a summary produced by each LLM from an additional six ICU episodes was utilised for evaluation. Overall ability to identify a pre-defined list of important clinical events in the summary was 41.5 +/- 15.2% for GPT-4 API, 19.2 +/- 20.9% for ChatGPT and 16.5 +/- 14.1% for Llama2 (p = 0.002). GPT-4 API followed by ChatGPT had the highest score to appropriately order a pre-defined list of important clinical events in the summary as well as readability, organisation, succinctness, and overall rank, whilst Llama2 scored lowest for all. GPT-4 API produced minor hallucinations, which were not present in the other models. CONCLUSION: Differences exist in large language model performance in readability, organisation, succinctness, and sequencing of clinical events compared to others. All encountered issues with narrative coherence and omitted key clinical data and only moderately captured all clinically meaningful data in the correct order. However, these technologies suggest future potential for creating succinct discharge summaries.
Urquhart E; Ryan J; Hartigan S; Nita C; Hanley C; Moran P; Bates J; Jooste R; Judge C; Laffey JG; Madden MG; McNicholas BA
10
38520394
Accuracy of Information given by ChatGPT for Patients with Inflammatory Bowel Disease in Relation to ECCO Guidelines.
2,024
Journal of Crohn's & colitis
BACKGROUND: As acceptance of artificial intelligence [AI] platforms increases, more patients will consider these tools as sources of information. The ChatGPT architecture utilizes a neural network to process natural language, thus generating responses based on the context of input text. The accuracy and completeness of ChatGPT3.5 in the context of inflammatory bowel disease [IBD] remains unclear. METHODS: In this prospective study, 38 questions worded by IBD patients were inputted into ChatGPT3.5. The following topics were covered: [1] Crohn's disease [CD], ulcerative colitis [UC], and malignancy; [2] maternal medicine; [3] infection and vaccination; and [4] complementary medicine. Responses given by ChatGPT were assessed for accuracy [1-completely incorrect to 5-completely correct] and completeness [3-point Likert scale; range 1-incomplete to 3-complete] by 14 expert gastroenterologists, in comparison with relevant ECCO guidelines. RESULTS: In terms of accuracy, most replies [84.2%] had a median score of >/=4 (interquartile range [IQR]: 2) and a mean score of 3.87 [SD: +/-0.6]. For completeness, 34.2% of the replies had a median score of 3 and 55.3% had a median score of between 2 and <3. Overall, the mean rating was 2.24 [SD: +/-0.4, median: 2, IQR: 1]. Though groups 3 and 4 had a higher mean for both accuracy and completeness, there was no significant scoring variation between the four question groups [Kruskal-Wallis test p > 0.05]. However, statistical analysis for the different individual questions revealed a significant difference for both accuracy [p < 0.001] and completeness [p < 0.001]. The questions which rated the highest for both accuracy and completeness were related to smoking, while the lowest rating was related to screening for malignancy and vaccinations especially in the context of immunosuppression and family planning. CONCLUSION: This is the first study to demonstrate the capability of an AI-based system to provide accurate and comprehensive answers to real-world patient queries in IBD. AI systems may serve as a useful adjunct for patients, in addition to standard of care in clinics and validated patient information resources. However, responses in specialist areas may deviate from evidence-based guidance and the replies need to give more firm advice.
Sciberras M; Farrugia Y; Gordon H; Furfaro F; Allocca M; Torres J; Arebi N; Fiorino G; Iacucci M; Verstockt B; Magro F; Katsanos K; Busuttil J; De Giovanni K; Fenech VA; Chetcuti Zammit S; Ellul P
0-1
38219629
Artificial intelligence and ChatGPT: An otolaryngology patient's ally or foe?
2,024
American journal of otolaryngology
BACKGROUND: As artificial intelligence (AI) is integrating into the healthcare sphere, there is a need to evaluate its effectiveness in the various subspecialties of medicine, including otolaryngology. Our study intends to provide a cursory review of ChatGPT's diagnostic capability, ability to convey pathophysiology in simple terms, accuracy in providing management recommendations, and appropriateness in follow up and post-operative recommendations in common otolaryngologic conditions. METHODS: Adenotonsillectomy (T&A), tympanoplasty (TP), endoscopic sinus surgery (ESS), parotidectomy (PT), and total laryngectomy (TL) were substituted for the word procedure in the following five questions and input into ChatGPT version 3.5: "How do I know if I need (procedure)," "What are treatment alternatives to (procedure)," "What are the risks of (procedure)," "How is a (procedure) performed," and "What is the recovery process for (procedure)?" Two independent study members analyzed the output and discrepancies were reviewed, discussed, and reconciled between study members. RESULTS: In terms of management recommendations, ChatGPT was able to give generalized statements of evaluation, need for intervention, and the basics of the procedure without major aberrant errors or risks of safety. ChatGPT was successful in providing appropriate treatment alternatives in all procedures tested. When queried for methodology, risks, and procedural steps, ChatGPT lacked precision in the description of procedural steps, missed key surgical details, and did not accurately provide all major risks of each procedure. In terms of the recovery process, ChatGPT showed promise in T&A, TP, ESS, and PT but struggled in the complexity of TL, stating the patient could speak immediately after surgery without speech therapy. CONCLUSIONS: ChatGPT accurately demonstrated the need for intervention, management recommendations, and treatment alternatives in common ENT procedures. However, ChatGPT was not able to replace an otolaryngologist's clinical reasoning necessary to discuss procedural methodology, risks, and the recovery process in complex procedures. As AI becomes further integrated into healthcare, there is a need to continue to explore its indications, evaluate its limits, and refine its use to the otolaryngologist's advantage.
Langlie J; Kamrava B; Pasick LJ; Mei C; Hoffer ME
32
37307503
ChatGPT in medical school: how successful is AI in progress testing?
2,023
Medical education online
BACKGROUND: As generative artificial intelligence (AI), ChatGPT provides easy access to a wide range of information, including factual knowledge in the field of medicine. Given that knowledge acquisition is a basic determinant of physicians' performance, teaching and testing different levels of medical knowledge is a central task of medical schools. To measure the factual knowledge level of the ChatGPT responses, we compared the performance of ChatGPT with that of medical students in a progress test. METHODS: A total of 400 multiple-choice questions (MCQs) from the progress test in German-speaking countries were entered into ChatGPT's user interface to obtain the percentage of correctly answered questions. We calculated the correlations of the correctness of ChatGPT responses with behavior in terms of response time, word count, and difficulty of a progress test question. RESULTS: Of the 395 responses evaluated, 65.5% of the progress test questions answered by ChatGPT were correct. On average, ChatGPT required 22.8 s (SD 17.5) for a complete response, containing 36.2 (SD 28.1) words. There was no correlation between the time used and word count with the accuracy of the ChatGPT response (correlation coefficient for time rho = -0.08, 95% CI [-0.18, 0.02], t(393) = -1.55, p = 0.121; for word count rho = -0.03, 95% CI [-0.13, 0.07], t(393) = -0.54, p = 0.592). There was a significant correlation between the difficulty index of the MCQs and the accuracy of the ChatGPT response (correlation coefficient for difficulty: rho = 0.16, 95% CI [0.06, 0.25], t(393) = 3.19, p = 0.002). CONCLUSION: ChatGPT was able to correctly answer two-thirds of all MCQs at the German state licensing exam level in Progress Test Medicine and outperformed almost all medical students in years 1-3. The ChatGPT answers can be compared with the performance of medical students in the second half of their studies.
Friederichs H; Friederichs WJ; Marz M
21
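A minimal sketch of the item-level correlation analysis described in the progress-test record above: Spearman's rho between answer correctness and response time, word count, and item difficulty. All values below are synthetic placeholders, not the study's data; numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 395                                               # number of evaluated responses
correct = rng.integers(0, 2, n)                       # 1 = MCQ answered correctly (synthetic)
response_time_s = rng.normal(22.8, 17.5, n).clip(1)   # seconds per response (synthetic)
word_count = rng.normal(36.2, 28.1, n).clip(1)        # words per response (synthetic)
difficulty_index = rng.uniform(0.2, 0.95, n)          # share of students answering the item correctly (synthetic)

for name, x in [("response time", response_time_s),
                ("word count", word_count),
                ("difficulty index", difficulty_index)]:
    rho, p = spearmanr(x, correct)
    print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")
```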
40163112
Online Health Information-Seeking in the Era of Large Language Models: Cross-Sectional Web-Based Survey Study.
2,025
Journal of medical Internet research
BACKGROUND: As large language model (LLM)-based chatbots such as ChatGPT (OpenAI) grow in popularity, it is essential to understand their role in delivering online health information compared to other resources. These chatbots often generate inaccurate content, posing potential safety risks. This motivates the need to examine how users perceive and act on health information provided by LLM-based chatbots. OBJECTIVE: This study investigates the patterns, perceptions, and actions of users seeking health information online, including LLM-based chatbots. The relationships between online health information-seeking behaviors and important sociodemographic characteristics are examined as well. METHODS: A web-based survey of crowd workers was conducted via Prolific. The questionnaire covered sociodemographic information, trust in health care providers, eHealth literacy, artificial intelligence (AI) attitudes, chronic health condition status, online health information source types, perceptions, and actions, such as cross-checking or adherence. Quantitative and qualitative analyses were applied. RESULTS: Most participants consulted search engines (291/297, 98%) and health-related websites (203/297, 68.4%) for their health information, while 21.2% (63/297) used LLM-based chatbots, with ChatGPT and Microsoft Copilot being the most popular. Most participants (268/297, 90.2%) sought information on health conditions, with fewer seeking advice on medication (179/297, 60.3%), treatments (137/297, 46.1%), and self-diagnosis (62/297, 23.2%). Perceived information quality and trust varied little across source types. The preferred source for validating information from the internet was consulting health care professionals (40/132, 30.3%), while only a very small percentage of participants (5/214, 2.3%) consulted AI tools to cross-check information from search engines and health-related websites. For information obtained from LLM-based chatbots, 19.4% (12/63) of participants cross-checked the information, while 48.4% (30/63) of participants followed the advice. Both of these rates were lower than information from search engines, health-related websites, forums, or social media. Furthermore, use of LLM-based chatbots for health information was negatively correlated with age (rho=-0.16, P=.006). In contrast, attitudes surrounding AI for medicine had significant positive correlations with the number of source types consulted for health advice (rho=0.14, P=.01), use of LLM-based chatbots for health information (rho=0.31, P<.001), and number of health topics searched (rho=0.19, P<.001). CONCLUSIONS: Although traditional online sources remain dominant, LLM-based chatbots are emerging as a resource for health information for some users, specifically those who are younger and have a higher trust in AI. The perceived quality and trustworthiness of health information varied little across source types. However, the adherence to health information from LLM-based chatbots seemed more cautious compared to search engines or health-related websites. As LLMs continue to evolve, enhancing their accuracy and transparency will be essential in mitigating any potential risks by supporting responsible information-seeking while maximizing the potential of AI in health contexts.
Yun HS; Bickmore T
43
37458761
Evaluating ChatGPT as an adjunct for the multidisciplinary tumor board decision-making in primary breast cancer cases.
2,023
Archives of gynecology and obstetrics
BACKGROUND: As the available information about breast cancer is growing every day, the decision-making process for therapy is getting more complex. ChatGPT, as a transformer-based language model, possesses the ability to write scientific articles and pass medical exams. But is it able to support the multidisciplinary tumor board (MDT) in planning the therapy of patients with breast cancer? MATERIAL AND METHODS: We performed a pilot study on 10 consecutive cases of breast cancer patients discussed in the MDT at our department in January 2023. Included were patients with a primary diagnosis of early breast cancer. The recommendation of the MDT was compared with the recommendation of ChatGPT for the particular patients, and a clinical score of the agreement was calculated. RESULTS: ChatGPT provided mostly general answers regarding chemotherapy, breast surgery, radiation therapy, and antibody therapy. It was able to identify risk factors for hereditary breast cancer and to point out that, for the elderly patient indicated for chemotherapy, the cost/benefit effect should be evaluated. ChatGPT wrongly identified the patient with HER2 1+ and 2+ (FISH negative) as in need of antibody therapy and called endocrine therapy "hormonal treatment". CONCLUSIONS: As the amount of available information expands rapidly, clinical routine is searching for ways in which artificial intelligence can support the search for individualized and personalized therapy for our patients. ChatGPT has the potential to find its spot in clinical medicine, but the current version is not able to provide specific recommendations for the therapy of patients with primary breast cancer.
Lukac S; Dayan D; Fink V; Leinert E; Hartkopf A; Veselinovic K; Janni W; Rack B; Pfister K; Heitmeir B; Ebner F
0-1
40257390
Artificial intelligence in asthma health literacy: a comparative analysis of ChatGPT versus Gemini.
2,025
The Journal of asthma : official journal of the Association for the Care of Asthma
BACKGROUND: Asthma is a complex and heterogeneous chronic disease affecting over 300 million individuals worldwide. Despite advances in pharmacotherapy, poor disease control remains a major challenge, necessitating innovative approaches to patient education and self-management. Artificial intelligence driven chatbots, such as ChatGPT and Gemini, have the potential to enhance asthma care by providing real-time, evidence-based information. As asthma management moves toward personalized medicine, AI could support individualized education and treatment guidance. However, concerns remain regarding the accuracy and reliability of AI-generated medical content. OBJECTIVE: This study evaluated the accuracy of ChatGPT (version 4.0) and Gemini (version 1.2) in providing asthma-related health information using the Patient-completed Asthma Knowledge Questionnaire, a validated asthma literacy tool. METHODS: A cross-sectional study was conducted in which both AI models answered 54 standardized asthma-related items. Responses were classified as correct or incorrect based on alignment with validated clinical knowledge. Accuracy was assessed using descriptive statistics, Cohen's kappa for inter-model agreement, and chi-square tests for comparative performance. RESULTS: ChatGPT achieved an accuracy of 96.3% (52/54 correct; 95% CI: 87.5%-99.0%), while Gemini scored 92.6% (50/54 correct; 95% CI: 82.5%-97.1%), with no statistically significant difference (p = 0.67). Cohen's kappa demonstrated near-perfect agreement for ChatGPT (kappa = 0.91) and strong agreement for Gemini (kappa = 0.82). CONCLUSION: ChatGPT and Gemini demonstrated high accuracy in delivering asthma-related health information, supporting their potential as adjunct tools for patient education. AI models could potentially play a role in personalized asthma management by providing tailored treatment guidance and improving patient engagement.
Hoj S; Backer V; Ulrik CS; Sigsgaard T; Meteran H
43
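A minimal sketch of the accuracy comparison reported in the asthma-literacy record above: per-model accuracy on 54 items, a chi-square test on the 2x2 correct/incorrect table, and Cohen's kappa. Note the published abstract reports a kappa per model (possibly against repeated runs or a reference key); the sketch simply computes item-level agreement between the two models as one plausible reading, on invented data, with scipy and scikit-learn assumed available.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
n_items = 54
chatgpt_correct = rng.integers(0, 2, n_items)   # 1 = item answered correctly (synthetic)
gemini_correct = rng.integers(0, 2, n_items)

for name, v in [("ChatGPT", chatgpt_correct), ("Gemini", gemini_correct)]:
    print(f"{name} accuracy: {v.mean():.1%}")

# 2x2 table of correct/incorrect counts per model, compared with a chi-square test
table = np.array([[chatgpt_correct.sum(), n_items - chatgpt_correct.sum()],
                  [gemini_correct.sum(), n_items - gemini_correct.sum()]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p = {p:.2f}")

# Item-level agreement between the two models' outcomes
print(f"Cohen's kappa = {cohen_kappa_score(chatgpt_correct, gemini_correct):.2f}")
```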
40345326
Artificial Intelligence-generated answers to patients' questions on asthma: the AIR-Asthma study.
2,025
The journal of allergy and clinical immunology. In practice
BACKGROUND: Asthma is a prevalent chronic respiratory disease requiring ongoing patient education and individualized management. The increasing reliance on digital tools, particularly generative artificial intelligence (AI), to answer health-related questions has raised concerns about the accuracy, reliability, and comprehensibility of AI-generated information for people living with asthma. OBJECTIVE: To systematically evaluate reliability, accuracy, comprehensiveness, and understandability of responses generated by three widely used AI-based chatbots (ChatGPT, Bard, Copilot) to common questions formulated by people with asthma. METHODS: In this cross-sectional study, 15 questions regarding asthma management were formulated by patients and categorized by difficulty. Responses from ChatGPT, Bard, and Copilot were evaluated by international experts for accuracy and comprehensiveness, and by patient representatives for understandability. Reliability was assessed through consistency testing across devices. A blinded evaluation was conducted. RESULTS: A total of 21 experts and 16 patient representatives participated in the evaluation. ChatGPT demonstrated the highest reliability (15/15 responses), accuracy (median score 9.0 [IQR 7.0-9.0]), and comprehensiveness (8.0 [8.0-9.0]) compared to Bard and Copilot (P < 0.0001). Bard achieved superior scores in understandability (median score 9.0 [8.0-10.0]) (P < 0.0001). Performance differences were consistent across question difficulty levels. CONCLUSION: AI-driven chatbots can provide generally accurate and understandable responses to asthma-related questions. Variability in reliability and accuracy underscores the need for caution in clinical contexts. AI tools may complement but cannot replace professional medical advice in asthma management.
Nigro M; Aliverti A; Angelucci A; Braido F; Canonica GW; Bossios A; Pinnock H; Boyd J; Powell P; Aliberti S
43
36819954
ChatGPT Output Regarding Compulsory Vaccination and COVID-19 Vaccine Conspiracy: A Descriptive Study at the Outset of a Paradigm Shift in Online Search for Information.
2,023
Cureus
BACKGROUND: Being on the verge of a revolutionary approach to gathering information, ChatGPT (an artificial intelligence (AI)-based language model developed by OpenAI, and capable of producing human-like text) could be the prime driver of a paradigm shift in how humans will acquire information. Despite the concerns related to the use of such a promising tool in relation to the future of the quality of education, this technology will soon be incorporated into web search engines mandating the need to evaluate the output of such a tool. Previous studies showed that dependence on some sources of online information (e.g., social media platforms) was associated with higher rates of vaccination hesitancy. Therefore, the aim of the current study was to describe the output of ChatGPT regarding coronavirus disease 2019 (COVID-19) vaccine conspiracy beliefs and compulsory vaccination. METHODS: The current descriptive study was conducted on January 14, 2023 using ChatGPT from OpenAI (OpenAI, L.L.C., San Francisco, CA, USA). The output was evaluated by two authors and the degree of agreement regarding the correctness, clarity, conciseness, and bias was evaluated using Cohen's kappa. RESULTS: The ChatGPT responses were dismissive of conspiratorial ideas about severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) origins, labeling them as non-credible and lacking scientific evidence. Additionally, ChatGPT responses were totally against COVID-19 vaccine conspiracy statements. Regarding compulsory vaccination, ChatGPT responses were neutral citing the following as advantages of this strategy: protecting public health, maintaining herd immunity, reducing the spread of disease, cost-effectiveness, and legal obligation, and on the other hand, it cited the following as disadvantages of compulsory vaccination: ethical and legal concerns, mistrust and resistance, logistical challenges, and limited resources and knowledge. CONCLUSIONS: The current study showed that ChatGPT could be a source of information to challenge COVID-19 vaccine conspiracies. For compulsory vaccination, ChatGPT resonated with the divided opinion in the scientific community toward such a strategy; nevertheless, it detailed the pros and cons of this approach. As it currently stands, ChatGPT could be judiciously used as a user-friendly source of COVID-19 vaccine information that could challenge conspiracy ideas with clear, concise, and non-biased content. However, ChatGPT content cannot be used as an alternative to the original reliable sources of vaccine information (e.g., the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC]).
Sallam M; Salim NA; Al-Tammemi AB; Barakat M; Fayyad D; Hallit S; Harapan H; Hallit R; Mahafzah A
43
39265225
The practical use of artificial intelligence in Transfusion Medicine and Apheresis.
2,024
Transfusion and apheresis science : official journal of the World Apheresis Association : official journal of the European Society for Haemapheresis
BACKGROUND: Blood and plasma volume calculations are a daily part of practice for many Transfusion Medicine and Apheresis practitioners. Though many formulas exist, each facility may have their own modifications to consider. ChatGPT (Generative Pre-trained Transformer) provides a new and exciting pathway for those with no programming experience to create personalized programs to meet the demands of daily practice. Additionally, this pathway creates computer programs that provide accurate and reproducible outputs. Herein, we aimed to create a step-by-step process for clinicians to create customized computer programs for use in everyday practice. METHODS: We created a process of inputs to ChatGPT-4(0), which generated computer programming code. This code was copied and pasted into Notepad (and saved as a Python file) and Google Colaboratory to verify functionality. We validated the durability of our process by repeating it over a 5-day timeframe and by recruiting volunteers to reproduce our outputs using the suggested process. RESULTS: Computer code generated by ChatGPT-4(0) in response to our common language inputs was accurate and durable over time. The code was fully functional in both Python and Colaboratory. Volunteers reproduced our process and outputs with minimal assistance. CONCLUSION: We analyzed the practical application of ChatGPT-4(0) and artificial intelligence (AI) to perform daily calculations encountered in Transfusion Medicine. Our results provide a proof of concept that people with no programming experience can create customizable solutions for their own facilities. Our future work will expand to the creation of comprehensive and customizable websites designed for each individual user.
Anstey C; Ullman D; Su L; Su C; Siniard C; Simmons S; Edberg J; Williams LA 3rd
10
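An illustrative sketch of the kind of blood- and plasma-volume calculator the transfusion-medicine record above describes prompting ChatGPT to generate. The study's facility-specific formulas and generated code are not given in the abstract, so the version below uses Nadler's formula and should be read as an assumption-laden example, not the authors' program.

```python
def total_blood_volume_l(height_m: float, weight_kg: float, sex: str) -> float:
    """Estimate total blood volume (litres) with Nadler's formula; sex is 'M' or 'F'."""
    if sex.upper() == "M":
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833


def plasma_volume_l(tbv_l: float, hematocrit: float) -> float:
    """Plasma volume as the non-cellular fraction of total blood volume."""
    return tbv_l * (1.0 - hematocrit)


if __name__ == "__main__":
    tbv = total_blood_volume_l(1.75, 80.0, "M")
    print(f"TBV ~ {tbv:.2f} L, plasma volume ~ {plasma_volume_l(tbv, 0.42):.2f} L")
```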
40252296
AI-Assisted Blood Gas Interpretation: A Comparative Study With an Emergency Physician.
2,025
The American journal of emergency medicine
BACKGROUND: Blood gas interpretation is critical in emergency settings. Large language models like ChatGPT are increasingly used in clinical contexts, but their accuracy in interpreting arterial blood gases (ABGs) requires further validation. OBJECTIVE: To evaluate ChatGPT's interpretive concordance with an emergency physician across 25 theoretical ABG scenarios. METHODS: ABG cases covering respiratory and metabolic emergencies (e.g., COPD, DKA, AKI, sepsis, poisoning) were analyzed by both ChatGPT and a specialist. Five interpretation criteria were used: pH, primary disorder, compensation, likely diagnosis, and clinical recommendation. RESULTS: Concordance was >/=90% in COPD, asthma, and pulmonary edema; 80-90% in DKA, AKI, and lactic acidosis; <70% in toxicologic and mixed acid-base cases. ChatGPT's recommendations were clinically safe even when diagnostic clarity was limited. CONCLUSION: ChatGPT shows high concordance with clinical interpretation in typical ABG cases but has limitations in complex or contextual diagnoses. These findings support its potential as a supportive tool in emergency medicine.
Gun M
10
38076046
ChatGPT-assisted deep learning for diagnosing bone metastasis in bone scans: Bridging the AI Gap for Clinicians.
2,023
Heliyon
BACKGROUND: Bone scans are often used to identify bone metastases, but their low specificity may necessitate further studies. Deep learning models may improve diagnostic accuracy but require both medical and programming expertise. Therefore, we investigated the feasibility of constructing a deep learning model employing ChatGPT for the diagnosis of bone metastasis in bone scans and to evaluate its diagnostic performance. METHOD: We examined 4626 consecutive cancer patients (age, 65.1 +/- 11.3 years; 2334 female) who had bone scans for metastasis assessment. A nuclear medicine physician developed a deep learning model using ChatGPT 3.5 (OpenAI). We employed ResNet50 as the backbone network and compared the diagnostic performance of four strategies (original training set, original training set with 1:10 class weight, 10-fold data augmentation for positive images only, and 10-fold data augmentation for all images) to address the class imbalance. We used a class activation map algorithm for visualization. RESULTS: Among the four strategies, the deep learning model with 10-fold data augmentation for positive cases only, using a batch size of 16 and an epoch size of 150, achieved the area under curve of 0.8156, the sensitivity of 56.0 %, and specificity of 88.7 %. The class activation map indicated that the model focused on disseminated bone metastases within the spine but might confuse them with benign spinal lesions or intense urinary activity. CONCLUSIONS: Our study illustrates that a clinical physician with rudimentary programming skills can develop a deep learning model for medical image analysis, such as diagnosing bone metastasis in bone scans using ChatGPT. Model visualization may offer guidance in enhancing deep learning model development, including preprocessing, and potentially support clinical decision-making processes.
Son HJ; Kim SJ; Pak S; Lee SH
0-1
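A minimal PyTorch sketch of two of the class-imbalance strategies compared in the bone-scan record above: a 1:10 class weight in the loss, and positive-only augmentation handled at dataset-building time. The backbone head, batch, and labels are placeholders; this is not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet50 backbone with a 2-class head (no metastasis vs bone metastasis)
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

# Strategy (a): weight the rare positive class 10x in the loss (the "1:10 class weight" arm)
criterion_weighted = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))

# Strategy (b): 10-fold augmentation of positive images only would instead be applied when
# building the training dataset (e.g., random flips/rotations over the positive subset).

x = torch.randn(16, 3, 224, 224)   # dummy batch of 16, echoing the batch size reported above
y = torch.randint(0, 2, (16,))     # dummy labels
loss = criterion_weighted(model(x), y)
print(float(loss))
```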
37903939
Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness.
2,024
Aesthetic plastic surgery
BACKGROUND: Breast implant-associated anaplastic large cell lymphoma (BIA-ALCL) is a rare complication associated with the use of breast implants. Breast implant illness (BII) is another potentially concerning issue related to breast implants. This study aims to assess the quality of ChatGPT as a potential source of patient education by comparing the answers to frequently asked questions on BIA-ALCL and BII provided by ChatGPT and Google. METHODS: The Google and ChatGPT answers to the 10 most frequently asked questions on the search terms "breast implant associated anaplastic large cell lymphoma" and "breast implant illness" were recorded. Five blinded breast plastic surgeons were then asked to grade the quality of the answers according to the Global Quality Score (GQS). A Wilcoxon paired t-test was performed to evaluate the difference in GQS ratings for Google and ChatGPT answers. The sources provided by Google and ChatGPT were also categorized and assessed. RESULTS: In a comparison of answers provided by Google and ChatGPT on BIA-ALCL and BII, ChatGPT significantly outperformed Google. For BIA-ALCL, Google's average score was 2.72 +/- 1.44, whereas ChatGPT scored an average of 4.18 +/- 1.04 (p < 0.01). For BII, Google's average score was 2.66 +/- 1.24, while ChatGPT scored an average of 4.28 +/- 0.97 (p < 0.01). The superiority of ChatGPT's responses was attributed to their comprehensive nature and recognition of existing knowledge gaps. However, some of ChatGPT's answers had inaccessible sources. CONCLUSION: ChatGPT outperforms Google in providing high-quality answers to commonly asked questions on BIA-ALCL and BII, highlighting the potential of AI technologies in patient education. LEVEL OF EVIDENCE III: Comparative study. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Liu HY; Alessandri Bonetti M; De Lorenzi F; Gimbel ML; Nguyen VT; Egro FM
32
37692649
The Use of AI in Diagnosing Diseases and Providing Management Plans: A Consultation on Cardiovascular Disorders With ChatGPT.
2,023
Cureus
BACKGROUND: Cardiovascular diseases (CVDs) have remained the leading causes of death worldwide and substantially contribute to loss of health and excess health system costs. According to WHO, cardiovascular diseases (CVDs) take an estimated 17.9 million lives each year. One of the reasons for the immensely high fatality of CVDs is the lack of efficient diagnosis and prompt treatment. Timely recognition and management are crucial to minimize mortality. As AI (artificial intelligence) and machine learning technologies continue to progress, this advancement has opened new avenues for innovative approaches in the field of medicine. Despite the rapid development in the field of AI, there is a limited understanding of the potential benefits among clinicians and medical practitioners. METHODS: In this study, we aimed to investigate the potential that the AI language model holds to assist health practitioners in the diagnosis and treatment of cardiovascular disorders. We asked Chat Generative Pre-trained Transformer (ChatGPT) 10 hypothetical questions simulating clinical consultation. The responses given by ChatGPT were assessed for accuracy and accessibility by a team of medical specialists and cardiologists with extensive experience in managing cardiovascular disorders. RESULTS: Out of the 10 clinical scenarios entered into ChatGPT, eight were perfectly diagnosed; however, the other two answers given by ChatGPT were not entirely incorrect since those conditions were associated with the actual diagnosis. Furthermore, the management plans and the treatment protocols that were given by ChatGPT were in line with the literature and current medical knowledge. The exact drug names and regimens were not provided, but a general guideline that was given by this AI tool is definitely beneficial for junior doctors in getting an idea of how to proceed or refreshing their previous knowledge. CONCLUSION: ChatGPT is a valuable resource in the field of medicine. Its comprehensive and properly organized response in an understandable language has made it an effective and efficient tool to be used. However, it is crucial to note that its limitations, such as the need for all associated and typical signs, symptoms, and physical examination findings, and its inability to personalize treatments, need to be acknowledged.
Rizwan A; Sadiq T
10
40396096
Discussion of the ability to use chatGPT to answer questions related to esophageal cancer of patient concern.
2,025
Journal of family medicine and primary care
BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT) is a language processing model based on artificial intelligence (AI). It covers a wide range of topics, including medicine, and can provide patients with knowledge about esophageal cancer. OBJECTIVE: Given the risk involved, this study aimed to assess ChatGPT's accuracy in answering patients' questions about esophageal cancer. METHODS: By referring to professional association websites, social software, and the authors' clinical experience, 55 questions of concern to Chinese patients and their families were generated; ChatGPT's answers were scored by two deputy chief physicians specializing in esophageal cancer. Answers were graded as (1) comprehensive/correct, (2) incomplete/partially correct, (3) partially accurate and partially inaccurate, or (4) completely inaccurate/irrelevant. Scoring differences were resolved by a third reviewer. RESULTS: Out of 55 questions, 24 (43.6%) of the answers provided by ChatGPT were complete and correct, 13 (23.6%) were correct but incomplete, 18 (32.7%) were partially wrong, and no answers were completely wrong. Comprehensive and correct answers were highest in the field of prevention (50%), while partially incorrect answers were highest in the field of treatment (77.8%). CONCLUSION: ChatGPT can accurately answer questions about the prevention and diagnosis of esophageal cancer, but it cannot accurately answer questions about its treatment and prognosis. Further investigation and refinement of this widely used large-scale language model are needed before it can be recommended to patients with esophageal cancer, and ongoing research is still needed to verify the safety and accuracy of these tools and their medical applications.
Yu F; Lei M; Wang S; Liu M; Fu X; Yu Y
0-1
38854916
Surveyed veterinary students in Australia find ChatGPT practical and relevant while expressing no concern about artificial intelligence replacing veterinarians.
2,024
Veterinary record open
BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT) is a freely available online artificial intelligence (AI) program capable of understanding and generating human-like language. This study assessed veterinary students' perceptions about ChatGPT in education and practice. It compared perceptions about ChatGPT between students who had completed a critical analysis task and those who had not. METHODS: This cross-sectional study surveyed 498 Doctor of Veterinary Medicine (DVM) students at The University of Sydney, Australia. Second-year DVM students researched a veterinary pathogen and then completed a critical analysis of ChatGPT (version 3.5) output for the same pathogen. A survey based on the Technology Acceptance Model was then delivered to all DVM students from all years of the programme, collecting data using Likert-style, categorical and free-text items. RESULTS: Over 75% of the 100 respondents reported having used ChatGPT. The students found ChatGPT's output relevant and practical for their use but perceived it as inaccurate. They perceived ChatGPT output to be more useful for veterinary students than for pet owners or veterinarians. Those who had completed the critical analysis assignment had a more positive view of ChatGPT's practicality for veterinary students but noted its authoritative tone even when delivering inaccurate information. Over 50% of the students agreed that information about tools such as ChatGPT should be included in the veterinary curriculum. Students agreed that veterinarians should embrace AI but disagreed that AI would eventually replace the need for veterinarians. CONCLUSIONS: A critical appraisal of outputs from AI tools such as ChatGPT may help prepare future veterinarians for the effective use of these tools.
Worthing KA; Roberts M; Slapeta J
10
38589561
Blepharoptosis Consultation with Artificial Intelligence: Aesthetic Surgery Advice and Counseling from Chat Generative Pre-Trained Transformer (ChatGPT).
2,024
Aesthetic plastic surgery
BACKGROUND: Chat generative pre-trained transformer (ChatGPT) is a publicly available extensive artificial intelligence (AI) language model that leverages deep learning to generate text that mimics human conversations. In this study, the performance of ChatGPT was assessed by offering insightful and precise answers to a series of fictional questions and emulating a preliminary consultation on blepharoplasty. METHODS: ChatGPT was posed with questions derived from a blepharoplasty checklist provided by the American Society of Plastic Surgeons. Board-certified plastic surgeons and non-medical staff members evaluated the responses for accuracy, informativeness, and accessibility. RESULTS: Nine questions were used in this study. Regarding informativeness, the average score given by board-certified plastic surgeons was significantly lower than that given by non-medical staff members (2.89 +/- 0.72 vs 4.41 +/- 0.71; p = 0.042). No statistically significant differences were observed in accuracy (p = 0.56) or accessibility (p = 0.11). CONCLUSIONS: Our results emphasize the effectiveness of ChatGPT in simulating doctor-patient conversations during blepharoplasty. Non-medical individuals found its responses more informative compared with the surgeons. Although limited in terms of specialized guidance, ChatGPT offers foundational surgical information. Further exploration is warranted to elucidate the broader role of AI in esthetic surgical consultations. LEVEL OF EVIDENCE V: Observational study under respected authorities. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Shiraishi M; Tanigawa K; Tomioka Y; Miyakuni A; Moriwaki Y; Yang R; Oba J; Okazaki M
32
39473997
Can ChatGPT Be Used as a Research Assistant and a Patient Consultant in Plastic Surgery? A Review of 3 Key Information Domains.
2,024
Eplasty
BACKGROUND: Chat Generative Pretrained Transformer (ChatGPT), a newly developed pretrained artificial intelligence (AI) chatbot, is able to interpret and respond to user-generated questions. As such, many questions have been raised about its potential uses and limitations. While preliminary literature suggests that ChatGPT can be used in medicine as a research assistant and patient consultant, its reliability in providing original and accurate information is still unknown. Therefore, the purpose of this project was to conduct a review on the utility of ChatGPT in plastic surgery. METHODS: On August 25, 2023, a thorough literature search was conducted on PubMed. Papers involving ChatGPT and medical research were included. Papers that were not written in English were excluded. Related papers were evaluated and synthesized into 3 information domains: generating original research topics, summarizing and extracting information from medical literature and databases, and conducting patient consultation. RESULTS: Out of 57 initial papers, 8 met inclusion criteria. An additional 2 were added based on the references of relevant papers, bringing the total number to 10. ChatGPT can be useful in helping clinicians brainstorm and gain a general understanding of the literature landscape. However, its inability to give patient-specific information and act as a reliable source of information limit its use in patient consultation. CONCLUSION: ChatGPT can be a useful tool in the conception of and execution of literature searches and research information retrieval (with increased reliability when queries are specific); however, the technology is currently not reliable enough to be implemented in a clinical setting.
Campolo JA; Kwon DY; Henderson PW
32
37294147
ChatGPT failed Taiwan's Family Medicine Board Exam.
2,023
Journal of the Chinese Medical Association : JCMA
BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT), OpenAI Limited Partnership, San Francisco, CA, USA is an artificial intelligence language model gaining popularity because of its large database and ability to interpret and respond to various queries. Although it has been tested by researchers in different fields, its performance varies depending on the domain. We aimed to further test its ability in the medical field. METHODS: We used questions from Taiwan's 2022 Family Medicine Board Exam, which combined both Chinese and English and covered various question types, including reverse questions and multiple-choice questions, and mainly focused on general medical knowledge. We pasted each question into ChatGPT and recorded its response, comparing it to the correct answer provided by the exam board. We used SAS 9.4 (Cary, North Carolina, USA) and Excel to calculate the accuracy rates for each question type. RESULTS: ChatGPT answered 52 questions out of 125 correctly, with an accuracy rate of 41.6%. The questions' length did not affect the accuracy rates. These were 45.5%, 33.3%, 58.3%, 50.0%, and 43.5% for negative-phrase questions, multiple-choice questions, mutually exclusive options, case scenario questions, and Taiwan's local policy-related questions, with no statistical difference observed. CONCLUSION: ChatGPT's accuracy rate was not good enough for Taiwan's Family Medicine Board Exam. Possible reasons include the difficulty level of the specialist exam and the relatively weak database of traditional Chinese language resources. However, ChatGPT performed acceptably in negative-phrase questions, mutually exclusive questions, and case scenario questions, and it can be a helpful tool for learning and exam preparation. Future research can explore ways to improve ChatGPT's accuracy rate for specialized exams and other domains.
Weng TL; Wang YM; Chang S; Chen TJ; Hwang SJ
21
37822477
Perception of Chat Generative Pre-trained Transformer (Chat-GPT) AI tool amongst MSK clinicians.
2,023
Journal of clinical orthopaedics and trauma
BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT), an open-access artificial intelligence (AI) tool, has been in the limelight for its ability to respond to prompts and analyse data using algorithms to augment efficiency in day-to-day activities across a spectrum of human endeavours, including MSK/orthopaedic science. PURPOSE OF THE STUDY: The purpose of this cross-sectional survey was to analyse knowledge and understanding of the role of Chat Generative Pre-trained Transformer (ChatGPT) and its implications in clinical practice as well as research in medicine. MATERIAL & METHODS: An online cross-sectional survey of 10 questions (multiple choice and free text) was circulated amongst orthopaedic surgeons, musculoskeletal radiologists, and rheumatologists in India and the UK to evaluate perceptions of the Chat Generative Pre-trained Transformer (ChatGPT) AI tool. RESULTS: We received 125 responses; the majority of respondents were aware of ChatGPT, though only a minority had used it. There was consensus that it is going to have a detrimental effect on the workforce, with the majority of the opinion that such tools would be used to create radiology reports. Mixed responses were noted regarding the quality of research and the role of ChatGPT as an anonymous author. CONCLUSION: There is considerable debate amongst clinicians in the orthopaedic, radiology, and rheumatology specialities. Attitudes are mixed but mainly positive, although there are many concerns about the still-evolving new technology. LEVEL OF STUDY: Diagnostic study, level 4.
Iyengar KP; Yousef MMA; Nune A; Sharma GK; Botchu R
32
39536965
How soon will surgeons become mere technicians? Chatbot performance in managing clinical scenarios.
2,024
The Journal of thoracic and cardiovascular surgery
BACKGROUND: Chatbot use has developed a presence in medicine and surgery and has been proposed to help guide clinical decision making. However, the accuracy of information provided by artificial intelligence (AI) platforms has been called into question. We evaluated the performance of 4 popular chatbots on a board-style examination and compared results with a group of board-certified thoracic surgeons. METHODS: Clinical scenarios were developed within domains based on the American Board of Thoracic Surgery (ABTS) Qualifying Exam. Each scenario included 3 stems written with the Key Feature methodology related to diagnosis, evaluation, and treatment. Ten scenarios were presented to ChatGPT-4, Bard (now Gemini), Perplexity, and Claude 2, as well as to randomly selected ABTS-certified surgeons. The maximum possible score was 3 points per scenario. Critical failures were identified during exam development; if they occurred in any of the 3 stems the entire question received a score of 0. The Mann-Whitney U test was used to compare surgeon scores and chatbot scores. RESULTS: Examinations were completed by 21 surgeons, the majority of whom (n = 14; 66%) practiced in academic or university settings. The median score per scenario was 1.06 for chatbots, compared to 1.88 for surgeons (difference, 0.66; P = .019). Surgeon median scores were better than chatbot median scores for all except 2 scenarios. Chatbot answers were significantly more likely to be deemed critical failures compared to those provided by surgeons (median, 0.50 per chatbot/scenario vs 0.19 per surgeon/scenario; P = .016). CONCLUSIONS: Four popular chatbots performed at a significantly lower level than board-certified surgeons. Implementation of AI should be undertaken with caution in clinical decision making.
Bryan DS; Platz JJ; Naunheim KS; Ferguson MK
43
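A minimal sketch of the score comparison described in the thoracic-surgery record above: per-scenario scores (0-3 points, with critical failures zeroed) for chatbots versus surgeons, compared with a Mann-Whitney U test. The scores below are invented placeholders, with scipy assumed available.

```python
from scipy.stats import mannwhitneyu

# Synthetic per-scenario scores (0-3 points, critical failures already zeroed)
chatbot_scores = [0.0, 1.0, 2.0, 0.0, 1.5, 1.0, 0.0, 2.0, 1.0, 1.5]
surgeon_scores = [2.0, 1.5, 3.0, 2.0, 1.0, 2.5, 2.0, 1.5, 3.0, 2.0]

u_stat, p_value = mannwhitneyu(chatbot_scores, surgeon_scores, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```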
38574939
Chatbot Reliability in Managing Thoracic Surgical Clinical Scenarios.
2,024
The Annals of thoracic surgery
BACKGROUND: Chatbot use in medicine is growing, and concerns have been raised regarding their accuracy. This study assessed the performance of 4 different chatbots in managing thoracic surgical clinical scenarios. METHODS: Topic domains were identified and clinical scenarios were developed within each domain. Each scenario included 3 stems using Key Feature methods related to diagnosis, evaluation, and treatment. Twelve scenarios were presented to ChatGPT-4 (OpenAI), Bard (recently renamed Gemini; Google), Perplexity (Perplexity AI), and Claude 2 (Anthropic) in 3 separate runs. Up to 1 point was awarded for each stem, yielding a potential of 3 points per scenario. Critical failures were identified before scoring; if they occurred, the stem and overall scenario scores were adjusted to 0. We arbitrarily established a threshold of >/=2 points mean adjusted score per scenario as a passing grade and established a critical fail rate of >/=30% as failure to pass. RESULTS: The bot performances varied considerably within each run, and their overall performance was a fail on all runs (critical mean scenario fails of 83%, 71%, and 71%). The bots trended toward "learning" from the first to the second run, but without improvement in overall raw (1.24 +/- 0.47 vs 1.63 +/- 0.76 vs 1.51 +/- 0.60; P = .29) and adjusted (0.44 +/- 0.54 vs 0.80 +/- 0.94 vs 0.76 +/- 0.81; P = .48) scenario scores after all runs. CONCLUSIONS: Chatbot performance in managing clinical scenarios was insufficient to provide reliable assistance. This is a cautionary note against reliance on the current accuracy of chatbots in complex thoracic surgery medical decision making.
Platz JJ; Bryan DS; Naunheim KS; Ferguson MK
43
39132729
Performance of AI-powered chatbots in diagnosing acute pulmonary thromboembolism from given clinical vignettes.
2,024
Acute medicine
BACKGROUND: Chatbots hold great potential to serve as support tools in the diagnostic and clinical decision-making process. In this study, we aimed to evaluate the accuracy of chatbots in diagnosing pulmonary embolism (PE). Furthermore, we assessed their performance in determining PE severity. METHOD: 65 case reports meeting our inclusion criteria were selected for this study. Two emergency medicine (EM) physicians crafted clinical vignettes and presented them to Bard, Bing, and ChatGPT-3.5, asking for the top 10 diagnoses. After all differential diagnosis lists were obtained, the vignettes, enriched with supplemental data, were resubmitted to the chatbots with a request to grade the severity of PE. RESULTS: ChatGPT-3.5, Bing, and Bard listed PE within the top 10 diagnoses with accuracy rates of 92.3%, 92.3%, and 87.6%, respectively. For the top 3 diagnoses, Bard achieved 75.4% accuracy, while ChatGPT and Bing both had 67.7%. As the top diagnosis, Bard, ChatGPT-3.5, and Bing were accurate in 56.9%, 47.7%, and 30.8% of cases, respectively. Significant differences between Bard and both Bing (p=0.000) and ChatGPT (p=0.007) were noted in this group. Massive PEs were correctly identified with an over 85% success rate. Overclassification rates for Bard, ChatGPT-3.5, and Bing were 38.5%, 23.3%, and 20%, respectively. Misclassification rates were highest in the submassive group. CONCLUSION: Although chatbots are not intended for diagnosis, their high level of diagnostic accuracy and success rate in identifying massive PE underscore the promising potential of chatbots as clinical decision support tools. However, further research with larger patient datasets is required to validate and refine their performance in real-world clinical settings.
Arslan B; Sutasir MN; Altinbilek E
43
37590034
Using ChatGPT as a Learning Tool in Acupuncture Education: Comparative Study.
2,023
JMIR medical education
BACKGROUND: ChatGPT (Open AI) is a state-of-the-art artificial intelligence model with potential applications in the medical fields of clinical practice, research, and education. OBJECTIVE: This study aimed to evaluate the potential of ChatGPT as an educational tool in college acupuncture programs, focusing on its ability to support students in learning acupuncture point selection, treatment planning, and decision-making. METHODS: We collected case studies published in Acupuncture in Medicine between June 2022 and May 2023. Both ChatGPT-3.5 and ChatGPT-4 were used to generate suggestions for acupuncture points based on case presentations. A Wilcoxon signed-rank test was conducted to compare the number of acupuncture points generated by ChatGPT-3.5 and ChatGPT-4, and the overlapping ratio of acupuncture points was calculated. RESULTS: Among the 21 case studies, 14 studies were included for analysis. ChatGPT-4 generated significantly more acupuncture points (9.0, SD 1.1) compared to ChatGPT-3.5 (5.6, SD 0.6; P<.001). The overlapping ratios of acupuncture points for ChatGPT-3.5 (0.40, SD 0.28) and ChatGPT-4 (0.34, SD 0.27; P=.67) were not significantly different. CONCLUSIONS: ChatGPT may be a useful educational tool for acupuncture students, providing valuable insights into personalized treatment plans. However, it cannot fully replace traditional diagnostic methods, and further studies are needed to ensure its safe and effective implementation in acupuncture education.
Lee H
32
39284182
Performance of ChatGPT in the In-Training Examination for Anesthesiology and Pain Medicine Residents in South Korea: Observational Study.
2,024
JMIR medical education
BACKGROUND: ChatGPT has been tested in health care, including the US Medical Licensing Examination and specialty exams, showing near-passing results. Its performance in the field of anesthesiology has been assessed using English board examination questions; however, its effectiveness in Korea remains unexplored. OBJECTIVE: This study investigated the problem-solving performance of ChatGPT in the fields of anesthesiology and pain medicine in the Korean language context, highlighted advancements in artificial intelligence (AI), and explored its potential applications in medical education. METHODS: We investigated the performance (number of correct answers/number of questions) of GPT-4, GPT-3.5, and CLOVA X in the fields of anesthesiology and pain medicine, using in-training examinations that have been administered to Korean anesthesiology residents over the past 5 years, with an annual composition of 100 questions. Questions containing images, diagrams, or photographs were excluded from the analysis. Furthermore, to assess the performance differences of the GPT across different languages, we conducted a comparative analysis of the GPT-4's problem-solving proficiency using both the original Korean texts and their English translations. RESULTS: A total of 398 questions were analyzed. GPT-4 (67.8%) demonstrated a significantly better overall performance than GPT-3.5 (37.2%) and CLOVA-X (36.7%). However, GPT-3.5 and CLOVA X did not show significant differences in their overall performance. Additionally, the GPT-4 showed superior performance on questions translated into English, indicating a language processing discrepancy (English: 75.4% vs Korean: 67.8%; difference 7.5%; 95% CI 3.1%-11.9%; P=.001). CONCLUSIONS: This study underscores the potential of AI tools, such as ChatGPT, in medical education and practice but emphasizes the need for cautious application and further refinement, especially in non-English medical contexts. The findings suggest that although AI advancements are promising, they require careful evaluation and development to ensure acceptable performance across diverse linguistic and professional settings.
Yoon SH; Oh SK; Lim BG; Lee HJ
21
37812998
Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes.
2,023
Asian journal of psychiatry
BACKGROUND: ChatGPT has emerged as the most advanced and rapidly developing large language chatbot system. With its immense potential ranging from answering a simple query to cracking highly competitive medical exams, ChatGPT continues to impress scientists and researchers worldwide, giving room for more discussions regarding its utility in various fields. One such field of attention is Psychiatry. With suboptimal diagnosis and treatment, assuring mental health and well-being is a challenge in many countries, particularly developing nations. In this regard, we conducted an evaluation to assess the performance of ChatGPT 3.5 in Psychiatry using clinical cases to provide evidence-based information regarding the implications of ChatGPT 3.5 in enhancing mental health and well-being. METHODS: ChatGPT 3.5 was used in this experimental study to initiate the conversations and collect responses to clinical vignettes in Psychiatry. Using 100 clinical case vignettes, the replies were assessed by expert faculty from the Department of Psychiatry. There were 100 different psychiatric illnesses represented in the cases. We recorded and assessed the initial ChatGPT 3.5 responses. The evaluation was conducted based on the objectives of the questions posed at the conclusion of each case, and these objectives were divided into 10 categories. The grading was completed by taking the mean value of the scores provided by the evaluators. Graphs and tables were used to represent the grades. RESULTS: The evaluation report suggests that ChatGPT 3.5 fared extremely well in Psychiatry, receiving "Grade A" ratings in 61 out of 100 cases, "Grade B" ratings in 31, and "Grade C" ratings in 8. The majority of queries concerned management strategies, followed by diagnosis, differential diagnosis, assessment, investigation, counselling, clinical reasoning, ethical reasoning, prognosis, and request acceptance. ChatGPT 3.5 performed extremely well, especially in generating management strategies followed by diagnoses for different psychiatric conditions. No responses were graded "D", indicating that there were no errors in the diagnosis or response for clinical care. Only a few discrepancies and missing details were noted in the responses that received a "Grade C". CONCLUSION: It is evident from our study that ChatGPT 3.5 has appreciable knowledge and interpretation skills in Psychiatry. Thus, ChatGPT 3.5 undoubtedly has the potential to transform the field of Medicine, and we emphasize its utility in Psychiatry through the findings of our study. However, for any AI model to be successful, assuring reliability, validation of information, proper guidelines, and an implementation framework is necessary.
Franco D'Souza R; Amanullah S; Mathew M; Surapaneni KM
0-1
39314814
Exploring community pharmacists' attitudes in Thailand towards ChatGPT usage: A pilot qualitative investigation.
2,024
Digital health
BACKGROUND: ChatGPT has recently emerged as a disruptive technology, potentially impacting various societal dimensions, including pharmacy practices. In Thailand, community pharmacists are navigating transitions as patients increasingly rely on digital tools for healthcare recommendations. This study explores the attitudes of community pharmacists in Hatyai, one of Thailand's most populated cities, towards the integration of ChatGPT in pharmacy services. METHOD: ChatGPT-3.5 was used to generate responses to three questions concerning the use of medicine in special populations in the Thai language. These responses were then incorporated into a questionnaire and evaluated using a Likert scale from 1 to 5. Participants who consented were asked to rate the responses and participate in an in-depth interview. RESULTS: The majority of participants rated the responses favorably, with scores of 4 and 5 accounting for at least 60% of the ratings. Only a small proportion of responses received doubtful ratings (score of 3) or was in disagreement, ranging from 20% to 40%. Moreover, open opinions extracted from the interviews suggested that participants viewed ChatGPT as a capable assistant, as it provided fast yet reasonably accurate information in the Thai language. CONCLUSION: The findings indicate that community pharmacists view ChatGPT as a capable assistant, albeit noting the need for further refinements. The study underscores the importance for pharmacists to proactively adapt to technological advancements, particularly those affecting patient safety, to enhance healthcare delivery and optimize treatment outcomes.
Boonrit N; Chaisawat K; Phueakong C; Nootong N; Ruanglertboon W
10
38729608
Unlocking the future of patient education: ChatGPT vs. LexiComp® as sources of patient education materials.
2,025
Journal of the American Pharmacists Association : JAPhA
BACKGROUND: ChatGPT is a conversational artificial intelligence technology that has shown application in various facets of healthcare. With the increased use of AI, it is imperative to assess the accuracy and comprehensibility of AI platforms. OBJECTIVE: This pilot project aimed to assess the understandability, readability, and accuracy of ChatGPT as a source of medication-related patient education as compared with an evidence-based medicine tertiary reference resource, LexiComp®. METHODS: Patient education materials (PEMs) were obtained from ChatGPT and LexiComp® for 8 common medications (albuterol, apixaban, atorvastatin, hydrocodone/acetaminophen, insulin glargine, levofloxacin, omeprazole, and sacubitril/valsartan). PEMs were extracted, blinded, and assessed by 2 investigators independently. The primary outcome was a comparison of the Patient Education Materials Assessment Tool-printable (PEMAT-P). Secondary outcomes included Flesch reading ease, Flesch-Kincaid grade level, percent passive sentences, word count, and accuracy. A 7-item accuracy checklist for each medication was generated by expert consensus among pharmacist investigators, with LexiComp® PEMs serving as the control. PEMAT-P interrater reliability was determined via intraclass correlation coefficient (ICC). Flesch reading ease, Flesch-Kincaid grade level, percent passive sentences, and word count were calculated by Microsoft® Word®. Continuous data were assessed using the Student's t-test via SPSS (version 20.0). RESULTS: No difference was found in the PEMAT-P understandability score of PEMs produced by ChatGPT versus LexiComp® [77.9% (11.0) vs. 72.5% (2.4), P=0.193]. Reading level was higher with ChatGPT [8.6 (1.2) vs. 5.6 (0.3), P < 0.001]. ChatGPT PEMs had a lower percentage of passive sentences and lower word count. The average accuracy score of ChatGPT PEMs was 4.25/7 (61%), with scores ranging from 29% to 86%. CONCLUSION: Despite comparable PEMAT-P scores, ChatGPT PEMs did not meet grade level targets. Lower word count and passive text with ChatGPT PEMs could benefit patients, but the variable accuracy scores prevent routine use of ChatGPT to produce medication-related PEMs at this time.
Covington EW; Watts Alexander CS; Sewell J; Hutchison AM; Kay J; Tocco L; Hyte M
10
38684536
Performance of ChatGPT in Answering Clinical Questions on the Practical Guideline of Blepharoptosis.
2,024
Aesthetic plastic surgery
BACKGROUND: ChatGPT is a free artificial intelligence (AI) language model developed and released by OpenAI in late 2022. This study aimed to evaluate the ability of ChatGPT to accurately answer clinical questions (CQs) on the Guideline for the Management of Blepharoptosis published by the American Society of Plastic Surgeons (ASPS) in 2022. METHODS: CQs in the guideline were used as question sources in both English and Japanese. For each question, ChatGPT provided answers for CQs, evidence quality, recommendation strength, reference match, and answered word counts. We compared the performance of ChatGPT in each component between English and Japanese queries. RESULTS: A total of 11 questions were included in the final analysis, and ChatGPT answered 61.3% of these correctly. ChatGPT demonstrated a higher accuracy rate (76.4% versus 46.4%; p = 0.004) and higher word counts (123 versus 35.9 words; p = 0.004) for English answers to CQs compared with Japanese answers. No statistical differences were noted for evidence quality, recommendation strength, or reference match. A total of 697 references were proposed, but only 216 of them (31.0%) existed. CONCLUSIONS: ChatGPT demonstrates potential as an adjunctive tool in the management of blepharoptosis. However, it is crucial to recognize that the existing AI model has distinct limitations, and its primary role should be to complement the expertise of medical professionals. LEVEL OF EVIDENCE V: Observational study under respected authorities. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Shiraishi M; Tomioka Y; Miyakuni A; Ishii S; Hori A; Park H; Ohba J; Okazaki M
0-1
37634667
Chat GPT as a Neuro-Score Calculator: Analysis of a Large Language Model's Performance on Various Neurological Exam Grading Scales.
2,023
World neurosurgery
BACKGROUND: ChatGPT is a large language model artificial intelligence chatbot that has been applied to different aspects of the medical field. Our study aims to assess the ability of ChatGPT to evaluate patients based on their exams for different scores, including the Glasgow Coma Scale (GCS), intracranial hemorrhage (ICH) score, and Hunt & Hess (H&H) classification. METHODS: We created batches of patient test cases with detailed neurological exams, totaling 20 cases, and created variants with increasingly complex phrasing of the test cases. Using ChatGPT, we assessed repeatability and quantified the errors, including the average error rate (AER) and magnitude of errors (AME). We repeated this process for the H&H and the ICH score using base cases. Specific prompts were created for each calculator. RESULTS: The GCS calculator on 10 base test cases had an AER/AME of 10%/0.150. The accuracy of ChatGPT decreased with increasing complexity; for example, in a variation where crucial information was missing, the AER was 45% for 20 cases. For H&H, AER/AME was 13%/0.13, and for ICH, AER/AME was 27.5%/0.325. Using a simple prompt resulted in a significantly higher error rate of 70%. CONCLUSIONS: ChatGPT demonstrated the ability, in this proof-of-concept experiment, to evaluate neurological exams using established assessment scales, including the GCS, ICH score, and H&H classification. However, it has limitations in accuracy and may "hallucinate" with complex or vague descriptions. Nonetheless, ChatGPT has promising potential in medicine.
Chen TC; Kaminski E; Koduri L; Singer A; Singer J; Couldwell M; Delashaw J; Dumont A; Wang A
0-1
38573944
Performance of ChatGPT on Chinese Master's Degree Entrance Examination in Clinical Medicine.
2,024
PloS one
BACKGROUND: ChatGPT is a large language model designed to generate responses based on a contextual understanding of user queries and requests. This study utilised the entrance examination for the Master of Clinical Medicine in Traditional Chinese Medicine to assess the reliability and practicality of ChatGPT within the domain of medical education. METHODS: We selected 330 single- and multiple-choice questions from the 2021 and 2022 Chinese Master of Clinical Medicine comprehensive examinations, which did not include any images or tables. To ensure the test's accuracy and authenticity, we preserved the original format of the query and alternative test texts, without any modifications or explanations. RESULTS: Both ChatGPT-3.5 and GPT-4 attained average scores surpassing the admission threshold. Notably, ChatGPT achieved its highest score in the Medical Humanities section, with a correct rate of 93.75%. However, ChatGPT-3.5 exhibited its lowest accuracy of 37.5% in the Pathology division, while GPT-4 also displayed a relatively low correctness percentage of 60.23% in the Biochemistry section. An analysis of sub-questions revealed that ChatGPT demonstrates superior performance in handling single-choice questions but performs poorly on multiple-choice questions. CONCLUSION: ChatGPT exhibits a degree of medical knowledge and the capacity to aid in diagnosing and treating diseases. Nevertheless, enhancements are warranted to address its limitations in accuracy and reliability. Its utilization must therefore be accompanied by rigorous evaluation and oversight, along with proactive measures to surmount prevailing constraints.
Li KC; Bu ZJ; Shahjalal M; He BX; Zhuang ZF; Li C; Liu JP; Wang B; Liu ZL
21
37548997
Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.
2,023
JMIR medical education
BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors. OBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics. METHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated. RESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors. CONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.
Borchert RJ; Hickman CR; Pepys J; Sadler TJ
21
40354644
Global Health care Professionals' Perceptions of Large Language Model Use In Practice: Cross-Sectional Survey Study.
2,025
JMIR medical education
BACKGROUND: ChatGPT is a large language model-based chatbot developed by OpenAI. ChatGPT has many potential applications to health care, including enhanced diagnostic accuracy and efficiency, improved treatment planning, and better patient outcomes. However, health care professionals' perceptions of ChatGPT and similar artificial intelligence tools are not well known. Understanding these attitudes is important to inform the best approaches to exploring their use in medicine. OBJECTIVE: Our aim was to evaluate the health care professionals' awareness and perceptions regarding potential applications of ChatGPT in the medical field, including potential benefits and challenges of adoption. METHODS: We designed a 33-question online survey that was distributed among health care professionals via targeted emails and professional Twitter and LinkedIn accounts. The survey included a range of questions to define respondents' demographic characteristics, familiarity with ChatGPT, perceptions of this tool's usefulness and reliability, and opinions on its potential to improve patient care, research, and education efforts. RESULTS: One hundred and fifteen health care professionals from 21 countries responded to the survey, including physicians, nurses, researchers, and educators. Of these, 101 (87.8%) had heard of ChatGPT, mainly from peers, social media, and news, and 77 (76.2%) had used ChatGPT at least once. Participants found ChatGPT to be helpful for writing manuscripts (n=31, 45.6%), emails (n=25, 36.8%), and grants (n=12, 17.6%); accessing the latest research and evidence-based guidelines (n=21, 30.9%); providing suggestions on diagnosis or treatment (n=15, 22.1%); and improving patient communication (n=12, 17.6%). Respondents also felt that the ability of ChatGPT to access and summarize research articles (n=22, 46.8%), provide quick answers to clinical questions (n=15, 31.9%), and generate patient education materials (n=10, 21.3%) was helpful. However, there are concerns regarding the use of ChatGPT, for example, the accuracy of responses (n=14, 29.8%), limited applicability in specific practices (n=18, 38.3%), and legal and ethical considerations (n=6, 12.8%), mainly related to plagiarism or copyright violations. Participants stated that safety protocols such as data encryption (n=63, 62.4%) and access control (n=52, 51.5%) could assist in ensuring patient privacy and data security. CONCLUSIONS: Our findings show that ChatGPT use is widespread among health care professionals in daily clinical, research, and educational activities. The majority of our participants found ChatGPT to be useful; however, there are concerns about patient privacy, data security, and its legal and ethical issues as well as the accuracy of its information. Further studies are required to understand the impact of ChatGPT and other large language models on clinical, educational, and research outcomes, and the concerns regarding its use must be addressed systematically and through appropriate methods.
Ozkan E; Tekin A; Ozkan MC; Cabrera D; Niven A; Dong Y
43
37851495
Health Care Trainees' and Professionals' Perceptions of ChatGPT in Improving Medical Knowledge Training: Rapid Survey Study.
2,023
Journal of medical Internet research
BACKGROUND: ChatGPT is a powerful pretrained large language model. It has both demonstrated potential and raised concerns related to knowledge translation and knowledge transfer. To apply and improve knowledge transfer in the real world, it is essential to assess the perceptions and acceptance of the users of ChatGPT-assisted training. OBJECTIVE: We aimed to investigate the perceptions of health care trainees and professionals on ChatGPT-assisted training, using biomedical informatics as an example. METHODS: We used purposeful sampling to include all health care undergraduate trainees and graduate professionals (n=195) from January to May 2023 in the School of Public Health at the National Defense Medical Center in Taiwan. Subjects were asked to watch a 2-minute video introducing 5 scenarios about ChatGPT-assisted training in biomedical informatics and then answer a self-designed online (web- and mobile-based) questionnaire according to the Kirkpatrick model. The survey responses were used to develop 4 constructs: "perceived knowledge acquisition," "perceived training motivation," "perceived training satisfaction," and "perceived training effectiveness." The study used structural equation modeling (SEM) to evaluate and test the structural model and hypotheses. RESULTS: The online questionnaire response rate was 152 of 195 (78%); 88 of 152 participants (58%) were undergraduate trainees and 90 of 152 participants (59%) were women. The ages ranged from 18 to 53 years (mean 23.3, SD 6.0 years). There was no statistical difference in perceptions of training evaluation between men and women. Most participants were enthusiastic about the ChatGPT-assisted training, while the graduate professionals were more enthusiastic than undergraduate trainees. Nevertheless, some concerns were raised about potential cheating on training assessment. The average scores for knowledge acquisition, training motivation, training satisfaction, and training effectiveness were 3.84 (SD 0.80), 3.76 (SD 0.93), 3.75 (SD 0.87), and 3.72 (SD 0.91), respectively (Likert scale 1-5: strongly disagree to strongly agree). Knowledge acquisition had the highest score and training effectiveness the lowest. In the SEM results, training effectiveness was influenced predominantly by knowledge acquisition and partially met the hypotheses in the research framework. Knowledge acquisition had a direct effect on training effectiveness, training satisfaction, and training motivation, with beta coefficients of .80, .87, and .97, respectively (all P<.001). CONCLUSIONS: Most health care trainees and professionals perceived ChatGPT-assisted training as an aid in knowledge transfer. However, to improve training effectiveness, it should be combined with empirical experts for proper guidance and dual interaction. In a future study, we recommend using a larger sample size for evaluation of internet-connected large language models in medical knowledge transfer.
Hu JM; Liu FC; Chu CM; Chang YT
10
38912370
ChatGPT Is Moderately Accurate in Providing a General Overview of Orthopaedic Conditions.
2,024
JB & JS open access
BACKGROUND: ChatGPT is an artificial intelligence chatbot capable of providing human-like responses for virtually every possible inquiry. This advancement has provoked public interest regarding the use of ChatGPT, including in health care. The purpose of the present study was to investigate the quantity and accuracy of ChatGPT outputs for general patient-focused inquiries regarding 40 orthopaedic conditions. METHODS: For each of the 40 conditions, ChatGPT (GPT-3.5) was prompted with the text "I have been diagnosed with [condition]. Can you tell me more about it?" The numbers of treatment options, risk factors, and symptoms given for each condition were compared with the number in the corresponding American Academy of Orthopaedic Surgeons (AAOS) OrthoInfo website article for information quantity assessment. For accuracy assessment, an attending orthopaedic surgeon ranked the outputs in the categories of <50%, 50% to 74%, 75% to 99%, and 100% accurate. An orthopaedic sports medicine fellow also independently ranked output accuracy. RESULTS: Compared with the AAOS OrthoInfo website, ChatGPT provided significantly fewer treatment options (mean difference, -2.5; p < 0.001) and risk factors (mean difference, -1.1; p = 0.02) but did not differ in the number of symptoms given (mean difference, -0.5; p = 0.31). The surgical treatment options given by ChatGPT were often nondescript (n = 20 outputs), such as "surgery" as the only operative treatment option. Regarding accuracy, most conditions (26 of 40; 65%) were ranked as mostly (75% to 99%) accurate, with the others (14 of 40; 35%) ranked as moderately (50% to 74%) accurate, by an attending surgeon. Neither surgeon ranked any condition as mostly inaccurate (<50% accurate). Interobserver agreement between accuracy ratings was poor (kappa = 0.03; p = 0.30). CONCLUSIONS: ChatGPT provides at least moderately accurate outputs for general inquiries of orthopaedic conditions but is lacking in the quantity of information it provides for risk factors and treatment options. Professional organizations, such as the AAOS, are the preferred source of musculoskeletal information when compared with ChatGPT. CLINICAL RELEVANCE: ChatGPT is an emerging technology with potential roles and limitations in patient education that are still being explored.
Sparks CA; Fasulo SM; Windsor JT; Bankauskas V; Contrada EV; Kraeutler MJ; Scillia AJ
32
38434792
Performance of ChatGPT on the National Korean Occupational Therapy Licensing Examination.
2,024
Digital health
BACKGROUND: ChatGPT is an artificial intelligence-based large language model (LLM). ChatGPT has been widely applied in medicine, but its application in occupational therapy has been lacking. OBJECTIVE: This study examined the accuracy of ChatGPT on the National Korean Occupational Therapy Licensing Examination (NKOTLE) and investigated its potential for application in the field of occupational therapy. METHODS: ChatGPT 3.5 was tested on the five most recent years of the NKOTLE using Korean prompts. Multiple-choice questions were entered manually by three dependent encoders and scored according to the number of correct answers. RESULTS: Across the most recent five years, ChatGPT did not achieve the passing score of 60% accuracy but exhibited interrater agreement of 0.6 or higher. CONCLUSION: ChatGPT could not pass the NKOTLE but demonstrated a high level of agreement between raters. Even though the potential of ChatGPT to pass the NKOTLE is currently inadequate, it performed very close to the passing level even with only Korean prompts.
Lee SA; Heo S; Park JH
21
38230387
Utilizing ChatGPT in Telepharmacy.
2,024
Cureus
BACKGROUND: ChatGPT is an artificial intelligence-powered chatbot that has demonstrated capabilities in numerous fields, including medical and healthcare sciences. This study evaluates the potential for ChatGPT application in telepharmacy, the delivery of pharmaceutical care via telecommunications, by assessing its interactions, adherence to instructions, and ability to role-play as a pharmacist while handling a series of life-like scenario questions. METHODS: Two versions (ChatGPT 3.5 and 4.0, OpenAI) were assessed using two independent trials each. ChatGPT was instructed to act as a pharmacist and answer patient inquiries, followed by a set of 20 assessment questions. Then, ChatGPT was instructed to stop its act, provide feedback and list its sources for drug information. The responses to the assessment questions were evaluated in terms of accuracy, precision and clarity using a 4-point Likert-like scale. RESULTS: ChatGPT demonstrated the ability to follow detailed instructions, role-play as a pharmacist, and appropriately handle all questions. ChatGPT was able to understand case details, recognize generic and brand drug names, identify drug side effects, interactions, prescription requirements and precautions, and provide proper point-by-point instructions regarding administration, dosing, storage and disposal. The overall means of pooled scores were 3.425 (0.712) and 3.7 (0.61) for ChatGPT 3.5 and 4.0, respectively. The rank distribution of scores was not significantly different (P>0.05). None of the answers could be considered directly harmful or labeled as entirely or mostly incorrect, and most point deductions were due to other factors such as indecisiveness, adding immaterial information, missing certain considerations, or partial unclarity. The answers were similar in length across trials and appropriately concise. ChatGPT 4.0 showed superior performance, higher consistency, better character adherence and the ability to report various reliable information sources. However, it only allowed an input of 40 questions every three hours and provided inaccurate feedback regarding the number of assessed patients, compared to 3.5, which allowed unlimited input but was unable to provide feedback. CONCLUSIONS: Integrating ChatGPT in telepharmacy holds promising potential; however, a number of drawbacks must be overcome for it to function effectively.
Bazzari FH; Bazzari AH
10
37314466
Evaluation of the Artificial Intelligence Chatbot on Breast Reconstruction and Its Efficacy in Surgical Research: A Case Study.
2,023
Aesthetic plastic surgery
BACKGROUND: ChatGPT is an open-source artificial intelligence (AI) chatbot that uses deep learning to produce human-like text dialog. Its potential applications in the scientific community are vast; however, its efficacy in performing comprehensive literature searches, data analysis, and report writing on aesthetic plastic surgery topics remains unknown. This study aims to evaluate both the accuracy and comprehensiveness of ChatGPT's responses to assess its suitability for use in aesthetic plastic surgery research. METHODS: Six questions were prompted to ChatGPT on post-mastectomy breast reconstruction. The first two questions focused on the current evidence and options for breast reconstruction post-mastectomy, and the remaining four questions focused specifically on autologous breast reconstruction. Using the Likert framework, the responses provided by ChatGPT were qualitatively assessed for accuracy and information content by two specialist plastic surgeons with extensive experience in the field. RESULTS: ChatGPT provided relevant, accurate information; however, it lacked depth. It could provide no more than a superficial overview in response to more esoteric questions and generated incorrect references. It created non-existent references and cited the wrong journal and date, which poses a significant challenge to maintaining academic integrity and warrants caution regarding its use in academia. CONCLUSION: While ChatGPT demonstrated proficiency in summarizing existing knowledge, it created fictitious references, which poses a significant concern for its use in academia and healthcare. Caution should be exercised in interpreting its responses in the aesthetic plastic surgery field, and it should only be used for such purposes with sufficient oversight. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Xie Y; Seth I; Rozen WM; Hunter-Smith DJ
32
37095384
Aesthetic Surgery Advice and Counseling from Artificial Intelligence: A Rhinoplasty Consultation with ChatGPT.
2,023
Aesthetic plastic surgery
BACKGROUND: ChatGPT is an open-source artificial large language model that uses deep learning to produce human-like text dialogue. This observational study evaluated the ability of ChatGPT to provide informative and accurate responses to a set of hypothetical questions designed to simulate an initial consultation about rhinoplasty. METHODS: Nine questions were prompted to ChatGPT on rhinoplasty. The questions were sourced from a checklist published by the American Society of Plastic Surgeons, and the responses were assessed for accessibility, informativeness, and accuracy by Specialist Plastic Surgeons with extensive experience in rhinoplasty. RESULTS: ChatGPT was able to provide coherent and easily comprehensible answers to the questions posed, demonstrating its understanding of natural language in a health-specific context. The responses emphasized the importance of an individualized approach, particularly in aesthetic plastic surgery. However, the study also highlighted ChatGPT's limitations in providing more detailed or personalized advice. CONCLUSION: Overall, the results suggest that ChatGPT has the potential to provide valuable information to patients in a medical context, particularly in situations where patients may be hesitant to seek advice from medical professionals or where access to medical advice is limited. However, further research is needed to determine the scope and limitations of AI language models in this domain and to assess the potential benefits and risks associated with their use. LEVEL OF EVIDENCE V: Observational study under respected authorities. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Xie Y; Seth I; Hunter-Smith DJ; Rozen WM; Ross R; Lee M
32
39042885
Assessing ChatGPT's Competency in Addressing Interdisciplinary Inquiries on Chatbot Uses in Sports Rehabilitation: Simulation Study.
2,024
JMIR medical education
BACKGROUND: ChatGPT showcases exceptional conversational capabilities and extensive cross-disciplinary knowledge. In addition, it can perform multiple roles in a single chat session. This unique multirole-playing feature positions ChatGPT as a promising tool for exploring interdisciplinary subjects. OBJECTIVE: The aim of this study was to evaluate ChatGPT's competency in addressing interdisciplinary inquiries based on a case study exploring the opportunities and challenges of chatbot uses in sports rehabilitation. METHODS: We developed a model termed PanelGPT to assess ChatGPT's competency in addressing interdisciplinary topics through simulated panel discussions. Taking chatbot uses in sports rehabilitation as an example of an interdisciplinary topic, we prompted ChatGPT through PanelGPT to role-play a physiotherapist, psychologist, nutritionist, artificial intelligence expert, and athlete in a simulated panel discussion. During the simulation, we posed questions to the panel while ChatGPT acted as both the panelists for responses and the moderator for steering the discussion. We performed the simulation using ChatGPT-4 and evaluated the responses by referring to the literature and our human expertise. RESULTS: By tackling questions related to chatbot uses in sports rehabilitation with respect to patient education, physiotherapy, physiology, nutrition, and ethical considerations, responses from the ChatGPT-simulated panel discussion reasonably pointed to various benefits such as 24/7 support, personalized advice, automated tracking, and reminders. ChatGPT also correctly emphasized the importance of patient education, and identified challenges such as limited interaction modes, inaccuracies in emotion-related advice, assurance of data privacy and security, transparency in data handling, and fairness in model training. It also stressed that chatbots are to assist as a copilot, not to replace human health care professionals in the rehabilitation process. CONCLUSIONS: ChatGPT exhibits strong competency in addressing interdisciplinary inquiry by simulating multiple experts from complementary backgrounds, with significant implications in assisting medical education.
McBee JC; Han DY; Liu L; Ma L; Adjeroh DA; Xu D; Hu G
32
37546795
Interdisciplinary Inquiry via PanelGPT: Application to Explore Chatbot Application in Sports Rehabilitation.
2,023
medRxiv : the preprint server for health sciences
BACKGROUND: ChatGPT showcases exceptional conversational capabilities and extensive cross-disciplinary knowledge. In addition, it possesses the ability to perform multiple roles within a single chat session. This unique multi-role-playing feature positions ChatGPT as a promising tool to explore interdisciplinary subjects. OBJECTIVE: The study intended to guide ChatGPT for interdisciplinary exploration through simulated panel discussions. As a proof-of-concept, we employed this method to evaluate the advantages and challenges of using chatbots in sports rehabilitation. METHODS: We proposed a model termed PanelGPT to explore ChatGPT's knowledge graph on interdisciplinary topics through simulated panel discussions. Applied to "chatbots in sports rehabilitation", ChatGPT role-played both the moderator and panelists, which included a physiotherapist, psychologist, nutritionist, AI expert, and an athlete. Acting as the audience, we posed questions to the panel, with ChatGPT acting as both the panelists for responses and the moderator for hosting the discussion. We performed the simulation using the ChatGPT-4 model and evaluated the responses with existing literature and human expertise. RESULTS: Each simulation mimicked a real-life panel discussion: The moderator introduced the panel and posed opening/closing questions, to which all panelists responded. The experts engaged with each other to address inquiries from the audience, primarily from their respective fields of expertise. By tackling questions related to education, physiotherapy, physiology, nutrition, and ethical consideration, the discussion highlighted benefits such as 24/7 support, personalized advice, automated tracking, and reminders. It also emphasized the importance of user education and identified challenges such as limited interaction modes, inaccuracies in emotion-related advice, assurance of data privacy and security, transparency in data handling, and fairness in model training. The panelists reached a consensus that chatbots are designed to assist, not replace, human healthcare professionals in the rehabilitation process. CONCLUSIONS: Compared to a typical conversation with ChatGPT, the multi-perspective approach of PanelGPT facilitates a comprehensive understanding of an interdisciplinary topic by integrating insights from experts with complementary knowledge. Beyond addressing the exemplified topic of chatbots in sports rehabilitation, the model can be adapted to tackle a wide array of interdisciplinary topics within educational, research, and healthcare settings.
McBee JC; Han DY; Liu L; Ma L; Adjeroh DA; Xu D; Hu G
32
39137029
Understanding Health Care Students' Perceptions, Beliefs, and Attitudes Toward AI-Powered Language Models: Cross-Sectional Study.
2,024
JMIR medical education
BACKGROUND: ChatGPT was not intended for use in health care, but it has potential benefits that depend on end-user understanding and acceptability, which is where health care students become crucial. There is still a limited amount of research in this area. OBJECTIVE: The primary aim of our study was to assess the frequency of ChatGPT use, the perceived level of knowledge, the perceived risks associated with its use, and the ethical issues, as well as attitudes toward the use of ChatGPT in the context of education in the field of health. In addition, we aimed to examine whether there were differences across groups based on demographic variables. The second part of the study aimed to assess the association between the frequency of use, the level of perceived knowledge, the level of risk perception, and the level of perception of ethics as predictive factors for participants' attitudes toward the use of ChatGPT. METHODS: A cross-sectional survey was conducted from May to June 2023 encompassing students of medicine, nursing, dentistry, nutrition, and laboratory science across the Americas. The study used descriptive analysis, chi-square tests, and ANOVA to assess statistical significance across different categories. The study used several ordinal logistic regression models to analyze the impact of predictive factors (frequency of use, perception of knowledge, perception of risk, and ethics perception scores) on attitude as the dependent variable. The models were adjusted for gender, institution type, major, and country. Stata was used to conduct all the analyses. RESULTS: Of 2661 health care students, 42.99% (n=1144) were unaware of ChatGPT. The median score of knowledge was "minimal" (median 2.00, IQR 1.00-3.00). Most respondents (median 2.61, IQR 2.11-3.11) regarded ChatGPT as neither ethical nor unethical. Most participants (median 3.89, IQR 3.44-4.34) "somewhat agreed" that ChatGPT (1) benefits health care settings, (2) provides trustworthy data, (3) is a helpful tool for clinical and educational medical information access, and (4) makes the work easier. In total, 70% (7/10) of people used it for homework. As the perceived knowledge of ChatGPT increased, there was a stronger tendency with regard to having a favorable attitude toward ChatGPT. Higher ethical consideration perception ratings increased the likelihood of considering ChatGPT as a source of trustworthy health care information (odds ratio [OR] 1.620, 95% CI 1.498-1.752), beneficial in medical issues (OR 1.495, 95% CI 1.452-1.539), and useful for medical literature (OR 1.494, 95% CI 1.426-1.564; P<.001 for all results). CONCLUSIONS: Over 40% of American health care students (1144/2661, 42.99%) were unaware of ChatGPT despite its extensive use in the health field. Our data revealed the positive attitudes toward ChatGPT and the desire to learn more about it. Medical educators must explore how chatbots may be included in undergraduate health care education programs.
Cherrez-Ojeda I; Gallardo-Bastidas JC; Robles-Velasco K; Osorio MF; Velez Leon EM; Leon Velastegui M; Pauletto P; Aguilar-Diaz FC; Squassi A; Gonzalez Eras SP; Cordero Carrasco E; Chavez Gonzalez KL; Calderon JC; Bousquet J; Bedbrook A; Faytong-Haro M
10
37544801
Revolutionizing Healthcare with ChatGPT: An Early Exploration of an AI Language Model's Impact on Medicine at Large and its Role in Pediatric Surgery.
2,023
Journal of pediatric surgery
BACKGROUND: ChatGPT, a natural language processing model, has shown great promise in revolutionizing the field of medicine. This paper presents a comprehensive evaluation of the transformative potential of OpenAI's ChatGPT on healthcare and scientific research, with an exploration on its prospective capacity to impact the field of pediatric surgery. METHODS: Through an extensive review of the literature, we illuminate ChatGPT's applications in clinical healthcare and medical research while presenting the ethical considerations surrounding its use. RESULTS: Our review reveals the exciting work done so far evaluating the numerous potential uses of ChatGPT in clinical medicine and medical research, but it also shows that significant research and advancements in natural language processing models are still needed. CONCLUSION: ChatGPT has immense promise in transforming how we provide healthcare and how we conduct research. Currently, further robust research on the safety, effectiveness, and ethical considerations of ChatGPT is greatly needed. LEVEL OF STUDY: V.
Xiao D; Meyers P; Upperman JS; Robinson JR
10
39121303
ChatGPT in medicine: A cross-disciplinary systematic review of ChatGPT's (artificial intelligence) role in research, clinical practice, education, and patient interaction.
2,024
Medicine
BACKGROUND: ChatGPT, a powerful AI language model, has gained increasing prominence in medicine, offering potential applications in healthcare, clinical decision support, patient communication, and medical research. This systematic review aims to comprehensively assess the applications of ChatGPT in healthcare education, research, writing, patient communication, and practice while also delineating potential limitations and areas for improvement. METHOD: Our comprehensive database search retrieved relevant papers from PubMed, Medline and Scopus. After the screening process, 83 studies met the inclusion criteria. This review includes original studies comprising case reports, analytical studies, and editorials with original findings. RESULT: ChatGPT is useful for scientific research and academic writing, and assists with grammar, clarity, and coherence. This helps non-English speakers and improves accessibility by breaking down linguistic barriers. However, its limitations include probable inaccuracy and ethical issues, such as bias and plagiarism. ChatGPT streamlines workflows and offers diagnostic and educational potential in healthcare but exhibits biases and lacks emotional sensitivity. It is useful in inpatient communication, but requires up-to-date data and faces concerns about the accuracy of information and hallucinatory responses. CONCLUSION: Given the potential for ChatGPT to transform healthcare education, research, and practice, it is essential to approach its adoption in these areas with caution due to its inherent limitations.
Fatima A; Shafique MA; Alam K; Fadlalla Ahmed TK; Mustafa MS
10
39285377
Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: a systematic review and meta-analysis.
2,024
BMC medical education
BACKGROUND: ChatGPT, a recently developed artificial intelligence (AI) chatbot, has demonstrated improved performance in examinations in the medical field. However, thus far, an overall evaluation of the potential of ChatGPT models (ChatGPT-3.5 and GPT-4) in a variety of national health licensing examinations is lacking. This study aimed to provide a comprehensive assessment of the ChatGPT models' performance in national licensing examinations for medical, pharmacy, dentistry, and nursing research through a meta-analysis. METHODS: Following the PRISMA protocol, full-text articles from MEDLINE/PubMed, EMBASE, ERIC, Cochrane Library, Web of Science, and key journals were reviewed from the time of ChatGPT's introduction to February 27, 2024. Studies were eligible if they evaluated the performance of a ChatGPT model (ChatGPT-3.5 or GPT-4); related to national licensing examinations in the fields of medicine, pharmacy, dentistry, or nursing; involved multiple-choice questions; and provided data that enabled the calculation of effect size. Two reviewers independently completed data extraction, coding, and quality assessment. The JBI Critical Appraisal Tools were used to assess the quality of the selected articles. Overall effect size and 95% confidence intervals [CIs] were calculated using a random-effects model. RESULTS: A total of 23 studies were considered for this review, which evaluated the accuracy of four types of national licensing examinations. The selected articles were in the fields of medicine (n = 17), pharmacy (n = 3), nursing (n = 2), and dentistry (n = 1). They reported varying accuracy levels, ranging from 36 to 77% for ChatGPT-3.5 and 64.4-100% for GPT-4. The overall effect size for the percentage of accuracy was 70.1% (95% CI, 65-74.8%), which was statistically significant (p < 0.001). Subgroup analyses revealed that GPT-4 demonstrated significantly higher accuracy in providing correct responses than its earlier version, ChatGPT-3.5. Additionally, in the context of health licensing examinations, the ChatGPT models exhibited greater proficiency in the following order: pharmacy, medicine, dentistry, and nursing. However, the lack of a broader set of questions, including open-ended and scenario-based questions, and significant heterogeneity were limitations of this meta-analysis. CONCLUSIONS: This study sheds light on the accuracy of ChatGPT models in four national health licensing examinations across various countries and provides a practical basis and theoretical support for future research. Further studies are needed to explore their utilization in medical and health education by including a broader and more diverse range of questions, along with more advanced versions of AI chatbots.
Jin HK; Lee HE; Kim E
21
39526823
Evaluating the performance and clinical decision-making impact of ChatGPT-4 in reproductive medicine.
2,025
International journal of gynaecology and obstetrics: the official organ of the International Federation of Gynaecology and Obstetrics
BACKGROUND: ChatGPT, a sophisticated language model developed by OpenAI, has the potential to offer professional and patient-friendly support. We aimed to assess the accuracy and reproducibility of ChatGPT-4 in answering questions related to knowledge, management, and support within the field of reproductive medicine. METHODS: ChatGPT-4 was used to respond to queries sourced from a domestic attending physician examination database, as well as to address both local and international treatment guidelines within the field of reproductive medicine. Each response generated by ChatGPT-4 was independently evaluated by a trio of experts specializing in reproductive medicine. The experts used four qualitative measures-relevance, accuracy, completeness, and understandability-to assess each response. RESULTS: We found that ChatGPT-4 demonstrated extensive knowledge in reproductive medicine, with median scores for relevance, accuracy, completeness, and comprehensibility of objective questions being 4, 3.5, 3, and 3, respectively. However, the composite accuracy rate for multiple-choice questions was 63.38%. Significant discrepancies were observed among the three experts' scores across all four measures. Expert 1 generally provided higher and more consistent scores, while Expert 3 awarded lower scores for accuracy. ChatGPT-4's responses to both domestic and international guidelines showed varying levels of understanding, with a lack of knowledge on regional guideline variations. However, it offered practical and multifaceted advice regarding next steps and adjusting to new guidelines. CONCLUSIONS: We analyzed the strengths and limitations of ChatGPT-4's responses on the management of reproductive medicine and relevant support. ChatGPT-4 might serve as a supplementary informational tool for patients and physicians to improve outcomes in the field of reproductive medicine.
Chen R; Zeng D; Li Y; Huang R; Sun D; Li T
0-1
38335017
Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.
2,024
JMIR medical education
BACKGROUND: ChatGPT, an artificial intelligence (AI) based on large-scale language models, has sparked interest in the field of health care. Nonetheless, the capabilities of AI in text comprehension and generation are constrained by the quality and volume of available training data for a specific language, and the performance of AI across different languages requires further investigation. While AI harbors substantial potential in medicine, it is imperative to tackle challenges such as the formulation of clinical care standards; facilitating cultural transitions in medical education and practice; and managing ethical issues including data privacy, consent, and bias. OBJECTIVE: The study aimed to evaluate ChatGPT's performance in processing Chinese Postgraduate Examination for Clinical Medicine questions, assess its clinical reasoning ability, investigate potential limitations with the Chinese language, and explore its potential as a valuable tool for medical professionals in the Chinese context. METHODS: A data set of Chinese Postgraduate Examination for Clinical Medicine questions was used to assess the effectiveness of ChatGPT's (version 3.5) medical knowledge in the Chinese language, which has a data set of 165 medical questions that were divided into three categories: (1) common questions (n=90) assessing basic medical knowledge, (2) case analysis questions (n=45) focusing on clinical decision-making through patient case evaluations, and (3) multichoice questions (n=30) requiring the selection of multiple correct answers. First of all, we assessed whether ChatGPT could meet the stringent cutoff score defined by the government agency, which requires a performance within the top 20% of candidates. Additionally, in our evaluation of ChatGPT's performance on both original and encoded medical questions, 3 primary indicators were used: accuracy, concordance (which validates the answer), and the frequency of insights. RESULTS: Our evaluation revealed that ChatGPT scored 153.5 out of 300 for original questions in Chinese, which signifies the minimum score set to ensure that at least 20% more candidates pass than the enrollment quota. However, ChatGPT had low accuracy in answering open-ended medical questions, with only 31.5% total accuracy. The accuracy for common questions, multichoice questions, and case analysis questions was 42%, 37%, and 17%, respectively. ChatGPT achieved a 90% concordance across all questions. Among correct responses, the concordance was 100%, significantly exceeding that of incorrect responses (n=57, 50%; P<.001). ChatGPT provided innovative insights for 80% (n=132) of all questions, with an average of 2.95 insights per accurate response. CONCLUSIONS: Although ChatGPT surpassed the passing threshold for the Chinese Postgraduate Examination for Clinical Medicine, its performance in answering open-ended medical questions was suboptimal. Nonetheless, ChatGPT exhibited high internal concordance and the ability to generate multiple insights in the Chinese language. Future research should investigate the language-based discrepancies in ChatGPT's performance within the health care context.
Yu P; Fang C; Liu X; Fu W; Ling J; Yan Z; Jiang Y; Cao Z; Wu M; Chen Z; Zhu W; Zhang Y; Abudukeremu A; Wang Y; Liu X; Wang J
21
37853081
Can ChatGPT be the Plastic Surgeon's New Digital Assistant? A Bibliometric Analysis and Scoping Review of ChatGPT in Plastic Surgery Literature.
2,024
Aesthetic plastic surgery
BACKGROUND: ChatGPT, an artificial intelligence (AI) chatbot that uses natural language processing (NLP) to interact in a humanlike manner, has made significant contributions to various healthcare fields, including plastic surgery. However, its widespread use has raised ethical and security concerns. This study examines the presence of ChatGPT, an artificial intelligence (AI) chatbot, in the literature of plastic surgery. METHODS: A bibliometric analysis and scoping review of the ChatGPT plastic surgery literature were performed. PubMed was queried using the search term "ChatGPT" to identify all biomedical literature on ChatGPT, with only studies related to plastic, reconstructive, or aesthetic surgery topics being considered eligible for inclusion. RESULTS: The analysis included 30 out of 724 articles retrieved from PubMed, focusing on publications from December 2022 to July 2023. Four key areas of research emerged: applications in research/creation of original work, clinical application, surgical education, and ethics/commentary on previous studies. The versatility of ChatGPT in research, its potential in surgical education, and its role in enhancing patient education were explored. Ethical concerns regarding patient privacy, plagiarism, and the accuracy of information obtained from ChatGPT-generated sources were also highlighted. CONCLUSION: While ethical concerns persist, the study underscores the potential of ChatGPT in plastic surgery research and practice, emphasizing the need for careful utilization and collaboration to optimize its benefits while minimizing risks. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Liu HY; Alessandri-Bonetti M; Arellano JA; Egro FM
32