Dataset columns (value ranges as reported by the dataset viewer):
pmid: string, 8 characters
title: string, 3-289 characters
year: int64, 2.02k-2.03k
journal: string, 3-221 characters
doi: string, 1 distinct value
mesh: string, 1 distinct value
keywords: string, 1 distinct value
abstract: string, 115-3.67k characters
authors: string, 3-798 characters
cluster: class label, 5 classes
38534904
ChatGPT's Response Consistency: A Study on Repeated Queries of Medical Examination Questions.
2024
European journal of investigation in health, psychology and education
(1) Background: As the field of artificial intelligence (AI) evolves, tools like ChatGPT are increasingly integrated into various domains of medicine, including medical education and research. Given the critical nature of medicine, it is of paramount importance that AI tools offer a high degree of reliability in the information they provide. (2) Methods: A total of n = 450 medical examination questions were manually entered into ChatGPT three times each, for both ChatGPT 3.5 and ChatGPT 4. The responses were collected, and their accuracy and consistency were statistically analyzed across the series of entries. (3) Results: ChatGPT 4 displayed significantly higher accuracy, at 85.7% compared with 57.7% for ChatGPT 3.5 (p < 0.001). Furthermore, ChatGPT 4 was more consistent, correctly answering 77.8% of questions across all rounds, a significant increase from the 44.9% observed for ChatGPT 3.5 (p < 0.001). (4) Conclusions: The findings underscore the increased accuracy and dependability of ChatGPT 4 in the context of medical education and potential clinical decision making. Nonetheless, the research emphasizes the indispensable nature of human-delivered healthcare and the vital role of continuous assessment in leveraging AI in medicine.
Funk PF; Hoch CC; Knoedler S; Knoedler L; Cotofana S; Sofo G; Bashiri Dezfouli A; Wollenberg B; Guntinas-Lichius O; Alfertshofer M
21
39310381
Preliminary discrimination and evaluation of clinical application value of ChatGPT4o in bone tumors.
2024
Journal of bone oncology
*Evaluation of ChatGPT-4o in the preliminary pathological diagnosis of bone tumors. *ChatGPT-4o's proficiency in analyzing pathological images and providing initial diagnoses of bone tumor characteristics is comparable to that of senior pathologists in the tertiary hospital doctors group, with both surpassing the remote grassroots doctors group. *AI tools such as ChatGPT-4o have the potential to enhance diagnostic capabilities for remote grassroots doctors and to improve sensitivity, reducing missed-diagnosis rates among tertiary hospital doctors identifying bone tumors.
Huang L; Hu J; Cai Q; Ye A; Chen Y; Yang Xiao-Zhi Z; Liu Y; Zheng J; Meng Z
0-1
39465720
Enhancing English abstract quality for non-English speaking authors using ChatGPT: A comparative study of Taiwan, Japan, China, and South Korea with slope graphs.
2024
Medicine
A clear and proficient English abstract is crucial for disseminating research findings to a global audience, significantly impacting the accessibility and visibility of research from non-English speaking countries. Despite the adoption of ChatGPT since its release on November 30, 2022, a comprehensive analysis of improvements in English abstracts in scholarly journals has not been conducted. This study aims to identify which authors from Taiwan, Japan, China, and South Korea (TJCS) have shown the most improvement in English abstracts. Article abstracts published in Medicine (Baltimore), sourced from the Web of Science Core Collection from 2020 to 2023, were downloaded. A mixed-methods approach was employed, combining quantitative analysis of linguistic quality indicators and qualitative assessments of coherence and engagement using the Rasch model. Ten quality indicators were determined by prompting ChatGPT. Two scenarios were analyzed: (1) generative pretrained transformer (GPT) versus non-GPT (each with 30 abstracts from 2021) and (2) TJCS in comparison (each with 100 abstracts from 2021 and 2023, respectively). Standardized mean differences were compared using a paired samples t test. Visuals including forest plots, the Rasch Wright Map, slope graphs, and a scatter plot with 95% control lines were used to examine the 2 scenarios. (1) No significant difference was found between GPT and non-GPT abstracts, with Rasch logit scores of 3.31 and 3.17, respectively (P = .42), likely due to the small sample size (n = 30); (2) significant differences exist between 2020 and 2023 in each country, and between South Korea and Taiwan in 2020. Among TJCS, Taiwan showed the greatest improvement in English abstract quality post-ChatGPT implementation, followed by Japan, China, and South Korea. The English abstracts in Medicine (Baltimore) have improved, reflecting the tool's positive impact on enhancing technical language.
This study demonstrates that ChatGPT can enhance the quality of English abstracts for authors from non-English speaking regions, although the assumption that all authors use ChatGPT is invalid and impractical. The findings underscore the value of artificial intelligence tools in academic writing and recommend further investigation into the long-term implications of artificial intelligence integration in scholarly communication.
Chou W; Chow JC
10
38383555
Large language models streamline automated machine learning for clinical studies.
2024
Nature communications
A knowledge gap persists between machine learning (ML) developers (e.g., data scientists) and practitioners (e.g., clinicians), hampering the full utilization of ML for clinical data analysis. We investigated the potential of the ChatGPT Advanced Data Analysis (ADA), an extension of GPT-4, to bridge this gap and perform ML analyses efficiently. Real-world clinical datasets and study details from large trials across various medical specialties were presented to ChatGPT ADA without specific guidance. ChatGPT ADA autonomously developed state-of-the-art ML models based on the original study's training data to predict clinical outcomes such as cancer development, cancer progression, disease complications, or biomarkers such as pathogenic gene sequences. Following the re-implementation and optimization of the published models, the head-to-head comparison of the ChatGPT ADA-crafted ML models and their respective manually crafted counterparts revealed no significant differences in traditional performance metrics (p ≥ 0.072). Strikingly, the ChatGPT ADA-crafted ML models often outperformed their counterparts. In conclusion, ChatGPT ADA offers a promising avenue to democratize ML in medicine by simplifying complex data analyses, yet should enhance, not replace, specialized training and resources, to promote broader applications in medical research and practice.
Tayebi Arasteh S; Han T; Lotfinia M; Kuhl C; Kather JN; Truhn D; Nebelung S
10
39978844
Clinically Relevant Family Medicine Research: Board Certification Updates.
2024
Journal of the American Board of Family Medicine : JABFM
A new Patient Psychological Safety Scale (PPSS) has potential to address an often-unrecognized problem. Should HbA1c be used to follow diabetes in patients with concurrent sickle cell disease? Are there significant differences resulting from HbA1c point-of-care versus send-off testing? Which treatment for which type of incontinence? Which factors are more predictive of emotional exhaustion for clinicians versus nonclinician staff? Does your office apply fluoride to young children's teeth? Is testosterone deficiency associated with death in older men? How does ChatGPT impact board certification exams? What is the most effective treatment for vasomotor symptoms associated with menopause?
Bowman MA; Seehusen DA; Britz J; Ledford CJW
0-1
39569401
Can ChatGPT help patients understand radiopharmaceutical extravasations?
2024
Frontiers in nuclear medicine
A previously published paper in the official journal of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) concluded that the artificial intelligence chatbot ChatGPT may offer an adequate substitute for nuclear medicine staff informational counseling to patients in an investigated setting of (18)F-FDG PET/CT. To ensure consistency with the previous paper, the author and a team of experts followed a similar methodology and evaluated whether ChatGPT could adequately offer a substitute for nuclear medicine staff informational counseling to patients regarding radiopharmaceutical extravasations. We asked ChatGPT fifteen questions regarding radiopharmaceutical extravasations. Each question or prompt was queried three times. Using the same evaluation criteria as the previously published paper, the ChatGPT responses were evaluated by two nuclear medicine trained physicians and one nuclear medicine physicist for appropriateness and helpfulness. These evaluators found ChatGPT responses to be either highly appropriate or quite appropriate in 100% of questions and very helpful or quite helpful in 93% of questions. The interobserver agreement among the evaluators, assessed using the Intraclass Correlation Coefficient (ICC), was found to be 0.72, indicating good overall agreement. The evaluators also rated the inconsistency across the three ChatGPT responses for each question and found irrelevant or minor inconsistencies in 87% of questions and some differences relevant to main content in the other 13% of the questions. One physician evaluated the quality of the references listed by ChatGPT as the source material it used in generating its responses. The reference check revealed no AI hallucinations. The evaluator concluded that ChatGPT used fully validated references (appropriate, identifiable, and accessible) to generate responses for eleven of the fifteen questions and used generally available medical and ethical guidelines to generate responses for four questions. 
Based on these results we concluded that ChatGPT may be a reliable resource for patients interested in radiopharmaceutical extravasations. However, these validated and verified ChatGPT responses differed significantly from official positions and public comments regarding radiopharmaceutical extravasations made by the SNMMI and nuclear medicine staff. Since patients are increasingly relying on the internet for information about their medical procedures, the differences need to be addressed.
Alvarez M
43
38352437
Evaluating ChatGPT's Accuracy in Providing Screening Mammography Recommendations among Older Women: Artificial Intelligence and Cancer Communication.
2024
Research square
Objective: The U.S. Preventive Services Task Force (USPSTF) recommends biennial screening mammography through age 74. Guidelines vary as to whether they recommend mammography screening for women aged 75 and older. This study aims to determine the ability of ChatGPT to provide appropriate recommendations for breast cancer screening in patients aged 75 years and older. Methods: 12 questions and 4 clinical vignettes addressing fundamental concepts about breast cancer screening and prevention in patients aged 75 years and older were created and posed to ChatGPT three consecutive times to generate 3 sets of responses. The responses were graded by a multi-disciplinary panel of experts in the intersection of breast cancer screening and aging. The responses were graded as 'appropriate', 'inappropriate', or 'unreliable' based on the reviewer's clinical judgment, the content of the response, and whether the content was consistent across the three responses. Appropriateness was determined through majority consensus. Results: The responses generated by ChatGPT were appropriate for 11/17 questions (64%). Three questions were graded as inappropriate (18%) and 2 questions as unreliable (12%). A consensus was not reached on one question (6%), which was graded as 'no consensus'. Conclusions: While recognizing the limitations of ChatGPT, it has the potential to provide accurate health care information and could be utilized by healthcare professionals to assist in providing recommendations for breast cancer screening in patients aged 75 years and older. Physician oversight will be necessary, given the possibility of ChatGPT providing inappropriate or unreliable responses and the importance of accuracy in medicine.
Braithwaite D; Karanth SD; Divaker J; Schoenborn N; Lin K; Richman I; Hochhegger B; O'Neill S; Schonberg M
0-1
37433676
ChatGPT in Nuclear Medicine Education.
2023
Journal of nuclear medicine technology
Academic integrity has been challenged by artificial intelligence algorithms in teaching institutions, including those providing nuclear medicine training. The GPT 3.5-powered ChatGPT chatbot released in late November 2022 has emerged as an immediate threat to academic and scientific writing. Methods: Both examinations and written assignments for nuclear medicine courses were tested using ChatGPT. Included was a mix of core theory subjects offered in the second and third years of the nuclear medicine science course. Long-answer-style questions (8 subjects) and calculation-style questions (2 subjects) were included for examinations. ChatGPT was also used to produce responses to authentic writing tasks (6 subjects). ChatGPT responses were evaluated by Turnitin plagiarism-detection software for similarity and artificial intelligence scores, scored against standardized rubrics, and compared with the mean performance of student cohorts. Results: ChatGPT powered by GPT 3.5 performed poorly in the 2 calculation examinations (overall, 31.7% compared with 67.3% for students), with particularly poor performance in complex-style questions. ChatGPT failed each of 6 written tasks (overall, 38.9% compared with 67.2% for students), with worsening performance corresponding to increasing writing and research expectations in the third year. In the 8 examinations, ChatGPT performed better than students for general or early subjects but poorly for advanced and specific subjects (overall, 51% compared with 57.4% for students). Conclusion: Although ChatGPT poses a risk to academic integrity, its usefulness as a cheating tool can be constrained by higher-order taxonomies. Unfortunately, the constraints to higher-order learning and skill development also undermine potential applications of ChatGPT for enhancing learning. There are several potential applications of ChatGPT for teaching nuclear medicine students.
Currie G; Barry K
0-1
37225599
Academic integrity and artificial intelligence: is ChatGPT hype, hero or heresy?
2023
Seminars in nuclear medicine
Academic integrity in both higher education and scientific writing has been challenged by developments in artificial intelligence. The limitations associated with algorithms have been largely overcome by the recently released ChatGPT; a chatbot powered by GPT-3.5 capable of producing accurate and human-like responses to questions in real-time. Despite the potential benefits, ChatGPT confronts significant limitations to its usefulness in nuclear medicine and radiology. Most notably, ChatGPT is prone to errors and fabrication of information which poses a risk to professionalism, ethics and integrity. These limitations simultaneously undermine the value of ChatGPT to the user by not producing outcomes at the expected standard. Nonetheless, there are a number of exciting applications of ChatGPT in nuclear medicine across education, clinical and research sectors. Assimilation of ChatGPT into practice requires redefining of norms, and re-engineering of information expectations.
Currie GM
0-1
38248809
Personalized Medicine in Urolithiasis: AI Chatbot-Assisted Dietary Management of Oxalate for Kidney Stone Prevention.
2024
Journal of personalized medicine
Accurate information regarding oxalate levels in foods is essential for managing patients with hyperoxaluria, oxalate nephropathy, or those susceptible to calcium oxalate stones. This study aimed to assess the reliability of chatbots in categorizing foods based on their oxalate content. We assessed the accuracy of ChatGPT-3.5, ChatGPT-4, Bard AI, and Bing Chat in classifying dietary oxalate content per serving into low (<5 mg), moderate (5-8 mg), and high (>8 mg) categories. A total of 539 food items were processed through each chatbot. Accuracy was compared between chatbots and stratified by dietary oxalate content category. Bard AI had the highest accuracy of 84%, followed by Bing (60%), GPT-4 (52%), and GPT-3.5 (49%) (p < 0.001). There was a significant pairwise difference between chatbots, except between GPT-4 and GPT-3.5 (p = 0.30). The accuracy of all the chatbots decreased at higher dietary oxalate content categories, but Bard retained the highest accuracy regardless of category. There was considerable variation in the accuracy of AI chatbots for classifying dietary oxalate content. Bard AI consistently showed the highest accuracy, followed by Bing Chat, GPT-4, and GPT-3.5. These results underline the potential of AI in dietary management for at-risk patient groups and the need for enhancements in chatbot algorithms for clinical accuracy.
Aiumtrakul N; Thongprayoon C; Arayangkool C; Vo KB; Wannaphut C; Suppadungsuk S; Krisanapan P; Garcia Valencia OA; Qureshi F; Miao J; Cheungpasitporn W
43
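The three-category scheme described in the record above (low <5 mg, moderate 5-8 mg, high >8 mg of oxalate per serving) and the accuracy comparison can be sketched as follows. The food items, reference values, and chatbot labels here are invented for illustration, not data from the study:

```python
# Categorize oxalate content per serving into the study's three bands
# and score chatbot labels against reference categories.

def oxalate_category(mg_per_serving: float) -> str:
    """Map oxalate mg/serving to low (<5), moderate (5-8), or high (>8)."""
    if mg_per_serving < 5:
        return "low"
    elif mg_per_serving <= 8:
        return "moderate"
    return "high"

def accuracy(reference: list[str], predicted: list[str]) -> float:
    """Fraction of predicted labels matching the reference category."""
    hits = sum(r == p for r, p in zip(reference, predicted))
    return hits / len(reference)

# Hypothetical food items with reference oxalate values (mg/serving).
foods = {"white rice": 1.2, "carrot": 6.5, "spinach": 755.0, "apple": 2.1}
reference = [oxalate_category(mg) for mg in foods.values()]

# Hypothetical chatbot output for the same items (last label is wrong).
chatbot_labels = ["low", "moderate", "high", "moderate"]
print(accuracy(reference, chatbot_labels))  # 3 of 4 correct -> 0.75
```

Stratifying by category, as the study does, amounts to computing this accuracy separately on the subsets of items whose reference label is low, moderate, or high.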
39300965
Developing large language models to detect adverse drug events in posts on x.
2024
Journal of biopharmaceutical statistics
Adverse drug events (ADEs) are one of the major causes of hospital admissions and are associated with increased morbidity and mortality. Post-marketing ADE identification is one of the most important phases of drug safety surveillance. Traditionally, data sources for post-marketing surveillance mainly come from spontaneous reporting systems such as the Food and Drug Administration Adverse Event Reporting System (FAERS). Social media data such as posts on X (formerly Twitter) contain rich patient and medication information and could potentially accelerate drug surveillance research. However, ADE information in social media data is usually locked in the text, making it difficult to employ with traditional statistical approaches. In recent years, large language models (LLMs) have shown promise in many natural language processing tasks. In this study, we developed several LLMs to perform ADE classification on X data. We fine-tuned various LLMs including BERT-base, Bio_ClinicalBERT, RoBERTa, and RoBERTa-large. We also experimented with ChatGPT few-shot prompting and with ChatGPT fine-tuned on the whole training data. We then evaluated model performance based on sensitivity, specificity, negative predictive value, positive predictive value, accuracy, F1-measure, and area under the ROC curve. Our results showed that RoBERTa-large achieved the best F1-measure (0.8) among all models, followed by the fine-tuned ChatGPT model with an F1-measure of 0.75. Our feature importance analysis, based on 1200 random samples and RoBERTa-large, showed the most important features to be "withdrawals"/"withdrawal", "dry", "dealing", "mouth", and "paralysis". The good model performance and clinically relevant features show the potential of LLMs in augmenting ADE detection for post-marketing drug safety surveillance.
Deng Y; Xing Y; Quach J; Chen X; Wu X; Zhang Y; Moureaud C; Yu M; Zhao Y; Wang L; Zhong S
10
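The evaluation metrics listed in the record above (sensitivity, specificity, PPV, NPV, accuracy, F1) all derive from a binary confusion matrix. A minimal sketch with made-up counts, not the study's data:

```python
# Binary-classification metrics from raw confusion-matrix counts,
# as used to evaluate ADE classifiers on labeled posts.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    sensitivity = tp / (tp + fn)              # recall on ADE-positive posts
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                      # positive predictive value (precision)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy, "f1": f1}

# Made-up counts for illustration only.
m = classification_metrics(tp=80, fp=20, tn=90, fn=10)
print(round(m["f1"], 3))  # 0.842
```

F1 is the harmonic mean of precision (PPV) and sensitivity, so it penalizes models that trade one for the other; the area under the ROC curve, also reported in the study, additionally requires ranked scores rather than hard labels.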
37256297
The Artificial Intelligence application in Aesthetic Medicine: How ChatGPT can Revolutionize the Aesthetic World.
2023
Aesthetic plastic surgery
Aesthetic medicine is witnessing a growing importance of ChatGPT and artificial intelligence (AI) technologies, as highlighted by the pioneering work of Xie et al. in their article, "Aesthetic Surgery Advice and Counseling from Artificial Intelligence: A Rhinoplasty Consultation with ChatGPT." These advancements promise to revolutionize patient consultations, treatment planning, and follow-up care. AI-driven chatbots, such as ChatGPT, can enhance patient consultations by providing accurate and reliable information on aesthetic procedures, their risks, benefits, and potential outcomes, enabling well-informed decisions and improved treatment outcomes. Furthermore, AI can personalize treatment plans by analyzing patient data, leading to increased precision and satisfaction. AI-powered platforms can also streamline patient follow-up and monitoring, improving patient outcomes and resource utilization, while serving as a valuable educational tool for clinicians. Despite these benefits, AI integration in aesthetic medicine raises concerns about data privacy, security, and potential biases in AI algorithms. To address these challenges, the aesthetic medicine community must establish ethical guidelines, adopt stringent security protocols, and ensure diverse and representative datasets for AI training. Additionally, maintaining the personal connection between patients and providers is crucial for preserving the human touch in patient care. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at https://www.springer.com/00266.
Buzzaccarini G; Degliuomini RS; Borin M
32
39377065
Large language model to multimodal large language model: A journey to shape the biological macromolecules to biological sciences and medicine.
2024
Molecular therapy. Nucleic acids
After ChatGPT was released, large language models (LLMs) became more popular. Academicians use ChatGPT or LLM models for different purposes, and the use of ChatGPT or LLM is increasing from medical science to diversified areas. Recently, the multimodal LLM (MLLM) has also become popular. Therefore, we comprehensively illustrate the LLM and MLLM models for a complete understanding. We also aim for simple and extended reviews of LLMs and MLLMs for a broad category of readers, such as researchers, students in diversified fields, and other academicians. The review article illustrates the LLM and MLLM models, their working principles, and their applications in diversified fields. First, we demonstrate the technical concept of LLMs, working principle, Black Box, and the evolution of LLMs. To explain the working principle, we discuss the tokenization process, token representation, and token relationships. We also extensively demonstrate the application of LLMs in biological macromolecules, medical science, biological science, and other areas. We illustrate the multimodal applications of LLMs or MLLMs. Finally, we illustrate the limitations, challenges, and future prospects of LLMs. The review acts as a booster dose for clinicians, a primer for molecular biologists, and a catalyst for scientists, and also benefits diversified academicians.
Bhattacharya M; Pal S; Chatterjee S; Lee SS; Chakraborty C
10
39663218
Text-to-Video Models and Sora in Plastic Surgery: Pearls, Pitfalls, and Prospectives.
2024
Aesthetic plastic surgery
After the groundbreaking release of the highly acclaimed chatbot ChatGPT, which revolutionized the field of artificial intelligence (AI) last year, OpenAI has once again astounded the world with the unveiling of their latest generative AI model, Sora, on February 16, 2024. This cutting-edge model has the remarkable ability to generate videos up to 60 seconds in duration solely through text instructions. With a series of AI-generated content technologies, such as AI chat, AI drawing, and AI music, emerging one after another, the era of the "AI revolution", with its disruptive impact on modern life, has arrived. Meanwhile, AI has made significant achievements in the medical field, especially in diagnosis based on medical imaging. This article briefly describes the development history of text-to-video models and provides a detailed introduction to the Sora model, including its portrayal of the human face and contour, inspiring its potential applications in plastic surgery. It also provides a prospect for other AI-generated content technologies, such as text-to-holography and text-to-material objects. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Kang Y; Wang S; Zhu L
10
39583462
Assessing the Performance of ChatGPT in Answering Patients' Questions Regarding Congenital Bicuspid Aortic Valve.
2024
Cureus
AIM: Artificial intelligence (AI) models, such as ChatGPT, are widely being used in academia as well as by the common public. In the field of medicine, the information obtained by the professionals as well as by the patients from the AI tools has significant advantages while at the same time posing valid concerns regarding the validity and adequacy of information regarding healthcare delivery and utilization. Therefore, it is important to vet these AI tools through the prism of practicing physicians. METHODS: To demonstrate the immense utility as well as potential concerns of using ChatGPT to gather medical information, a set of questions were posed to the chatbot regarding a hypothetical patient with a congenital bicuspid aortic valve (BAV), and the answers were recorded and reviewed based on three criteria: (i) readability/technicality; (ii) adequacy/completeness; and (iii) accuracy/authenticity. RESULTS: While the ChatGPT provided detailed information about clinical pictures, treatment, and outcomes regarding BAV, the information was generic and brief, and the utility was limited due to a lack of specific information based on an individual patient's clinical status. The authenticity of the information could not be verified due to a lack of citations. Further, human aspects that would normally emerge in nuanced doctor-patient communication were missing in the ChatGPT output. CONCLUSION: Although the performance of AI in medical care is expected to grow, imperfections and ethical concerns may remain a huge challenge in utilizing information from the chatbots alone without adequate communications with health providers, despite having numerous advantages of this technology to society in many walks of human life.
Barua M
43
39239689
Humans-written versus ChatGPT-generated case reports.
2024
The journal of obstetrics and gynaecology research
AIM: Artificial intelligence, especially ChatGPT, has been used in various aspects of medicine; however, whether ChatGPT can be used in case report writing is unknown. This study aimed to provoke discussion and provide a platform for it. METHODS: I wrote a theoretical case report where cyst aspiration cured a twisted ovarian cyst (Manuscript 4). I tasked ChatGPT with generating case reports by inputting information at three different levels: (1) key message and case profile, (2) addition of key introduction information (including known facts and problems to be solved), and (3) further addition of main discussion points. These inputs resulted in the creation of Manuscripts 1-3, which were subjected to analysis. Manuscript 3, generated by ChatGPT with the deepest information input, was compared with Manuscript 4, the human-authored counterpart. RESULTS: With the least information, Manuscript 1 can stand on its own, but its content is superficial. The more detailed data input, the more readable and reasonable the manuscripts become. A human-written manuscript involves personal experience and viewpoints other than obstetrics-gynecology. CONCLUSIONS: Better input produced more reasonable and readable case reports. Human-written paper, compared with ChatGPT-generated one, can involve "human touch." Whether such human touch enriches the case report awaits further discussion. Whether ChatGPT can be used in case report writing, and if it can, to what extent, should be worthy of further study. I encourage every doctor to form their own stance towards ChatGPT use in medical writing.
Matsubara S
10
37967485
Performance of Google bard and ChatGPT in mass casualty incidents triage.
2024
The American journal of emergency medicine
AIM: The objective of our research is to evaluate and compare the performance of ChatGPT, Google Bard, and medical students in performing START triage during mass casualty situations. METHODS: We conducted a cross-sectional analysis comparing ChatGPT, Google Bard, and medical students in mass casualty incident (MCI) triage using the Simple Triage And Rapid Treatment (START) method. A validated questionnaire with 15 diverse MCI scenarios was used to assess triage accuracy and content in four categories: "Walking wounded," "Respiration," "Perfusion," and "Mental Status." Statistical analysis compared the results. RESULTS: Google Bard demonstrated a notably higher accuracy of 60%, while ChatGPT achieved an accuracy of 26.67% (p = 0.002). Comparatively, medical students performed at an accuracy rate of 64.3% in a previous study; however, no significant difference was observed between Google Bard and medical students (p = 0.211). Qualitative content analysis of the "walking wounded," "respiration," "perfusion," and "mental status" categories indicated that Google Bard outperformed ChatGPT. CONCLUSION: Google Bard was found to be superior to ChatGPT in correctly performing mass casualty incident triage, achieving an accuracy of 60% versus 26.67% for ChatGPT, a statistically significant difference (p = 0.002).
Gan RK; Ogbodo JC; Wee YZ; Gan AZ; Gonzalez PA
10
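The four START categories the questionnaire probed (walking wounded, respiration, perfusion, mental status) can be sketched as a decision function. This is a simplified rendering of the standard algorithm for illustration (the perfusion check is reduced here to the radial pulse, omitting capillary refill), not code from the study:

```python
# Simplified START (Simple Triage And Rapid Treatment) decision logic.
# Tags: minor, immediate, delayed, expectant.

def start_triage(can_walk: bool,
                 breathing: bool,
                 resp_rate: int,
                 radial_pulse: bool,
                 obeys_commands: bool) -> str:
    if can_walk:
        return "minor"        # walking wounded
    if not breathing:
        return "expectant"    # no spontaneous respiration after airway opening
    if resp_rate > 30:
        return "immediate"    # respiration check failed
    if not radial_pulse:
        return "immediate"    # perfusion check failed
    if not obeys_commands:
        return "immediate"    # mental status check failed
    return "delayed"

# A non-ambulatory casualty breathing at 22/min with a radial pulse
# who follows commands is tagged "delayed".
print(start_triage(False, True, 22, True, True))  # delayed
```

Scoring a chatbot against such scenarios reduces to comparing its tag for each vignette with the tag this decision path produces.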
38478902
ChatGPT to generate clinical vignettes for teaching and multiple-choice questions for assessment: A randomized controlled experiment.
2025
Medical teacher
AIM: This study aimed to evaluate the real-life performance of clinical vignettes and multiple-choice questions generated by ChatGPT. METHODS: This was a randomized controlled study within an evidence-based medicine training program. We randomly assigned seventy-four medical students to two groups. The ChatGPT group received ill-defined cases generated by ChatGPT, while the control group received human-written cases. At the end of the training, students evaluated the cases by rating 10 statements on a Likert scale. They also answered 15 multiple-choice questions (MCQs) generated by ChatGPT. The case evaluations of the two groups were compared, and psychometric characteristics (item difficulty and point-biserial correlations) of the test were reported. RESULTS: None of the scores on the 10 statements regarding the cases showed a significant difference between the ChatGPT group and the control group (p > .05). In the test, only six MCQs had acceptable levels (higher than 0.30) of point-biserial correlation, and five items could be considered acceptable in classroom settings. CONCLUSIONS: The results showed that the quality of the vignettes is comparable to those created by human authors, and some multiple-choice questions have acceptable psychometric characteristics. ChatGPT has potential for generating clinical vignettes for teaching and MCQs for assessment in medical education.
Coskun O; Kiyak YS; Budakoglu II
21
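The psychometric indices reported in the record above, item difficulty and point-biserial correlation (with 0.30 as the conventional acceptability threshold), are standard item-analysis quantities. A minimal sketch with invented score vectors:

```python
# Classical item analysis for dichotomously scored MCQs:
# item difficulty (proportion correct) and point-biserial correlation
# (Pearson correlation between the 0/1 item score and the total test score).
from statistics import mean, pstdev

def item_difficulty(item_scores: list[int]) -> float:
    """Proportion of examinees answering the item correctly (0/1 scores)."""
    return mean(item_scores)

def point_biserial(item_scores: list[int], total_scores: list[float]) -> float:
    """Discrimination index; values above ~0.30 are conventionally acceptable."""
    mi, mt = mean(item_scores), mean(total_scores)
    cov = mean((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    return cov / (pstdev(item_scores) * pstdev(total_scores))

# Invented data: five examinees' scores on one item (1 = correct)
# and their total test scores.
items = [1, 1, 0, 1, 0]
totals = [14, 12, 6, 10, 5]
print(item_difficulty(items))                   # 0.6
print(round(point_biserial(items, totals), 2))  # 0.93, well above 0.30
```

A high point-biserial means examinees who answer the item correctly also tend to score higher overall, which is what makes an MCQ discriminating.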
38420596
Using generative artificial intelligence in bibliometric analysis: 10 years of research trends from the European Resuscitation Congresses.
2024
Resuscitation plus
AIMS: The aim of this study is to use generative artificial intelligence to perform a bibliometric analysis of abstracts published at the European Resuscitation Council (ERC) annual scientific congress and to define trends in ERC guidelines topics over the last decade. METHODS: In this bibliometric analysis, the WebHarvy software (SysNucleus, India) was used to download data from the Resuscitation journal's website through web scraping. Next, the Chat Generative Pre-trained Transformer 4 (ChatGPT-4) application programming interface (OpenAI, USA) was used to implement multinomial classification of abstract titles following the ERC 2021 guidelines topics. RESULTS: From 2012 to 2022, a total of 2491 abstracts were published at ERC congresses, ranging from 88 (in 2020) to 368 (in 2015) per year. On average, the most common ERC guidelines topic was Adult basic life support (50.1%), followed by Adult advanced life support (41.5%), while Newborn resuscitation and support of transition of infants at birth (2.1%) was the least common topic. The findings also highlight that the Adult basic life support and Adult advanced life support ERC guidelines topics have the strongest co-occurrence with all ERC guidelines topics, whereas the Newborn resuscitation and support of transition of infants at birth topic (2.1%; 52/2491) has the weakest co-occurrence. CONCLUSION: This study demonstrates the capabilities of generative artificial intelligence, using large language models, for the bibliometric analysis of abstract titles, illustrated with resuscitation medicine research presented at ERC conferences over the last decade.
Fijacko N; Creber RM; Abella BS; Kocbek P; Metlicar S; Greif R; Stiglic G
10
39191671
Performance of the ChatGPT large language model for decision support in community pharmacy.
2,024
British journal of clinical pharmacology
AIMS: The aim of this study was to assess the ChatGPT-4 (ChatGPT) large language model (LLM) on tasks relevant to community pharmacy. METHODS: ChatGPT was assessed with community pharmacy-relevant test cases involving drug information retrieval, identification of labelling errors, prescription interpretation, decision-making under uncertainty and multidisciplinary consults. Drug information on rituximab, warfarin and St. John's wort was queried. The decision-support scenarios consisted of one subject with swollen eyelids and another with a maculopapular rash while taking lisinopril and ferrous sulfate. The multidisciplinary scenarios required the integration of medication management with recommendations for healthy eating and physical activity/exercise. RESULTS: The responses from ChatGPT for rituximab, warfarin and St. John's wort were satisfactory and cited drug databases and drug-specific monographs. ChatGPT identified labelling errors related to incorrect medication strength, form, route of administration, unit conversion and directions. For the patient with inflamed eyelids, the course of action developed by ChatGPT was comparable to the pharmacist's approach. For the patient with the maculopapular rash, both the pharmacist and ChatGPT placed a drug reaction to either lisinopril or ferrous sulfate at the top of the differential. ChatGPT provided customized vaccination requirements for travel to Brazil, guidance on the management of drug allergies and advice on recovery from a knee injury. ChatGPT also provided satisfactory medication management and wellness information for a diabetic patient on metformin and semaglutide. CONCLUSIONS: LLMs have the potential to become a powerful tool in community pharmacy. However, rigorous validation studies across diverse pharmacist queries, drug classes and populations, together with engineering to secure patient privacy, will be needed to enhance LLM utility.
Shin E; Hartman M; Ramanathan M
10
40066678
Assessing GPT-4's accuracy in answering clinical pharmacological questions on pain therapy.
2,025
British journal of clinical pharmacology
AIMS: This study aimed to evaluate the accuracy and completeness of GPT-4, a large language model, in answering clinical pharmacological questions related to pain therapy, with a focus on its potential as a tool for delivering patient-facing medical information. The objective was to assess its reliability in delivering medical information in the context of pain management. METHODS: A cross-sectional survey-based study was conducted with healthcare professionals, including physicians and pharmacists. Participants submitted up to 8 clinical pharmacology questions on pain management, focusing on drug interactions, dosages and contraindications. GPT-4's responses were evaluated based on comprehensibility, detail, satisfaction, medical-pharmacological accuracy and completeness. Additionally, responses were compared to the German Drug Directory to assess their accuracy. RESULTS: The majority of participants (99%) found GPT-4's responses comprehensible, while 84% considered the information detailed enough. Overall satisfaction was high, with 93% expressing satisfaction, and 96% deemed the responses medically accurate. However, only 63% rated the information as complete, with some identifying gaps in pharmacokinetics and drug interaction data. Usability was evaluated as good to excellent, with a System Usability Scale score of 83.38 (+/- 10.26). CONCLUSION: GPT-4 demonstrates potential as a tool for delivering medical information, particularly in pain management. However, limitations such as incomplete pharmacological data and the potential for contextual carryover in follow-up questions suggest that further refinement is necessary. Developing specialized artificial intelligence tools that integrate real-time pharmacological databases could improve accuracy and reliability for clinical decision-making.
Stroop A; Stroop T; Zawy Alsofy S; Wegner M; Nakamura M; Stroop R
10
38953544
Evaluation of artificial intelligence-generated drug therapy communication skill competencies in medical education.
2,024
British journal of clinical pharmacology
AIMS: This study compared three artificial intelligence (AI) platforms' potential to identify drug therapy communication competencies expected of a graduating medical doctor. METHODS: We presented three AI platforms, namely, Poe Assistant(c), ChatGPT(c) and Google Bard(c), with structured queries to generate communication skill competencies and case scenarios appropriate for graduating medical doctors. These case scenarios comprised 15 prototypical medical conditions that required drug prescriptions. Two authors independently evaluated the AI-enhanced clinical encounters, which integrated a diverse range of information to create patient-centred care plans. Through a consensus-based approach using a checklist, the communication components generated for each scenario were assessed. The instructions and warnings provided for each case scenario were evaluated by referencing the British National Formulary. RESULTS: AI platforms demonstrated overlap in competency domains generated, albeit with variations in wording. The domains of knowledge (basic and clinical pharmacology, prescribing, communication and drug safety) were unanimously recognized by all platforms. A broad consensus among Poe Assistant(c) and ChatGPT(c) on drug therapy-related communication issues specific to each case scenario was evident. The consensus primarily encompassed salutation, generic drug prescribed, treatment goals and follow-up schedules. Differences were observed in patient instruction clarity, listed side effects, warnings and patient empowerment. Google Bard did not provide guidance on patient communication issues. CONCLUSIONS: AI platforms recognized competencies with variations in how these were stated. Poe Assistant(c) and ChatGPT(c) exhibited alignment of communication issues. However, significant discrepancies were observed in specific skill components, indicating the necessity of human intervention to critically evaluate AI-generated outputs.
Sridharan K; Sequeira RP
10
37626010
Evaluating the performance of ChatGPT in clinical pharmacy: A comparative study of ChatGPT and clinical pharmacists.
2,024
British journal of clinical pharmacology
AIMS: To evaluate the performance of chat generative pretrained transformer (ChatGPT) in key domains of clinical pharmacy practice, including prescription review, patient medication education, adverse drug reaction (ADR) recognition, ADR causality assessment and drug counselling. METHODS: Questions and clinical pharmacist's answers were collected from real clinical cases and clinical pharmacist competency assessment. ChatGPT's responses were generated by inputting the same question into the 'New Chat' box of ChatGPT Mar 23 Version. Five licensed clinical pharmacists independently rated these answers on a scale of 0 (Completely incorrect) to 10 (Completely correct). The mean scores of ChatGPT and clinical pharmacists were compared using a paired 2-tailed Student's t-test. The text content of the answers was also descriptively summarized together. RESULTS: The quantitative results indicated that ChatGPT was excellent in drug counselling (ChatGPT: 8.77 vs. clinical pharmacist: 9.50, P = .0791) and weak in prescription review (5.23 vs. 9.90, P = .0089), patient medication education (6.20 vs. 9.07, P = .0032), ADR recognition (5.07 vs. 9.70, P = .0483) and ADR causality assessment (4.03 vs. 9.73, P = .023). The capabilities and limitations of ChatGPT in clinical pharmacy practice were summarized based on the completeness and accuracy of the answers. ChatGPT revealed robust retrieval, information integration and dialogue capabilities. It lacked medicine-specific datasets as well as the ability for handling advanced reasoning and complex instructions. CONCLUSIONS: While ChatGPT holds promise in clinical pharmacy practice as a supplementary tool, the ability of ChatGPT to handle complex problems needs further improvement and refinement.
Huang X; Estau D; Liu X; Yu Y; Qin J; Li Z
10
37123797
A Case Report on Ground-Level Alternobaric Vertigo Due to Eustachian Tube Dysfunction With the Assistance of Conversational Generative Pre-trained Transformer (ChatGPT).
2,023
Cureus
Alternobaric vertigo (ABV) develops when the middle ear pressure (MEP) is unequal between the two ears at the same depth or altitude, which can occur when the altitude changes. Eustachian tube dysfunction (ETD) is a common cause of ABV. In this case report, we discuss a patient who experienced repeated bouts of ground-level alternobaric vertigo (GLABV) due to ETD. We also discuss how the Conversational Generative Pre-trained Transformer (ChatGPT) was used in the creation of this case report. A 41-year-old male patient complained of vertigo at ground level on several occasions. His medical history included chronic sinusitis, nasal congestion, and laryngopharyngeal reflux (LPR). On physical examination, his tympanic membranes were dull and showed reduced mobility. Tympanometry showed an asymmetric type A pattern with negative pressure in both middle ears. Audiometry was normal, and laryngoscopy revealed LPR. The patient was diagnosed with GLABV due to ETD, and different treatment options, such as Eustachian tube catheterization (ETC), were considered. This case study demonstrates how ChatGPT can be used to assist with medical documentation and the treatment of GLABV caused by ETD. Although ChatGPT did not provide specific diagnostic or treatment recommendations for the patient's condition, it assisted the doctor in determining the diagnosis and treatment plan while writing the case report. The use of artificial intelligence (AI) tools such as ChatGPT has the potential to improve the accuracy and speed of medical documentation, thereby streamlining clinical workflows and improving patient care.
Nonetheless, it is critical to consider the ethical implications of using AI in clinical practice. This case study emphasizes that ETD is a common cause of GLABV and shows how ChatGPT can aid in the diagnosis and treatment of this condition. More research is needed to fully understand how reliable long-term AI interventions in medicine are.
Kim HY
32
37179277
Artificial Intelligence in Intensive Care Medicine: Toward a ChatGPT/GPT-4 Way?
2,023
Annals of biomedical engineering
Although intensive care medicine (ICM) is a relatively young discipline, it has rapidly developed into a full-fledged and highly specialized specialty covering several fields of medicine. The COVID-19 pandemic led to a surge in intensive care unit demand and also brought unprecedented development opportunities for this area. Multiple new technologies, such as artificial intelligence (AI) and machine learning (ML), are gradually being applied in this field. In this study, through an online survey, we summarize the potential uses of ChatGPT/GPT-4 in ICM, ranging from knowledge augmentation, device management, clinical decision-making support, and early warning systems to the establishment of an intensive care unit (ICU) database.
Lu Y; Wu H; Qi S; Cheng K
10
38056130
Assessing the potential of ChatGPT for psychodynamic formulations in psychiatry: An exploratory study.
2,024
Psychiatry research
Although there have been several attempts to apply ChatGPT (Generative Pre-Trained Transformer) to medicine, little is known about its therapeutic applications in psychiatry. In this exploratory study, we aimed to evaluate the characteristics and appropriateness of the psychodynamic formulations created by ChatGPT. Along with a case selected from the psychoanalytic literature, input prompts were designed to include different levels of background knowledge. These included naive prompts, keywords created by ChatGPT, keywords created by psychiatrists, and psychodynamic concepts from the literature. The psychodynamic formulations generated from the different prompts were evaluated by five psychiatrists from different institutions. We next conducted further tests in which instructions on the use of different psychodynamic models were added to the input prompts. The models used were ego psychology, self-psychology, and object relations. The results from naive prompts and psychodynamic concepts were rated as appropriate by most raters. The psychodynamic concept prompt output was rated the highest. Interrater agreement was statistically significant. The results from the tests using instructions in different psychoanalytic theories were also rated as appropriate by most raters. They included key elements of the psychodynamic formulation and suggested interpretations similar to the literature. These findings suggest the potential of ChatGPT for use in psychiatry.
Hwang G; Lee DY; Seol S; Jung J; Choi Y; Her ES; An MH; Park RW
0-1
39011990
Large Language Model-Based Natural Language Encoding Could Be All You Need for Drug Biomedical Association Prediction.
2,024
Analytical chemistry
Analyzing drug-related interactions in the field of biomedicine has been a critical aspect of drug discovery and development. While various artificial intelligence (AI)-based tools have been proposed to analyze drug biomedical associations (DBAs), their feature encoding did not adequately account for crucial biomedical functions and semantic concepts, thereby still hindering their progress. Since the advent of ChatGPT by OpenAI in 2022, large language models (LLMs) have demonstrated rapid growth and significant success across various applications. Herein, LEDAP was introduced, which uniquely leveraged LLM-based biotext feature encoding for predicting drug-disease associations, drug-drug interactions, and drug-side effect associations. Benefiting from the large-scale knowledgebase pre-training, LLMs had great potential in drug development analysis owing to their holistic understanding of natural language and human topics. LEDAP illustrated its notable competitiveness in comparison with other popular DBA analysis tools. Specifically, even in simple conjunction with classical machine learning methods, LLM-based feature representations consistently enabled satisfactory performance across diverse DBA tasks like binary classification, multiclass classification, and regression. Our findings underpinned the considerable potential of LLMs in drug development research, indicating a catalyst for further progress in related fields.
Zhang H; Zhou Y; Zhang Z; Sun H; Pan Z; Mou M; Zhang W; Ye Q; Hou T; Li H; Hsieh CY; Zhu F
10
37568980
The Utility of Language Models in Cardiology: A Narrative Review of the Benefits and Concerns of ChatGPT-4.
2,023
International journal of environmental research and public health
Artificial intelligence (AI) and language models such as ChatGPT-4 (Generative Pretrained Transformer) have made tremendous advances recently and are rapidly transforming the landscape of medicine. Cardiology is among many of the specialties that utilize AI with the intention of improving patient care. Generative AI, with the use of its advanced machine learning algorithms, has the potential to diagnose heart disease and recommend management options suitable for the patient. This may lead to improved patient outcomes not only by recommending the best treatment plan but also by increasing physician efficiency. Language models could assist physicians with administrative tasks, allowing them to spend more time on patient care. However, there are several concerns with the use of AI and language models in the field of medicine. These technologies may not be the most up-to-date with the latest research and could provide outdated information, which may lead to an adverse event. Secondly, AI tools can be expensive, leading to increased healthcare costs and reduced accessibility to the general population. There is also concern about the loss of the human touch and empathy as AI becomes more mainstream. Healthcare professionals would need to be adequately trained to utilize these tools. While AI and language models have many beneficial traits, all healthcare providers need to be involved and aware of generative AI so as to assure its optimal use and mitigate any potential risks and challenges associated with its implementation. In this review, we discuss the various uses of language models in the field of cardiology.
Gala D; Makaryus AN
10
37601525
Artificial intelligence in medicine and research - the good, the bad, and the ugly.
2,023
Saudi journal of anaesthesia
Artificial intelligence (AI) broadly refers to machines that simulate intelligent human behavior, and research into this field is exponential and worldwide, with global players such as Microsoft battling with Google for supremacy and market share. This paper reviews the "good" aspects of AI in medicine for individuals who embrace the 4P model of medicine (Predictive, Preventive, Personalized, and Participatory) to medical assistants in diagnostics, surgery, and research. The "bad" aspects relate to the potential for errors, culpability, ethics, data loss and data breaches, and so on. The "ugly" aspects are deliberate personal malfeasances and outright scientific misconduct including the ease of plagiarism and fabrication, with particular reference to the novel ChatGPT as well as AI software that can also fabricate graphs and images. The issues pertaining to the potential dangers of creating rogue, super-intelligent AI systems that lead to a technological singularity and the ensuing perceived existential threat to mankind by leading AI researchers are also briefly discussed.
Grech V; Cuschieri S; Eldawlatly AA
10
38477743
The performance of artificial intelligence chatbot large language models to address skeletal biology and bone health queries.
2,024
Journal of bone and mineral research : the official journal of the American Society for Bone and Mineral Research
Artificial intelligence (AI) chatbots utilizing large language models (LLMs) have recently garnered significant interest due to their ability to generate humanlike responses to user inquiries in an interactive dialog format. While these models are being increasingly utilized to obtain medical information by patients, scientific and medical providers, and trainees to address biomedical questions, their performance may vary from field to field. The opportunities and risks these chatbots pose to the widespread understanding of skeletal health and science are unknown. Here we assess the performance of 3 high-profile LLM chatbots, Chat Generative Pre-Trained Transformer (ChatGPT) 4.0, BingAI, and Bard, to address 30 questions in 3 categories: basic and translational skeletal biology, clinical practitioner management of skeletal disorders, and patient queries to assess the accuracy and quality of the responses. Thirty questions in each of these categories were posed, and responses were independently graded for their degree of accuracy by four reviewers. While each of the chatbots was often able to provide relevant information about skeletal disorders, the quality and relevance of these responses varied widely, and ChatGPT 4.0 had the highest overall median score in each of the categories. Each of these chatbots displayed distinct limitations that included inconsistent, incomplete, or irrelevant responses, inappropriate utilization of lay sources in a professional context, a failure to take patient demographics or clinical context into account when providing recommendations, and an inability to consistently identify areas of uncertainty in the relevant literature. Careful consideration of both the opportunities and risks of current AI chatbots is needed to formulate guidelines for best practices for their use as source of information about skeletal health and biology.
Cung M; Sosa B; Yang HS; McDonald MM; Matthews BG; Vlug AG; Imel EA; Wein MN; Stein EM; Greenblatt MB
10
39690824
The use of ChatGPT in the dermatological field: a narrative review.
2,025
Clinical and experimental dermatology
Artificial intelligence (AI) encompasses the development of computer systems capable of tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making and language translation. Over time, numerous applications have emerged, with the integration of AI into medicine marking a significant leap forward in healthcare delivery, diagnosis and treatment. Among medical specialties, dermatology stands at the forefront of AI advancements, leveraging machine learning and deep learning to enhance dermatologists' abilities and improve patient care. ChatGPT is an advanced language model by OpenAI, originally designed for conversations, which has expanded its utility into diverse fields, including healthcare and dermatology. In this context, the aim of this review article was to explore the synergistic relationship between ChatGPT and dermatology, examining how this innovative AI model is reshaping skin health management, its potential applications, preliminary data on its efficiency and accuracy, as well as ethical and legal concerns related to the use of this tool.
Potestio L; Feo F; Martora F; Megna M; Napolitano M; D'Agostino M
0-1
38435218
Ocular Pathology and Genetics: Transformative Role of Artificial Intelligence (AI) in Anterior Segment Diseases.
2,024
Cureus
Artificial intelligence (AI) has become a revolutionary influence in the field of ophthalmology, providing unparalleled capabilities in data analysis and pattern recognition. This narrative review delves into the crucial role that AI plays, particularly in the context of anterior segment diseases with a genetic basis. Corneal dystrophies (CDs) exhibit significant genetic diversity, manifested by irregular substance deposition in the cornea. AI-driven diagnostic tools exhibit promising accuracy in the identification and classification of corneal diseases. Importantly, chat generative pre-trained transformer (ChatGPT)-4.0 shows significant advancement over its predecessor, ChatGPT-3.5. In the realm of glaucoma, AI significantly contributes to precise diagnostics through inventive algorithms and machine learning models, surpassing conventional methods. The incorporation of AI in predicting glaucoma progression and its role in augmenting diagnostic efficiency is readily apparent. Additionally, AI-powered models prove beneficial for early identification and risk assessment in cases of congenital cataracts, characterized by diverse inheritance patterns. Machine learning models achieving exceptional discrimination in identifying congenital cataracts underscore AI's remarkable potential. The review concludes by emphasizing the promising implications of AI in managing anterior segment diseases, spanning from early detection to the tailoring of personalized treatment strategies. These advancements signal a paradigm shift in ophthalmic care, offering optimism for enhanced patient outcomes and more streamlined healthcare delivery.
Venkatapathappa P; Sultana A; K S V; Mansour R; Chikkanarayanappa V; Rangareddy H
0-1
37500980
AI-ChatGPT/GPT-4: An Booster for the Development of Physical Medicine and Rehabilitation in the New Era!
2,024
Annals of biomedical engineering
Artificial intelligence (AI) has been driving the continuous development of the Physical Medicine and Rehabilitation (PM&R) field. The latest release of ChatGPT/GPT-4 has shown us that AI can potentially transform the healthcare industry. In this study, we propose various ways in which ChatGPT/GPT-4 can display its talents in the field of PM&R in the future. ChatGPT/GPT-4 is an essential tool for physiatrists in the new era.
Peng S; Wang D; Liang Y; Xiao W; Zhang Y; Liu L
32
38201418
A Systematic Review and Meta-Analysis of Artificial Intelligence Tools in Medicine and Healthcare: Applications, Considerations, Limitations, Motivation and Challenges.
2,024
Diagnostics (Basel, Switzerland)
Artificial intelligence (AI) has emerged as a transformative force in various sectors, including medicine and healthcare. Large language models like ChatGPT showcase AI's potential by generating human-like text through prompts. ChatGPT's adaptability holds promise for reshaping medical practices, improving patient care, and enhancing interactions among healthcare professionals, patients, and data. In pandemic management, ChatGPT rapidly disseminates vital information. It serves as a virtual assistant in surgical consultations, aids dental practices, simplifies medical education, and aids in disease diagnosis. A total of 82 papers were categorised into eight major areas, which are G1: treatment and medicine, G2: buildings and equipment, G3: parts of the human body and areas of the disease, G4: patients, G5: citizens, G6: cellular imaging, radiology, pulse and medical images, G7: doctors and nurses, and G8: tools, devices and administration. Balancing AI's role with human judgment remains a challenge. A systematic literature review using the PRISMA approach explored AI's transformative potential in healthcare, highlighting ChatGPT's versatile applications, limitations, motivation, and challenges. In conclusion, ChatGPT's diverse medical applications demonstrate its potential for innovation, serving as a valuable resource for students, academics, and researchers in healthcare. Additionally, this study serves as a guide, assisting students, academics, and researchers in the field of medicine and healthcare alike.
Younis HA; Eisa TAE; Nasser M; Sahib TM; Noor AA; Alyasiri OM; Salisu S; Hayder IM; Younis HA
10
38827766
Application of ChatGPT for Orthopedic Surgeries and Patient Care.
2,024
Clinics in orthopedic surgery
Artificial intelligence (AI) has rapidly transformed various aspects of life, and the launch of the chatbot "ChatGPT" by OpenAI in November 2022 has garnered significant attention and user appreciation. ChatGPT utilizes natural language processing based on a "generative pre-trained transformer" (GPT) model, specifically the transformer architecture, to generate human-like responses to a wide range of questions and topics. Equipped with approximately 57 billion words and 175 billion parameters from online data, ChatGPT has potential applications in medicine and orthopedics. One of its key strengths is its personalized, easy-to-understand, and adaptive response, which allows it to learn continuously through user interaction. This article discusses how AI, especially ChatGPT, presents numerous opportunities in orthopedics, ranging from preoperative planning and surgical techniques to patient education and medical support. Although ChatGPT's user-friendly responses and adaptive capabilities are laudable, its limitations, including biased responses and ethical concerns, necessitate its cautious and responsible use. Surgeons and healthcare providers should leverage the strengths of ChatGPT while recognizing its current limitations and verifying critical information through independent research and expert opinions. As AI technology continues to evolve, ChatGPT may become a valuable tool in orthopedic education and patient care, leading to improved outcomes and efficiency in healthcare delivery. The integration of AI into orthopedics offers substantial benefits but requires careful consideration and continuous improvement.
Morya VK; Lee HW; Shahid H; Magar AG; Lee JH; Kim JH; Jun L; Noh KC
32
37375838
The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies.
2,023
Pharmaceuticals (Basel, Switzerland)
Artificial intelligence (AI) has the potential to revolutionize the drug discovery process, offering improved efficiency, accuracy, and speed. However, the successful application of AI is dependent on the availability of high-quality data, the addressing of ethical concerns, and the recognition of the limitations of AI-based approaches. In this article, the benefits, challenges, and drawbacks of AI in this field are reviewed, and possible strategies and approaches for overcoming the present obstacles are proposed. The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods, as well as the potential advantages of AI in pharmaceutical research, are also discussed. Overall, this review highlights the potential of AI in drug discovery and provides insights into the challenges and opportunities for realizing its potential in this field. Note from the human authors: This article was created to test the ability of ChatGPT, a chatbot based on the GPT-3.5 language model, in terms of assisting human authors in writing review articles. The text generated by the AI following our instructions (see Supporting Information) was used as a starting point, and its ability to automatically generate content was evaluated. After conducting a thorough review, the human authors practically rewrote the manuscript, striving to maintain a balance between the original proposal and the scientific criteria. The advantages and limitations of using AI for this purpose are discussed in the last section.
Blanco-Gonzalez A; Cabezon A; Seco-Gonzalez A; Conde-Torres D; Antelo-Riveiro P; Pineiro A; Garcia-Fandino R
0-1
37809168
A Call to Address AI "Hallucinations" and How Healthcare Professionals Can Mitigate Their Risks.
2,023
Cureus
Artificial intelligence (AI) has transformed society in many ways. AI in medicine has the potential to improve medical care and reduce healthcare professional burnout, but we must be cautious of a phenomenon termed "AI hallucinations" and of how this term can lead to the stigmatization of AI systems and of persons who experience hallucinations. We believe the term "AI misinformation" to be more appropriate, as it avoids contributing to stigmatization. Healthcare professionals can play an important role in AI's integration into medicine, especially regarding mental health services, so it is important that we continue to critically evaluate AI systems as they emerge.
Hatem R; Simmons B; Thornton JE
10
39621019
A Survey of Veterinary Student Perceptions on Integrating ChatGPT in Veterinary Education Through AI-Driven Exercises.
2,024
Journal of veterinary medical education
Artificial intelligence (AI) in education is rapidly gaining attention, particularly with tools like ChatGPT, which have the potential to transform learning experiences. However, the application of such tools in veterinary education remains underexplored. This study aimed to design an AI-driven exercise and investigate veterinary students' perceptions regarding the integration of ChatGPT into their education, specifically within the Year 5 Equine Medicine and Surgery course at City University of Hong Kong. Twenty-two veterinary students participated in an AI-driven exercise, where they created multiple-choice questions (MCQs) and evaluated ChatGPT's responses. The exercise was designed to promote active learning and a deeper understanding of complex concepts. The results indicate a generally positive reception, with 72.7% of students finding the exercise moderately to extremely engaging and 77.3% agreeing that it deepened their understanding. Additionally, 68.2% of students reported improvements in their critical thinking skills. Students with prior AI experience exhibited higher engagement levels and perceived the exercise as more effective. The study also found that engagement positively correlated with perceived usefulness, overall satisfaction, and the likelihood of recommending similar AI-driven exercises in other courses. Qualitative feedback underscored the interactive nature of this exercise and its usefulness in helping students understand complex concepts, although some students experienced confusion with AI-generated responses. While acknowledging the limitations of the technology and the small sample size, this study provides valuable insights into the potential benefits and challenges of incorporating AI-driven tools into veterinary education, highlighting the need for carefully considered integration of such tools into the curriculum.
Alonso Sousa S; Flay KJ
10
38721180
Redefining Healthcare With Artificial Intelligence (AI): The Contributions of ChatGPT, Gemini, and Co-pilot.
2,024
Cureus
Artificial Intelligence (AI) in healthcare marks a new era of innovation and efficiency, characterized by the emergence of sophisticated language models such as ChatGPT (OpenAI, San Francisco, CA, USA), Gemini Advanced (Google LLC, Mountain View, CA, USA), and Co-pilot (Microsoft Corp, Redmond, WA, USA). This review explores the transformative impact of these AI technologies on various facets of healthcare, from enhancing patient care and treatment protocols to revolutionizing medical research and tackling intricate health science challenges. ChatGPT, with its advanced natural language processing capabilities, leads the way in providing personalized mental health support and improving chronic condition management. Gemini Advanced extends the boundary of AI in healthcare through data analytics, facilitating early disease detection and supporting medical decision-making. Co-pilot, by integrating seamlessly with healthcare systems, optimizes clinical workflows and encourages a culture of innovation among healthcare professionals. Additionally, the review highlights the significant contributions of AI in accelerating medical research, particularly in genomics and drug discovery, thus paving the path for personalized medicine and more effective treatments. The pivotal role of AI in epidemiology, especially in managing infectious diseases such as COVID-19, is also emphasized, demonstrating its value in enhancing public health strategies. However, the integration of AI technologies in healthcare comes with challenges. Concerns about data privacy, security, and the need for comprehensive cybersecurity measures are discussed, along with the importance of regulatory compliance and transparent consent management to uphold ethical standards and patient autonomy. The review points out the necessity for seamless integration, interoperability, and the maintenance of AI systems' reliability and accuracy to fully leverage AI's potential in advancing healthcare.
Alhur A
10
37466157
The impact of Chat Generative Pre-trained Transformer (ChatGPT) on medical education.
2,023
Postgraduate medical journal
Artificial intelligence (AI) in medicine is developing rapidly. The advent of Chat Generative Pre-trained Transformer (ChatGPT) has taken the world by storm with its potential uses and efficiencies. However, technology leaders, researchers, educators, and policy makers have also sounded the alarm on its potential harms and unintended consequences. AI will increasingly find its way into medicine and is a force of both disruption and innovation. We discuss the potential benefits and limitations of this new league of technology and how medical educators have to develop skills and curricula to best harness this innovative power.
Heng JJY; Teo DB; Tan LF
10
37094759
Artificial Intelligence and new language models in Ophthalmology: Complications of the use of silicone oil in vitreoretinal surgery.
2,023
Archivos de la Sociedad Espanola de Oftalmologia
Artificial intelligence (AI) is an emerging technology that facilitates everyday tasks and automates tasks in various fields such as medicine. However, the emergence of a language model in academia has generated a lot of interest. This paper evaluates the potential of ChatGPT, a language model developed by OpenAI, and DALL-E 2, an image generator, in the writing of scientific articles in ophthalmology. The selected topic is the complications of the use of silicone oil in vitreoretinal surgery. ChatGPT was used to generate an abstract and a structured article, suggestions for a title and bibliographical references. In conclusion, despite the knowledge demonstrated by this tool, the scientific accuracy and reliability on specific topics are insufficient for the automatic generation of scientifically rigorous articles. In addition, scientists should be aware of the possible ethical and legal implications of these tools.
Valentin-Bravo FJ; Mateos-Alvarez E; Usategui-Martin R; Andres-Iglesias C; Pastor-Jimeno JC; Pastor-Idoate S
10
37692629
AI-Powered Chatbots in Medical Education: Potential Applications and Implications.
2,023
Cureus
Artificial intelligence (AI) is anticipated to have a considerable impact on the routine practice of medicine, spanning from medical education to clinical practice across specialties and, ultimately, patient care. With the imminent widespread adoption of AI in medical practice, it is imperative that medical schools adapt to the use of these advanced technologies in their curriculum to produce future healthcare professionals who can seamlessly integrate these tools into practice. Chatbots, AI systems programmed to process and generate human language, are currently being evaluated for various tasks in medical education. This paper explores the potential applications and implications of chatbots in medical education, specifically in learning and research. With their capability to summarize, simplify complex concepts, automate the creation of memory aids, and serve as an interactive tutor and point-of-care medical reference, chatbots have the potential to enhance students' comprehension, retention, and application of medical knowledge in real-time. While the integration of AI-powered chatbots in medical education presents numerous advantages, it is crucial for students to use these tools as assistive tools rather than relying on them entirely. Chatbots should be programmed to reference evidence-based medical resources and produce precise and trustworthy content that adheres to medical science standards, scientific writing guidelines, and ethical considerations.
Ghorashi N; Ismail A; Ghosh P; Sidawy A; Javan R
10
38011011
"You Are Not Alone": The Allure and Limitations of Artificial Intelligence in Serious Illness Communication.
2,024
Journal of palliative medicine
Artificial intelligence (AI) is changing the way clinicians practice medicine, and recent technological advancements have resulted in consumer-facing products that can respond to users with dynamic and nuanced language. Clinicians typically struggle with serious illness communication, such as delivering news about a poor prognosis. Palliative care clinicians receive extensive training in serious illness communication, but there is a paucity of such highly trained specialists. This article explores the allure of employing AI-powered chatbots to assist nonspecialist clinicians with serious illness communication and highlights the ethical and practical drawbacks. While outsourcing communication to new AI chatbot technologies may be inappropriate, there is a role for AI in training clinicians on effective language to use when discussing serious illness with their patients.
Burry N; Nakagawa S; Blinderman CD
0-1
38472350
Artificial Intelligence in Plastic Surgery: Analysis of Applications, Perspectives, and Psychological Impact.
2,025
Aesthetic plastic surgery
Artificial intelligence (AI) is emerging as a promising tool in the field of plastic surgery, offering a wide array of applications that enhance surgical outcomes, patient satisfaction, and overall efficiency. This paper explores the utilization of AI, highlighting its various advantages and potential drawbacks. AI-driven technologies such as computer vision, machine learning algorithms, and robotic assistance facilitate preoperative planning, intraoperative guidance, and postoperative monitoring. These advancements enable precise anatomical measurements, personalized treatment plans, and real-time feedback during surgery, leading to improved accuracy and safety. Furthermore, AI-powered image analysis aids in facial recognition, skin texture assessment, and simulation of surgical outcomes, enabling enhanced patient consultations and predictive modeling. However, the integration of AI in plastic surgery also presents challenges, including ethical concerns, data privacy, algorithm biases, and the need for comprehensive training among healthcare professionals. Additionally, the reliance on AI systems may potentially lead to over-reliance or reduced surgeon autonomy, necessitating careful validation and continuous refinement of these technologies. Despite these challenges, the synergistic collaboration between AI and plastic surgery holds great promise in advancing clinical practice, fostering innovation, and ultimately benefiting patients through optimized esthetic and reconstructive outcomes. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors https://www.springer.com/00266.
Barone M; De Bernardis R; Persichetti P
32
39984315
[Artificial intelligence in healthcare: A survival guide for internists].
2,025
La Revue de medecine interne
Artificial intelligence (AI) is experiencing considerable growth in medicine, driven by the explosion of available biomedical data and the emergence of new algorithmic architectures. Applications are rapidly multiplying, from diagnostic assistance to disease progression prediction, paving the way for more personalized medicine. The recent advent of large language models, such as ChatGPT, has particularly interested the medical community, thanks to their ease of use, but also raised questions about their reliability in medical contexts. This review presents the fundamental concepts of medical AI, specifically distinguishing traditional discriminative approaches from new generative models. We detail the different exploitable data sources and methodological pitfalls to avoid during the development of these tools. Finally, we address the practical and ethical implications of this technological revolution, emphasizing the importance of the medical community's appropriation of these tools.
Barba T; Robert M; Hot A
10
37493985
Artificial intelligence and ChatGPT in Orthopaedics and sports medicine.
2,023
Journal of experimental orthopaedics
Artificial intelligence (AI) is looked upon nowadays as the potential major catalyst for the fourth industrial revolution. In the last decade, AI use in Orthopaedics increased approximately tenfold. Artificial intelligence helps with tracking activities, evaluating diagnostic images, predicting injury risk, and several other uses. Chat Generative Pre-trained Transformer (ChatGPT), which is an AI-chatbot, represents an extremely controversial topic in the academic community. The aim of this review article is to simplify the concept of AI and study the extent of AI use in Orthopaedics and sports medicine literature. Additionally, the article will also evaluate the role of ChatGPT in scientific research and publications. Level of evidence: Level V, letter to review.
Fayed AM; Mansur NSB; de Carvalho KA; Behrens A; D'Hooghe P; de Cesar Netto C
32
37905264
Decoding Applications of Artificial Intelligence in Rheumatology.
2,023
Cureus
Artificial intelligence (AI) is not a newcomer in medicine. It has been employed for image analysis, disease diagnosis, drug discovery, and improving overall patient care. ChatGPT (Chat Generative Pre-trained Transformer, Inc., Delaware) has renewed interest and enthusiasm in artificial intelligence. Algorithms, machine learning, deep learning, and data analysis are some of the complex terminologies often encountered when health professionals try to learn AI. In this article, we try to review the practical applications of artificial intelligence in vernacular language in the fields of medicine and rheumatology in particular. From the standpoint of the everyday physician, we have endeavored to encapsulate the influence of AI on the cutting edge of medical practice and the potential revolutionary shift in the realm of rheumatology.
Chinnadurai S; Mahadevan S; Navaneethakrishnan B; Mamadapur M
10
40353268
Get the Artificial Intelligence (AI) Edge in Obstetrics and Gynaecology.
2,025
Journal of obstetrics and gynaecology of India
Artificial intelligence (AI) is on a fast track of growth, moving from experiment to implementation in the field of medicine. AI should be used ethically and intelligently. The availability of large databases, advances in algorithmic theory, and improvements in computing have led to breakthroughs in AI applications in current medicine. Machine learning (ML), a subset of AI, allows computers to detect patterns across large databases automatically, which can be used to make predictions. Is it a paradigm shift? In the field of Obstetrics and Gynaecology, AI is used in reproductive medicine for diagnosis and treatment with fertility outcomes, cancer treatment, USG-MRI image diagnosis, foetal echocardiography, cardiotocography (CTG), preterm labour prediction, and urogynaecology. ChatGPT can be helpful in medical writing, but there is always a challenge with respect to accuracy and reliability. AI can be used in research and experiments, thereby strengthening evidence-based clinical practice. More research is ongoing on personalized diagnosis, treatment, and remote medical expert team opinion. AI does not replace the medical advice given by the clinician, but that should not deter clinicians from exploring more uses of AI. Despite various challenges and limitations, the integration of AI in the medical field is bound to progress in the correct direction for a better future.
Dalvi S
0-1
38323891
Exploring the Promise and Challenges of Artificial Intelligence in Biomedical Research and Clinical Practice.
2,024
Journal of cardiovascular pharmacology
Artificial intelligence (AI) is poised to revolutionize how science, and biomedical research in particular, are done. With AI, problem-solving and complex tasks using massive data sets can be performed at a much higher rate and dimensionality level compared with humans. With the ability to handle huge data sets and self-learn, AI is already being exploited in drug design, drug repurposing, toxicology, and material identification. AI could also be used in both basic and clinical research in study design, defining outcomes, analyzing data, interpreting findings, and even identifying the most appropriate areas of investigation and funding sources. State-of-the-art AI-based large language models, such as ChatGPT and Perplexity, are positioned to change forever how science is communicated and how scientists interact with one another and their profession, including postpublication appraisal and critique. Like all revolutions, upheaval will follow and not all outcomes can be predicted, necessitating guardrails at the onset, especially to minimize the untoward impact of the many drawbacks of large language models, which include lack of confidentiality, risk of hallucinations, and propagation of mainstream albeit potentially mistaken opinions and perspectives. In this review, we highlight areas of biomedical research that are already being reshaped by AI and how AI is likely to affect it further in the near future. We discuss the potential benefits of AI in biomedical research and address possible risks, some surrounding the creative process, that warrant further reflection.
Altara R; Basson CJ; Biondi-Zoccai G; Booz GW
10
39934059
Generative Artificial Intelligence in Academic Surgery: Ethical Implications and Transformative Potential.
2,025
The Journal of surgical research
Artificial intelligence (AI) is rapidly being used in medicine due to its advanced capabilities in image and video recognition, clinical decision support, surgical education, and administrative task automation. Large language models such as OpenAI's Generative Pretrained Transformer (GPT)-4 and Google's Bard have particularly revolutionized text generation, offering substantial benefits for the academic surgeon, including aiding in manuscript and grant writing. However, integrating AI into academic surgery necessitates addressing ethical concerns such as bias, transparency, and intellectual property. This paper provides guidelines and recommendations based on current literature around the opportunities and ethical challenges of AI in academic surgery. We discuss the underlying mechanisms of large language models, their potential biases, and the importance of responsible usage. Furthermore, we explore the ethical implications of AI in clinical documentation, highlighting improved efficiency and necessary privacy concerns. This review also addresses the critical issue of intellectual property dilemmas posed by AI-generated innovations in university settings. Finally, we propose guidelines for the responsible adoption of AI in academic and clinical environments, stressing the need for transparency, ethical training, and robust governance frameworks to ensure AI enhances, rather than undermines, academic integrity and patient care.
Robinson JR; Stey A; Schneider DF; Kothari AN; Lindeman B; Kaafarani HM; Haines KL
10
38674356
Innovations in Medicine: Exploring ChatGPT's Impact on Rare Disorder Management.
2,024
Genes
Artificial intelligence (AI) is rapidly transforming the field of medicine, announcing a new era of innovation and efficiency. Among AI programs designed for general use, ChatGPT holds a prominent position, using an innovative language model developed by OpenAI. Thanks to the use of deep learning techniques, ChatGPT stands out as an exceptionally viable tool, renowned for generating human-like responses to queries. Various medical specialties, including rheumatology, oncology, psychiatry, internal medicine, and ophthalmology, have been explored for ChatGPT integration, with pilot studies and trials revealing each field's potential benefits and challenges. However, the field of genetics and genetic counseling, as well as that of rare disorders, represents an area suitable for exploration, with its complex datasets and the need for personalized patient care. In this review, we synthesize the wide range of potential applications for ChatGPT in the medical field, highlighting its benefits and limitations. We pay special attention to rare and genetic disorders, aiming to shed light on the future roles of AI-driven chatbots in healthcare. Our goal is to pave the way for a healthcare system that is more knowledgeable, efficient, and centered around patient needs.
Zampatti S; Peconi C; Megalizzi D; Calvino G; Trastulli G; Cascella R; Strafella C; Caltagirone C; Giardina E
0-1
38516933
AI vs academia: Experimental study on AI text detectors' accuracy in behavioral health academic writing.
2,024
Accountability in research
Artificial Intelligence (AI) language models continue to expand in both access and capability. As these models have evolved, the number of academic journals in medicine and healthcare which have explored policies regarding AI-generated text has increased. The implementation of such policies requires accurate AI detection tools. Inaccurate detectors risk unnecessary penalties for human authors and/or may compromise the effective enforcement of guidelines against AI-generated content. Yet, the accuracy of AI text detection tools in identifying human-written versus AI-generated content has been found to vary across published studies. This experimental study used a sample of behavioral health publications and found problematic false positive and false negative rates from both free and paid AI detection tools. The study assessed 100 research articles from 2016-2018 in behavioral health and psychiatry journals and 200 texts produced by AI chatbots (100 by "ChatGPT" and 100 by "Claude"). The free AI detector showed a median of 27.2% for the proportion of academic text identified as AI-generated, while commercial software Originality.AI demonstrated better performance but still had limitations, especially in detecting texts generated by Claude. These error rates raise doubts about relying on AI detectors to enforce strict policies around AI text generation in behavioral health publications.
Popkov AA; Barrett TS
10
38458774
A Conversation with ChatGPT on Contentious Issues in Senescence and Cancer Research.
2,024
Molecular pharmacology
Artificial intelligence (AI) platforms, such as Chat Generative Pretrained Transformer (ChatGPT), have achieved a high degree of popularity within the scientific community due to their utility in providing evidence-based reviews of the literature. However, the accuracy and reliability of the information output and the ability to provide critical analysis of the literature, especially with respect to highly controversial issues, has generally not been evaluated. In this work, we arranged a question/answer session with ChatGPT regarding several unresolved questions in the field of cancer research relating to therapy-induced senescence (TIS), including the topics of senescence reversibility, its connection to tumor dormancy, and the pharmacology of the newly emerging drug class of senolytics. ChatGPT generally provided responses consistent with the available literature, although occasionally overlooking essential components of the current understanding of the role of TIS in cancer biology and treatment. Although ChatGPT, and similar AI platforms, have utility in providing an accurate evidence-based review of the literature, their outputs should still be considered carefully, especially with respect to unresolved issues in tumor biology. SIGNIFICANCE STATEMENT: Artificial Intelligence platforms have provided great utility for researchers to investigate biomedical literature in a prompt manner. However, several issues arise when it comes to certain unresolved biological questions, especially in the cancer field. This work provided a discussion with ChatGPT regarding some of the yet-to-be-fully-elucidated conundrums of the role of therapy-induced senescence in cancer treatment and highlights the strengths and weaknesses in utilizing such platforms for analyzing the scientific literature on this topic.
Elshazly AM; Shahin U; Al Shboul S; Gewirtz DA; Saleh T
10
37528607
ChatGPT: Forensic, legal, and ethical issues.
2,024
Medicine, science, and the law
Artificial intelligence (AI) refers to a group of technologies that enable people to perform a variety of activities, including observing, comprehending, analysing and translating data, among other things. Nowadays, practically every school of thought is interested in AI. One such innovation, a chatbot by the name of ChatGPT (Chat Generative Pre-Trained Transformer), launched by OpenAI recently, has taken the internet by storm. It had one million users within 1 week of its launch. The present communication explores the practicability and versatility of the ChatGPT in forensic examinations and scenarios, and also addresses the ethical and legal issues surrounding its usage. The observations suggest that the said technology, in its current form, has limited relevance in the realm of forensic science and the law. Only human critical thinking, expertise, and practical experience can provide the information and competencies needed in the realms of forensics, research, clinical and legal practices. Thus, the ChatGPT should be used with utmost caution in the disciplines of medicine, forensic science and the law, irrespective of its many positive attributes.
Guleria A; Krishan K; Sharma V; Kanchan T
10
37771867
ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research.
2,023
World journal of methodology
Artificial intelligence (AI) tools, like OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence demonstrates its capability at a medical student level in standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT confronts limitations, including critical thinking tasks and generating false references, necessitating stringent cross-verification. Ensuing concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluations of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration amongst AI developers, researchers, educators, and policymakers is vital. The development of domain-specific tools, guidelines, regulations, and the facilitation of public dialogue must underpin these endeavors to responsibly harness AI's potential.
Jeyaraman M; Ramasubramanian S; Balaji S; Jeyaraman N; Nallakumarasamy A; Sharma S
10
37062612
[Artificial intelligence and internal medicine: The example of hydroxychloroquine according to ChatGPT].
2,023
La Revue de medecine interne
Artificial intelligence (AI) using deep learning is revolutionizing several fields, including medicine, with a wide range of applications. Available since the end of 2022, ChatGPT is a conversational AI or "chatbot", using artificial intelligence to dialogue with its users in all fields. Through the example of hydroxychloroquine (HCQ), we discuss its use for patients, clinicians, or researchers, and discuss its performance and limitations, particularly in relation to algorithmic bias. While AI tools using deep learning do not replace the expertise and experience of a clinician (at least, for the moment), they have the potential to improve or simplify our daily practice.
Nguyen Y; Costedoat-Chalumeau N
0-1
40184826
Artificial Intelligence in Health Care: A Rallying Cry for Critical Clinical Research and Ethical Thinking.
2,025
Clinical oncology (Royal College of Radiologists (Great Britain))
Artificial intelligence (AI) will impact a large proportion of jobs in the short to medium term, especially in the developed countries. The consequences will be felt across many sectors including health care, a critical sector for implementation of AI tools because glitches in algorithms or biases in training datasets may lead to suboptimal treatment that may negatively affect the health of an individual. The stakes are obviously higher in case of potentially life-threatening diseases such as cancer and therapies with a potential for causing severe or even fatal adverse events. Over the last two decades, much of the research on AI in health care has focussed on diagnostic radiology and digital pathology, but a solid body of research is emerging on AI tools in the radiation oncology workflow. Many of these applications are relatively uncontroversial, although there is still a lack of evidence regarding effectiveness rather than efficiency, and-the ultimate bar-evidence of clinical utility. Proponents of AI will argue that these algorithms should be implemented with robust human supervision. One challenge here is the deskilling effect associated with new technologies. We will become increasingly dependent on the AI tools over time, and we will become less capable of assessing the quality of the AI output. Much of this research appears almost old-fashioned in view of the rapid advances in Generative artificial intelligence (GenAI). GenAI can draw from multiple types of data and produce output that is personalised and appears relevant in the given context. Especially the rapid progress in large language models (LLMs) has opened a wide field of potential applications that were out of bounds just a few years ago. One LLM, Generative Pre-trained Transformer 4 (GPT-4), has been made widely accessible to end-users as ChatGPT-4, which passed a rigorous Turing test in a recent study. In this viewpoint, I argue for the necessity of independent academic research to establish evidence-based applications of AI in medicine. Algorithmic medicine is an intervention similar to a new drug or a new medical device. We should be especially concerned about under-represented minorities and rare/atypical clinical cases that may drown in the petabyte-sized training sets. A huge educational push is needed to ensure that the end-users of AI in health care understand the strengths and weaknesses of algorithmic medicine. Finally, we need to address the ethical boundaries for where and when GenAI can replace humans in the relation between patients and healthcare providers.
Bentzen SM
10
38523987
Ethical Concerns About ChatGPT in Healthcare: A Useful Tool or the Tombstone of Original and Reflective Thinking?
2,024
Cureus
Artificial intelligence (AI), the uprising technology of computer science aiming to create digital systems with human behavior and intelligence, seems to have invaded almost every field of modern life. Launched in November 2022, ChatGPT (Chat Generative Pre-trained Transformer) is a textual AI application capable of creating human-like responses characterized by original language and high coherence. Although AI-based language models have demonstrated impressive capabilities in healthcare, ChatGPT has received controversial annotations from the scientific and academic communities. This chatbot already appears to have a massive impact as an educational tool for healthcare professionals and transformative potential for clinical practice and could lead to dramatic changes in scientific research. Nevertheless, rational concerns were raised regarding whether the pre-trained, AI-generated text would be a menace not only for original thinking and new scientific ideas but also for academic and research integrity, as it gets more and more difficult to distinguish its AI origin due to the coherence and fluency of the produced text. This short review aims to summarize the potential applications and the consequential implications of ChatGPT in the three critical pillars of medicine: education, research, and clinical practice. In addition, this paper discusses whether the current use of this chatbot is in compliance with the ethical principles for the safe use of AI in healthcare, as determined by the World Health Organization. Finally, this review highlights the need for an updated ethical framework and the increased vigilance of healthcare stakeholders to harvest the potential benefits and limit the imminent dangers of this new innovative technology.
Kapsali MZ; Livanis E; Tsalikidis C; Oikonomou P; Voultsos P; Tsaroucha A
10
37833847
AI language models in human reproduction research: exploring ChatGPT's potential to assist academic writing.
2,023
Human reproduction (Oxford, England)
Artificial intelligence (AI)-driven language models have the potential to serve as an educational tool, facilitate clinical decision-making, and support research and academic writing. The benefits of their use are yet to be evaluated and concerns have been raised regarding the accuracy, transparency, and ethical implications of using this AI technology in academic publishing. At the moment, Chat Generative Pre-trained Transformer (ChatGPT) is one of the most powerful and widely debated AI language models. Here, we discuss its feasibility to answer scientific questions, identify relevant literature, and assist writing in the field of human reproduction. With consideration of the scarcity of data on this topic, we assessed the feasibility of ChatGPT in academic writing, using data from six meta-analyses published in a leading journal of human reproduction. The text generated by ChatGPT was evaluated and compared to the original text by blinded reviewers. While ChatGPT can produce high-quality text and summarize information efficiently, its current ability to interpret data and answer scientific questions is limited, and it cannot be relied upon for a literature search or accurate source citation due to the potential spread of incomplete or false information. We advocate for open discussions within the reproductive medicine research community to explore the advantages and disadvantages of implementing this AI technology. Researchers and reviewers should be informed about AI language models, and we encourage authors to transparently disclose their use.
Semrl N; Feigl S; Taumberger N; Bracic T; Fluhr H; Blockeel C; Kollmann M
10
38292297
Establishing priorities for implementation of large language models in pathology and laboratory medicine.
2024
Academic pathology
Artificial intelligence and machine learning have numerous applications in pathology and laboratory medicine. The release of ChatGPT prompted speculation regarding the potentially transformative role of large language models (LLMs) in academic pathology, laboratory medicine, and pathology education. Because of the potential to improve LLMs over the upcoming years, pathology and laboratory medicine clinicians are encouraged to embrace this technology, identify pathways by which LLMs may support our missions in education, clinical practice, and research, participate in the refinement of AI modalities, and design user-friendly interfaces that integrate these tools into our most important workflows. Challenges regarding the use of LLMs, which have already received considerable attention in a general sense, are also reviewed herein in the context of the pathology field and are important to consider as LLM applications are identified and operationalized.
Arvisais-Anhalt S; Gonias SL; Murray SG
10
38170274
Progression of an Artificial Intelligence Chatbot (ChatGPT) for Pediatric Cardiology Educational Knowledge Assessment.
2024
Pediatric cardiology
Artificial intelligence chatbots, like ChatGPT, have become powerful tools that are disrupting how humans interact with technology. The potential uses within medicine are vast. In medical education, these chatbots have shown improvements, over a short time span, in generalized medical examinations. We evaluated the overall performance and improvement between ChatGPT 3.5 and 4.0 on a test of pediatric cardiology knowledge. ChatGPT 3.5 and ChatGPT 4.0 were used to answer text-based multiple-choice questions derived from a Pediatric Cardiology Board Review textbook. Each chatbot was given an 88-question test, subcategorized into 11 topics. We excluded questions with modalities other than text (sound clips or images). Statistical analysis was done using an unpaired two-tailed t-test. Of the same 88 questions, ChatGPT 4.0 answered 66% correctly (n = 58/88), which was significantly greater (p < 0.0001) than ChatGPT 3.5, which answered only 38% (33/88). ChatGPT 4.0 also did better on each subspecialty topic compared with ChatGPT 3.5. While acknowledging that ChatGPT does not yet offer subspecialty-level knowledge in pediatric cardiology, its performance in pediatric cardiology educational assessments showed considerable improvement in a short period of time between ChatGPT 3.5 and 4.0.
Gritti MN; AlTurki H; Farid P; Morgan CT
21
39569947
[ARTIFICIAL INTELLIGENCE AND MEDICAL ETHICS].
2024
Harefuah
Artificial intelligence has burst into our lives with great vigor in recent years. We encounter it in all areas of life, as well as in the field of medicine. The article refers to medical ethics in two areas: one field is medicine based on mega-data, and the other is the chatbot, or ChatGPT. These two fields basically operate in three stages: collecting data, building an algorithm, and drawing conclusions and a course of action. During the data collection phase, as doctors we must not forget to preserve the autonomy and medical confidentiality of the patient. Despite all the technology and innovations, in the end the doctor makes decisions in cooperation with the patient, and the discretion over whether to use the diagnosis, treatment, and knowledge remains in the hands of the doctor. In the realm of research, when artificial intelligence is used in reviewing materials and writing articles, caution and a critical eye should be exercised, since the results obtained with artificial intelligence can be doubly misleading.
Karni T
10
37812965
Improving radiology workflow using ChatGPT and artificial intelligence.
2023
Clinical imaging
Artificial intelligence is a branch of computer science that aims to create intelligent machines capable of performing tasks that typically require human intelligence. One of the branches of artificial intelligence is natural language processing, which is dedicated to studying the interaction between computers and human language. ChatGPT is a sophisticated natural language processing tool that can understand and respond to complex questions and commands in natural language. Radiology is a vital aspect of modern medicine that involves the use of imaging technologies to diagnose and treat medical conditions. Artificial intelligence, including ChatGPT, can be integrated into radiology workflows to improve efficiency, accuracy, and patient care. ChatGPT can streamline various radiology workflow steps, including patient registration, scheduling, patient check-in, image acquisition, interpretation, and reporting. While ChatGPT has the potential to transform radiology workflows, there are limitations to the technology that must be addressed, such as the potential for bias in artificial intelligence algorithms and ethical concerns. As technology continues to advance, ChatGPT is likely to become an increasingly important tool in the field of radiology, and in healthcare more broadly.
Mese I; Taslicay CA; Sivrioglu AK
0-1
37864445
Conversation between a clinical biologist and an artificial intelligence on prostate cancer biomarkers: a critical reading.
2023
Annales de biologie clinique
Artificial intelligence is increasingly used in the field of medicine as a diagnostic aid, particularly for image analysis and more generally for data processing. Many artificial intelligence-based tools have been specifically developed for clinical biology, but some more general ones can help to improve the dissemination of medical knowledge. To test whether and to what extent an automated conversation tool could answer questions on a clinical biology topic (i.e., prostate cancer biomarkers), we questioned ChatGPT, an artificial intelligence-powered model dedicated to optimizing language models for dialogue. We then analyzed its responses.
Lamy PJ
10
37148260
Harvesting the Power of Artificial Intelligence for Surgery: Uses, Implications, and Ethical Considerations.
2023
The American surgeon
Artificial intelligence is rapidly advancing, especially with the advent of ChatGPT technology, and its role in the world of medicine is expanding. Within surgery, AI has the capacity to improve efficiency and results in surgical treatments; however, it similarly has the potential to impose harm onto patients and undermine the role of medical providers. Its benefits may include improvements in surgical outcomes, spanning from enhanced pre-operative diagnostic capabilities to more refined intra-operative techniques, and in long-term patient experiences, by identifying and reducing complications. Nevertheless, apprehensions revolve around lay use potentially resulting in inappropriate therapeutic interventions, in addition to safety and ethical risks surrounding the use of patient data. Various strategies for mitigating these harms must be considered, such as patient disclaimers and secondary review policies. While artificial intelligence brings exciting advancements to surgery, its integration must be cautiously monitored.
Kavian JA; Wilkey HL; Patel PA; Boyd CJ
10
38727914
Application of Deep Learning for Studying NMDA Receptors.
2024
Methods in molecular biology (Clifton, N.J.)
Artificial intelligence underwent remarkable advancement in the past decade, revolutionizing our way of thinking and unlocking unprecedented opportunities across various fields, including drug development. The emergence of large pretrained models, such as ChatGPT, has even begun to demonstrate human-level performance in certain tasks. However, the difficulty of deploying and utilizing AI and pretrained models has limited their practical use for nonexperts. To overcome this challenge, here we present three highly accessible online tools based on a large pretrained model for chemistry, the Uni-Mol, for drug development against CNS diseases, including those targeting the NMDA receptor: the blood-brain barrier (BBB) permeability prediction, the quantitative structure-activity relationship (QSAR) analysis system, and a versatile interface of the AI-based molecule generation model named VD-gen. We believe that these resources will effectively bridge the gap between cutting-edge AI technology and NMDAR experts, facilitating rapid and rational drug development.
Deng Z; Gu R; Wen H
0-1
38836893
ChatGPT: A Conceptual Review of Applications and Utility in the Field of Medicine.
2024
Journal of medical systems
Artificial Intelligence, specifically advanced language models such as ChatGPT, have the potential to revolutionize various aspects of healthcare, medical education, and research. In this narrative review, we evaluate the myriad applications of ChatGPT in diverse healthcare domains. We discuss its potential role in clinical decision-making, exploring how it can assist physicians by providing rapid, data-driven insights for diagnosis and treatment. We review the benefits of ChatGPT in personalized patient care, particularly in geriatric care, medication management, weight loss and nutrition, and physical activity guidance. We further delve into its potential to enhance medical research, through the analysis of large datasets, and the development of novel methodologies. In the realm of medical education, we investigate the utility of ChatGPT as an information retrieval tool and personalized learning resource for medical students and professionals. There are numerous promising applications of ChatGPT that will likely induce paradigm shifts in healthcare practice, education, and research. The use of ChatGPT may come with several benefits in areas such as clinical decision making, geriatric care, medication management, weight loss and nutrition, physical fitness, scientific research, and medical education. Nevertheless, it is important to note that issues surrounding ethics, data privacy, transparency, inaccuracy, and inadequacy persist. Prior to widespread use in medicine, it is imperative to objectively evaluate the impact of ChatGPT in a real-world setting using a risk-based approach.
Rao SJ; Isath A; Krishnan P; Tangsrivimol JA; Virk HUH; Wang Z; Glicksberg BS; Krittanawong C
10
37369944
Advancing the Production of Clinical Medical Devices Through ChatGPT.
2024
Annals of biomedical engineering
As a recently popular large language model, Chatbot Generative Pre-trained Transformer (ChatGPT) is highly valued in the field of clinical medicine. Due to the limited understanding of the potential impact of ChatGPT on the manufacturing side of clinical medical devices, we aim to fill this gap through this article. We elucidate the classification of medical devices and explore the positive contributions of ChatGPT in various aspects of medical device design, optimization, and improvement. However, limitations such as the potential for misinterpretation of user intent, lack of personal experience, and the need for human supervision should be taken into consideration. Striking a balance between ChatGPT and human expertise can ensure the safety, quality, and compliance of medical devices. This work contributes to the advancement of ChatGPT in the medical device manufacturing industry and highlights the synergistic relationship between artificial intelligence and human involvement in healthcare.
Li S; Guo Z; Zang X
10
37425598
Radiology Gets Chatty: The ChatGPT Saga Unfolds.
2023
Cureus
As artificial intelligence (AI) continues to evolve and mature, it is increasingly finding applications in the field of healthcare, particularly in specialties like radiology that are data-heavy and image-focused. Large language models (LLMs) such as OpenAI's Generative Pre-trained Transformer-4 (GPT-4) are new in the field of medicine, and there is a paucity of literature regarding the possible utilities of GPT-4 given its novelty. We aim to present an in-depth exploration of the role of GPT-4, an advanced language model, in radiology. When given prompts for generating reports, creating templates, enhancing clinical decision-making, suggesting captivating titles for research articles, and supporting patient communication and education, the GPT-4 model's output can occasionally be quite generic and, at times, factually incorrect, which could lead to errors. The responses were then analyzed in detail regarding their potential utility in the day-to-day radiologist workflow, patient education, and research processes. Further research is required to evaluate LLMs' accuracy and safety in clinical practice and to develop comprehensive guidelines for their implementation.
Grewal H; Dhillon G; Monga V; Sharma P; Buddhavarapu VS; Sidhu G; Kashyap R
10
37790062
A Radiation Oncology Board Exam of ChatGPT.
2023
Cureus
As artificial intelligence (AI) models improve and become widely integrated into healthcare systems, healthcare providers must understand the strengths and limitations of AI tools to realize the full spectrum of potential patient-care benefits. However, most providers have a poor understanding of AI, leading to distrust and poor adoption of this emerging technology. To bridge this divide, this editorial presents a novel view of ChatGPT's current capabilities in the medical field of radiation oncology. By replicating the format of the oral qualification exam required for radiation oncology board certification, we demonstrate ChatGPT's ability to analyze a commonly encountered patient case, make diagnostic decisions, and integrate information to generate treatment recommendations. Through this simulation, we highlight ChatGPT's strengths and limitations in replicating human decision-making in clinical radiation oncology, while providing an accessible resource to educate radiation oncologists on the capabilities of AI chatbots.
Barbour AB; Barbour TA
43
37891532
ChatGPT and mycosis- a new weapon in the knowledge battlefield.
2023
BMC infectious diseases
As is the current trend for physician tools, ChatGPT can sift through massive amounts of information and solve problems through easy-to-understand conversations, ultimately improving efficiency. Mycosis currently faces great challenges, including high fungal burdens, high mortality, a limited choice of antifungal drugs, and increasing drug resistance. To address these challenges, we asked ChatGPT fungal infection scenario-based questions and assessed its appropriateness, consistency, and potential pitfalls. We concluded that ChatGPT can provide compelling responses to most prompts, including diagnosis, recommendations for examination, treatment, and rational drug use. Moreover, we summarized exciting future applications in mycosis, such as clinical work, scientific research, education, and healthcare. However, the largest barriers to implementation are deficits in individual advice, timely literature updates, consistency, accuracy, and data safety. To fully embrace the opportunity, we need to address these barriers and manage the risks. We expect that ChatGPT will become a new weapon in the battlefield of mycosis.
Jin Y; Liu H; Zhao B; Pan W
43
37988149
The Intersection of ChatGPT, Clinical Medicine, and Medical Education.
2023
JMIR medical education
As we progress deeper into the digital age, the robust development and application of advanced artificial intelligence (AI) technology, specifically generative language models like ChatGPT (OpenAI), have potential implications in all sectors including medicine. This viewpoint article aims to present the authors' perspective on the integration of AI models such as ChatGPT in clinical medicine and medical education. The unprecedented capacity of ChatGPT to generate human-like responses, refined through Reinforcement Learning with Human Feedback, could significantly reshape the pedagogical methodologies within medical education. Through a comprehensive review and the authors' personal experiences, this viewpoint article elucidates the pros, cons, and ethical considerations of using ChatGPT within clinical medicine and notably, its implications for medical education. This exploration is crucial in a transformative era where AI could potentially augment human capability in the process of knowledge creation and dissemination, potentially revolutionizing medical education and clinical practice. The importance of maintaining academic integrity and professional standards is highlighted. The relevance of establishing clear guidelines for the responsible and ethical use of AI technologies in clinical medicine and medical education is also emphasized.
Wong RS; Ming LC; Raja Ali RA
10
37273063
ChatGPTs' Journey in Medical Revolution: A Potential Panacea or a Hidden Pathogen?
2023
Annals of biomedical engineering
At the fascinating intersection of artificial intelligence and medicine, ChatGPT morphs into a compact, personal digital physician. With a simple click, it furnishes an abundance of health-related information, initial medical consultations, and a plethora of disease management recommendations. Moreover, it stands at the ready to provide immediate mental health assistance in times of psychological distress. Yet, each innovation carries inherent challenges. As we embrace the conveniences proffered by ChatGPT, it is imperative that we grapple with associated issues such as data privacy, risk of misdiagnosis, complexities in human-machine interaction, and particular situations that elude its understanding. Let's probe further into this intriguing world, brimming with contention and prospects, and observe how ChatGPT traverses the landscape of digital health, uncovering the potential it holds for the future evolution of medical practice.
Yang J
10
39735146
Categorization of Novel Research Ideas Regarding Adolescent Idiopathic Scoliosis Generated by Artificial Intelligence.
2024
Cureus
Background The generation of innovative research ideas is crucial to advancing the field of medicine. As physicians face increasingly demanding clinical schedules, it is important to identify tools that may expedite the research process. Artificial intelligence may offer a promising solution by enabling the efficient generation of novel research ideas. This study aimed to assess the feasibility of using artificial intelligence to build upon existing knowledge by generating innovative research questions. Methods A comparative evaluation study was conducted to assess the ability of AI models to generate novel research questions. The prompt "research ideas for adolescent idiopathic scoliosis" was input into ChatGPT 3.5, Gemini 1.5, Copilot, and Llama 3, yielding between 10 and 14 research questions per model. A keyword-friendly modified version of the AI-generated responses was searched in the PubMed database. Results were limited to manuscripts published in English from the year 2000 to the present. Each response was then cross-referenced against the PubMed search results and assigned an originality score of 0-5, with 0 being the most original and 5 the least, by adding one numerical value for each paper already published on the topic. The mean originality scores were calculated manually by summing the originality scores of all responses from each AI model and dividing that sum by the number of prompts the model generated. The standard deviation of the originality scores for each AI was calculated using the standard deviation (STDEV) function in Google Sheets (Google, Mountain View, California). Each AI was also evaluated on its percent novelty: the percentage of total generated responses that yielded an originality score of 0 when searched in PubMed. Results Each AI produced varying numbers of research prompts that were input into PubMed.
The mean originality scores for ChatGPT, Gemini, Copilot, and Llama were 4.2 +/- 1.9, 4.1 +/- 1.3, 4.0 +/- 1.6, and 3.8 +/- 1.7, respectively. Of ChatGPT's 12 prompts, 16.67% were completely novel (no prior research had been conducted on the topic provided by the AI model); 10.00% of Copilot's 10 prompts and 8.33% of Llama's 12 prompts were completely novel. None of Gemini's 14 responses yielded an originality score of 0. Conclusions Our findings demonstrate that ChatGPT, Llama, and Copilot are capable of generating novel ideas in orthopaedics research. As these models continue to evolve and become even more refined with time, physicians and scientists should consider incorporating them when brainstorming and planning their research studies.
Leonardo CJ; Melcer K; Liu SH; Komatsu DE; Barsi JM
0-1
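The originality metrics described in the scoliosis-ideas abstract above (mean score, a Sheets-style sample standard deviation, and percent novelty as the share of score-0 prompts) reduce to a few lines of arithmetic. A minimal Python sketch, using a hypothetical score list rather than the study's raw data:

```python
from statistics import mean, stdev

def summarize_originality(scores):
    """Summarize originality scores for AI-generated research prompts.

    Scores follow the abstract's 0-5 scale: 0 = completely novel (no prior
    publications found), 5 = not original at all. Returns the mean score,
    the sample standard deviation (matching Google Sheets' STDEV), and
    percent novelty (the share of prompts scoring 0).
    """
    pct_novel = 100 * sum(1 for s in scores if s == 0) / len(scores)
    return mean(scores), stdev(scores), pct_novel

# Hypothetical 12-prompt score list (2 of 12 fully novel), not the study's data
example = [0, 0, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5]
m, sd, novel = summarize_originality(example)
print(f"mean {m:.2f} +/- {sd:.2f}, {novel:.2f}% novel")
```

Note that `statistics.stdev` is the sample (n-1) standard deviation, which is what the Sheets STDEV function mentioned in the abstract computes; `statistics.pstdev` would give the population version instead.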
39081915
Comparing patient education tools for chronic pain medications: Artificial intelligence chatbot versus traditional patient information leaflets.
2024
Indian journal of anaesthesia
BACKGROUND AND AIMS: Artificial intelligence (AI) chatbots like Chat Generative Pre-trained Transformer (ChatGPT) have recently created much buzz, especially regarding patient education. Informed patients better understand and adhere to management and get involved in shared decision-making. The accuracy and understandability of the generated educational material are prime concerns. Thus, we compared ChatGPT with traditional patient information leaflets (PILs) about chronic pain medications. METHODS: Patients' frequently asked questions were generated from PILs available on the official websites of the British Pain Society (BPS) and the Faculty of Pain Medicine. Eight blinded annexures were prepared for evaluation, consisting of traditional PILs from the BPS and AI-generated patient information materials structured similarly to PILs by ChatGPT. The authors performed a comparative analysis to assess the materials' readability, emotional tone, accuracy, actionability, and understandability. Readability was measured using Flesch Reading Ease (FRE), Gunning Fog Index (GFI), and Flesch-Kincaid Grade Level (FKGL). Sentiment analysis determined emotional tone. An expert panel evaluated accuracy and completeness. Actionability and understandability were assessed with the Patient Education Materials Assessment Tool. RESULTS: Traditional PILs generally exhibited higher readability (P values < 0.05), with [mean (standard deviation)] FRE [62.25 (1.6) versus 48 (3.7)], GFI [11.85 (0.9) versus 13.65 (0.7)], and FKGL [8.33 (0.5) versus 10.23 (0.5)], but varied emotional tones, often negative, compared to the more positive sentiment of ChatGPT-generated texts. Accuracy and completeness did not significantly differ between the two. Actionability and understandability scores were comparable. CONCLUSION: While AI chatbots offer efficient information delivery, ensuring accuracy, readability, and patient-centeredness remains crucial.
It is imperative to balance innovation with evidence-based practice.
Gondode P; Duggal S; Garg N; Sethupathy S; Asai O; Lohakare P
0-1
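The readability indices named in the abstract above (FRE, GFI, FKGL) follow standard published formulas. A minimal Python sketch, with the word, sentence, and syllable counts supplied by the caller (automatic syllable counting is a separate problem, handled in practice by libraries such as textstat):

```python
def flesch_reading_ease(words, sentences, syllables):
    # Higher score = easier text; roughly 60-70 corresponds to "plain English"
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Approximate U.S. school grade level needed to understand the text
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gunning_fog(words, sentences, complex_words):
    # "Complex" words = words of three or more syllables
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# Illustrative counts: 100 words, 5 sentences, 150 syllables, 12 complex words
fre = flesch_reading_ease(100, 5, 150)
fkgl = flesch_kincaid_grade(100, 5, 150)
gfi = gunning_fog(100, 5, 12)
```

Under these illustrative counts, FRE is about 59.6, FKGL about 9.9, and GFI 12.8, consistent with the abstract's pattern that a higher FRE but lower GFI/FKGL indicates easier text.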
37385548
Harnessing language models for streamlined postcolonoscopy patient management: a novel approach.
2023
Gastrointestinal endoscopy
BACKGROUND AND AIMS: ChatGPT, an advanced language model, is increasingly used in diverse fields, including medicine. This study explores using ChatGPT to optimize postcolonoscopy management by providing guideline-based recommendations and addressing low compliance rates and timing issues. METHODS: In this proof-of-concept study, 20 clinical scenarios were prepared as structured reports and free-text notes, and ChatGPT's responses were evaluated by 2 senior gastroenterologists. Compliance with guidelines and accuracy were assessed, and inter-rater agreement was calculated using Fleiss' kappa coefficient. RESULTS: ChatGPT exhibited 90% compliance with guidelines and 85% accuracy, with a very good inter-rater agreement (Fleiss' kappa coefficient of .84, P < .01). ChatGPT handled multiple variations and descriptions and crafted concise patient letters. CONCLUSIONS: Results suggest that ChatGPT could aid healthcare providers in making informed decisions and improve compliance with postcolonoscopy surveillance guidelines. Future research should investigate integrating ChatGPT into electronic health record systems and evaluating its effectiveness in different healthcare settings and populations.
Gorelik Y; Ghersin I; Maza I; Klein A
0-1
39866721
Can Artificial Intelligence Create an Accurate Colonoscopy Bowel Preparation Prompt?
2025
Gastro hep advances
BACKGROUND AND AIMS: Colorectal cancer is the third most common cancer in the United States, with colonoscopy being the preferred screening method. Up to 25% of colonoscopies are associated with poor preparation, which leads to prolonged procedure time, repeat colonoscopies, and decreased adenoma detection. Artificial intelligence (AI) is increasingly used in medicine, assessing medical school exam questions and writing medical reports. In gastroenterology, it has been used to educate patients with cirrhosis and hepatocellular carcinoma, answer patient questions about colonoscopy, and provide correct colonoscopy screening intervals, giving it the ability to augment the patient-provider relationship. This study aims to assess the accuracy of a ChatGPT-generated precolonoscopy bowel preparation prompt. METHODS: A nonrandomized cross-sectional study assessing perceptions of an AI-generated colonoscopy preparation prompt was conducted in a large multisite quaternary health-care institution in the northeast United States. All practicing gastroenterologists in the health system were surveyed; 208 had a valid email address and were included in the study. A Research Electronic Data Capture survey was then distributed to all participants and analyzed using descriptive statistics. RESULTS: Overall, 91% of gastroenterologists found the prompt easy to understand, 95% thought the prompt was scientifically accurate, and 66% were comfortable giving the prompt to their patients. Sixty-four percent of reviewers correctly identified the ChatGPT-generated prompt, but only 32% were confident in their answer. CONCLUSION: The ability of ChatGPT to create a sufficient bowel preparation prompt highlights how physicians can incorporate AI into clinical practice to improve the ease and efficiency of communicating with patients about bowel preparation.
Wilkoff MH; Piniella NR; Advani R
0-1
40248774
Artificial intelligence enhanced Chatbot boom: A single center observational study to evaluate assistance in clinical anesthesiology.
2025
Journal of anaesthesiology, clinical pharmacology
BACKGROUND AND AIMS: The field of anaesthesiology and perioperative medicine has explored advancements in science and technology, ensuring precision and personalized anesthesia plans. The surge in the usage of the chat generative pretrained transformer (ChatGPT) in medicine has evoked interest among anesthesiologists in assessing its performance in the operating room. However, there is concern about accuracy, patient privacy, and ethics. Our objective in this study was to assess whether ChatGPT can provide assistance in clinical decisions and to compare its responses with those of resident anesthesiologists. MATERIAL AND METHODS: In this cross-sectional study conducted at a teaching hospital, a set of 30 hypothetical clinical scenarios in the operating room was presented to resident anesthesiologists and ChatGPT-4. The first five of the 30 scenarios were typed with three additional prompts in the same chat to determine whether there was any detailing of answers. The responses were labeled and assessed by three reviewers not involved in the study. RESULTS: The intraclass correlation coefficient (ICC) values show variation in the level of agreement between ChatGPT and the anesthesiologists. For instance, the ICC of 0.41 between A1 and ChatGPT indicates a moderate level of agreement, whereas the ICC of 0.06 between A2 and ChatGPT suggests a comparatively weaker level of agreement. CONCLUSIONS: In this study, there were variations in the level of agreement between ChatGPT's and the resident anesthesiologists' responses in terms of accuracy and comprehensiveness when solving intraoperative scenarios. The use of prompts improved ChatGPT's agreement with the anesthesiologists.
Jois SM; Rangalakshmi S; Iyengar SMJ; Mahesh C; Devi LD; Namachivayam AK
0-1
38800159
The Potential of ChatGPT for High-Quality Information in Patient Education for Sports Surgery.
2024
Cureus
BACKGROUND AND OBJECTIVE: Artificial intelligence (AI) advancements continue to have a profound impact on modern society, driving significant innovation and development across various fields. We sought to appraise the reliability of the information offered by Chat Generative Pre-Trained Transformer (ChatGPT) regarding diseases commonly associated with sports surgery. We hypothesized that ChatGPT could offer high-quality information on sports-related diseases and be used in patient education. METHODS: On September 11, 2023, specific sports surgery-related diseases were identified and posed to ChatGPT-4 (personal communication, March 4, 2023). The informative texts provided by ChatGPT were recorded for this study by a senior orthopedic surgeon who was not an observer. Ten texts provided by ChatGPT related to sports surgery diseases were evaluated blindly by two observers, who assessed and scored these texts based on the sports surgery-specific scoring (SSSS) and DISCERN criteria. The precision of the disease-related information offered by ChatGPT was evaluated. RESULTS: The calculated average DISCERN score of the texts in the study was 44.75 points, and the average SSSS score was 13.3 points. In the intraclass correlation coefficient analysis of the measurements made by the observers, agreement was found to be excellent (0.989; p < 0.001). CONCLUSION: ChatGPT has the potential to be used in patient education for sports surgery-related diseases. Its potential to provide quality information in this regard seems to be an advantage.
Yuce A; Erkurt N; Yerli M; Misir A
32
38107064
Is ChatGPT accurate and reliable in answering questions regarding head and neck cancer?
2023
Frontiers in oncology
BACKGROUND AND OBJECTIVE: Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence (AI)-based language processing model using deep learning to create human-like text dialogue. It has been a popular source of information covering a vast number of topics, including medicine. Patient education in head and neck cancer (HNC) is crucial to enhance patients' understanding of their medical condition, diagnosis, and treatment options. Therefore, this study aims to examine the accuracy and reliability of ChatGPT in answering questions regarding HNC. METHODS: A total of 154 head and neck cancer-related questions were compiled from sources including professional societies, institutions, patient support groups, and social media. These questions were categorized into topics like basic knowledge, diagnosis, treatment, recovery, operative risks, complications, follow-up, and cancer prevention. ChatGPT was queried with each question, and two experienced head and neck surgeons assessed each response independently for accuracy and reproducibility. Responses were rated on a scale: (1) comprehensive/correct, (2) incomplete/partially correct, (3) a mix of accurate and inaccurate/misleading, and (4) completely inaccurate/irrelevant. Discrepancies in grading were resolved by a third reviewer. Reproducibility was evaluated by repeating questions and analyzing grading consistency. RESULTS: ChatGPT yielded "comprehensive/correct" responses to 133/154 (86.4%) of the questions, whereas the rates of "incomplete/partially correct" and "mixed with accurate and inaccurate data/misleading" responses were 11% and 2.6%, respectively. There were no "completely inaccurate/irrelevant" responses.
According to category, the model provided "comprehensive/correct" answers to 80.6% of questions regarding "basic knowledge", 92.6% related to "diagnosis", 88.9% related to "treatment", 80% related to "recovery - operative risks - complications - follow-up", 100% related to "cancer prevention" and 92.9% related to "other". There was not any significant difference between the categories regarding the grades of ChatGPT responses (p=0.88). The rate of reproducibility was 94.1% (145 of 154 questions). CONCLUSION: ChatGPT generated substantially accurate and reproducible information to diverse medical queries related to HNC. Despite its limitations, it can be a useful source of information for both patients and medical professionals. With further developments in the model, ChatGPT can also play a crucial role in clinical decision support to provide the clinicians with up-to-date information.
Kuscu O; Pamuk AE; Sutay Suslu N; Hosal S
0-1
39828224
Using artificial intelligence to semi-automate trustworthiness assessment of randomized controlled trials: a case study.
2,025
Journal of clinical epidemiology
BACKGROUND AND OBJECTIVE: Randomized controlled trials (RCTs) are the cornerstone of evidence-based medicine. Unfortunately, not all RCTs are based on real data. This serious breach of research integrity compromises the reliability of systematic reviews and meta-analyses, leading to misinformed clinical guidelines and posing a risk to both individual and public health. While methods to detect problematic RCTs have been proposed, they are time-consuming and labor-intensive. The use of artificial intelligence large language models (LLMs) has the potential to accelerate the data collection needed to assess the trustworthiness of published RCTs. METHODS: We present a case study using ChatGPT powered by OpenAI's GPT-4o to assess an RCT paper. The case study focuses on applying the trustworthiness in randomised controlled trials (TRACT) checklist and automating data table extraction to accelerate statistical analysis targeting the trustworthiness of the data. We provide a detailed step-by-step outline of the process, along with considerations for potential improvements. RESULTS: ChatGPT completed all tasks by processing the PDF of the selected publication and responding to specific prompts. ChatGPT addressed items in the TRACT checklist effectively, demonstrating an ability to provide precise "yes" or "no" answers while quickly synthesizing information from both the paper and relevant online resources. A comparison of results generated by ChatGPT and the human assessor showed an 84% (16/19) level of agreement on TRACT items. This substantially accelerated the qualitative assessment process. Additionally, ChatGPT was able to efficiently extract the data tables as Microsoft Excel worksheets and reorganize the data, with three out of four extracted tables achieving an accuracy score of 100%, facilitating subsequent analysis and data verification. 
CONCLUSION: ChatGPT demonstrates potential in semiautomating the trustworthiness assessment of RCTs, though in our experience this required repeated prompting from the user. Further testing and refinement will involve applying ChatGPT to collections of RCT papers to improve the accuracy of data capture and lessen the role of the user. The ultimate aim is a completely automated process for large volumes of papers that seems plausible given our initial experience.
Au LS; Qu L; Nielsen J; Ge Z; Gurrin LC; Mol BW; Wang R
10
40245607
GDReCo: Fine-grained gene-disease relationship extraction corpus.
2,025
Computer methods and programs in biomedicine
BACKGROUND AND OBJECTIVE: Understanding gene-disease relationships is crucial for medical research, drug discovery, clinical diagnosis, and other fields. However, there is currently no high-quality, fine-grained corpus available for training Natural Language Processing (NLP) models, which have proven to be effective in knowledge extraction. METHODS: This study introduces a novel ontology framework for gene-disease associations, addressing the absence of a formal descriptive system and training corpus for NLP models. RESULTS: We developed the Gene Disease Relationship Extraction Corpus (GDReCo), a refined dataset of more than 24,000 cases, including 2,300+ manually annotated and 22,000+ model-predicted instances. BERT-based models trained on this data achieved high F1-scores for "event" and "rel" relationships, validating its effectiveness for Gene-Disease Relationship Extraction (GDRE) tasks. CONCLUSIONS: GDReCo serves as a valuable resource for biomedical research, though ChatGPT's limitations in fine-grained relation extraction are noted.
Yu H; Wu J; Bian S; Zhang S; Wu Y; Zhou Z; Jia Q; Ni Y; Huang Z; Yan H; Wang W; He K; Shi J
10
39207788
Performance of Language Models on the Family Medicine In-Training Exam.
2,024
Family medicine
BACKGROUND AND OBJECTIVES: Artificial intelligence (AI), such as ChatGPT and Bard, has gained popularity as a tool in medical education. The use of AI in family medicine has not yet been assessed. The objective of this study is to compare the performance of three large language models (LLMs; ChatGPT 3.5, ChatGPT 4.0, and Google Bard) on the family medicine in-training exam (ITE). METHODS: The 193 multiple-choice questions of the 2022 ITE, written by the American Board of Family Medicine, were input into ChatGPT 3.5, ChatGPT 4.0, and Bard. The LLMs' performance was then scored and scaled. RESULTS: ChatGPT 4.0 scored 167/193 (86.5%) with a scaled score of 730 out of 800. According to the Bayesian score predictor, ChatGPT 4.0 has a 100% chance of passing the family medicine board exam. ChatGPT 3.5 scored 66.3%, translating to a scaled score of 400 and an 88% chance of passing the family medicine board exam. Bard scored 64.2%, with a scaled score of 380 and an 85% chance of passing the boards. Compared to the national average of postgraduate year 3 residents, only ChatGPT 4.0 surpassed the residents' mean of 68.4%. CONCLUSIONS: ChatGPT 4.0 was the only LLM that outperformed the family medicine postgraduate year 3 residents' national averages on the 2022 ITE, providing robust explanations and demonstrating its potential use in delivering background information on common medical concepts that appear on board exams.
Hanna RE; Smith LR; Mhaskar R; Hanna K
21
39336540
Perforator Selection with Computed Tomography Angiography for Unilateral Breast Reconstruction: A Clinical Multicentre Analysis.
2,024
Medicina (Kaunas, Lithuania)
Background and Objectives: Despite CTAs being critical for preoperative planning in autologous breast reconstruction, experienced plastic surgeons may have differing preferences for which side of the abdomen to use for unilateral breast reconstruction. Large language models (LLMs) have the potential to assist medical imaging interpretation. This study compares the perforator selection preferences of experienced plastic surgeons with four popular LLMs based on CTA images for breast reconstruction. Materials and Methods: Six experienced plastic surgeons from Australia, the US, Italy, Denmark, and Argentina reviewed ten CTA images, indicated their preferred side of the abdomen for unilateral breast reconstruction and recommended the type of autologous reconstruction. The LLMs were prompted to do the same. The average decisions were calculated, recorded in suitable tables, and compared. Results: The six consultants predominantly recommended the DIEP procedure (83%). This suggests experienced surgeons feel more comfortable raising DIEP than TRAM flaps, which they recommended only 3% of the time. They also favoured MS TRAM and SIEA less frequently (11% and 2%, respectively). Three LLMs-ChatGPT-4o, ChatGPT-4, and Bing CoPilot-exclusively recommended DIEP (100%), while Claude suggested DIEP 90% and MS TRAM 10%. Despite minor variations in side recommendations, consultants and AI models clearly preferred DIEP. Conclusions: Consultants and LLMs consistently preferred DIEP procedures, indicating strong confidence among experienced surgeons, though LLMs occasionally deviated in recommendations, highlighting limitations in their image interpretation capabilities. This emphasises the need for ongoing refinement of AI-assisted decision support systems to ensure they align more closely with expert clinical judgment and enhance their reliability in clinical practice.
Seth I; Lim B; Phan R; Xie Y; Kenney PS; Bukret WE; Thomsen JB; Cuomo R; Ross RJ; Ng SK; Rozen WM
32
37888068
Navigating the Landscape of Personalized Medicine: The Relevance of ChatGPT, BingChat, and Bard AI in Nephrology Literature Searches.
2,023
Journal of personalized medicine
BACKGROUND AND OBJECTIVES: Literature reviews are foundational to understanding medical evidence. With AI tools like ChatGPT, Bing Chat and Bard AI emerging as potential aids in this domain, this study aimed to individually assess their citation accuracy within Nephrology, comparing their performance in providing precise references. MATERIALS AND METHODS: We generated a prompt to solicit 20 references in Vancouver style for each of 12 Nephrology topics, using ChatGPT, Bing Chat and Bard. We verified the existence and accuracy of the provided references using PubMed, Google Scholar, and Web of Science. We categorized the validity of the references from each AI chatbot as (1) incomplete, (2) fabricated, (3) inaccurate, or (4) accurate. RESULTS: A total of 199 (83%), 158 (66%) and 112 (47%) unique references were provided from ChatGPT, Bing Chat and Bard, respectively. ChatGPT provided 76 (38%) accurate, 82 (41%) inaccurate, 32 (16%) fabricated and 9 (5%) incomplete references. Bing Chat provided 47 (30%) accurate, 77 (49%) inaccurate, 21 (13%) fabricated and 13 (8%) incomplete references. In contrast, Bard provided 3 (3%) accurate, 26 (23%) inaccurate, 71 (63%) fabricated and 12 (11%) incomplete references. The most common error type across platforms was incorrect DOIs. CONCLUSIONS: In the field of medicine, the necessity for faultless adherence to research integrity is highlighted, asserting that even small errors cannot be tolerated. The outcomes of this investigation draw attention to inconsistent citation accuracy across the different AI tools evaluated. Despite some promising results, the discrepancies identified call for a cautious and rigorous vetting of AI-sourced references in medicine. Such chatbots, before becoming standard tools, need substantial refinements to assure unwavering precision in their outputs.
Aiumtrakul N; Thongprayoon C; Suppadungsuk S; Krisanapan P; Miao J; Qureshi F; Cheungpasitporn W
10
40045700
Artificial intelligence chatbots in transfusion medicine: A cross-sectional study.
2,025
Vox sanguinis
BACKGROUND AND OBJECTIVES: The recent rise of artificial intelligence (AI) chatbots has attracted many users worldwide. However, expert evaluation is essential before relying on them for transfusion medicine (TM)-related information. This study aims to evaluate the performance of AI chatbots for accuracy, correctness, completeness and safety. MATERIALS AND METHODS: Six AI chatbots (ChatGPT 4, ChatGPT 4-o, Gemini Advanced, Copilot, Anthropic Claude 3.5 Sonnet, Meta AI) were tested using TM-related prompts at two time points, 30 days apart. Their responses were assessed by four TM experts. Evaluators' scores underwent inter-rater reliability testing. Responses from Day 30 were compared with those from Day 1 to evaluate consistency and potential evolution over time. RESULTS: All six chatbots exhibited some level of inconsistency and varying degrees of evolution in their responses over 30 days. None provided entirely correct, complete or safe answers to all questions. Among the chatbots tested, ChatGPT 4-o and Anthropic Claude 3.5 Sonnet demonstrated the highest accuracy and consistency, while Microsoft Copilot and Google Gemini Advanced showed the greatest evolution in their responses. As a limitation, the 30-day period may be too short for a precise assessment of chatbot evolution. CONCLUSION: At the time of the conduct of this study, none of the AI chatbots provided fully reliable, complete or safe responses to all TM-related prompts. However, ChatGPT 4-o and Anthropic Claude 3.5 Sonnet show the highest promise for future integration into TM practices. Given their variability and ongoing development, AI chatbots should not yet be relied upon as authoritative sources in TM without expert validation.
Srivastava P; Tewari A; Al-Riyami AZ
43
37519497
Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology.
2,023
Cureus
Background Artificial intelligence (AI) is evolving in the medical education system. ChatGPT, Google Bard, and Microsoft Bing are AI-based models that can solve problems in medical education. However, the applicability of AI to create reasoning-based multiple-choice questions (MCQs) in the field of medical physiology is yet to be explored. Objective We aimed to assess and compare the applicability of ChatGPT, Bard, and Bing in generating reasoning-based MCQs for MBBS (Bachelor of Medicine, Bachelor of Surgery) undergraduate students on the subject of physiology. Methods The National Medical Commission of India has developed an 11-module physiology curriculum with various competencies. Two physiologists independently chose a competency from each module. The third physiologist prompted all three AIs to generate five MCQs for each chosen competency. The two physiologists who provided the competencies rated the MCQs generated by the AIs on a scale of 0-3 for validity, difficulty, and reasoning ability required to answer them. We analyzed the average of the two scores using the Kruskal-Wallis test to compare the distribution across the total and module-wise responses, followed by a post-hoc test for pairwise comparisons. We used Cohen's Kappa (Kappa) to assess the agreement in scores between the two raters. We expressed the data as a median with an interquartile range. We determined their statistical significance by a p-value <0.05. Results ChatGPT and Bard generated 110 MCQs for the chosen competencies. However, Bing provided only 100 MCQs as it failed to generate them for two competencies. The validity of the MCQs was rated as 3 (3-3) for ChatGPT, 3 (1.5-3) for Bard, and 3 (1.5-3) for Bing, showing a significant difference (p<0.001) among the models. The difficulty of the MCQs was rated as 1 (0-1) for ChatGPT, 1 (1-2) for Bard, and 1 (1-2) for Bing, with a significant difference (p=0.006). 
The required reasoning ability to answer the MCQs was rated as 1 (1-2) for ChatGPT, 1 (1-2) for Bard, and 1 (1-2) for Bing, with no significant difference (p=0.235). Kappa was ≥0.8 for all three parameters across all three AI models. Conclusion AI still needs to evolve to generate reasoning-based MCQs in medical physiology. ChatGPT, Bard, and Bing showed certain limitations. Bing generated MCQs with significantly lower validity, while ChatGPT generated MCQs with significantly lower difficulty.
Agarwal M; Sharma P; Goswami A
21
37303324
Proof of Concept: Using ChatGPT to Teach Emergency Physicians How to Break Bad News.
2,023
Cureus
Background Breaking bad news is an essential skill for practicing physicians, particularly in the field of emergency medicine (EM). Patient-physician communication teaching has previously relied on standardized patient scenarios and objective structured clinical examination formats. The novel use of artificial intelligence (AI) chatbot technology, such as Chat Generative Pre-trained Transformer (ChatGPT), may provide an alternative role in graduate medical education in this area. As a proof of concept, the author demonstrates how providing detailed prompts to the AI chatbot can facilitate the design of a realistic clinical scenario, enable active roleplay, and deliver effective feedback to physician trainees. Methods ChatGPT-3.5 language model was utilized to assist in the roleplay of breaking bad news. A detailed input prompt was designed to outline rules of play and grading assessment via a standardized scale. User inputs (physician role), chatbot outputs (patient role) and ChatGPT-generated feedback were recorded. Results ChatGPT set up a realistic training scenario on breaking bad news based on the initial prompt. Active roleplay as a patient in an emergency department setting was accomplished, and clear feedback was provided to the user through the application of the Setting up, Perception, Invitation, Knowledge, Emotions with Empathy, and Strategy or Summary (SPIKES) framework for breaking bad news. Conclusion The novel use of AI chatbot technology to assist educators is abundant with potential. ChatGPT was able to design an appropriate scenario, provide a means for simulated patient-physician roleplay, and deliver real-time feedback to the physician user. Future studies are required to expand use to a targeted group of EM physician trainees and provide best practice guidelines for AI use in graduate medical education.
Webb JJ
10
37073184
The Capability of ChatGPT in Predicting and Explaining Common Drug-Drug Interactions.
2,023
Cureus
Background Drug-drug interactions (DDIs) can have serious consequences for patient health and well-being. Patients who are taking multiple medications may be at an increased risk of experiencing adverse events or drug toxicity if they are not aware of potential interactions between their medications. Many times, patients self-prescribe medications without knowing about DDIs. Objective The objective is to investigate the effectiveness of ChatGPT, a large language model, in predicting and explaining common DDIs. Methods A list of 40 DDI pairs was prepared from previously published literature. This list was used to converse with ChatGPT with a two-stage question. The first question was asked as "can I take X and Y together?" with two drug names. After storing the output, the next question was asked. The second question was asked as "why should I not take X and Y together?" The output was stored for further analysis. The responses were checked by two pharmacologists and the consensus output was categorized as "correct" and "incorrect." The "correct" ones were further classified as "conclusive" and "inconclusive." The text was checked for reading ease scores and grades of education required to understand the text. Data were tested by descriptive and inferential statistics. Results Among the 40 DDI pairs, one answer was incorrect in the first question. Among correct answers, 19 were conclusive and 20 were inconclusive. For the second question, one answer was wrong. Among correct answers, 17 were conclusive and 22 were inconclusive. The mean Flesch reading ease score was 27.64 ± 10.85 in answers to the first question and 29.35 ± 10.16 in answers to the second question, p = 0.47. The mean Flesch-Kincaid grade level was 15.06 ± 2.79 in answers to the first question and 14.85 ± 1.97 in answers to the second question, p = 0.69. 
When we compared the reading levels with a hypothetical 6th-grade level, the grades were significantly higher than expected (t = 20.57, p < 0.0001 for first answers and t = 28.43, p < 0.0001 for second answers). Conclusion ChatGPT is a partially effective tool for predicting and explaining DDIs. Patients, who may not have immediate access to a healthcare facility for information about DDIs, may take help from ChatGPT. However, on several occasions, it may provide incomplete guidance. Further improvement is required for potential usage by patients for getting ideas about DDIs.
Juhi A; Pipil N; Santra S; Mondal S; Behera JK; Mondal H
10
39845219
Evaluating the Accuracy of ChatGPT in the Japanese Board-Certified Physiatrist Examination.
2,024
Cureus
Background Generative artificial intelligence (AI), such as Chat Generative Pre-trained Transformer (ChatGPT), has shown potential in various medical applications, including answering licensing examination questions. However, its performance in rehabilitation medicine remains underexplored. This study aimed to evaluate the accuracy of ChatGPT4o in answering questions from the Japanese Board-Certified Physiatrist Examination and assess its potential as an educational and clinical support tool. Methods This study assessed the performance of ChatGPT4o on questions from the 2021-2023 Japanese Board-Certified Physiatrist Examinations. Questions were categorized into text- and image-based types and correct response rates were calculated. Errors were classified into informational, logical, or statistical. Results ChatGPT4o achieved correct response rates of 79.1% in 2021, 80.0% in 2022, and 86.3% in 2023, with an overall accuracy of 81.8%. The AI performed better on text-based (83.0%) than on image-based (70.0%) questions. Most errors (92.8%) were related to information. Conclusions ChatGPT4o demonstrated high accuracy in the Japanese Board-Certified Physiatrist Examination, particularly for text-based questions, demonstrating its potential as an educational tool. However, limitations in image interpretation and specialized topics indicate the need for further improvements for clinical application.
Kato Y; Ushida K; Momosaki R
21
37143631
Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry.
2,023
Cureus
Background Healthcare-related artificial intelligence (AI) is developing. The capacity of the system to carry out sophisticated cognitive processes, such as problem-solving, decision-making, reasoning, and perceiving, is referred to as higher cognitive thinking in AI. This kind of thinking requires more than just processing facts; it also entails comprehending and working with abstract ideas, evaluating and applying data relevant to the context, and producing new insights based on prior learning and experience. ChatGPT is an artificial intelligence-based conversational software that can engage with people to answer questions and uses natural language processing models. The platform has created a worldwide buzz and keeps setting an ongoing trend in solving many complex problems in various dimensions. Nevertheless, ChatGPT's capacity to correctly respond to queries requiring higher-level thinking in medical biochemistry has not yet been investigated. So, this research aimed to evaluate ChatGPT's aptitude for responding to higher-order questions on medical biochemistry. Objective In this study, our objective was to determine whether ChatGPT can address higher-order problems related to medical biochemistry. Methods This cross-sectional study was done online by conversing with the current version of ChatGPT (14 March 2023, which is presently free for registered users). It was presented with 200 medical biochemistry reasoning questions that require higher-order thinking. These questions were randomly picked from the institution's question bank and classified according to the Competency-Based Medical Education (CBME) curriculum's competency modules. The responses were collected and archived for subsequent research. Two expert biochemistry academicians examined the replies on a zero to five scale. The score's accuracy was determined by a one-sample Wilcoxon signed rank test using hypothetical values. 
Result The AI software answered 200 questions requiring higher-order thinking with a median score of 4.0 (Q1=3.50, Q3=4.50). Using a single sample Wilcoxon signed rank test, the result was less than the hypothetical maximum of five (p=0.001) and comparable to four (p=0.16). There was no difference in the replies to questions from different CBME modules in medical biochemistry (Kruskal-Wallis p=0.39). The inter-rater reliability of the scores given by two biochemistry faculty members was outstanding (ICC=0.926 (95% CI: 0.814-0.971); F=19; p=0.001). Conclusion The results of this research indicate that ChatGPT has the potential to be a successful tool for answering questions requiring higher-order thinking in medical biochemistry, with a median score of four out of five. However, continuous training and development with data of recent advances are essential to improve performance and make it functional for the ever-growing field of academic medical usage.
Ghosh A; Bir A
21
38111405
ChatGPT-4 and Human Researchers Are Equal in Writing Scientific Introduction Sections: A Blinded, Randomized, Non-inferiority Controlled Study.
2,023
Cureus
Background Natural language processing models are increasingly used in scientific research, and their ability to perform various tasks in the research process is rapidly advancing. This study aims to investigate whether Generative Pre-trained Transformer 4 (GPT-4) is equal to humans in writing introduction sections for scientific articles. Methods This randomized non-inferiority study was reported according to the Consolidated Standards of Reporting Trials for non-inferiority trials and artificial intelligence (AI) guidelines. GPT-4 was instructed to synthesize 18 introduction sections based on the aims of previously published studies, and these sections were compared to the human-written introductions already published in a medical journal. Eight blinded assessors randomly evaluated the introduction sections using 1-10 Likert scales. Results There was no significant difference between GPT-4 and human introductions regarding publishability and content quality. GPT-4 scored significantly better in readability by one point, a difference considered not practically relevant. The majority of assessors (59%) preferred GPT-4, while 33% preferred human-written introductions. Based on Lix and Flesch-Kincaid scores, GPT-4 introductions were 10 and two points higher, respectively, indicating that the sentences were longer and had longer words. Conclusion GPT-4 was found to be equal to humans in writing introductions regarding publishability, readability, and content quality. The majority of assessors preferred GPT-4 introductions and less than half could determine which were written by GPT-4 or humans. These findings suggest that GPT-4 can be a useful tool for writing introduction sections, and further studies should evaluate its ability to write other parts of scientific articles.
Sikander B; Baker JJ; Deveci CD; Lund L; Rosenberg J
10
38238871
AI in the repurposing of potential herbs for filariasis therapy.
2,024
Journal of vector borne diseases
BACKGROUND OBJECTIVES: The goal of this study was to assess how effectively an AI language model, Chat Generative Pre-trained Transformer (ChatGPT), assisted healthcare personnel in selecting relevant medications for filariasis therapy. METHODS: Ten hypothetical filariasis clinical cases were submitted to ChatGPT, and its recommendations were evaluated by a panel of medical specialists and tropical medicine experts. RESULTS: ChatGPT gave appropriate suggestions for potential medication repurposing in filariasis treatment in all ten clinical scenarios. Its drug recommendations were in line with current medical research and literature. Despite the lack of particular treatment regimens, ChatGPT's general ideas proved useful for healthcare practitioners, providing insights and updates on prospective drug repurposing tactics. INTERPRETATION CONCLUSION: ChatGPT shows promise as a useful method for repurposing drugs in the treatment of filariasis. Its thorough and brief responses make it useful for finding possible pharmacological candidates. However, it is critical to recognize ChatGPT's limitations, such as the requirement for additional clinical information and the inability to change therapy. Further research and development are required to optimize its use in filariasis therapy settings.
Wiwanitmkit S; Wiwanitkit V
10
39404623
Testing the Ability and Limitations of ChatGPT to Generate Differential Diagnoses from Transcribed Radiologic Findings.
2,024
Radiology
Background The burgeoning interest in ChatGPT as a potentially useful tool in medicine highlights the necessity for systematic evaluation of its capabilities and limitations. Purpose To evaluate the accuracy, reliability, and repeatability of differential diagnoses produced by ChatGPT from transcribed radiologic findings. Materials and Methods Cases selected from a radiology textbook series spanning a variety of imaging modalities, subspecialties, and anatomic pathologies were converted into standardized prompts that were entered into ChatGPT (GPT-3.5 and GPT-4 algorithms; April 3 to June 1, 2023). Responses were analyzed for accuracy via comparison with the final diagnosis and top 3 differential diagnosis provided in the textbook, which served as the ground truth. Reliability, defined based on the frequency of algorithmic hallucination, was assessed through the identification of factually incorrect statements and fabricated references. Comparisons were made between the algorithms using the McNemar test and a generalized estimating equation model framework. Test-retest repeatability was measured by obtaining 10 independent responses from both algorithms for 10 cases in each subspecialty, and calculating the average pairwise percent agreement and Krippendorff alpha. Results A total of 339 cases were collected across multiple radiologic subspecialties. The overall accuracy of GPT-3.5 and GPT-4 for final diagnosis was 53.7% (182 of 339) and 66.1% (224 of 339; P < .001), respectively. The mean differential score (ie, proportion of top 3 diagnoses that matched the original literature differential diagnosis) for GPT-3.5 and GPT-4 was 0.50 and 0.54 (P = .06), respectively. Of the references provided in GPT-3.5 and GPT-4 responses, 39.9% (401 of 1006) and 14.3% (161 of 1124; P < .001), respectively, were fabricated. GPT-3.5 and GPT-4 generated false statements in 16.2% (55 of 339) and 4.7% (16 of 339; P < .001) of cases, respectively. 
The range of average pairwise percent agreement across subspecialties for the final diagnosis and top 3 differential diagnosis was 59%-98% and 23%-49%, respectively. Conclusion ChatGPT achieved the best results when the most up-to-date model (GPT-4) was used and when it was prompted for a single diagnosis. Hallucination frequency was lower with GPT-4 than with GPT-3.5, but repeatability was an issue for both models. (c) RSNA, 2024 Supplemental material is available for this article. See also the editorial by Chang in this issue.
Sun SH; Huynh K; Cortes G; Hill R; Tran J; Yeh L; Ngo AL; Houshyar R; Yaghmai V; Tran M
0-1
40416223
Performance of GPT-4o and DeepSeek-R1 in the Polish Infectious Diseases Specialty Exam.
2,025
Cureus
Background The past few years have been a time of rapid development in artificial intelligence (AI) and its implementation across numerous fields. This study aimed to compare the performance of GPT-4o (OpenAI, San Francisco, CA, USA) and DeepSeek-R1 (DeepSeek AI, Zhejiang, China) on the Polish specialty examination in infectious diseases. Materials and methods The study was conducted from April 1 to April 4, 2025, using the Autumn 2024 Polish specialty examination in infectious diseases. The examination comprised 120 questions, each presenting five answer options, with only one correct choice. The Center for Medical Education (CEM) in Lodz, Poland decided to withdraw one question due to the absence of a definitive correct answer and inconsistency with up-to-date clinical guidelines. Furthermore, the questions were classified as either 'clinical cases' or 'other' to enable a more in-depth evaluation of the potential of artificial intelligence in real-world clinical practice. The accuracy of the responses was verified using the official answer key approved by the CEM. To assess the accuracy and confidence level of the responses provided by GPT-4o and DeepSeek-R1, statistical methods were employed, including Pearson's chi-squared test and the Mann-Whitney U test. Results GPT-4o correctly answered 85 out of 119 questions (71.43%) while DeepSeek-R1 correctly answered 88 out of 119 questions (73.95%). A minimum of 72 (60.5%) correct responses is required to pass the examination. No statistically significant difference was observed between responses to 'clinical case' questions and 'other' questions for either AI model. For both AI models, a statistically significant difference was observed in the confidence levels between correct and incorrect answers, with higher confidence reported for correctly answered questions and lower confidence for incorrectly answered ones. 
Conclusions Both GPT-4o and DeepSeek-R1 demonstrated the ability to pass the Polish specialty examination in infectious diseases, suggesting their potential as educational tools. Additionally, it is noteworthy that DeepSeek-R1 achieved a performance comparable to GPT-4o, despite being a much newer model on the market and, according to available data, having been developed at significantly lower cost.
Blecha Z; Jasinski D; Jaworski A; Latkowska A; Jaworski W; Syslo O; Rubik N; Jastrzebska I; Harazinski K; Goliat W; Gmur M; Gajewski M; Slawinska B; Maryniak N
21
39156244
Performance of ChatGPT in Solving Questions From the Progress Test (Brazilian National Medical Exam): A Potential Artificial Intelligence Tool in Medical Practice.
2,024
Cureus
Background The use of artificial intelligence (AI) is not a recent phenomenon, but the latest advancements in this technology are making a significant impact across various fields of human knowledge. In medicine, this trend is no different, although it has developed at a slower pace. ChatGPT is an example of an AI-based algorithm capable of answering questions, interpreting phrases, and synthesizing complex information, potentially aiding and even replacing humans in various areas of social interest. Some studies have compared its performance in solving medical knowledge exams with medical students and professionals to verify AI accuracy. This study aimed to measure the performance of ChatGPT in answering questions from the Progress Test from 2021 to 2023. Methodology An observational study was conducted in which questions from the 2021 Progress Test and the regional tests (Southern Institutional Pedagogical Support Center II) of 2022 and 2023 were presented to ChatGPT 3.5. The results obtained were compared with the scores of first- to sixth-year medical students from over 120 Brazilian universities. All questions were presented sequentially, without any modification to their structure. After each question was presented, the platform's history was cleared, and the site was restarted. Results The platform achieved an average accuracy rate in 2021, 2022, and 2023 of 69.7%, 68.3%, and 67.2%, respectively, surpassing students from all medical years in the three tests evaluated, reinforcing findings in the current literature. The subject with the best score for the AI was Public Health, with a mean grade of 77.8%. Conclusions ChatGPT demonstrated the ability to answer medical questions with higher accuracy than humans, including students from the last year of medical school.
Rodrigues Alessi M; Gomes HA; Lopes de Castro M; Terumy Okamoto C
21
39364514
Evaluating the Adherence of Large Language Models to Surgical Guidelines: A Comparative Analysis of Chatbot Recommendations and North American Spine Society (NASS) Coverage Criteria.
2,024
Cureus
Background There has been a significant increase in cervical fusion procedures, both anterior and posterior, across the United States. Despite this upward trend, limited research exists on adherence to evidence-based medicine (EBM) guidelines for cervical fusion, highlighting a gap between recommended practices and surgeon preferences. Additionally, patients are increasingly utilizing large language models (LLMs) to aid in decision-making. Methodology This observational study evaluated the capacity of four LLMs, namely, Bard, BingAI, ChatGPT-3.5, and ChatGPT-4, to adhere to EBM guidelines, specifically the 2023 North American Spine Society (NASS) cervical fusion guidelines. Ten clinical vignettes were created based on NASS recommendations to determine when fusion was indicated. This novel approach assessed LLM performance in a clinical decision-making context without requiring institutional review board approval, as no human subjects were involved. Results No LLM achieved complete concordance with NASS guidelines, though ChatGPT-4 and BingAI exhibited the highest adherence at 60%. Discrepancies were notably observed in scenarios involving head-drop syndrome and pseudoarthrosis, where all LLMs failed to align with NASS recommendations. Additionally, only 25% of LLMs agreed with NASS guidelines for fusion in cases of cervical radiculopathy and as an adjunct to facet cyst resection. Conclusions The study underscores the need for improved LLM training on clinical guidelines and emphasizes the importance of considering the nuances of individual patient cases. While LLMs hold promise for enhancing guideline adherence in cervical fusion decision-making, their current performance indicates a need for further refinement and integration with clinical expertise to ensure optimal patient care.
This study contributes to understanding the role of AI in healthcare, advocating for a balanced approach that leverages technological advancements while acknowledging the complexities of surgical decision-making.
Sarikonda A; Isch E; Self M; Sambangi A; Carreras A; Sivaganesan A; Harrop J; Jallo J
32
38953081
Using ChatGPT in the Development of Clinical Reasoning Cases: A Qualitative Study.
2,024
Cureus
Background There has been an explosion of commentary and discussion about the ethics and utility of using artificial intelligence in medicine, and its practical use in medical education is still being debated. Through qualitative research methods, this study aims to highlight the advantages and pitfalls of using ChatGPT in the development of clinical reasoning cases for medical student education. Methods Five highly experienced faculty in medical education were provided instructions to create unique clinical reasoning cases for three different chief concerns using ChatGPT 3.0. Faculty were then asked to reflect on and review the created cases. Finally, a focus group was conducted to further analyze and describe their experiences with the new technology. Results Overall, faculty found ChatGPT easy to use for developing clinical reasoning cases but difficult to steer toward specific objectives, and largely incapable of generating enough creative complexity for student use without heavy editing. The created cases did provide a helpful starting point and were extremely efficient; however, faculty did encounter some medical inaccuracies and fact fabrication. Conclusion There is value to using ChatGPT to develop curricular content, especially for clinical reasoning cases, but it needs to be comprehensively reviewed and verified. To efficiently and effectively utilize the tool, educators will need to develop a framework that can be easily translatable into simple prompts that ChatGPT can understand. Future work will need to strongly consider the risks of recirculating biases and misinformation.
Wong K; Fayngersh A; Traba C; Cennimo D; Kothari N; Chen S
10
37303347
Exploring ChatGPT's Potential in Facilitating Adaptation of Clinical Guidelines: A Case Study of Diabetic Ketoacidosis Guidelines.
2,023
Cureus
Background This study aimed to evaluate the efficacy of ChatGPT, an advanced natural language processing model, in adapting and synthesizing clinical guidelines for diabetic ketoacidosis (DKA) by comparing and contrasting different guideline sources. Methodology We employed a comprehensive comparison approach and examined three reputable guideline sources: Diabetes Canada Clinical Practice Guidelines Expert Committee (2018), Emergency Management of Hyperglycaemia in Primary Care, and Joint British Diabetes Societies (JBDS) 02 The Management of Diabetic Ketoacidosis in Adults. Data extraction focused on diagnostic criteria, risk factors, signs and symptoms, investigations, and treatment recommendations. We compared the synthesized guidelines generated by ChatGPT and identified any misreporting or non-reporting errors. Results ChatGPT was capable of generating a comprehensive table comparing the guidelines. However, multiple recurrent errors, including misreporting and non-reporting errors, were identified, rendering the results unreliable. Additionally, inconsistencies were observed in the repeated reporting of data. The study highlights the limitations of using ChatGPT for the adaptation of clinical guidelines without expert human intervention. Conclusions Although ChatGPT demonstrates the potential for the synthesis of clinical guidelines, the presence of multiple recurrent errors and inconsistencies underscores the need for expert human intervention and validation. Future research should focus on improving the accuracy and reliability of ChatGPT, as well as exploring its potential applications in other areas of clinical practice and guideline development.
Hamed E; Eid A; Alberry M
0-1
39371744
GPT-4o vs. Human Candidates: Performance Analysis in the Polish Final Dentistry Examination.
2,024
Cureus
Background This study aims to evaluate the performance of OpenAI's GPT-4o in the Polish Final Dentistry Examination (LDEK) and compare it with human candidates' results. The LDEK is a standardized test essential for dental graduates in Poland to obtain their professional license. With artificial intelligence (AI) becoming increasingly integrated into medical and dental education, it is important to assess AI's capabilities in such high-stakes examinations. Materials and methods The study was conducted from August 1 to August 15, 2024, using the Spring 2023 LDEK exam. The exam comprised 200 multiple-choice questions, each with one correct answer among five options. Questions spanned various dental disciplines, including Conservative Dentistry with Endodontics, Pediatric Dentistry, Dental Surgery, Prosthetic Dentistry, Periodontology, Orthodontics, Emergency Medicine, Bioethics and Medical Law, Medical Certification, and Public Health. The exam organizers withdrew one question. GPT-4o was tested on these questions without access to the publicly available question bank. The AI model's responses were recorded, and each answer's confidence level was assessed. Correct answers were determined based on the official key provided by the Center for Medical Education (CEM) in Lodz, Poland. Statistical analyses, including Pearson's chi-square test and the Mann-Whitney U test, were performed to evaluate the accuracy and confidence of ChatGPT's answers across different dental fields. Results GPT-4o correctly answered 141 out of 199 valid questions (70.85%) and incorrectly answered 58 (29.15%). The AI performed better in fields like Conservative Dentistry with Endodontics (71.74%) and Prosthetic Dentistry (80%) but showed lower accuracy in Pediatric Dentistry (62.07%) and Orthodontics (52.63%). 
A statistically significant difference was observed between ChatGPT's performance on clinical case-based questions (36.36% accuracy) and other factual questions (72.87% accuracy), with a p-value of 0.025. Confidence levels also varied significantly between correct and incorrect answers, with a p-value of 0.0208. Conclusions GPT-4o's performance in the LDEK suggests it has potential as a supplementary educational tool in dentistry. However, the AI's limited clinical reasoning abilities, especially in complex scenarios, reveal a substantial gap between AI and human expertise. While ChatGPT demonstrates strong performance in factual recall, it cannot yet match the critical thinking and clinical judgment exhibited by human candidates.
Jaworski A; Jasinski D; Slawinska B; Blecha Z; Jaworski W; Kruplewicz M; Jasinska N; Syslo O; Latkowska A; Jung M
32