| pmid | title | year | journal | doi | mesh | keywords | abstract | authors | cluster |
|---|---|---|---|---|---|---|---|---|---|
| 40043742 | Solving Complex Pediatric Surgical Case Studies: A Comparative Analysis of Copilot, ChatGPT-4, and Experienced Pediatric Surgeons' Performance. | 2025 | European journal of pediatric surgery : official journal of Austrian Association of Pediatric Surgery ... [et al] = Zeitschrift fur Kinderchirurgie | | | | The emergence of large language models (LLMs) has led to notable advancements across multiple sectors, including medicine. Yet, their effect in pediatric surgery remains largely unexplored. This study aims to assess the ability of the artificial intelligence (AI) models ChatGPT-4 and Microsoft Copilot to propose diagnostic procedures, primary and differential diagnoses, as well as answer clinical questions using complex clinical case vignettes of classic pediatric surgical diseases. We conducted the study in April 2024. We evaluated the performance of LLMs using 13 complex clinical case vignettes of pediatric surgical diseases and compared responses to a human cohort of experienced pediatric surgeons. Additionally, pediatric surgeons rated the diagnostic recommendations of LLMs for completeness and accuracy. To determine differences in performance, we performed statistical analyses. ChatGPT-4 achieved a higher test score (52.1%) compared to Copilot (47.9%) but less than pediatric surgeons (68.8%). Overall differences in performance between ChatGPT-4, Copilot, and pediatric surgeons were found to be statistically significant (p < 0.01). ChatGPT-4 demonstrated superior performance in generating differential diagnoses compared to Copilot (p < 0.05). No statistically significant differences were found between the AI models regarding suggestions for diagnostics and primary diagnosis. Overall, the recommendations of LLMs were rated as average by pediatric surgeons. This study reveals significant limitations in the performance of AI models in pediatric surgery. Although LLMs exhibit potential across various areas, their reliability and accuracy in handling clinical decision-making tasks is limited. Further research is needed to improve AI capabilities and establish its usefulness in the clinical setting. | Gnatzy R; Lacher M; Berger M; Boettcher M; Deffaa OJ; Kubler J; Madadi-Sanjani O; Martynov I; Mayer S; Pakarinen MP; Wagner R; Wester T; Zani A; Aubert O | 10 |
| 39417635 | GPT-4-based AI agents-the new expert system for detection of antimicrobial resistance mechanisms? | 2024 | Journal of clinical microbiology | | | | The European Committee on Antimicrobial Susceptibility Testing (EUCAST) recommends two steps for detecting beta-lactamases in Gram-negative bacteria. Screening for potential extended-spectrum beta-lactamase (ESBL), plasmid-mediated AmpC beta-lactamase, or carbapenemase production is confirmed. We aimed to validate generative pre-trained transformer (GPT)-4 and GPT-agent for pre-classification of disk diffusion to indicate potential beta-lactamases. We assigned 225 Gram-negative isolates based on phenotypic resistances against beta-lactam antibiotics and additional tests to one or more resistance mechanisms as follows: "none," "ESBL," "AmpC," or "carbapenemase." Next, we customized a GPT-agent with EUCAST guidelines and breakpoint table (v13.1). We compared routine diagnostics (reference) to those of (i) EUCAST-GPT-expert, (ii) microbiologists, and (iii) non-customized GPT-4. We determined sensitivities and specificities to flag suspect resistances. Three microbiologists showed concordance in 814/862 (94.4%) phenotypic categories and were used in median eight words (interquartile range [IQR] 4-11) for reasoning. Median sensitivity/specificity for ESBL, AmpC, and carbapenemase were 98%/99.1%, 96.8%/97.1%, and 95.5%/98.5%, respectively. Three prompts of EUCAST-GPT-expert showed concordance in 706/862 (81.9%) categories but were used in median 158 words (IQR 140-174) for reasoning. Sensitivity/specificity for ESBL, AmpC, and carbapenemase prediction were 95.4%/69.23%, 96.9%/86.3%, and 100%/98.8%, respectively. Non-customized GPT-4 could interpret 169/862 (19.6%) categories, and 137/169 (81.1%) agreed with routine diagnostics. Non-customized GPT-4 was used in median 85 words (IQR 72-105) for reasoning. Microbiologists showed higher concordance and shorter argumentations compared to GPT-agents. Humans showed higher specificities compared to GPT-agents. GPT-agent's unspecific flagging of ESBL and AmpC potentially results in additional testing, diagnostic delays, and higher costs. GPT-4 is not approved by regulatory bodies, but validation of large language models is needed. IMPORTANCE: The study titled "GPT-4-based AI agents-the new expert system for detection of antimicrobial resistance mechanisms?" is critically important as it explores the integration of advanced artificial intelligence (AI) technologies, like generative pre-trained transformer (GPT)-4, into the field of laboratory medicine, specifically in the diagnostics of antimicrobial resistance (AMR). With the growing challenge of AMR, there is a pressing need for innovative solutions that can enhance diagnostic accuracy and efficiency. This research assesses the capability of AI to support the existing two-step confirmatory process recommended by the European Committee on Antimicrobial Susceptibility Testing for detecting beta-lactamases in Gram-negative bacteria. By potentially speeding up and improving the precision of initial screenings, AI could reduce the time to appropriate treatment interventions. Furthermore, this study is vital for validating the reliability and safety of AI tools in clinical settings, ensuring they meet stringent regulatory standards before they can be broadly implemented. This could herald a significant shift in how laboratory diagnostics are performed, ultimately leading to better patient outcomes. | Giske CG; Bressan M; Fiechter F; Hinic V; Mancini S; Nolte O; Egli A | 0-1 |
| 37211242 | ChatGPT and autoimmunity - A new weapon in the battlefield of knowledge. | 2023 | Autoimmunity reviews | | | | The field of medical research has been always full of innovation and huge leaps revolutionizing the scientific world. In the recent years, we have witnessed this firsthand by the evolution of Artificial Intelligence (AI), with ChatGPT being the most recent example. ChatGPT is a language chat bot which generates human-like texts based on data from the internet. If viewed from a medical point of view, ChatGPT has shown capabilities of composing medical texts similar to those depicted by experienced authors, to solve clinical cases, to provide medical solutions, among other fascinating performances. Nevertheless, the value of the results, limitations, and clinical implications still need to be carefully evaluated. In our current paper on the role of ChatGPT in clinical medicine, particularly in the field of autoimmunity, we aimed to illustrate the implication of this technology alongside the latest utilization and limitations. In addition, we included an expert opinion on the cyber-related aspects of the bot potentially contributing to the risks attributed to its use, alongside proposed defense mechanisms. All of that, while taking into consideration the rapidity of the continuous improvement AI experiences on a daily basis. | Darkhabani M; Alrifaai MA; Elsalti A; Dvir YM; Mahroum N | 10 |
| 37162073 | FUTURE OF THE LANGUAGE MODELS IN HEALTHCARE: THE ROLE OF CHATGPT. | 2023 | Arquivos brasileiros de cirurgia digestiva : ABCD = Brazilian archives of digestive surgery | | | | The field of medicine has always been at the forefront of technological innovation, constantly seeking new strategies to diagnose, treat, and prevent diseases. Guidelines for clinical practice to orientate medical teams regarding diagnosis, treatment, and prevention measures have increased over the years. The purpose is to gather the most medical knowledge to construct an orientation for practice. Evidence-based guidelines follow several main characteristics of a systematic review, including systematic and unbiased search, selection, and extraction of the source of evidence. In recent years, the rapid advancement of artificial intelligence has provided clinicians and patients with access to personalized, data-driven insights, support and new opportunities for healthcare professionals to improve patient outcomes, increase efficiency, and reduce costs. One of the most exciting developments in Artificial Intelligence has been the emergence of chatbots. A chatbot is a computer program used to simulate conversations with human users. Recently, OpenAI, a research organization focused on machine learning, developed ChatGPT, a large language model that generates human-like text. ChatGPT uses a type of AI known as a deep learning model. ChatGPT can quickly search and select pieces of evidence through numerous databases to provide answers to complex questions, reducing the time and effort required to research a particular topic manually. Consequently, language models can accelerate the creation of clinical practice guidelines. While there is no doubt that ChatGPT has the potential to revolutionize the way healthcare is delivered, it is essential to note that it should not be used as a substitute for human healthcare professionals. Instead, ChatGPT should be considered a tool that can be used to augment and support the work of healthcare professionals, helping them to provide better care to their patients. | Tustumi F; Andreollo NA; Aguilar-Nascimento JE | 10 |
| 37223340 | ChatGPT-4 and the Global Burden of Disease Study: Advancing Personalized Healthcare Through Artificial Intelligence in Clinical and Translational Medicine. | 2023 | Cureus | | | | The fusion of insights from the comprehensive global burden of disease (GBD) study and the advanced artificial intelligence of open artificial intelligence (AI) chat generative pre-trained transformer version 4 (ChatGPT-4) brings the potential to transform personalized healthcare planning. By integrating the data-driven findings of the GBD study with the powerful conversational capabilities of ChatGPT-4, healthcare professionals can devise customized healthcare plans that are adapted to patients' lifestyles and preferences. We propose that this innovative partnership can lead to the creation of a novel AI-assisted personalized disease burden (AI-PDB) assessment and planning tool. For the successful implementation of this unconventional technology, it is crucial to ensure continuous and accurate updates, expert supervision, and address potential biases and limitations. Healthcare professionals and stakeholders should have a balanced and dynamic approach, emphasizing interdisciplinary collaborations, data accuracy, transparency, ethical compliance, and ongoing training. By investing in the unique strengths of both ChatGPT-4, especially its newly introduced features such as live internet browsing or plugins, and the GBD study, we may enhance personalized healthcare planning. This innovative approach has the potential to improve patient outcomes and optimize resource utilization, as well as pave the way for the worldwide implementation of precision medicine, thereby revolutionizing the existing healthcare landscape. However, to fully harness these benefits at both the global and individual levels, further research and development are warranted. This will ensure that we effectively tap into the potential of this synergy, bringing societies closer to a future where personalized healthcare is the norm rather than the exception. | Temsah MH; Jamal A; Aljamaan F; Al-Tawfiq JA; Al-Eyadhy A | 10 |
| 38482764 | [Applications, techniques, and best practices for using ChatGPT]. | 2024 | Revue medicale suisse | | | | The future of a machine writing our reports for us could also lead to it carrying out our consultations, a scenario whose relevance is open to debate. Nevertheless, the present offers us new artificial intelligence tools that can support us in our daily activities. The publication in 2017 of Transformers initiated a disruptive revolution by enabling the emergence of major language models, of which ChatGPT is the best known. In view of their growing adoption, the authors felt it would be useful to offer some pragmatic advice on how to improve the use of these tools. In this article, we first look at how ChatGPT works and its potential applications in medicine, before providing a practical guide to using it to get the best results. | Garin D; Lovis C | 10 |
| 38935938 | ChatGPT and Medicine: Together We Embrace the AI Renaissance. | 2024 | JMIR bioinformatics and biotechnology | | | | The generative artificial intelligence (AI) model ChatGPT holds transformative prospects in medicine. The development of such models has signaled the beginning of a new era where complex biological data can be made more accessible and interpretable. ChatGPT is a natural language processing tool that can process, interpret, and summarize vast data sets. It can serve as a digital assistant for physicians and researchers, aiding in integrating medical imaging data with other multiomics data and facilitating the understanding of complex biological systems. The physician's and AI's viewpoints emphasize the value of such AI models in medicine, providing tangible examples of how this could enhance patient care. The editorial also discusses the rise of generative AI, highlighting its substantial impact in democratizing AI applications for modern medicine. While AI may not supersede health care professionals, practitioners incorporating AI into their practices could potentially have a competitive edge. | Hacking S | 10 |
| 39017630 | Solving Advanced Task-Specific Problems in Measurement Sciences with Generative AI. | 2024 | Analytical chemistry | | | | The Generative Pre-Trained Transformer known as ChatGPT-4 has undergone extensive pretraining on a diverse data set, enabling it to generate coherent and contextually relevant text based on the input it receives. This capability allows it to perform tasks from answering questions and has attracted significant interest in material sciences, synthetic chemistry, and drug discovery. In this work, we posed four advanced task-specific problems to ChatGPT, which were recently published in leading journals for topics in analytical chemistry, spectroscopy, bioimage super-resolution, and electrochemistry. ChatGPT-4 successfully implemented the four ideas after assigning the "persona" to the AI and posing targeted questions. We show two cases where "unguided" ChatGPT could complete the assignments with minimal human direction. The construction of a microwave spectrum from a free induction curve and super-resolution of bioimages was accomplished using this approach. Two other specific tasks, correcting a complex baseline with morphological operations of set theory and estimating the diffusion current, required additional input, e.g., equations and specific directions from the user. In each case, the MATLAB code was eventually generated by ChatGPT-4 even when the original authors did not provide any code themselves. We show that a validation test must be implemented to detect and correct mistakes made by ChatGPT-4, followed by feedback correction. These approaches will pave the way for open and transparent science and eliminate the black boxes in measurement science when it comes to advanced data processing. | Wahab MF; Handlovic TT; Roy S; Burk RJ; Armstrong DW | 0-1 |
| 37699647 | ChatGPT and Patient Information in Nuclear Medicine: GPT-3.5 Versus GPT-4. | 2023 | Journal of nuclear medicine technology | | | | The GPT-3.5-powered ChatGPT was released in late November 2022 powered by the generative pretrained transformer (GPT) version 3.5. It has emerged as a readily accessible source of patient information ahead of medical procedures. Although ChatGPT has purported benefits for supporting patient education and information, actual capability has not been evaluated. Moreover, the March 2023 emergence of paid subscription access to GPT-4 promises further enhanced capabilities requiring evaluation. Methods: ChatGPT was used to generate patient information sheets suitable for gaining informed consent for 7 common procedures in nuclear medicine. Responses were generated independently for both GPT-3.5 and GPT-4 architectures. Specific procedures were selected that had a long-standing history of use to avoid any bias associated with the September 2021 learning cutoff that constrains both GPT-3.5 and GPT-4 architectures. Each information sheet was independently evaluated by 3 expert assessors and ranked on the basis of accuracy, appropriateness, currency, and fitness for purpose. Results: ChatGPT powered by GPT-3.5 provided patient information that was appropriate in terms of being patient-facing but lacked accuracy and currency and omitted important information. GPT-3.5 produced patient information deemed not fit for the purpose. GPT-4 provided patient information enhanced across appropriateness, accuracy, and currency, despite some omission of information. GPT-4 produced patient information that was largely fit for the purpose. Conclusion: Although ChatGPT powered by GPT-3.5 is accessible and provides plausible patient information, inaccuracies and omissions present a risk to patients and informed consent. Conversely, GPT-4 is more accurate and fit for the purpose but, at the time of writing, was available only through a paid subscription. | Currie G; Robbie S; Tually P | 0-1 |
| 39842505 | Is ChatGPT Ready for Public Use in Organ-Specific Drug Toxicity Research? | 2025 | Drug discovery today | | | | The growing impact of large language models (LLMs), such as ChatGPT, prompts questions about the reliability of their application in public health. We compared drug toxicity assessments by GPT-4 for liver, heart, and kidney against expert assessments using US Food and Drug Administration (FDA) drug-labeling documents. Two approaches were assessed: a 'General prompt', mimicking the conversational style used by the general public, and an 'Expert prompt' engineered to represent an approach of an expert. The Expert prompt achieved higher accuracy (64-75%) compared with the General prompt (48-72%), but the overall performance was moderate, indicating that caution is needed when using GPT-4 for public health. To improve reliability, an advanced framework, such as Retrieval Augmented Generation (RAG), might be required to leverage knowledge embedded in GPT-4. | Connor S; Wu L; Roberts RA; Tong W | 10 |
| 38153778 | Empathy and Equity: Key Considerations for Large Language Model Adoption in Health Care. | 2023 | JMIR medical education | | | | The growing presence of large language models (LLMs) in health care applications holds significant promise for innovative advancements in patient care. However, concerns about ethical implications and potential biases have been raised by various stakeholders. Here, we evaluate the ethics of LLMs in medicine along 2 key axes: empathy and equity. We outline the importance of these factors in novel models of care and develop frameworks for addressing these alongside LLM deployment. | Koranteng E; Rao A; Flores E; Lev M; Landman A; Dreyer K; Succi M | 10 |
| 40380650 | Bridging AI and Medical Expertise: ChatGPT's Success on the Medical Specialization Residency Admission Exam in Spain. | 2025 | Studies in health technology and informatics | | | | The growing use of Artificial Intelligence (AI) in healthcare, particularly focusing on the potential of generative AI models like ChatGPT-4 is a trending topic. The study examines how ChatGPT-4 performed on the national Medicine Residency exam in Spain, a highly selective test for accessing the medical specialization training program called MIR. ChatGPT-4 answered 210 questions, including 25 that required image interpretation. The chatbot correctly answered 150 out of 200 questions, achieving an estimated ranking of around 1900-2300 out of 11,577 candidates. This performance would allow access to most medical specialties in Spain. No significant differences were found between questions requiring image analysis and those that did not, but ChatGPT struggled with more difficult questions, showing a higher error rate for complex problems just like a human being. Despite its potential as an educational and problem-solving tool, the study highlights ChatGPT's limitations, including occasional "AI hallucinations" (incorrect or nonsensical answers) and variability in responses when questions were repeated. The study emphasizes that while AI tools such as ChatGPT can assist in education and medical tasks, they cannot replace qualified healthcare professionals, and their output requires careful verification. | Leis A; Mayer MA; Mayer A | 21 |
| 39982968 | Large Language Models as Tools for Molecular Toxicity Prediction: AI Insights into Cardiotoxicity. | 2025 | Journal of chemical information and modeling | | | | The importance of drug toxicity assessment lies in ensuring the safety and efficacy of the pharmaceutical compounds. Predicting toxicity is crucial in drug development and risk assessment. This study compares the performance of GPT-4 and GPT-4o with traditional deep-learning and machine-learning models, WeaveGNN, MorganFP-MLP, SVC, and KNN, in predicting molecular toxicity, focusing on bone, neuro, and reproductive toxicity. The results indicate that GPT-4 is comparable to deep-learning and machine-learning models in certain areas. We utilized GPT-4 combined with molecular docking techniques to study the cardiotoxicity of three specific targets, examining traditional Chinese medicinal materials listed as both food and medicine. This approach aimed to explore the potential cardiotoxicity and mechanisms of action. The study found that components in Black Sesame, Ginger, Perilla, Sichuan Pagoda Tree Fruit, Galangal, Turmeric, Licorice, Chinese Yam, Amla, and Nutmeg exhibit toxic effects on cardiac target Cav1.2. The docking results indicated significant binding affinities, supporting the hypothesis of potential cardiotoxic effects. This research highlights the potential of ChatGPT in predicting molecular properties and its significance in medicinal chemistry, demonstrating its facilitation of a new research paradigm: with a data set, high-accuracy learning models can be generated without requiring computational knowledge or coding skills, making it accessible and easy to use. | Yang H; Xiu J; Yan W; Liu K; Cui H; Wang Z; He Q; Gao Y; Han W | 10 |
| 37546190 | Biomedical Ethical Aspects Towards the Implementation of Artificial Intelligence in Medical Education. | 2023 | Medical science educator | | | | The increasing use of artificial intelligence (AI) in medicine is associated with new ethical challenges and responsibilities. However, special considerations and concerns should be addressed when integrating AI applications into medical education, where healthcare, AI, and education ethics collide. This commentary explores the biomedical ethical responsibilities of medical institutions in incorporating AI applications into medical education by identifying potential concerns and limitations, with the goal of implementing applicable recommendations. The recommendations presented are intended to assist in developing institutional guidelines for the ethical use of AI for medical educators and students. | Busch F; Adams LC; Bressem KK | 10 |
| 37873042 | Healthcare's New Horizon With ChatGPT's Voice and Vision Capabilities: A Leap Beyond Text. | 2023 | Cureus | | | | The integration of artificial intelligence (AI) in healthcare is responsible for a paradigm shift in medicine. OpenAI's recent augmentation of their Generative Pre-trained Transformer (ChatGPT) large language model (LLM) with voice and image recognition capabilities (OpenAI, Delaware) presents another potential transformative tool for healthcare. Envision a healthcare setting where professionals engage in dynamic interactions with ChatGPT to navigate the complexities of atypical medical scenarios. In this innovative landscape, practitioners could solicit ChatGPT's expertise for concise summarizations and insightful extrapolations from a myriad of web-based resources pertaining to similar medical conditions. Furthermore, imagine patients using ChatGPT to identify abnormalities in medical images or skin lesions. While the prospects are diverse, challenges such as suboptimal audio quality and ensuring data security necessitate cautious integration in medical practice. Drawing insights from previous ChatGPT iterations could provide a prudent roadmap for navigating possible challenges. This editorial explores some possible horizons and potential hurdles of ChatGPT's enhanced functionalities in healthcare, emphasizing the importance of continued refinements and vigilance to maximize the benefits while minimizing risks. Through collaborative efforts between AI developers and healthcare professionals, another fusion of AI and healthcare can evolve into enriched patient care and enhanced medical experience. | Temsah R; Altamimi I; Alhasan K; Temsah MH; Jamal A | 10 |
| 38678576 | AI chatbots in pet health care: Opportunities and challenges for owners. | 2024 | Veterinary medicine and science | | | | The integration of artificial intelligence (AI) into health care has seen remarkable advancements, with applications extending to animal health. This article explores the potential benefits and challenges associated with employing AI chatbots as tools for pet health care. Focusing on ChatGPT, a prominent language model, the authors elucidate its capabilities and its potential impact on pet owners' decision-making processes. AI chatbots offer pet owners access to extensive information on animal health, research studies and diagnostic options, providing a cost-effective and convenient alternative to traditional veterinary consultations. The fate of a case involving a Border Collie named Sassy demonstrates the potential benefits of AI in veterinary medicine. In this instance, ChatGPT played a pivotal role in suggesting a diagnosis that led to successful treatment, showcasing the potential of AI chatbots as valuable tools in complex cases. However, concerns arise regarding pet owners relying solely on AI chatbots for medical advice, potentially resulting in misdiagnosis, inappropriate treatment and delayed professional intervention. We emphasize the need for a balanced approach, positioning AI chatbots as supplementary tools rather than substitutes for licensed veterinarians. To mitigate risks, the article proposes strategies such as educating pet owners on AI chatbots' limitations, implementing regulations to guide AI chatbot companies and fostering collaboration between AI chatbots and veterinarians. The intricate web of responsibilities in this dynamic landscape underscores the importance of government regulations, the educational role of AI chatbots and the symbiotic relationship between AI technology and veterinary expertise. In conclusion, while AI chatbots hold immense promise in transforming pet health care, cautious and informed usage is crucial. By promoting awareness, establishing regulations and fostering collaboration, the article advocates for a responsible integration of AI chatbots to ensure optimal care for pets. | Jokar M; Abdous A; Rahmanian V | 43 |
| 39677224 | Human-Computer Interaction: A Literature Review of Artificial Intelligence and Communication in Healthcare. | 2024 | Cureus | | | | The integration of artificial intelligence (AI) into healthcare communication has rapidly evolved, driven by advancements in large language models (LLMs) such as Chat Generative Pre-trained Transformer (ChatGPT). This literature review explores AI's role in patient-physician interactions, particularly focusing on its capacity to enhance communication by bridging language barriers, summarizing complex medical data, and offering empathetic responses. AI's strengths lie in its ability to deliver comprehensible, concise, and medically accurate information. Studies indicate AI can outperform human physicians in certain communicative aspects, such as empathy and clarity, with models like ChatGPT and the Medical Pathways Language Model (Med-PaLM) demonstrating high effectiveness in these areas. However, significant challenges remain, including occasional inaccuracies and "hallucinations," where AI-generated content is irrelevant or medically inaccurate. These limitations highlight the need for continued refinement in AI algorithms to ensure reliability and consistency in sensitive healthcare settings. The review underscores the potential of AI as a transformative tool in health communication while advocating for further research and policy development to mitigate risks and enhance AI's integration into clinical practice. | Clay TJ; Da Custodia Steel ZJ; Jacobs C | 10 |
| 39897335 | OpenEvidence: Enhancing Medical Student Clinical Rotations With AI but With Limitations. | 2025 | Cureus | | | | The integration of artificial intelligence (AI) into healthcare has introduced tools that improve medical education and clinical practice. OpenEvidence is an example, providing real-time synthesis and access to medical literature, particularly for medical students during clinical rotations. By enabling efficient searches for clinical guidelines, diagnostic criteria, and therapeutic approaches, it streamlines decision-making and study preparation. Its ability to present recent publications and highlight less commonly discussed treatments supports evidence-based learning. Despite these strengths, OpenEvidence has limitations. It struggles with targeted searches for specific articles, authors, or journals and operates through an opaque curation process. Compared to ChatGPT, which offers conversational interactivity, and UpToDate, known for its comprehensive, CME-accredited content, OpenEvidence lacks certain advanced features. However, its user-friendly design and focus on clinical evidence make it a valuable, accessible alternative. This editorial critically examines OpenEvidence's capabilities and limitations, comparing it with established tools. It emphasizes the need for greater transparency, broader evidence integration, and enhanced functionality to maximize its impact. Addressing these challenges could improve OpenEvidence's utility, supporting a more effective, evidence-based approach to medical education and clinical practice. | Patel N; Grewal H; Buddhavarapu V; Dhillon G | 0-1 |
| 40052277 | Leveraging ChatGPT in cardiogeriatrics. | 2025 | Minerva cardiology and angiology | | | | The integration of artificial intelligence (AI) into healthcare is transforming medical practice, and this holds true also for the prevention, diagnosis and treatment of cardiovascular disease in older patients. Large language models (LLMs) like ChatGPT (OpenAI, San Francisco, CA, USA) represent cutting edge AI tools which may offer significant potential to enhance patient care by improving communication, aiding in diagnosis, and assisting in treatment planning. In elderly patients, who often present with complex health profiles and multiple comorbidities, AI can prove particularly beneficial, and it can analyze extensive data to provide personalized, evidence-based recommendations. For instance, ChatGPT can support clinicians in managing polypharmacy by identifying potential drug interactions and suggesting optimal medication regimens, thereby reducing adverse effects. Additionally, AI tools can help overcome therapeutic inertia by prompting timely treatment adjustments, ensuring that elderly patients receive appropriate interventions. However, the successful implementation of AI in cardiogeriatrics requires robust technological infrastructures, a synergistic integration with electronic health records, and careful consideration of ethical and privacy concerns. Ongoing collaboration between technologists and healthcare professionals is essential to address these challenges and fully realize the benefits of AI in enhancing cardiovascular care for the elderly. | Antonazzo B; Vassiliou VS; Lauretti A; Biondi-Zoccai G | 10 |
37692617
|
Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare.
| 2023
|
Cureus
|
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. 
Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
|
Jeyaraman M; Balaji S; Jeyaraman N; Yadav S
| 10
|
|||
37904646
|
The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing.
| 2024
|
Journal of nursing scholarship : an official publication of Sigma Theta Tau International Honor Society of Nursing
|
The integration of generative artificial intelligence (AI) into academic research writing has revolutionized the field, offering powerful tools like ChatGPT and Bard to aid researchers in content generation and idea enhancement. We explore the current state of transparency regarding generative AI use in nursing academic research journals, emphasizing the need for authors to explicitly declare the use of generative AI in the manuscript. Out of 125 nursing studies journals, 37.6% required explicit statements about generative AI use in their authors' guidelines. No significant differences in impact factors or journal categories were found between journals with and without such a requirement. A similar evaluation of general and internal medicine journals showed a lower percentage (14.5%) including information about generative AI usage. Declaring generative AI tool usage is crucial for maintaining transparency and credibility in academic writing. Additionally, extending the requirement for AI usage declarations to journal reviewers can enhance the quality of peer review and combat predatory journals in the academic publishing landscape. Our study highlights the need for active participation from nursing researchers in discussions surrounding the standardization of generative AI declarations in academic research writing.
|
Tang A; Li KK; Kwok KO; Cao L; Luong S; Tam W
| 10
|
|||
38508662
|
Performance of GPT-4 in Membership of the Royal College of Paediatrics and Child Health-style examination questions.
| 2024
|
BMJ paediatrics open
|
The large language model (LLM) ChatGPT has been shown to have considerable utility across medicine and healthcare. This paper aims to explore the capabilities of GPT-4 (Generative Pre-trained Transformer 4) in answering Membership of the Royal College of Paediatrics and Child Health (MRCPCH) written paper-style questions. GPT-4 was subjected to four publicly available sample papers designed for those preparing to sit MRCPCH theory components. The model received no specialised training or reinforcement. The average score across all four papers was 78.1%. The model provided reasoning for its answers despite this not being required by the questions. This performance strengthens the case for incorporating LLMs into supporting roles for practising clinicians and medical education in paediatrics.
|
Armitage R
| 21
|
|||
40256630
|
Large language models and rheumatology: are we there yet?
| 2025
|
Rheumatology advances in practice
|
The last 2 years have marked the beginning of a golden age for natural language processing in medicine. The arrival of large language models (LLMs) and multimodal models have raised new opportunities and challenges for research and clinical practice. In rheumatology, a specialty rich in data and requiring complex decision-making, the use of these tools may transform diagnostic procedures, improve patient interaction and simplify data management, leading to more personalized and efficient healthcare outcomes. The objective of this article is to present an overview of the status of LLMs in the field of rheumatology while discussing some of the challenges ahead in this area of great potential.
|
Benavent D; Madrid-Garcia A
| 10
|
|||
39776586
|
Large language models facilitating modern molecular biology and novel drug development.
| 2024
|
Frontiers in pharmacology
|
The latest breakthroughs in information technology and biotechnology have catalyzed a revolutionary shift within the modern healthcare landscape, with notable impacts from artificial intelligence (AI) and deep learning (DL). Particularly noteworthy is the adept application of large language models (LLMs), which enable seamless and efficient communication between scientific researchers and AI systems. These models capitalize on neural network (NN) architectures that demonstrate proficiency in natural language processing, thereby enhancing interactions. This comprehensive review outlines the cutting-edge advancements in the application of LLMs within the pharmaceutical industry, particularly in drug development. It offers a detailed exploration of the core mechanisms that drive these models and zeroes in on the practical applications of several models that show great promise in this domain. Additionally, this review delves into the pivotal technical and ethical challenges that arise with the practical implementation of LLMs. There is an expectation that LLMs will assume a more pivotal role in the development of innovative drugs and will ultimately contribute to the accelerated development of revolutionary pharmaceuticals.
|
Liu XH; Lu ZH; Wang T; Liu F
| 10
|
|||
38247934
|
ChatGPT in Occupational Medicine: A Comparative Study with Human Experts.
| 2024
|
Bioengineering (Basel, Switzerland)
|
The objective of this study is to evaluate ChatGPT's accuracy and reliability in answering complex medical questions related to occupational health and to explore the implications and limitations of AI in occupational health medicine. The study also provides recommendations for future research in this area and informs decision-makers about AI's impact on healthcare. A group of physicians was enlisted to create a dataset of questions and answers on Italian occupational medicine legislation. The physicians were divided into two teams, and each team member was assigned a different subject area. ChatGPT was used to generate answers for each question, with and without legislative context. The two teams then blindly evaluated the human- and AI-generated answers, with each group reviewing the other group's work. On a 5-point Likert scale, occupational physicians outperformed ChatGPT in generating accurate answers, while the answers provided by ChatGPT with access to legislative texts were comparable to those of professional doctors. Still, we found that users tend to prefer answers generated by humans, indicating that while ChatGPT is useful, users still value the opinions of occupational medicine professionals.
|
Padovan M; Cosci B; Petillo A; Nerli G; Porciatti F; Scarinci S; Carlucci F; Dell'Amico L; Meliani N; Necciari G; Lucisano VC; Marino R; Foddis R; Palla A
| 43
|
|||
40122672
|
The role of large language models in the peer-review process: opportunities and challenges for medical journal reviewers and editors.
| 2025
|
Journal of educational evaluation for health professions
|
The peer review process ensures the integrity of scientific research. This is particularly important in the medical field, where research findings directly impact patient care. However, the rapid growth of publications has strained reviewers, causing delays and potential declines in quality. Generative artificial intelligence, especially large language models (LLMs) such as ChatGPT, may assist researchers with efficient, high-quality reviews. This review explores the integration of LLMs into peer review, highlighting their strengths in linguistic tasks and challenges in assessing scientific validity, particularly in clinical medicine. Key points for integration include initial screening, reviewer matching, feedback support, and language review. However, implementing LLMs for these purposes will necessitate addressing biases, privacy concerns, and data confidentiality. We recommend using LLMs as complementary tools under clear guidelines to support, not replace, human expertise in maintaining rigorous peer review standards.
|
Lee J; Lee J; Yoo JJ
| 10
|
|||
38073698
|
Potential and limitations of ChatGPT and generative artificial intelligence in medical safety education.
| 2023
|
World journal of clinical cases
|
The primary objectives of medical safety education are to provide the public with essential knowledge about medications and to foster a scientific approach to drug usage. The era of using artificial intelligence to revolutionize medical safety education has already dawned, and ChatGPT and other generative artificial intelligence models have immense potential in this domain. Notably, they offer a wealth of knowledge, anonymity, continuous availability, and personalized services. However, the practical implementation of generative artificial intelligence models such as ChatGPT in medical safety education still faces several challenges, including concerns about the accuracy of information, legal responsibilities, and ethical obligations. Moving forward, it is crucial to intelligently upgrade ChatGPT by leveraging the strengths of existing medical practices. This task involves further integrating the model with real-life scenarios and proactively addressing ethical and security issues with the ultimate goal of providing the public with comprehensive, convenient, efficient, and personalized medical services.
|
Wang X; Liu XQ
| 10
|
|||
40067594
|
Transforming plastic surgery: an innovative role of Chat GPT in plastic surgery practices.
| 2025
|
Updates in surgery
|
The proliferation of artificial intelligence (AI) in the healthcare sector is a present reality. The potential applications of Chat GPT in medicine are currently undergoing intense examination. This article seeks to examine the innovative capabilities and applications of Chat GPT in this field, highlighting its potential to revolutionize patient care and decision-making processes. PubMed, Scopus, Embase, Google Scholar, and Web of Science were searched by conducting a keyword search to locate studies examining the application of Chat GPT in the realm of plastic and reconstructive surgery. The titles, abstracts, and conclusions of the studies were scrutinized to select those most closely aligned with the focus of our study. This investigation involved a comprehensive review of 15 relevant articles from diverse geographical regions, predominantly comprising original studies alongside five review articles. This study illustrates the significant promise of integrating Chat GPT across diverse areas of plastic surgery, encompassing research, surgeon and patient education, and clinical practice. However, the incorporation of Chat GPT into plastic surgery necessitates diligent oversight and the formulation of explicit guidelines, and caution is necessary.
|
Mehraeen E; Attarian N; Tabari A; SeyedAlinaghi S
| 32
|
|||
40330395
|
A Practical Guide to the Utilization of ChatGPT in the Emergency Department: A Systematic Review of Current Applications, Future Directions, and Limitations.
| 2025
|
Cureus
|
The rapid development of artificial intelligence (AI) tools across various medical specialties highlights the potential for AI to transform medicine over the next 20 years. Despite this potential, the adoption of AI can feel incremental and disconnected from the daily practice of individual clinicians. For emergency department (ED) physicians practicing in 2025, recognizing and evaluating AI tools available for immediate integration into practice is essential. One such tool is ChatGPT (OpenAI, San Francisco, California, United States), a large language model (LLM) that is free, easily accessible via smartphones or computers, and widely used across industries. However, its usability in the ED setting remains poorly characterized. This review explores the current evidence surrounding ChatGPT 4's applications in various ED physician tasks, documenting its strengths and limitations. While ChatGPT demonstrates significant utility in language generation and administrative tasks, its potential for supporting more complex tasks in medical decision-making is emerging but not yet robust. The available evidence is limited and variable and lacks standardization, reflecting a field still in its early stages of development. Notably, the performance improvements observed between ChatGPT 3.5 and ChatGPT 4 suggest that future iterations, such as the anticipated release of ChatGPT 5, could significantly impact these findings. This review provides a comprehensive snapshot of the current state of evidence regarding ChatGPT's use in the ED, offering both an evaluation of its capabilities and a practical guide for its appropriate use by ED clinicians today.
|
Meyer NS; Meyer JW
| 10
|
|||
37599212
|
[Applications and challenges of large language models in critical care medicine].
| 2023
|
Zhonghua yi xue za zhi
|
The rapid development of big data methods and technologies has provided new ideas and approaches for clinical diagnosis and treatment. The emergence of large language models (LLMs) has made human-computer interactive dialogue and applications in complex medical scenarios possible. Critical care medicine is a process of continuous, dynamic, targeted treatment, and the vast amount of data generated in this process needs to be integrated and optimized through models for clinical application, interactive teaching simulation, and research assistance. LLMs such as the generative pre-trained transformer ChatGPT can already be applied, in a preliminary way, to the diagnosis of severe diseases, the prediction of mortality risk, and the management of medical records. At the same time, ChatGPT's temporal and contextual limitations, hallucinations, and ethical and moral issues have also become apparent. In the future, LLMs may well play a major role in the diagnosis and treatment of critical care medicine, but at present their conclusions should be judged carefully against established clinical knowledge in critical care medicine.
|
Su LX; Weng L; Li WX; Long Y
| 10
|
|||
37755165
|
A Bibliometric Analysis of the Rise of ChatGPT in Medical Research.
| 2023
|
Medical sciences (Basel, Switzerland)
|
The rapid emergence of publicly accessible artificial intelligence platforms such as large language models (LLMs) has led to an equally rapid increase in articles exploring their potential benefits and risks. We performed a bibliometric analysis of ChatGPT literature in medicine and science to better understand publication trends and knowledge gaps. Following title, abstract, and keyword searches of PubMed, Embase, Scopus, and Web of Science databases for ChatGPT articles published in the medical field, articles were screened for inclusion and exclusion criteria. Data were extracted from included articles, with citation counts obtained from PubMed and journal metrics obtained from Clarivate Journal Citation Reports. After screening, 267 articles were included in the study, most of which were editorials or correspondence with an average of 7.5 +/- 18.4 citations per publication. Published articles on ChatGPT were authored largely in the United States, India, and China. The topics discussed included use and accuracy of ChatGPT in research, medical education, and patient counseling. Among non-surgical specialties, radiology published the most ChatGPT-related articles, while plastic surgery published the most articles among surgical specialties. The average citation number among the top 20 most-cited articles was 60.1 +/- 35.3. Among journals with the most ChatGPT-related publications, there were on average 10 +/- 3.7 publications. Our results suggest that managing the inevitable ethical and safety issues that arise with the implementation of LLMs will require further research exploring the capabilities and accuracy of ChatGPT, to generate policies guiding the adoption of artificial intelligence in medicine and science.
|
Barrington NM; Gupta N; Musmar B; Doyle D; Panico N; Godbole N; Reardon T; D'Amico RS
| 10
|
|||
37987431
|
Evaluating the Efficacy of ChatGPT in Navigating the Spanish Medical Residency Entrance Examination (MIR): Promising Horizons for AI in Clinical Medicine.
| 2023
|
Clinics and practice
|
The rapid progress in artificial intelligence, machine learning, and natural language processing has led to increasingly sophisticated large language models (LLMs) for use in healthcare. This study assesses the performance of two LLMs, the GPT-3.5 and GPT-4 models, in passing the MIR medical examination for access to medical specialist training in Spain. Our objectives included gauging the model's overall performance, analyzing discrepancies across different medical specialties, discerning between theoretical and practical questions, estimating error proportions, and assessing the hypothetical severity of errors committed by a physician. MATERIAL AND METHODS: We studied the 2022 Spanish MIR examination results after excluding those questions requiring image evaluations or having acknowledged errors. The remaining 182 questions were presented to the LLM GPT-4 and GPT-3.5 in Spanish and English. Logistic regression models analyzed the relationships between question length, sequence, and performance. We also analyzed the 23 questions with images, using GPT-4's new image analysis capability. RESULTS: GPT-4 outperformed GPT-3.5, scoring 86.81% in Spanish (p < 0.001). English translations had a slightly enhanced performance. GPT-4 scored 26.1% of the questions with images in English. The results were worse when the questions were in Spanish, 13.0%, although the differences were not statistically significant (p = 0.250). Among medical specialties, GPT-4 achieved a 100% correct response rate in several areas, and the Pharmacology, Critical Care, and Infectious Diseases specialties showed lower performance. The error analysis revealed that while a 13.2% error rate existed, the gravest categories, such as "error requiring intervention to sustain life" and "error resulting in death", had a 0% rate. CONCLUSIONS: GPT-4 performs robustly on the Spanish MIR examination, with varying capabilities to discriminate knowledge across specialties. 
While the model's high success rate is commendable, understanding the error severity is critical, especially when considering AI's potential role in real-world medical practice and its implications for patient safety.
|
Guillen-Grima F; Guillen-Aguinaga S; Guillen-Aguinaga L; Alas-Brun R; Onambele L; Ortega W; Montejo R; Aguinaga-Ontoso E; Barach P; Aguinaga-Ontoso I
| 21
|
|||
38832311
|
Challenges and opportunities of artificial intelligence implementation within sports science and sports medicine teams.
| 2024
|
Frontiers in sports and active living
|
The rapid progress in the development of automation and artificial intelligence (AI) technologies, such as ChatGPT, represents a step-wise change in humans' interactions with technology as part of a broader, complex sociotechnical system. Based on historical parallels to the present moment, such changes are likely to bring forth structural shifts in the nature of work, where near and future technologies will occupy key roles as workers or assistants in sports science and sports medicine multidisciplinary teams (MDTs). This envisioned future may bring enormous benefits, as well as a raft of potential challenges. These challenges include the potential to remove many human roles and allocate them to semi- or fully-autonomous AI. Removing such roles and tasks from humans will make many current jobs and careers untenable, leaving a set of difficult and unrewarding tasks for the humans that remain. Paradoxically, replacing humans with technology increases system complexity and makes systems more prone to failure. The automation and AI boom also brings substantial opportunities. Among them are automated sentiment analysis and Digital Twin technologies, which may reveal novel insights into athlete health and wellbeing and team tactical patterns, respectively. However, without due consideration of the interactions between humans and technology in the broader system of sport, adverse impacts are likely to be felt. Human and AI teamwork may require new ways of thinking.
|
Naughton M; Salmon PM; Compton HR; McLean S
| 32
|
|||
38384621
|
Can DALL-E 3 Reliably Generate 12-Lead ECGs and Teaching Illustrations?
| 2024
|
Cureus
|
The recent integration of the latest image generation model DALL-E 3 into ChatGPT allows text prompts to easily generate the corresponding images, enabling multimodal output from ChatGPT. We explored the feasibility of DALL-E 3 for drawing a 12-lead ECG and found that it can draw rudimentary 12-lead electrocardiograms (ECG) displaying some of the parameters, although the details are not completely accurate. We also explored DALL-E 3's capacity to create vivid illustrations for teaching resuscitation-related medical knowledge. DALL-E 3 produced accurate CPR illustrations emphasizing proper hand placement and technique. For ECG principles, it produced creative heart-shaped waveforms tying ECGs to the heart. With further training, DALL-E 3 shows promise to expand easy-to-understand visual medical teaching materials and ECG simulations for different disease states. In conclusion, DALL-E 3 has the potential to generate realistic 12-lead ECGs and teaching schematics, but expert validation is still needed.
|
Zhu L; Mou W; Wu K; Zhang J; Luo P
| 0-1
|
|||
38290759
|
Potential applications and implications of large language models in primary care.
| 2024
|
Family medicine and community health
|
The recent release of highly advanced generative artificial intelligence (AI) chatbots, including ChatGPT and Bard, which are powered by large language models (LLMs), has attracted growing mainstream interest in their diverse applications in clinical practice, including in health and healthcare. The potential applications of LLM-based programmes in the medical field range from assisting medical practitioners in improving their clinical decision-making and streamlining administrative paperwork to empowering patients to take charge of their own health. However, despite the broad range of benefits, the use of such AI tools also comes with several limitations and ethical concerns that warrant further consideration, encompassing issues related to privacy, data bias, and the accuracy and reliability of information generated by AI. The focus of prior research has primarily centred on the broad applications of LLMs in medicine. To the author's knowledge, this is the first article that consolidates current and pertinent literature on LLMs to examine their potential in primary care. The objectives of this paper are not only to summarise the potential benefits, risks and challenges of using LLMs in primary care, but also to offer insights into considerations that primary care clinicians should take into account when deciding to adopt and integrate such technologies into their clinical practice.
|
Andrew A
| 10
|
|||
38028668
|
Overview of Chatbots with special emphasis on artificial intelligence-enabled ChatGPT in medical science.
| 2023
|
Frontiers in artificial intelligence
|
The release of ChatGPT has initiated new thinking about AI-based Chatbots and their applications and has drawn huge public attention worldwide. Researchers and doctors have started thinking about the promise and application of AI-related large language models in medicine during the past few months. Here, this comprehensive review presents an overview of Chatbots and ChatGPT and their current role in medicine. Firstly, the general idea of Chatbots, their evolution, architecture, and medical use are discussed. Secondly, ChatGPT is discussed with special emphasis on its application in medicine, its architecture and training methods, medical diagnosis and treatment, and research ethical issues, and a comparison of ChatGPT with other NLP models is presented. The article also discusses the limitations and prospects of ChatGPT. In the future, these large language models and ChatGPT will hold immense promise in healthcare. However, more research is needed in this direction.
|
Chakraborty C; Pal S; Bhattacharya M; Dash S; Lee SS
| 10
|
|||
38524814
|
Using artificial intelligence for exercise prescription in personalised health promotion: A critical evaluation of OpenAI's GPT-4 model.
| 2024
|
Biology of sport
|
The rise of artificial intelligence (AI) applications in healthcare provides new possibilities for personalized health management. AI-based fitness applications are becoming more common, facilitating the opportunity for individualised exercise prescription. However, the use of AI carries the risk of inadequate expert supervision, and the efficacy and validity of such applications have not been thoroughly investigated, particularly in the context of diverse health conditions. The aim of the study was to critically assess the efficacy of exercise prescriptions generated by OpenAI's Generative Pre-Trained Transformer 4 (GPT-4) model for five example patient profiles with diverse health conditions and fitness goals. Our focus was to assess the model's ability to generate exercise prescriptions based on a singular, initial interaction, akin to a typical user experience. The evaluation was conducted by leading experts in the field of exercise prescription. Five distinct scenarios were formulated, each representing a hypothetical individual with a specific health condition and fitness objective. Upon receiving details of each individual, the GPT-4 model was tasked with generating a 30-day exercise program. These AI-derived exercise programs were subsequently subjected to a thorough evaluation by experts in exercise prescription. The evaluation encompassed adherence to established principles of frequency, intensity, time, and exercise type; integration of perceived exertion levels; consideration for medication intake and the respective medical condition; and the extent of program individualization tailored to each hypothetical profile. The AI model could create general safety-conscious exercise programs for various scenarios. However, the AI-generated exercise prescriptions lacked precision in addressing individual health conditions and goals, often prioritizing excessive safety over the effectiveness of training. 
The AI-based approach aimed to ensure patient improvement through gradual increases in training load and intensity, but the model's ability to fine-tune its recommendations through ongoing interaction was not fully satisfactory. AI technologies, in their current state, can serve as supplemental tools in exercise prescription, particularly in enhancing accessibility for individuals unable to access often costly professional advice. However, AI technologies are not yet recommended as a substitute for personalized, progressive, and health condition-specific prescriptions provided by healthcare and fitness professionals. Further research is needed to explore more interactive use of AI models and the integration of real-time physiological feedback.
|
Dergaa I; Saad HB; El Omri A; Glenn JM; Clark CCT; Washif JA; Guelmami N; Hammouda O; Al-Horani RA; Reynoso-Sanchez LF; Romdhani M; Paineiras-Domingos LL; Vancini RL; Taheri M; Mataruna-Dos-Santos LJ; Trabelsi K; Chtourou H; Zghibi M; Eken O; Swed S; Aissa MB; Shawki HH; El-Seedi HR; Mujika I; Seiler S; Zmijewski P; Pyne DB; Knechtle B; Asif IM; Drezner JA; Sandbakk O; Chamari K
| 0-1
|
|||
37432530
|
Use of ChatGPT on Taiwan's Examination for Medical Doctors.
| 2024
|
Annals of biomedical engineering
|
The study evaluates the performance of OpenAI's GPT-3 model in answering medical exam questions from the Staged Senior Professional and Technical Examinations Regulations for Medical Doctors in the field of internal medicine. The study used the official API to connect the questionnaire with the ChatGPT model, and the results showed that the AI model performed reasonably well, with the highest score of 8/13 in chest medicine. However, the overall performance of the AI model was limited, with only chest medicine scoring more than 60. ChatGPT scored relatively high in chest medicine, gastroenterology, and general medicine. One of the limitations of the study is the use of non-English text, which may affect the model's performance, as the model is primarily trained on English text.
|
Kao YS; Chuang WK; Yang J
| 21
|
|||
37864817
|
The Need for Artificial Intelligence Curriculum in Military Medical Education.
| 2024
|
Military medicine
|
The success of deep-learning algorithms in analyzing complex structured and unstructured multidimensional data has caused an exponential increase in the amount of research devoted to the applications of artificial intelligence (AI) in medicine in the past decade. The public release of large language models like ChatGPT over the past year has generated an unprecedented storm of excitement and rumors of machine intelligence finally reaching or even surpassing human capability in detecting meaningful signals in complex multivariate data. Such enthusiasm, however, is met with an equal degree of both skepticism and fear over the social, legal, and moral implications of such powerful technology with relatively few safeguards or regulations on its development. The question remains in medicine of how to harness the power of AI to improve patient outcomes by increasing the diagnostic accuracy and treatment precision provided by medical professionals. Military medicine, given its unique mission and resource constraints, can benefit immensely from such technology. However, reaping such benefits hinges on the ability of the rising generations of military medical professionals to understand AI algorithms and their applications. Additionally, they should strongly consider working with them as adjunct decision-makers and viewing them as colleagues to access and harness relevant information, as opposed to something to be feared. Ideas expressed in this commentary were formulated by a military medical student during a two-month research elective working on a multidisciplinary team of computer scientists and clinicians at the National Library of Medicine advancing the state of the art of AI in medicine. A motivation to incorporate AI in the Military Health System is provided, including examples of applications in military medicine. 
Rationale is then given for inclusion of AI in education starting in medical school as well as a prudent implementation of these algorithms in a clinical workflow during graduate medical education. Finally, barriers to implementation are addressed along with potential solutions. The end state is not that rising military physicians are technical experts in AI, but rather that they understand how they can leverage its rapidly evolving capabilities to prepare for a future where AI will have a significant role in clinical care. The overall goal is to develop trained clinicians who can leverage these technologies to improve the Military Health System.
|
Spirnak JR; Antani S
| 10
|
|||
39467455
|
Drug repurposing in status epilepticus.
| 2024
|
Epilepsy & behavior : E&B
|
The treatment of status epilepticus (SE) has changed little in the last 20 years, largely because of the high risks and costs of new drug development for SE. Moreover, SE poses specific challenges to drug development, such as patient diversity, logistical hurdles, and the need for acute treatment strategies that differ from chronic seizure prevention. This has reduced the appetite of industry to develop new drugs in this area. Drug repurposing is an attractive approach to address this unmet need. It offers significant advantages, including reduced development time, lower costs, and higher success rates, compared to novel drug development. Here I demonstrate how novel methods integrating biological knowledge and computational methods can be applied to drug repurposing in status epilepticus. Biological approaches focus on addressing mechanisms underlying drug resistance in SE (using for example ketamine, tacrolimus and safinamide) and longer-term consequences (using for example omaveloxolone, celecoxib and losartan). Additionally, artificial intelligence platforms, such as ChatGPT, can rapidly generate promising drug lists, while in silico methods can analyze gene expression changes to predict molecular targets. Combining AI and in silico approaches has identified several candidate drugs, including metformin, sirolimus and riluzole, for SE treatment. Despite the promise of repurposing, challenges remain, such as intellectual property issues and regulatory barriers. Nonetheless, drug repurposing presents a viable solution to the high costs and slow progress of traditional drug development for SE. This paper is based on a presentation made at the 9th London-Innsbruck Colloquium on Status Epilepticus and Acute Seizures, in April 2024.
|
Walker MC
| 0-1
|
|||
37779171
|
Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments.
| 2023
|
Scientific reports
|
The United States Medical Licensing Examination (USMLE) has been a subject of performance study for artificial intelligence (AI) models. However, their performance on questions involving USMLE soft skills remains unexplored. This study aimed to evaluate ChatGPT and GPT-4 on USMLE questions involving communication skills, ethics, empathy, and professionalism. We used 80 USMLE-style questions involving soft skills, taken from the USMLE website and the AMBOSS question bank. A follow-up query was used to assess the models' consistency. The performance of the AI models was compared to that of previous AMBOSS users. GPT-4 outperformed ChatGPT, correctly answering 90% compared to ChatGPT's 62.5%. GPT-4 showed more confidence, not revising any responses, while ChatGPT modified its original answers 82.5% of the time. The performance of GPT-4 was higher than that of AMBOSS's past users. Both AI models, notably GPT-4, showed capacity for empathy, indicating AI's potential to meet the complex interpersonal, ethical, and professional demands intrinsic to the practice of medicine.
|
Brin D; Sorin V; Vaid A; Soroush A; Glicksberg BS; Charney AW; Nadkarni G; Klang E
| 21
|
|||
38247132
|
Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article.
| 2024
|
Obstetrics & gynecology science
|
The use of chatbot technology, particularly chat generative pre-trained transformer (ChatGPT) with an impressive 175 billion parameters, has garnered significant attention across various domains, including Obstetrics and Gynecology (OBGYN). This comprehensive review delves into the transformative potential of chatbots with a special focus on ChatGPT as a leading artificial intelligence (AI) technology. Moreover, ChatGPT harnesses the power of deep learning algorithms to generate responses that closely mimic human language, opening up myriad applications in medicine, research, and education. In the field of medicine, ChatGPT plays a pivotal role in diagnosis, treatment, and personalized patient education. Notably, the technology has demonstrated remarkable capabilities, surpassing human performance in OBGYN examinations, and delivering highly accurate diagnoses. However, challenges remain, including the need to verify the accuracy of the information and address the ethical considerations and limitations. In the wide scope of chatbot technology, AI systems play a vital role in healthcare processes, including documentation, diagnosis, research, and education. Although promising, the limitations and occasional inaccuracies require validation by healthcare professionals. This review also examined global chatbot adoption in healthcare, emphasizing the need for user awareness to ensure patient safety. Chatbot technology holds great promise in OBGYN and medicine, offering innovative solutions while necessitating responsible integration to ensure patient care and safety.
|
Lee Y; Kim SY
| 10
|
|||
38242382
|
Comprehensive evaluation of molecule property prediction with ChatGPT.
| 2024
|
Methods (San Diego, Calif.)
|
The versatility of ChatGPT in performing a diverse range of tasks has elicited considerable interest on its potential applications within professional fields. Taking drug discovery as a testbed, this paper provides a comprehensive evaluation of ChatGPT's ability on molecule property prediction. The study focuses on three aspects: 1) Effects of different prompt settings, where we investigate the impact of varying prompts on the prediction outcomes of ChatGPT; 2) Comprehensive evaluation on molecule property prediction, where we conduct a comprehensive evaluation on 53 ADMET-related endpoints; 3) Analysis of ChatGPT's potential and limitations, where we make comparisons with models tailored for molecule property prediction, thus gaining a more accurate understanding of ChatGPT's capabilities and limitations in this area. Through comprehensive evaluation, we find that 1) With appropriate prompt settings, ChatGPT can attain satisfactory prediction outcomes that are competitive with specialized models designed for those tasks. 2) Prompt settings significantly affect ChatGPT's performance. Among all prompt settings, the strategy of selecting examples in few-shot has the greatest impact on results. Scaffold sampling greatly outperforms random sampling. 3) The capacity of ChatGPT to accomplish high-precision predictions is significantly influenced by the quality of examples provided, which may constrain its practical applicability in real-world scenarios. This work highlights ChatGPT's potential and limitations on molecule property prediction, which we hope can inspire future design and evaluation of Large Language Models within scientific domains.
|
Cai X; Lai H; Wang X; Wang L; Liu W; Wang Y; Wang Z; Cao D; Zeng X
| 0-1
|
|||
38562449
|
Bioinformatics and biomedical informatics with ChatGPT: Year one review.
| 2024
|
ArXiv
|
The year 2023 marked a significant surge in the exploration of applying large language model (LLM) chatbots, notably ChatGPT, across various disciplines. We surveyed the applications of ChatGPT in bioinformatics and biomedical informatics throughout the year, covering omics, genetics, biomedical text mining, drug discovery, biomedical image understanding, bioinformatics programming, and bioinformatics education. Our survey delineates the current strengths and limitations of this chatbot in bioinformatics and offers insights into potential avenues for future developments.
|
Wang J; Cheng Z; Yao Q; Liu L; Xu D; Hu G
| 10
|
|||
39364207
|
Bioinformatics and biomedical informatics with ChatGPT: Year one review.
| 2024
|
Quantitative biology (Beijing, China)
|
The year 2023 marked a significant surge in the exploration of applying large language model chatbots, notably Chat Generative Pre-trained Transformer (ChatGPT), across various disciplines. We surveyed the application of ChatGPT in bioinformatics and biomedical informatics throughout the year, covering omics, genetics, biomedical text mining, drug discovery, biomedical image understanding, bioinformatics programming, and bioinformatics education. Our survey delineates the current strengths and limitations of this chatbot in bioinformatics and offers insights into potential avenues for future developments.
|
Wang J; Cheng Z; Yao Q; Liu L; Xu D; Hu G
| 0-1
|
|||
38384298
|
Generative artificial intelligence in drug discovery: basic framework, recent advances, challenges, and opportunities.
| 2024
|
Frontiers in pharmacology
|
There are two main ways to discover or design small drug molecules. The first involves fine-tuning existing molecules or commercially successful drugs through quantitative structure-activity relationships and virtual screening. The second approach involves generating new molecules through de novo drug design or inverse quantitative structure-activity relationship. Both methods aim to get a drug molecule with the best pharmacokinetic and pharmacodynamic profiles. However, bringing a new drug to market is an expensive and time-consuming endeavor, with the average cost being estimated at around $2.5 billion. One of the biggest challenges is screening the vast number of potential drug candidates to find one that is both safe and effective. The development of artificial intelligence in recent years has been phenomenal, ushering in a revolution in many fields. The field of pharmaceutical sciences has also significantly benefited from multiple applications of artificial intelligence, especially drug discovery projects. Artificial intelligence models are finding use in molecular property prediction, molecule generation, virtual screening, synthesis planning, repurposing, among others. Lately, generative artificial intelligence has gained popularity across domains for its ability to generate entirely new data, such as images, sentences, audios, videos, novel chemical molecules, etc. Generative artificial intelligence has also delivered promising results in drug discovery and development. This review article delves into the fundamentals and framework of various generative artificial intelligence models in the context of drug discovery via de novo drug design approach. Various basic and advanced models have been discussed, along with their recent applications. 
The review also explores recent examples and advances in the generative artificial intelligence approach, as well as the challenges and ongoing efforts to fully harness the potential of generative artificial intelligence in generating novel drug molecules in a faster and more affordable manner. Some clinical-level assets generated from generative artificial intelligence have also been discussed in this review to show the ever-increasing application of artificial intelligence in drug discovery through commercial partnerships.
|
Gangwal A; Ansari A; Ahmad I; Azad AK; Kumarasamy V; Subramaniyan V; Wong LS
| 0-1
|
|||
39278424
|
Editorial Commentary: The Scope of Medical Research Concerning ChatGPT Remains Limited by Lack of Originality.
| 2024
|
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
|
There is no shortage of literature surrounding ChatGPT and whether this large language model can provide accurate and clinically relevant information in response to simulated patient queries. Unfortunately, there is a shortage of literature addressing important considerations beyond these experimental and entertaining uses. Indeed, a trend for redundancy has emerged where most of the literature has applied ChatGPT to the same tasks while simply swapping the subject matter, resulting in a failure to expand the impact and reach of this potentially transformational artificial intelligence (AI) solution. Instead, research addressing pressing health care challenges and a renewed focus on novel use cases will allow for more meaningful research initiatives, product development, and tangible changes at both the system and point-of-care levels. Current target areas of interest in medicine that remain obstacles to patient care include prior authorization, administrative burden, documentation generation, medical triage and diagnosis, and patient communication efficiency. To advance this area of research toward such meaningful applications, a structured framework is necessary. Such frameworks should include problem identification; definition of key performance indicators; multidisciplinary and multi-institutional collaboration of those with domain expertise, including AI engineers and information technology specialists; policy and strategy development driven by executive-level personnel; institutional financial support and investment from key stakeholders for AI infrastructure and maintenance; and critical assessment of AI performance, bias, and equity.
|
Kunze KN
| 10
|
|||
39583413
|
The Role of Artificial Intelligence in Diagnostic Radiology.
| 2024
|
Cureus
|
This article explores the significant impact of artificial intelligence (AI) on radiology through a comprehensive analysis of eight articles published between 2018 and 2024. With the rapid progress of modern science, the diagnostic methods in medicine are subject to change, which creates the need to consider and evaluate new diagnostic techniques such as artificial intelligence. In our study, we will evaluate the diagnostic accuracy of artificial intelligence and radiological image interpretation, as well as the pros and cons of its use and future development prospects in this field. In this article, we also consider the possibility of using GPT-4 for image analysis in radiology. Artificial intelligence is a revolutionary medical tool that can change diagnostic strategies to improve the quality of medical services.
|
Strubchevska O; Kozyk M; Kozyk A; Strubchevska K
| 0-1
|
|||
37638266
|
Exploring the Potential and Limitations of Chat Generative Pre-trained Transformer (ChatGPT) in Generating Board-Style Dermatology Questions: A Qualitative Analysis.
| 2023
|
Cureus
|
This article investigates the limitations of Chat Generative Pre-trained Transformer (ChatGPT), a language model developed by OpenAI, as a study tool in dermatology. The study utilized ChatPDF, an application that integrates PDF files with ChatGPT, to generate American Board of Dermatology Applied Exam (ABD-AE)-style questions from continuing medical education articles from the Journal of the American Board of Dermatology. A qualitative analysis of the questions was conducted by two board-certified dermatologists, assessing accuracy, complexity, and clarity. Out of 40 questions generated, only 16 (40%) were deemed accurate and appropriate for ABD-AE study preparation. The remaining questions exhibited limitations, including low complexity, lack of clarity, and inaccuracies. The findings highlight the challenges faced by ChatGPT in understanding the domain-specific knowledge required in dermatology. Moreover, the model's inability to comprehend the context and generate high-quality distractor options, as well as the absence of image generation capabilities, further hinders its usefulness. The study emphasizes that while ChatGPT may aid in generating simple questions, it cannot replace the expertise of dermatologists and medical educators in developing high-quality, board-style questions that effectively evaluate candidates' knowledge and reasoning abilities.
|
Ayub I; Hamann D; Hamann CR; Davis MJ
| 21
|
|||
37746684
|
Artificial Intelligence: its Future and Impact on Acute Medicine.
| 2023
|
Acute medicine
|
This commentary explores the potential impact of artificial intelligence (AI) in acute medicine, considering its possibilities and challenges. With its ability to simulate human intelligence, AI holds the promise for supporting timely decision-making and interventions in acute care. While AI has significantly contributed to improvements in various sectors, its implementation in healthcare remains limited. The development of AI tools tailored to acute medicine can improve clinical decision-making, and AI's role in streamlining administrative tasks, exemplified by ChatGPT, may offer immediate benefits. However, challenges include uniform data collection, privacy, bias, and preserving the doctor-patient relationship. Collaboration among AI researchers, healthcare professionals, and policymakers is crucial to harness the potential of AI in acute medicine and create a future where advanced technologies synergistically enhance human expertise.
|
Schinkel M; Paranjape K; Bhagirath SC; Nanayakkara P
| 10
|
|||
38724772
|
"Incorporating large language models into academic neurosurgery: embracing the new era".
| 2024
|
Neurosurgical review
|
This correspondence examines the impact of LLMs, such as ChatGPT, on academic neurosurgery. It emphasises the potential of LLMs in enhancing clinical decision-making, medical education, and surgical practice by providing real-time access to extensive medical literature and data analysis. Although this correspondence acknowledges the opportunities that come with the incorporation of LLMs, it also discusses challenges, such as data privacy, ethical considerations, and regulatory compliance. Additionally, recent studies have assessed the effectiveness of LLMs in perioperative patient communication and medical education, and stressed the need for cooperation between neurosurgeons, data scientists, and AI experts to address these challenges and fully exploit the potential of LLMs in improving patient care and outcomes in neurosurgery.
|
Aamir A; Hafsa H
| 0-1
|
|||
37450276
|
Towards Precision Medicine in Spinal Surgery: Leveraging AI Technologies.
| 2024
|
Annals of biomedical engineering
|
This critique explores the implications of integrating artificial intelligence (AI) technology, specifically OpenAI's advanced language model GPT-4 and its interface, ChatGPT, into the field of spinal surgery. It examines the potential effects of algorithmic bias, unique challenges in surgical domains, access and equity issues, cost implications, global disparities in technology adoption, and the concept of technological determinism. It posits that biases present in AI training data may impact the quality and equity of healthcare outcomes. Challenges related to the unique nature of surgical procedures, including real-time decision-making, are also addressed. Concerns over access, equity, and cost implications underscore the potential for exacerbated healthcare disparities. Global disparities in technology adoption highlight the importance of global collaboration, technology transfer, and capacity building. Finally, the critique challenges the notion of technological determinism, emphasizing the continued importance of human judgement and patient-care provider relationship in healthcare. The critique calls for a comprehensive evaluation of AI technology integration in healthcare to ensure equitable and quality care.
|
Lawson McLean A
| 0-1
|
|||
37758854
|
Response Letter to "Testing ChatGPT's Capabilities for Social Media Content Analysis".
| 2024
|
Aesthetic plastic surgery
|
This editorial discusses the innovative application of ChatGPT in categorizing and analysing social media content, with a focus on aesthetic medical fields. It highlights the revolutionary capabilities of AI in enhancing efficiency and objectivity over traditional human-driven methods. Alongside the benefits, it also considers ethical concerns surrounding privacy, consent, and inherent biases within AI models. The article explores the complexity of categorization, the limitations in understanding human nuances, and the impact on human creativity, including specific applications such as SEO writing. It concludes by emphasizing the need for careful integration of AI in our interconnected world, balancing technological advancements with ethical considerations and a recognition of the unique attributes of human intellect. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
|
Buzzaccarini G; Degliuomini RS; Borin M
| 10
|
|||
39359332
|
OpenAI o1-Preview vs. ChatGPT in Healthcare: A New Frontier in Medical AI Reasoning.
| 2024
|
Cureus
|
This editorial explores the recent advancements in generative artificial intelligence with the newly-released OpenAI o1-Preview, comparing its capabilities to the traditional ChatGPT (GPT-4) model, particularly in the context of healthcare. While ChatGPT has shown many applications for general medical advice and patient interactions, OpenAI o1-Preview introduces new features with advanced reasoning skills using a chain of thought processes that could enable users to tackle more complex medical queries such as genetic disease discovery, multi-system or complex disease care, and medical research support. The article explores some of the new model's potential and other aspects that may affect its usage, like slower response times due to its extensive reasoning approach, yet highlights its potential for reducing hallucinations and offering more accurate outputs for complex medical problems. Ethical challenges, data diversity, access equity, and transparency are also discussed, identifying key areas for future research, including optimizing the use of both models in tandem for healthcare applications. The editorial concludes by advocating for collaborative exploration of all large language models (LLMs), including the novel OpenAI o1-Preview, to fully utilize their transformative potential in medicine and healthcare delivery. This model, with its advanced reasoning capabilities, presents an opportunity to empower healthcare professionals, policymakers, and computer scientists to work together in transforming patient care, accelerating medical research, and enhancing healthcare outcomes. By optimizing the use of several LLM models in tandem, healthcare systems may enhance efficiency and precision, as well as mitigate previous LLM challenges, such as ethical concerns, access disparities, and technical limitations, ushering in a new era of artificial intelligence (AI)-driven healthcare.
|
Temsah MH; Jamal A; Alhasan K; Temsah AA; Malki KH
| 10
|
|||
37605022
|
Testing ChatGPT's Capabilities for Social Media Content Analysis.
| 2024
|
Aesthetic plastic surgery
|
This letter explores the potential of artificial intelligence models, specifically ChatGPT, for content analysis, namely for categorizing social media posts. The primary focus is on Twitter posts with the hashtag #plasticsurgery. Through integrating Python with the OpenAI API, the study provides a designed prompt to categorize tweet content. Looking forward, the utilization of AI in content analysis presents promising opportunities for advancing understanding of complex social phenomena. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266.
|
Haman M; Skolnik M
| 10
|
|||
37362116
|
The Urgent Need for Healthcare Workforce Upskilling and Ethical Considerations in the Era of AI-Assisted Medicine.
| 2023
|
Indian journal of otolaryngology and head and neck surgery : official publication of the Association of Otolaryngologists of India
|
This letter is in response to the article "Enhancing India's Health Care during COVID Era: Role of Artificial Intelligence and Algorithms". While the integration of AI has the potential to improve patient outcomes and reduce the workload of healthcare professionals, there is a need for significant training and upskilling of healthcare providers. There are ethical and privacy concerns related to the use of AI in healthcare, which must be accompanied by rigorous guidelines. One solution to the overburdened healthcare systems in India is the use of new language generation models like ChatGPT to assist healthcare workers in writing discharge summaries. By using these technologies responsibly, we can improve healthcare outcomes and alleviate the burden on overworked healthcare professionals.
|
Rao D
| 10
|
|||
39755237
|
Introduction to Artificial Intelligence and Machine Learning in Pathology and Medicine: Generative and Nongenerative Artificial Intelligence Basics.
| 2025
|
Modern pathology : an official journal of the United States and Canadian Academy of Pathology, Inc
|
This manuscript serves as an introduction to a comprehensive 7-part review article series on artificial intelligence (AI) and machine learning (ML) and their current and future influence within pathology and medicine. This introductory review provides a comprehensive grasp of this fast-expanding realm and its potential to transform medical diagnosis, workflow, research, and education. Fundamental terminology employed in AI-ML is covered using an extensive dictionary. The article also provides a broad overview of the main domains in the AI-ML field, encompassing both generative and nongenerative (traditional) AI, thereby serving as a primer to the other 6 review articles in this series that describe the details about statistics, regulations, bias, ethical dilemmas, and ML-Ops in AI-ML. The intent of these review articles is to better equip individuals who are or will be working in an AI-enabled health care system.
|
Rashidi HH; Pantanowitz J; Hanna MG; Tafti AP; Sanghani P; Buchinsky A; Fennell B; Deebajah M; Wheeler S; Pearce T; Abukhiran I; Robertson S; Palmer O; Gur M; Tran NK; Pantanowitz L
| 10
|
|||
40424337
|
"It is important to consult" a linguist: Verb-Argument Constructions in ChatGPT and human experts' medical and financial advice.
| 2025
|
PloS one
|
This paper adopts a Usage-Based Construction Grammar perspective to compare human- and AI-generated language, focusing on Verb-Argument Constructions (VACs) as a lens for analysis. Specifically, we examine solicited advice texts in two domains-Finance and Medicine-produced by humans and ChatGPT across different GPT models (3.5, 4, and 4o) and interfaces (3.5 Web vs. 3.5 API). Our findings reveal broad consistency in the frequency and distribution of the most common VACs across human- and AI-generated texts, though ChatGPT exhibits a slightly higher reliance on the most frequent constructions. A closer examination of the verbs occupying these constructions uncovers significant differences in the meanings conveyed, with a notable growth away from human-like language production in macro level perspectives (e.g., length) and towards humanlike verb-VAC patterns with newer models. These results underscore the potential of VACs as a powerful tool for analyzing AI-generated language and tracking its evolution over time.
|
Casal JE; Stewart CM; Windsor AJ
| 10
|
|||
36869927
|
Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios.
| 2023
|
Journal of medical systems
|
This paper aims to highlight the potential applications and limits of a large language model (LLM) in healthcare. ChatGPT is a recently developed LLM that was trained on a massive dataset of text for dialogue with users. Although AI-based language models like ChatGPT have demonstrated impressive capabilities, it is uncertain how well they will perform in real-world scenarios, particularly in fields such as medicine where high-level and complex thinking is necessary. Furthermore, while the use of ChatGPT in writing scientific articles and other scientific outputs may have potential benefits, important ethical concerns must also be addressed. Consequently, we investigated the feasibility of ChatGPT in clinical and research scenarios: (1) support of the clinical practice, (2) scientific production, (3) misuse in medicine and research, and (4) reasoning about public health topics. Results indicated that it is important to recognize and promote education on the appropriate use and potential pitfalls of AI-based LLMs in medicine.
|
Cascella M; Montomoli J; Bellini V; Bignami E
| 10
|
|||
36841840
|
Can artificial intelligence help for scientific writing?
| 2023
|
Critical care (London, England)
|
This paper discusses the use of artificial intelligence chatbots in scientific writing. ChatGPT is a type of chatbot, developed by OpenAI, that uses the Generative Pre-trained Transformer (GPT) language model to understand and respond to natural language inputs. AI chatbots, and ChatGPT in particular, appear to be useful tools in scientific writing, assisting researchers and scientists in organizing material, generating an initial draft, and/or proofreading. There is no publication in the field of critical care medicine prepared using this approach; however, this will be a possibility in the near future. ChatGPT's output should not be used as a replacement for human judgment, and it should always be reviewed by experts before being used in any critical decision-making or application. Moreover, several ethical issues arise around the use of these tools, such as the risk of plagiarism and inaccuracies, as well as a potential imbalance in accessibility between high- and low-income countries if the software becomes a paid service. For this reason, a consensus on how to regulate the use of chatbots in scientific writing will soon be required.
|
Salvagno M; Taccone FS; Gerli AG
| 10
|
|||
37215063
|
ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations.
| 2023
|
Frontiers in artificial intelligence
|
This paper presents an analysis of the advantages, limitations, ethical considerations, future prospects, and practical applications of ChatGPT and artificial intelligence (AI) in the healthcare and medical domains. ChatGPT is an advanced language model that uses deep learning techniques to produce human-like responses to natural language inputs. It is part of the family of generative pre-training transformer (GPT) models developed by OpenAI and is currently one of the largest publicly available language models. ChatGPT is capable of capturing the nuances and intricacies of human language, allowing it to generate appropriate and contextually relevant responses across a broad spectrum of prompts. The potential applications of ChatGPT in the medical field range from identifying potential research topics to assisting professionals in clinical and laboratory diagnosis. Additionally, it can be used to help medical students, doctors, nurses, and all members of the healthcare fraternity to know about updates and new developments in their respective fields. The development of virtual assistants to aid patients in managing their health is another important application of ChatGPT in medicine. Despite its potential applications, the use of ChatGPT and other AI tools in medical writing also poses ethical and legal concerns. These include possible infringement of copyright laws, medico-legal complications, and the need for transparency in AI-generated content. In conclusion, ChatGPT has several potential applications in the medical and healthcare fields. However, these applications come with several limitations and ethical considerations which are presented in detail along with future prospects in medicine and healthcare.
|
Dave T; Athaluri SA; Singh S
| 10
|
|||
37325497
|
Large language models and the emergence phenomena.
| 2,023
|
European journal of radiology open
|
This perspective explores the potential of emergence phenomena in large language models (LLMs) to transform data management and analysis in radiology. We provide a concise explanation of LLMs, define the concept of emergence in machine learning, offer examples of potential applications within the radiology field, and discuss risks and limitations. Our goal is to encourage radiologists to recognize and prepare for the impact this technology may have on radiology and medicine in the near future.
|
Sorin V; Klang E
| 10
|
|||
39689760
|
Generative Artificial Intelligence in Pathology and Medicine: A Deeper Dive.
| 2,025
|
Modern pathology : an official journal of the United States and Canadian Academy of Pathology, Inc
|
This review article builds upon the introductory piece in our 7-part series, delving deeper into the transformative potential of generative artificial intelligence (Gen AI) in pathology and medicine. The article explores the applications of Gen AI models in pathology and medicine, including the use of custom chatbots for diagnostic report generation, synthetic image synthesis for training new models, data set augmentation, hypothetical scenario generation for educational purposes, and the use of multimodal along with multiagent models. This article also provides an overview of the common categories within Gen AI models, discussing open-source and closed-source models, as well as specific examples of popular models such as GPT-4, Llama, Mistral, DALL-E, Stable Diffusion, and their associated frameworks (eg, transformers, generative adversarial networks, diffusion-based neural networks), along with their limitations and challenges, especially within the medical domain. We also review common libraries and tools that are currently deemed necessary to build and integrate such models. Finally, we look to the future, discussing the potential impact of Gen AI on health care, including benefits, challenges, and concerns related to privacy, bias, ethics, application programming interface costs, and security measures.
|
Rashidi HH; Pantanowitz J; Chamanzar A; Fennell B; Wang Y; Gullapalli RR; Tafti A; Deebajah M; Albahra S; Glassy E; Hanna MG; Pantanowitz L
| 10
|
|||
37792344
|
Can ChatGPT Provide Quality Information on Integrative Oncology? A Brief Report.
| 2,024
|
Journal of integrative and complementary medicine
|
This short report evaluated the accuracy and quality of information provided by ChatGPT regarding the use of complementary and integrative medicine for cancer. Using the QUality Evaluation Scoring Tool, a panel of 12 reviewers assessed ChatGPT's responses to 8 questions. The study found that ChatGPT provided moderate-quality responses that were relatively unbiased and not misleading. However, the chatbot's inability to reference specific scientific studies was a significant limitation. Patients with cancer should not rely on ChatGPT for clinical advice until further systematic validation. Future studies should examine how patients perceive ChatGPT's information and its impact on communication with health care professionals.
|
Lam CS; Hua R; Koon HK; Zhou KR; Lam TTN; Lee CP; Lin WL; Wong CL; Lau YM; Loong HH; Chung VC; Cheung YT
| 43
|
|||
38761230
|
Publication Trends and Hot Spots of ChatGPT's Application in the Medicine.
| 2,024
|
Journal of medical systems
|
This study aimed to analyze the current landscape of ChatGPT application in the medical field, assessing the current collaboration patterns and research topic hotspots to understand the impact and trends. By conducting a search in the Web of Science, we collected literature related to the applications of ChatGPT in medicine, covering the period from January 1, 2000 up to January 16, 2024. Bibliometric analyses were performed using CiteSpace (V6.2, Drexel University, PA, USA) and Microsoft Excel (Microsoft Corp., WA, USA) to map the collaboration among countries/regions, the distribution of institutions and authors, and clustering of keywords. A total of 574 eligible articles were included, with 97.74% published in 2023. These articles span various disciplines, particularly in Health Care Sciences Services, with extensive international collaboration involving 73 countries. In terms of countries/regions studied, the USA, India, and China led in the number of publications. The USA not only published nearly half of the total number of papers but also exhibited the highest collaborative capability. Regarding the co-occurrence of institutions and scholars, the National University of Singapore and Harvard University held significant influence in the cooperation network, with the most prolific authors in terms of publications being Wiwanitkit V (10 articles), Seth I (9 articles), Klang E (7 articles), and Kleebayoon A (7 articles). Through keyword clustering, the study identified 9 research theme clusters, among which "digital health" was not only the largest in scale but also had the most citations. The study highlights ChatGPT's cross-disciplinary nature and collaborative research in medicine, showcasing its growth potential, particularly in digital health and clinical decision support. Future exploration should examine the socio-economic and cultural impacts of this trend, along with ChatGPT's specific technical uses in medical practice.
|
Li ZQ; Wang XF; Liu JP
| 10
|
|||
39254919
|
Assessing knowledge about medical physics in language-generative AI with large language model: using the medical physicist exam.
| 2,024
|
Radiological physics and technology
|
This study aimed to evaluate performance in answering the Japanese medical physicist examination and to provide a benchmark of medical physics knowledge for language-generative AI with large language models. We used questions from Japan's 2018, 2019, 2020, 2021 and 2022 medical physicist board examinations, which covered various question types, including multiple-choice questions, and mainly focused on general medicine and medical physics. ChatGPT-3.5 and ChatGPT-4.0 (OpenAI) were used. We compared the AI-based answers with the correct ones. The average accuracy rates were 42.2 +/- 2.5% (ChatGPT-3.5) and 72.7 +/- 2.6% (ChatGPT-4), showing that ChatGPT-4 was more accurate than ChatGPT-3.5 [all categories (except for radiation-related laws and recommendations/medical ethics): p value < 0.05]. Even with the ChatGPT model with higher accuracy, the accuracy rates were less than 60% in two categories: radiation metrology (55.6%), and radiation-related laws and recommendations/medical ethics (40.0%). These data provide the benchmark for knowledge about medical physics in ChatGPT and can be utilized as basic data for the development of various medical physics tools using ChatGPT (e.g., radiation therapy support tools with Japanese input).
|
Kadoya N; Arai K; Tanaka S; Kimura Y; Tozuka R; Yasui K; Hayashi N; Katsuta Y; Takahashi H; Inoue K; Jingu K
| 21
|
|||
38691404
|
Integrating Text and Image Analysis: Exploring GPT-4V's Capabilities in Advanced Radiological Applications Across Subspecialties.
| 2,024
|
Journal of medical Internet research
|
This study demonstrates that GPT-4V outperforms GPT-4 across radiology subspecialties in analyzing 207 cases with 1312 images from the Radiological Society of North America Case Collection.
|
Busch F; Han T; Makowski MR; Truhn D; Bressem KK; Adams L
| 0-1
|
|||
39451178
|
An investigative analysis - ChatGPT's capability to excel in the Polish speciality exam in pathology.
| 2,024
|
Polish journal of pathology : official journal of the Polish Society of Pathologists
|
This study evaluates the effectiveness of the ChatGPT-3.5 language model in providing correct answers to pathomorphology questions as required by the State Speciality Examination (PES). Artificial intelligence (AI) in medicine is generating increasing interest, but its potential needs thorough evaluation. A set of 119 exam questions, categorized by type and subtype, was posed to the ChatGPT-3.5 model. Performance was analysed with regard to the success rate in different question categories and subtypes. ChatGPT-3.5 achieved a performance of 45.38%, which is significantly below the minimum PES pass threshold. The results achieved varied by question type and subtype, with better results in questions requiring "comprehension and critical thinking" than "memory". The analysis shows that, although ChatGPT-3.5 can be a useful teaching tool, its performance in providing correct answers to pathomorphology questions is significantly lower than that of human respondents. This conclusion highlights the need to further improve the AI model, taking into account the specificities of the medical field. Artificial intelligence can be helpful, but it cannot fully replace the experience and knowledge of specialists.
|
Bielowka M; Kufel J; Rojek M; Kaczynska D; Czogalik L; Mitrega A; Bartnikowska W; Kondol D; Palkij K; Mielcarska S
| 21
|
|||
39550662
|
[Not Available].
| 2,024
|
Recenti progressi in medicina
|
This study explores the potential use of ChatGPT, an AI-based language model, in assessing herbal-drug interactions (HDIs) to enhance clinical decision-making. HDIs can pose significant health risks by reducing drug efficacy or causing unwanted side effects. Clinical pharmacists play a key role in identifying these HDIs, and currently, there are limited tools available for checking drug interactions. The research focuses on a case study of a rectal adenocarcinoma patient treated with capecitabine and 26 supplements, which contain a total of 80 herbal substances. ChatGPT 3.5 was asked three questions regarding potential HDIs: "Are there possible HDIs?", "What is the pharmacokinetic mechanism?", and "What is the bibliographic source of the interaction?". The results were reviewed by an oncology clinical pharmacist and compared to existing databases and independent bibliographic research. The findings highlight ChatGPT's advantage in processing large amounts of data quickly, with 16% of interactions classified as "unlikely", confirmed by the pharmacist. However, 73% of the suggested mechanisms were false positives, and 4% were categorized as "hallucinations". Additionally, most of the bibliographic sources provided by ChatGPT were outdated or unavailable. While ChatGPT proves useful for initial HDI screening, its limitations include outdated data (last updated in January 2022), lack of access to private databases, and occasional inaccuracies. Further applications of AI in this area are recommended, though expert validation remains essential in the clinical decision-making process.
|
Fiordelisi M; Masucci S; Bianco A; Bellero M; Toma D; Campo N; Zichi C; Marino D; Sperti E; Valabrega G; Cena C; Fazzina G; Gasco A
| 10
|
|||
38371109
|
A Comparative Analysis of AI Models in Complex Medical Decision-Making Scenarios: Evaluating ChatGPT, Claude AI, Bard, and Perplexity.
| 2,024
|
Cureus
|
This study rigorously evaluates the performance of four artificial intelligence (AI) language models - ChatGPT, Claude AI, Google Bard, and Perplexity AI - across four key metrics: accuracy, relevance, clarity, and completeness. We used a robust mix of research methods, gathering expert opinions across 14 scenarios to ensure our findings were accurate and dependable. The study showed that Claude AI outperformed the other models by giving more complete responses, with average scores of 3.64 for relevance and 3.43 for completeness compared to the other AI tools. ChatGPT performed consistently well, whereas Google Bard produced unclear, highly variable responses that lacked consistency. These results provide important information about the strengths and weaknesses of AI language models for medical suggestions, helping guide their use and informing future improvements in AI technology. The study shows how AI capabilities measure up to complex medical scenarios.
|
Uppalapati VK; Nag DS
| 10
|
|||
38746668
|
The application of large language models in medicine: A scoping review.
| 2,024
|
iScience
|
This study systematically reviewed the application of large language models (LLMs) in medicine, analyzing 550 selected studies from a vast literature search. LLMs like ChatGPT transformed healthcare by enhancing diagnostics, medical writing, education, and project management. They assisted in drafting medical documents, creating training simulations, and streamlining research processes. Despite their growing utility in assisted diagnosis and improving doctor-patient communication, challenges persisted, including limitations in contextual understanding and the risk of over-reliance. The surge in LLM-related research indicated a focus on medical writing, diagnostics, and patient communication, but highlighted the need for careful integration, considering validation, ethical concerns, and the balance with traditional medical practice. Future research directions suggested a focus on multimodal LLMs, deeper algorithmic understanding, and ensuring responsible, effective use in healthcare.
|
Meng X; Yan X; Zhang K; Liu D; Cui X; Yang Y; Zhang M; Cao C; Wang J; Wang X; Gao J; Wang YG; Ji JM; Qiu Z; Li M; Qian C; Guo T; Ma S; Wang Z; Guo Z; Lei Y; Shao C; Wang W; Fan H; Tang YD
| 10
|
|||
38656706
|
Evaluation of ChatGPT and Gemini large language models for pharmacometrics with NONMEM.
| 2,024
|
Journal of pharmacokinetics and pharmacodynamics
|
To assess ChatGPT 4.0 (ChatGPT) and Gemini Ultra 1.0 (Gemini) large language models on NONMEM coding tasks relevant to pharmacometrics and clinical pharmacology. ChatGPT and Gemini were assessed on tasks mimicking real-world applications of NONMEM. The tasks ranged from providing a curriculum for learning NONMEM, an overview of NONMEM code structure to generating code. Prompts in lay language to elicit NONMEM code for a linear pharmacokinetic (PK) model with oral administration and a more complex model with two parallel first-order absorption mechanisms were investigated. Reproducibility and the impact of "temperature" hyperparameter settings were assessed. The code was reviewed by two NONMEM experts. ChatGPT and Gemini provided NONMEM curriculum structures combining foundational knowledge with advanced concepts (e.g., covariate modeling and Bayesian approaches) and practical skills including NONMEM code structure and syntax. ChatGPT provided an informative summary of the NONMEM control stream structure and outlined the key NONMEM Translator (NM-TRAN) records needed. ChatGPT and Gemini were able to generate code blocks for the NONMEM control stream from the lay language prompts for the two coding tasks. The control streams contained focal structural and syntax errors that required revision before they could be executed without errors and warnings. The code output from ChatGPT and Gemini was not reproducible, and varying the temperature hyperparameter did not reduce the errors and omissions substantively. Large language models may be useful in pharmacometrics for efficiently generating an initial coding template for modeling projects. However, the output can contain errors and omissions that require correction.
|
Shin E; Yu Y; Bies RR; Ramanathan M
| 10
|
|||
39130248
|
Examining the Performance of ChatGPT 3.5 and Microsoft Copilot in Otolaryngology: A Comparative Study with Otolaryngologists' Evaluation.
| 2,024
|
Indian journal of otolaryngology and head and neck surgery : official publication of the Association of Otolaryngologists of India
|
To evaluate the response capabilities, in a public healthcare system otolaryngology job competition examination, of ChatGPT 3.5 and an internet-connected GPT-4 engine (Microsoft Copilot) with the real scores of otolaryngology specialists as the control group. In September 2023, 135 questions divided into theoretical and practical parts were input into ChatGPT 3.5 and an internet-connected GPT-4. The accuracy of AI responses was compared with the official results from otolaryngologists who took the exam, and statistical analysis was conducted using Stata 14.2. Copilot (GPT-4) outperformed ChatGPT 3.5. Copilot achieved a score of 88.5 points, while ChatGPT scored 60 points. Both AIs had discrepancies in their incorrect answers. Despite ChatGPT's proficiency, Copilot displayed superior performance, ranking as the second-best score among the 108 otolaryngologists who took the exam, while ChatGPT was placed 83rd. A chat powered by GPT-4 with internet access (Copilot) demonstrates superior performance in responding to multiple-choice medical questions compared to ChatGPT 3.5.
|
Mayo-Yanez M; Lechien JR; Maria-Saibene A; Vaira LA; Maniaci A; Chiesa-Estomba CM
| 32
|
|||
39774168
|
Healthcare professionals and the public sentiment analysis of ChatGPT in clinical practice.
| 2,025
|
Scientific reports
|
To explore the attitudes of healthcare professionals and the public on applying ChatGPT in clinical practice. The successful application of ChatGPT in clinical practice depends not only on technical performance but, critically, on the attitudes and perceptions of both healthcare professionals and the public. This study has a qualitative design based on artificial intelligence. This study was divided into five steps: data collection, data cleaning, validation of relevance, sentiment analysis, and content analysis using the K-means algorithm. This study comprised 3130 comments amounting to 1,593,650 words. The dictionary method showed positive and negative emotions such as anger, disgust, fear, sadness, surprise, good, and happy emotions. Healthcare professionals prioritized ChatGPT's efficiency but raised ethical and accountability concerns, while the public valued its accessibility and emotional support but expressed worries about privacy and misinformation. Bridging these perspectives by improving reliability, safeguarding privacy, and clearly defining ChatGPT's role is essential for its practical and ethical integration into clinical practice.
|
Lu L; Zhu Y; Yang J; Yang Y; Ye J; Ai S; Zhou Q
| 0-1
|
|||
37952004
|
Evaluation of prompt engineering strategies for pharmacokinetic data analysis with the ChatGPT large language model.
| 2,024
|
Journal of pharmacokinetics and pharmacodynamics
|
To systematically assess the ChatGPT large language model on diverse tasks relevant to pharmacokinetic data analysis. ChatGPT was evaluated with prototypical tasks related to report writing, code generation, non-compartmental analysis, and pharmacokinetic word problems. The writing task consisted of writing an introduction for this paper from a draft title. The coding tasks consisted of generating R code for semi-logarithmic graphing of concentration-time profiles and calculating area under the curve and area under the moment curve from time zero to infinity. Pharmacokinetics word problems on single intravenous, extravascular bolus, and multiple dosing were taken from a pharmacokinetics textbook. Chain-of-thought and problem separation were assessed as prompt engineering strategies when errors occurred. ChatGPT showed satisfactory performance on the report writing and code generation tasks and provided accurate information on the principles and methods underlying pharmacokinetic data analysis. However, ChatGPT had high error rates in numerical calculations involving exponential functions. The outputs generated by ChatGPT were not reproducible: the precise content of the output was variable albeit not necessarily erroneous for different instances of the same prompt. Incorporation of prompt engineering strategies reduced but did not eliminate errors in numerical calculations. ChatGPT has the potential to become a powerful productivity tool for writing, knowledge encapsulation, and coding tasks in pharmacokinetic data analysis. The poor accuracy of ChatGPT in numerical calculations requires resolution before it can be reliably used for PK and pharmacometrics data analysis.
|
Shin E; Ramanathan M
| 10
|
|||
38903274
|
Causality Assessment of Adverse Drug Reaction Toxic Epidermal Necrolysis With the Aid of ChatGPT: A Case Report.
| 2,024
|
Cureus
|
Toxic epidermal necrolysis (TEN) is a severe and potentially fatal adverse drug reaction. This case report presents a 19-year-old male with pulmonary tuberculosis undergoing anti-tubercular therapy who developed TEN. The patient had multiple comorbidities including type 1 diabetes mellitus and multisystem atrophy. ChatGPT was utilized alongside conventional methods to assess causality. While conventional scoring systems estimated mortality at 58.3% (SCORTEN) and 12.3% (ABCD-10), ChatGPT yielded divergent scores. Causality assessment using WHO-Uppsala Monitoring Centre (UMC) and Naranjo's scale indicated rifampicin and isoniazid as probable causative agents. However, ChatGPT provided ambiguous results. The study underscores the potential of AI in pharmacovigilance but emphasizes caution due to discrepancies observed. Collaborative utilization of artificial intelligence (AI) with clinical judgment is advocated to enhance diagnostic accuracy and treatment decisions in adverse drug reactions. This case highlights the importance of integrating AI into drug safety systems while acknowledging its limitations to ensure optimal patient care.
|
Pandya S; Patel C; Sojitra B; Karamata H
| 10
|
|||
37128519
|
A Case of Delusional Disorder With Abuse of Isoniazid, Rifampicin, Pyrazinamide, and Ethambutol, the First-Line Anti-tuberculosis Therapy Drugs in India.
| 2,023
|
Cureus
|
Tuberculosis (TB) and mental illnesses frequently coexist and are both extremely common worldwide. Through the National Program for Elimination of Tuberculosis (NTEP), anti-tuberculosis therapy (ATT) medications are used to treat tuberculosis in India. We report the case of a 45-year-old patient from the state of Andhra Pradesh, India, with comorbid delusional disorder leading to daily ATT drug consumption for the past 20 years. This unusual presentation demonstrates that abuse of a Schedule "H" substance like ATT is also conceivable. To stop "off-label" purchases, strict measures must be taken. Before beginning ATT, evaluating the patient's mental health may be a wise move.
|
Sathiyamoorthi S; Pentapati SSK; Vullanki SS; Avula VCR; Aravindakshan R
| 10
|
|||
37344394
|
The potential of 'Segment Anything' (SAM) for universal intelligent ultrasound image guidance.
| 2,023
|
Bioscience trends
|
Ultrasound image guidance is a method often used to help provide care, and it relies on accurate perception of information, and particularly tissue recognition, to guide medical procedures. It is widely used in various scenarios that are often complex. Recent breakthroughs in large models, such as ChatGPT for natural language processing and Segment Anything Model (SAM) for image segmentation, have revolutionized interaction with information. These large models exhibit a revolutionized understanding of basic information, holding promise for medicine, including the potential for universal autonomous ultrasound image guidance. The current study evaluated the performance of SAM on commonly used ultrasound images and it discusses SAM's potential contribution to an intelligent image-guided framework, with a specific focus on autonomous and universal ultrasound image guidance. Results indicate that SAM performs well in ultrasound image segmentation and has the potential to enable universal intelligent ultrasound image guidance.
|
Ning G; Liang H; Jiang Z; Zhang H; Liao H
| 0-1
|
|||
38353440
|
[ChatGPT in clinical practice: prospects and challenges].
| 2,024
|
Revue medicale suisse
|
Virtually unknown to the greater public before November 2022, ChatGPT was made available in open access in Autumn 2022, driving the perspective of artificial intelligence integration to the forefront of daily life. The field of medicine hasn't been left aside, and sparks as much interest as it does questions. Although this tool has considerable potential for use in clinical practice, it, like others, has limitations that need to be clearly understood to avoid misuse. In addition, the legal framework and issues of data confidentiality are currently poorly defined, and clinicians will need to keep a close eye on legislative developments in this area.
|
Roustan D; Galland-Decker C; Marinoni C; Bastardot F
| 10
|
|||
37885558
|
Using Artificial Intelligence to Assess the Teratogenic Risk of Vitamin A Supplements.
| 2,023
|
Cureus
|
Vitamin A in high doses has been found to be highly teratogenic, leading to severe fetal abnormalities if exposure occurs during pregnancy. Hence, prescription vitamin A acne medications like isotretinoin are highly regulated via programs such as iPledge, which intend to avert fetal exposure to isotretinoin and to educate healthcare providers, pharmacists, and patients about the significant risks associated with isotretinoin and its appropriate usage conditions. However, over-the-counter (OTC) vitamin A supplements are not subject to these requirements, and calculating the vitamin A content of these supplements can be difficult due to the lack of Food and Drug Administration (FDA) regulations and inconsistencies in labeling. If the necessary information is provided, ChatGPT, a generative artificial intelligence (AI) tool, can help the general public calculate the vitamin A content of supplements. Nonetheless, supplement manufacturers do not always provide the data necessary for these calculations.
|
Holla S; Zamil DH; Paidisetty PS; Wang LK; Katta R
| 10
|
|||
38977450
|
Response to "Letter to the Editor-Exploring the Unknown: Evaluating ChatGPT's Performance in Uncovering Novel Aspects of Plastic Surgery and Identifying Areas for Future Innovation".
| 2,024
|
Aesthetic plastic surgery
|
We appreciate Dr. Qi and Dr. Niu for their insightful comments on our study, "Exploring the Unknown: Evaluating ChatGPT's Performance in Uncovering Novel Aspects of Plastic Surgery and Identifying Areas for Future Innovation." Their observations underscore significant considerations in the application of artificial intelligence (AI) in plastic surgery. We agree with their concern about potential biases in ChatGPT's responses. The AI's frequent attribution of the title "parent of plastic surgery" to Sir Harold Delf Gillies, despite gender-neutral terminology, highlights underlying biases from training data. These biases often reflect historical texts and contemporary writings. Addressing them requires refining training datasets for balanced representation and developing algorithms that adjust dynamically to diverse inputs. The authors also question the criteria ChatGPT uses to identify key contributions to plastic surgery. The AI's focus on microsurgery, minimally invasive techniques, and tissue engineering, while significant, may prioritize keyword prevalence over a holistic evaluation. Enhancing ChatGPT's capabilities through targeted training and input from subject matter experts could improve the AI's ability to generate more balanced outputs. The identified bias favoring reconstructive over cosmetic procedures is another critical point. While reconstructive advancements are transformative, cosmetic surgery also has significant innovations. Ensuring ChatGPT presents a balanced view of both reconstructive and cosmetic advancements is essential. This can be achieved by diversifying training data and calibrating the AI to give equitable weight to different subspecialties within plastic surgery. AI models like ChatGPT are proficient in processing and generating information but lack the human elements of creativity, intuition, and emotional depth critical for groundbreaking innovations. 
AI should complement, not replace, the expert judgment and innovative thinking of skilled plastic surgeons. Ensuring the accuracy of AI-generated responses is crucial. Clinicians must verify AI-generated information against established medical literature and clinical guidelines to maintain accuracy in medical practice. Continuous feedback and improvement mechanisms are vital to enhance AI's clinical utility. The improvement of AI in plastic surgery will be driven by active involvement from surgeons, providing comprehensive and balanced data for training to ensure AI systems evolve to support and enhance clinical practice effectively. Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
|
Seth I; Lim B; Rozen WM
| 32
|
|||
37709536
|
ChatGPT: Can You Prepare My Patients for [(18)F]FDG PET/CT and Explain My Reports?
| 2,023
|
Journal of nuclear medicine : official publication, Society of Nuclear Medicine
|
We evaluated whether the artificial intelligence chatbot ChatGPT can adequately answer patient questions related to [(18)F]FDG PET/CT in common clinical indications before and after scanning. Methods: Thirteen questions regarding [(18)F]FDG PET/CT were submitted to ChatGPT. ChatGPT was also asked to explain 6 PET/CT reports (lung cancer, Hodgkin lymphoma) and answer 6 follow-up questions (e.g., on tumor stage or recommended treatment). To be rated "useful" or "appropriate," a response had to be adequate by the standards of the nuclear medicine staff. Inconsistency was assessed by regenerating responses. Results: Responses were rated "appropriate" for 92% of 25 tasks and "useful" for 96%. Considerable inconsistencies were found between regenerated responses for 16% of tasks. Responses to 83% of sensitive questions (e.g., staging/treatment options) were rated "empathetic." Conclusion: ChatGPT might adequately substitute for advice given to patients by nuclear medicine staff in the investigated settings. Improving the consistency of ChatGPT would further increase reliability.
|
Rogasch JMM; Metzger G; Preisler M; Galler M; Thiele F; Brenner W; Feldhaus F; Wetz C; Amthauer H; Furth C; Schatka I
| 43
|
|||
38736088
|
Pancytopenia Due to Folate Deficiency.
| 2,024
|
The Journal of the Association of Physicians of India
|
|
Anandan S; Soman S; Kumar JP; Shajee DS
| 10
|
|||
38736087
|
Digital Technology in Clinical Medicine: Correspondence.
| 2,024
|
The Journal of the Association of Physicians of India
|
We found the article on "The Digital Technology in Clinical Medicine: From Calculators to ChatGPT" interesting.(1) According to Kulkarni et al., humanity has witnessed four important social system changes, starting with the primitive hunter-gatherers and progressing to horticultural, agricultural, industrial, and the current fifth, which is based on digital information technology and has altered the way we present, recognize, and utilize different factors of production. In clinical medicine, digital technology has advanced significantly since the days of computations. According to Kulkarni et al., we have to benefit from these advancements as we improve the lives of our patients while being cautious not to overturn the doctor-patient relationship. If technology, clinical expertise, and humanistic values are properly balanced, Kulkarni et al. concluded that the future is quite glorious.(1) Regulatory organizations are pushing for improvements through clinical trials as a result of recognition of the expanding influence of digital technology in healthcare delivery. The "World Health Organization's Guidelines for Digital Interventions" and the "Food and Drug Administration's Digital Health Center of Excellence" are only two of the projects highlighted in the study as efforts to analyze and implement digital health services.
|
Kleebayoon A; Wiwanitkit V
| 10
|
|||
38913203
|
Exploring the Unknown: Evaluating ChatGPT's Performance in Uncovering Novel Aspects of Plastic Surgery and Identifying Areas for Future Innovation.
| 2024
|
Aesthetic plastic surgery
|
We have perused with keen interest the scholarly article titled "Exploring the Unknown: Evaluating ChatGPT's Performance in Uncovering Novel Aspects of Plastic Surgery and Identifying Areas for Future Innovation" penned by Lim et al. in the esteemed journal "Aesthetic Plastic Surgery". This paper evaluates ChatGPT's potential application in plastic surgery, exploring its responses on various themes including pioneers, advancements, and techniques, as well as flap grafting. While it offers valuable insights, questions arise. Firstly, ChatGPT's attribution of plastic surgery's progenitor to Sir Harold Delf Gillies raises concerns of underlying biases in its responses. Secondly, its assertion on paramount contributions prompts reflection on its discernment criteria. Can targeted training enhance its accuracy? Lastly, the discourse questions biases favoring reconstructive over cosmetic procedures. ChatGPT's responses, while proficient in addressing medical queries, face ongoing veracity challenges, necessitating clinician scrutiny. However, this scrutiny may surpass user expertise, requiring additional measures for accuracy assurance. Artificial Intelligence (AI) replicates human cognitive abilities but lacks human richness, potentially affecting transformative insights in plastic surgery's future trajectory. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
|
Qi W; Niu F
| 32
|
|||
39882908
|
Ejtm3 experiences after ChatGPT and other AI approaches: values, risks, countermeasures.
| 2025
|
European journal of translational myology
|
We invariably hear that Artificial Intelligence (AI), a rapidly evolving technology, does not just creatively assemble known knowledge. We are told that AI learns, processes and creates, starting from fixed points to arrive at innovative solutions. In the case of scientific work, AI can generate data without ever having entered a laboratory (i.e., blatantly plagiarizing the existing literature, a despicable old trick). How does an editor of a scientific journal recognize when she or he is faced with something like this? The solution is for editors and referees to rigorously evaluate the track records of submitting authors and what they are doing. For example, false color evaluations of 2D and 3D CT and MRI images have been used to validate functional electrical stimulation for degenerated denervated muscle and a home Full-Body In-Bed Gym program. These have been recently published in Ejtm and other journals. The editors and referees of Ejtm can exclude the possibility that the images were invented by ChatGPT. Why? Because they know the researchers: Marco Quadrelli, Aldo Morra, Daniele Coraci, Paolo Gargiulo and their collaborators as well! Artificial intelligence is not banned by the EJTM, but when submitting their manuscripts to previous Thematic Sections and to a new Thematic Section dedicated to Generative AI in Translational Mobility Medicine, authors must openly declare whether they have used artificial intelligence, of what type and for what purposes. This will not avoid risks of plagiarism or worse, but it will better establish possible liabilities.
|
Fano-Illic G; Coraci D; Maccarone MC; Masiero S; Quadrelli M; Morra A; Ravara B; Pond A; Forni R; Gargiulo P
| 10
|
|||
37065364
|
Extreme Hyperthermia Due to Methamphetamine Toxicity Presenting As ST-Elevation Myocardial Infarction on EKG: A Case Report Written With ChatGPT Assistance.
| 2023
|
Cureus
|
We present a case report of a 37-year-old male who presented to the emergency department with altered mental status and electrocardiographic changes suggestive of an ST-elevation myocardial infarction (STEMI). He was ultimately diagnosed with extreme hyperthermia, secondary to drug use, which was managed promptly with supportive measures resulting in a successful outcome. This case highlights the importance of considering drug-induced hyperthermia as a potential cause of altered mental status and EKG changes in patients, especially in those with a history of drug abuse.
|
Schussler JM; Tomson C; Dresselhouse MP
| 10
|
|||
38222391
|
From Free-text Drug Labels to Structured Medication Terminology with BERT and GPT.
| 2023
|
AMIA ... Annual Symposium proceedings. AMIA Symposium
|
We present a method to enrich controlled medication terminology from free-text drug labels. This is important because, while controlled medication terminologies capture well-structured medication information, much of the information pertaining to medications is still found in free-text. First, we compared different Named Entity Recognition (NER) models including rule-based, feature-based, deep learning-based models with Transformers as well as ChatGPT, few-shot and fine-tuned GPT-3 to find the most suitable model that accurately extracts medication entities (ingredients, brand, dose, etc.) from free-text. Then, a rule-based Relation Extraction algorithm transforms NER results into a well-structured medication knowledge graph. Finally, a Medication Searching method takes the knowledge graph and matches it to relevant medications in the terminology server. An empirical evaluation on real-world drug labels shows that BERT-CRF was the most effective NER model with F-measure 95%. After performing terms normalization, the Medication Searching achieved an accuracy of 77% when matching a label to relevant medication in the terminology server. The NER and Medication Searching models could be deployed as a web service capable of accepting free-text queries and returning structured medication information; thus providing a useful means of better managing medications information found in different health systems.
|
Ngo DH; Koopman B
| 10
|
|||
40294189
|
nan
| 2025
|
nan
|
WHAT IS THE 2025 WATCH LIST? * The Watch List is an annual Horizon Scan report from Canada's Drug Agency that presents emerging technologies and issues that have the potential to shape the future of health care in Canada. * The 2025 Watch List focuses on the use of artificial intelligence (AI) technologies in health care and the issues that may arise with the implementation of these technologies. * AI technologies have the potential to significantly transform health care systems. These technologies could increase efficiency by reducing administrative burden, improve patient outcomes, and enhance patient experience by creating more access points to the health care system. However, there are also legal, ethical, environmental, and social implications with the rollout of these technologies. WHY IS THIS AN ISSUE? * Substantial public and private investments are being made in AI technologies for health care. AI technologies are already being implemented in some parts of the Canadian health care system. Commercial options, such as ChatGPT, allow AI technologies to be used by patients to assist with their health care journeys. Because they are readily available and easy to use, these same tools are sometimes used by clinicians and, in some cases, without sanction or training from employers or regulators. * AI health care technologies also present an opportunity to fundamentally change health care by their ability to replace, displace, or augment tasks that have traditionally required human cognition. The potential health human resources impact of machines taking on some of this load is significant given the increasing demand for health care services and the finite capacity of health care systems in Canada. WHAT IS THE POTENTIAL IMPACT? * The Watch List signals which technologies are poised to make an impact and the policies, regulatory or organizational enablers, and/or guardrails that are needed to optimize the proliferation of these technologies in the health care system.
* The 2025 Watch List also focuses on considerations for optimizing and accelerating implementation, such as the massive potential impact on operations, clinical outcomes, and staff and patient experience, while minimizing risks. WHAT ELSE DO WE NEED TO KNOW? * The 2025 Watch List of AI technologies and issues in health care was developed through consensus-based decision-making at a workshop in November 2024 including individuals from across Canada with experience and expertise in AI. * The 2025 Watch List identifies and describes the top 5 new and emerging AI technologies in health care. Examples include AI for notetaking and AI for disease detection and diagnosis. We also explore some considerations for health care decision-makers about the impact these technologies may have on health human resources, health care infrastructure, and health equity. * The 2025 Watch List also identifies the top 5 issues related to AI technologies in health care. Examples include the importance of establishing guidelines around what data are used to train AI algorithms and how that might contribute to bias as well as considerations about the liability and accountability of health care providers and systems that use these technologies. These are key issues that warrant more attention and will influence the wider adoption, diffusion, and implementation of new and emerging AI technologies. * Monitoring ongoing developments and evidence related to the top technologies and issues highlighted in the 2025 Watch List can help guide health system planning in Canada and improve access to high-quality care.
|
nan
| 10
|
|||
37917126
|
The Impact of Multimodal Large Language Models on Health Care's Future.
| 2023
|
Journal of medical Internet research
|
When large language models (LLMs) were introduced to the public at large in late 2022 with ChatGPT (OpenAI), the interest was unprecedented, with more than 1 billion unique users within 90 days. Until the introduction of Generative Pre-trained Transformer 4 (GPT-4) in March 2023, these LLMs only contained a single mode-text. As medicine is a multimodal discipline, the potential future versions of LLMs that can handle multimodality-meaning that they could interpret and generate not only text but also images, videos, sound, and even comprehensive documents-can be conceptualized as a significant evolution in the field of artificial intelligence (AI). This paper zooms in on the new potential of generative AI, a new form of AI that also includes tools such as LLMs, through the achievement of multimodal inputs of text, images, and speech on health care's future. We present several futuristic scenarios to illustrate the potential path forward as multimodal LLMs (M-LLMs) could represent the gateway between health care professionals and using AI for medical purposes. It is important to point out, though, that despite the unprecedented potential of generative AI in the form of M-LLMs, the human touch in medicine remains irreplaceable. AI should be seen as a tool that can augment health care professionals rather than replace them. It is also important to consider the human aspects of health care-empathy, understanding, and the doctor-patient relationship-when deploying AI.
|
Mesko B
| 10
|
|||
37410672
|
ChatGPT: the threats to medical education.
| 2023
|
Postgraduate medical journal
|
While it offers abundant advantages, ChatGPT threatens to significantly harm the educational attainment, and the intellectual life, of students of medicine and the subjects that complement it. This technology poses a serious threat to the ability of such students to deliver safe and effective medical care once they graduate to clinical practice. Institutions that provide medical education must react to the existence, availability, and rapidly increasing competency of GPT models. This article suggests an intervention by which this could be, at least partially, achieved.
|
Armitage RC
| 10
|
|||
36811129
|
Artificial Hallucinations in ChatGPT: Implications in Scientific Writing.
| 2023
|
Cureus
|
While still in its infancy, ChatGPT (Generative Pretrained Transformer), introduced in November 2022, is bound to hugely impact many industries, including healthcare, medical education, biomedical research, and scientific writing. The implications of ChatGPT, the new chatbot introduced by OpenAI, for academic writing are largely unknown. In response to the Journal of Medical Science (Cureus) Turing Test - call for case reports written with the assistance of ChatGPT, we present two cases: one of homocystinuria-associated osteoporosis, and the other of late-onset Pompe disease (LOPD), a rare metabolic disorder. We tested ChatGPT to write about the pathogenesis of these conditions. We documented the positive, negative, and rather troubling aspects of our newly introduced chatbot's performance.
|
Alkaissi H; McFarlane SI
| 10
|
|||
39096130
|
Appropriateness of ChatGPT as a resource for medication-related questions.
| 2024
|
British journal of clinical pharmacology
|
With its increasing popularity, healthcare professionals and patients may use ChatGPT to obtain medication-related information. This study was conducted to assess ChatGPT's ability to provide satisfactory responses (i.e., directly answers the question, accurate, complete and relevant) to medication-related questions posed to an academic drug information service. ChatGPT responses were compared to responses generated by the investigators through the use of traditional resources, and references were evaluated. Thirty-nine questions were entered into ChatGPT; the three most common categories were therapeutics (8; 21%), compounding/formulation (6; 15%) and dosage (5; 13%). Ten (26%) questions were answered satisfactorily by ChatGPT. Of the 29 (74%) questions that were not answered satisfactorily, deficiencies included lack of a direct response (11; 38%), lack of accuracy (11; 38%) and/or lack of completeness (12; 41%). References were included with eight (29%) responses; each included fabricated references. Presently, healthcare professionals and consumers should be cautioned against using ChatGPT for medication-related information.
|
Grossman S; Zerilli T; Nathan JP
| 10
|
|||
38836986
|
Assessing ChatGPT's Potential in HIV Prevention Communication: A Comprehensive Evaluation of Accuracy, Completeness, and Inclusivity.
| 2024
|
AIDS and behavior
|
With the advancement of artificial intelligence (AI), platforms like ChatGPT have gained traction in different fields, including Medicine. This study aims to evaluate the potential of ChatGPT in addressing questions related to HIV prevention and to assess its accuracy, completeness, and inclusivity. A team consisting of 15 physicians, six members from HIV communities, and three experts in gender and queer studies designed an assessment of ChatGPT. Queries were categorized into five thematic groups: general HIV information, behaviors increasing HIV acquisition risk, HIV and pregnancy, HIV testing, and prophylaxis use. A team of medical doctors was in charge of developing questions to be submitted to ChatGPT. The other members critically assessed the generated responses regarding level of expertise, accuracy, completeness, and inclusivity. The median accuracy score was 5.5 out of 6, with 88.4% of responses achieving a score >/= 5. Completeness had a median of 3 out of 3, while the median for inclusivity was 2 out of 3. Some thematic groups, like behaviors associated with HIV transmission and prophylaxis, exhibited higher accuracy, indicating variable performance across different topics. Issues of inclusivity were identified, notably the use of outdated terms and a lack of representation for some communities. ChatGPT demonstrates significant potential in providing accurate information on HIV-related topics. However, while responses were often scientifically accurate, they sometimes lacked the socio-political context and inclusivity essential for effective health communication. This underlines the importance of aligning AI-driven platforms with contemporary health communication strategies and ensuring the balance of accuracy and inclusivity.
|
De Vito A; Colpani A; Moi G; Babudieri S; Calcagno A; Calvino V; Ceccarelli M; Colpani G; d'Ettorre G; Di Biagio A; Farinella M; Falaguasta M; Foca E; Giupponi G; Habed AJ; Isenia WJ; Lo Caputo S; Marchetti G; Modesti L; Mussini C; Nunnari G; Rusconi S; Russo D; Saracino A; Serra PA; Madeddu G
| 43
|
|||
40084442
|
[Can large language models answer clinical questions?].
| 2025
|
Recenti progressi in medicina
|
With the advancement of large language models (LLMs) such as ChatGPT, their application in medicine is growing, but it is crucial that the responses are aligned with international guidelines. Recent studies have shown that LLMs can be useful in the medical field, providing correct answers to questions about the management and treatment of specific diseases. However, the accuracy of these models must also include readability and thoroughness of the answers and consistency with guidelines. In addition to these characteristics, relevance, pertinence, and up-to-date nature of the sources used by the LLMs to answer questions must be ensured. Furthermore, studies are needed to investigate the consistency of responses across different LLMs and languages used by them, as well as training processes that ensure greater reliability, especially when dealing with rare or complex diseases. Although LLMs can support medical education and decision-making, their integration into clinical practice requires further validation and comparison with international guidelines.
|
Santoro E
| 10
|
|||
38315648
|
Harnessing the open access version of ChatGPT for enhanced clinical opinions.
| 2024
|
PLOS digital health
|
With the advent of Large Language Models (LLMs) like ChatGPT, the integration of Generative Artificial Intelligence (GAI) into clinical medicine is becoming increasingly feasible. This study aimed to evaluate the ability of the freely available ChatGPT-3.5 to generate complex differential diagnoses, comparing its output to case records of the Massachusetts General Hospital published in the New England Journal of Medicine (NEJM). Forty case records were presented to ChatGPT-3.5, prompting it to provide a differential diagnosis and then narrow it down to the most likely diagnosis. The results indicated that the final diagnosis was included in ChatGPT-3.5's original differential list in 42.5% of the cases. After narrowing, ChatGPT correctly determined the final diagnosis in 27.5% of the cases, demonstrating a decrease in accuracy compared to previous studies using common chief complaints. These findings emphasize the necessity for further investigation into the capabilities and limitations of LLMs in clinical scenarios while highlighting the potential role of GAI as an augmented clinical opinion. Anticipating the growth and enhancement of GAI tools like ChatGPT, physicians and other healthcare workers will likely find increasing support in generating differential diagnoses. However, continued exploration and regulation are essential to ensure the safe and effective integration of GAI into healthcare practice. Future studies may seek to compare newer versions of ChatGPT or investigate patient outcomes with physicians integrating this GAI technology. Understanding and expanding GAI's capabilities, particularly in differential diagnosis, may foster innovation and provide additional resources, especially in underserved areas in the medical field.
|
Tenner ZM; Cottone MC; Chavez MR
| 10
|
|||
37789676
|
Large language models in radiology: fundamentals, applications, ethical considerations, risks, and future directions.
| 2024
|
Diagnostic and interventional radiology (Ankara, Turkey)
|
With the advent of large language models (LLMs), the artificial intelligence revolution in medicine and radiology is now more tangible than ever. Every day, an increasingly large number of articles are published that utilize LLMs in radiology. To adopt and safely implement this new technology in the field, radiologists should be familiar with its key concepts, understand at least the technical basics, and be aware of the potential risks and ethical considerations that come with it. In this review article, the authors provide an overview of the LLMs that might be relevant to the radiology community and include a brief discussion of their short history, technical basics, ChatGPT, prompt engineering, potential applications in medicine and radiology, advantages, disadvantages and risks, ethical and regulatory considerations, and future directions.
|
Akinci D'Antonoli T; Stanzione A; Bluethgen C; Vernuccio F; Ugga L; Klontzas ME; Cuocolo R; Cannella R; Kocak B
| 10
|
|||
39563551
|
[Artificial intelligence and large language models: challenges and prospects in research and medicine].
| 2024
|
Urologiia (Moscow, Russia : 1999)
|
With the development and spread of artificial intelligence, technologies based on the neural networks (for example, large language models) have attracted the most attention as promising methods for analyzing and processing data in various fields. Large language models (LLMs) are systems trained on huge amounts of text data and capable of generating answers to user queries. Examples of well-known LLMs are ChatGPT, Bing, Sparrow, BlenderBot, Bard, YandexGPT, GigaChat and others. Currently, artificial intelligence (AI) plays an important role in scientific and research work, including processing of medical data, making diagnoses, drafting scientific papers and documentation, writing articles, reviews and other academic materials. The evolution and use of large language models in various fields of medicine (and beyond) is presented in the article. In addition, the prospects for their future use, obstacles that hinder their active implementation and the importance of monitoring their use are analyzed.
|
Taratkin M S; Shchelkunova K Y; Azilgareeva C R; Ali S K; Morozov A O; Salpagarova A I; Gadzhieva Z K; Gazimiev M A
| 10
|
|||
39013770
|
[Not Available].
| 2024
|
Journal international de bioethique et d'ethique des sciences
|
With the emergence of innovations and technological advancements – exemplified by telemedicine and more recently by the extremely rapid development of Generative Artificial Intelligence systems (Gen AI) like Large Language Models (LLM) such as ChatGPT – we are witnessing a progressive transformation of ancient Hippocratic medicine and the physician-patient relationship. These healthcare Gen AI, which inherently carry stakes, risks, opportunities, hopes, and concerns, justify increased and reinforced ethical vigilance. These digital applications, multiplying, diversifying, and improving their performance day by day, tend to reshape the role and practices of healthcare professionals in their analytical and decision-making medical process. The digitalization of medicine will inevitably encourage physicians to undergo specific training and acquire new competences in new technologies so that they can learn to navigate better in this new ecosystem of knowledge and practices while preserving their medical expertise, skills, and critical thinking. Therefore, faced with the inevitable arrival of Gen AI in medicine, this article aims to question how we can approach and use this technological tool within an evolving ethical framework to maintain a quality healthcare service for users.
|
Monteil C; Beranger J
| 10
|
|||
38977771
|
The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs).
| 2024
|
NPJ digital medicine
|
With the introduction of ChatGPT, Large Language Models (LLMs) have received enormous attention in healthcare. Despite potential benefits, researchers have underscored various ethical implications. While individual instances have garnered attention, a systematic and comprehensive overview of practical applications currently researched and ethical issues connected to them is lacking. Against this background, this work maps the ethical landscape surrounding the current deployment of LLMs in medicine and healthcare through a systematic review. Electronic databases and preprint servers were queried using a comprehensive search strategy which generated 796 records. Studies were screened and extracted following a modified rapid review approach. Methodological quality was assessed using a hybrid approach. For 53 records, a meta-aggregative synthesis was performed. Four general fields of applications emerged showcasing a dynamic exploration phase. Advantages of using LLMs are attributed to their capacity in data analysis, information provisioning, support in decision-making or mitigating information loss and enhancing information accessibility. However, our study also identifies recurrent ethical concerns connected to fairness, bias, non-maleficence, transparency, and privacy. A distinctive concern is the tendency to produce harmful or convincing but inaccurate content. Calls for ethical guidance and human oversight are recurrent. We suggest that the ethical guidance debate should be reframed to focus on defining what constitutes acceptable human oversight across the spectrum of applications. This involves considering the diversity of settings, varying potentials for harm, and different acceptable thresholds for performance and certainty in healthcare. Additionally, critical inquiry is needed to evaluate the necessity and justification of LLMs' current experimental use.
|
Haltaufderheide J; Ranisch R
| 10
|