| pmid | title | year | journal | doi | mesh | keywords | abstract | authors | cluster |
|---|---|---|---|---|---|---|---|---|---|
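Each row below follows the same fixed 10-column schema, with empty doi/mesh/keywords cells and a semicolon-separated author list. As a minimal sketch of how one such row could be parsed (the `PubMedRecord` type and `parse_row` helper are illustrative assumptions, not part of any published loader; cells are assumed not to contain unescaped `|` characters):

```python
from dataclasses import dataclass

@dataclass
class PubMedRecord:
    """One table row: a PubMed entry plus its cluster label."""
    pmid: str
    title: str
    year: int
    journal: str
    doi: str
    mesh: str
    keywords: str
    abstract: str
    authors: list[str]
    cluster: str

def parse_row(line: str) -> PubMedRecord:
    """Parse a markdown table row of the form
    | pmid | title | year | journal | doi | mesh | keywords | abstract | authors | cluster |
    """
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    pmid, title, year, journal, doi, mesh, keywords, abstract, authors, cluster = cells
    return PubMedRecord(
        pmid=pmid,
        title=title,
        year=int(year.replace(",", "")),  # tolerate "2,024"-style viewer formatting
        journal=journal,
        doi=doi,
        mesh=mesh,
        keywords=keywords,
        abstract=abstract,
        authors=[a.strip() for a in authors.split(";")] if authors else [],
        cluster=cluster,
    )

row = ("| 38281582 | ChatGPT in maternal-fetal medicine practice: a primer for clinicians. "
       "| 2024 | American journal of obstetrics & gynecology MFM | | | | (abstract text) "
       "| Horgan R; Martins JG; Saade G; Abuhamad A; Kawakita T | 10 |")
rec = parse_row(row)
```

Splitting on `|` after stripping the outer pipes yields exactly ten cells for a well-formed row, so the destructuring assignment doubles as a cheap schema check.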
| 38281582 | ChatGPT in maternal-fetal medicine practice: a primer for clinicians. | 2024 | American journal of obstetrics & gynecology MFM | | | | ChatGPT (Generative Pre-trained Transformer), a language model that was developed by OpenAI and launched in November 2022, generates human-like responses to prompts using deep-learning technology. The integration of large language processing models into healthcare has the potential to improve the accessibility of medical information for both patients and health professionals alike. In this commentary, we demonstrated the ability of ChatGPT to produce patient information sheets. Four board-certified, maternal-fetal medicine attending physicians rated the accuracy and humanness of the information according to 2 predefined scales of accuracy and completeness. The median score for accuracy of information was rated 4.8 on a 6-point scale and the median score for completeness of information was 2.2 on a 3-point scale for the 5 patient information leaflets generated by ChatGPT. Concerns raised included the omission of clinically important information for patient counseling in some patient information leaflets and the inability to verify the source of information because ChatGPT does not provide references. ChatGPT is a powerful tool that has the potential to enhance patient care, but such a tool requires extensive validation and is perhaps best considered as an adjunct to clinical practice rather than as a tool to be used freely by the public for healthcare information. | Horgan R; Martins JG; Saade G; Abuhamad A; Kawakita T | 10 |
| 37433672 | A Conversation with ChatGPT. | 2023 | Journal of nuclear medicine technology | | | | The ChatGPT chatbot powered by GPT-3.5 was released in late November 2022 but has already been rapidly assimilated into educational and clinical environments. Methods: Insight into ChatGPT's capabilities was gathered through an interview-style approach with the chatbot itself. Results: ChatGPT powered by GPT-3.5 exudes confidence in its capabilities in supporting and enhancing student learning in nuclear medicine and in supporting clinical practice. ChatGPT is also self-aware of limitations and flaws in capabilities and the risks these pose to academic integrity. Conclusion: Further objective evaluation of ChatGPT capabilities in authentic learning and clinical scenarios is required. | Currie G | 0-1 |
| 39027317 | Developing ChatGPT for biology and medicine: a complete review of biomedical question answering. | 2024 | Biophysics reports | | | | ChatGPT explores a strategic blueprint of question answering (QA) to deliver medical diagnoses, treatment recommendations, and other healthcare support. This is achieved through the increasing incorporation of medical domain data via natural language processing (NLP) and multimodal paradigms. By transitioning the distribution of text, images, videos, and other modalities from the general domain to the medical domain, these techniques have accelerated the progress of medical domain question answering (MDQA). They bridge the gap between human natural language and sophisticated medical domain knowledge or expert-provided manual annotations, handling large-scale, diverse, unbalanced, or even unlabeled data analysis scenarios in medical contexts. Central to our focus is the utilization of language models and multimodal paradigms for medical question answering, aiming to guide the research community in selecting appropriate mechanisms for their specific medical research requirements. Specialized tasks such as unimodal-related question answering, reading comprehension, reasoning, diagnosis, relation extraction, probability modeling, and others, as well as multimodal-related tasks like vision question answering, image captioning, cross-modal retrieval, report summarization, and generation, are discussed in detail. Each section delves into the intricate specifics of the respective method under consideration. This paper highlights the structures and advancements of medical domain explorations against general domain methods, emphasizing their applications across different tasks and datasets. It also outlines current challenges and opportunities for future medical domain research, paving the way for continued innovation and application in this rapidly evolving field. This comprehensive review serves not only as an academic resource but also delineates the course for future probes and utilization in the field of medical question answering. | Li Q; Li L; Li Y | 10 |
| 37061593 | Behind the ChatGPT Hype: Are Its Suggestions Contributing to Addiction? | 2023 | Annals of biomedical engineering | | | | ChatGPT has been a frequent topic of discussion lately. All over the Internet, from YouTube to blogs, there have been reports about how ChatGPT is able to plan people's daily activities, even for a whole month. However, what matters is what activities ChatGPT recommends. When ChatGPT was trained on a vast amount of data from the Internet, we wondered if it would suggest activities that can lead to addiction. In our test, not once did ChatGPT recommend an activity related to alcohol, drug use, or any other activity that can lead to addiction with serious health consequences. Suggestions seemed more like self-improvement posts on blogs than discussion forums where people might mention drinking in the evenings. Thus, if a person were to use ChatGPT as a personal lifestyle advisor, it does not appear on the basis of this test that ChatGPT would recommend activities that would be fundamentally detrimental to their health. However, more detailed long-term testing of similar tools is needed before recommendations for use in practice can be made. | Haman M; Skolnik M | 0-1 |
| 39811862 | Systematic review of ChatGPT accuracy and performance in Iran's medical licensing exams: A brief report. | 2024 | Journal of education and health promotion | | | | ChatGPT has demonstrated significant potential in various aspects of medicine, including its performance on licensing examinations. In this study, we systematically investigated ChatGPT's performance in Iranian medical exams and assessed the quality of the included studies using a previously published assessment checklist. The study found that ChatGPT achieved an accuracy range of 32-72% on basic science exams, 34-68.5% on pre-internship exams, and 32-84% on residency exams. Notably, its performance was generally higher when the input was provided in English compared to Persian. One study reported a 40% accuracy rate on an endodontic board exam. To establish ChatGPT as a supplementary tool in medical education and clinical practice, we suggest that dedicated guidelines and checklists are needed to ensure high-quality and consistent research in this emerging field. | Keshtkar A; Atighi F; Reihani H | 21 |
| 38866891 | In-depth analysis of ChatGPT's performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions. | 2024 | Scientific reports | | | | ChatGPT has garnered attention as a multifaceted AI chatbot with potential applications in medicine. Despite intriguing preliminary findings in areas such as clinical management and patient education, there remains a substantial knowledge gap in comprehensively understanding the chances and limitations of ChatGPT's capabilities, especially in medical test-taking and education. A total of n = 2,729 USMLE Step 1 practice questions were extracted from the Amboss question bank. After excluding 352 image-based questions, a total of 2,377 text-based questions were further categorized and entered manually into ChatGPT, and its responses were recorded. ChatGPT's overall performance was analyzed based on question difficulty, category, and content with regard to specific signal words and phrases. ChatGPT achieved an overall accuracy rate of 55.8% in a total number of n = 2,377 USMLE Step 1 preparation questions obtained from the Amboss online question bank. It demonstrated a significant inverse correlation between question difficulty and performance (r(s) = -0.306; p < 0.001), maintaining comparable accuracy to the human user peer group across different levels of question difficulty. Notably, ChatGPT outperformed in serology-related questions (61.1% vs. 53.8%; p = 0.005) but struggled with ECG-related content (42.9% vs. 55.6%; p = 0.021). ChatGPT performed significantly worse on pathophysiology-related question stems (signal phrase: "what is the most likely/probable cause"). ChatGPT performed consistently across various question categories and difficulty levels. These findings emphasize the need for further investigations to explore the potential and limitations of ChatGPT in medical examination and education. | Knoedler L; Knoedler S; Hoch CC; Prantl L; Frank K; Soiderer L; Cotofana S; Dorafshar AH; Schenck T; Vollbach F; Sofo G; Alfertshofer M | 21 |
| 40395874 | ChatGPT for mechanobiology and medicine: A perspective. | 2023 | Mechanobiology in medicine | | | | ChatGPT has garnered significant attention for its impressive capabilities across various domains, including medicine and mechanobiology. In order to facilitate the integration of ChatGPT into research, this paper explores the applications of ChatGPT in these domains, focusing on its usage in (1) reading and writing, (2) retrieval and knowledge management, and (3) computation, simulation, and visualization. Meanwhile, this study acknowledges the limitations and challenges associated with ChatGPT's usage. We investigate the interaction between ChatGPT and external tools in these applications and advocate for the integration of more powerful tools in these research areas into ChatGPT to further expand its potential applications in medicine and mechanobiology. | Chen M; Li G | 10 |
| 38029273 | The use of ChatGPT in occupational medicine: opportunities and threats. | 2023 | Annals of occupational and environmental medicine | | | | ChatGPT has the potential to revolutionize occupational medicine by providing a powerful tool for analyzing data, improving communication, and increasing efficiency. It can help identify patterns and trends in workplace health and safety, act as a virtual assistant for workers, employers, and occupational health professionals, and automate certain tasks. However, caution is required due to ethical concerns, the need to maintain confidentiality, and the risk of inconsistent or inaccurate results. ChatGPT cannot replace the crucial role of the occupational health professional in the medical surveillance of workers and the analysis of data on workers' health. | Sridi C; Brigui S | 0-1 |
| 37142327 | Generative artificial intelligence: Can ChatGPT write a quality abstract? | 2023 | Emergency medicine Australasia : EMA | | | | ChatGPT is a generative artificial intelligence chatbot which may have a role in medicine and science. We investigated whether the freely available version of ChatGPT, used by a non-medically trained person, could produce a quality conference abstract from a fictitious but accurately calculated data table. The resulting abstract was well written without obvious errors and followed the abstract instructions. One of the references was fictitious, known as 'hallucination'. ChatGPT or similar programmes, with careful review of the product by authors, may become a valuable scientific writing tool. The scientific and medical use of generative artificial intelligence, however, raises many questions. | Babl FE; Babl MP | 10 |
| 37264670 | Performance of ChatGPT on Specialty Certificate Examination in Dermatology multiple-choice questions. | 2024 | Clinical and experimental dermatology | | | | ChatGPT is a large language model trained on increasingly large datasets by OpenAI to perform language-based tasks. It is capable of answering multiple-choice questions, such as those posed by the Specialty Certificate Examination (SCE) in Dermatology. We asked two iterations of ChatGPT (ChatGPT-3.5 and ChatGPT-4) 84 multiple-choice questions from the sample SCE in Dermatology question bank. ChatGPT-3.5 achieved an overall score of 63%, and ChatGPT-4 scored 90% (a significant improvement in performance; P < 0.001). The typical pass mark for the SCE in Dermatology is 70-72%. ChatGPT-4 is therefore capable of answering clinical questions and achieving a passing grade in these sample questions. There are many possible educational and clinical implications for increasingly advanced artificial intelligence (AI) and its use in medicine, including in the diagnosis of dermatological conditions. Such advances should be embraced provided that patient safety is a core tenet, and the limitations of AI in the nuances of complex clinical cases are recognized. | Passby L; Jenko N; Wernham A | 21 |
| 38405625 | ChatGPT in forensic sciences: a new Pandora's box with advantages and challenges to pay attention. | 2023 | Forensic sciences research | | | | ChatGPT is a variant of the generative pre-trained transformer (GPT) language model that uses large amounts of text-based training data and a transformer architecture to generate human-like text adjusted to the received prompts. ChatGPT presents several advantages in forensic sciences, namely, constituting a virtual assistant to aid lawyers, judges, and victims in managing and interpreting forensic expert data. But what would happen if ChatGPT began to be used to produce forensic expertise reports? Despite its potential applications, the use of ChatGPT and other Large Language Models and artificial intelligence tools in forensic writing also poses ethical and legal concerns, which are discussed in this perspective together with some expected future perspectives. | Dinis-Oliveira RJ; Azevedo RMS | 10 |
| 38096831 | ChatGPT: opportunities and risks in the fields of medical care, teaching, and research. | 2023 | Gaceta medica de Mexico | | | | ChatGPT is a virtual assistant with artificial intelligence (AI) that uses natural language to communicate, i.e., it holds conversations as those that would take place with another human being. It can be applied at all educational levels, including medical education, where it can impact medical training, research, the writing of scientific articles, clinical care, and personalized medicine. It can modify interactions between physicians and patients and thus improve the standards of healthcare quality and safety, for example, by suggesting preventive measures in a patient that sometimes are not considered by the physician for multiple reasons. ChatGPT potential uses in medical education, as a tool to support the writing of scientific articles, as a medical care assistant for patients and doctors for a more personalized medical approach, are some of the applications discussed in this article. Ethical aspects, originality, inappropriate or incorrect content, incorrect citations, cybersecurity, hallucinations, and plagiarism are some examples of situations to be considered when using AI-based tools in medicine. | Gutierrez-Cirlos C; Carrillo-Perez DL; Bermudez-Gonzalez JL; Hidrogo-Montemayor I; Carrillo-Esper R; Sanchez-Mendiola M | 10 |
| 38358925 | ChatGPT: How Closely Should We Be Watching? | 2023 | Journal of insurance medicine (New York, N.Y.) | | | | ChatGPT is about to make major inroads into clinical medicine. This article discusses the pros and cons of its use. | Meagher T | 0-1 |
| 37168166 | ChatGPT for Future Medical and Dental Research. | 2023 | Cureus | | | | ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI, and it first became available to the public in November 2022. ChatGPT can assist in finding academic papers on the web and summarizing them. This chatbot has the potential to be applied in scientific writing: it can generate automated drafts, summarize articles, and translate content from several languages. This in turn can make academic writing faster and less challenging. However, due to ethical considerations, its use in scientific writing should be regulated and carefully monitored. Few papers have discussed the use of ChatGPT in scientific research writing. This review aims to discuss all the relevant published papers that discuss the use of ChatGPT in medical and dental research. | Fatani B | 10 |
| 36981544 | ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. | 2023 | Healthcare (Basel, Switzerland) | | | | ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Using the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhancing research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlining the workflow, cost saving, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education including improved personalized learning and the focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records including ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations. As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia. | Sallam M | 10 |
| 37705958 | A descriptive study based on the comparison of ChatGPT and evidence-based neurosurgeons. | 2023 | iScience | | | | ChatGPT is an artificial intelligence product developed by OpenAI. This study aims to investigate whether ChatGPT can respond in accordance with evidence-based medicine in neurosurgery. We generated 50 neurosurgical questions covering neurosurgical diseases. Each question was posed three times to GPT-3.5 and GPT-4.0. We also recruited three neurosurgeons with high, middle, and low seniority to respond to questions. The results were analyzed regarding ChatGPT's overall performance score, mean scores by the items' specialty classification, and question type. In conclusion, GPT-3.5's ability to respond in accordance with evidence-based medicine was comparable to that of neurosurgeons with low seniority, and GPT-4.0's ability was comparable to that of neurosurgeons with high seniority. Although ChatGPT is yet to be comparable to a neurosurgeon with high seniority, future upgrades could enhance its performance and abilities. | Liu J; Zheng J; Cai X; Wu D; Yin C | 0-1 |
| 37379067 | Utility of ChatGPT in Clinical Practice. | 2023 | Journal of medical Internet research | | | | ChatGPT is receiving increasing attention and has a variety of application scenarios in clinical practice. In clinical decision support, ChatGPT has been used to generate accurate differential diagnosis lists, support clinical decision-making, optimize clinical decision support, and provide insights for cancer screening decisions. In addition, ChatGPT has been used for intelligent question-answering to provide reliable information about diseases and medical queries. In terms of medical documentation, ChatGPT has proven effective in generating patient clinical letters, radiology reports, medical notes, and discharge summaries, improving efficiency and accuracy for health care providers. Future research directions include real-time monitoring and predictive analytics, precision medicine and personalized treatment, the role of ChatGPT in telemedicine and remote health care, and integration with existing health care systems. Overall, ChatGPT is a valuable tool that complements the expertise of health care providers and improves clinical decision-making and patient care. However, ChatGPT is a double-edged sword. We need to carefully consider and study the benefits and potential dangers of ChatGPT. In this viewpoint, we discuss recent advances in ChatGPT research in clinical practice and suggest possible risks and challenges of using ChatGPT in clinical practice. It will help guide and support future artificial intelligence research similar to ChatGPT in health. | Liu J; Wang C; Liu S | 10 |
| 39949509 | Benefits, limits, and risks of ChatGPT in medicine. | 2025 | Frontiers in artificial intelligence | | | | ChatGPT represents a transformative technology in healthcare, with demonstrated impacts across clinical practice, medical education, and research. Studies show significant efficiency gains, including 70% reduction in administrative time for discharge summaries and achievement of medical professional-level performance on standardized tests (60% accuracy on USMLE, 78.2% on PubMedQA). ChatGPT offers personalized learning platforms, automated scoring, and instant access to vast medical knowledge in medical education, addressing resource limitations and enhancing training efficiency. It streamlines clinical workflows by supporting triage processes, generating discharge summaries, and alleviating administrative burdens, allowing healthcare professionals to focus more on patient care. Additionally, ChatGPT facilitates remote monitoring and chronic disease management, providing personalized advice, medication reminders, and emotional support, thus bridging gaps between clinical visits. Its ability to process and synthesize vast amounts of data accelerates research workflows, aiding in literature reviews, hypothesis generation, and clinical trial designs. This paper aims to gather and analyze published studies involving ChatGPT, focusing on exploring its advantages and disadvantages within the healthcare context. To aid in understanding and progress, our analysis is organized into six key areas: (1) Information and Education, (2) Triage and Symptom Assessment, (3) Remote Monitoring and Support, (4) Mental Healthcare Assistance, (5) Research and Decision Support, and (6) Language Translation. Realizing ChatGPT's full potential in healthcare requires addressing key limitations, such as its lack of clinical experience, inability to process visual data, and absence of emotional intelligence. Ethical, privacy, and regulatory challenges further complicate its integration. Future improvements should focus on enhancing accuracy, developing multimodal AI models, improving empathy through sentiment analysis, and safeguarding against artificial hallucination. While not a replacement for healthcare professionals, ChatGPT can serve as a powerful assistant, augmenting their expertise to improve efficiency, accessibility, and quality of care. This collaboration ensures responsible adoption of AI in transforming healthcare delivery. While ChatGPT demonstrates significant potential in healthcare transformation, systematic evaluation of its implementation across different healthcare settings reveals varying levels of evidence quality, from robust randomized trials in medical education to preliminary observational studies in clinical practice. This heterogeneity in evidence quality necessitates a structured approach to future research and implementation. | Tangsrivimol JA; Darzidehkalani E; Virk HUH; Wang Z; Egger J; Wang M; Hacking S; Glicksberg BS; Strauss M; Krittanawong C | 0-1 |
| 37523010 | [Big hype about ChatGPT in medicine: Is it something for rhythmologists? What must be taken into consideration?]. | 2023 | Herzschrittmachertherapie & Elektrophysiologie | | | | ChatGPT, a chatbot based on a large language model, is currently attracting much attention. Modern machine learning (ML) architectures enable the program to answer almost any question, to summarize, translate, and even generate its own texts, all in a text-based dialogue with the user. Underlying technologies, summarized under the acronym NLP (natural language processing), go back to the 1960s. In almost all areas including medicine, ChatGPT is raising enormous hopes. It can easily pass medical exams and may be useful in patient care, diagnostic and therapeutic assistance, and medical research. The enthusiasm for this new technology shown even by medical professionals is surprising. Although the system knows much, it does not know everything; not everything it outputs is accurate either. Every output has to be carefully checked by the user for correctness, which is often not easily done since references to sources are lacking. Issues regarding data protection and ethics also arise. Today's language models are not free of bias and systematic distortion. These shortcomings have led to calls for stronger regulation of the use of ChatGPT and an increasing number of similar language models. However, this new technology represents an enormous progress in knowledge processing and dissemination. Numerous scenarios in which ChatGPT can provide assistance are conceivable, including in rhythmology. In the future, it will be crucial to render the models error-free and transparent and to clearly define the rules for their use. Responsible use requires systematic training to improve the digital competence of users, including physicians who use such programs. | Haverkamp W; Strodthoff N; Tennenbaum J; Israel C | 10 |
| 38261307 | Large language model, AI and scientific research: why ChatGPT is only the beginning. | 2024 | Journal of neurosurgical sciences | | | | ChatGPT, a conversational artificial intelligence model based on the generative pre-trained transformer GPT architecture, has garnered widespread attention due to its user-friendly nature and diverse capabilities. This technology enables users of all backgrounds to effortlessly engage in human-like conversations and receive coherent and intelligible responses. Beyond casual interactions, ChatGPT offers compelling prospects for scientific research, facilitating tasks like literature review and content summarization, ultimately expediting and enhancing the academic writing process. Still, in the field of medicine and surgery, it has already shown its endless potential in many tasks (enhancing decision-making processes, aiding in surgical planning and simulation, providing real-time assistance during surgery, improving postoperative care and rehabilitation, contributing to training, education, research, and development). However, it is crucial to acknowledge the model's limitations, encompassing knowledge constraints and the potential for erroneous responses, as well as ethical and legal considerations. This paper explores the potential benefits and pitfalls of these innovative technologies in scientific research, shedding light on their transformative impact while addressing concerns surrounding their use. | Zangrossi P; Martini M; Guerrini F; DE Bonis P; Spena G | 10 |
| 39383119 | ChatGPT M.D.: Is there any room for generative AI in neurology? | 2024 | PloS one | | | | ChatGPT, a general artificial intelligence, has been recognized as a powerful tool in scientific writing and programming but its use as a medical tool is largely overlooked. The general accessibility, rapid response time and comprehensive training database might enable ChatGPT to serve as a diagnostic augmentation tool in certain clinical settings. The diagnostic process in neurology is often challenging and complex. In certain time-sensitive scenarios, rapid evaluation and diagnostic decisions are needed, while in other cases clinicians are faced with rare disorders and atypical disease manifestations. Due to these factors, the diagnostic accuracy in neurology is often suboptimal. Here we evaluated whether ChatGPT can be utilized as a valuable and innovative diagnostic augmentation tool in various neurological settings. We used synthetic data generated by neurological experts to represent descriptive anamneses of patients with known neurology-related diseases, then the probability for an appropriate diagnosis made by ChatGPT was measured. To give clarity to the accuracy of the AI-determined diagnosis, all cases have been cross-validated by other experts and general medical doctors as well. We found that ChatGPT-determined diagnostic accuracy (ranging from 68.5% +/- 3.28% to 83.83% +/- 2.73%) can reach the accuracy of other experts (81.66% +/- 2.02%), furthermore, it surpasses the probability of an appropriate diagnosis if the examiner is a general medical doctor (57.15% +/- 2.64%). Our results showcase the efficacy of general artificial intelligence like ChatGPT as a diagnostic augmentation tool in medicine. In the future, AI-based supporting tools might be useful amendments in medical practice and help to improve the diagnostic process in neurology. | Nogradi B; Polgar TF; Meszlenyi V; Kadar Z; Hertelendy P; Csati A; Szpisjak L; Halmi D; Erdelyi-Furka B; Toth M; Molnar F; Toth D; Bosze Z; Boda K; Klivenyi P; Siklos L; Patai R | 10 |
| 39196640 | Current Status of ChatGPT Use in Medical Education: Potentials, Challenges, and Strategies. | 2024 | Journal of medical Internet research | | | | ChatGPT, a generative pretrained transformer, has garnered global attention and sparked discussions since its introduction on November 30, 2022. However, it has generated controversy within the realms of medical education and scientific research. This paper examines the potential applications, limitations, and strategies for using ChatGPT. ChatGPT offers personalized learning support to medical students through its robust natural language generation capabilities, enabling it to furnish answers. Moreover, it has demonstrated significant use in simulating clinical scenarios, facilitating teaching and learning processes, and revitalizing medical education. Nonetheless, numerous challenges accompany these advancements. In the context of education, it is of paramount importance to prevent excessive reliance on ChatGPT and combat academic plagiarism. Likewise, in the field of medicine, it is vital to guarantee the timeliness, accuracy, and reliability of content generated by ChatGPT. Concurrently, ethical challenges and concerns regarding information security arise. In light of these challenges, this paper proposes targeted strategies for addressing them. First, the risk of overreliance on ChatGPT and academic plagiarism must be mitigated through ideological education, fostering comprehensive competencies, and implementing diverse evaluation criteria. The integration of contemporary pedagogical methodologies in conjunction with the use of ChatGPT serves to enhance the overall quality of medical education. To enhance the professionalism and reliability of the generated content, it is recommended to implement measures to optimize ChatGPT's training data professionally and enhance the transparency of the generation process. This ensures that the generated content is aligned with the most recent standards of medical practice. Moreover, the enhancement of value alignment and the establishment of pertinent legislation or codes of practice address ethical concerns, including those pertaining to algorithmic discrimination, the allocation of medical responsibility, privacy, and security. In conclusion, while ChatGPT presents significant potential in medical education, it also encounters various challenges. Through comprehensive research and the implementation of suitable strategies, it is anticipated that ChatGPT's positive impact on medical education will be harnessed, laying the groundwork for advancing the discipline and fostering the development of high-caliber medical professionals. | Xu T; Weng H; Liu F; Yang L; Luo Y; Ding Z; Wang Q | 10 |
37485160
|
Eyes on AI: ChatGPT's Transformative Potential Impact on Ophthalmology.
| 2,023
|
Cureus
|
ChatGPT, a large language model by OpenAI, has been adopted in various domains since its release in November 2022, but its application in ophthalmology remains less explored. This editorial assesses ChatGPT's potential applications and limitations in ophthalmology across clinical, educational, and research settings. In clinical settings, ChatGPT can serve as an assistant, offering diagnostic and therapeutic suggestions based on patient data and assisting in patient triage. However, its tendencies to generate inaccurate results and its inability to keep up with recent medical guidelines render it unsuitable for standalone clinical decision-making. Data security and compliance with the Health Insurance Portability and Accountability Act (HIPAA) also pose concerns, given ChatGPT's potential to inadvertently expose sensitive patient information. In education, ChatGPT can generate practice questions, provide explanations, and create patient education materials. However, its performance in answering domain-specific questions is suboptimal. In research, ChatGPT can facilitate literature reviews, data analysis, manuscript development, and peer review, but issues of accuracy, bias, and ethics need careful consideration. Ultimately, ensuring accuracy, ethical integrity, and data privacy is essential when integrating ChatGPT into ophthalmology.
|
Dossantos J; An J; Javan R
| 0-1
|
|||
40375935
|
Quantum leap in medical mentorship: exploring ChatGPT's transition from textbooks to terabytes.
| 2,025
|
Frontiers in medicine
|
ChatGPT, an advanced AI language model, presents a transformative opportunity in several fields, including medical education. This article examines the integration of ChatGPT into healthcare learning environments, exploring its potential to revolutionize knowledge acquisition, personalize education, support curriculum development, and enhance clinical reasoning. The AI's ability to swiftly access and synthesize medical information across various specialties offers significant value to students and professionals alike. It provides rapid answers to queries on medical theories, treatment guidelines, and diagnostic methods, potentially accelerating the learning curve. The paper emphasizes the necessity of verifying ChatGPT's outputs against authoritative medical sources. A key advantage highlighted is the AI's capacity to tailor learning experiences by assessing individual needs, accommodating diverse learning styles, and offering personalized feedback. The article also considers ChatGPT's role in shaping curricula and assessment techniques, suggesting that educators may need to adapt their methods to incorporate AI-driven learning tools. Additionally, it explores how ChatGPT could bolster clinical problem-solving through AI-powered simulations, fostering critical thinking and diagnostic acumen among students. While recognizing ChatGPT's transformative potential in medical education, the article stresses the importance of thoughtful implementation, continuous validation, and the establishment of protocols to ensure its responsible and effective application in healthcare education settings.
|
Chokkakula S; Chong S; Yang B; Jiang H; Yu J; Han R; Attitalla IH; Yin C; Zhang S
| 10
|
|||
38476626
|
The Potential Applications and Challenges of ChatGPT in the Medical Field.
| 2,024
|
International journal of general medicine
|
ChatGPT, an AI-driven conversational large language model (LLM), has garnered significant scholarly attention since its inception, owing to its manifold applications in the realm of medical science. This study primarily examines the merits, limitations, anticipated developments, and practical applications of ChatGPT in clinical practice, healthcare, medical education, and medical research. It underscores the necessity for further research and development to enhance its performance and deployment. Moreover, future research avenues encompass ongoing enhancements and standardization of ChatGPT, mitigating its limitations, and exploring its integration and applicability in translational and personalized medicine. Reflecting the narrative nature of this review, a focused literature search was performed to identify relevant publications on ChatGPT's use in medicine. This process was aimed at gathering a broad spectrum of insights to provide a comprehensive overview of the current state and future prospects of ChatGPT in the medical domain. The objective is to aid healthcare professionals in understanding the groundbreaking advancements associated with the latest artificial intelligence tools, while also acknowledging the opportunities and challenges presented by ChatGPT.
|
Mu Y; He D
| 10
|
|||
38172581
|
ChatGPT and Beyond: An overview of the growing field of large language models and their use in ophthalmology.
| 2,024
|
Eye (London, England)
|
ChatGPT, an artificial intelligence (AI) chatbot built on large language models (LLMs), has rapidly gained popularity. The benefits and limitations of this transformative technology have been discussed across various fields, including medicine. The widespread availability of ChatGPT has enabled clinicians to study how these tools could be used for a variety of tasks such as generating differential diagnosis lists, organizing patient notes, and synthesizing literature for scientific research. LLMs have shown promising capabilities in ophthalmology by performing well on the Ophthalmic Knowledge Assessment Program, providing fairly accurate responses to questions about retinal diseases, and generating differential diagnosis lists. There are current limitations to this technology, including the propensity of LLMs to "hallucinate", or confidently generate false information; their potential role in perpetuating biases in medicine; and the challenges in incorporating LLMs into research without allowing "AI-plagiarism" or publication of false information. In this paper, we provide a balanced overview of what LLMs are and introduce some of the LLMs that have been generated in the past few years. We discuss recent literature evaluating the role of these language models in medicine with a focus on ChatGPT. The field of AI is fast-paced, and new applications based on LLMs are being generated rapidly; therefore, it is important for ophthalmologists to be aware of how this technology works and how it may impact patient care. Here, we discuss the benefits, limitations, and future advancements of LLMs in patient care and research.
|
Kedia N; Sanjeev S; Ong J; Chhablani J
| 0-1
|
|||
39196686
|
"Where No One Has Gone Before": Questions to Ensure the Ethical, Rigorous, and Thoughtful Application of Artificial Intelligence in the Analysis of HIV Research.
| 2,024
|
The Journal of the Association of Nurses in AIDS Care : JANAC
|
ChatGPT, an artificial intelligence (AI) system released by OpenAI on November 30th, 2022, has upended scientific and educational paradigms, reshaping the way that we think about teaching, writing, and now research. Since that time, qualitative data analytic software programs such as ATLAS.ti have quickly incorporated AI into their programs to assist with or even replace human coding. Qualitative research is key to understanding the complexity and nuance of HIV-related behaviors, through descriptive and historical textual research, as well as the lived experiences of people with HIV. This commentary weighs the pros and cons of the use of AI coding in HIV-related qualitative research. We pose guiding questions that may help researchers evaluate the application and scope of AI in qualitative research as determined by the research question, underlying epistemology, and goal(s). Qualitative data encompasses a variety of media, methodologies, and styles that exist on a spectrum underpinned by epistemology. The research question and the data sources are informed by the researcher's epistemological viewpoint. Given the heterogeneous applications of qualitative research in nursing, medicine, and public health there are circumstances where qualitative AI coding is appropriate, but this should be congruent with the aims and underlying epistemology of the research.
|
Bergman AJ; McNabb KC; Relf MV; Dredze MH
| 10
|
|||
37038381
|
Overview of Early ChatGPT's Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts.
| 2,023
|
Cureus
|
ChatGPT, an artificial intelligence chatbot, has rapidly gained prominence in various domains, including medical education and healthcare literature. This hybrid narrative review, conducted collaboratively by human authors and ChatGPT, aims to summarize and synthesize the current knowledge of ChatGPT in the indexed medical literature during its initial four months. A search strategy was employed in PubMed and EuropePMC databases, yielding 65 and 110 papers, respectively. These papers focused on ChatGPT's impact on medical education, scientific research, medical writing, ethical considerations, diagnostic decision-making, automation potential, and criticisms. The findings indicate a growing body of literature on ChatGPT's applications and implications in healthcare, highlighting the need for further research to assess its effectiveness and ethical concerns.
|
Temsah O; Khan SA; Chaiah Y; Senjab A; Alhasan K; Jamal A; Aljamaan F; Malki KH; Halwani R; Al-Tawfiq JA; Temsah MH; Al-Eyadhy A
| 10
|
|||
37399030
|
ChatGPT, GPT-4, and Other Large Language Models: The Next Revolution for Clinical Microbiology?
| 2,023
|
Clinical infectious diseases : an official publication of the Infectious Diseases Society of America
|
ChatGPT, GPT-4, and Bard are highly advanced natural language process-based computer programs (chatbots) that simulate and process human conversation in written or spoken form. Recently released by the company OpenAI, ChatGPT was trained on billions of unknown text elements (tokens) and rapidly gained wide attention for its ability to respond to questions in an articulate manner across a wide range of knowledge domains. These potentially disruptive large language model (LLM) technologies have a broad range of conceivable applications in medicine and medical microbiology. In this opinion article, I describe how chatbot technologies work and discuss the strengths and weaknesses of ChatGPT, GPT-4, and other LLMs for applications in the routine diagnostic laboratory, focusing on various use cases for the pre- to post-analytical process.
|
Egli A
| 10
|
|||
38911678
|
ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research.
| 2,024
|
Frontiers in veterinary science
|
ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide specific guidance and actionable examples of how generative AI can be directly utilized by veterinary professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize ChatGPT for exam preparation. ChatGPT can aid in academic writing tasks in research, but veterinary publishers have set specific requirements for authors to follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. This review addresses ethical considerations, provides learning resources, and offers tangible examples to guide responsible implementation. A table of key takeaways was provided to summarize this review. By highlighting potential benefits and limitations, this review equips veterinarians, educators, and researchers to harness the power of ChatGPT effectively.
|
Chu CP
| 10
|
|||
37085182
|
Early applications of ChatGPT in medical practice, education and research.
| 2,023
|
Clinical medicine (London, England)
|
ChatGPT, which can automatically generate written responses to queries using internet sources, soon went viral after its release at the end of 2022. The performance of ChatGPT on medical exams shows results near the passing threshold, making it comparable to third-year medical students. It can also write academic abstracts or reviews at an acceptable level. However, it is not clear how ChatGPT deals with harmful content, misinformation or plagiarism; therefore, authors using ChatGPT professionally for academic writing should be cautious. ChatGPT also has the potential to facilitate the interaction between healthcare providers and patients in various ways. However, sophisticated tasks such as understanding the human anatomy are still a limitation of ChatGPT. ChatGPT can simplify radiological reports, but the possibility of incorrect statements and missing medical information remains. Although ChatGPT has the potential to change medical practice, education and research, further improvements of this application are needed for regular use in medicine.
|
Sedaghat S
| 10
|
|||
38310152
|
Evaluating AI in medicine: a comparative analysis of expert and ChatGPT responses to colorectal cancer questions.
| 2,024
|
Scientific reports
|
Colorectal cancer (CRC) is a global health challenge, and patient education plays a crucial role in its early detection and treatment. Despite progress in AI technology, as exemplified by transformer-like models such as ChatGPT, there remains a lack of in-depth understanding of their efficacy for medical purposes. We aimed to assess the proficiency of ChatGPT in the field of popular science, specifically in answering questions related to CRC diagnosis and treatment, using the book "Colorectal Cancer: Your Questions Answered" as a reference. In general, 131 valid questions from the book were manually input into ChatGPT. Responses were evaluated by clinical physicians in the relevant fields based on comprehensiveness and accuracy of information, and scores were standardized for comparison. Not surprisingly, ChatGPT showed high reproducibility in its responses, with high uniformity in comprehensiveness, accuracy, and final scores. However, the mean scores of ChatGPT's responses were significantly lower than the benchmarks, indicating it has not reached an expert level of competence in CRC. While it could provide accurate information, it lacked in comprehensiveness. Notably, ChatGPT performed well in domains of radiation therapy, interventional therapy, stoma care, venous care, and pain control, almost rivaling the benchmarks, but fell short in basic information, surgery, and internal medicine domains. While ChatGPT demonstrated promise in specific domains, its general efficiency in providing CRC information falls short of expert standards, indicating the need for further advancements and improvements in AI technology for patient education in healthcare.
|
Peng W; Feng Y; Yao C; Zhang S; Zhuo H; Qiu T; Zhang Y; Tang J; Gu Y; Sun Y
| 0-1
|
|||
38244054
|
Assessment of Pathology Domain-Specific Knowledge of ChatGPT and Comparison to Human Performance.
| 2,024
|
Archives of pathology & laboratory medicine
|
CONTEXT: Artificial intelligence algorithms hold the potential to fundamentally change many aspects of society. Application of these tools, including the publicly available ChatGPT, has demonstrated impressive domain-specific knowledge in many areas, including medicine. OBJECTIVES: To understand the level of pathology domain-specific knowledge for ChatGPT using different underlying large language models, GPT-3.5 and the updated GPT-4. DESIGN: An international group of pathologists (n = 15) was recruited to generate pathology-specific questions at a similar level to those that could be seen on licensing (board) examinations. The questions (n = 15) were answered by GPT-3.5, GPT-4, and a staff pathologist who recently passed their Canadian pathology licensing exams. Participants were instructed to score answers on a 5-point scale and to predict which answer was written by ChatGPT. RESULTS: GPT-3.5 performed at a similar level to the staff pathologist, while GPT-4 outperformed both. The overall score for both GPT-3.5 and GPT-4 was within the range of meeting expectations for a trainee writing licensing examinations. In all but one question, the reviewers were able to correctly identify the answers generated by GPT-3.5. CONCLUSIONS: By demonstrating the ability of ChatGPT to answer pathology-specific questions at a level similar to (GPT-3.5) or exceeding (GPT-4) a trained pathologist, this study highlights the potential of large language models to be transformative in this space. In the future, more advanced iterations of these algorithms with increased domain-specific knowledge may have the potential to assist pathologists and enhance pathology resident training.
|
Wang AY; Lin S; Tran C; Homer RJ; Wilsdon D; Walsh JC; Goebel EA; Sansano I; Sonawane S; Cockenpot V; Mukhopadhyay S; Taskin T; Zahra N; Cima L; Semerci O; Ozamrak BG; Mishra P; Vennavalli NS; Chen PC; Cecchini MJ
| 21
|
|||
39035703
|
The role of artificial intelligence in cosmetic and functional gynecology: Stepping into the third millennium.
| 2,024
|
European journal of obstetrics & gynecology and reproductive biology: X
|
Cosmetic and functional gynecology have gained popularity among patients, but the scientific literature in this field, particularly regarding the cosmetic aspect, is lacking. The use of evidence-based medicine is crucial to validate diagnostic tools and treatment protocols. However, the advent of artificial intelligence (AI) offers a promising solution to address this issue. ChatGPT, a sophisticated language model, can revolutionize AI in medicine, enabling accurate diagnosis, personalized treatment plans, and expedited research analysis. Cosmetic and functional gynecology can leverage AI to develop the field and improve evidence gathering. AI can aid in precise and personalized diagnosis, implement standardized assessment tools, simulate treatment outcomes, and assess under-skin anatomy through virtual reality. AI tools can assist clinicians in diagnosing and comparing difficult cases, calculate treatment risks, and contribute to standardization by collecting global evidence and generating guidelines. The use of AI in cosmetic and functional gynecology holds significant potential to advance the field and improve patient outcomes. This novel combination of AI and gynecology represents a groundbreaking development in medicine, emphasizing the importance of appropriate and correct AI usage.
|
Buzzaccarini G; Degliuomini RS; Etrusco A; Giannini A; D'Amato A; Gkouvi K; Berreni N; Magon N; Candiani M; Salvatore S
| 0-1
|
|||
38873307
|
Assessing the utility of artificial intelligence throughout the triage outpatients: a prospective randomized controlled clinical study.
| 2,024
|
Frontiers in public health
|
Currently, there are still many patients who require outpatient triage assistance. ChatGPT, a natural language processing tool powered by artificial intelligence technology, is increasingly utilized in medicine. To facilitate and expedite patients' navigation to the appropriate department, we conducted an outpatient triage evaluation of ChatGPT. For this evaluation, we posed 30 highly representative and common outpatient questions to ChatGPT and scored its responses using a panel of five experienced doctors. The consistency of manual triage and ChatGPT triage was assessed by five experienced doctors, and statistical analysis was performed using the Chi-square test. The expert ratings of ChatGPT's answers to these 30 frequently asked questions revealed 17 responses earning very high scores (10 and 9.5 points), 7 earning high scores (9 points), and 6 receiving low scores (8 and 7 points). Additionally, we conducted a prospective cohort study in which 45 patients completed forms detailing gender, age, and symptoms. Triage was then performed by outpatient triage staff and ChatGPT. Among the 45 patients, we found a high level of agreement between manual triage and ChatGPT triage (consistency: 93.3-100%, p<0.0001). We were pleasantly surprised to observe that ChatGPT's responses were highly professional, comprehensive, and humanized. This innovation can help patients gain valuable treatment time, improve patient diagnosis and cure rates, and alleviate the pressure of medical staff shortages.
|
Liu X; Lai R; Wu C; Yan C; Gan Z; Yang Y; Zeng X; Liu J; Liao L; Lin Y; Jing H; Zhang W
| 10
|
|||
40267969
|
Comparative benchmarking of the DeepSeek large language model on medical tasks and clinical reasoning.
| 2,025
|
Nature medicine
|
DeepSeek is a newly introduced large language model (LLM) designed for enhanced reasoning, but its medical-domain capabilities have not yet been evaluated. Here we assessed the capabilities of three LLMs (DeepSeek-R1, ChatGPT-o1, and Llama 3.1-405B) in performing four different medical tasks: answering questions from the United States Medical Licensing Examination (USMLE), interpreting and reasoning on the basis of text-based diagnostic and management cases, providing tumor classification according to RECIST 1.1 criteria and providing summaries of diagnostic imaging reports across multiple modalities. In the USMLE test, the performance of DeepSeek-R1 (accuracy 0.92) was slightly inferior to that of ChatGPT-o1 (accuracy 0.95; P = 0.04) but better than that of Llama 3.1-405B (accuracy 0.83; P < 10(-3)). For text-based case challenges, DeepSeek-R1 performed similarly to ChatGPT-o1 (accuracy of 0.57 versus 0.55; P = 0.76 and 0.74 versus 0.76; P = 0.06, using New England Journal of Medicine and Medicilline databases, respectively). For RECIST classifications, DeepSeek-R1 also performed similarly to ChatGPT-o1 (0.74 versus 0.81; P = 0.10). Diagnostic reasoning steps provided by DeepSeek were deemed more accurate than those provided by ChatGPT and Llama 3.1-405B (average Likert score of 3.61, 3.22 and 3.13, respectively, P = 0.005 and P < 10(-3)). However, summarized imaging reports provided by DeepSeek-R1 exhibited lower global quality than those provided by ChatGPT-o1 (5-point Likert score: 4.5 versus 4.8; P < 10(-3)). This study highlights the potential of DeepSeek-R1 LLM for medical applications but also underlines areas needing improvements.
|
Tordjman M; Liu Z; Yuce M; Fauveau V; Mei Y; Hadjadj J; Bolger I; Almansour H; Horst C; Parihar AS; Geahchan A; Meribout A; Yatim N; Ng N; Robson P; Zhou A; Lewis S; Huang M; Deyer T; Taouli B; Lee HC; Fayad ZA; Mei X
| 21
|
|||
38477276
|
AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors.
| 2,023
|
The Journal of law, medicine & ethics : a journal of the American Society of Law, Medicine & Ethics
|
Developers and vendors of large language models ("LLMs"), with ChatGPT, Google Bard, and Microsoft's Bing at the forefront, can be subject to the Health Insurance Portability and Accountability Act of 1996 ("HIPAA") when they process protected health information ("PHI") on behalf of HIPAA-covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA.
|
Rezaeikhonakdar D
| 10
|
|||
38826194
|
Exploiting ChatGPT for Diagnosing Autism-Associated Language Disorders and Identifying Distinct Features.
| 2,024
|
Research square
|
Diagnosing language disorders associated with autism is a complex and nuanced challenge, often hindered by the subjective nature and variability of traditional assessment methods. Traditional diagnostic methods not only require intensive human effort but also often result in delayed interventions due to their lack of speed and specificity. In this study, we explored the application of ChatGPT, a state-of-the-art large language model, to overcome these obstacles by enhancing diagnostic accuracy and profiling specific linguistic features indicative of autism. Leveraging ChatGPT's advanced natural language processing capabilities, this research aims to streamline and refine the diagnostic process. Specifically, we compared ChatGPT's performance with that of conventional supervised learning models, including BERT, a model acclaimed for its effectiveness in various natural language processing tasks. We showed that ChatGPT substantially outperformed these models, achieving over 13% improvement in both accuracy and F1-score in a zero-shot learning configuration. This marked enhancement highlights the model's potential as a superior tool for neurological diagnostics. Additionally, we identified ten distinct features of autism-associated language disorders that vary significantly across different experimental scenarios. These features, which included echolalia, pronoun reversal, and atypical language usage, were crucial for accurately diagnosing ASD and customizing treatment plans. Together, our findings advocate for adopting sophisticated AI tools like ChatGPT in clinical settings to assess and diagnose developmental disorders. Our approach not only promises greater diagnostic precision but also aligns with the goals of personalized medicine, potentially transforming the evaluation landscape for autism and similar neurological conditions.
|
Hu C; Li W; Ruan M; Yu X; Paul LK; Wang S; Li X
| 10
|
|||
38056135
|
ChatGPT as an aid for pathological diagnosis of cancer.
| 2,024
|
Pathology, research and practice
|
Diagnostic workup of cancer patients is highly reliant on the science of pathology using cytopathology, histopathology, and other ancillary techniques like immunohistochemistry and molecular cytogenetics. Data processing and learning by means of artificial intelligence (AI) has become a spearhead for the advancement of medicine, with pathology and laboratory medicine being no exceptions. ChatGPT, an artificial intelligence (AI)-based chatbot recently launched by OpenAI, is currently the talk of the town, and its role in cancer diagnosis is also being explored meticulously. Pathology workflow, by integration of digital slides, implementation of advanced algorithms, and computer-aided diagnostic techniques, extends the frontiers of the pathologist's view beyond a microscopic slide and enables effective integration, assimilation, and utilization of knowledge that is beyond human limits and boundaries. Despite its numerous advantages in the pathological diagnosis of cancer, it comes with several challenges, such as the integration of digital slides with input language parameters, problems of bias, and legal issues, which must be addressed soon so that pathologists diagnosing malignancies stay on the same bandwagon and do not miss the train.
|
Malik S; Zaheer S
| 10
|
|||
40119714
|
Analysis of responses from artificial intelligence programs to medication-related questions derived from critical care guidelines.
| 2,025
|
American journal of health-system pharmacy : AJHP : official journal of the American Society of Health-System Pharmacists
|
PURPOSE: To evaluate the recommendations given by 4 publicly available artificial intelligence (AI) programs in comparison to recommendations in current clinical practice guidelines (CPGs) focused on critically ill adults. METHODS: This study evaluated 4 publicly available large language models (LLMs): ChatGPT 4.0, Microsoft Copilot, Google Gemini Version 1.5, and Meta AI. Each AI chatbot was prompted with medication-related questions related to 6 CPGs published by the Society of Critical Care Medicine (SCCM) and also asked to provide references to support its recommendations. Responses were categorized as correct, partially correct, not correct, or "other" (eg, the LLM answered a question not asked). RESULTS: In total, 43 responses were recorded for each AI program, with a significant difference (P = 0.007) in response types by AI program. Microsoft Copilot had the highest proportion of correct recommendations, followed by Meta AI, ChatGPT 4.0, and Google Gemini. All 4 LLMs gave some incorrect recommendations, with Gemini having the most incorrect responses, followed closely by ChatGPT. Copilot had the most responses in the "other" category (n = 5, 11.63%). On average, ChatGPT provided the greatest number of references per question (n = 4.54), followed by Google Gemini (n = 3.43), Meta AI (n = 3.06), and Microsoft Copilot (n = 2.04). CONCLUSION: Although they showed potential for future utility to pharmacists with further development and refinement, the evaluated AI programs did not consistently give accurate medication-related recommendations for the purpose of answering clinical questions such as those pertaining to critical care CPGs.
|
Williams B; Erstad BL
| 10
|
|||
38578309
|
Diagnostic power of ChatGPT 4 in distal radius fracture detection through wrist radiographs.
| 2,024
|
Archives of orthopaedic and trauma surgery
|
Distal radius fractures rank among the most prevalent fractures in humans, necessitating accurate radiological imaging and interpretation for optimal diagnosis and treatment. In addition to human radiologists, artificial intelligence systems are increasingly employed for radiological assessments. Since 2023, ChatGPT 4 has offered image analysis capabilities, which can also be used for the analysis of wrist radiographs. This study evaluates the diagnostic power of ChatGPT 4 in identifying distal radius fractures, comparing it with a board-certified radiologist, a hand surgery resident, a medical student, and the well-established AI Gleamer BoneView. Results demonstrate ChatGPT 4's good diagnostic accuracy (sensitivity 0.88, specificity 0.98, diagnostic power (AUC) 0.93), significantly surpassing the medical student (sensitivity 0.98, specificity 0.72, diagnostic power (AUC) 0.85; p = 0.04). Nevertheless, the diagnostic power of ChatGPT 4 lags behind the hand surgery resident (sensitivity 0.99, specificity 0.98, diagnostic power (AUC) 0.985; p = 0.014) and Gleamer BoneView (sensitivity 1.00, specificity 0.98, diagnostic power (AUC) 0.99; p = 0.006). This study highlights the utility and potential applications of artificial intelligence in modern medicine, emphasizing ChatGPT 4 as a valuable tool for enhancing diagnostic capabilities in the field of medical imaging.
|
Mert S; Stoerzer P; Brauer J; Fuchs B; Haas-Lutzenberger EM; Demmer W; Giunta RE; Nuernberger T
| 0-1
|
|||
38409350
|
Leveraging generative AI to prioritize drug repurposing candidates for Alzheimer's disease with real-world clinical validation.
| 2,024
|
NPJ digital medicine
|
Drug repurposing represents an attractive alternative to the costly and time-consuming process of new drug development, particularly for serious, widespread conditions with limited effective treatments, such as Alzheimer's disease (AD). Emerging generative artificial intelligence (GAI) technologies like ChatGPT offer the promise of expediting the review and summary of scientific knowledge. To examine the feasibility of using GAI for identifying drug repurposing candidates, we iteratively tasked ChatGPT with proposing the twenty most promising drugs for repurposing in AD, and tested the top ten for risk of incident AD in exposed and unexposed individuals over age 65 in two large clinical datasets: (1) Vanderbilt University Medical Center and (2) the All of Us Research Program. Among the candidates suggested by ChatGPT, metformin, simvastatin, and losartan were associated with lower AD risk in meta-analysis. These findings suggest GAI technologies can assimilate scientific insights from an extensive Internet-based search space, helping to prioritize drug repurposing candidates and facilitate the treatment of diseases.
|
Yan C; Grabowska ME; Dickson AL; Li B; Wen Z; Roden DM; Michael Stein C; Embi PJ; Peterson JF; Feng Q; Malin BA; Wei WQ
| 0-1
|
|||
37503019
|
Leveraging Generative AI to Prioritize Drug Repurposing Candidates: Validating Identified Candidates for Alzheimer's Disease in Real-World Clinical Datasets.
| 2,023
|
Research square
|
Drug repurposing represents an attractive alternative to the costly and time-consuming process of new drug development, particularly for serious, widespread conditions with limited effective treatments, such as Alzheimer's disease (AD). Emerging generative artificial intelligence (GAI) technologies like ChatGPT offer the promise of expediting the review and summary of scientific knowledge. To examine the feasibility of using GAI for identifying drug repurposing candidates, we iteratively tasked ChatGPT with proposing the twenty most promising drugs for repurposing in AD, and tested the top ten for risk of incident AD in exposed and unexposed individuals over age 65 in two large clinical datasets: 1) Vanderbilt University Medical Center and 2) the All of Us Research Program. Among the candidates suggested by ChatGPT, metformin, simvastatin, and losartan were associated with lower AD risk in meta-analysis. These findings suggest GAI technologies can assimilate scientific insights from an extensive Internet-based search space, helping to prioritize drug repurposing candidates and facilitate the treatment of diseases.
|
Wei WQ; Yan C; Grabowska M; Dickson A; Li B; Wen Z; Roden D; Stein C; Embi P; Peterson J; Feng Q; Malin B
| 0-1
|
|||
37461512
|
Leveraging Generative AI to Prioritize Drug Repurposing Candidates: Validating Identified Candidates for Alzheimer's Disease in Real-World Clinical Datasets.
| 2,023
|
medRxiv : the preprint server for health sciences
|
Drug repurposing represents an attractive alternative to the costly and time-consuming process of new drug development, particularly for serious, widespread conditions with limited effective treatments, such as Alzheimer's disease (AD). Emerging generative artificial intelligence (GAI) technologies like ChatGPT offer the promise of expediting the review and summary of scientific knowledge. To examine the feasibility of using GAI for identifying drug repurposing candidates, we iteratively tasked ChatGPT with proposing the twenty most promising drugs for repurposing in AD, and tested the top ten for risk of incident AD in exposed and unexposed individuals over age 65 in two large clinical datasets: 1) Vanderbilt University Medical Center and 2) the All of Us Research Program. Among the candidates suggested by ChatGPT, metformin, simvastatin, and losartan were associated with lower AD risk in meta-analysis. These findings suggest GAI technologies can assimilate scientific insights from an extensive Internet-based search space, helping to prioritize drug repurposing candidates and facilitate the treatment of diseases.
|
Yan C; Grabowska ME; Dickson AL; Li B; Wen Z; Roden DM; Stein CM; Embi PJ; Peterson JF; Feng Q; Malin BA; Wei WQ
| 0-1
|
|||
39359001
|
Evaluating the capability of ChatGPT in predicting drug-drug interactions: Real-world evidence using hospitalized patient data.
| 2,024
|
British journal of clinical pharmacology
|
Drug-drug interactions (DDIs) present a significant health burden, compounded by clinician time constraints and poor patient health literacy. We assessed the ability of ChatGPT (generative artificial intelligence-based large language model) to predict DDIs in a real-world setting. Demographics, diagnoses and prescribed medicines for 120 hospitalized patients were input through three standardized prompts to ChatGPT version 3.5 and compared against pharmacist DDI evaluation to estimate diagnostic accuracy. Area under receiver operating characteristic and inter-rater reliability (Cohen's and Fleiss' kappa coefficients) were calculated. ChatGPT's responses differed based on prompt wording style, with higher sensitivity for prompts mentioning 'drug interaction'. Confusion matrices displayed low true positive and high true negative rates, and there was minimal agreement between ChatGPT and pharmacists (Cohen's kappa values 0.077-0.143). Low sensitivity values suggest a lack of success in identifying DDIs by ChatGPT, and further development is required before it can reliably assess potential DDIs in real-world scenarios.
|
Radha Krishnan RP; Hung EH; Ashford M; Edillo CE; Gardner C; Hatrick HB; Kim B; Lai AWY; Li X; Zhao YX; Raubenheimer JE
| 10
|
|||
39190012
|
Toward an Explainable Large Language Model for the Automatic Identification of the Drug-Induced Liver Injury Literature.
| 2,024
|
Chemical research in toxicology
|
Drug-induced liver injury (DILI) stands as a significant concern in drug safety, representing the primary cause of acute liver failure. Identifying the scientific literature related to DILI is crucial for monitoring, investigating, and conducting meta-analyses of drug safety issues. Given the intricate and often obscure nature of drug interactions, simple keyword searching can be insufficient for the exhaustive retrieval of the DILI-relevant literature. Manual curation of DILI-related publications demands pharmaceutical expertise and is susceptible to errors, severely limiting throughput. Despite numerous efforts utilizing cutting-edge natural language processing and deep learning techniques to automatically identify the DILI-related literature, their performance remains suboptimal for real-world applications in clinical research and regulatory contexts. In the past year, large language models (LLMs) such as ChatGPT and its open-source counterpart LLaMA have achieved groundbreaking progress in natural language understanding and question answering, paving the way for the automated, high-throughput identification of the DILI-related literature and subsequent analysis. Leveraging a large-scale public dataset comprising 14 203 training publications from the CAMDA 2022 literature AI challenge, we have developed what we believe to be the first LLM specialized in DILI analysis based on LLaMA-2. In comparison with other smaller language models such as BERT, GPT, and their variants, LLaMA-2 exhibits an enhanced out-of-fold accuracy of 97.19% and area under the ROC curve of 0.9947 using 3-fold cross-validation on the training set. Despite LLMs' initial design for dialogue systems, our study illustrates their successful adaptation into accurate classifiers for automated identification of the DILI-related literature from vast collections of documents. This work is a step toward unleashing the potential of LLMs in the context of regulatory science and facilitating the regulatory review process.
|
Ma C; Wolfinger RD
| 10
|
|||
40114317
|
Application of Large Language Models in Drug-Induced Osteotoxicity Prediction.
| 2,025
|
Journal of chemical information and modeling
|
Drug-induced osteotoxicity refers to the harmful effects certain drugs have on the skeletal system, posing significant safety risks. These toxic effects are a key concern in clinical practice, drug development, and environmental management. However, existing toxicity assessment models lack specialized data sets and algorithms for predicting osteotoxicity. In our study, we collected osteotoxic molecules and employed various large language models, including DeepSeek and ChatGPT, alongside traditional machine learning methods to predict their properties. Among these, the DeepSeek R1 and ChatGPT o3 models achieved ACC values of 0.87 and 0.88, respectively. Our results indicate that machine learning methods can assist in evaluating the impact of harmful substances on bone health during drug development, improving safety protocols, mitigating skeletal side effects, and enhancing treatment outcomes and public safety. Furthermore, it highlights the potential of large language models in predicting molecular toxicity and their significance in the fields of health and chemical sciences.
|
Chen YQ; Yu T; Song ZQ; Wang CY; Luo JT; Xiao Y; Qiu H; Wang QQ; Jin HM
| 10
|
|||
39593819
|
Artificial Intelligence Diagnosing of Oral Lichen Planus: A Comparative Study.
| 2,024
|
Bioengineering (Basel, Switzerland)
|
Early diagnosis of oral lichen planus (OLP) is challenging, which traditionally is dependent on clinical experience and subjective interpretation. Artificial intelligence (AI) technology has been widely applied in objective and rapid diagnoses. In this study, we aim to investigate the potential of AI diagnosis in OLP and evaluate its effectiveness in improving diagnostic accuracy and accelerating clinical decision making. A total of 128 confirmed OLP patients were included, and lesion images from various anatomical sites were collected. The diagnosis was performed using AI platforms, including ChatGPT-4O, ChatGPT (Diagram-Date extension), and Claude Opus, for AI directly identification and AI pre-training identification. After OLP feature training, the diagnostic accuracy of the AI platforms significantly improved, with the overall recognition rates of ChatGPT-4O, ChatGPT (Diagram-Date extension), and Claude Opus increasing from 59%, 68%, and 15% to 77%, 80%, and 50%, respectively. Additionally, the pre-training recognition rates for buccal mucosa reached 94%, 93%, and 56%, respectively. However, the AI platforms performed less effectively when recognizing lesions in less common sites and complex cases; for instance, the pre-training recognition rates for the gums were only 60%, 60%, and 20%, demonstrating significant limitations. The study highlights the strengths and limitations of different AI technologies and provides a reference for future AI applications in oral medicine.
|
Yu S; Sun W; Mi D; Jin S; Wu X; Xin B; Zhang H; Wang Y; Sun X; He X
| 32
|
|||
38580746
|
Scientific figures interpreted by ChatGPT: strengths in plot recognition and limits in color perception.
| 2,024
|
NPJ precision oncology
|
Emerging studies underscore the promising capabilities of large language model-based chatbots in conducting basic bioinformatics data analyses. The recent feature of accepting image inputs by ChatGPT, also known as GPT-4V(ision), motivated us to explore its efficacy in deciphering bioinformatics scientific figures. Our evaluation with examples in cancer research, including sequencing data analysis, multimodal network-based drug repositioning, and tumor clonal evolution, revealed that ChatGPT can proficiently explain different plot types and apply biological knowledge to enrich interpretations. However, it struggled to provide accurate interpretations when color perception and quantitative analysis of visual elements were involved. Furthermore, while the chatbot can draft figure legends and summarize findings from the figures, stringent proofreading is imperative to ensure the accuracy and reliability of the content.
|
Wang J; Ye Q; Liu L; Guo NL; Hu G
| 0-1
|
|||
37904927
|
Bioinformatics Illustrations Decoded by ChatGPT: The Good, The Bad, and The Ugly.
| 2,023
|
bioRxiv : the preprint server for biology
|
Emerging studies underscore the promising capabilities of large language model-based chatbots in conducting fundamental bioinformatics data analyses. The recent feature of accepting image-inputs by ChatGPT motivated us to explore its efficacy in deciphering bioinformatics illustrations. Our evaluation with examples in cancer research, including sequencing data analysis, multimodal network-based drug repositioning, and tumor clonal evolution, revealed that ChatGPT can proficiently explain different plot types and apply biological knowledge to enrich interpretations. However, it struggled to provide accurate interpretations when quantitative analysis of visual elements was involved. Furthermore, while the chatbot can draft figure legends and summarize findings from the figures, stringent proofreading is imperative to ensure the accuracy and reliability of the content.
|
Wang J; Ye Q; Liu L; Lan Guo N; Hu G
| 0-1
|
|||
37866949
|
[Reflections on the Implications of the Developments in ChatGPT for Changes in Medical Education Models].
| 2,023
|
Sichuan da xue xue bao. Yi xue ban = Journal of Sichuan University. Medical science edition
|
Ever since its official launch, Chat Generative Pre-Trained Transformer, or ChatGPT, a natural language processing tool driven by artificial intelligence (AI) technology, has attracted much attention from the education community. ChatGPT can play an important role in the field of medical education, with its potential applications ranging from assisting teachers in designing individualized teaching scenarios to enhancing students' practical ability for solving clinical problems and improving teaching and research efficiency. With the developments in technology, it is inevitable that ChatGPT, or other generative AI models, will be thoroughly integrated in more and more medical contexts, which will further enhance the efficiency and quality of medical services and allow doctors to spend more time interacting with patients and implement personalized health management. Herein, we suggested that proactive reflections be made to figure out the best way to cultivate health professionals in the context of New Medical Education, to help more medical professionals enhance their understanding of developments in artificial intelligence, and to make preparations for the challenges that will emerge in the new round of technological revolution. Medical educators should focus on guiding students to make proper use of AI tools in the appropriate context, thereby preventing abuse or overreliance caused by a lack of discriminating ability. Teachers should focus on helping medical students make improvements in clinical reasoning skills, self-directed learning, and clinical practical skills. Teachers should stress the importance for medical students to understand the philosophical implications of the mind-body unity concept, holistic medical thinking, and systematic medical thinking. It is important to enhance medical students' humanistic qualities, cultivate their empathy and communication skills, and continually enhance their ability to meet the requirements of individualized precision diagnosis and treatment so that they will better adapt to the future developments in medicine.
|
Qu X; Yang J; Chen T; Zhang W
| 10
|
|||
36960451
|
AI-generated research paper fabrication and plagiarism in the scientific community.
| 2,023
|
Patterns (New York, N.Y.)
|
Fabricating research within the scientific community has consequences for one's credibility and undermines honest authors. We demonstrate the feasibility of fabricating research using an AI-based language model chatbot. Human detection versus AI detection will be compared to determine accuracy in identifying fabricated works. The risks of utilizing AI-generated research works will be underscored and reasons for falsifying research will be highlighted.
|
Elali FR; Rachid LN
| 10
|
|||
37918623
|
Leveraging GPT-4 for food effect summarization to enhance product-specific guidance development via iterative prompting.
| 2,023
|
Journal of biomedical informatics
|
Food effect summarization from New Drug Application (NDA) is an essential component of product-specific guidance (PSG) development and assessment, which provides the basis of recommendations for fasting and fed bioequivalence studies to guide the pharmaceutical industry for developing generic drug products. However, manual summarization of food effect from extensive drug application review documents is time-consuming. Therefore, there is a need to develop automated methods to generate food effect summary. Recent advances in natural language processing (NLP), particularly large language models (LLMs) such as ChatGPT and GPT-4, have demonstrated great potential in improving the effectiveness of automated text summarization, but its ability with regard to the accuracy in summarizing food effect for PSG assessment remains unclear. In this study, we introduce a simple yet effective approach, iterative prompting, which allows one to interact with ChatGPT or GPT-4 more effectively and efficiently through multi-turn interaction. Specifically, we propose a three-turn iterative prompting approach to food effect summarization in which the keyword-focused and length-controlled prompts are respectively provided in consecutive turns to refine the quality of the generated summary. We conduct a series of extensive evaluations, ranging from automated metrics to FDA professionals and even evaluation by GPT-4, on 100 NDA review documents selected over the past five years. We observe that the summary quality is progressively improved throughout the iterative prompting process. Moreover, we find that GPT-4 performs better than ChatGPT, as evaluated by FDA professionals (43% vs. 12%) and GPT-4 (64% vs. 35%). Importantly, all the FDA professionals unanimously rated that 85% of the summaries generated by GPT-4 are factually consistent with the golden reference summary, a finding further supported by GPT-4 rating of 72% consistency. Taken together, these results strongly suggest a great potential for GPT-4 to draft food effect summaries that could be reviewed by FDA professionals, thereby improving the efficiency of the PSG assessment cycle and promoting generic drug product development.
|
Shi Y; Ren P; Wang J; Han B; ValizadehAslani T; Agbavor F; Zhang Y; Hu M; Zhao L; Liang H
| 10
|
|||
39216679
|
Editorial Commentary: Large Language Models Like ChatGPT Show Promise, but Clinical Use of Artificial Intelligence Requires Physician Partnership.
| 2,025
|
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
|
Forcing ChatGPT and other large language models to perform roles reserved for physicians and other health care professionals-namely evaluation, management, and triage-poses a threat from regulatory, risk management, and professional perspectives. The clinical practice of medicine would benefit tremendously from automated administrative support with systems-based transparency and fluidity-not substitution for clinical diagnostics and decision making. ChatGPT and other large language models are not intended or authorized for clinical use, let alone to be tested or rubber stamped for this application. The best clinical use cases of artificial intelligence require physician partnership to enable personal care, minimize administrative burden, maximize efficiency, and minimize risk-without substitution of core physician tasks.
|
Ramkumar PN; Woo JJ
| 10
|
|||
38715436
|
The new paradigm in machine learning - foundation models, large language models and beyond: a primer for physicians.
| 2,024
|
Internal medicine journal
|
Foundation machine learning models are deep learning models capable of performing many different tasks using different data modalities such as text, audio, images and video. They represent a major shift from traditional task-specific machine learning prediction models. Large language models (LLM), brought to wide public prominence in the form of ChatGPT, are text-based foundational models that have the potential to transform medicine by enabling automation of a range of tasks, including writing discharge summaries, answering patients' questions and assisting in clinical decision-making. However, such models are not without risk and can potentially cause harm if their development, evaluation and use are devoid of proper scrutiny. This narrative review describes the different types of LLM, their emerging applications and potential limitations and bias and likely future translation into clinical practice.
|
Scott IA; Zuccon G
| 10
|
|||
37893850
|
Leveraging Generative AI and Large Language Models: A Comprehensive Roadmap for Healthcare Integration.
| 2,023
|
Healthcare (Basel, Switzerland)
|
Generative artificial intelligence (AI) and large language models (LLMs), exemplified by ChatGPT, are promising for revolutionizing data and information management in healthcare and medicine. However, there is scant literature guiding their integration for non-AI professionals. This study conducts a scoping literature review to address the critical need for guidance on integrating generative AI and LLMs into healthcare and medical practices. It elucidates the distinct mechanisms underpinning these technologies, such as Reinforcement Learning from Human Feedback (RLHF), including few-shot learning and chain-of-thought reasoning, which differentiates them from traditional, rule-based AI systems. It requires an inclusive, collaborative co-design process that engages all pertinent stakeholders, including clinicians and consumers, to achieve these benefits. Although global research is examining both opportunities and challenges, including ethical and legal dimensions, LLMs offer promising advancements in healthcare by enhancing data management, information retrieval, and decision-making processes. Continued innovation in data acquisition, model fine-tuning, prompt strategy development, evaluation, and system implementation is imperative for realizing the full potential of these technologies. Organizations should proactively engage with these technologies to improve healthcare quality, safety, and efficiency, adhering to ethical and legal guidelines for responsible application.
|
Yu P; Xu H; Hu X; Deng C
| 10
|
|||
38762072
|
Generative artificial intelligence in ophthalmology.
| 2,025
|
Survey of ophthalmology
|
Generative artificial intelligence (AI) has revolutionized medicine over the past several years. A generative adversarial network (GAN) is a deep learning framework that has become a powerful technique in medicine, particularly in ophthalmology for image analysis. In this paper we review the current ophthalmic literature involving GANs, and highlight key contributions in the field. We briefly touch on ChatGPT, another application of generative AI, and its potential in ophthalmology. We also explore the potential uses for GANs in ocular imaging, with a specific emphasis on 3 primary domains: image enhancement, disease identification, and generating of synthetic data. PubMed, Ovid MEDLINE, Google Scholar were searched from inception to October 30, 2022, to identify applications of GAN in ophthalmology. A total of 40 papers were included in this review. We cover various applications of GANs in ophthalmic-related imaging including optical coherence tomography, orbital magnetic resonance imaging, fundus photography, and ultrasound; however, we also highlight several challenges that resulted in the generation of inaccurate and atypical results during certain iterations. Finally, we examine future directions and considerations for generative AI in ophthalmology.
|
Waisberg E; Ong J; Kamran SA; Masalkhi M; Paladugu P; Zaman N; Lee AG; Tavakkoli A
| 0-1
|
|||
39059040
|
Evaluating generative AI responses to real-world drug-related questions.
| 2,024
|
Psychiatry research
|
Generative Artificial Intelligence (AI) systems such as OpenAI's ChatGPT, capable of an unprecedented ability to generate human-like text and converse in real time, hold potential for large-scale deployment in clinical settings such as substance use treatment. Treatment for substance use disorders (SUDs) is particularly high stakes, requiring evidence-based clinical treatment, mental health expertise, and peer support. Thus, promises of AI systems addressing deficient healthcare resources and structural bias are relevant within this domain, especially in an anonymous setting. This study explores the effectiveness of generative AI in answering real-world substance use and recovery questions. We collect questions from online recovery forums, use ChatGPT and Meta's LLaMA-2 for responses, and have SUD clinicians rate these AI responses. While clinicians rated the AI-generated responses as high quality, we discovered instances of dangerous disinformation, including disregard for suicidal ideation, incorrect emergency helplines, and endorsement of home detox. Moreover, the AI systems produced inconsistent advice depending on question phrasing. These findings indicate a risky mix of seemingly high-quality, accurate responses upon initial inspection that contain inaccurate and potentially deadly medical advice. Consequently, while generative AI shows promise, its real-world application in sensitive healthcare domains necessitates further safeguards and clinical validation.
|
Giorgi S; Isman K; Liu T; Fried Z; Sedoc J; Curtis B
| 0-1
|
|||
39974299
|
DeepSeek in Healthcare: Revealing Opportunities and Steering Challenges of a New Open-Source Artificial Intelligence Frontier.
| 2,025
|
Cureus
|
Generative Artificial Intelligence (GAI) has driven several advancements in healthcare, with large language models (LLMs) such as OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot demonstrating potential in clinical decision support, medical education, and research acceleration. However, their closed-source architecture, high computational costs, and limited adaptability to specialized medical contexts remained key barriers to universal adoption. Now, with the rise of DeepSeek's DeepThink (R1), an open-source LLM, gaining prominence since mid-January 2025, new opportunities and challenges emerge for healthcare integration and AI-driven research. Unlike proprietary models, DeepSeek fosters continuous learning by leveraging publicly available open-source datasets, possibly enhancing adaptability to the ever-evolving medical knowledge and scientific reasoning. Its transparent, community-driven approach may enable greater customization, regional specialization, and collaboration among data researchers and clinicians. Additionally, DeepSeek supports offline deployment, addressing some data privacy concerns. Despite these promising advantages, DeepSeek presents ethical and regulatory challenges. Users' data privacy worries have emerged, with concerns about user data retention policies and potential developer access to user-generated content without opt-out options. Additionally, when used in healthcare applications, its compliance with China's data-sharing regulations highlights the urgent need for clear international data privacy and governance. Furthermore, like other LLMs, DeepSeek may face limitations related to inherent biases, hallucinations, and output reliability, which warrants rigorous validation and human oversight before clinical application. This editorial explores DeepSeek's potential role in clinical workflows, medical education, and research while also highlighting its challenges related to security, accuracy, and responsible AI governance. With careful implementation, ethical considerations, and international collaboration, DeepSeek and similar LLMs could enhance healthcare innovation, providing cost-effective, scalable AI solutions while ensuring human expertise remains at the forefront of patient care.
|
Temsah A; Alhasan K; Altamimi I; Jamal A; Al-Eyadhy A; Malki KH; Temsah MH
| 10
|
|||
39820845
|
Generative Artificial Intelligence Use in Healthcare: Opportunities for Clinical Excellence and Administrative Efficiency.
| 2,025
|
Journal of medical systems
|
Generative Artificial Intelligence (Gen AI) has transformative potential in healthcare to enhance patient care, personalize treatment options, train healthcare professionals, and advance medical research. This paper examines various clinical and non-clinical applications of Gen AI. In clinical settings, Gen AI supports the creation of customized treatment plans, generation of synthetic data, analysis of medical images, nursing workflow management, risk prediction, pandemic preparedness, and population health management. By automating administrative tasks such as medical documentation, Gen AI has the potential to reduce clinician burnout, freeing more time for direct patient care. Furthermore, application of Gen AI may enhance surgical outcomes by providing real-time feedback and automation of certain tasks in operating rooms. The generation of synthetic data opens new avenues for model training for diseases and simulation, enhancing research capabilities and improving predictive accuracy. In non-clinical contexts, Gen AI improves medical education, public relations, revenue cycle management, healthcare marketing, etc. Its capacity for continuous learning and adaptation enables it to drive ongoing improvements in clinical and operational efficiencies, making healthcare delivery more proactive, predictive, and precise.
|
Bhuyan SS; Sateesh V; Mukul N; Galvankar A; Mahmood A; Nauman M; Rai A; Bordoloi K; Basu U; Samuel J
| 10
|
|||
39563479
|
Mapping the Landscape of Generative Language Models in Dental Education: A Comparison Between ChatGPT and Google Bard.
| 2,025
|
European journal of dental education : official journal of the Association for Dental Education in Europe
|
Generative language models (LLMs) have shown great potential in various fields, including medicine and education. This study evaluated and compared ChatGPT 3.5 and Google Bard within dental education and research. METHODS: We developed seven dental education-related queries to assess each model across various domains: their role in dental education, creation of specific exercises, simulations of dental problems with treatment options, development of assessment tools, proficiency in dental literature and their ability to identify, summarise and critique a specific article. Two blind reviewers scored the responses using defined metrics. The means and standard deviations of the scores were reported, and differences between the scores were analysed using Wilcoxon tests. RESULTS: ChatGPT 3.5 outperformed Bard in several tasks, including the ability to create highly comprehensive, accurate, clear, relevant and specific exercises on dental concepts, generate simulations of dental problems with treatment options and develop assessment tools. On the other hand, Bard was successful in retrieving real research, and it was able to critique the article it selected. Statistically significant differences were noted between the average scores of the two models (p ≤ 0.05) for domains 1 and 3. CONCLUSION: This study highlights the potential of LLMs as dental education tools, enhancing learning through virtual simulations and critical performance analysis. However, the variability in LLMs' performance underscores the need for targeted training, particularly in evidence-based content generation. It is crucial for educators, students and practitioners to exercise caution when considering the delegation of critical educational or healthcare decisions to computer systems.
|
Aldukhail S
| 21
|
|||
37751243
|
A Future of Smarter Digital Health Empowered by Generative Pretrained Transformer.
| 2023
|
Journal of medical Internet research
|
Generative pretrained transformer (GPT) tools have been thriving, as ignited by the remarkable success of OpenAI's recent chatbot product. GPT technology offers countless opportunities to significantly improve or renovate current health care research and practice paradigms, especially digital health interventions and digital health-enabled clinical care, and a future of smarter digital health can thus be expected. In particular, GPT technology can be incorporated through various digital health platforms in homes and hospitals embedded with numerous sensors, wearables, and remote monitoring devices. In this viewpoint paper, we highlight recent research progress that depicts the future picture of a smarter digital health ecosystem through GPT-facilitated centralized communications, automated analytics, personalized health care, and instant decision-making.
|
Miao H; Li C; Wang J
| 10
|
|||
40316716
|
GPT-4's performance in supporting physician decision-making in nephrology multiple-choice questions.
| 2025
|
Scientific reports
|
Generative Pre-trained Transformer (GPT)-4, a versatile conversational artificial intelligence, has potential applications in medicine, but its ability to support physicians' decision-making remains unclear. We evaluated GPT-4's performance in assisting physicians with nephrology questions. Forty-five single-answer multiple-choice questions were extracted from the Core Curriculum in Nephrology articles published in the American Journal of Kidney Diseases from October 2021 to June 2023. Eight junior physicians without board certification and ten senior physicians with board certification answered these questions twice: first unaided, then with the opportunity to revise their answers based on GPT-4's outputs. GPT-4 correctly answered 77.8% of the questions. Before using GPT-4, junior physicians had a median (interquartile range) proportion of correct answers of 53.3% (48.3-53.3), senior physicians 65.6% (60.6-66.7). After GPT-4 support, the median proportion of correct answers significantly increased to 72.2% (68.3-76.1) for juniors and 75.6% (73.3-80.0) for seniors (p = 0.008, p = 0.004). The improvement was significantly higher for junior physicians (p = 0.017). However, senior physicians showed a decreased proportion of correct answers in one of the clinical categories. GPT-4 significantly improved physicians' accuracy in nephrology, especially among less experienced physicians, but may have negative impacts in specific subfields. Careful consideration is required when using GPT-4 to support physicians' decision-making.
|
Noda R; Tanabe K; Ichikawa D; Shibagaki Y
| 21
|
|||
38076902
|
Evaluating ChatGPT as an Agent for Providing Genetic Education.
| 2023
|
bioRxiv : the preprint server for biology
|
Genetic disorders are complex and can greatly impact an individual's health and well-being. In this study, we assess the ability of ChatGPT, a language model developed by OpenAI, to answer questions related to three specific genetic disorders: BRCA1, MLH1, and HFE. ChatGPT has shown it can supply articulate answers to a wide spectrum of questions. However, its ability to answer questions related to genetic disorders has yet to be evaluated. The aim of this study is to perform both quantitative and qualitative assessments of ChatGPT's performance in this area. The ability of ChatGPT to provide accurate and useful information to patients was assessed by genetic experts. Here we show that ChatGPT answered 64.7% of the 68 genetic questions asked and was able to respond coherently to complex questions related to the three genes/conditions. Our results reveal that ChatGPT can provide valuable information to individuals seeking information about genetic disorders, however, it still has some limitations and inaccuracies, particularly in understanding human inheritance patterns. The results of this study have implications for both genomics and medicine and can inform future developments in this area. AI platforms, like ChatGPT, have significant potential in the field of genomics. As these technologies become integrated into consumer-facing products, appropriate oversight is required to ensure accurate and safe delivery of medical information. With such oversight and training specifically for genetic information, these platforms could have the potential to augment some clinical interactions.
|
Walton N; Gracefo S; Sutherland N; Kozel BA; Danford CJ; McGrath SP
| 0-1
|
|||
39393621
|
ChatGPT as a medical education resource in cardiology: Mitigating replicability challenges and optimizing model performance.
| 2024
|
Current problems in cardiology
|
Given the rapid development of large language models (LLMs), such as ChatGPT, in their ability to understand and generate human-like texts, these technologies inspired efforts to explore their capabilities in natural language processing tasks, especially those in healthcare contexts. The performance of these tools has been evaluated thoroughly across medicine in diverse tasks, including standardized medical examinations, medical decision-making, and many others. In this journal, Anaya et al. published a study comparing the readability metrics of medical education resources formulated by ChatGPT with those of major U.S. institutions (AHA, ACC, HFSA) about heart failure. In this work, we provide a critical review of this article and further describe approaches to help mitigate challenges in the reproducibility of studies evaluating LLMs in cardiology. Additionally, we provide suggestions to optimize sampling of responses provided by LLMs for future studies. Overall, while the study by Anaya et al. provides a meaningful contribution to the literature on LLMs in cardiology, further comprehensive studies are necessary to address current limitations and further strengthen our understanding of these novel tools.
|
Pillai J; Pillai K
| 10
|
|||
39273750
|
Custom GPTs Enhancing Performance and Evidence Compared with GPT-3.5, GPT-4, and GPT-4o? A Study on the Emergency Medicine Specialist Examination.
| 2024
|
Healthcare (Basel, Switzerland)
|
Given the widespread application of ChatGPT, we aim to evaluate its proficiency in the emergency medicine specialty written examination. Additionally, we compare the performance of GPT-3.5, GPT-4, GPTs, and GPT-4o. The research seeks to ascertain whether custom GPTs possess the essential capabilities and access to knowledge bases necessary for providing accurate information, and to explore the effectiveness and potential of personalized knowledge bases in supporting the education of medical residents. We evaluated the performance of ChatGPT-3.5, GPT-4, custom GPTs, and GPT-4o on the Emergency Medicine Specialist Examination in Taiwan. Two hundred single-choice exam questions were provided to these AI models, and their responses were recorded. Correct rates were compared among the four models, and the McNemar test was applied to paired model data to determine if there were significant changes in performance. Out of 200 questions, GPT-3.5, GPT-4, custom GPTs, and GPT-4o correctly answered 77, 105, 119, and 138 questions, respectively. GPT-4o demonstrated the highest performance, significantly better than GPT-4, which, in turn, outperformed GPT-3.5, while custom GPTs exhibited superior performance compared to GPT-4 but inferior performance compared to GPT-4o, with all p < 0.05. In the emergency medicine specialty written exam, our findings highlight the value and potential of large language models (LLMs), and highlight their strengths and limitations, especially in question types and image-inclusion capabilities. Not only do GPT-4o and custom GPTs facilitate exam preparation, but they also elevate the evidence level in responses and source accuracy, demonstrating significant potential to transform educational frameworks and clinical practices in medicine.
|
Liu CL; Ho CT; Wu TC
| 21
|
|||
37076707
|
GPT-4: a new era of artificial intelligence in medicine.
| 2023
|
Irish journal of medical science
|
GPT-4 is the latest version of ChatGPT which is reported by OpenAI to have greater problem-solving abilities and an even broader knowledge base. We examined GPT-4's ability to inform us about the latest literature in a given area, and to write a discharge summary for a patient following an uncomplicated surgery and its latest image analysis feature which was reported to be able to identify objects in photos. All things considered, GPT-4 has the potential to help drive medical innovation, from aiding with patient discharge notes, summarizing recent clinical trials, providing information on ethical guidelines, and much more.
|
Waisberg E; Ong J; Masalkhi M; Kamran SA; Zaman N; Sarker P; Lee AG; Tavakkoli A
| 10
|
|||
39365643
|
Engine of Innovation in Hospital Pharmacy: Applications and Reflections of ChatGPT.
| 2024
|
Journal of medical Internet research
|
Hospital pharmacy plays an important role in ensuring medical care quality and safety, especially in the area of drug information retrieval, therapy guidance, and drug-drug interaction management. ChatGPT is a powerful artificial intelligence language model that can generate natural-language texts. Here, we explored the applications and reflections of ChatGPT in hospital pharmacy, where it may enhance the quality and efficiency of pharmaceutical care. We also explored ChatGPT's prospects in hospital pharmacy and discussed its working principle, diverse applications, and practical cases in daily operations and scientific research. Meanwhile, the challenges and limitations of ChatGPT, such as data privacy, ethical issues, bias and discrimination, and human oversight, are discussed. ChatGPT is a promising tool for hospital pharmacy, but it requires careful evaluation and validation before it can be integrated into clinical practice. Some suggestions for future research and development of ChatGPT in hospital pharmacy are provided.
|
Li X; Guo H; Li D; Zheng Y
| 10
|
|||
38380541
|
AI, Machine Learning, and ChatGPT in Hypertension.
| 2024
|
Hypertension (Dallas, Tex. : 1979)
|
Hypertension, a leading cause of cardiovascular disease and premature death, remains incompletely understood despite extensive research. Indeed, even though numerous drugs are available, achieving adequate blood pressure control remains a challenge, prompting recent interest in artificial intelligence. To promote the use of machine learning in cardiovascular medicine, this review provides a brief introduction to machine learning and reviews its notable applications in hypertension management and research, such as disease diagnosis and prognosis, treatment decisions, and omics data analysis. The challenges and limitations associated with data-driven predictive techniques are also discussed. The goal of this review is to raise awareness and encourage the hypertension research community to consider machine learning as a key component in developing innovative diagnostic and therapeutic tools for hypertension. By integrating traditional cardiovascular risk factors with genomics, socioeconomic, behavioral, and environmental factors, machine learning may aid in the development of precise risk prediction models and personalized treatment approaches for patients with hypertension.
|
Layton AT
| 10
|
|||
37485215
|
Artificial Intelligence in Ophthalmology: A Comparative Analysis of GPT-3.5, GPT-4, and Human Expertise in Answering StatPearls Questions.
| 2023
|
Cureus
|
Importance Chat Generative Pre-Trained Transformer (ChatGPT) has shown promising performance in various fields, including medicine, business, and law, but its accuracy in specialty-specific medical questions, particularly in ophthalmology, is still uncertain. Purpose This study evaluates the performance of two ChatGPT models (GPT-3.5 and GPT-4) and human professionals in answering ophthalmology questions from the StatPearls question bank, assessing their outcomes, and providing insights into the integration of artificial intelligence (AI) technology in ophthalmology. Methods ChatGPT's performance was evaluated using 467 ophthalmology questions from the StatPearls question bank. These questions were stratified into 11 subcategories, four difficulty levels, and three generalized anatomical categories. The answer accuracy of GPT-3.5, GPT-4, and human participants was assessed. Statistical analysis was conducted via the Kolmogorov-Smirnov test for normality, one-way analysis of variance (ANOVA) for the statistical significance of GPT-3.5 versus GPT-4 versus human performance, and repeated unpaired two-sample t-tests to compare the means of two groups. Results GPT-4 outperformed both GPT-3.5 and human professionals on ophthalmology StatPearls questions, except in the "Lens and Cataract" category. The performance differences were statistically significant overall, with GPT-4 achieving higher accuracy (73.2%) compared to GPT-3.5 (55.5%, p-value < 0.001) and humans (58.3%, p-value < 0.001). There were variations in performance across difficulty levels (rated one to four), but GPT-4 consistently performed better than both GPT-3.5 and humans on level-two, -three, and -four questions. On questions of level-four difficulty, human performance significantly exceeded that of GPT-3.5 (p = 0.008). Conclusion The study's findings demonstrate GPT-4's significant performance improvements over GPT-3.5 and human professionals on StatPearls ophthalmology questions.
Our results highlight the potential of advanced conversational AI systems to be utilized as important tools in the education and practice of medicine.
|
Moshirfar M; Altaf AW; Stoakes IM; Tuttle JJ; Hoopes PC
| 21
|
|||
37103928
|
Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment.
| 2023
|
JAMA ophthalmology
|
IMPORTANCE: ChatGPT is an artificial intelligence (AI) chatbot that has significant societal implications. Training curricula using AI are being developed in medicine, and the performance of chatbots in ophthalmology has not been characterized. OBJECTIVE: To assess the performance of ChatGPT in answering practice questions for board certification in ophthalmology. DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study used a consecutive sample of text-based multiple-choice questions provided by the OphthoQuestions practice question bank for board certification examination preparation. Of 166 available multiple-choice questions, 125 (75%) were text-based. EXPOSURES: ChatGPT answered questions from January 9 to 16, 2023, and on February 17, 2023. MAIN OUTCOMES AND MEASURES: Our primary outcome was the number of board certification examination practice questions that ChatGPT answered correctly. Our secondary outcomes were the proportion of questions for which ChatGPT provided additional explanations, the mean length of questions and responses provided by ChatGPT, the performance of ChatGPT in answering questions without multiple-choice options, and changes in performance over time. RESULTS: In January 2023, ChatGPT correctly answered 58 of 125 questions (46%). ChatGPT's performance was the best in the category general medicine (11/14; 79%) and poorest in retina and vitreous (0%). The proportion of questions for which ChatGPT provided additional explanations was similar between questions answered correctly and incorrectly (difference, 5.82%; 95% CI, -11.0% to 22.0%; χ2(1) = 0.45; P = .51). The mean length of questions was similar between questions answered correctly and incorrectly (difference, 21.4 characters; SE, 36.8; 95% CI, -51.4 to 94.3; t = 0.58; df = 123; P = .22). The mean length of responses was similar between questions answered correctly and incorrectly (difference, -80.0 characters; SE, 65.4; 95% CI, -209.5 to 49.5; t = -1.22; df = 123; P = .22).
ChatGPT selected the same multiple-choice response as the most common answer provided by ophthalmology trainees on OphthoQuestions 44% of the time. In February 2023, ChatGPT provided a correct response to 73 of 125 multiple-choice questions (58%) and 42 of 78 stand-alone questions (54%) without multiple-choice options. CONCLUSIONS AND RELEVANCE: ChatGPT answered approximately half of questions correctly in the OphthoQuestions free trial for ophthalmic board certification preparation. Medical professionals and trainees should appreciate the advances of AI in medicine while acknowledging that ChatGPT as used in this investigation did not answer sufficient multiple-choice questions correctly for it to provide substantial assistance in preparing for board certification at this time.
|
Mihalache A; Popovic MM; Muni RH
| 21
|
|||
38954607
|
Comparative Analysis of Performance of Large Language Models in Urogynecology.
| 2024
|
Urogynecology (Philadelphia, Pa.)
|
IMPORTANCE: Despite growing popularity in medicine, data on large language models in urogynecology are lacking. OBJECTIVE: The aim of this study was to compare the performance of ChatGPT-3.5, GPT-4, and Bard on the American Urogynecologic Society self-assessment examination. STUDY DESIGN: The examination features 185 questions with a passing score of 80. We tested 3 models (ChatGPT-3.5, GPT-4, and Bard) on every question. Dedicated accounts enabled controlled comparisons. Questions with prompts were inputted into each model's interface, and responses were evaluated for correctness, logical reasoning behind answer choice, and sourcing. Data on subcategory, question type, correctness rate, question difficulty, and reference quality were noted. The Fisher exact or chi-square test was used for statistical analysis. RESULTS: Out of 185 questions, GPT-4 answered 61.6% of questions correctly compared with 54.6% for GPT-3.5 and 42.7% for Bard. GPT-4 answered all questions, whereas GPT-3.5 and Bard declined to answer 4 and 25 questions, respectively. All models demonstrated logical reasoning in their correct responses. Performance of all large language models was inversely proportional to the difficulty level of the questions. Bard referenced sources 97.5% of the time, more often than GPT-4 (83.3%) and GPT-3.5 (39%). GPT-3.5 cited books and websites, whereas GPT-4 and Bard additionally cited journal articles and society guidelines. Median journal impact factor and number of citations were 3.6 with 20 citations for GPT-4 and 2.6 with 25 citations for Bard. CONCLUSIONS: Although GPT-4 outperformed GPT-3.5 and Bard, none of the models achieved a passing score. Clinicians should use language models cautiously in patient care scenarios until more evidence emerges.
|
Yadav GS; Pandit K; Connell PT; Erfani H; Nager CW
| 0-1
|
|||
39148822
|
Large Language Model Influence on Management Reasoning: A Randomized Controlled Trial.
| 2024
|
medRxiv : the preprint server for health sciences
|
IMPORTANCE: Large language model (LLM) artificial intelligence (AI) systems have shown promise in diagnostic reasoning, but their utility in management reasoning with no clear right answers is unknown. OBJECTIVE: To determine whether LLM assistance improves physician performance on open-ended management reasoning tasks compared to conventional resources. DESIGN: Prospective, randomized controlled trial conducted from 30 November 2023 to 21 April 2024. SETTING: Multi-institutional study from Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia involving physicians from across the United States. PARTICIPANTS: 92 practicing attending physicians and residents with training in internal medicine, family medicine, or emergency medicine. INTERVENTION: Five expert-developed clinical case vignettes were presented with multiple open-ended management questions and scoring rubrics created through a Delphi process. Physicians were randomized to use either GPT-4 via ChatGPT Plus in addition to conventional resources (e.g., UpToDate, Google), or conventional resources alone. MAIN OUTCOMES AND MEASURES: The primary outcome was difference in total score between groups on expert-developed scoring rubrics. Secondary outcomes included domain-specific scores and time spent per case. RESULTS: Physicians using the LLM scored higher compared to those using conventional resources (mean difference 6.5 %, 95% CI 2.7-10.2, p<0.001). Significant improvements were seen in management decisions (6.1%, 95% CI 2.5-9.7, p=0.001), diagnostic decisions (12.1%, 95% CI 3.1-21.0, p=0.009), and case-specific (6.2%, 95% CI 2.4-9.9, p=0.002) domains. GPT-4 users spent more time per case (mean difference 119.3 seconds, 95% CI 17.4-221.2, p=0.02). There was no significant difference between GPT-4-augmented physicians and GPT-4 alone (-0.9%, 95% CI -9.0 to 7.2, p=0.8). 
CONCLUSIONS AND RELEVANCE: LLM assistance improved physician management reasoning compared to conventional resources, with particular gains in contextual and patient-specific decision-making. These findings indicate that LLMs can augment management decision-making in complex cases. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT06208423; https://classic.clinicaltrials.gov/ct2/show/NCT06208423.
|
Goh E; Gallo R; Strong E; Weng Y; Kerman H; Freed J; Cool JA; Kanjee Z; Lane KP; Parsons AS; Ahuja N; Horvitz E; Yang D; Milstein A; Olson APJ; Hom J; Chen JH; Rodman A
| 10
|
|||
38475802
|
Assessing the research landscape and clinical utility of large language models: a scoping review.
| 2024
|
BMC medical informatics and decision making
|
IMPORTANCE: Large language models (LLMs) like OpenAI's ChatGPT are powerful generative systems that rapidly synthesize natural language responses. Research on LLMs has revealed their potential and pitfalls, especially in clinical settings. However, the evolving landscape of LLM research in medicine has left several gaps regarding their evaluation, application, and evidence base. OBJECTIVE: This scoping review aims to (1) summarize current research evidence on the accuracy and efficacy of LLMs in medical applications, (2) discuss the ethical, legal, logistical, and socioeconomic implications of LLM use in clinical settings, (3) explore barriers and facilitators to LLM implementation in healthcare, (4) propose a standardized evaluation framework for assessing LLMs' clinical utility, and (5) identify evidence gaps and propose future research directions for LLMs in clinical applications. EVIDENCE REVIEW: We screened 4,036 records from MEDLINE, EMBASE, CINAHL, medRxiv, bioRxiv, and arXiv from January 2023 (inception of the search) to June 26, 2023 for English-language papers and analyzed findings from 55 worldwide studies. Quality of evidence was reported based on the Oxford Centre for Evidence-based Medicine recommendations. FINDINGS: Our results demonstrate that LLMs show promise in compiling patient notes, assisting patients in navigating the healthcare system, and to some extent, supporting clinical decision-making when combined with human oversight. However, their utilization is limited by biases in training data that may harm patients, the generation of inaccurate but convincing information, and ethical, legal, socioeconomic, and privacy concerns. We also identified a lack of standardized methods for evaluating LLMs' effectiveness and feasibility. CONCLUSIONS AND RELEVANCE: This review thus highlights potential future directions and questions to address these limitations and to further explore LLMs' potential in enhancing healthcare delivery.
|
Park YJ; Pillai A; Deng J; Guo E; Gupta M; Paget M; Naugler C
| 10
|
|||
38484238
|
Evaluation of ChatGPT for Pelvic Floor Surgery Counseling.
| 2024
|
Urogynecology (Philadelphia, Pa.)
|
IMPORTANCE: Large language models are artificial intelligence applications that can comprehend and produce human-like text and language. ChatGPT is one such model. Recent advances have increased interest in the utility of large language models in medicine. Urogynecology counseling is complex and time-consuming. Therefore, we evaluated ChatGPT as a potential adjunct for patient counseling. OBJECTIVE: Our primary objective was to compare the accuracy and completeness of ChatGPT responses to information in standard patient counseling leaflets regarding common urogynecological procedures. STUDY DESIGN: Seven urogynecologists compared the accuracy and completeness of ChatGPT responses to standard patient leaflets using 5-point Likert scales with a score of 3 being "equally accurate" and "equally complete," and a score of 5 being "much more accurate" and "much more complete," respectively. This was repeated 3 months later to evaluate the consistency of ChatGPT. Additional analysis of the understandability and actionability was completed by 2 authors using the Patient Education Materials Assessment Tool. Analysis was primarily descriptive. First and second ChatGPT queries were compared with the Wilcoxon signed rank test. RESULTS: The median (interquartile range) accuracy was 3 (2-3) and completeness 3 (2-4) for the first ChatGPT query and 3 (3-3) and 4 (3-4), respectively, for the second query. Accuracy and completeness were significantly higher in the second query (P < 0.01). Understandability and actionability of ChatGPT responses were lower than the standard leaflets. CONCLUSIONS: ChatGPT is similarly accurate and complete when compared with standard patient information leaflets for common urogynecological procedures. Large language models may be a helpful adjunct to direct patient-provider counseling. Further research to determine the efficacy and patient satisfaction of ChatGPT for patient counseling is needed.
|
Johnson CM; Bradley CS; Kenne KA; Rabice S; Takacs E; Vollstedt A; Kowalski JT
| 0-1
|
|||
38577160
|
Qualitative evaluation of artificial intelligence-generated weight management diet plans.
| 2024
|
Frontiers in nutrition
|
IMPORTANCE: The transformative potential of artificial intelligence (AI), particularly via large language models, is increasingly being manifested in healthcare. Dietary interventions are foundational to weight management efforts, but whether AI techniques are presently capable of generating clinically applicable diet plans has not been evaluated. OBJECTIVE: Our study sought to evaluate the potential of personalized AI-generated weight-loss diet plans for clinical applications by employing a survey-based assessment conducted by experts in the fields of obesity medicine and clinical nutrition. DESIGN SETTING AND PARTICIPANTS: We utilized ChatGPT (4.0) to create weight-loss diet plans and selected two control diet plans from tertiary medical centers for comparison. Dietitians, physicians, and nurse practitioners specializing in obesity medicine or nutrition were invited to provide feedback on the AI-generated plans. Each plan was assessed blindly based on its effectiveness, balanced-ness, comprehensiveness, flexibility, and applicability. Personalized plans for hypothetical patients with specific health conditions were also evaluated. MAIN OUTCOMES AND MEASURES: The primary outcomes measured included the indistinguishability of the AI diet plan from human-created plans, and the potential of personalized AI-generated diet plans for real-world clinical applications. RESULTS: Of 95 participants, 67 completed the survey and were included in the final analysis. No significant differences were found among the three weight-loss diet plans in any evaluation category. Among the 14 experts who believed that they could identify the AI plan, only five did so correctly. In an evaluation involving 57 experts, the AI-generated personalized weight-loss diet plan was assessed, with scores above neutral for all evaluation variables. 
Several limitations of the AI-generated plans were highlighted, including conflicting dietary considerations, lack of affordability, and insufficient specificity in recommendations, such as exact portion sizes. These limitations suggest that refining inputs could enhance the quality and applicability of AI-generated diet plans. CONCLUSION: Despite certain limitations, our study highlights the potential of AI-generated diet plans for clinical applications. AI-generated dietary plans were frequently indistinguishable from diet plans widely used at major tertiary medical centers. Although further refinement and prospective studies are needed, these findings illustrate the potential of AI in advancing personalized weight-centric care.
|
Kim DW; Park JS; Sharma K; Velazquez A; Li L; Ostrominski JW; Tran T; Seitter Perez RH; Shin JH
| 0-1
|
|||
38890161
|
ChatGPT and Clinical Questions on the Practical Guideline of Blepharoptosis: Reply.
| 2025
|
Aesthetic plastic surgery
|
In a recent Letter to the Editor authored by Daungsupawong et al. in Aesthetic Plastic Surgery, titled "ChatGPT and Clinical Questions on the Practical Guideline of Blepharoptosis: Correspondence," the authors emphasized important points regarding the input language differences between input and output references. However, advanced versions, such as GPT-4, have shown marginal differences between English and Chinese inputs, possibly because of the use of larger training data. To address this issue, non-English-language-oriented large language models (LLMs) have been developed. The ability of LLMs to refer to existing references varies, with newer models, such as GPT-4, showing higher reference rates than GPT-3.5. Future research should focus on addressing the current limitations and enhancing the effectiveness of emerging LLMs in providing accurate and informative answers to medical questions across multiple languages. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
|
Shiraishi M; Tomioka Y; Okazaki M
| 10
|
|||
38187958
|
Blockchain Technology Predictions 2024: Transformations in Healthcare, Patient Identity, and Public Health.
| 2023
|
Blockchain in healthcare today
|
In an era characterized by the convergence of cutting-edge technologies, the world of healthcare and public health is on the brink of a profound transformation that will shape the future of medicine and wellness. This transformation is not merely an incremental step forward but a paradigm shift driven by the synergistic integration of digital twins, blockchain technology, artificial intelligence, and multi-omics platforms collectively propelling us into uncharted territory. Integrating these innovations holds the potential to rewrite the rules of engagement in clinical trials, revamp the strategies for preventing public health crises, and redefine how we manage, share, and secure healthcare data. As we embark on this journey of exploration and innovation, we find ourselves at a pivotal juncture, akin to the invention of the microscope in biology or the discovery of antibiotics in medicine. We are at the crossroads of a new era with immense promise and transformative power.
|
De Novi G; Sofia N; Vasiliu-Feltes I; Yan Zang C; Ricotta F
| 10
|
|||
37731897
|
ChatGPT: promise and challenges for deployment in low- and middle-income countries.
| 2023
|
The Lancet regional health. Western Pacific
|
In low- and middle-income countries (LMICs), the fields of medicine and public health grapple with numerous challenges that continue to hinder patients' access to healthcare services. ChatGPT, a publicly accessible chatbot, has emerged as a potential tool in aiding public health efforts in LMICs. This viewpoint details the potential benefits of employing ChatGPT in LMICs to improve medicine and public health encompassing a broad spectrum of domains ranging from health literacy, screening, triaging, remote healthcare support, mental health support, multilingual capabilities, healthcare communication and documentation, medical training and education, and support for healthcare professionals. Additionally, we also share potential concerns and limitations associated with the use of ChatGPT and provide a balanced discussion on the opportunities and challenges of using ChatGPT in LMICs.
|
Wang X; Sanders HM; Liu Y; Seang K; Tran BX; Atanasov AG; Qiu Y; Tang S; Car J; Wang YX; Wong TY; Tham YC; Chung KC
| 43
|
|||
38647700
|
De novo drug design as GPT language modeling: large chemistry models with supervised and reinforcement learning.
| 2,024
|
Journal of computer-aided molecular design
|
In recent years, generative machine learning algorithms have been successful in designing innovative drug-like molecules. SMILES is a sequence-like language used in most effective drug design models. Due to the data's sequential structure, models such as recurrent neural networks and transformers can design pharmacological compounds with optimized efficacy. Large language models have advanced recently, but their implications for drug design have not yet been explored. Although one study successfully pre-trained a large chemistry model (LCM), its application to specific tasks in drug discovery is unknown. In this study, the drug design task is modeled as a causal language modeling problem. Thus, the procedure of reward modeling, supervised fine-tuning, and proximal policy optimization was used to transfer the LCM to drug design, similar to OpenAI's ChatGPT and InstructGPT procedures. By combining the SMILES sequence with chemical descriptors, the novel efficacy evaluation model exceeded the performance reported in previous studies. After proximal policy optimization, the drug design model generated molecules with 99.2% having efficacy pIC(50) > 7 towards the amyloid precursor protein, with 100% of the generated molecules being valid and novel. This demonstrated the applicability of LCMs in drug discovery, with benefits including less data consumption while fine-tuning. The applicability of LCMs to drug discovery opens the door for larger studies involving reinforcement learning with human feedback, where chemists provide feedback to LCMs and generate higher-quality molecules. LCMs' ability to design similar molecules from datasets paves the way for more accessible, non-patented alternatives to drug molecules.
|
Ye G
| 0-1
|
|||
38277743
|
Navigating the path to precision: ChatGPT as a tool in pathology.
| 2,024
|
Pathology, research and practice
|
In recent years, the integration of Artificial Intelligence (AI) into medicine has marked a transformative shift in healthcare practices. This study explores the application of ChatGPT 3.5, an AI-based natural language processing model, in the field of pathology, with a focus on Clinical Pathology, Histopathology, and Hematology. Leveraging a dataset of 30 clinical cases from an online source, the model's performance was evaluated, revealing moderate proficiency in data analysis and decision support. While ChatGPT demonstrated strengths in swift narrative comprehension and foundational insights, limitations were observed in generating detailed and comprehensive information. The study emphasizes the evolving nature of AI in pathology, highlighting the need for ongoing refinement and collaborative efforts between AI researchers and healthcare professionals.
|
Vaidyanathaiyer R; Thanigaimani GD; Arumugam P; Einstien D; Ganesan S; Surapaneni KM
| 10
|
|||
40413583
|
[ARTIFICIAL INTELLIGENCE TOOLS AND THEIR USE IN MEDICINE CHATGPT - NOT THE ONLY PLAYER IN THE ARENA].
| 2,025
|
Harefuah
|
In recent years, there has been remarkable growth in the development and use of artificial intelligence tools in medicine based on large language models. This review will describe the main existing tools and their various applications for medical staff and patients. Despite its popularity, we will show that ChatGPT is not the only tool and that other tools are sometimes preferable. We will review research comparisons between different tools' effectiveness in various tasks. It will be shown that these tools fall short in specific areas of performance, such as accuracy and reliability in providing information, understanding clinical context, and making diagnoses. The number of studies on these topics is small, and sometimes their presented results contradict each other. Additional quality research is needed to characterize and improve these tools and designate specific tools for different medical uses. Despite the many advantages and enormous potential inherent in these models, they should be used cautiously, as they only aid the treating physician and do not replace his knowledge, professional experience, and human judgment.
|
Weizman Z; Degany O; Shoenfeld Y
| 10
|
|||
38903157
|
Exploring ChatGPT's potential in the clinical stream of neurorehabilitation.
| 2,024
|
Frontiers in artificial intelligence
|
In several medical fields, generative AI tools such as ChatGPT have achieved optimal performance in identifying correct diagnoses only by evaluating narrative clinical descriptions of cases. The most active fields of application include oncology and COVID-19-related symptoms, with preliminary relevant results also in psychiatric and neurological domains. This scoping review aims to introduce the arrival of ChatGPT applications in neurorehabilitation practice, where such AI-driven solutions have the potential to revolutionize patient care and assistance. First, a comprehensive overview of ChatGPT, including its design, and potential applications in medicine is provided. Second, the remarkable natural language processing skills and limitations of these models are examined with a focus on their use in neurorehabilitation. In this context, we present two case scenarios to evaluate ChatGPT's ability to resolve higher-order clinical reasoning. Overall, we provide support to the first evidence that generative AI can meaningfully integrate as a facilitator into neurorehabilitation practice, aiding physicians in defining increasingly efficacious diagnostic and personalized prognostic plans.
|
Maggio MG; Tartarisco G; Cardile D; Bonanno M; Bruschetta R; Pignolo L; Pioggia G; Calabro RS; Cerasa A
| 10
|
|||
38060759
|
Be or Not to Be With ChatGPT?
| 2,023
|
Cureus
|
In the ever-evolving realm of scientific research, this letter underscores the vital role of ChatGPT as an invaluable ally in manuscript creation, focusing on its remarkable grammar and spelling error correction capabilities. Furthermore, it highlights ChatGPT's efficacy in expediting the manuscript preparation process by streamlining the collection and highlighting critical scientific information. By elucidating the aim of this letter and the multifaceted benefits of ChatGPT, we aspire to illuminate the path toward a future where scientific writing achieves unparalleled efficiency and precision.
|
Aliyeva A; Sari E
| 10
|
|||
40183181
|
Current applications and future perspectives of artificial intelligence in functional urology and neurourology: how far can we get?
| 2,025
|
Minerva urology and nephrology
|
In the last few years, the scientific community has seen an increasing interest towards the potential applications of artificial intelligence in medicine and healthcare. In this context, urology represents an area of rapid development, particularly in uro-oncology, where a wide range of applications has focused on prostate cancer diagnosis. Other urological branches are also starting to explore the potential advantages of AI in the diagnostic and therapeutic process, and functional urology and neurourology are among them. Although the experiences in this sense have been quite limited so far, some AI applications have already started to show potential benefits, especially for urodynamic and imaging interpretation, as well as for the development of AI-based predictive models for treatment response. A few experiences on the use of ChatGPT to answer questions on functional urology and neurourology topics have also been reported. Conversely, AI applications in functional urology surgery remain largely unexplored. This paper provides a critical overview of the current evidence on this topic, highlighting the potential benefits for the diagnostic workflow, therapeutic evaluation and surgical training, as well as the current limitations that need to be addressed to enable the integration of these tools into clinical practice in the future.
|
Gallo ML; Moriconi M; Phe V
| 0-1
|
|||
37440696
|
What Should ChatGPT Mean for Bioethics?
| 2,023
|
The American journal of bioethics : AJOB
|
In the last several months, several major disciplines have started their initial reckoning with what ChatGPT and other Large Language Models (LLMs) mean for them - law, medicine, and business, among other professions. With a heavy dose of humility, given how fast the technology is moving and how uncertain its social implications are, this article attempts to give some early tentative thoughts on what ChatGPT might mean for bioethics. I will first argue that many bioethics issues raised by ChatGPT are similar to those raised by current medical AI - built into devices, decision support tools, data analytics, etc. These include issues of data ownership, consent for data use, data representativeness and bias, and privacy. I describe how these familiar issues appear somewhat differently in the ChatGPT context, but much of the existing bioethical thinking on these issues provides a strong starting point. There are, however, a few "new-ish" issues I highlight - by new-ish I mean issues that, while perhaps not truly new, seem much more important for it than for other forms of medical AI. These include issues about informed consent and the right to know we are dealing with an AI, the problem of medical deepfakes, the risk of oligopoly and inequitable access related to foundational models, environmental effects, and on the positive side opportunities for the democratization of knowledge and empowering patients. I also discuss how races towards dominance (between large companies and between the U.S. and geopolitical rivals like China) risk sidelining ethics.
|
Cohen IG
| 10
|
|||
39198112
|
Large language model application in emergency medicine and critical care.
| 2,024
|
Journal of the Formosan Medical Association = Taiwan yi zhi
|
In the rapidly evolving healthcare landscape, artificial intelligence (AI), particularly the large language models (LLMs), like OpenAI's Chat Generative Pretrained Transformer (ChatGPT), has shown transformative potential in emergency medicine and critical care. This review article highlights the advancement and applications of ChatGPT, from diagnostic assistance to clinical documentation and patient communication, demonstrating its ability to perform comparably to human professionals in medical examinations. ChatGPT could assist clinical decision-making and medication selection in critical care, showcasing its potential to optimize patient care management. However, integrating LLMs into healthcare raises legal, ethical, and privacy concerns, including data protection and the necessity for informed consent. Finally, we addressed the challenges related to the accuracy of LLMs, such as the risk of providing incorrect medical advice. These concerns underscore the importance of ongoing research and regulation to ensure their ethical and practical use in healthcare.
|
Hwai H; Ho YJ; Wang CH; Huang CH
| 10
|
|||
38887420
|
Assessing Risk of Bias Using ChatGPT-4 and Cochrane ROB2 Tool.
| 2,024
|
Medical science educator
|
In the world of evidence-based medicine, systematic reviews have long been the gold standard. But they have had a problem-they take forever. That is where ChatGPT-4 and automation come in. They are like a breath of fresh air, speeding things up and making the process more reliable. ChatGPT-4 is like having a super-smart assistant who can quickly assess bias risk in research studies. It is a game-changer, especially in a field where getting the latest research quickly can mean life or death for patients. Sure, it is not perfect, and we still need humans to keep an eye on things and ensure everything's ethical. But the future looks bright. With ChatGPT-4 and automation, evidence-based medicine is on the fast track to success.
|
Trevino-Juarez AS
| 0-1
|
|||
38313631
|
Use of artificial intelligence in the field of pain medicine.
| 2,024
|
World journal of clinical cases
|
In this editorial we comment on the article "Potential and limitations of ChatGPT and generative artificial intelligence in medical safety education" published in the recent issue of the World Journal of Clinical Cases. This article described the usefulness of artificial intelligence (AI) in medical safety education. Herein, we focus specifically on the use of AI in the field of pain medicine. AI technology has emerged as a powerful tool, and is expected to play an important role in the healthcare sector and significantly contribute to pain medicine as further developments are made. AI may have several applications in pain medicine. First, AI can assist in selecting testing methods to identify causes of pain and improve diagnostic accuracy. Entry of a patient's symptoms into the algorithm can prompt it to suggest necessary tests and possible diagnoses. Based on the latest medical information and recent research results, AI can support doctors in making accurate diagnoses and setting up an effective treatment plan. Second, AI assists in interpreting medical images. For neural and musculoskeletal disorders, imaging tests are of vital importance. AI can analyze a variety of imaging data, including that from radiography, computed tomography, and magnetic resonance imaging, to identify specific patterns, allowing quick and accurate image interpretation. Third, AI can predict the outcomes of pain treatments, contributing to setting up the optimal treatment plan. By predicting individual patient responses to treatment, AI algorithms can assist doctors in establishing a treatment plan tailored to each patient, further enhancing treatment effectiveness. For efficient utilization of AI in the pain medicine field, it is crucial to enhance the accuracy of AI decision-making by using more medical data, while issues related to the protection of patient personal information and responsibility for AI decisions will have to be addressed.
In the future, AI technology is expected to be innovatively applied in the field of pain medicine. The advancement of AI is anticipated to have a positive impact on the entire medical field by providing patients with accurate and effective medical services.
|
Chang MC
| 0-1
|
|||
37041067
|
ChatGPT: when artificial intelligence replaces the rheumatologist in medical writing.
| 2,023
|
Annals of the rheumatic diseases
|
In this editorial we discuss the place of artificial intelligence (AI) in the writing of scientific articles and especially editorials. We asked chatGPT << to write an editorial for Annals of Rheumatic Diseases about how AI may replace the rheumatologist in editorial writing >>. chatGPT's response is diplomatic and describes AI as a tool to help the rheumatologist but not replace him. AI is already used in medicine, especially in image analysis, but the domains are infinite and it is possible that AI could quickly help or replace rheumatologists in the writing of scientific articles. We discuss the ethical aspects and the future role of rheumatologists.
|
Verhoeven F; Wendling D; Prati C
| 10
|
|||
38155290
|
Dr. GAI: Significance of Generative AI in Plastic Surgery.
| 2,025
|
Aesthetic plastic surgery
|
In this letter to the editor, I offer a critique of the article titled "Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness." While acknowledging the authors' pioneering effort to compare informational outputs from Google and a generative AI (GAI)-ChatGPT, I raise concerns about the methodology, lack of rigorous validation, potential biases, and the overstatement of findings. The letter suggests that the authors' conclusions about the superiority of ChatGPT in providing high-quality medical information may be premature, given the limitations of the study design and the evolving nature of artificial intelligence (AI) technology.No Level Assigned This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
|
Ray PP
| 32
|
|||
38849691
|
The Future Role of Radiologists in the Artificial Intelligence-Driven Hospital.
| 2,024
|
Annals of biomedical engineering
|
Increasing population and healthcare costs make changes in the healthcare system necessary. This article deals with ChatGPT's perspective on the future role of radiologists in the AI-driven hospital. This perspective will be augmented by further considerations by the author. AI-based imaging technologies and chatbots like ChatGPT can help improve radiologists' performance and workflow in the future AI-driven hospital. Although basic radiological examinations could be delivered without needing a radiologist, sophisticated imaging procedures will still need the expert opinion of a radiologist.
|
Sedaghat S
| 43
|
|||
38314912
|
ChIP-GPT: a managed large language model for robust data extraction from biomedical database records.
| 2,024
|
Briefings in bioinformatics
|
Increasing volumes of biomedical data are amassing in databases. Large-scale analyses of these data have wide-ranging applications in biology and medicine. Such analyses require tools to characterize and process entries at scale. However, existing tools, mainly centered on extracting predefined fields, often fail to comprehensively process database entries or correct evident errors-a task humans can easily perform. These tools also lack the ability to reason like domain experts, hindering their robustness and analytical depth. Recent advances with large language models (LLMs) provide a fundamentally new way to query databases. But while a tool such as ChatGPT is adept at answering questions about manually input records, challenges arise when scaling up this process. First, interactions with the LLM need to be automated. Second, limitations on input length may require a record pruning or summarization pre-processing step. Third, to behave reliably as desired, the LLM needs either well-designed, short, 'few-shot' examples, or fine-tuning based on a larger set of well-curated examples. Here, we report ChIP-GPT, based on fine-tuning of the generative pre-trained transformer (GPT) model Llama and on a program prompting the model iteratively and handling its generation of answer text. This model is designed to extract metadata from the Sequence Read Archive, emphasizing the identification of chromatin immunoprecipitation (ChIP) targets and cell lines. When trained with 100 examples, ChIP-GPT demonstrates 90-94% accuracy. Notably, it can seamlessly extract data from records with typos or absent field labels. Our proposed method is easily adaptable to customized questions and different databases.
|
Cinquin O
| 10
|
|||
38082605
|
ChatGPT for phenotypes extraction: one model to rule them all?
| 2,023
|
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
|
Information Extraction (IE) is a core task in Natural Language Processing (NLP) where the objective is to identify factual knowledge in textual documents (often unstructured), and feed downstream use cases with the resulting output. In genomic medicine for instance, being able to extract the most precise list of phenotypes associated to a patient allows to improve genetic disease diagnostic, which represents a vital step in the modern deep phenotyping approach. As most of the phenotypic information lies in clinical reports, the challenge is to build an IE pipeline to automatically recognize phenotype concepts from free-text notes. A new machine learning paradigm around large language models (LLM) has given rise to an increasing number of academic works on this topic lately, where sophisticated combinations of different techniques have been employed to improve the phenotypes extraction accuracy. Even more recently released, the ChatGPT(1) application nevertheless raises the question of the relevance of these approaches compared to this new generic one based on an instruction-oriented LLM. In this paper, we propose a rigorous evaluation of ChatGPT and the current state-of-the-art solutions on this specific task, and discuss the possible impacts and the technical evolutions to consider in the medical domain. Clinical relevance - Deep phenotyping on electronic health records has proven its ability to improve genetic diagnosis by clinical exomes [10]. Thus, comparing state-of-the-art solutions in order to derive insights and improving research paths is essential.
|
Labbe T; Castel P; Sanner JM; Saleh M
| 10
|
|||
39561352
|
Evaluating the concordance of ChatGPT and physician recommendations for bariatric surgery.
| 2,025
|
Canadian journal of physiology and pharmacology
|
Integrating artificial intelligence (AI) into healthcare prompts the need to measure its proficiency relative to human experts. This study evaluates the proficiency of ChatGPT, an OpenAI language model, in offering guidance concerning bariatric surgery compared to bariatric surgeons. Five clinical scenarios representative of diverse bariatric surgery situations were given to American Society for Metabolic and Bariatric Surgery (ASMBS)-accredited bariatric surgeons and ChatGPT. Both groups proposed medical or surgical management for the patients depicted in each scenario. The outcomes from both the surgeons and ChatGPT were examined and matched with the clinical benchmarks set by the ASMBS. There was a high degree of agreement between ChatGPT and physicians on the three simpler clinical scenarios. There was a positive correlation between physicians' and ChatGPT answers for not recommending surgery. ChatGPT's advice aligned with ASMBS guidelines 60% of the time, in contrast to bariatric surgeons, who consistently aligned with the guidelines 100% of the time. ChatGPT showcases potential in offering guidance on bariatric surgery, but it does not have the comprehensive and personalized perspective that doctors exhibit consistently. Enhancing AI's training on intricate patient situations will bolster its role in the medical field.
|
Kahlon S; Sleet M; Sujka J; Docimo S; DuCoin C; Dimou F; Mhaskar R
| 32
|
|||
38657295
|
Comparison of ChatGPT version 3.5 & 4 for utility in respiratory medicine education using clinical case scenarios.
| 2,024
|
Respiratory medicine and research
|
Integration of ChatGPT in Respiratory medicine presents a promising avenue for enhancing clinical practice and pedagogical approaches. This study compares the performance of ChatGPT version 3.5 and 4 in respiratory medicine, emphasizing its potential in clinical decision support and medical education using clinical cases. Results indicate moderate performance highlighting limitations in handling complex case scenarios. Compared to ChatGPT 3.5, version 4 showed greater promise as a pedagogical tool, providing interactive learning experiences. While serving as a preliminary decision support tool clinically, caution is advised, stressing the need for ongoing validation. Future research should refine its clinical capabilities for optimal integration into medical education and practice.
|
Balasanjeevi G; Surapaneni KM
| 21
|
|||
37336616
|
How large language models can augment perioperative medicine: a daring discourse.
| 2,023
|
Regional anesthesia and pain medicine
|
Interest in natural language processing, specifically large language models, for clinical applications has exploded in a matter of several months since the introduction of ChatGPT. Large language models are powerful and impressive. It is important that we understand the strengths and limitations of this rapidly evolving technology so that we can brainstorm its future potential in perioperative medicine. In this daring discourse, we discuss the issues with these large language models and how we should proactively think about how to leverage these models into practice to improve patient care, rather than worry that it may take over clinical decision-making. We review three potential major areas in which it may be used to benefit perioperative medicine: (1) clinical decision support and surveillance tools, (2) improved aggregation and analysis of research data related to large retrospective studies and application in predictive modeling, and (3) optimized documentation for quality measurement, monitoring and billing compliance. These large language models are here to stay and, as perioperative providers, we can either adapt to this technology or be curtailed by those who learn to use it well.
|
Gabriel RA; Mariano ER; McAuley J; Wu CL
| 10
|
|||
38599742
|
[ChatGPT is an above-average student at the Faculty of Medicine of the University of Zaragoza and an excellent collaborator in the development of teaching materials].
| 2,024
|
Revista espanola de patologia : publicacion oficial de la Sociedad Espanola de Anatomia Patologica y de la Sociedad Espanola de Citologia
|
INTRODUCTION AND OBJECTIVE: Artificial intelligence is fully present in our lives. In education, the possibilities of its use are endless, both for students and teachers. MATERIAL AND METHODS: The capacity of ChatGPT to solve multiple-choice questions has been explored, based on the exam of the subject <<Anatomopathological Diagnostic and Therapeutic Procedures>> of the first call of the 2022-23 academic year. In addition to comparing its results with those of the rest of the students presented, the probable causes of incorrect answers have been evaluated. Finally, its ability to formulate new test questions based on specific instructions has been evaluated. RESULTS: ChatGPT correctly answered 47 out of 68 questions, achieving a grade higher than the course average and median. Most failed questions present negative statements, using the words <<no>>, <<false>> or <<incorrect>> in their statement. After interacting with it, the program can realize its mistake and change its initial response to the correct answer. Finally, ChatGPT can develop new questions based on a theoretical assumption or a specific clinical simulation. CONCLUSIONS: As teachers we are obliged to explore the uses of artificial intelligence and try to use it to our benefit. Carrying out tasks that involve significant time consumption, such as preparing multiple-choice questions for content evaluation, is a good example.
|
Cabanuz C; Garcia-Garcia M
| 21
|
|||
40322390
|
Performance of Large Language Models (ChatGPT and Gemini Advanced) in Gastrointestinal Pathology and Clinical Review of Applications in Gastroenterology.
| 2,025
|
Cureus
|
Introduction Artificial intelligence (AI) chatbots have been widely tested in their performance on various examinations, with limited data on their performance in clinical scenarios. The role of Chat Generative Pre-Trained Transformer (ChatGPT) (OpenAI, San Francisco, California, United States) and Gemini Advanced (Google LLC, Mountain View, California, United States) in multiple aspects of gastroenterology including answering patient questions, providing medical advice, and as tools to potentially assist healthcare providers has shown some promise, though associated with many limitations. We aimed to study the performance of ChatGPT-4.0, ChatGPT-3.5, and Gemini Advanced across 20 clinicopathologic scenarios in the unexplored realm of gastrointestinal pathology. Materials and methods Twenty clinicopathological scenarios in gastrointestinal pathology were provided to these three large language models. Two fellowship-trained pathologists independently assessed their responses, evaluating both the diagnostic accuracy and confidence of the models. The results were then compared using the chi-squared test. The study also evaluated each model's ability in four key areas, namely, (1) ability to provide differential diagnoses, (2) interpretation of immunohistochemical stains, (3) ability to deliver a concise final diagnosis, (4) and explanation provided for the thought process, using a five-point scoring system. The mean, median score+/-standard deviation (SD), and interquartile ranges were calculated. A comparative analysis of these four parameters across ChatGPT-4.0, ChatGPT-3.5, and Gemini Advanced was conducted using the Mann-Whitney U test. A p-value of <0.05 was considered statistically significant. Other parameters evaluated were the ability to provide a tumor, node, and metastasis (TNM) stage and the incidence of pseudo-references "hallucinations" while citing reference material. 
Results Gemini Advanced (diagnostic accuracy: p=0.01; providing differential diagnosis: p=0.03) and ChatGPT-4.0 (interpretation of immunohistochemistry (IHC) stains: p=0.001; providing differential diagnosis: p=0.002) performed significantly better in certain realms than ChatGPT-3.5, indicating continuously improving data training sets. However, the mean performances of ChatGPT-4.0 and Gemini Advanced ranged between 3.0 and 3.7 and were at best classified as average. None of the models could provide the accurate TNM staging for these clinical scenarios, with 25-50% citing references that do not exist (hallucinations). Conclusion This study indicated that though these models are evolving, they need human supervision and definite improvements before being used in clinical medicine. This is the first study of its kind in gastrointestinal pathology to the best of our knowledge.
|
Jain S; Chakraborty B; Agarwal A; Sharma R
| 0-1
|
|||
39564056
|
A Pilot Study of Medical Student Opinions on Large Language Models.
| 2,024
|
Cureus
|
Introduction Artificial intelligence (AI) has long garnered significant interest in the medical field. Large language models (LLMs) have popularized the use of AI for the public through chatbots such as ChatGPT and have become an easily accessible and recognizable medical resource for medical students. Here, we investigate how medical students are currently utilizing LLM-based tools throughout medical education and examine medical student perception of these tools. Methods A cross-sectional survey was administered to current medical students at the University of Florida College of Medicine (UFCOM) in January 2024 discussing the utilization of AI and LLM tools and perspectives on the current and future role of AI in medicine. Results All 102 respondents reported having heard of LLM-based chatbots such as ChatGPT, Bard, Bing Chat, and Claude. Sixty-nine percent (69%; 70/102) of respondents reported having used them for medical-related purposes at least once a month. Seventy-seven point one percent (77.1%; 54/70) reported the information provided by them to be very accurate or somewhat accurate, and 80% (55/70) reported that they were likely to continue using them in their future medical practice. Those with some baseline understanding of and exposure to AI were 3.26 (p=0.020) and 4.30 (p=0.002) times more likely to have used an LLM-based chatbot, respectively, and 5.06 (p=0.021) and 3.38 (p=0.039) times more likely to cross-check information obtained from them, respectively, compared to those with little to no baseline understanding or exposure. Furthermore, those with some exposure to AI in medical school were 2.70 (p=0.039) and 4.61 (p=0.0004) times more likely to trust AI with clinical decision-making currently and in the next 5 years, respectively, than those with little to no exposure. Those who had used an LLM-based chatbot were 4.31 (p=0.019) times more likely to trust AI with clinical decision-making currently compared to those who had not used one. 
Conclusion LLM-based chatbots, such as ChatGPT, are not only making their way into the medical student repertoire of study resources but are also being utilized in the setting of patient care and research. Medical students who participated in the survey generally had a positive perception of LLM-based chatbots and reported they were likely to continue using them in the future. Previous AI knowledge and exposure correlated with more conscientious use of these tools such as cross-checking information. Combined with our finding that all respondents believed AI should be taught in the medical curriculum, our study highlights a key opportunity in medical education to acclimate medical students to AI now.
|
Xu AY; Piranio VS; Speakman S; Rosen CD; Lu S; Lamprecht C; Medina RE; Corrielus M; Griffin IT; Chatham CE; Abchee NJ; Stribling D; Huynh PB; Harrell H; Shickel B; Brennan M
| 10
|