| { |
| "File Number": "1063", |
| "Title": "KG-Rank: Enhancing Large Language Models for Medical QA with Knowledge Graphs and Ranking Techniques", |
| "Limitations": "In this research, we propose an LLM framework augmented by UMLS to improve the quality of the content generated. However, there are some limitations, which we will address in the next phase. Firstly, we plan to incorporate physician evaluations to validate the factual accuracy of KGRank’s answers. Secondly, we aim to assess the performance of more medical-specific base models on medical QA tasks. Lastly, while the ranking method may increase computational time, we recognize the need to optimize its efficiency. We will consider graph-based methods (Yang et al., 2023a; Li et al., 2022b) and some efficiency methods (Feng et al., 2023).", |
| "abstractText": "Large language models (LLMs) have demonstrated impressive generative capabilities with the potential to innovate in medicine. However, the application of LLMs in real clinical settings remains challenging due to the lack of factual consistency in the generated content. In this work, we develop an augmented LLM framework, KG-Rank, which leverages a medical knowledge graph (KG) along with ranking and re-ranking techniques, to improve the factuality of long-form question answering (QA) in the medical domain. Specifically, when receiving a question, KG-Rank automatically identifies medical entities within the question and retrieves the related triples from the medical KG to gather factual information. Subsequently, KG-Rank innovatively applies multiple ranking techniques to refine the ordering of these triples, providing more relevant and precise information for LLM inference. To the best of our knowledge, KG-Rank is the first application of KG combined with ranking models in medical QA specifically for generating long answers. Evaluation on four selected medical QA datasets demonstrates that KG-Rank achieves an improvement of over 18% in ROUGE-L score. Additionally, we extend KG-Rank to open domains, including law, business, music, and history, where it realizes a 14% improvement in ROUGE-L score, indicating the effectiveness and great potential of KG-Rank.", |
| "1 Introduction": "Large language models (LLMs), such as GPT4 (OpenAI, 2023) and LLaMa2 (Touvron et al., 2023), have demonstrated powerful generative capabilities (Gao et al., 2023; Yang et al., 2024b). Despite their considerable potential in various domains, including medicine (Li et al., 2022a; Yang et al., 2023c; Ke et al., 2024; Yang et al., 2024a), their limited training on medical data raises concerns about the consistency of the generated con-\ntent with established medical facts (Yang et al., 2023b; Bi et al., 2024).\nTo address this challenge without additional computational cost, previous research, such as Almanac (Hiesinger et al., 2023) and ChatENT (Long et al., 2023), leverages external medical knowledge to enhance the accuracy and reliability of LLMgenerated content. However, merely retrieving external knowledge risks introducing irrelevant or unreliable information (Yang et al., 2024a), which can compromise the effectiveness of LLMs, and raise issues of credibility, data consistency, privacy, security, and legality. While previous studies have emphasized the advantages of utilizing external knowledge, they have overlooked a crucial question: How to better integrate external knowledge?\nIn this work, we propose KG-Rank, an augmented framework that integrates a structured medical knowledge graph (KG) with ranking techniques into LLMs to achieve more accurate and reliable long-form medical question-answering (QA). We first retrieve one-hop relations of related medical entities from the medical KG (Unified Medical Language System (UMLS)) (Bodenreider, 2004). To retain relevant information from the KG, we then propose to apply ranking and re-ranking methods to optimize the ordering of triplets.\nSpecifically, we introduce three ranking techniques to improve the integration of LLM with KG by filtering irrelevant data, highlighting key information, and ensuring diversity. 
These techniques also streamline the process by reducing the number of triplets required for LLM inference. Additionally, we apply re-ranking models to reassess and emphasize the most relevant triplets, enhancing the factuality of KG-Rank on the long-form medical QA task.\nTo summarize, our contributions are: (1) We propose KG-Rank, a KG-augmented LLM framework for the medical QA task. To the best of our knowledge, this is the first application of KG combined with ranking techniques to enhance LLMs for medical QA with long answers. (2) We incorporate different ranking and re-ranking techniques to eliminate noise and redundancy in the KG-retrieval stage. (3) We validate the effectiveness of KG-Rank on both medical and various open-domain QA tasks. All the data and code can be found at https://github.com/YangRui525/KG-Rank.", |
| "2 Methodology": "As shown in Fig. 1, we introduce the KG-Rank (Knowledge Graph -Rank) framework for the longform medical QA task.\nAtrial Fibrillation Heart Failure Diabetes Mellitus ... Query : A 56 year old male patient with atrial fibrillation presents to the clinic. Given their history of heart failure, diabetes and PAD, what is their risk of stroke? Should they be placed on anticoagulation?\nStep 1: Entity Extraction and Mapping\nStep 2: Relation Retrieval and Triplet Ranking\nUMLS Database\nOne-hop Relations\nStep 3: Re-Ranking\nStep 4: Obtaining LLM Response\nTop- Triplets Top- Triplets\nCross-Encoder\nSimilarity\nAnswer Expansion\nMMR\nTriplet Ranking", |
| "2.1 External Knowledge Graph": "We define the external KG as G = (V,E), where V represents the set of entities and E represents the set of structural relations. For the medical QA task, we choose UMLS as the primary medical KG. UMLS is a comprehensive repository of health and biomedical vocabularies, designed to promote information standardization and interoperability. The core component of UMLS, the Metathesaurus, contains over 3.8 million concepts and more than 78 million relations, and supports 25 languages, providing extensive medical knowledge coverage\nto enhance LLMs. In UMLS, knowledge is represented in the form of triples, which consist of two medical concepts and the relation between them. For example, in the triple (Myopia, clinically_associated_with, HYPERGLYCEMIA), \"Myopia\" and \"HYPERGLYCEMIA\" are medical concepts, while \"clinically_associated_with\" is the relation between them.", |
| "2.2 Entity Extraction and Mapping": "In the first step, we extract key entities and find mappings from the external KG. Specifically, for the given question Q, we apply a Medical NER Prompt PMedNER to identify related medical entities EQ, and then we map each entity ei ∈ EQ to the corresponding entity in the knowledge graph G. The detailed prompt can be found in Appendix A.1.", |
| "2.3 Relation Retrieval and Triplet Ranking": "After identifying the corresponding entities EQ′ , we retrieve their one-hop relations from the KG (denoted as UMLS):\nEQ′ = {e′i ∈ V | ∃ei ∈ EQ, ei 7→ e′i}.\nWithin UMLS, there exists extensive relational information, where one entity may be associated with thousands of one-hop relations. Consequently, to facilitate the extraction of the most relevant, we propose ranking methods. We encode the question Q and each triplet (e′i, r, e ′ j) into q, rij through UmlsBERT (Michalopoulos et al., 2021). Then, we explore three techniques for ranking the triplets:\nSimilarity Ranking We compute the similarity score between the question embedding q and each relation embedding rij . Answer Expansion Ranking We first utilize LLMs to generate a hallucinatory answer A for the question Q , and then we encode the concatenation of [Q,A] to obtain text embedding t. Subsequently, we utilize the expanded question embedding t to search for the most similar triplets in vector space. The detailed prompt for answer expansion can be found in Appendix A.2. MMR Ranking This method is inspired by an information extraction method Maximal Marginal Relevance (MMR) (Carbonell and GoldsteinStewart, 1998). Initially, we identify the triplet with the highest similarity score to the question Q. For the remaining triplets, we dynamically adjust their similarity scores based on the ones that\nhave already been selected. In this way, we could consider both relevancy and redundancy:\nw = wbase + δ · n,\nscoreij = sim(q, rij)− w · sim(rij , rsel). Where, w is an adjustable weight, with a base weight and δ as the incremental weight factor per selected triplet, n is the count of triplets that have been selected. Re-ranking After the ranking stage, we obtain an ordering of the triplets. We then employ a medical cross-encoder model, MedCPT (Jin et al., 2023), to re-rank them, ensuring that the most relevant triples are chosen. 
The re-ranked top-p triplets, combined with the task prompt, are input into LLMs for answer generation. The detailed prompt can be found in Appendix A.3.", |
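The MMR ranking above can be sketched as follows. This is a minimal interpretation under the assumption that sim(r_ij, r_sel) means the maximum similarity to any already-selected triplet; the embeddings here are toy vectors, whereas the framework uses UmlsBERT encodings.

```python
# Sketch of greedy MMR selection: relevance to the question minus a
# redundancy penalty whose weight w = w_base + delta * n grows with the
# number n of triplets already selected.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def mmr_rank(q_emb, triplet_embs, k, w_base=0.1, delta=0.01):
    """Select k triplet indices balancing relevancy and redundancy."""
    selected = []
    candidates = list(range(len(triplet_embs)))
    while candidates and len(selected) < k:
        w = w_base + delta * len(selected)  # dynamic penalty weight
        def score(i):
            relevance = cosine(q_emb, triplet_embs[i])
            # Redundancy as max similarity to the selected set (one
            # reading of r_sel in the formula above).
            redundancy = max((cosine(triplet_embs[i], triplet_embs[j])
                              for j in selected), default=0.0)
            return relevance - w * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With the paper's reported defaults (w_base = 0.1, δ = 0.01), the penalty is mild, so near-duplicate but highly relevant triplets can still both be chosen; raising w_base trades relevance for diversity.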
| "3 Experiments": "We conduct experiments on four selected medical QA datasets, in which the answers are free-text, as shown in Tab. 1. LiveQA (Abacha et al., 2017) consists of health questions submitted by consumers to the National Library of Medicine. It includes a training set with 634 QA pairs and a test set comprising 104 QA pairs, which is used for evaluation. ExpertQA (Malaviya et al., 2023) is a highquality long-form QA dataset with 2177 questions spanning 32 fields, along with answers verified by domain experts. Among them, 504 medical questions (Med) and 96 biology (Bio) questions were used for evaluation. MedicationQA (Abacha et al., 2019) includes 690 drug-related consumer questions along with information retrieved from reliable websites and scientific papers. We evaluate the generated answers using ROUGE (Lin, 2004), BERTScore (Zhang et al., 2019), MoverScore (Zhao et al., 2019) and BLEURT (Sellam et al., 2020).", |
| "3.1 Results": "As shown in Tab. 2, we evaluate GPT-4 and LLaMa2-13b across the following settings: zeroshot (ZS), and three proposed ranking techniques:\nSimilarity Ranking (Sim), Answer Expansion Ranking (AE), and Maximal Marginal Relevance Ranking (MMR). Also with the Re-ranking (RR), which is on top of the Similarity Ranking.", |
| "3.2 Datasets": "The results show that incorporating the knowledge graph and ranking techniques notably enhances performance in almost all benchmarks and evaluation metrics in the zero-shot setting, demonstrating the effectiveness of KG-Rank. Significantly, the RR method excels in the ExpertQA-Bio, ExpertQAMed, and Medication QA datasets, particularly evident in the over 18% increase in the ROUGE-L score for ExpertQA-Bio. While KG-Rank still shows effectiveness on LiveQA, the RR method does not show steady improvement compared to other ranking techniques. This inconsistency may arise since the answers in LiveQA are generated via automatic extraction methods, leading to issues with semantic coherence and disorganized formats. Moreover, the performance of the three ranking methodologies exhibited variability across various datasets, indicating their unique strengths and limitations in differing contexts.\nIn assessing model performance, GPT-4 consistently surpasses LLaMa2-13b in both zero-shot and various ranking settings. Additionally, we evaluate the zero-shot performance of a medical LLM on these datasets in Section 4 (Medical LLM).", |
| "4 Ablation Study and Analysis": "Medical LLM To further investigate the capability of the medical LLM, we compare the zero-shot performance of LLaMa2-7b and baizehealthcare (Xu et al., 2023) without KG-Rank. Baize-healthcare, which is fine-tuned on LLaMa7b using medical data, consistently outperforms LLaMa2-7b across all four datasets, as shown in Fig. 2. More comparison results can be found in Appendix B.1.\nRe-ranking Models We employ GPT-4 with similarity ranking as the final setting and compare two re-ranking models: the MedCPT cross-encoder model, trained on the extensive PubMed articles, and the Cohere (https://cohere.com) re-ranking model, designed for broader domain applications. As shown in Tab. 3, MedCPT steadily outperforms the Cohere re-rank model on all datasets, highlighting the importance of specialized re-rank models\nin the medical field. Additional evaluations are provided in Appendix B.2.\nCase Study To further analyze the generated content of the KG-Rank framework, a case study is presented in Fig. 3. When asked about ideal diet recommendations for a 53-year-old male with acute renal failure and hepatic failure, both provide guidelines regarding protein intake. However, the original recommendation emphasizes ensuring\nadequate protein consumption (1.6-2.2 grams per kilogram), whereas the answer generated under the KG-Rank framework advises controlling protein intake (limited to about 0.8-1 gram per kilogram). The difference is critical for patients with acute renal and hepatic failure, where an inappropriate protein dosage, such as the higher range of 1.6-2.2 grams per kilogram, could worsen the strain on already compromised kidneys and liver, potentially leading to escalated health issues. This case shows that KG-Rank is more factually correct in the generated answer. 
More case studies can be found in the Appendix C.\nLLM-based Evaluation Although KG-Rank achieves significant improvements in ROUGE, BERTScore, MoverScore, and BLEURT, these automatic scores may have limitations in evaluating the factuality of long-form medical QA. Therefore, we introduce GPT-4 score specifically for factuality\nevaluation (Zheng et al., 2024). The evaluation criteria are designed by two resident physicians with over five years of experience, which can be found in Appendix A.4. As shown in Tab. 4, we choose GPT-4 as the vanilla model, and KG-Rank outperforms the zero-shot setting across all datasets.\nKG-Rank in Open Domain Additionally, to demonstrate the effectiveness of our KG-Rank, we extend it to the open domain by replacing UMLS with Wikipedia through the DBpedia API (https://www.dbpedia.org/). We conduct the experiment on Mintaka (Sen et al., 2022), which is a complex, natural, and multilingual dataset designed for experimenting with end-to-end questionanswering models. We randomly select 1,000 pairs from the test set for evaluation. Under the enhancement of the KG-Rank framework, the accuracy increases from 60.40% to 61.90%. The detailed prompt can be found in Appendix A.5.\nWe also conduct experiments in the domains of law, business, music, and history using the ExpertQA dataset. We employ GPT-4 as the vanilla model and use ROUGE-L, BERTScore, and MoverScore for evaluation. As shown in Tab. 5, KG-Rank outperforms the baseline across all benchmarks. Building on these findings, the effectiveness of our framework is not limited to the medical domain but can also be applied to various other fields. For more case studies, please refer to Appendix C.", |
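The re-ranking step compared in the ablation above can be sketched as follows. A cross-encoder scores each (question, triplet) pair jointly and the triplets are reordered by that score; the trivial word-overlap scorer below is only a stand-in for MedCPT, and `rerank` and `overlap_score` are names assumed for this sketch.

```python
# Sketch of cross-encoder-style re-ranking: score each (question,
# triplet) pair, sort descending, keep the top-p triplets.
def overlap_score(question, triplet_text):
    """Toy pairwise scorer: shared-word count (stand-in for MedCPT)."""
    q_tokens = set(question.lower().split())
    t_tokens = set(triplet_text.lower().split())
    return len(q_tokens & t_tokens)

def rerank(question, triplets, top_p=2):
    """Return the top-p triplets ordered by pairwise score."""
    scored = sorted(triplets,
                    key=lambda t: overlap_score(question, " ".join(t)),
                    reverse=True)
    return scored[:top_p]
```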
| "5 Conclusion": "In this work, we propose KG-Rank, an enhanced LLM framework that integrates a medical KG and ranking techniques to improve the factuality of medical QA. As far as we know, KG-Rank is the first application of KG combined with ranking techniques for long-answer medical QA. Across four medical QA datasets, KG-Rank demonstrates over an 18% improvement in ROUGE-L score. Its application to open domains yields a 14% ROUGE-L score enhancement, underscoring KG-Rank’s effectiveness and versatility.", |
| "Ethical Considerations": "This research utilize public medical datasets solely for academic purposes, not for practical application. We employ GPT-4, LLaMa2-13b, LLaMa27b, baize-healthcare for text generation, ensuring that no harmful content was produced. Both the benchmark datasets and the model outputs are free of any individual privacy data.", |
| "A Prompt Templates": "In this section, we present the detailed prompt templates employed as inputs for LLMs at each phase of the KG-Rank process.", |
| "A.1 Medical NER Prompt": "Fig. 4 illustrates the Medical NER prompt template that is specifically designed for extracting medical terminologies from a given question.", |
| "A.2 Answer Expansion Prompt": "Figure 5 illustrates the prompt template designed for our proposed answer expansion ranking strategy, as shown in step 2 of Fig. 1 and as described in Section 2.3.", |
| "A.3 KG-Enhanced Prompt": "Fig. 6 shows the prompt template to obtain final answers from LLMs, corresponding to step 4 in Fig. 1.", |
| "A.4 Physician-Designed Criteria for GPT-4 Evaluation": "Tab. 6 shows the criteria for evaluating medical long-form QA established by two resident physicians with over five years of experience. This critria is part of the GPT-4 evaluation prompt.", |
| "A.5 KG-Enhanced Prompt for Mintaka Task": "Fig. 7 presents the prompt for obtaining KG-enhanced LLM answers, specially designed for the Mintaka dataset.", |
| "B.1 Zero-shot Performance of Different LLMs": "In this section, we evaluate the performance of widely-used LLMs on four medical datasets under the zero-shot setting. As shown in Tab. 7, the results indicate that GPT-4 performing better than the other LLMs.", |
| "B.2 Performance of Different Re-rank Models": "In this section, we evaluate the performance of MedCPT and the Cohere re-rank model on four medical datasets within the GPT-4 with similarity ranking setting. As shown in Table 8, the results indicate that MedCPT outperforms the Cohere re-rank model.", |
| "C More Case Studies": "We put another case study from the ExpertQA-Med dataset, where in regards to the prognosis survival rates of breast cancer cases, the answer generated by KG-Rank is more factually accurate in terms of medical evidence, as shown in Fig. 8. Moreover, Fig. 9 shows a case study on the open-domain QA tasks from the Mintaka dataset, comparing the performance of the vanilla GPT-4 model against the KG-Rank-enhanced GPT-4 model. The case study involves a question: “How many of the Godfather movies was Robert De Niro in?” While GPT-4 responded with “2”, our proposed KG-Rank-enhanced GPT-4 provided the correct answer “1”, which matches the ground truth. We also show the evidence retrieved from DBPedia. This case study shows that by incorporating KG-Rank, the model is able to leverage the relevant information effectively to derive the correct answer, whereas the vanilla GPT-4 did not. This demonstrates the efficacy of KG-Rank in improving the accuracy of answers in LLMs when dealing with general domain factual questions.", |
| "D Experimental Setup": "In our experimental setup, we employ UmlsBERT1, baize-healthcare2, llama-2-7b-chat-hf3, llama-2-13b-chat-hf4, MedCPT5 from Hugging Face. For GPT-4, we use the OpenAI API with a zero-temperature setting. For the Cohere re-rank model, we employ it through its API. In the MMR Ranking setting, the default value for w is 0.1, and δ is set to 0.01. All experiments are conducted on a cluster equipped with 4 NVIDIA A100 GPUs. The prediction for each sample takes about a few seconds. Based on the size of each dataset, it may take up to hours to finish the evaluation.\n1GanjinZero/UMLSBert_ENG 2https://huggingface.co/project-baize/baize-healthcare-lora-7B 3https://huggingface.co/meta-llama 4https://huggingface.co/meta-llama 5https://huggingface.co/ncbi/MedCPT-Cross-Encoder" |
| } |