## RAG vs. GraphRAG: A Systematic Evaluation and Key Insights
### Haoyu Han[1], Yu Wang[2], Harry Shomer[1], Yongjia Lei[2], Kai Guo[1], Zhigang Hua[3], Bo Long[3], Hui Liu[1], Jiliang Tang[1]
1Michigan State University, 2University of Oregon, 3Meta
### {hanhaoy1, shomerha, guokai1, liuhui7, tangjili}@msu.edu {yuwang, yongjia}@uoregon.edu, {zhua, bolong}@meta.com
### Abstract
Retrieval-Augmented Generation (RAG) enhances the performance of LLMs across various tasks by retrieving relevant information from external sources, particularly on text-based data. For structured data, such as knowledge graphs, GraphRAG has been widely used to retrieve relevant information. However, recent studies have revealed that structuring implicit knowledge from text into graphs can benefit certain tasks, extending the application of GraphRAG from graph data to general text-based data. Despite these successful extensions, most applications of GraphRAG to text data have been designed for specific tasks and datasets, lacking a systematic evaluation and comparison between RAG and GraphRAG on widely used text-based benchmarks. In this paper, we systematically evaluate RAG and GraphRAG on well-established benchmark tasks, such as Question Answering and Query-based Summarization. Our results highlight the distinct strengths of RAG and GraphRAG across different tasks and evaluation perspectives. Inspired by these observations, we investigate strategies to integrate their strengths to improve downstream tasks. Additionally, we provide an in-depth discussion of the shortcomings of current GraphRAG approaches and outline directions for future research.
### 1 Introduction
Retrieval-Augmented Generation (RAG) has
emerged as a powerful approach to enhance downstream tasks by retrieving relevant knowledge from
external data sources. It has achieved remarkable
success in various real-world applications, such
as healthcare (Xu et al., 2024), law (Wiratunga
et al., 2024), finance (Zhang et al., 2023), and education (Miladi et al., 2024). This success has been
further amplified with the advent of Large Language Models (LLMs), as integrating RAG with
LLMs significantly improves their faithfulness by
mitigating hallucinations, reducing privacy risks,
and enhancing robustness (Zhao et al., 2023; Huang
et al., 2023). In most existing RAG systems, retrieval is primarily conducted from text databases
using lexical and semantic search.
Graphs, as a fundamental data structure, encode
rich relational information and have been extensively utilized across real-world domains, including
knowledge representation, social network analysis,
and biomedical research (Wu et al., 2020; Ma and
Tang, 2021; Wu et al., 2023). Motivated by this,
GraphRAG has recently gained attention for retrieving graph-structured data, such as knowledge
graphs (KGs) and molecular graphs (Han et al.,
2024; Peng et al., 2024). Beyond leveraging existing graphs, GraphRAG has also demonstrated its
effectiveness for text-based tasks after structuring
implicit knowledge from text into graph representations, benefiting applications such as global summarization (Edge et al., 2024; Zhang et al., 2024),
planning (Lin et al., 2024) and reasoning (Han et al.,
2025).
While previous studies have demonstrated the
potential of GraphRAG for text-based tasks by
converting sequential text into graphs, most of
them primarily focus on specific tasks and well-designed datasets. Consequently, the applicability of GraphRAG to broader, real-world text-based tasks remains unclear, particularly when compared to RAG, which has seen widespread adoption across diverse applications. This raises a critical question: _What are the advantages and disadvantages of applying GraphRAG to general text-based tasks compared to RAG?_
To bridge this gap, we systematically evaluate
the performance of RAG and GraphRAG on general text-based tasks using widely adopted datasets,
including Question Answering and Query-based
Summarization. Specifically, we assess two representative GraphRAG methods: (1) Knowledge
Graph-based GraphRAG (Liu, 2022), which extracts a
Knowledge Graph (KG) from text and performs retrieval solely based on the KG, and (2)
Community-based GraphRAG (Edge et al., 2024),
which retrieves information not only from the constructed KG but also from hierarchical communities within the graph. For the Question Answering task, we conduct experiments on both single-hop and multi-hop QA under single-document and multi-document scenarios. Similarly, for the Query-based Summarization task, we evaluate both single-document and multi-document summarization to
comprehensively assess the effectiveness of RAG
and GraphRAG.
Based on our comprehensive evaluation, we
conduct an in-depth analysis of the strengths and
weaknesses of RAG and GraphRAG across different tasks. Our findings reveal that RAG and
GraphRAG are complementary, each excelling in
different aspects. For the Question Answering task,
we observe that RAG performs better on single-hop questions and those requiring detailed information, while GraphRAG is more effective for
multi-hop questions. In the Query-based Summarization task, RAG captures fine-grained details,
whereas GraphRAG generates more diverse and
multi-faceted summaries. Building on these insights, we investigate two strategies from different
perspectives to integrate their unique strengths and
enhance the overall performance. Our main contributions are as follows:
- Systematic Evaluation: This is the first
work to systematically evaluate and compare
RAG and GraphRAG on text-based tasks using
widely adopted datasets and evaluations.
- Task-Specific Insights: We provide an in-depth
analysis of the distinct strengths of RAG and
GraphRAG, demonstrating their complementary
advantages across different types of queries and
objectives.
- Hybrid Retrieval Strategies: Based on our
findings on the unique strengths of RAG and
GraphRAG, we propose two strategies to improve overall performance: (1) Selection, where
queries are dynamically assigned to either RAG
or GraphRAG based on their characteristics, and
(2) Integration, where the retrieved results of both methods are combined to leverage their complementary strengths.
- Challenges and Future Directions: We discuss
the limitations of current GraphRAG approaches
and outline potential future research directions
for broader applicability.
### 2 Related Works
**2.1** **Retrieval-Augmented Generation**
Retrieval-Augmented Generation (RAG) has been
widely applied to enhance the performance of
Large Language Models (LLMs) by retrieving relevant information from external sources, addressing
the limitation of LLMs’ restricted context windows,
improving factual accuracy, and mitigating hallucinations (Fan et al., 2024; Gao et al., 2023). Most
RAG systems primarily process text data by first
splitting it into chunks (Finardi et al., 2024). When
a query is received, RAG retrieves relevant chunks
either through lexical search (Ram et al., 2023)
or by computing semantic similarity (Karpukhin
et al., 2020), embedding both the query and the text chunks into a shared vector space. Advanced techniques, such as pre-retrieval processing (Ma et al.,
2023; Zheng et al., 2023a) and post-retrieval processing (Dong et al., 2024; Xu et al., 2023), as
well as fine-tuning strategies (Li et al., 2023), have
further enhanced RAG’s effectiveness across various domains, including QA (Yan et al., 2024),
dialogue generation (Izacard et al., 2023), and text
summarization (Jiang et al., 2023).
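To make the shared-vector-space retrieval concrete, the sketch below ranks pre-embedded chunks by cosine similarity to a query embedding. It is a minimal illustration, not code from any of the cited systems; the function name and the choice of cosine similarity are our assumptions.

```python
import numpy as np

def top_k_chunks(query_emb: np.ndarray, chunk_embs: np.ndarray, k: int = 10):
    """Rank text chunks by cosine similarity to the query in a shared vector space."""
    q = query_emb / np.linalg.norm(query_emb)
    c = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    scores = c @ q                 # one cosine similarity per chunk
    top = np.argsort(-scores)[:k]  # indices of the k most similar chunks
    return top, scores[top]
```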
Several studies have evaluated the effectiveness
of RAG systems across various tasks (Yu et al.,
2024; Chen et al., 2024; Es et al., 2023), such
as multi-hop question answering (Tang and Yang,
2024), biomedical question answering (Xiong et al.,
2024), and text generation (Liu et al., 2023). However, no existing study has simultaneously and
systematically evaluated and compared RAG and
GraphRAG on these general text-based tasks.
**2.2** **Graph Retrieval-Augmented Generation**
While RAG primarily processes text data, many
real-world scenarios involve graph-structured data,
such as knowledge graphs (KGs), social graphs,
and molecular graphs (Xia et al., 2021; Ma and
Tang, 2021). GraphRAG (Han et al., 2024; Peng
et al., 2024) aims to retrieve information from various types of graph-structured data. The inherent
structure of graphs enhances retrieval by capturing relationships between connected nodes. For
example, hyperlinks between documents can improve retrieval effectiveness in question answering
tasks (Li et al., 2022). Currently, most GraphRAG
studies focus on retrieving information from existing KGs for downstream tasks such as KG-based
QA (Tian et al., 2024; Yasunaga et al., 2021) and
Fact-Checking (Kim et al., 2023).
Figure 1: Illustration of RAG, KG-based GraphRAG, and Community-based GraphRAG.
Beyond leveraging existing graphs, recent studies have explored incorporating graph construction into GraphRAG to enhance text-based
tasks. For example, Dong et al. (2024) construct
document graphs using Abstract Meaning Representation (AMR) to improve document ranking.
Edge et al. (2024) construct graphs from documents
using LLMs, where nodes represent entities and
edges capture relationships between them. Based
on these graphs, they generate hierarchical communities and corresponding community summaries
or reports. Their approach focuses on the global
query summarization task, retrieving information
from both the constructed graphs and their hierarchical communities. Additionally, Han et al. (2025)
propose an iterative graph construction approach
using LLMs to improve reasoning tasks.
These studies highlight the potential of
GraphRAG in processing text-based tasks by constructing graphs from textual data. However, their
focus is limited to specific tasks and evaluation
settings. It remains unclear how GraphRAG performs on general text-based tasks compared to
RAG. More importantly, when and how should
GraphRAG be applied to such tasks for optimal
effectiveness? Our work aims to bridge this gap by
systematically evaluating GraphRAG and comparing it with RAG on general text-based tasks.
### 3 Evaluation Methodology
In this section, we introduce the details of our
evaluation framework. We primarily evaluate one
representative RAG system and two representative
GraphRAG systems, as illustrated in Figure 1.
**3.1** **RAG**
We adopt a representative semantic similarity-based retrieval approach as our RAG
method (Karpukhin et al., 2020). Specifically, we
first split the text into chunks, each containing
approximately 256 tokens. For indexing, we use
OpenAI’s text-embedding-ada-002 model, which
has demonstrated effectiveness across various
tasks (Nussbaum et al., 2024). For each query, we retrieve the ten chunks with the highest similarity scores. To
generate responses, we employ two open-source
models of different sizes: Llama-3.1-8B-Instruct
and Llama-3.1-70B-Instruct (Dubey et al., 2024).
For single-document tasks, we build a separate RAG system for each document, ensuring that
queries corresponding to a specific document are
processed within its respective indexed chunk pool.
For multi-document tasks, we use a shared RAG
system by indexing all documents together.
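The following sketch illustrates this setup under stated assumptions: chunks are approximated by whitespace words rather than the paper's (unspecified) tokenizer, and the OpenAI embeddings API is called as in its current Python client. It is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chunk_text(text: str, size: int = 256) -> list[str]:
    # Approximate ~256-token chunks by whitespace words; the exact chunking
    # (tokenizer, overlap) is not specified in this section.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def build_index(document: str):
    """Per-document index, matching the single-document setting above."""
    chunks = chunk_text(document)
    return chunks, embed(chunks)

def retrieve(query: str, chunks: list[str], chunk_embs: np.ndarray, k: int = 10):
    """Return the top-k chunks by cosine similarity to the query embedding."""
    q = embed([query])[0]
    scores = chunk_embs @ q / (np.linalg.norm(chunk_embs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-scores)[:k]]
```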
**3.2** **GraphRAG**
We select two representative GraphRAG methods for a comprehensive evaluation, as shown
in Figure 1, namely KG-based GraphRAG and
Community-based GraphRAG.
In the KG-based GraphRAG (KG-GraphRAG) (Liu, 2022), a knowledge graph is first
constructed from text chunks using LLMs through
triplet extraction. When a query is received, its
entities are extracted and matched to those in
the constructed KG using LLMs. The retrieval
process then traverses the graph from the matched
entities and gathers triplets (head, relation, tail)
from their multi-hop neighbors as the retrieved
content. Additionally, for each triplet, we can
retrieve the corresponding text associated with
it. We define two variants of KG-GraphRAG: (1)
_KG-GraphRAG (Triplets), which retrieves only the_
triplets, and (2) KG-GraphRAG (Triplets+Text),
which retrieves both the triplets and their associated
source text. We implement the KG-GraphRAG
methods using LlamaIndex (Liu, 2022) [1].
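A minimal sketch of this retrieval loop is shown below, assuming triplets have already been extracted by an LLM and the query's entities already matched to graph nodes. The graph library, helper names, and hop-limited traversal are illustrative assumptions, not LlamaIndex's actual internals.

```python
import networkx as nx

def build_kg(triplets, sources=None):
    """Build a directed KG from (head, relation, tail) triplets.

    `sources` optionally maps each triplet index to its source text chunk,
    enabling the Triplets+Text variant described above.
    """
    g = nx.MultiDiGraph()
    for i, (h, r, t) in enumerate(triplets):
        g.add_edge(h, t, relation=r, source=sources[i] if sources else None)
    return g

def retrieve_kg(g, matched_entities, hops=2, with_text=False):
    """Traverse the KG from matched query entities, collecting triplets from
    their multi-hop neighborhood (following both edge directions)."""
    frontier = set(matched_entities) & set(g.nodes)
    visited = set(frontier)
    triplets, texts = [], []
    for _ in range(hops):
        nxt = set()
        for node in frontier:
            edges = list(g.out_edges(node, data=True)) + list(g.in_edges(node, data=True))
            for u, v, data in edges:
                triplets.append((u, data["relation"], v))
                if with_text and data["source"]:
                    texts.append(data["source"])
                nxt.add(v if u == node else u)
        frontier = nxt - visited
        visited |= frontier
    return (triplets, texts) if with_text else triplets
```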
For the Community-based GraphRAG (Edge
et al., 2024), in addition to generating KGs using
LLMs, hierarchical communities are constructed
using graph community detection algorithms, as
shown in Figure 1. Each community is associated with a corresponding text summary or report,
where lower-level communities contain detailed
information from the original text. The higher-level communities further provide summaries of
the lower-level communities. Due to the hierarchical community structure, there are two primary
retrieval methods for retrieving relevant information given a query: **Local Search** and **Global Search**. In Local Search, entities, relations, their
descriptions, and lower-level community reports
are retrieved based on entity matching between the
query’s extracted entities and the constructed graph.
We refer to this method as Community-GraphRAG
_(Local). In Global Search, only high-level com-_
munity summaries are retrieved based on semantic
similarity to the query. We refer to this method as
_Community-GraphRAG (Global). The Community-_
GraphRAG methods are implemented using Microsoft GraphRAG (Edge et al., 2024)[2].
To ensure a fair comparison, we adopt the same
settings for both RAG and GraphRAG methods.
This includes the chunking strategy, embedding
model, and LLMs. We select two representative RAG tasks, i.e., Question Answering and
Query-based Summarization, to evaluate RAG and
GraphRAG simultaneously.
### 4 Question Answering
QA is one of the most widely used tasks for evaluating the performance of RAG systems. QA tasks
come in various forms, such as single-hop QA,
multi-hop QA, and open-domain QA (Wang, 2022).
To systematically assess the effectiveness of RAG
and GraphRAG in these tasks, we evaluate them
on widely used QA datasets and employ standard
evaluation metrics.
1https://www.llamaindex.ai/
2https://microsoft.github.io/graphrag
**4.1** **Datasets and Evaluation Metrics**
To comprehensively evaluate the performance of
GraphRAG on general QA tasks, we select four
widely used datasets that cover different perspectives. For the single-hop QA task, we select
the Natural Questions (NQ) dataset (Kwiatkowski
et al., 2019). For the multi-hop QA task, we select HotPotQA (Yang et al., 2018) and MultiHop-RAG (Tang and Yang, 2024) datasets. The
MultiHop-RAG dataset categorizes queries into
four types: Inference, Comparison, Temporal, and
Null queries. To further analyze the performance of
RAG and GraphRAG at a finer granularity, we also
include NovelQA (Tang and Yang, 2024), which
contains 21 different types of queries. For more
details, please refer to Appendix A.1.1. We use
Precision (P), Recall (R), and F1-score as evaluation metrics for the NQ and HotPotQA datasets,
while accuracy is used for the MultiHop-RAG and
NovelQA datasets following their original papers.
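For reference, a common way to compute token-level precision, recall, and F1 (in the style of the SQuAD/HotpotQA evaluation scripts) is sketched below; whether the paper applies additional answer normalization (articles, punctuation) is not stated, so that detail is an assumption.

```python
from collections import Counter

def token_f1(prediction: str, ground_truth: str):
    """Token-overlap precision, recall, and F1 between answer strings."""
    pred, gold = prediction.lower().split(), ground_truth.lower().split()
    common = Counter(pred) & Counter(gold)   # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return p, r, 2 * p * r / (p + r)
```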
**4.2** **QA Main Results**
The performance comparison for the NQ and HotPotQA datasets is presented in Table 1, while that
of MultiHop-RAG is shown in Table 2. Due to
space constraints, partial results of NovelQA with
the Llama 3.1-8B model are shown in Table 3, with
the full results available in Appendix A.2. Based on
these results, we make the following observations:
1. RAG excels on detailed single-hop queries.
RAG performs well on single-hop queries and
queries that require detailed information. This
is evident from its performance on the single-hop dataset (NQ) as well as the single-hop (sh)
and detail-oriented (dtl) queries in the NovelQA
dataset, as shown in Table 1 and Table 3.
2. **GraphRAG, particularly Community-GraphRAG (Local), excels on multi-hop queries.** For instance, it achieved the best
performance on both the HotPotQA and
MultiHop-RAG datasets. Although its overall
performance on the NovelQA dataset is lower
than that of RAG, it still performs well on the
multi-hop (mh) queries in the NovelQA dataset.
3. **Community-GraphRAG (Global) often struggles on QA tasks.** This is because the global search retrieves only high-level communities, leading to a loss of detailed information. This is particularly evident from its lower performance on detail-oriented queries in the NovelQA dataset.
Table 1: Performance comparison (%) on NQ and Hotpot datasets. The best results are highlighted in bold, and the
second-best results are underlined.
In the following tables, 8B and 70B denote Llama 3.1-8B and Llama 3.1-70B.

| Method | NQ (8B) P | NQ (8B) R | NQ (8B) F1 | NQ (70B) P | NQ (70B) R | NQ (70B) F1 | Hotpot (8B) P | Hotpot (8B) R | Hotpot (8B) F1 | Hotpot (70B) P | Hotpot (70B) R | Hotpot (70B) F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RAG | **71.7** | **63.93** | **64.78** | **74.55** | **67.82** | **68.18** | 62.32 | 60.47 | 60.04 | 66.34 | 63.99 | 63.88 |
| KG-GraphRAG (Triplets only) | 40.09 | 33.56 | 34.28 | 37.84 | 31.22 | 28.50 | 26.88 | 24.81 | 25.02 | 32.59 | 30.63 | 30.73 |
| KG-GraphRAG (Triplets+Text) | 58.36 | 48.93 | 50.27 | 60.91 | 52.75 | 53.88 | 45.22 | 42.85 | 42.60 | 51.44 | 48.99 | 48.75 |
| Community-GraphRAG (Local) | 69.48 | 62.54 | 63.01 | 71.27 | 65.46 | 65.44 | **64.14** | **62.08** | **61.66** | **67.20** | **64.89** | **64.60** |
| Community-GraphRAG (Global) | 60.76 | 54.99 | 54.48 | 61.15 | 55.52 | 55.05 | 45.72 | 47.60 | 45.16 | 48.33 | 48.56 | 46.99 |
Table 2: Performance comparison (%) on the MultiHop-RAG dataset across different query types.
| Method | Inference (8B) | Comparison (8B) | Null (8B) | Temporal (8B) | Overall (8B) | Inference (70B) | Comparison (70B) | Null (70B) | Temporal (70B) | Overall (70B) |
|---|---|---|---|---|---|---|---|---|---|---|
| RAG | **92.16** | 57.59 | 96.01 | 30.7 | 67.02 | **94.85** | 56.31 | 91.36 | 25.73 | 65.77 |
| KG-GraphRAG (Triplets only) | 55.76 | 22.55 | **98.67** | 18.7 | 41.24 | 76.96 | 32.36 | **94.35** | 19.55 | 50.98 |
| KG-GraphRAG (Triplets+Text) | 67.4 | 34.7 | 97.34 | 17.15 | 48.51 | 85.91 | 35.98 | 86.38 | 21.61 | 54.58 |
| Community-GraphRAG (Local) | 86.89 | 60.63 | 80.07 | 50.6 | **69.01** | 92.03 | 60.16 | 88.70 | 49.06 | **71.17** |
| Community-GraphRAG (Global) | 89.34 | **64.02** | 19.27 | **53.34** | 64.4 | 89.09 | **66.00** | 13.95 | **59.18** | 65.69 |
Table 3: Performance comparison (%) on the NovelQA dataset across different query types with Llama 3.1-8B.

**RAG**

| | chara | mean | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 68.75 | 52.94 | 58.33 | 75.28 | 92.31 | 64.00 | 33.96 | 47.34 |
| sh | 69.08 | 62.86 | 66.11 | 75.00 | 78.35 | - | - | 68.73 |
| dtl | 64.29 | 45.51 | 78.57 | 10.71 | 83.78 | - | - | 55.28 |
| avg | 67.78 | 50.57 | 67.37 | 60.80 | 80.95 | 64.00 | 33.96 | 57.12 |

**KG-GraphRAG (Triplets+Text)**

| | chara | mean | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 52.08 | 52.94 | 44.44 | 55.06 | 69.23 | 64.00 | 28.61 | 38.37 |
| sh | 36.84 | 45.71 | 40.17 | 87.50 | 36.08 | - | - | 39.93 |
| dtl | 38.57 | 30.90 | 42.86 | 21.43 | 32.43 | - | - | 33.60 |
| avg | 40.00 | 36.23 | 41.09 | 49.60 | 38.10 | 64.00 | 28.61 | 37.80 |

**Community-GraphRAG (Local)**

| | chara | mean | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 68.75 | 64.71 | 55.56 | 67.42 | 92.31 | 52.00 | 35.83 | 47.01 |
| sh | 59.87 | 58.57 | 65.69 | 87.50 | 64.95 | - | - | 63.43 |
| dtl | 54.29 | 37.64 | 62.50 | 25.00 | 70.27 | - | - | 46.88 |
| avg | 60.00 | 44.91 | 64.05 | 59.20 | 68.71 | 52.00 | 35.83 | 53.03 |

**Community-GraphRAG (Global)**

| | chara | mean | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 54.17 | 58.82 | 55.56 | 56.18 | 53.85 | 68.00 | 20.59 | 34.39 |
| sh | 45.39 | 50.00 | 55.65 | 87.50 | 38.14 | - | - | 49.65 |
| dtl | 28.57 | 29.78 | 32.14 | 87.50 | 40.54 | - | - | 30.89 |
| avg | 42.59 | 36.98 | 51.66 | 52.00 | 40.14 | 68.00 | 20.59 | 39.17 |
Additionally, Community-GraphRAG (Global) tends to hallucinate in QA tasks, as shown by its poor performance on Null queries in the MultiHop-RAG dataset, which should ideally be answered with 'insufficient information.' However, this summarization approach may be beneficial for queries that require comparing different topics or understanding their temporal ordering, such as the Comparison and Temporal queries in the MultiHop-RAG dataset, as shown in Table 2.
4. **KG-based GraphRAG also generally underperforms on QA tasks.** This is because it retrieves information solely from the constructed
knowledge graph, which contains only entities
and their relations. However, the extracted entities and relations may be incomplete, leading
to gaps in the retrieved information. To verify
this, we calculated the ratio of answer entities
present in the constructed KG. We found that
only around 65.8% of answer entities exist in
the constructed KG for the Hotpot dataset and
65.5% for the NQ dataset. These findings highlight a key limitation in KG-based retrieval and
suggest the need for improved KG construction
methods to enhance graph completeness for QA.
**4.3** **Comparative QA Analysis**
In this section, we conduct a detailed analysis of
the behavior of RAG and GraphRAG, focusing
on their strengths and weaknesses. In the following discussion, we refer to Community-GraphRAG
(Local) as GraphRAG, as it demonstrates performance comparable to RAG. We categorize queries
into four groups: (1) Queries correctly answered
by both methods, (2) Queries correctly answered
only by RAG (RAG-only), (3) Queries correctly answered only by GraphRAG (GraphRAG-only), and
**(4) Queries answered incorrectly by both methods.**
The confusion matrices representing these four
groups using the Llama 3.1-8B model are shown
in Figure 2. Notably, the proportions of queries
correctly answered exclusively by GraphRAG and
RAG are significant. For example, 13.6% of
queries are GraphRAG-only, while 11.6% are RAG-only on the MultiHop-RAG dataset. This phenomenon
highlights the complementary properties of RAG
and GraphRAG, and each method has its own
strengths and weaknesses. Therefore, _leveraging their unique advantages has the potential to improve overall performance_.
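The four-group breakdown shown in Figure 2 can be computed directly from per-query correctness flags, as in the hypothetical helper below (a sketch; the paper does not give its analysis code).

```python
def correctness_groups(rag_correct: list[bool], graph_correct: list[bool]):
    """Proportions of the four query groups given per-query correctness flags."""
    n = len(rag_correct)
    both = sum(r and g for r, g in zip(rag_correct, graph_correct)) / n
    rag_only = sum(r and not g for r, g in zip(rag_correct, graph_correct)) / n
    graph_only = sum(g and not r for r, g in zip(rag_correct, graph_correct)) / n
    neither = 1 - both - rag_only - graph_only
    return {"both": both, "rag_only": rag_only,
            "graphrag_only": graph_only, "neither": neither}
```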
| Dataset | Both correct | RAG-only | GraphRAG-only | Both incorrect |
|---|---|---|---|---|
| NQ | 45.4 | 9.2 | 9.8 | 35.6 |
| Hotpot | 47.2 | 7.8 | 6.0 | 39.0 |
| MultiHop-RAG | 55.4 | 11.6 | 13.6 | 19.4 |
| NovelQA | 40.0 | 17.1 | 13.7 | 29.1 |
Figure 2: Confusion matrices comparing GraphRAG and RAG correctness across datasets using Llama 3.1-8B.
**4.4** **Improving QA Performance**
Building on the complementary properties of RAG
and GraphRAG, we investigate the following two
strategies to enhance overall QA performance.
**Strategy 1: RAG vs. GraphRAG Selection.**
In Section 4.2, we observe that RAG generally
performs well on single-hop queries and those
requiring detailed information, while GraphRAG
(Community-GraphRAG (Local)) excels in multi-hop queries that require reasoning. Therefore, we
hypothesize that RAG is well-suited for fact-based
queries, which rely on direct retrieval and detailed
information, whereas GraphRAG is more effective
for reasoning-based queries that involve chaining
multiple facts together. Thus, given a query,
we employ a classification mechanism to determine
whether it is fact-based or reasoning-based. Each
query is then assigned to either RAG or GraphRAG
based on the classification results. Specifically, we
leverage the in-context learning ability of LLMs
for classification (Dong et al., 2022; Wei et al.,
2023). Further details and prompts can be found
in Appendix A.3. In this strategy, either RAG or
GraphRAG is selected for each query, and we refer
to this strategy as Selection.
**Strategy 2: RAG and GraphRAG Integration.**
We also explore the Integration strategy to leverage the complementary strengths of RAG and
GraphRAG. Both RAG and GraphRAG retrieve
information for a query simultaneously. The retrieved results are then concatenated and fed into
the generator to produce the final output.
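A minimal sketch of the two strategies follows; `llm`, `rag`, and `graphrag` are placeholder callables, and the router prompt is a hypothetical stand-in (the paper's actual in-context classification prompt is given in its Appendix A.3).

```python
ROUTER_PROMPT = (
    "Classify the question as 'fact' if it asks for a single piece of detailed "
    "information, or 'reasoning' if it requires chaining multiple facts.\n"
    "Question: {q}\nLabel:"
)  # hypothetical prompt, not the paper's

def selection(query, llm, rag, graphrag):
    """Strategy 1: route each query to RAG or GraphRAG via LLM classification."""
    label = llm(ROUTER_PROMPT.format(q=query)).strip().lower()
    retriever = rag if label.startswith("fact") else graphrag
    return retriever.retrieve(query)

def integration(query, rag, graphrag):
    """Strategy 2: concatenate the contexts retrieved by both methods and
    feed the combined context to the generator."""
    return rag.retrieve(query) + graphrag.retrieve(query)
```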
We conduct experiments to verify the effectiveness of the two proposed strategies. Specifically,
we evaluate overall performance across all selected
datasets. For the MultiHop-RAG and NovelQA
datasets, we use the overall accuracy, while for the
NQ and HotPotQA datasets, we use the F1 score
as the evaluation metric. The results are shown
in Figure 3. From these results, we observe that
**both strategies generally enhance overall per-**
**formance. For example, on the MultiHop-RAG**
dataset with Llama 3.1-70B, Selection and Integration improve over the best individual method by 1.1% and 6.4%,
respectively. When comparing the Selection and
Integration strategies, the Integration strategy usually achieves higher performance than the Selection strategy. However, the Selection strategy processes each query using either RAG or GraphRAG,
making it more efficient. In contrast, the Integration strategy yields better performance but requires each query to be processed by both RAG
and GraphRAG, increasing computational cost.
### 5 Query-Based Summarization
Query-based summarization tasks are widely used
to evaluate the performance of RAG systems (Ram
et al., 2023; Yu et al., 2023). GraphRAG has
also demonstrated its effectiveness in summarization tasks (Edge et al., 2024). However, Edge
et al. (2024) only evaluate its effectiveness on the
global summarization task and rely on LLM-as-a-Judge (Zheng et al., 2023b) for performance assessment. In Section 5.3, we show that the LLM-as-a-Judge evaluation method for summarization
tasks introduces position bias, which can impact
the reliability of results. A systematic comparison
of RAG and GraphRAG on general query-based
summarization across widely used datasets remains
unexplored. To address this gap, we conduct a comprehensive evaluation in this section, leveraging
widely used datasets and evaluation metrics.
**5.1** **Datasets and Evaluation Metrics**
We adopt two widely used single-document query-based summarization datasets, SQuALITY (Wang
et al., 2022) and QMSum (Zhong et al., 2021),
and two multi-document query-based summarization datasets, ODSum-story and ODSum-meeting (Zhou et al., 2023), for our evaluation.
Unlike the LLM-generated global queries used in Edge et al. (2024), the queries in these datasets are related to specific roles or events and thus require more detailed information. We use ROUGE-2 (Lin, 2004) and BERTScore (Zhang et al., 2019) as evaluation metrics, comparing the generated summaries against the ground-truth summaries.
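Assuming the standard `rouge-score` and `bert-score` Python packages (the paper does not name its implementation), this evaluation can be sketched as:

```python
from rouge_score import rouge_scorer
from bert_score import score as bert_score

def evaluate_summary(prediction: str, reference: str):
    """ROUGE-2 and BERTScore P/R/F1 against a ground-truth summary."""
    r2 = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True).score(
        reference, prediction)["rouge2"]
    P, R, F = bert_score([prediction], [reference], lang="en")
    return {"rouge2": (r2.precision, r2.recall, r2.fmeasure),
            "bertscore": (P.item(), R.item(), F.item())}
```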
Figure 3: Overall QA performance comparison of different methods. (a) Llama 3.1-8B; (b) Llama 3.1-70B.
**5.2** **Summarization Experimental Results**

We evaluate both the KG-based and Community-based GraphRAG methods, along with the Integration strategy discussed in Section 4.4. The results of the Llama 3.1-8B model on query-based single-document and multi-document summarization are shown in Table 4 and Table 5, respectively. The results of Llama 3.1-70B are shown in Appendix A.4. Based on these results, we make the following observations:
1. **RAG generally performs well on query-based summarization tasks.** This is particularly true on the multi-document summarization datasets, where it is often the best method.
2. **KG-based GraphRAG benefits from combining triplets with their corresponding text.**
This improves performance by incorporating
more details, making predictions closer to the
ground truth summaries.
3. **Community-based GraphRAG performs better with the Local search method.** Local search retrieves entities, relations, and low-level communities, while the Global search
method retrieves only high-level summaries.
This demonstrates the importance of detailed
information in the selected datasets.
4. **The Integration strategy is often comparable to RAG-only performance.** This strategy integrates retrieved content from both RAG and
Community-GraphRAG (Local), resulting in
performance similar to RAG alone.
**5.3** **Position Bias in Existing Evaluation**

From the results in Section 5.2, the Community-based GraphRAG, particularly with global search, generally underperforms compared to RAG on the
selected datasets. This contrasts with the findings
of Edge et al. (2024), where Community-based
GraphRAG with global search outperformed both
local search and RAG. There are two key differences between our evaluation and Edge et al.
(2024). First, their study primarily focuses on
global summarization, which captures the overall
information of an entire corpus, whereas the selected datasets in our evaluation contain queries related to specific roles or events. Second, Edge et al.
(2024) assess performance by comparing RAG
and GraphRAG outputs using LLM-as-a-Judge
without ground truth, whereas we evaluate results
against ground truth summaries using ROUGE and
BERTScore. These metrics emphasize similarity
to the reference summaries, which often contain
more detailed information.
We further conduct an evaluation following Edge
et al. (2024), using the LLM-as-a-Judge method to
compare RAG and Community-based GraphRAG
from two perspectives: Comprehensiveness and
Diversity. Comprehensiveness focuses on detail,
addressing the question: _"How much detail does the answer provide to cover all aspects and details of the question?"_ Meanwhile, Diversity emphasizes global information, evaluating: _"Does the answer provide a broad and globally inclusive perspective?"_ The prompt and details are shown in Appendix A.5. Specifically, we input the summaries
generated by RAG and GraphRAG into the prompt
and ask the LLM to select the better one for each
metric, following Edge et al. (2024). Additionally,
to better account for the order in which the summaries are presented, we consider two scenarios.
_Order 1 (O1)_: the RAG summary appears
Table 4: Performance on the query-based single-document summarization task using Llama 3.1-8B. (R-2 = ROUGE-2; BS = BERTScore.)

| Method | SQuALITY R-2 P | SQuALITY R-2 R | SQuALITY R-2 F1 | SQuALITY BS P | SQuALITY BS R | SQuALITY BS F1 | QMSum R-2 P | QMSum R-2 R | QMSum R-2 F1 | QMSum BS P | QMSum BS R | QMSum BS F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RAG | 15.09 | 8.74 | 10.08 | 74.54 | 81.00 | 77.62 | 21.50 | **3.80** | 6.32 | **81.03** | 84.45 | **82.69** |
| KG-GraphRAG (Triplets only) | 11.99 | 6.16 | 7.41 | 82.46 | 84.30 | 83.17 | 13.71 | 2.55 | 4.15 | 80.16 | 82.96 | 81.52 |
| KG-GraphRAG (Triplets+Text) | 15.00 | **9.48** | 10.52 | **84.37** | **85.88** | **84.92** | 16.83 | 3.32 | 5.38 | 80.92 | 83.64 | 82.25 |
| Community-GraphRAG (Local) | **15.82** | 8.64 | 10.10 | 83.93 | 85.84 | 84.66 | 20.54 | 3.35 | 5.64 | 80.63 | 84.13 | 82.34 |
| Community-GraphRAG (Global) | 10.23 | 6.21 | 6.99 | 82.68 | 84.26 | 83.30 | 10.54 | 1.97 | 3.23 | 79.79 | 82.47 | 81.10 |
| Integration | 15.69 | 9.32 | **10.67** | 74.56 | 81.22 | 77.73 | **21.97** | **3.80** | **6.34** | 80.89 | **84.47** | 82.63 |
Table 5: Performance on the query-based multi-document summarization task using Llama 3.1-8B. (R-2 = ROUGE-2; BS = BERTScore.)

| Method | ODSum-story R-2 P | ODSum-story R-2 R | ODSum-story R-2 F1 | ODSum-story BS P | ODSum-story BS R | ODSum-story BS F1 | ODSum-meeting R-2 P | ODSum-meeting R-2 R | ODSum-meeting R-2 F1 | ODSum-meeting BS P | ODSum-meeting BS R | ODSum-meeting BS F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RAG | **15.39** | 8.44 | **9.81** | **83.87** | **85.74** | **84.57** | 15.50 | **6.43** | **8.77** | **83.12** | **85.84** | **84.45** |
| KG-GraphRAG (Triplets only) | 11.02 | 5.56 | 6.62 | 82.09 | 83.91 | 82.77 | 11.64 | 4.87 | 6.58 | 81.13 | 84.32 | 82.69 |
| KG-GraphRAG (Triplets+Text) | 9.19 | 5.82 | 6.22 | 79.39 | 83.30 | 81.03 | 11.97 | 4.97 | 6.72 | 81.50 | 84.41 | 82.92 |
| Community-GraphRAG (Local) | 13.84 | 7.19 | 8.49 | 83.19 | 85.07 | 83.90 | 15.65 | 5.66 | 8.02 | 82.44 | 85.54 | 83.96 |
| Community-GraphRAG (Global) | 9.40 | 4.47 | 5.46 | 81.46 | 83.54 | 82.30 | 11.44 | 3.89 | 5.59 | 81.20 | 84.50 | 82.81 |
| Integration | 14.77 | **8.55** | 9.53 | 83.73 | 85.56 | 84.40 | **15.69** | 6.15 | 8.51 | 82.87 | 85.81 | 84.31 |
(a) QMSum Local (b) QMSum Global (c) ODSum-story Local (d) ODSum-story Global
Figure 4: Comparison of LLM-as-a-Judge evaluations for RAG and GraphRAG. "Local" refers to the evaluation of
RAG vs. GraphRAG-Local, while "Global" refers to RAG vs. GraphRAG-Global.
before the GraphRAG summary, and _Order 2 (O2)_:
GraphRAG appears before RAG. We compare the
proportion of selected best samples from RAG and
GraphRAG, where a higher proportion indicates
better performance as predicted by the LLM.
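The order-swapped protocol can be sketched as below; the judge prompt and the `llm` callable are hypothetical stand-ins (the paper's actual prompt is given in its Appendix A.5).

```python
JUDGE_PROMPT = (
    "Question: {q}\nSummary A:\n{a}\nSummary B:\n{b}\n"
    "Which summary is better in terms of {metric}? Answer 'A' or 'B'."
)  # hypothetical prompt, not the paper's

def judge_both_orders(query, rag_sum, graph_sum, metric, llm):
    """Run the pairwise judgment in both presentation orders (O1 and O2)."""
    o1 = llm(JUDGE_PROMPT.format(q=query, a=rag_sum, b=graph_sum, metric=metric))
    o2 = llm(JUDGE_PROMPT.format(q=query, a=graph_sum, b=rag_sum, metric=metric))
    winner_o1 = "RAG" if o1.strip().startswith("A") else "GraphRAG"
    winner_o2 = "GraphRAG" if o2.strip().startswith("A") else "RAG"
    # Disagreement between the two orders signals position bias.
    return winner_o1, winner_o2
```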
The results of RAG vs. GraphRAG (Local) and
RAG vs. GraphRAG (Global) on the QMSum and
ODSum-story datasets are presented in Figure 4.
More results can be found in Appendix A.6. We
can make the following observations: (1) **Position bias (Shi et al., 2024; Wang et al., 2024) is evident in the LLM-as-a-Judge evaluations for the summarization task**, as changing the order of the two methods significantly affects the predictions.
This effect is particularly strong in the comparison between RAG and GraphRAG (Local), where
the LLMs make completely opposite decisions
depending on the order, as shown in Figures 4a
and 4c. However, (2) Comparison between RAG
and GraphRAG (Global): While the proportions
vary, RAG consistently outperforms GraphRAG
(Global) in Comprehensiveness but underperforms
in Diversity as shown in Figures 4b and 4d. This result suggests that Community-based GraphRAG
**with Global Search focuses more on the global**
**aspects of whole corpus, whereas RAG captures**
**more detailed information.**
### 6 Conclusion
In this paper, we systematically evaluate and compare RAG and GraphRAG on general text-based
tasks. Our analysis reveals the distinct strengths
of RAG and GraphRAG in QA and query-based
summarization, as well as evaluation challenges in
summarization tasks, providing valuable insights
for future research. Building on these findings, we
propose two strategies to enhance QA performance.
Future work can explore improving GraphRAG
through better graph construction or developing
novel approaches to combine RAG and GraphRAG
methods for both effectiveness and efficiency.
### Limitations
In this paper, we evaluate and compare RAG and
GraphRAG on Question Answering and Query-based Summarization tasks. Future work can extend this study to additional tasks to further assess
the strengths and applicability of GraphRAG. Additionally, the graph construction in all GraphRAG
methods explored in this work relies on LLM-based
construction, where LLMs extract entities and relations. However, other graph construction models
designed for text processing exist and can be investigated in future studies. Finally, we primarily
evaluate generation performance using Llama 3.1-8B-Instruct and Llama 3.1-70B-Instruct. Future
research can explore other generation models for a
more comprehensive comparison.
### References
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun.
2024. Benchmarking large language models in
retrieval-augmented generation. In Proceedings of
_the AAAI Conference on Artificial Intelligence, vol-_
ume 38, pages 17754–17762.
Jialin Dong, Bahare Fatemi, Bryan Perozzi, Lin F Yang,
and Anton Tsitsulin. 2024. Don’t forget to connect!
improving rag with graph-based reranking. arXiv
_preprint arXiv:2405.18414._
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan
Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu,
Tianyu Liu, et al. 2022. A survey on in-context learning. arXiv preprint arXiv:2301.00234.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey,
Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman,
Akhil Mathur, Alan Schelten, Amy Yang, Angela
Fan, et al. 2024. The llama 3 herd of models. arXiv
_preprint arXiv:2407.21783._
Darren Edge, Ha Trinh, Newman Cheng, Joshua
Bradley, Alex Chao, Apurva Mody, Steven Truitt,
and Jonathan Larson. 2024. From local to global: A
graph rag approach to query-focused summarization.
_arXiv preprint arXiv:2404.16130._
Shahul Es, Jithin James, Luis Espinosa-Anke, and
Steven Schockaert. 2023. Ragas: Automated evaluation of retrieval augmented generation. _arXiv_
_preprint arXiv:2309.15217._
Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang,
Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing
Li. 2024. A survey on rag meeting llms: Towards
retrieval-augmented large language models. In Pro_ceedings of the 30th ACM SIGKDD Conference on_
_Knowledge Discovery and Data Mining, pages 6491–_
6501.
Paulo Finardi, Leonardo Avila, Rodrigo Castaldoni, Pedro Gengo, Celio Larcher, Marcos Piau, Pablo Costa,
and Vinicius Caridá. 2024. The chronicles of rag:
The retriever, the chunk and the generator. arXiv
_preprint arXiv:2401.07883._
Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia,
Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen
Wang. 2023. Retrieval-augmented generation for
large language models: A survey. arXiv preprint
_arXiv:2312.10997._
Haoyu Han, Yu Wang, Harry Shomer, Kai Guo, Jiayuan
Ding, Yongjia Lei, Mahantesh Halappanavar, Ryan A
Rossi, Subhabrata Mukherjee, Xianfeng Tang, et al.
2024. Retrieval-augmented generation with graphs
(graphrag). arXiv preprint arXiv:2501.00309.
Haoyu Han, Yaochen Xie, Hui Liu, Xianfeng Tang,
Sreyashi Nag, William Headden, Yang Li, Chen Luo,
Shuiwang Ji, Qi He, et al. 2025. Reasoning with
graphs: Structuring implicit knowledge to enhance
llms reasoning. arXiv preprint arXiv:2501.07845.
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong,
Zhangyin Feng, Haotian Wang, Qianglong Chen,
Weihua Peng, Xiaocheng Feng, Bing Qin, et al. 2023.
A survey on hallucination in large language models:
Principles, taxonomy, challenges, and open questions.
_arXiv preprint arXiv:2311.05232._
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas
Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard
Grave. 2023. Atlas: Few-shot learning with retrieval
augmented language models. Journal of Machine
_Learning Research, 24(251):1–43._
Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing
Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang,
Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint
_arXiv:2305.06983._
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
Wen-tau Yih. 2020. Dense passage retrieval for
open-domain question answering. arXiv preprint
_arXiv:2004.04906._
Jiho Kim, Sungjin Park, Yeonsu Kwon, Yohan Jo, James
Thorne, and Edward Choi. 2023. Factkg: Fact verification via reasoning on knowledge graphs. arXiv
_preprint arXiv:2305.06590._
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti,
Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark
for question answering research. Transactions of the
_Association for Computational Linguistics, 7:453–_
466.
Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu,
Yu Gu, Zhiyuan Liu, and Ge Yu. 2023. Structure-aware language model pretraining improves dense
retrieval on structured data. _arXiv preprint_
_arXiv:2305.19912._
Yongqi Li, Wenjie Li, and Liqiang Nie. 2022. Dynamic
graph reasoning for conversational open-domain
question answering. ACM Transactions on Infor_mation Systems (TOIS), 40(4):1–24._
Chin-Yew Lin. 2004. Rouge: A package for automatic
evaluation of summaries. In Text summarization
_branches out, pages 74–81._
Fangru Lin, Emanuele La Malfa, Valentin Hofmann,
Elle Michelle Yang, Anthony Cohn, and Janet B
Pierrehumbert. 2024. Graph-enhanced large language models in asynchronous plan reasoning. arXiv
_preprint arXiv:2402.02805._
[Jerry Liu. 2022. LlamaIndex.](https://doi.org/10.5281/zenodo.1234)
Yi Liu, Lianzhe Huang, Shicheng Li, Sishuo Chen, Hao
Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023.
Recall: A benchmark for llms robustness against
external counterfactual knowledge. arXiv preprint
_arXiv:2311.08147._
Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao,
and Nan Duan. 2023. Query rewriting for retrieval-augmented large language models. arXiv preprint
_arXiv:2305.14283._
Yao Ma and Jiliang Tang. 2021. _Deep learning on_
_graphs. Cambridge University Press._
Fatma Miladi, Valéry Psyché, and Daniel Lemire. 2024.
Leveraging gpt-4 for accuracy in education: A comparative study on retrieval-augmented generation in
moocs. In International Conference on Artificial
_Intelligence in Education, pages 427–434. Springer._
Zach Nussbaum, John X Morris, Brandon Duderstadt,
and Andriy Mulyar. 2024. Nomic embed: Training
a reproducible long context text embedder. arXiv
_preprint arXiv:2402.01613._
Boci Peng, Yun Zhu, Yongchao Liu, Xiaohe Bo,
Haizhou Shi, Chuntao Hong, Yan Zhang, and Siliang
Tang. 2024. Graph retrieval-augmented generation:
A survey. arXiv preprint arXiv:2408.08921.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay,
Amnon Shashua, Kevin Leyton-Brown, and Yoav
Shoham. 2023. In-context retrieval-augmented language models. Transactions of the Association for
_Computational Linguistics, 11:1316–1331._
Lin Shi, Chiyu Ma, Wenhua Liang, Weicheng Ma, and
Soroush Vosoughi. 2024. Judging the judges: A
systematic investigation of position bias in pairwise
comparative assessments by llms. arXiv preprint
_arXiv:2406.07791._
Yixuan Tang and Yi Yang. 2024. Multihop-rag: Benchmarking retrieval-augmented generation for multi-hop queries. arXiv preprint arXiv:2401.15391.
Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang,
Ziqing Hu, Fang Wang, Nitesh V Chawla, and Panpan Xu. 2024. Graph neural prompting with large
language models. In Proceedings of the AAAI Con_ference on Artificial Intelligence, volume 38, pages_
19080–19088.
Alex Wang, Richard Yuanzhe Pang, Angelica Chen,
Jason Phang, and Samuel R Bowman. 2022. Squality:
Building a long-document summarization dataset the
hard way. arXiv preprint arXiv:2205.11465.
Zhen Wang. 2022. Modern question answering
datasets and benchmarks: A survey. arXiv preprint
_arXiv:2206.15030._
Ziqi Wang, Hanlin Zhang, Xiner Li, Kuan-Hao Huang,
Chi Han, Shuiwang Ji, Sham M Kakade, Hao Peng,
and Heng Ji. 2024. Eliminating position bias of
language models: A mechanistic approach. arXiv
_preprint arXiv:2407.01100._
Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert
Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu,
Da Huang, Denny Zhou, et al. 2023. Larger language
models do in-context learning differently. _arXiv_
_preprint arXiv:2303.03846._
Nirmalie Wiratunga, Ramitha Abeyratne, Lasal Jayawardena, Kyle Martin, Stewart Massie, Ikechukwu Nkisi-Orji, Ruvan Weerasinghe, Anne Liret, and Bruno
Fleisch. 2024. Cbr-rag: case-based reasoning for
retrieval augmented generation in llms for legal question answering. In International Conference on Case_Based Reasoning, pages 445–460. Springer._
Yaozu Wu, Yankai Chen, Zhishuai Yin, Weiping Ding,
and Irwin King. 2023. A survey on graph embedding techniques for biomedical data: Methods and
applications. Information Fusion, 100:101909.
Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong
Long, Chengqi Zhang, and S Yu Philip. 2020. A comprehensive survey on graph neural networks. IEEE
_transactions on neural networks and learning sys-_
_tems, 32(1):4–24._
Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan,
Shirui Pan, and Huan Liu. 2021. Graph learning: A
survey. IEEE Transactions on Artificial Intelligence,
2(2):109–127.
Guangzhi Xiong, Qiao Jin, Zhiyong Lu, and
Aidong Zhang. 2024. Benchmarking retrieval-augmented generation for medicine. arXiv preprint
_arXiv:2402.13178._
Fangyuan Xu, Weijia Shi, and Eunsol Choi. 2023. Recomp: Improving retrieval-augmented lms with compression and selective augmentation. arXiv preprint
_arXiv:2310.04408._
Ran Xu, Wenqi Shi, Yue Yu, Yuchen Zhuang, Bowen
Jin, May D Wang, Joyce C Ho, and Carl Yang. 2024.
Ram-ehr: Retrieval augmentation meets clinical predictions on electronic health records. arXiv preprint
_arXiv:2403.00815._
Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, and Zhen-Hua Ling. 2024. Corrective retrieval augmented generation. arXiv preprint arXiv:2401.15884.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.

Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and knowledge graphs for question answering. arXiv preprint arXiv:2104.06378.

Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. Evaluation of retrieval-augmented generation: A survey. In CCF Conference on Big Data, pages 102–120. Springer.

Zichun Yu, Chenyan Xiong, Shi Yu, and Zhiyuan Liu. 2023. Augmentation-adapted retriever improves generalization of language models as generic plug-in. arXiv preprint arXiv:2305.17331.

Boyu Zhang, Hongyang Yang, Tianyu Zhou, Muhammad Ali Babar, and Xiao-Yang Liu. 2023. Enhancing financial sentiment analysis via retrieval augmented large language models. In Proceedings of the Fourth ACM International Conference on AI in Finance, pages 349–356.

Haozhen Zhang, Tao Feng, and Jiaxuan You. 2024. Graph of records: Boosting retrieval augmented generation for long-context summarization with graphs. arXiv preprint arXiv:2410.11001.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.

Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H Chi, Quoc V Le, and Denny Zhou. 2023a. Take a step back: Evoking reasoning via abstraction in large language models. arXiv preprint arXiv:2310.06117.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023b. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.

Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. QMSum: A new benchmark for query-based multi-domain meeting summarization. arXiv preprint arXiv:2104.05938.

Yijie Zhou, Kejian Shi, Wencai Zhang, Yixin Liu, Yilun Zhao, and Arman Cohan. 2023. ODSum: New benchmarks for open domain multi-document summarization. arXiv preprint arXiv:2309.08960.
### A Appendix
**A.1** **Dataset**
In this section, we introduce the datasets used in the question answering and query-based summarization tasks.
**A.1.1** **Question Answering**
In the QA tasks, we use the following four widely used datasets:
- Natural Questions (NQ) (Kwiatkowski et al., 2019): The NQ dataset is a widely used benchmark
for evaluating open-domain question answering systems. Introduced by Google, it consists of real
user queries from Google Search with corresponding answers extracted from Wikipedia. Since it
primarily contains single-hop questions, we use NQ as the representative dataset for single-hop
QA. We treat NQ as a single-document QA task, where multiple questions are associated with each
document. Accordingly, we build a separate RAG system for each document in the dataset.
- HotpotQA (Yang et al., 2018): HotpotQA is a widely used multi-hop question answering dataset that provides
10 paragraphs per question. The dataset includes varying difficulty levels, with easier questions
often solvable by LLMs. To ensure a more challenging evaluation, we randomly selected 1,000 hard
bridging questions from the development set of HotpotQA. Additionally, we treat HotpotQA as a
multi-document QA task and build a single RAG system to handle all questions.
- MultiHop-RAG (Tang and Yang, 2024): MultiHop-RAG is a QA dataset designed to evaluate
retrieval and reasoning across multiple documents with metadata in RAG pipelines. Constructed
from English news articles, it contains 2,556 queries, with supporting evidence distributed across 2
to 4 documents. The dataset includes four query types: Inference queries, which synthesize claims
about a bridge entity to identify it; Comparison queries, which compare similarities or differences
and typically yield "yes" or "no" answers; Temporal queries, which examine event ordering with
answers like "before" or "after"; and Null queries, where no answer can be derived from the retrieved
documents. It is also a multi-document QA task.
- NovelQA: NovelQA is a benchmark designed to evaluate the long-text understanding and retrieval abilities of LLMs using manually curated questions about English novels exceeding 50,000 words. The dataset includes queries that focus on minor details or require cross-chapter reasoning, making them inherently challenging for LLMs. It covers various query types such as details, multi-hop, single-hop, character, meaning, plot, relation, setting, span, and times. Key challenges highlighted by NovelQA include grasping abstract meanings (meaning questions), understanding nuanced relationships (relation questions), and tracking temporal sequences and spatial extents (span and times questions), emphasizing the difficulty of maintaining and applying contextual information across long narratives. We use it as a single-document QA task.
**A.1.2** **Query-based Summarization**
In the Query-based Summarization tasks, we adopt the following four widely used datasets:
- SQuALITY (Wang et al., 2022): SQuALITY (Summary-format QUestion Answering with Long
Input Texts) is a question-focused, long-document, multi-reference summarization dataset. It consists
of short stories from Project Gutenberg, each ranging from 4,000 to 6,000 words. Each story is paired
with five questions, and each question has four reference summaries written by Upwork writers and
NYU undergraduates. SQuALITY is designed as a single-document summarization task, making it a
valuable benchmark for evaluating summarization models on long-form content.
- QMSum (Zhong et al., 2021): QMSum is a human-annotated benchmark for query-based, multi-domain meeting summarization, containing 1,808 query-summary pairs from 232 meetings across multiple domains. We use QMSum as a single-document summarization task in our evaluation.
- ODSum (Zhou et al., 2023): The ODSum dataset is designed to evaluate modern summarization
models in multi-document contexts and consists of two subsets: ODSum-story and ODSum-meeting.
ODSum-story is derived from the SQuALITY dataset, while ODSum-meeting is constructed from
QMSum. We use both ODSum-story and ODSum-meeting for the multi-document summarization
task in our evaluation.
**A.2** **More results on NovelQA dataset**
In this section, we present the results for the NovelQA dataset that were omitted from the main sections. These include the performance of KG-GraphRAG (Triplets) with LLaMA 3.1-8B (Table 6), RAG with LLaMA 3.1-70B (Table 7), KG-GraphRAG (Triplets) with LLaMA 3.1-70B (Table 8), KG-GraphRAG (Triplets+Text) with LLaMA 3.1-70B (Table 9), Community-GraphRAG (Local) with LLaMA 3.1-70B (Table 10), and Community-GraphRAG (Global) with LLaMA 3.1-70B (Table 11).
Table 6: The performance of KG-GraphRAG (Triplets) with Llama 3.1-8B model on NovelQA dataset.
| KG-GraphRAG (Triplets) | character | meaning | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 31.25 | 17.65 | 41.67 | 50.56 | 38.46 | 64 | 26.47 | 32.89 |
| sh | 35.53 | 45.71 | 30.54 | 62.5 | 27.84 | - | - | 33.75 |
| dtl | 31.43 | 24.72 | 35.71 | 17.86 | 27.03 | - | - | 27.37 |
| avg | 33.7 | 29.81 | 32.63 | 44 | 28.57 | 64 | 26.47 | 31.88 |
Table 7: The performance of RAG with Llama 3.1-70B model on NovelQA dataset.
| RAG | character | meaning | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 64.58 | 82.35 | 77.78 | 69.66 | 84.62 | 36 | 36.63 | 48.5 |
| sh | 70.39 | 70 | 76.57 | 75 | 83.51 | - | - | 75.27 |
| dtl | 60 | 51.12 | 76.79 | 67.86 | 83.78 | - | - | 61.25 |
| avg | 66.67 | 58.11 | 76.74 | 69.6 | 83.67 | 36 | 36.63 | 61.42 |
Table 8: The performance of KG-GraphRAG (Triplets) with Llama 3.1-70B model on NovelQA dataset.
| KG-GraphRAG (Triplets) | character | meaning | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 50 | 76.47 | 75 | 43.82 | 76.92 | 24 | 22.46 | 33.72 |
| sh | 52.63 | 62.86 | 55.23 | 12.5 | 50.52 | - | - | 54.06 |
| dtl | 35.71 | 26.97 | 39.29 | 53.57 | 37.84 | - | - | 33.6 |
| avg | 47.78 | 39.62 | 54.68 | 44 | 49.66 | 24 | 22.46 | 41.18 |
Table 9: The performance of KG-GraphRAG (Triplets+Text) with Llama 3.1-70B model on NovelQA dataset.
| KG-GraphRAG (Triplets+Text) | character | meaning | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 56.25 | 58.82 | 63.89 | 51.69 | 84.62 | 24 | 21.39 | 33.72 |
| sh | 51.97 | 61.43 | 55.65 | 50 | 50.52 | - | - | 54.42 |
| dtl | 34.29 | 25.28 | 41.07 | 50 | 37.84 | - | - | 32.52 |
| avg | 48.15 | 36.98 | 54.08 | 51.2 | 50.34 | 24 | 21.39 | 41.05 |
**A.3** **RAG vs. GraphRAG Selection**
We classify QA queries into Fact-based and Reasoning-based queries. Fact-based queries are processed
using RAG, while Reasoning-based queries are handled by GraphRAG. The Query Classification prompt
is shown in Figure 5.
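Below is a minimal sketch of this routing strategy; `llm`, `rag_answer`, and `graphrag_answer` are hypothetical callables standing in for the classifier model and the two pipelines, and the condensed prompt merely abbreviates the full prompt in Figure 5.

```python
# Minimal sketch of the RAG vs. GraphRAG routing in A.3 (hypothetical helpers).
CLASSIFY_PROMPT = """Classify the query as FACT or REASONING.
Fact-based: answerable by a direct lookup of a knowledge source.
Reasoning-based: requires cross-referencing sources or multi-step reasoning.

Query: {query}
Label:"""

def route(query, llm, rag_answer, graphrag_answer):
    label = llm(CLASSIFY_PROMPT.format(query=query)).strip().upper()
    if label.startswith("FACT"):
        return rag_answer(query)    # fact-based query -> RAG
    return graphrag_answer(query)   # reasoning-based query -> GraphRAG
```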
Table 10: The performance of Community-GraphRAG (Local) with Llama 3.1-70B model on NovelQA dataset.
| Community-GraphRAG (Local) | character | meaning | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 77.08 | 70.59 | 63.89 | 77.53 | 92.31 | 28 | 32.35 | 46.68 |
| sh | 68.42 | 71.43 | 74.9 | 62.5 | 74.23 | - | - | 72.44 |
| dtl | 55.71 | 37.08 | 69.64 | 64.29 | 75.68 | - | - | 51.49 |
| avg | 66.67 | 48.3 | 72.81 | 73.6 | 76.19 | 28 | 32.35 | 57.32 |
Table 11: The performance of Community-GraphRAG (Global) with Llama 3.1-70B model on NovelQA dataset.
| Community-GraphRAG (Global) | character | meaning | plot | relat | settg | span | times | avg |
|---|---|---|---|---|---|---|---|---|
| mh | 47.92 | 58.82 | 55.56 | 57.3 | 61.54 | 16 | 35.83 | 41.53 |
| sh | 42.76 | 42.86 | 54.39 | 25 | 40.21 | - | - | 47 |
| dtl | 24.29 | 22.47 | 32.14 | 50 | 35.14 | - | - | 27.64 |
| avg | 38.89 | 30.19 | 50.76 | 53.6 | 40.82 | 16 | 35.83 | 40.21 |
**Prompt for Query Classification**
System Prompt: Classifying Queries into Fact-Based and Reasoning-Based Categories
You are an AI model tasked with classifying queries into one of two categories based on their
complexity and reasoning requirements.
**Category Definitions**
1. Fact-Based Queries
- The answer can be directly retrieved from a knowledge source or requires details.
- The query does not require multi-step reasoning, inference, or cross-referencing multiple sources.
2. Reasoning-Based Queries
- The answer cannot be found in a single lookup and requires cross-referencing multiple sources,
logical inference, or multi-step reasoning.
**Examples**
**Fact-Based Queries**
{{ Fact-Based Queries Examples }}
**Reasoning-Based Queries**
{{ Reasoning-Based Queries Examples }}
Figure 5: Prompt for Query Classification.
**A.4** **Query-based Summarization Results with Llama3.1-70B model**
In this section, we present the results for Query-based Summarization tasks using the LLaMA 3.1-70B
model. The results for single-document summarization are shown in Table 12, while the results for
multi-document summarization are provided in Table 13.
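For reference, the following sketch shows how ROUGE-2 and BERTScore precision/recall/F1 could be computed for a single prediction with the `rouge-score` and `bert-score` packages; the paper does not specify its exact evaluation code, so treat this only as an illustration of the metrics.

```python
# Illustrative metric computation (assumes: pip install rouge-score bert-score).
from rouge_score import rouge_scorer
from bert_score import score as bert_score

def evaluate_summary(prediction: str, reference: str) -> dict:
    scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
    r2 = scorer.score(reference, prediction)["rouge2"]  # Score(precision, recall, fmeasure)
    p, r, f1 = bert_score([prediction], [reference], lang="en")
    return {
        "rouge2_prf": (r2.precision, r2.recall, r2.fmeasure),
        "bertscore_prf": (p.item(), r.item(), f1.item()),
    }
```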
Table 12: The performance of query-based single document summarization task using Llama3.1-70B.
| Method | SQuALITY ROUGE-2 (P / R / F1) | SQuALITY BERTScore (P / R / F1) | QMSum ROUGE-2 (P / R / F1) | QMSum BERTScore (P / R / F1) |
|---|---|---|---|---|
| RAG | 11.85 / 14.24 / 11.00 | 85.96 / 85.76 / 85.67 | 10.42 / 10.00 / 9.53 | 86.14 / 85.92 / 86.02 |
| KG-GraphRAG (Triplets only) | 8.53 / 10.28 / 7.46 | 84.13 / 83.97 / 83.89 | 10.62 / 6.25 / 7.48 | 83.20 / 84.72 / 83.94 |
| KG-GraphRAG (Triplets+Text) | 6.57 / 10.14 / 6.00 | 80.52 / 82.23 / 81.07 | 8.64 / 7.85 / 7.29 | 84.10 / 84.55 / 84.31 |
| Community-GraphRAG (Local) | 12.54 / 10.31 / 9.61 | 84.50 / 85.33 / 84.71 | 13.69 / 7.43 / 9.14 | 84.09 / 85.85 / 84.95 |
| Community-GraphRAG (Global) | 8.99 / 4.78 / 5.60 | 81.64 / 83.64 / 82.44 | 10.97 / 4.40 / 6.01 | 81.93 / 84.67 / 83.26 |
| Combine | 13.59 / 11.32 / 10.55 | 84.88 / 85.76 / 85.12 | 13.16 / 8.67 / 9.93 | 85.18 / 86.21 / 85.69 |
Table 13: The performance of query-based multiple document summarization task using Llama3.1-70B.
| Method | ODSum-story ROUGE-2 (P / R / F1) | ODSum-story BERTScore (P / R / F1) | ODSum-meeting ROUGE-2 (P / R / F1) | ODSum-meeting BERTScore (P / R / F1) |
|---|---|---|---|---|
| RAG | 15.60 / 9.98 / 11.09 | 74.80 / 81.29 / 77.89 | 18.81 / 6.41 / 8.97 | 83.56 / 85.16 / 84.34 |
| KG-GraphRAG (Triplets only) | 10.08 / 9.12 / 8.48 | 75.71 / 81.93 / 78.66 | 11.52 / 3.41 / 4.79 | 81.19 / 83.07 / 82.11 |
| KG-GraphRAG (Triplets+Text) | 10.98 / 16.67 / 11.42 | 76.74 / 81.92 / 79.21 | 13.09 / 6.31 / 7.70 | 84.07 / 84.24 / 84.14 |
| Community-GraphRAG (Local) | 14.20 / 11.34 / 11.25 | 75.44 / 81.81 / 78.46 | 16.17 / 7.87 / 9.23 | 84.17 / 84.85 / 84.49 |
| Community-GraphRAG (Global) | 10.46 / 6.30 / 7.08 | 74.63 / 81.24 / 77.77 | 10.65 / 1.99 / 3.28 | 79.78 / 82.53 / 81.12 |
| Combine | 14.76 / 12.17 / 11.72 | 75.39 / 81.75 / 78.41 | 17.57 / 8.64 / 10.34 | 84.51 / 85.14 / 84.81 |
**A.5** **The LLM-as-a-Judge Prompt**
The LLM-as-a-Judge prompt can be found in Figure 6.
**LLM-as-a-Judge Prompt**
You are an expert evaluator assessing the quality of responses in a query-based summarization task.
Below is a query, followed by two LLM-generated summarization answers. Your task is to evaluate
the best answer based on the given criteria. For each aspect, select the model that performs better.
**Query**
{{query}}
**Answers Section**
**The Answer of Model 1:**
{{answer 1}}
**The Answer of Model 2:**
{{answer 2}}
**Evaluation Criteria Assess each LLM-generated answer independently based on the following**
two aspects:
1. Comprehensiveness
- Does the answer fully address the query and include all relevant information?
- A comprehensive answer should cover all key points, ensuring that no important details are
missing.
- It should present a well-rounded view, incorporating relevant context when necessary.
- The level of detail should be sufficient to fully inform the reader without unnecessary omission
or excessive brevity.
2. Global Diversity
- Does the answer provide a broad and globally inclusive perspective?
- A globally diverse response should avoid narrow or region-specific biases and instead consider
multiple viewpoints.
- The response should be accessible and relevant to a wide, international audience rather than
assuming familiarity with specific local contexts.
Figure 6: LLM-as-a-Judge Prompt.
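A minimal sketch of how such order-swapped pairwise judging can be run (cf. "Order 1"/"Order 2" in Figure 7): each pair is judged twice with the answer order flipped, so a win only counts if it survives the position swap. `llm_judge` is a hypothetical callable that fills the Figure 6 prompt and returns "Model 1" or "Model 2" for a given aspect.

```python
# Order-swapped pairwise judging to control for position bias (hypothetical llm_judge).
def judge_with_order_swap(query, rag_answer, graphrag_answer, llm_judge, aspect):
    wins = []
    orders = [("RAG", rag_answer, "GraphRAG", graphrag_answer),
              ("GraphRAG", graphrag_answer, "RAG", rag_answer)]
    for name1, ans1, name2, ans2 in orders:
        verdict = llm_judge(query=query, answer_1=ans1, answer_2=ans2, aspect=aspect)
        wins.append(name1 if verdict == "Model 1" else name2)
    # Only count a win if both orders agree; otherwise report a tie.
    return wins[0] if wins[0] == wins[1] else "tie"
```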
**A.6** **The LLM-as-a-Judge Results on more datasets**
In the main section, we present LLM-as-a-Judge results for the QMSum and ODSum-story datasets. Here,
we provide additional results on the SQuALITY and ODSum-meeting datasets, as shown in Figure 7.
(a) SQuALITY Local
(b) SQuALITY Global
(c) ODSum-meeting Local
(d) ODSum-meeting Global
Figure 7: Comparison of LLM-as-a-Judge evaluations for RAG and GraphRAG. "Local" refers to the evaluation of
RAG vs. GraphRAG-Local, while "Global" refers to RAG vs. GraphRAG-Global. "Order 1" corresponds to the
prompt where RAG result is presented before GraphRAG, whereas "Order 2" corresponds to the reversed order.
## Retrieval-Augmented Generation with Graphs (GraphRAG)
**Haoyu Han[1][∗], Yu Wang[2][∗], Harry Shomer[1], Kai Guo[1], Jiayuan Ding[5], Yongjia Lei[2],**
**Mahantesh Halappanavar[3], Ryan A. Rossi[4], Subhabrata Mukherjee[5], Xianfeng Tang[6], Qi He[6],**
**Zhigang Hua[7], Bo Long[7], Tong Zhao[8], Neil Shah[8], Amin Javari[9], Yinglong Xia[7], Jiliang Tang[1]**
1Michigan State University, 2University of Oregon, 3Pacific Northwest National Laboratory
4Adobe Research, 5Hippocratic AI, 6Amazon, 7Meta, 8Snap Inc., 9The Home Depot
```
{hanhaoy1, shomerha, guokai1, tangjili}@msu.edu,
{yuwang, yongjia}@uoregon.edu, hala@pnnl.gov, ryarossi@gmail.com,
{jiayuan, subho}@hippocraticai.com, {xianft, qih}@amazon.com,
{zhua, bolong, yxia}@meta.com, {tong, nshah}@snap.com, amin_javari@homedepot.com
```
### Abstract
Retrieval-augmented generation (RAG) is a powerful technique that enhances downstream task execution by retrieving additional information, such as knowledge,
skills, and tools from external sources. Graph, by its intrinsic "nodes connected by
edges" nature, encodes massive heterogeneous and relational information, making
it a golden resource for RAG in tremendous real-world applications. As a result,
we have recently witnessed increasing attention on equipping RAG with Graph, i.e.,
GraphRAG. However, unlike conventional RAG, where the retriever, generator, and
external data sources can be uniformly designed in the neural-embedding space, the
uniqueness of graph-structured data, such as diverse-formatted and domain-specific
relational knowledge, poses unique and significant challenges when designing
GraphRAG for different domains. Given the broad applicability, the associated
design challenges, and the recent surge in GraphRAG, a systematic and up-to-date
survey of its key concepts and techniques is urgently desired. Following this motivation, we present a comprehensive and up-to-date survey on GraphRAG. Our
survey first proposes a holistic GraphRAG framework by defining its key components, including query processor, retriever, organizer, generator, and data source.
Furthermore, recognizing that graphs in different domains exhibit distinct relational
patterns and require dedicated designs, we review GraphRAG techniques uniquely
tailored to each domain. Finally, we discuss research challenges and brainstorm
directions to inspire cross-disciplinary opportunities. Our survey repository is
[publicly maintained at https://github.com/Graph-RAG/GraphRAG/.](https://github.com/Graph-RAG/GraphRAG/)
### 1 Introduction
Retrieval-Augmented Generation (RAG), as a powerful technique to improve downstream tasks by
retrieving additional information from external data sources, has been successfully applied to various
real-world applications [87, 120, 514, 551]. In RAG frameworks, retrievers search for additional
knowledge, skills, and tools based on user-defined queries or task instructions. The retrieved content
is then refined by an organizer and seamlessly integrated with the original query or instruction, which
is further fed into the generator to produce the final answer. For example, when conducting question-answering (QA) tasks, the classic "Retriever-then-Reader" frameworks [191, 196, 468, 562] retrieve external factual knowledge to improve answer faithfulness, which significantly benefits societal good and mitigates risks in high-stakes scenarios (e.g., medical, legal, financial, and education
∗Equal contribution.
consultation [467, 472, 515]). Moreover, recent advancements in large language models (LLMs)
have further underscored the power of RAG in enhancing the social responsibility of LLMs, such as
mitigating hallucinations [397], enhancing interpretability and transparency [203], enabling dynamic
adaptability [360, 419], reducing privacy risks [512, 513], ensuring reliability/robust responses [105,
460], and promoting fair treatment [362].
Building on the unprecedented success of RAG and further considering the ubiquity of graphs in real-world applications [545], recent research has explored the integration of RAG with graph-structured
data. Unlike textual or visual data, graph-structured data encodes heterogeneous and relational
information through its intrinsic "nodes connected by edges" nature. For example, individuals
connected by social relationships of social networks usually exhibit homophily behaviors [291],
sequential decision-making steps in plans follow causal dependencies [454], and atoms belonging to the
same functional group within a molecule possess unique structural properties [103, 508]. Designing
the RAG that utilizes relational information requires adapting its core components, such as the retriever
and generator, to seamlessly integrate graph-structured data, resulting in GraphRAG. Different from
RAG, which predominantly uses semantic/lexical similarity search [104, 120], GraphRAG offers
unique advantages in capturing relational knowledge by leveraging graph-based machine learning
(e.g., Graph Neural Networks (GNNs)) and graph/network analysis techniques (e.g., Graph Traversal
Search and Community Detection [98, 428]). For example, considering the query “What drugs are
used to treat epithelioid sarcoma and also affect the EZH2 gene product?" [452], blindly executing
the existing BM25 or embedding-based search that relies solely on semantic/lexical similarity ignores
relational knowledge encoded in graph structure. In contrast, some GraphRAG methods traverse the
graph along the relational path “Disease (Epithelioid Sarcoma) → [indication] → Drug ← [target] ← Gene/Protein (EZH2 gene product)” to retrieve neighbors of Epithelioid Sarcoma following the relation [indication], neighbors of the gene EZH2 following the relation [target], and find their intersected
drug [186, 271, 428]. Moreover, some domains involve entities with extremely complex geometry that
require dedicated model design to characterize. For example, 3D structures in molecular graphs [52,
445] and hierarchical tree structures commonly found in product taxonomies (e.g., on Amazon [529]),
in document sections (e.g., when using Adobe Acrobat [537]), and social networks (e.g., at Snap [277])
require carefully designed graph encoders (or, more precisely, geometric encoders) with appropriate
expressiveness to capture structural nuances [277, 527]. Simply verbalizing node texts and feeding
them into LLMs cannot express complex geometric information and becomes infeasible given the
exponentially growing textual descriptions as neighborhood layers expand.
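To make this concrete, the following sketch (with toy triplets, not a real knowledge graph) intersects the [indication] neighbors of the disease node with the [target] neighbors of the gene node, mirroring the relational path above.

```python
# Toy relational-path retrieval: Disease -[indication]-> Drug <-[target]- Gene.
def neighbors(kg, node, relation):
    """Nodes connected to `node` by `relation`, in either direction."""
    return ({t for h, r, t in kg if h == node and r == relation} |
            {h for h, r, t in kg if t == node and r == relation})

kg = {  # illustrative (head, relation, tail) triplets
    ("Epithelioid Sarcoma", "indication", "Tazemetostat"),
    ("Tazemetostat", "target", "EZH2"),
    ("Doxorubicin", "target", "TOP2A"),
}
drugs = (neighbors(kg, "Epithelioid Sarcoma", "indication")
         & neighbors(kg, "EZH2", "target"))
print(drugs)  # {'Tazemetostat'}
```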
Despite the above advantages of GraphRAGs over RAGs, designing appropriate GraphRAGs faces
unprecedented challenges due to the following differences in graph-structured data:
- Difference 1 - Unified versus (vs.) Diverse-Formatted Information: Unlike conventional
RAG, where semantic information can be uniformly represented as a 2D grid of image patches
or a 1D sequence of textual corpora, graph-structured data often encompass diverse formats
and are stored in heterogeneous sources [4, 26, 434]. For example, document graphs embed
entities as sentence chunks [98, 428], knowledge graphs store graph information as triplets or
paths [38], and molecule graphs consist of higher-order structures (e.g., cellular complexes) [26],
as shown in Figure 1. Some graph data may even be multimodal (e.g., text-attributed graphs
include both structural and textual attributes, and scene graphs combine structures and vision).
Consequently, this diversity necessitates different RAG designs. For retrievers, conventional RAG
assumes the target information is indexed in an image or text corpus, which can be uniformly
represented as vector embeddings and enable one-size-fits-all embedding-based retrieval. However,
retrievers for GraphRAG must consider the concrete format and source of the desired information,
making the one-size-fits-all design impractical. When dealing with knowledge graph questionanswering, information of nodes, edges, or subgraphs is usually fetched by graph search before
embedding matching-based retrieval [419, 492]. This fetching operation is usually conducted
by identifying relevant nodes/edges/subgraphs via entity linking, relational matching, and graph
search algorithms (e.g., Breadth-First Search, Depth-First Search, Monte Carlo Tree Search, and A*
search) [395, 419, 570], which is unachievable if solely through deep learning-based embedding
similarity search. Furthermore, the design of the retriever should ensure sufficient geometric
expressiveness to capture structural nuances. For instance, when retrieving APIs from a plan graph
to accomplish specific goals [355, 356, 454], it is essential to equip the retriever with directional
awareness. This enables the execution of APIs with resource dependencies in the correct order,
preventing conflicts and avoiding invalid operations. Similarly, designing expressive retrievers
capable of differentiating high-order subgraph structures, such as 6-cycle benzene vs.
4-star methane, and 3-star T-junction vs. 4-square road, is essential in drug design for disease
treatment [139] and road construction for city planning [209]. Beyond the retriever, the generator
also requires specialized designs. When retrieved content includes complex graph structures
with textual attributes, simply verbalizing the text of the subgraph and concatenating it into a
prompt may obscure critical structural information. In these cases, encoding the graph with
graph encoders such as GNNs before integrating it into generation can help preserve structural
nuances [134, 244, 434, 443, 456].
- Difference 2 - Independent vs. Interdependent Information: In conventional RAG, information
is stored and utilized independently. For example, documents are split into chunks, such as
individual sentences, paragraphs, or a fixed number of tokens, based on the document context
and downstream task [21, 562]. Each chunk is then indexed and stored independently in a vector
database. This independence prevents the retrieval from capturing chunk relations, which hinders
performance on tasks requiring multi-hop reasoning and long-form planning. However, GraphRAG
stores chunks as interconnected nodes with edges denoting their relations, which can benefit
retrieval, organization, and generation. For retrieval, these edges enable multi-hop traversal to capture other chunks that share a logical connection with already-retrieved chunks (see the expansion sketch after this list). Furthermore,
the retrieved content can be organized not only by their semantic meaning (e.g., reranking [43,
172, 256]) but also their structural relations (e.g., graph pruning [377, 431]). During the generation
phase, squeezing interdependency (e.g., positional encoding [361, 549]) to the generator would
encode richer structural signals into the generated content.
- Difference 3 - Domain Invariance vs. Domain-specific Information: The relations in graph-structured data are domain-specific. Unlike images and texts, where different domains often share
transferable semantics [254, 286], such as textures and grains in images or vocabulary defined by
the tokenizer in texts, graph-structured data lacks explicit transferable units. This shared basis
in images and texts lays the foundation for designing encoders with geometric invariance and
enables the well-known data-scaling law. However, for graph-structured data, the underlying data
generation process governing the generated graphs varies significantly across different domains.
This variability makes the relational information highly domain-specific, and it is nearly impossible
to design a unified GraphRAG applicable to different domains. For example, when predicting
the topic of an academic paper, the widely accepted homophily assumption suggests retrieving
references from the paper to inform its topic prediction [563]. However, this homophily assumption
is not suitable when classifying the role of an airport in a flight network, where hubs are often
sparsely distributed across a country with no direct connections [68]. Moreover, even within the
same graph from the same domain, different tasks may necessitate distinct GraphRAG designs.
For example, when designing an automatic email completion system to optimize communication
efficiency in a company, both content relevance and tone coherence should be considered [429].
To ensure the content relevance of the generated emails, one might assume that close emails (i.e.,
emails from the same conversation thread) share similar content and thus should be retrieved for
reference. However, to maintain tone coherence, emails from staff with similar roles might be
retrieved, even if they do not share close social relations (e.g., between subordinates and superiors)
but instead hold similar structural roles within the company (e.g., as managers of different teams).
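Returning to the multi-hop traversal noted under Difference 2, a minimal sketch is shown below: seed chunks obtained by an ordinary embedding search (not shown) are expanded along chunk-graph edges within a hop budget. The `edges` adjacency map is an assumed pre-built chunk graph, not an interface from any surveyed system.

```python
# Minimal BFS expansion of embedding-retrieved seed chunks over a chunk graph.
from collections import deque

def expand_chunks(seed_ids, edges, max_hops=2):
    """edges: dict mapping a chunk id to the ids of related chunks."""
    seen = set(seed_ids)
    frontier = deque((s, 0) for s in seed_ids)
    while frontier:
        chunk, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in edges.get(chunk, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return seen

# e.g. expand_chunks({"c1"}, {"c1": ["c2"], "c2": ["c3"]}) -> {"c1", "c2", "c3"}
```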
Despite the above differences that have driven extensive research in GraphRAG, the current research
landscape in this field remains fragmented, with significant variation in concepts, techniques, and
datasets across studies. Moreover, current GraphRAG research primarily focuses on knowledge and
document graphs as surveyed in Figure 2, often overlooking broader applications in other domains
like infrastructure graphs. This imbalance not only hampers the advancement of GraphRAG but
also risks creating a "bubble effect" that restricts the scope of future exploration. To address these
challenges, we present a comprehensive and up-to-date review of GraphRAG, aiming to unify the
GraphRAG framework from the global perspective while also specializing its unique design for each
domain from the local perspective. The key contributions of this survey are as follows:
- A Holistic Framework of GraphRAG: We propose a holistic framework of GraphRAG consisting
of five key components: query processor, retriever, organizer, generator, and graph data source.
Within each component, we review representative GraphRAG techniques.
- Specialization of GraphRAG in different domains: We categorize GraphRAG designs into 10
distinct domains based on their specific applications, including knowledge graph, document
graph, scientific graph, social graph, planning & reasoning graph, tabular graph,

infrastructure graph, biological graph, scene graph, and random graph. For each domain, we review their unique applications and specific graph construction methods. We then summarize the distinctive designs of each component within our proposed holistic GraphRAG framework and collect rich benchmark datasets and tool resources.

Figure 1: RAG works on text and images, which can be uniformly formatted as 1D sequences or 2D grids with no relational information. In contrast, GraphRAG works on graph-structured data, which encompasses diverse formats and includes domain-specific relational information.
- Challenges and Future Directions: We highlight the challenges of current GraphRAG research
and pinpoint future opportunities for further advancing GraphRAG into the new frontier.
Figure 2: Publications of GraphRAGs in different domains based on surveyed papers.

In the following, we highlight the differences between our survey and existing surveys. Despite the urgent need for a systematic overview of GraphRAG, most existing surveys focus on general RAG within the context of i.i.d. data [11, 120, 227, 551, 561]. Before the advent of LLMs, earlier surveys focused on textual RAGs [11, 227]. With the recent unprecedented success achieved by foundational models such as LLMs, various surveys have explored foundational-model-powered RAG in different modalities. Gao et al. [120] group existing RAG approaches into three categories (Naive, Advanced, and Modular RAGs), summarize three core techniques (Retrieval, Generation, and Augmentation), and review evaluation metrics. In parallel, Zhao et al. [551] review representative RAG systems according to their corresponding application and data modality. The survey [561] focuses on trustworthiness concerns and techniques of RAG. However, none of them has a dedicated focus on graph-structured data. To the best of
our knowledge, only one very recent study [319] has specifically surveyed RAG in the context of
graph-structured data. However, this work mainly focuses on reviewing techniques introduced by
graphs under the conventional RAG architecture without specializing in reviewing diverse relations
and technical designs for graphs across different domains. In contrast to its holistic review philosophy,
we recognize the inherent heterogeneity of graph-structured data and specialize our GraphRAG
review across different domains. Specifically, we uncover the fundamental task applications (when
to retrieve), graph construction methods and relational rationales (what to retrieve), and GraphRAG
techniques (how to retrieve) for each domain. In this way, our survey provides a comprehensive
overview of GraphRAG for information retrieval, data mining, and machine learning communities
and domain-specific insights that facilitate interdisciplinary research and industrial opportunities.
Our survey is structured as follows: Section 2 introduces the holistic framework of GraphRAG and
introduces representative techniques for its five key components. From Section 3 to 9, we delve
into specific domains, reviewing unique task applications, summarizing existing graph construction
methods that guide GraphRAG design for that domain, highlighting domain-specific techniques
for each of the five components within our proposed holistic framework, and presenting existing
GraphRAG resources (e.g., benchmark datasets and tools) used across different domains. Finally, we
discuss research challenges and opportunities in Section 10 and conclude our survey in Section 11.


Figure 3: A holistic framework of GraphRAG and representative techniques for its key components.
### 2 A Holistic Framework of GraphRAG
Based on existing literature on GraphRAG, we present a holistic framework of GraphRAG. Next, we
introduce the basic problem setting and notation used throughout the whole framework.
**2.1** **Problem Setting and Notations**
Following the general setting of RAG, given a graph-structured data source $\mathcal{G}$, the user-defined query $Q$ is first sent to the query processor $\Omega^{\text{Processor}}$ to obtain the pre-processed query $\hat{Q}$. After that, the retriever $\Omega^{\text{Retriever}}$ retrieves the content $C$ from the graph data source $\mathcal{G}$ based on $\hat{Q}$. Next, the retrieved content $C$ is refined by the organizer $\Omega^{\text{Organizer}}$ to formulate the refined content $\hat{C}$. Finally, the refined content $\hat{C}$ triggers the generator $\Omega^{\text{Generator}}$ to generate the final answer $A$. The above five components are summarized as follows:

- Query Processor $\Omega^{\text{Processor}}$: pre-process the given query, $\hat{Q} = \Omega^{\text{Processor}}(Q)$.
- Graph Data Source $\mathcal{G}$: information organized in a graph-structured format.
- Retriever $\Omega^{\text{Retriever}}$: retrieve the content $C = \Omega^{\text{Retriever}}(\hat{Q}, \mathcal{G})$ from $\mathcal{G}$ based on the query $\hat{Q}$.
- Organizer $\Omega^{\text{Organizer}}$: arrange and refine the retrieved content, $\hat{C} = \Omega^{\text{Organizer}}(\hat{Q}, C)$.
- Generator $\Omega^{\text{Generator}}$: generate the answer $A = \Omega^{\text{Generator}}(\hat{Q}, \hat{C})$ to the query $Q$.
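These definitions compose directly; the skeleton below is a structural sketch of the pipeline with each component as an abstract callable, illustrating the interfaces above rather than an implementation drawn from any surveyed system.

```python
# Structural sketch of the five-component GraphRAG pipeline.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class GraphRAGPipeline:
    processor: Callable[[Any], Any]       # Omega^Processor
    retriever: Callable[[Any, Any], Any]  # Omega^Retriever
    organizer: Callable[[Any, Any], Any]  # Omega^Organizer
    generator: Callable[[Any, Any], Any]  # Omega^Generator

    def answer(self, query: Any, graph: Any) -> Any:
        q_hat = self.processor(query)             # Q_hat = Omega^Processor(Q)
        content = self.retriever(q_hat, graph)    # C = Omega^Retriever(Q_hat, G)
        refined = self.organizer(q_hat, content)  # C_hat = Omega^Organizer(Q_hat, C)
        return self.generator(q_hat, refined)     # A = Omega^Generator(Q_hat, C_hat)
```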
Unlike sequence-structured textual data and grid-structured image data, graph-structured data encapsulates relational information. To effectively harness this relational information, the above five core components of GraphRAG require dedicated designs to handle graph-structured input/output and
support graph-based operations. For example, in the retriever component, conventional RAG in
the Natural Language Processing (NLP) utilizes sparse/dense encoders for index search [196, 468].
In contrast, GraphRAG employs graph traversal methods (e.g., entity linking and BFS/DFS) and
graph-based encoders (e.g., Graph Neural Networks (GNNs)) to produce embeddings for retrieval.
This motivates us to summarize key innovations and representative designs of GraphRAG for each of
the above five components under the holistic GraphRAG framework in the following.
**2.2** **Task Applications and Example Query Q**
Similar to the general RAG framework, where the text-formatted query $Q$ specifies the question context or the task instruction, the query $Q$ in GraphRAG can also be text. For example, in knowledge graph-based question answering, the query could be "What is the capital of China?" [272, 395]. In addition, the query can take other formats, such as SMILES strings
for molecular graphs [132], or could even be the combination of multiple formats, such as the scene
graph along with the text instruction [147]. Table 1 summarizes the common task applications and
exemplary queries used in each domain, as well as their representative references.
Table 1: Summary of Task Applications and Exemplary Queries for GraphRAG in each domain.

Figure 4: Existing techniques of the query processor $\Omega^{\text{Processor}}$ in GraphRAG.
Table 2: Differences of the query processor $\Omega^{\text{Processor}}$ between RAG and GraphRAG.
| Technique | RAG | GraphRAG |
|---|---|---|
| Entity Recognition | Extracting mentions in knowledge bases | Extracting mentioned nodes in graphs |
| Relational Extraction | Extracting textual relations | Extracting graph edge relations |
| Query Structuration | Structuring text query to SQL, SPARQL | Structuring text query to GQL |
| Query Decomposition | Decomposed queries are separate | Decomposed queries are logically related |
| Query Expansion | Expansion based on semantic knowledge | Expansion based on relational knowledge |
**2.3** **Query Processor $\Omega^{\text{Processor}}$**
Unlike RAG, where both queries and data sources are purely text-formatted, data sources used in
GraphRAG are graph-structured, which raises challenges in bridging text-formatted queries and
graph-structured data sources. For example, the information that connects the knowledge graph
and the query "Who is Justin Bieber’s brother?" is not a specific passage but instead the entity
"Justin Bieber" and the relation "brother of". Many techniques are proposed to correctly extract this
information from the query, including entity recognition, relational extraction, query structuration,
query decomposition, and query expansion. In the following, we first review each of these five query
processing techniques within the broader NLP domain, followed by a focused examination of their
unique adaptations for GraphRAG.
**2.3.1** **Named Entity Recognition**
Named Entity Recognition (NER) aims to identify mentions of entities from the text that belong to
predefined categories, such as persons, locations, or organizations, and it serves as a fundamental
component for numerous natural language applications [160, 199, 228, 293]. NER techniques can
be broadly categorized into four main approaches: (1) rule-based methods, which rely entirely on
handcrafted rules and require no annotated data; (2) unsupervised learning methods, which use
unsupervised algorithms without labeled training examples; (3) feature-based supervised learning
methods, which depend on supervised algorithms and careful feature engineering; and (4) deep
learning approaches, which automatically discover the needed representations via (un)supervised training of deep models. Recent LLMs fall into the category of deep learning approaches
and have demonstrated unprecedented success for NER. More details about these techniques and
their resources can be found in Li et al. [228].
Specifically, in the GraphRAG context, entity recognition primarily uses deep learning techniques
(e.g., EntityLinker [395, 493] and LLM-based extraction [186]) to identify entities in queries grounded
by nodes in the given graph data sources. This step is vital for applications such as knowledge graph-based question answering [395, 492, 493]. For example, given the question, "What is the best way to
guess the color of the eye of the baby?", NER extracts entities such as "baby", "eye", and "color",
which correspond to nodes in the knowledge graph and are treated as the seed nodes to initialize
the retrieval process thereafter [443, 347]. For more recent GraphRAG research, NER has evolved
beyond identifying the entity names but instead their structures. For example, Jin et al. [186] leverages
LLMs to recognize node types in the graph, which further guides the retriever to identify nodes that
match the recognized types for next-round exploration. For example, given the question "Who are the
authors of ‘Language Models are Unsupervised Multi-task Learners’?" the initially recognized entity
should not only be based on the semantic name "Language Models are Unsupervised Multi-task
Learners" but also be based on the type of that entity, which is the paper node in this case. Accurately
recognizing the names and structures of entities in GraphRAG reduces cascading errors and provides
a solid foundation for subsequent retrieval and generation steps.
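A minimal sketch of this grounding step, using naive fuzzy string matching in place of a learned entity linker (real systems typically use embedding-based linkers or LLM extraction, so this is only illustrative):

```python
# Ground NER mentions to graph nodes via fuzzy name matching (toy linker).
import difflib

def link_entities(mentions, node_names, cutoff=0.6):
    seeds = []
    for mention in mentions:
        match = difflib.get_close_matches(mention, node_names, n=1, cutoff=cutoff)
        if match:
            seeds.append(match[0])  # matched node becomes a retrieval seed
    return seeds

nodes = ["baby", "eye", "eye color", "genetics"]
print(link_entities(["baby", "eye", "color"], nodes))
# expected: ['baby', 'eye', 'eye color']
```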
**2.3.2** **Relational Extraction**
Similar to NER, relational extraction (RE) is a long-standing technique in NLP to identify relations
among entities and is widely applied to structured search, sentiment analysis, question answering,
summarization, and knowledge graph construction [47, 230, 303]. Recent advances in RE have been
largely driven by deep learning techniques, and they can be summarized into three perspectives: text
representation, context encoding, and triplet prediction, more details of which can be found in Pawar
et al. [317], Nasar et al. [303], Han et al. [141].
For GraphRAG, RE serves two key purposes: constructing graph-structured data sources (e.g.,
knowledge graphs) by extracting triplets and matching the relations mentioned in the query and the
graph data source to guide the graph search. For instance, given a query like "What is the capital of
China?", relational extraction identifies the relation "capital of" and searches for corresponding edges
via vector similarity in the knowledge graph, which guides the neighborhood selection and graph
traversal direction [119, 200, 272, 273].
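A small sketch of this relational-matching step: embed the extracted query relation and score it against the KG's edge types by cosine similarity. `embed` is a placeholder for any sentence-embedding model, not a specific library API.

```python
# Match an extracted query relation (e.g., "capital of") to KG edge types.
import numpy as np

def match_relation(query_relation, edge_types, embed):
    q = embed(query_relation)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(edge_types, key=lambda e: cosine(q, embed(e)))

# match_relation("capital of", ["capital", "located_in", "born_in"], embed)
```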
**2.3.3** **Query Structuration**
Query structuration transforms queries into formats tailored to specific data sources and tasks. It
often converts natural language queries into structured formats like SQL or SPARQL [181, 238]
to interact with relational databases. Recent advancements leverage pre-trained and fine-tuned
LLMs to generate structured queries from natural language input to query databases. For graph-structured data, graph query languages (GQLs) such as Cypher, GraphQL, and SPARQL have emerged, which enable complex interactions with property graph databases. Additionally, Jin et al. [186]
introduced a technique that decomposes complex queries into multiple structured operations, including
node retrieval, feature fetching, neighbor checks, and degree assessment, enhancing precision and
adaptability in querying.
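As an illustration of query structuration, the following sketch prompts an LLM (hypothetical `llm` callable) to emit Cypher against a toy property-graph schema; the schema and the expected query shape are illustrative only, not from the survey.

```python
# Text-to-Cypher structuration sketch (toy schema, hypothetical `llm`).
TO_CYPHER = """Graph schema: (:Person {{name}})-[:SIBLING_OF]->(:Person)
Translate the question into a single Cypher query.
Question: {question}
Cypher:"""

def to_cypher(question, llm):
    return llm(TO_CYPHER.format(question=question)).strip()

# For "Who is Justin Bieber's brother?" a well-formed output would look like:
# MATCH (:Person {name: 'Justin Bieber'})-[:SIBLING_OF]->(s:Person) RETURN s.name
```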
**2.3.4** **Query Decomposition**
Query decomposition [447] aims to split the input query into multiple distinct subqueries, which are
used to first retrieve sub-results and aggregate these sub-results together for the final results. In most
existing RAG and GraphRAG systems, decomposed queries usually possess explicit logical connections, enabling complex tasks that require multi-step reasoning and planning [248, 316, 355, 372, 477].
For example, a query like "Please generate an image where a girl is reading a book, and her pose is
the same as the boy in ‘example.jpg’ then describe the new image with your voice" involves multiple
subtasks [477], each of which would be completed by a specific sub-query. In addition, Park et al.
[316] enhance the decomposition of the query by building a question graph where each sub-query is
represented as a triplet within the graph. These graph-structured sub-queries effectively guide the
retriever/generator through multi-step promptings.
**2.3.5** **Query Expansion**
Query Expansion enriches a query by adding meaningful terms with similar significance [12], which
primarily addresses three challenges: (1) user-submitted queries are ambiguous and relate to multiple
topics; (2) queries may be too brief to fully capture user intent; and (3) users are often uncertain about
what they are seeking. Generally, it can be categorized into manual query expansion, automatic query
expansion, and interactive query expansion. More recently, LLM-based query expansion has become a prominent area due to the creativity of the generated content [54, 173, 221].
Unlike existing methods that mostly focus on textual similarities and overlook relations, query expansion in GraphRAG augments LLM expansion with structured relations. For example, Xia et al. [459] expand the query by leveraging the neighboring nodes of the entities mentioned in the query. Alternatively, Wang et al. [406] convert the query into several sub-queries using pre-defined templates.
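As a concrete illustration of the neighbor-based strategy, below is a minimal sketch that appends the names of graph neighbors of the query entities as expansion terms; the toy graph and the pre-extracted entity list are illustrative assumptions.

```python
import networkx as nx

# Minimal sketch of graph-aware query expansion: the query is enriched with
# the names of graph neighbors of its mentioned entities. The toy graph and
# the pre-extracted entity list are illustrative assumptions.

kg = nx.Graph()
kg.add_edges_from([
    ("Beijing", "China"), ("China", "Asia"), ("Beijing", "Forbidden City"),
])

def expand_query(query, mentioned_entities, max_neighbors=3):
    """Append neighboring entity names to the query as expansion terms."""
    expansion = []
    for ent in mentioned_entities:
        if ent in kg:
            expansion.extend(list(kg.neighbors(ent))[:max_neighbors])
    return query + " " + " ".join(expansion)

print(expand_query("What is the capital of China?", ["China"]))
# -> "What is the capital of China? Beijing Asia"
```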
Figure 5: Visualizing representative retrievers used in GraphRAG.
Table 3: Categorizing representative retrievers used in GraphRAG.
| Method/Strategy | Input | Output | Description |
|---|---|---|---|
| Entity Linking | Entity Mention | Node | Match query entity and graph node |
| Relational Matching | Relation Mention | Edge | Match query relation and graph edge |
| Graph Traversal | Node/Edge | Graph | Expand seed nodes/edges into subgraphs |
| Graph Kernel | (Sub)Graph | (Sub)Graph | Match query graph and candidate graph |
| Shallow Embedding | Any | Any | Embedding similarity to match query and candidate |
| Deep Embedding | Any | Any | Embedding similarity to match query and candidate |
| Domain Expertise | Expertise Rule | Any | Match domain expertise with nodes/edges/graphs |
**2.4** **Retriever $\Omega^{\text{Retriever}}$**

After obtaining the processed query $\hat{Q}$, the retriever $\Omega^{\text{Retriever}}$ identifies and retrieves relevant content $C$ from external graph sources $\mathcal{G}$ to augment the downstream task execution:

$$C = \Omega^{\text{Retriever}}(\hat{Q}, \mathcal{G}) \tag{1}$$
Recently, retrievers have been increasingly integrated with LLMs to mitigate hallucination issues [397], address privacy concerns [513], and enhance explainability and dynamic adaptability [360, 419]. While effective, these retrievers are predominantly designed for texts and images and are not readily transferable to graph-structured data in GraphRAG, for two reasons. First, the input/output format of
GraphRAG differs significantly from that of traditional RAG. While most retrievers in RAG use NLP
tokenizers for encoders and adhere to the "Text-in, Text-out" workflow, the workflow of GraphRAG
is more diverse, including "Text-in, Text-out" [395, 428], "Text-in, Graph-out" [454, 569], "Graph-in,
Text-out" and "Graph-in, Graph-out" processes [433]. Secondly, retrievers in traditional RAGs do
not capture graph structure signals. Methods like BM25 and TF-IDF [337, 333] primarily focus on
lexical signals, and deep-learning-based retrievers [196] usually capture semantic signals, both of
which overlook the graph structure signals. This motivates us to review existing GraphRAG retrievers,
i.e., heuristic-based, learning-based, and domain-specific retrievers, with a particular emphasis on
their unique technical design adapted to graph-structured data.
**2.4.1** **Heuristic-based Retriever**
Heuristic-based retrievers primarily use predefined rules, domain-specific insights, and hard-coded
algorithms to extract relevant information from graph data sources. Their reliance on explicit rules
often makes them more time/resource-efficient compared to deep learning models. For instance,
simple graph traversal methods like BFS or DFS can be executed in linear time without needing
training data. However, this same reliance on fixed heuristics also limits their adaptability to generalize
to unseen scenarios. In the following, we review the heuristic-based retrievers commonly used in
GraphRAG.
**Entity Linking: In heuristic-based retrievers, entity linking involves mapping entities identified**
in the query to corresponding nodes in graph data sources. This mapping forms an initial bridge
between the query and the graph, serving as either the retriever by itself or as a foundation for further
graph traversal to broaden the scope of the retrieval. The effectiveness of this approach relies on
accurate entity recognition conducted by the query processor and the quality of labeled entities on
graph nodes. This technique is commonly applied in knowledge graphs, where Top-K nodes are
selected as starting points based on their textual similarity to the query. The similarity metric can
be computed using vector embeddings [443, 347] or lexical features [428]. More recently, LLMs have been used as knowledgeable context augmenters that generate mention-centered descriptions as additional input for long-tail entities, whose limited training data usually causes entity linking models to struggle with disambiguation [464].
**Relational Matching: Relational matching, similar to entity linking, is a heuristic-based retrieval**
approach designed to identify edges within graph data sources that align with the relations specified
in a query. This method is crucial for tasks that focus on identifying relationships among entities in
a graph. The matched edges guide the traversal process by indicating which edges to explore next
based on the entities and relations encountered in the graph data sources. Similar to entity linking,
the Top-K edges in the graph are selected based on their similarity to the relations mentioned in the query [200, 119].
In addition to the efficiency and simplicity of the above two types of heuristic-based retrievers,
another key advantage is their ability to overcome ambiguity. For example, while machine/deep learning-based retrievers struggle to differentiate semantically/lexically similar entities/relations (e.g., Byte vs. Bit, and President of vs. Resident of), these heuristic methods can easily distinguish them based on pre-defined rules, even in cases where semantic/lexical differences are subtle.
**Graph Traversal: After performing entity linking and relational matching to identify initial nodes**
and relations in graph data sources, graph traversal algorithms (e.g., BFS, DFS) can expand this
set to uncover additional query-relevant information. However, a core challenge for traversal-based retrieval is the risk of information overload, as the exponentially expanding neighborhood
often includes substantial irrelevant content. To address this, current traversal techniques integrate
adaptive retrieval and filtering processes, selectively exploring the most relevant neighboring nodes
and incrementally refining the retrieved content to minimize noise. This graph traversal is mainly
used in GraphRAG for knowledge and document graphs. When traversing these two types of graphs, many methods extract all paths of length less than $l$ between the nodes identified by entity linking [492, 493, 530, 185, 110], while others consider the $l$-hop subgraph around the initial entities [308, 395, 181, 205]. To traverse the KG more efficiently, some methods prune irrelevant paths via an LLM [271, 347, 428, 134], while others use pre-defined rules or templates to traverse the graph [406, 246, 72].
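The following minimal sketch illustrates the two common traversal patterns just described, bounded-length paths between entity-linked nodes and the union of their $l$-hop neighborhoods, using networkx; the toy KG is an illustrative assumption.

```python
import networkx as nx

# Minimal sketch of the two traversal patterns above: bounded-length paths
# between entity-linked nodes and the union of their l-hop neighborhoods.
# The toy KG is an illustrative assumption.

kg = nx.Graph()
kg.add_edges_from([
    ("Lawrence", "University of Kansas"),
    ("University of Kansas", "Fight Song"),
    ("University of Kansas", "Kansas City"),
])

def traverse(seeds, l=2):
    paths = []
    for i, s in enumerate(seeds):
        for t in seeds[i + 1:]:
            # All simple paths with at most l edges between two seed entities.
            paths.extend(nx.all_simple_paths(kg, s, t, cutoff=l))
    # Union of the l-hop ego subgraphs around each seed entity.
    nodes = set().union(*(nx.ego_graph(kg, s, radius=l).nodes for s in seeds))
    return paths, kg.subgraph(nodes)

paths, subgraph = traverse(["Lawrence", "Kansas City"])
print(paths)  # [['Lawrence', 'University of Kansas', 'Kansas City']]
```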
**Graph Kernel: Compared with the above heuristic-based retrievers for retrieving nodes, edges,** and their combined subgraphs, some earlier works (e.g., in graph extraction and image retrieval) [448, 218, 123] treat the text or image as an entire graph and use graph-level heuristics such as graph
kernels to measure similarity and retrieve. Graph kernels measure pairwise similarities by calculating
inner products between graphs, aligning both structural and semantic aspects of the query and the
retrieved graphs. Notable examples include the random-walk kernel and the Weisfeiler Leman
kernel [357, 403]. The random walk kernel computes similarity by performing simultaneous random
walks on two graphs and counting the number of matching paths. The Weisfeiler Leman kernel
iteratively applies the Weisfeiler Leman algorithm to produce color distributions of node labels at
each iteration and then calculates similarity based on the inner products of these histogram vectors.
For example, Wu et al. [448] constructs event graphs of both documents and queries and uses a
product graph kernel that counts walks between two graphs to measure the query-document similarity
and rank the documents. Lebrun et al. [218] conducts event graph matching by introducing a fast and
efficient graph-matching kernel for image retrieval. Similarly, Glavaš and Šnajder [123] translates
images into representative attribute structural graphs that capture spatial relations among regions and
perform graph kernel based on random walks to derive hash codes for image retrieval.
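As a concrete illustration, below is a minimal sketch of a Weisfeiler-Leman kernel along the lines described above: node labels are iteratively refined with sorted neighbor labels, and similarity is the inner product of the resulting label histograms. This is a simplified rendition under those assumptions, not the implementation of any cited work.

```python
from collections import Counter
import networkx as nx

# Minimal sketch of a Weisfeiler-Leman graph kernel: iteratively refine each
# node's label with its sorted neighbor labels, accumulate label histograms,
# and score similarity as the inner product of the two graphs' histograms.

def wl_histogram(g, iterations=2):
    labels = {v: str(g.degree(v)) for v in g}  # initial labels: node degrees
    hist = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            v: labels[v] + "|" + ",".join(sorted(labels[u] for u in g.neighbors(v)))
            for v in g
        }
        hist.update(labels.values())
    return hist

def wl_kernel(g1, g2):
    """Inner product of WL label histograms: larger means more similar."""
    h1, h2 = wl_histogram(g1), wl_histogram(g2)
    return sum(h1[label] * h2[label] for label in h1.keys() & h2.keys())

print(wl_kernel(nx.path_graph(4), nx.cycle_graph(4)))
```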
**Domain Expertise: The domain-agnostic nature of traditional heuristic-based methods restricts**
their effectiveness in areas that require specialized expertise. For instance, in drug discovery,
chemists typically design drugs by referencing existing molecules with desirable properties rather
than constructing molecular structures from scratch. These molecules are selected based on domain
knowledge that guides the retrieval of structures with similar characteristics. Following this intuition,
many GraphRAG systems incorporate domain expertise to enhance retriever design. Wang et al.
[434] develop a hybrid retrieval system that integrates heuristic-based and learning-based retrieval to
retrieve exemplar molecules that partially meet the target design criteria.
**2.4.2** **Learning-based Retriever**
One significant limitation of heuristic-based retrievers is their over-reliance on pre-defined rules,
which limits their generalizability to data that does not strictly adhere to these rules. For example,
when confronted with entities that have slight semantic or structural variations, such as "doctor"
and "physician", heuristic-based retrievers like entity linking may treat them differently due to their
distinct lexical representations, despite their shared underlying meaning. To overcome this limitation,
learning-based retrievers have been proposed to capture deeper, more abstract, and task-relevant
relations between the query and objects in data sources, which avoid relying solely on hard-coded
rules. These retrievers often work by uniformly compressing information of various formats (e.g.,
texts and images) into embeddings based on machine learning encoders and then fetching relevant
information by conducting an embedding-based similarity search. Notably, some entity linking and
relational matching methods that employ machine learning encoders to generate embeddings for
matching should also be considered as learning-based retrievers.
In conventional RAG, assuming the query $q$ and a data source $S$ containing $n$ instances are embedded by corresponding encoders as $\mathbf{q} = F_q(q) \in \mathbb{R}^{d}$ and $\mathbf{S} = F_S(S) \in \mathbb{R}^{n \times d}$, we retrieve the top-$k$ instances by similarity search according to a pre-defined similarity function $\phi$ in the embedding space:

$$S^{*} = \operatorname*{arg\,max}_{k}\, \phi(\mathbf{q}, \mathbf{S}), \tag{2}$$
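A minimal sketch of Eq. (2) with cosine similarity as the similarity function $\phi$ is shown below; random vectors stand in for the outputs of the encoders $F_q$ and $F_S$.

```python
import numpy as np

# Minimal sketch of Eq. (2) with cosine similarity as phi: embed the query and
# all candidate instances, then return the indices of the top-k candidates.
# Random vectors stand in for the outputs of the encoders F_q and F_S.

rng = np.random.default_rng(0)
d, n, k = 64, 1000, 5
q = rng.normal(size=d)        # q = F_q(query)
S = rng.normal(size=(n, d))   # S = F_S(data source), one row per instance

def top_k(q, S, k):
    """Indices of the k instances maximizing cosine similarity phi(q, S)."""
    sims = (S @ q) / (np.linalg.norm(S, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k]

print(top_k(q, S, k))
```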
Unlike RAGs that use language and vision encoders to embed texts and images, encoders used
in GraphRAG retrieval extend beyond independently and identically distributed (i.i.d.) data by
embedding nodes, edges, and (sub)graphs. Depending on the input format, the encoder could be a
text encoder for query, a graph-based encoder for graph structure, and an integrated text-and-graph
encoder for the textual attributed graph [55, 56]. We specifically focus on graph-based encoders.
Existing graph-based encoders can be broadly categorized into shallow embedding methods – such
as Node2Vec and DeepWalk – and deep embedding methods like Graph Neural Networks (GNNs).
Below, we review these two encoders and their unique roles in GraphRAG.
**Shallow Embedding Methods: Shallow embedding methods [114], like Node2Vec [127] and**
Role2Vec [5], learn node, edge, and graph embeddings that retain the essential structural information
of the original graph. Based on the type of structural information that can be extracted, these methods
generally fall into two categories: proximity/role-based embeddings. Proximity-based methods,
such as DeepWalk and Node2Vec [127, 321], focus on preserving the proximity of connected nodes,
ensuring that nodes close in the graph also remain close in the embedding space. Role-based methods,
like Role2Vec and GraphWave [5, 93], generate node embeddings based on their structural roles
rather than their proximity relations. In general, these methods initialize each node with a latent
embedding vector and conduct unsupervised training to squeeze structural signals derived from graph
structure into the embedding. In GraphRAG, proximity-based shallow embeddings can effectively
retrieve entities that are proximally close, while role-based embeddings can capture entities that share
similar roles. For instance, proximity-based embeddings could be used to retrieve academic papers by
fetching papers sharing similar research topics or retrieve reviews from products that are co-purchased
with the current product [346, 429]. Meanwhile, role-based embeddings could support tasks like
generating company emails by retrieving similar emails based on shared roles or tones [429].
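To illustrate the proximity-based family, below is a minimal DeepWalk-style sketch that trains skip-gram Word2Vec (from gensim) on truncated random walks and then retrieves the nodes closest to a given node in the embedding space; the example graph and hyperparameters are illustrative assumptions.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Minimal DeepWalk-style sketch of a proximity-based shallow embedding:
# generate truncated random walks over the graph, then train skip-gram
# Word2Vec on the walks so that nodes co-occurring on walks end up close
# in the embedding space.

g = nx.karate_club_graph()

def random_walks(g, num_walks=10, walk_len=20):
    walks = []
    for _ in range(num_walks):
        for start in g.nodes:
            walk = [start]
            while len(walk) < walk_len:
                walk.append(random.choice(list(g.neighbors(walk[-1]))))
            walks.append([str(v) for v in walk])  # Word2Vec expects tokens
    return walks

model = Word2Vec(random_walks(g), vector_size=32, window=5, min_count=1, sg=1)
# Retrieval: fetch the nodes most similar to node "0" in embedding space.
print(model.wv.most_similar("0", topn=3))
```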
**Deep Embedding Methods: Although shallow embedding methods incorporate structural signals**
into learned embeddings for nodes, edges, or entire graphs, they struggle to leverage semantic
features—like bag-of-words representations for academic paper retrieval or atomic numbers for
molecular retrieval [114]. Additionally, these methods lack inductivity, requiring re-initialization and
retraining whenever new nodes, edges, or graphs are added. This limitation significantly reduces their applicability in GraphRAG retrieval tasks, as real-world knowledge evolves dynamically and new information continually replaces outdated content, such as in citation networks, social graphs, and knowledge graphs [59, 419, 453]. To address these limitations, deep embedding methods have been proposed, which not only jointly fuse features and graph structures to obtain embeddings for retrieval but also inherit an inductive property, since newly arriving nodes/edges/graphs share a common feature space with those seen during training. One of the most representative and powerful approaches in this category is the GNN, which combines the power of message passing to encode structural signals with feature transformation to extract task-relevant information. Mathematically, the $l$-th layer graph convolution can be formulated as:
$$\mathbf{x}_i^{l} = \gamma_{\Theta_\gamma}\Big(\mathbf{x}_i^{l-1},\ \bigoplus_{j \in \mathcal{N}_i} \phi_{\Theta_\phi}\big(\mathbf{x}_i^{l-1}, \mathbf{x}_j^{l-1}, \mathbf{e}_{ij}\big)\Big), \qquad \text{Node-level} \tag{3}$$

$$\mathbf{e}_{ij}^{l} = \gamma_{\Theta_\gamma}\Big(\mathbf{e}_{ij}^{l-1},\ \bigoplus_{e_{mn} \in \mathcal{N}^e_{ij}} \phi_{\Theta_\phi}\big(\mathbf{e}_{ij}^{l-1}, \mathbf{e}_{mn}^{l-1}, \mathbf{x}_{e_{ij} \cap e_{mn}}\big)\Big), \qquad \text{Edge-level} \tag{4}$$

$$\mathbf{G}^{l} = \rho_{\Theta_\rho}\big(\{\mathbf{x}_i^{l}, \mathbf{e}_{ij}^{l} \mid v_i \in \mathcal{V}_G,\ e_{ij} \in \mathcal{E}_G\}\big), \qquad \text{Graph-level} \tag{5}$$
In node-level graph convolution, each node $v_i$ adaptively aggregates the embeddings of its neighboring nodes $\mathcal{N}_i$, with weights based on edge features via the weighting function $\phi_{\Theta_\phi}$. The aggregated neighborhood embeddings are then combined with the node's own embedding from the previous layer $\mathbf{x}_i^{l-1}$, using a combination function $\gamma_{\Theta_\gamma}$, as shown by Eq. (3). Optimizing the loss from training downstream tasks enables the weighting function $\phi_{\Theta_\phi}$ to prioritize the most important neighbors and the combination function $\gamma_{\Theta_\gamma}$ to balance contributions from the node's neighborhood and its own embedding. Similarly, in edge-level graph convolution, the same aggregation principle applies, but the neighbors of an edge are the edges incident to the same endpoints of that edge, $\mathcal{N}^e_{ij}$, as shown by Eq. (4). Graph-level embeddings can be obtained by further applying a pooling operation $\rho_{\Theta_\rho}$ over node and edge embeddings, as shown by Eq. (5). Following this GNN-based embedding paradigm, various forms of graph knowledge from diverse sources—such as nodes, edges, and (sub)graphs—can be uniformly embedded into vector representations, as shown in Figure 5(c), where we derive embeddings for nodes (X), edges (E), and graphs (G).
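The following minimal sketch instantiates the node-level convolution of Eq. (3), with mean aggregation standing in for the learned weighting function $\phi$ and concatenation plus a linear map standing in for the combination function $\gamma$; the random weights are stand-ins for parameters that would be trained on a downstream loss.

```python
import numpy as np

# Minimal sketch of the node-level convolution in Eq. (3), with mean
# aggregation standing in for the learned weighting function phi and
# concatenation plus a linear map standing in for the combination function
# gamma. The random weights are stand-ins for trained parameters.

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # adjacency matrix
X = rng.normal(size=(3, 8))   # initial node features x^0
W = rng.normal(size=(16, 32)) # parameters of the combination step

def gnn_layer(A, X, W):
    deg = A.sum(axis=1, keepdims=True)
    msg = (A @ X) / np.clip(deg, 1, None)      # mean over neighbors N_i
    h = np.concatenate([X, msg], axis=1) @ W   # combine self and neighborhood
    return np.maximum(h, 0)                    # ReLU nonlinearity

X1 = gnn_layer(A, X, W)  # node embeddings after one layer; stack to deepen
```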
Having obtained these node/edge/graph-level embeddings further enables us to create embeddings
for different types of structures (S) by combining these sub-structure embeddings according to
specific configurations for each structure. For instance, if the retrieved subgraph is a path within a
knowledge graph, we can aggregate the embeddings of the nodes and relations along that path to
form a cohesive path embedding.² Eventually, the resulting embeddings for different structures can
be utilized either during the training phase to optimize query alignment or during the testing phase
to enable similarity-based neural search. For example, GNN-RAG [347] uses a GNN to perform retrieval, where a separate round of message passing is performed for each query. The query $\hat{Q}$ is incorporated into the message passing by including its embedding in the message computation. A set of "candidate" nodes is then chosen whose probability of being relevant exceeds some threshold. The shortest path from the query nodes to each candidate node is retrieved as context.
Liu et al. [251] consider the use of a conditional GNN [163] where only the linked entities from the
query are initialized to a non-zero representation. The candidate nodes are chosen in a similar manner
to [347]. A single path is then retrieved for each candidate node and is extracted by backtracking
until we reach a query node. REANO [106] encodes the query information into an edge-specific
attention weight, conditional on the query. After aggregation, the top k triples most similar to the
query are chosen as context.
**2.4.3** **Advanced Retrieval Strategies**
Real-world queries are often complex and encode multi-aspect intentions, possess structure patterns,
and desire multi-hop reasoning that the aforementioned basic retrievers struggle to address. For
example, answering "What is the name of the fight song of the university whose main campus
is in Lawrence, Kansas, and whose branch campuses are in the Kansas City metropolitan area?"
demands multi-hop reasoning to identify the university based on location and retrieve information
about its fight song [145, 399]. Similarly, a query like "What are the main themes in the dataset?"
requires understanding the product community structure, retrieving themes for each community,
and aggregating the identified themes together to summarize the main theme [98]. Furthermore,
when asking "Who is the most impactful research scholar in deep learning?" the answer could vary
depending on multiple aspects [536], such as the number of citations, the volume of published papers,
or the number of co-authors. Accurately addressing such queries requires a deeper understanding of
the underlying data distribution to discern which aspect the query prioritizes. To address these highly
complex queries, advanced retrieval strategies have been proposed, and we review them as follows:
²Incorporating structural signals may be necessary, a consideration to be addressed in future work.
**Integrated Retrieval: Integrated retrieval combines various types of retrievers to capture relevant**
information by balancing their strengths and weaknesses. Typically, integrated retrieval approaches
are categorized according to which individual retrievers are used in combination, with notable
examples including neural-symbolic retrieval [83, 220, 428] and multimodal retrieval [69, 266].
Since the knowledge stored in graph-structured data exists mostly in symbolic format, neural-symbolic
retrieval is a natural choice for the integrated retrieval strategy in GraphRAG. This strategy interleaves
rule-based patterns for retrieving symbolic knowledge with neural-based signals for retrieving more
abstract and deep knowledge. For example, Luo et al. [272] and Wen et al. [443] first expand the neighbors based on the symbolic knowledge graph and then perform path retrieval using neural matching. In contrast, Mavromatis and Karypis [289] first utilize GNNs to retrieve seed entities (neural retrieval) and then extract the shortest paths from the seed entities (symbolic retrieval).
Similarly, Tian et al. [395], Yasunaga et al. [492, 493], Wang et al. [427], Luo et al. [272] fetch the
k-hop neighborhood of the entities mentioned in the current question-answering pair and the session
of user-generated items as the answer candidates (symbolic retrieval) and compute attention between
the query and the extracted subgraph to differentiate candidate relevance (neural retrieval).
**Iterative Retrieval: Iterative retrieval is a multistep process where consecutive retrieval operations**
share common dependencies such as causal, resource, and temporal dependency. These dependencies
can be implicitly characterized by the retrieval order in RAG [399, 481] or explicitly modeled as a
graph structure in GraphRAG [145, 454]. Consequently, iterative retrieval is primarily utilized in
GraphRAG to capture these dependencies. For example, KGP [419] alternates between generating the
next piece of evidence for the question and selecting the most promising neighbor. ToG [381] starts
by identifying initial entities and then iteratively expands reasoning paths until enough information is
gathered to answer the question. StructGPT [181] pre-defines graph interfaces and prompts LLMs to
iteratively invoke these interfaces until sufficient information is collected.
**Adaptive Retrieval: While retrieved external knowledge offers benefits, it also introduces risks.**
If the generator already possesses sufficient internal knowledge for a task, the retrieved external
information may be unnecessary or even conflicting [42, 473]. Specifically, when internal knowledge
fully covers the necessary information, retrieval becomes redundant and may introduce contradictions.
To mitigate this, knowledge checking has been proposed in RAG systems [176, 189, 412, 490]. This
approach allows the system to adaptively assess when and how much external information is needed.
By equipping the retriever with this adaptability, RAG can provide more intelligent, flexible, and
context-aware responses, fostering better harmony between internal and external knowledge sources.
One form of adaptive retrieval in GraphRAG considers different reasoning depths for different queries: too few hops of graph traversal might overlook critical reasoning relations, while too many can introduce unnecessary noise. Guo et al. [134] and Wu et al. [455] address this by
training models to predict the required number of hops for a given query and retrieving the relevant
graph content accordingly. No existing works focus on resolving knowledge conflicts in GraphRAG,
and therefore, we leave this discussion to future work.
**2.5** **Organizer**
After retrieving the relevant content $C$ from external graph data sources, which may be in the format of entities, relations, triplets, paths, or subgraphs, the organizer $\Omega^{\text{Organizer}}$ processes this content in conjunction with the processed query $\hat{Q}$. The aim is to post-process and refine the retrieved content to better adapt it for generator consumption, thereby further improving the quality of the downstream content generation. Formally, the organizer is represented as follows:

$$\hat{C} = \Omega^{\text{Organizer}}(\hat{Q}, C) \tag{6}$$
In GraphRAG, the need for fine-grained organization and refinement of retrieved content is driven by several key reasons. First, when the retrieved contents are subgraphs, their heterogeneous mix of node/edge features and graph structures is more likely to include irrelevant and noisy information, which is difficult for LLMs to digest and thus compromises generation quality. This motivates graph pruning techniques that polish the retrieved subgraph and remove task-irrelevant knowledge. Second, LLMs have been widely demonstrated to possess attention biases toward certain positions of relevant information within the retrieved context [43]. Therefore, the exponentially growing neighborhood as the receptive
field enlarges (i.e., as the number of hops increases) in the retrieved subgraph would also exponentially increase the context length of the prompt and dilute the focus of LLMs on task-relevant knowledge [358]. This poses a new requirement for graph-based reranking mechanisms that prioritize the most important content within the retrieved graph. Third, the retrieved content might be incomplete in terms of both semantic and structural content, which necessitates graph augmentation for enhancement. Finally, the retrieved content is often a graph, which not only carries semantic content but also has its own unique structure. This complex structural content is not easily consumed by LLMs trained with next-token prediction coupled with linearized prompting, which calls for structure-aware verbalization techniques to reorganize it. We formally review each of these organizer techniques in the following sections.
**2.5.1** **Graph Pruning**
In GraphRAG, the retrieved graph can be large and potentially contain a significant amount of noisy
and redundant information. For example, when graph traversal methods are applied in retrieval, the
size of the retrieved subgraph exponentially increases with the number of hops. Large subgraph sizes
not only increase computational costs but can also reduce generation quality due to the inclusion of
noisy information. In contrast, if the number of hops is too small, the retrieved subgraph may be
too small to include crucial knowledge required by tasks. To achieve a better trade-off between the
size of the retrieved subgraph and the amount of its encoded task-relevant information, various graph
pruning methods have been proposed to reduce the size of subgraphs by removing irrelevant nodes
and edges while preserving the essential information.
- Semantic-based pruning: Semantic-based pruning focuses on reducing the graph size by removing
nodes and edge relations that are semantically irrelevant to the query. For example, QA-GNN [492]
prunes irrelevant nodes with low relevance scores by encoding the query context and node labels
using LLMs, followed by a linear projection. GraphQA [389] further removes clusters of nodes with
the lowest relevance to the query. KnowledgeNavigator [133] scores the relations in the retrieved
graph based on the query and prunes irrelevant relations to reduce graph size. Additionally, Gao
et al. [118] partition the retrieved subgraph into smaller subgraphs and then rank them, with only the top-k smaller subgraphs retained for generation. G-Retriever [147] defines a semantic score for each retrieved node and edge, then refines the graph by solving the prize-collecting Steiner tree problem to construct a more compact and relevant subgraph (a minimal sketch of semantic-based pruning follows this list).
- Syntactic-based pruning: Syntactic-based pruning removes irrelevant nodes from a syntactic
perspective. For instance, Su et al. [377] leverages dependency analysis to generate a parsing tree
of the context and then filters the retrieved nodes based on their span distance from the parsing tree.
- Structure-based pruning: Structure-based pruning methods focus on pruning the retrieved graph
based on its structural properties. For example, RoK [431] filters out reasoning paths in the
subgraph by calculating the average PageRank score for each path. Other works, such as Jiang et al.
[181] and He et al. [144], also leverage PageRank to extract the most relevant entities.
- Dynamic pruning: Unlike the aforementioned methods, which typically prune the graph once,
dynamic pruning removes noisy nodes dynamically during training. For example, JointLK [382]
uses attention weights to recursively remove irrelevant nodes at each layer, keeping only a fixed
ratio of nodes. Similarly, DHLK [430] filters out nodes with attention scores below a certain
threshold dynamically during the learning process.
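Below is the promised minimal sketch of semantic-based pruning: nodes are scored by embedding similarity to the query and the induced subgraph over the top-k nodes is kept. The `embed` function is a hypothetical stand-in for a real text encoder, faked here with a hash-seeded RNG so that repeated calls agree within a run.

```python
import numpy as np
import networkx as nx

# Minimal sketch of semantic-based pruning: score each node by embedding
# similarity to the query and keep the induced subgraph over the top-k nodes.
# `embed` is a hypothetical stand-in for a real text encoder.

def embed(text):
    return np.random.default_rng(abs(hash(text)) % 2**32).normal(size=32)

g = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")])

def prune(g, query, k=3):
    q = embed(query)
    scores = {v: float(np.dot(embed(v), q)) for v in g.nodes}  # node relevance
    keep = sorted(scores, key=scores.get, reverse=True)[:k]    # top-k nodes
    return g.subgraph(keep).copy()                             # induced subgraph

pruned = prune(g, "which node is most relevant?")
print(pruned.nodes, pruned.edges)
```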
**2.5.2** **Reranker**
The performance of LLMs can be influenced by the position of relevant information within the
context, whether it appears at the beginning, middle, or end [43]. Additionally, LLMs’ generation is
impacted by the order in which in-context knowledge is provided, with later documents contributing
less than earlier ones [172, 256]. While retrieved information is typically ordered by relevance scores
during the retrieval process, these scores are often based on coarse-grained rankings across a large set
of candidates. Reordering the retrieved information at a fine-grained level, a process known as re-ranking, is therefore essential to achieve optimal downstream performance. For example,
Li et al. [234] rerank retrieved triples using a pre-trained cross-encoder. Jiang et al. [185] and Liu
et al. [252] employ pre-trained reranker models to rerank retrieved paths. Yu et al. [498] train a GNN
to rerank the retrieved passages. Liao et al. [246] order the paths by the time they occurred, giving
more emphasis to recent paths.
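As an illustration of cross-encoder reranking in the spirit of Li et al. [234], the sketch below scores verbalized triples against the query with the sentence-transformers CrossEncoder; the checkpoint name and the triples are illustrative assumptions.

```python
from sentence_transformers import CrossEncoder

# Minimal sketch of cross-encoder reranking: jointly encode each
# (query, verbalized triple) pair and sort by the predicted relevance score.
# The checkpoint name and the triples are illustrative assumptions.

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "What is the capital of China?"
triples = [
    "(China, capital, Beijing)",
    "(Shanghai, located in, China)",
    "(China, located in, Asia)",
]

scores = reranker.predict([(query, t) for t in triples])
reranked = [t for _, t in sorted(zip(scores, triples), reverse=True)]
print(reranked[0])  # the most query-relevant triple comes first
```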
**2.5.3** **Graph Augmentation**
Graph augmentation aims to enrich the retrieved graph to either enhance the content or improve
the robustness of the generator. This process can involve adding supplementary information to the
retrieved graph, sourced from external data or knowledge embedded within LLMs. There are two
main categories of methods:
- Graph Structure Augmentation: Graph structure augmentation methods involve adding new
nodes and edges to the retrieved graph. For instance, GraphQA [389] augments the retrieved subgraph by incorporating noun phrase chunk nodes extracted from the context. Moreover, Yasunaga
et al. [492] and Taunk et al. [389] treat the query as a node, integrating it into the retrieved graph to
create direct connections between the query and relevant information. Tang et al. [388] augment
the graph structure based on pretrained diffusion models.
- Graph Feature Augmentation: Graph feature augmentation methods focus on enriching the
features of the nodes and edges in the graph. Since the original features might be lengthy or sparse,
data augmenters can be employed to summarize or provide additional details for these features. For
example, Once [258] uses LLMs as Content Summarizers, User Profilers, and Personalized Content
Generators in recommendation systems. Similarly, LLM-Rec [276] and KAR [458] apply various
prompting techniques to enrich node features, making them more informative for downstream
tasks.
Additionally, some graph augmentation techniques focus solely on the retrieved graph itself, such as
randomly dropping nodes, edges, or features to improve model robustness. Ding et al. [86] provide a
systematic review of these data augmentation methods.
**2.5.4** **Verbalizing**
Verbalizing refers to converting retrieved triples, paths or graphs into natural language that can be
consumed by LLMs. There are two main approaches to verbalization: linear verbalization and
model-based verbalization.
Linear verbalization methods typically convert graphs into text using predefined rules. The primary
techniques for linear verbalization include:
- Tuple-based: These methods place the different pieces of retrieved information into tuples and order them [14, 309]. For example, when performing retrieval on a KG, many methods retrieve a set of facts. A single fact is verbalized in the generation prompt as the tuple (entity 1, relation 1, entity 2) [308, 395]. For a set of facts, we first sort them in a specific order and then verbalize each one as an individual tuple. Each piece of information is typically separated by a line in the prompt. Note that the same logic can be applied to paths, nodes, and so on.
- Template-based: These methods verbalize paths or graphs using predefined templates to generate more natural text. For example, LLaGA [49] proposes templates such as the Hop-Field Overview Template to convert graphs into sequences. For KGs, several methods [134, 244] convert individual facts into natural text. For example, Guo et al. [134] convert a fact (entity 1, relation, entity 2) to text using the template "The {relation} of {entity 1} is/are: {entity 2}" (a sketch of both strategies follows this list).
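Here is the minimal sketch referenced above, contrasting tuple-based and template-based linear verbalization on a toy fact set; the template mirrors the one quoted from Guo et al. [134], while the facts themselves are illustrative.

```python
# Minimal sketch of linear verbalization: tuple-based vs. template-based
# rendering of retrieved KG facts into prompt text. The template mirrors the
# one quoted from Guo et al. [134]; the facts themselves are illustrative.

facts = [
    ("China", "capital", "Beijing"),
    ("China", "continent", "Asia"),
]

def tuple_verbalize(facts):
    """One fact per line, rendered as a raw (entity 1, relation, entity 2) tuple."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in facts)

def template_verbalize(facts):
    """One fact per line, rendered with the natural-language template."""
    return "\n".join(f"The {r} of {h} is/are: {t}" for h, r, t in facts)

print(tuple_verbalize(facts))
# (China, capital, Beijing)
# (China, continent, Asia)
print(template_verbalize(facts))
# The capital of China is/are: Beijing
# The continent of China is/are: Asia
```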
Model-based verbalization methods typically use fine-tuned models or LLMs to convert input facts
into coherent and natural language. These methods generally fall into two categories:
- Graph-to-text verbalization: These methods focus on converting retrieved graphs into natural
language while preserving all the information. For instance, Koncel-Kedziorski et al. [211] and
Wang et al. [421] leverage graph transformers to generate text from knowledge graphs. Ribeiro et al.
[336] evaluate several pretrained language models for graph-to-text generation, while Wu et al.
[455], and Agarwal et al. [3] fine-tune LLMs to transform graphs into sentences, ensuring a faithful
representation of the graph content in textual form.
- Graph Summarization: In contrast to Graph-to-Text Verbalization, which retains all details,
Graph Summarization methods aim to generate concise summaries based on the retrieved graph
and the query. EFSum [208] proposes two approaches: one directly prompts LLMs to summarize
the retrieved facts and query, while the other fine-tunes LLMs specifically for summarization
tasks. CoTKR [457], on the other hand, alternates between two operations: Reasoning, where it
decomposes the question, generates a reasoning trace, and identifies the specific knowledge needed for the current step; and Summarization, where it summarizes the relevant knowledge from the subgraph retrieved based on the current reasoning trace.
**2.6** **Generator**
The generator aims to produce the desired output for specific tasks based on the query and the
retrieved information. These tasks can range from discrimination tasks (e.g., node/edge/graph
classification) to generation tasks (e.g., KG-based question answering) and graph generation (e.g.,
molecular generation). Due to the uniqueness of different tasks, different generators are often desired.
We categorize generators into three main types: Discriminative-based Generators, which leverage
models like GNNs and Graph Transformers for tasks like classification; LLM-based Generators,
which utilize the capabilities of LLMs to generate answers for text-based tasks; and Graph-based
Generators, which generate new graphs using generative models such as diffusion models. Next, we
provide a detailed illustration of these generators.
**2.6.1** **Discrimination-based Generator**
Discrimination-based generators focus on discriminative and regression tasks, which can typically be
modeled as graph tasks, such as node, edge, or graph classification and regression. Models designed
for graph data, such as GNNs and Graph Transformers, are widely used as discrimination-based
generators. The choice of GNN depends on the graph type and task. For instance, GCN [206],
GraphSAGE [138], and GAT [402] are typically applied to homogeneous graphs, whereas models
like RGCN [350] and HAN [423] are used for heterogeneous graphs, and HGNN [111] and HyperAttention [16] are suitable for hypergraphs. Additionally, graph transformers [296, 354] have gained popularity for their ability to capture global dependencies. Moreover, different training strategies, such as (semi-)supervised learning [279] and graph contrastive learning [192, 264], are employed
depending on the specific requirements of the task.
**2.6.2** **LLM-based Generator**
LLMs have demonstrated remarkable capabilities in understanding and generating natural language
across a wide range of tasks. However, LLMs are inherently designed to process sequential data,
while the retrieved information in GraphRAG is typically structured as graphs. Although various
GraphRAG organizers, such as verbalization methods, convert retrieved graph information into text,
these transformations may result in the loss of important graph structure information, which could
be crucial for certain tasks. To take advantage of the ability of LLMs, many research efforts have
been proposed to feed the graph information into LLMs, and we summarize them into the following
categories:
- Verbalizing: Verbalizing aims to convert the retrieved information in GraphRAG into sequences
that can be processed by LLMs. These methods are detailed in Section 2.5.4.
- Embedding-fusion: Embedding-fusion integrates graph embeddings and text embeddings within
LLMs. The graph embeddings can be obtained using GNNs or Graph Transformers [36]. To align graph embeddings with text embeddings, a domain projector is typically learned to map graph embeddings into the text embedding space. Embedding fusion can occur at different layers of LLMs. For example, He et al. [147] feed the projected graph embeddings through the self-attention layers of LLMs, Tian et al. [395] prepend the projected graph embeddings to the text tokens, and the approach in [9] fuses the text and projected graph embeddings before the prediction layers of LLMs. Additionally,
LLMs can either be fine-tuned along with the domain projector using methods such as LoRA, or
the LLM can remain fixed, training only the graph embedding model and domain projector.
- Positional embedding-fusion: Directly converting the graph into sequences by Verbalization may
lose graph structure information, which can be crucial in some tasks. Positional embedding-fusion
aims to add the position of nodes in the retrieved graph to the LLMs. GIMLET [549], as a unified
graph-text model, employs a generalized position embedding to encode both graph structures
and textual instructions as unified tokens. LINKGPT [148] leverages the pairwise encoding in
LPFormer [361] to encode the pairwise information between two nodes.
**2.6.3** **Graph-based Generator**
In the scientific graph domain, GraphRAG generators often go beyond LLM-based methods due to
the need for accurate structure generation. RetMol [433] is particularly versatile because it can work
with various encoder and decoder architectures, supporting multiple generative models and molecule
representations. For example, generators can be transformer-based or utilize Graph VAE architectures.
Huang et al. [167] highlight the use of a diffusion model, specifically the 3D molecular diffusion
model IRDIF. In the generation process, SE(3)-equivariance is achieved through architectures like
Equivariant Graph Neural Networks (EGNNs) [349], which ensure that the geometric properties of
molecular structures remain invariant to spatial transformations such as rotation, translation, and
reflection at each step. Incorporating SE(3)-equivariance into the diffusion model guarantees that
the generated molecular structures maintain geometric consistency under these transformations. For
KGs, multiple works [110, 492, 389] use a GNN to generate the answer. The GNNs used in these
works are conditional on the query, thereby making the final predictions relevant to it.
**2.7** **Graph Datasources**
We have conducted a comprehensive review of the primary techniques applied in the initial four
model-centric components of GraphRAG—namely, the query processor, retriever, organizer, and
generator. However, even with the best configurations of these components, a GraphRAG system
may still fall short of optimal performance if the underlying graph data sources, from which external
knowledge is retrieved, are not meticulously curated. This also underscores the recent significant
shift in AI research from a model-centric to a data-centric perspective, where enhancing data quality
and relevance becomes equally, if not more, crucial for achieving superior results. Adopting this
data-centric perspective, the following section provides an overview of existing GraphRAG research
on constructing graph data sources from a high-level perspective, with a detailed discussion of
domain-specific graph construction methods reserved for the subsequent domain-specific section.
- Explicit Construction: Explicit construction refers to building graphs based on explicit and
predefined relationships in the data. This method is widely adopted across various domains. For
example, molecule graphs are constructed from the connections between atoms; knowledge graphs
are formed based on explicit relationships between entities; citation graphs are built by linking
papers through citation relationships; and recommendation graphs model interactions between
users and items.
- Implicit Construction: Implicit construction is used when there are no explicit relationships between nodes but implicit connections can be derived. For instance, word co-occurrence in a document can suggest shared semantic information, and feature interactions in tabular data can indicate correlations between features. Graphs can explicitly model these connections, which might benefit downstream tasks.
After the graph is constructed, there are also several ways to formally represent graphs.
- Adjacency matrix: The adjacency matrix is one of the most popular ways to denote a graph. Specifically, the adjacency matrix $\mathbf{A} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ denotes the graph connections among nodes in $\mathcal{V}$, where $|\mathcal{V}|$ is the number of nodes.
- Edge list: The edge list represents each edge in the graph, typically in the form of tuples or triples,
such as (i, j) or (i, r, j), where i and j are nodes, and r is the relation between nodes i and j.
- Adjacency list: The adjacency list is a node-centric representation where each node is associated
with a list of its neighbors. It is typically represented as a dictionary {i : Ni}, where Ni is the
neighbor list of node i.
- Node Sequence: A node sequence transforms a graph into a sequence of nodes in either an irreversible or a reversible manner. Most serialization methods are irreversible and do not allow for complete recovery of the original graph structure. However, some serialization methods are reversible and can recover the whole graph structure. For example, Zhao et al. [552] propose serializing graphs using Eulerian paths by first applying Eulerization to the graph. Besides, if the graph forms a tree structure, BFS/DFS can also serialize the graph in a reversible manner.
- Natural language: With the growing popularity of LLMs for processing text-based information, various methods have been developed to describe graphs using natural language. The sketch after this list contrasts these basic representations on a toy graph.
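The following minimal sketch builds each of the basic representations above for a toy triangle graph using networkx; note that the DFS node sequence is irreversible here, matching the discussion above.

```python
import networkx as nx

# Minimal sketch contrasting the basic graph representations above on a toy
# triangle graph with nodes {0, 1, 2}.

g = nx.Graph([(0, 1), (1, 2), (0, 2)])

edge_list = list(g.edges())                            # [(0, 1), (0, 2), (1, 2)]
adjacency_list = {v: list(g.neighbors(v)) for v in g}  # {i: N_i} per node
adjacency_matrix = nx.to_numpy_array(g)                # |V| x |V| matrix
node_sequence = list(nx.dfs_preorder_nodes(g, 0))      # irreversible serialization

print(edge_list, adjacency_list, adjacency_matrix, node_sequence, sep="\n")
```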
Note that the above-mentioned data structures can only represent basic graphs without support
for complex scenarios such as multi-relational edges or edge attributes. For instance, using an
adjacency matrix to represent a multi-relational attributed graph requires an expanded structure:
$\mathbf{A} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}| \times |\mathcal{R}|}$, where $\mathcal{R}$ denotes the set of possible relationships. Here, $\mathbf{A}_{i,j,r}$ represents the weight of the edge connecting node $i$ and node $j$ under relation $r$.
Selecting an appropriate graph representation is essential for task-specific requirements. For example,
Ge et al. [122] finds that the order of graph descriptions significantly impacts LLMs’ comprehension
of graph structures and their performance across different tasks.
### 3 Knowledge Graph
A knowledge graph is a structured database that connects entities through well-defined relationships.
It can either encompass a broad spectrum of general knowledge, such as the widely recognized
Google Knowledge Graph [53, 261], or delve deeply into specialized domains, like the BioASQ
dataset [400] for biomedical reasoning. The diverse information contained in a knowledge graph
– represented as entities, relationships, paths, and subgraphs – serves as a valuable resource for
enhancing various downstream tasks across different sectors, including question-answering [395,
428, 493], commonsense reasoning [169], fact-checking [201], recommender systems [131], drug
discovery [28], healthcare [37], and fraud detection [287].
**3.1** **Application Tasks**
This section reviews representative application tasks for which GraphRAG on KGs is used.
- Question-answering: Question-answering (QA) can focus on a single domain or span across
global knowledge. Typically, a query in text format is given, such as "What is the best way to
predict a baby’s eye color?" or "Were there fossil fuels in the ground when humans evolved?" [395]
– the answer can be a sentence generated by a large language model (LLM), a selected text span
from relevant documents, or even a specific choice in a multiple-choice QA scenario. In all these
contexts, GraphRAG leverages knowledge graphs to retrieve relevant information, providing the
necessary context or supporting facts to generate accurate answers.
- Fact-Checking: Fact-checking verifies the truthfulness of statements by cross-referencing them with reliable sources of information. GraphRAG enhances this task by querying a knowledge
graph to retrieve relevant facts and relational structures that either support or refute the given claim.
GraphRAG identifies discrepancies or confirmations within the data by mapping the statement onto
the knowledge graph, providing a thorough and evidence-based validation process.
- Knowledge Graph Completion: Knowledge graph completion is the task of predicting new facts
to enhance the comprehensiveness of the graph and infer missing facts [535]. GraphRAG addresses
this task by retrieving structural knowledge around the triplets for inference, supplying essential
structural knowledge, and enhancing the LLM inference.
- Cybersecurity Analysis and Defense: Cybersecurity Analysis and Defense aims to analyze and
respond to vulnerabilities, weaknesses, attack patterns, and threat tactics. With the increasing
complexity and volume of cybersecurity data, GraphRAG has been proposed to provide cybersecurity analysis with more comprehensive insights into potential attack vectors and mitigation
strategies [330].
**3.2** **Knowledge Graph Construction**
We discuss how KGs are typically constructed. For each type of construction technique, we give
examples of common KG databases. How a KG is constructed is important, as it can affect both its
usefulness and function in different downstream tasks. We describe the main techniques below:
- Manual construction: Some KGs are constructed manually via human annotation. WikiData [404]
is a KG that uses crowd-sourced efforts to gather a variety of knowledge. Each entity corresponds
to a page in the Wikipedia encyclopedia. Another KG, the Unified Medical Language System
(UMLS) [25], contains biomedical facts collected from numerous sources.
- Rule-based construction: Many traditional approaches use rule-based techniques for constructing
the graph. This takes the form of custom parsers and manually defined rules used to extract facts
from raw text. Note that these parsers can differ depending on the source of the text. Prominent
examples include ConceptNet [373], which links different words together via assertions, and
Freebase [27] which contains a wide variety of general facts. REANO [106] extracts the entities
and relations from a set of passages using traditional entity recognition (ER) and relation extraction
(RE) methods, respectively. This includes SpaCy [151] for ER and TAGME [112] for RE. To
extract the facts that connect two entities via a relation, they use DocuNet [526].
- LLM-based construction: Recently, work has explored how LLMs can be used to construct
KGs from a set of documents. In such a way, the LLM can automatically extract the entities and
relations and link those together to form facts in the given text. Of note is that no ground-truth
KG exists for these methods. Rather, they simply use a KG as a way to organize and represent a
set of documents. For example, CuriousLLM [487] considers passages in the text as entities and
determines whether two entities should be connected based on their encoded textual similarity. On
the other hand, Cheng et al. [58] uses a manually-defined prompt to convert a piece of text into
a KG. Graph-RAG [98] first divides each document into chunks and then uses an LLM to detect
all the entities in each chunk, including their name, type, and description. To identify the relation
between any two entities, both entities and a description of their relationship are passed to an LLM.
An LLM is then used again to summarize the content of each entity and relation to arrive at their
final title. Lastly, AutoKG [40] uses a combination of LLM embeddings and clustering techniques
to construct a KG from a set of texts.
**3.3** **Retriever**
Real-world facts in KGs can provide grounded information for generative models, enhancing the
reliability of the model output. Given the structured nature of KGs, they are naturally well-suited
for retrieval. The goal is, for a given question or query, to retrieve either relevant facts³ or entities that can help answer that question. Multiple factors must be considered during retrieval, including the type of facts we want to retrieve, efficiency, and the number of facts retrieved. In
general, retrieval of KGs has two stages: identifying seed entities and retrieving facts or entities. We
describe both below.
**Identifying seed entities: The first step in retrieving the relevant facts for a given query is to identify**
a set of “seed entities”, which we refer to as $\mathcal{V}_{\text{seed}}$. Seed entities are the initial entities that are chosen to be highly relevant to the original query. Given these, we expect that triples that contain any of these entities, or that are nearby in the graph, should provide helpful context. Multiple techniques
exist for identifying the seed entities. Some works [181, 200, 251, 380, 522] assume that we are
given a set of initial entities for each query. However, most works [110, 381, 443, 530, 308, 493]
attempt to extract the entities from the query. One approach is through entity extraction [6], which
uses methods specifically designed for extracting entities from a given text. Most works only extract
entities from the original query. Another common approach is to extract a set of entities that are
semantically similar to the original query [443, 347]. HyKGE [185] first generates a hypothesis
and extracts entities from the original query and the hypothesis. Similarly, in order to reduce the
possibility of hallucination, Guo et al. [134] uses an LLM to generate two similar questions and
retrieves all entities found in the original and generated questions. In a similar vein, RoK [431] first
uses chain-of-thought reasoning to expand the original query, extracting the seed entities from the expanded query.
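As a concrete illustration of entity-extraction-based seeding, below is a minimal sketch that runs an off-the-shelf NER model (spaCy) over the query and grounds the mentions to KG node names by exact match; the node set and the exact-match rule are simplifying assumptions, and the sketch assumes the en_core_web_sm model is installed.

```python
import spacy

# Minimal sketch of entity-extraction-based seeding: run an off-the-shelf NER
# model over the query and ground mentions to KG node names by exact match.
# Assumes the en_core_web_sm model is installed; the node set is illustrative.

nlp = spacy.load("en_core_web_sm")
kg_nodes = {"Kansas", "Lawrence", "Kansas City", "University of Kansas"}

def seed_entities(query):
    doc = nlp(query)
    mentions = {ent.text for ent in doc.ents}  # entity mentions in the query
    return mentions & kg_nodes                 # V_seed: mentions grounded to nodes

print(seed_entities("What is the fight song of the university in Lawrence, Kansas?"))
```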
**Retrieval Methods: The outcome of the previous step provides us with a set of entities that are**
related in some capacity to the query. These entities are then leveraged to retrieve a set of facts or
entities that can aid us in answering the query. We summarize the core retrieval methods below.
- Traversal-based retriever: These methods traverse the graph and extract paths to aid in answering
a specific question. Given the set of seed entities $\mathcal{V}_{\text{seed}}$, Yasunaga et al. [492, 493] and Zhang et al. [530] extract all paths up to length two between the entities in $\mathcal{V}_{\text{seed}}$, resulting in a final entity set $V$. They further augment $V$ by including all triples that connect any two entities in $V$. Given $V$, both [492, 530] keep only the top $k$ entities by relevance score. This is calculated by training a separate model that takes the text embedding of the query and entity as input and outputs how relevant
³Throughout this paper, we will refer to facts as triples or edges, interchangeably.
the entity is to the query. For Yasunaga et al. [493], if $|V| > 200$, they randomly sample 200 entities. Sun et al. [381] use a version of beam search to explore the KG. Jiang et al. [185] and Feng et al. [110] extract all paths of length $k \le 2$ between seed entities. Alternatively, LARK [62] retrieves all facts that lie on paths of length $\le k$ starting from the seed entities. Delile et al. [78]
first extract the shortest paths connecting all seed entities. They further prioritize some entities
over others by considering the recency, importance, and relevance to the query of their associated
text. OREOLM [158] traverses $k$ hops from the seed entities, contextualizing the importance of each relation and entity to a path via a learnable $d$-dimensional embedding and its LM-encoded representation. Zhang et al. [522] introduce a trainable retriever that traverses the graph starting
from each seed entity. They also train a model to score each newly visited edge, only keeping a
portion of them. KG-RAG [347] works in a similar manner, scoring each edge by its relevance
and similarity to the query via a dense retriever. They then use an LLM to decide which paths
to explore in the next step. RoG [271] uses instruction tuning to fine-tune an LLM to generate
useful relation paths, which can be retrieved from the KG. KnowledgeNavigator [134] first uses
the query to predict the expected number of hops, hQ, needed during the retrieval stage. It
then traverses hQ hops starting from the seed entities, using an LLM to score and prune irrelevant
nodes. Wu et al. [456] operate in a similar manner; however, they choose which paths to traverse
based solely on the relations. Furthermore, when scoring a path, all relations that lie on that path
are considered when computing the score. RoK [431] considers a different approach, using the
Personalized PageRank (PPR) score to identify useful paths. They further augment these paths by
including the 1-hop neighbors of the seed entities. PullNet [380] assumes that each entity has an
associated set of documents. Given a single seed entity, PullNet traverses k hops, where in each
iteration it extracts the facts for the newly observed entities. It also extracts any entities that are
contained in documents associated with an entity found in the traversal. Furthermore, for each
entity, only the top N facts are used, which are ranked via similarity to the query. KG-R3 [312] uses
MINERVA [75], a reinforcement learning approach to mining paths between entities, to retrieve a
set of important paths between both entities in the fact. Wang et al. [428] use an LLM to traverse
the graph starting from the seed entities. At each iteration of the traversal, the next node to visit
is chosen by prompting an LLM: given the information already collected in the traversal, the
LLM is prompted to generate the remaining information needed to correctly answer the question,
and the neighboring node that best matches the required information is chosen as the next node to
visit. They further instruction-tune the LLM. (A minimal sketch of the two-hop path extraction
used by [492, 530] appears after this list.)
- Subgraph-based retriever: These methods extract a subgraph of size k around each of the
seed entities. Facts that contain one of the seed entities, or that are nearby, should be highly relevant
to answering the question; they may even contain the answer itself. Each of
[308, 395, 181, 205] extracts either the one- or two-hop subgraph around each seed entity. The final
set of facts is the union of each individual subgraph. Gao et al. [118] propose to first extract the
subgraph containing the seed entity and potential answers using the method in Sun et al. [379].
This is then partitioned into a set of smaller subgraphs. Then, they design a framework to rank
the subgraphs, keeping the top k subgraphs for generation. For a question-choice pair, MVP-Tuning [164]
considers the triple that contains the highest number of seed and choice entities. They
further augment this by extracting the top k most similar questions in the dataset using BM25 [338]
and extracting the triples for each of them.
- Rule-based retriever: These methods use pre-defined rules or templates to extract paths from the
graph. GenTKG [246] considers a temporal KG, where they first extract logical rules from the KG,
and use the top k rules to extract paths in a given time interval for the seed entities. Both [72, 270]
generate queries using SPARQL, which are then used to retrieve important paths. KEQING [406]
decomposes the original query into k sub-queries using an LLM fine-tuned via LoRA [153].
For each sub-query, they find the most similar question templates, which are predefined. For each
template, they further pre-define a set of logical chains, which are then used to extract matching
paths for the seed entities in the sub-query from the KG.
- GNN-based retriever: GNN-RAG [289] trains a GNN for the retrieval task. A separate round of
message passing is done for each query q, which is incorporated in the message computation along
with the relation and entity representations. The GNN is then trained as in the node classification
task, where the correct answer entity for q has a label of 1, and 0 otherwise. During inference, the
entities with probability above some threshold are treated as candidate answers, and the shortest
path from the seed entity is extracted. Liu et al. [251] use a conditional GNN [163] for retrieval,
where for each query, only the seed entity (they assume there is only one) is initialized to a non-zero
representation based on the LLM-encoded query. They then run L rounds of message passing
where after each layer l only the top-K new edges are kept, resulting in a set of entities Cq^l. This
is determined by a learnable attention weight, which prunes the other edges from the graph. The
final set of candidate entities is the union over layers, Cq = ∪_{l=1}^{L} Cq^l. It
is optimized in a similar manner to [289]. For each candidate entity, they retrieve an evidence
chain by backtracking from the entity until it reaches the seed entity, choosing those edges with
the highest attention weight. REANO [106] initializes the entity and relation representations via
the mean-pooled representations of all mentions of that entity/relation in the texts, encoded by T5.
They then run a GNN, which includes an attention weight that considers the relevance of a given
triple to the original question (also encoded by T5). After running the GNN, they retrieve the top
K triples in the KG that are most relevant to the question, where relevance is defined via the dot
product between the triple encoded by the GNN and the question.
- Similarity-based retriever: STaRK [452] considers the vector similarity of the query to each entity.
Each entity embedding encodes the textual and relational information together. They further consider
multi-vector similarity, where the entities are encoded using multiple vectors. This is done by
chunking the textual and relational information of each entity, with each chunk being embedded
into its own vector. Both REALM [567] and EMERGE [566] extract the entities most similar to
the query. While REALM only retrieves the entities themselves, EMERGE further retrieves the
1-hop subgraph around each entity.
- Relation-based retriever: Kim et al. [200] propose a general framework for reasoning on KGs
using LLMs. They first use an LLM to segment the original query into a set of sub-sentences {Si}, i ∈ I,
where each sub-sentence Si has an associated set of entities Ei. For each sub-sentence, they further
use an LLM to retrieve the top-k most relevant relations Ri,k. Given the set of k relations, for
each sub-sentence, they retrieve all triples that contain a relation in Ri,k and whose entities are
in ∪_{i∈I} Si. GenTKGQA [119] focuses on temporal KG QA. Like [200], they retrieve the top-k
relations for the query. They then retrieve all facts that contain one of the top k relations and satisfy
the temporal constraints.
- Fusion-based retriever: These techniques consider a combination of different retrieval techniques.
MindMap [443] extracts paths of ≤ k hops from the seed entities along with the 1-hop subgraph
of each seed. These two extracted components are combined into one subgraph. DALK [224]
uses a procedure similar to Mindmap, where they extract both paths and the 1-hop subgraph around
each seed entity. However, they argue that this procedure often results in the retrieval of redundant
or unnecessary information. To remove these facts, they use an LLM to rank the retrieved facts
given both the original question and the subgraph. Only the Top-k most relevant facts are kept.
UniOQA [244] considers two branches for retrieving. The first is a translator, which is a fine-tuned
LLM that generates the answer in CQL format. The second is a searcher that retrieves the 1-hop
subgraph around the seed entities. When determining the answer, answers from the translator
are prioritized over those from the searcher. KG-Rank [483] considers ranking all triples in the
1-hop neighborhood of the seed entities via the similarity of the relation to the query, the similarity
of each triple to the encoded LLM output a = LLM(q), and an MMR ranking [35] that uses the
similarity score. Only the top-ranked triples are kept. GrapeQA [389] extends [492] by further
including a set of “extra nodes”, which are the common neighbors of the entities retrieved via a
path-based retriever. They further introduce a clustering-based method for pruning entities that
may be irrelevant to the query. SubgraphRAG [232] considers both GNN and textual information.
For the GNN, they consider initializing the node representations using a one-hot encoding to
differentiate between seed entities and others. A GNN is then run for L layers, resulting in the final
representation sv for a node v. To retrieve the relevant triples, they consider first concatenating
the final node representations for each triple (h, r, t) such that zτ = [sh, st]. The probability of
choosing this triple is then given by p(h, r, t) = MLP([zq, zh, zr, zt, zτ ]), where zq, zh, zr, zt are
the encoded textual representations of the query and the triple (h, r, t), respectively. Only the top K
triples are chosen.
- Agent-based retriever: These techniques use LLM agents to retrieve facts from the KG. KnowledGPT [425] defines a set of tools for searching over a KG. Given a query, they generate a piece of
code to search over the KG that considers the seed entities. The code is then executed over the KG
to find the correct answer. KG-Agent [182] focuses on fine-tuning an LLM to generate the SQL
code for retrieving the correct answer. Using a set of tools, they extract a set of paths that contain
the seed entities. KnowAgent [568] first identifies the relevant actions for the query via a planning
module. Using these actions, they then generate a set of paths that are used for generation.
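As promised above, here is a minimal sketch of the two-hop traversal-based retrieval described for [492, 530]: collect the entities on all paths of length at most two between seed entities, then augment with every triple connecting two retrieved entities. The adjacency-list representation and the omission of relevance scoring are assumptions of this sketch, not details from the original works.

```python
# Hypothetical sketch of two-hop traversal-based retrieval ([492, 530] style).
# The KG is an adjacency list: {head: [(relation, tail), ...]}.
def two_hop_entities(kg: dict, seeds: set) -> set:
    entities = set(seeds)
    for s in seeds:
        for r1, mid in kg.get(s, []):
            if mid in seeds:          # length-1 path between two seeds
                entities.add(mid)
            for r2, tail in kg.get(mid, []):
                if tail in seeds:     # length-2 path: s -> mid -> tail
                    entities.update({mid, tail})
    return entities

def connecting_triples(kg: dict, entities: set) -> list:
    # Augment with every triple whose endpoints both lie in the entity set.
    return [(h, r, t) for h in entities
            for (r, t) in kg.get(h, []) if t in entities]
```

In the works above, the resulting entity set would then be pruned to the top k entities by a learned query-relevance score.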
**Other retrievers:** KICGPT [439] is concerned with the task of knowledge graph completion, where,
given a partial fact (h, r, ∗), we want to predict the correct entity ê. KICGPT retrieves candidate
entities by first scoring all possible entities with a traditional KG embedding score function. That is,
for a score function f(·) and a partial fact (h, r, ∗), they compute the set of scores {f(h, r, e) ∀e ∈ V}.
They use RotatE [383], a popular embedding method, as the score function f(·). Only the top k
entities by score are retrieved. To supplement this knowledge, they also retrieve (a) all triples with the
same relation as the query and (b) all triples that contain the query entity h. These are referred to as
the analogy and supplement triple pools, respectively.
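A minimal sketch of this scoring step is below, assuming RotatE's complex-valued embeddings with score f(h, r, t) = −‖h ∘ r − t‖; the embedding tables are taken as given, and the function names are hypothetical.

```python
# Hypothetical sketch of KICGPT-style candidate retrieval for (h, r, *):
# score every entity with a RotatE-style function and keep the top k.
import numpy as np

def rotate_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # h, r, t are complex vectors; RotatE constrains |r_i| = 1 so that r
    # acts as an element-wise rotation in the complex plane.
    return -float(np.linalg.norm(h * r - t))

def top_k_candidates(h_emb: np.ndarray, r_emb: np.ndarray,
                     entity_embs: dict, k: int = 10) -> list:
    # Score every candidate tail entity and return the k best by score.
    scores = {e: rotate_score(h_emb, r_emb, t) for e, t in entity_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```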
**3.4** **Organizer**
In this subsection, we describe how the retrieved knowledge is organized for generation. More
concretely, this is how the information is formatted when given to the generator. Note that not every
method necessarily has an explicit organizer. We summarize the common methods below:
- Tuple-based organizer: These methods consider each piece of retrieved information
as an ordered triple. For example, a triple would be included in the generation
prompt as “(entity 1, relation 1, entity 2)”. Similarly, a path of length m is given by
“(entity 1, relation 1, entity 2, relation 2, · · ·, entity m)”. The entities and relations are usually
represented by their names or IDs, and each triple or path is usually listed on a separate line (a minimal formatting sketch appears after this list). Many works append the retrieved paths to the original query as additional context
[381, 185, 289, 347, 271, 62, 251, 431, 568]. Other works that retrieve facts instead of paths
operate in a similar manner, where instead they append the triples [308, 567, 72, 246, 483,
181, 200, 119]. Some methods [566, 395] consider only including the retrieved entities as
the context. Given a set of facts, KG-R3 [312] first lists all entities and then relations, i.e.,
“(entity 1, entity 2, · · ·, entity m, · · ·, relation 1, · · ·, relation m − 1)”. Delile et al. [78] consider
a KG where each entity has an associated chunk of text. Each text chunk for an entity is considered
as a different piece of information to be included in the context. Both [395, 119] represent each
entity and relation as an embedding, which is the combination of the LLM and GNN embedding.
Liu et al. [251] further include the probability of each path containing the correct answer given by
the GNN model. MVP-Tuning [164] considers combining multiple facts that share the same subject
and relation to remove redundant information. That is, for a subject-relation pair (subject, relation),
they denote the facts for k possible objects as “subject relation {object 1, · · ·, object k}”. KG-Agent [182] stores the current KG information and the historical reasoning programs in lists.
- Text organizer: Wu et al. [456] verbalize the retrieved subgraph by passing each triple to the
LLM and prompting it to convert the triple to a text representation. MindMap [443] uses a
similar procedure for subgraphs, where each is organized as a path before being passed to the LLM.
Some methods use a set of pre-defined templates to verbalize the triples or paths [134, 244, 205].
Wang et al. [406] experiment with verbalizing either via an LLM or pre-defined question templates,
finding that LLM-based verbalizing works better for ChatGPT while template-based works better
for LLaMA [398]. KICGPT [439] uses a combination of data preprocessing and LLM prompting
to convert the triples to text. StaRK [452] uses an LLM to synthesize each entity with its relational
and textual information. Note that they use some pre-defined templates that depend on the specific
task. CoTKR [457] uses an LLM to summarize and then re-write a subgraph of facts for a question
through a “knowledge rewriter”. To train the rewriter, preference alignment is used, which optimizes
the rewriter’s output to match our preferred output. First, k representations of the retrieved subgraph
are produced, with ChatGPT choosing the best and work representations as the most and least
preferred solutions.
- Other organizer: There are some exceptions to the previous classification. KnowledGPT [425]
represents the information in a Python class format. They also experiment with
including additional information like the entity description and entity-aspect information.
- Re-Ranking: Some methods also re-rank the retrieved information into a specific order, since the
order of information can have a subtle impact on LLM performance. Delile et al. [78] order the
text chunks of each entity based on impact (measured by the number of citations of the parent paper)
and recency. Dai et al. [72] sort the triples by their relevance scores. Choudhary
and Reddy [62] attempt to order the paths in a logical manner, such that for a given path, the
subsequent paths build upon it. Yang et al. [483] re-rank the retrieved triples using a task-specific
Cross-Encoder [187]. STaRK [452] considers re-ranking the retrieved entities using an LLM. The
LLM is given the relational and textual information of each entity and is asked to assign a score from 0
to 1, which is then used for re-ranking. GenTKG [246] orders the paths by the time they occurred,
further including the time with each. KICGPT [439] ranks all entities using the score of the KG
embedding score function, keeping only the top k entities. KICGPT re-rank the entities using
in-context learning, where they prompt the LLM with examples from the analogy and supplement
pool, as prior knowledge to aid the LLM in how to re-rank the entities.
**3.5** **Generator**
In this section, we describe how the retrieved and organized data is used to generate a final response
to the query. We categorize these generators according to the type of methods used to create these
responses.
- LLM-based generator: The vast majority of works use an LLM to generate the response. The input
to the LLM is the original query and the retrieved, organized context, formatted using a specific
template (a minimal sketch appears after this list). The most commonly used LLMs include ChatGPT [310], Gemini [390], Mistral [180],
Gemma [391], among others. For open-source models where the weights are publicly available,
fine-tuning is sometimes used to modify the weights for a specific task [244, 568]. This is often
done through LoRA [153], which allows for efficient fine-tuning.
- GNN-based generator: Some methods use graph neural networks (GNNs) [207] to conduct the
generation. Yasunaga et al. [492], Taunk et al. [389], and Feng et al. [110] extract both the language and
GNN embeddings for each potential answer (i.e., entity), conditioned on the query. The probability
of a single entity being the answer is then learned from the fusion of the two types of embeddings.
- Other generators: Zhang et al. [530], Yasunaga et al. [493], Hu et al. [158] formulate the
prediction as a masked language modeling (MLM) problem. The goal is to predict the correct value
(i.e., entity) for the masked token that answers the query. To do so, they fine-tune the RoBERTa [263]
language model. KG-R3 [312] scores the potential answer entities by performing cross-attention
between the representations of the query and each individual entity. PullNet [380] uses GraftNet [379]
to score the different entities. Gao et al. [118] first select the correct subgraph by computing
the cosine similarity between the query and subgraph representations. The subgraph with the
highest similarity is then fed to GraftNet [379] to select the most probable entity. REANO [106]
passes the encoded triples and their associated text passages to the T5 decoder. The task is framed
as a classification problem, where the goal is to assign the highest probability to the triple with the
correct answer.
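As referenced above, a minimal sketch of an LLM-based generator follows; the template wording is illustrative, and `complete` stands in for whichever LLM client a system uses.

```python
# Hypothetical sketch of an LLM-based generator: place the organized
# context and the query into a template and pass it to an LLM.
PROMPT_TEMPLATE = """Answer the question using the knowledge below.

{context}

Question: {question}
Answer:"""

def generate_answer(question: str, context: str, complete) -> str:
    # `complete` is any callable mapping a prompt string to model output.
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    return complete(prompt)
```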
**3.6** **Resources and Tools**
In this section, we list common tools and KGs that are used in graph RAG systems. For each, we
give a brief description and a link to the project.
**3.6.1** **Data Resources**
- Freebase⁴ [27] is an encyclopedic KG that contains a large variety of general and basic facts.
- ConceptNet⁵ [373] is a semantic graph, where the links in the graph describe the meaning of
different words or ideas.
- WikiData⁶ [404] is a crowdsourced knowledge base that functions as a structured analog to the
Wikipedia encyclopedia.
**3.6.2** **Tools**
- Graph RAG⁷ [98] is the official open-source implementation of the Graph RAG framework [98].
It can also be installed via the graphrag Python package.
⁴https://developers.google.com/freebase
⁵https://conceptnet.io/
⁶https://www.wikidata.org/wiki/Wikidata:Main_Page
⁷https://github.com/microsoft/graphrag