Title: Question Decomposition for Retrieval-Augmented Generation

URL Source: https://arxiv.org/html/2507.00355

Paul J. L. Ammann, Jonas Golde, Alan Akbik

Humboldt-Universität zu Berlin

{paul.ammann, jonas.max.golde.1, alan.akbik}@hu-berlin.de

###### Abstract

Grounding large language models (LLMs) in verifiable external sources is a well-established strategy for generating reliable answers. Retrieval-augmented generation (RAG) is one such approach, particularly effective for tasks like question answering: it retrieves passages that are semantically related to the question and then conditions the model on this evidence. However, multi-hop questions, such as “Which company among NVIDIA, Apple, and Google made the biggest profit in 2023?”, challenge RAG because relevant facts are often distributed across multiple documents rather than co-occurring in one source, making it difficult for standard RAG to retrieve sufficient information. To address this, we propose a RAG pipeline that incorporates question decomposition: (i) an LLM decomposes the original query into sub-questions, (ii) passages are retrieved for each sub-question, and (iii) the merged candidate pool is reranked to improve the coverage and precision of the retrieved evidence. We show that question decomposition effectively assembles complementary documents, while reranking reduces noise and promotes the most relevant passages before answer generation. Although reranking itself is standard, we show that pairing an off-the-shelf cross-encoder reranker with LLM-driven question decomposition bridges the retrieval gap on multi-hop questions and provides a practical, drop-in enhancement, without any extra training or specialized indexing. We evaluate our approach on the MultiHop-RAG and HotpotQA benchmarks, showing gains in retrieval (MRR@10: +36.7%) and answer accuracy (F1: +11.6%) over standard RAG baselines.

![Figure 1](https://arxiv.org/html/2507.00355v1/extracted/6577976/imgs/QD_RR.png)

Figure 1: (a) Standard retrieval in RAG versus (b) our approach using question decomposition and reranking.

1 Introduction
--------------

Retrieval-augmented generation (RAG) addresses knowledge gaps in large language models (LLMs) by retrieving external information at inference time (Lewis et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib16)). While effective, RAG’s performance depends heavily on retrieval quality; irrelevant documents can mislead the model and degrade the quality of its output (Cho et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib3); Shi et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib30)). For example, when asked “Who painted Starry Night?”, a naive retriever may surface a general Wikipedia article on _Post-Impressionism_ rather than the specific page on _Vincent van Gogh_, offering little direct evidence for the correct answer. This issue becomes more pronounced in multi-hop QA tasks, where supporting facts are spread across multiple documents. For instance, a single, undifferentiated search for the query “Which company among NVIDIA, Apple, and Google made the biggest profit in 2023?” might return a broad market overview article mentioning all three companies together, but omit their individual 2023 earnings reports, forcing the model to respond without access to the necessary disaggregated information.

Challenges of Multi-hop Retrieval. Complex questions often require reasoning over multiple entities, events, or steps, which are rarely addressed within a single document. While the individual facts needed to answer such questions may be simple, the required evidence is typically distributed across multiple sources. To improve retrieval coverage in multi-hop QA settings, our approach decomposes the original question into simpler subqueries, a process we refer to as _question decomposition_ (Perez et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib23)). By breaking down a complex query into focused subqueries, question decomposition increases the likelihood of retrieving documents that address distinct aspects of the information need, especially when information sources are self-contained.

Consider the question: “Which planet has more moons, Mars or Venus?” In a standard RAG pipeline, the entire question is embedded as a single unit, and the retriever attempts to find a single passage that answers it directly (cf. [Figure 1](https://arxiv.org/html/2507.00355v1#S0.F1 "In Question Decomposition for Retrieval-Augmented Generation")a). In practice, this often results in retrieving a general article about planetary science or solar system formation. We assume that the relevant facts are located in two self-contained documents, one about Mars and the other about Venus. With question decomposition (QD), we leverage increasingly capable LLMs to generate fact-seeking subquestions such as “How many moons does Mars have?” and “How many moons does Venus have?”, each of which is more likely to retrieve a precise, relevant answer from its respective source (cf. [Figure 1](https://arxiv.org/html/2507.00355v1#S0.F1 "In Question Decomposition for Retrieval-Augmented Generation")b).

Contributions. In this paper, we present a retrieval-augmented generation pipeline that integrates question decomposition with reranking to improve multi-hop question answering. Our QD component uses an LLM to decompose complex questions into simpler subqueries, each addressing a specific part of the information need, and thus requires no fine-tuning or task-specific training. Retrieved results from all subqueries are aggregated to form a broader and more semantically relevant candidate pool.

To mitigate the noise introduced by retrieving documents for each subquery, we apply a pre-trained reranker that scores each candidate passage based on its relevance to the original complex query. This substantially improves precision by filtering out irrelevant results. In combination, question decomposition ensures broad evidence coverage, while reranking distills this expanded set into a concise collection of highly relevant passages.

We evaluate our approach on the MultiHop-RAG and HotpotQA benchmarks and demonstrate substantial gains in recall and ranking metrics over standard RAG and single-component variants. We further analyze the inference overhead, showing that the added cost of QD remains manageable. Our main contributions are as follows:

* We propose a question decomposition (QD)–based RAG pipeline for multi-hop question answering, where an LLM decomposes complex questions into simpler subqueries without any task-specific training.
* To improve precision, we incorporate a cross-encoder reranker that scores retrieved passages based on their relevance to the original complex query, effectively filtering noise from the expanded candidate pool introduced by QD.
* We empirically validate our approach on the MultiHop-RAG and HotpotQA benchmarks, demonstrating substantial improvements in retrieval recall, ranking quality, and final answer accuracy, achieved without any domain-specific fine-tuning.

We release our code on GitHub for reproducibility: https://github.com/Wecoreator/qd_rag

2 Methodology
-------------

Our pipeline follows the retrieval-augmented generation framework of Lewis et al. ([2020](https://arxiv.org/html/2507.00355v1#bib.bib16)), which combines a retriever with a generative language model. The goal is to answer a natural language query $q$ by grounding the language model’s response in documents retrieved from a large corpus $\mathcal{D}$.

#### Retrieval.

In the first step, a query encoder $f_q$ and a document encoder $f_d$ project queries and documents into a shared vector space (Karpukhin et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib14)). During retrieval, the query representation $f_q(q)$ is compared to all document embeddings $f_d(d)$ using inner product similarity. Subsequently, we select the top-$k$ most relevant documents:

$$R(q) = \mathrm{Top}\text{-}k_{d \in \mathcal{D}}\left(\langle f_q(q), f_d(d) \rangle\right)$$

Here, $\langle \cdot, \cdot \rangle$ denotes the similarity score between the query and document embeddings, computed as inner product similarity in the shared embedding space. This dense retrieval stage identifies documents that are semantically similar to the query and provides candidates for grounding the language model’s response.
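
As a minimal sketch of this stage (ours, not the authors' code), exact top-$k$ inner-product retrieval over precomputed embeddings can be written as:

```python
import numpy as np

def retrieve_top_k(query_vec: np.ndarray, doc_matrix: np.ndarray, k: int) -> list[int]:
    """Return the indices of the k documents with the highest inner-product
    similarity <f_q(q), f_d(d)>, ranked in descending order of score."""
    scores = doc_matrix @ query_vec           # one score per document in D
    return np.argsort(-scores)[:k].tolist()   # exact search, no ANN approximation
```

In practice the scores would come from a vector index rather than a dense matrix product, but the semantics are the same.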

#### Reranking.

To refine the initial retrieval set $R(q)$, we apply a pre-trained reranker that computes fine-grained relevance scores between the query $q$ and each candidate document $d \in R(q)$. Cross-encoder rerankers are a staple of modern information retrieval and already feature in recent RAG systems (Glass et al., [2022](https://arxiv.org/html/2507.00355v1#bib.bib9); Wang et al., [2024b](https://arxiv.org/html/2507.00355v1#bib.bib35)). We therefore deliberately employ an off-the-shelf model. Each query–document pair is jointly encoded by a transformer model, producing a single relevance score $g_\phi(q, d) \in \mathbb{R}$, where $\phi$ denotes the model parameters. The top-$k$ documents (ranked in descending order of $g_\phi(q, d)$) form the final reranked set $R'(q)$. Only these top-$k$ ranked passages are passed to the generator, while the rest are discarded. Unlike the retrieval stage, where queries and documents are encoded independently for efficiency, reranking involves joint encoding of each pair, which increases computational cost but enables more accurate relevance estimation by modeling interactions between query and document tokens.
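
The rerank-then-truncate logic reduces to a few lines; here `score` is a placeholder standing in for the cross-encoder $g_\phi(q, d)$, not the actual model:

```python
from typing import Callable

def rerank(query: str, candidates: list[str],
           score: Callable[[str, str], float], k: int) -> list[str]:
    """Jointly score each (query, passage) pair and keep only the top-k
    passages; everything below the cutoff is discarded before generation."""
    ranked = sorted(candidates, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]
```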

#### Generation.

A pretrained autoregressive LLM receives the concatenation of $q$ and the top-ranked passages and then generates the answer. Specifically, we concatenate the query with the top-ranked passages $R'(q) = \{d_1, \dots, d_r\}$ into a single input sequence:

$$x = [q; d_1; d_2; \dots; d_r]$$

The model then generates the answer token-by-token, modeling the conditional probability:

$$p(y \mid x) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, x).$$

In this way, the language model can attend over the complete retrieved context and generate a response grounded in multiple pieces of evidence simultaneously.
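
Concretely, the concatenation $x = [q; d_1; \dots; d_r]$ amounts to assembling a single prompt string; the template below is illustrative, not the exact one used in the paper:

```python
def build_prompt(query: str, passages: list[str]) -> str:
    """Concatenate the query q with the top-ranked passages d_1..d_r into
    one input sequence x for the generator."""
    context = "\n\n".join(f"[{i}] {p}" for i, p in enumerate(passages, start=1))
    return f"Question: {query}\n\nContext:\n{context}\n\nAnswer:"
```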

3 RAG with Question Decomposition
---------------------------------

A _naive_ RAG system encodes the user query $q$ once and retrieves the top-$k$ most relevant passages. These retrieved documents are then concatenated with the query and used as input to the language model, which generates an answer (Lewis et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib16); Karpukhin et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib14)). Notably, this baseline assumes that the top-ranked passages contain all necessary evidence, treating each question as single-hop and ignoring multi-step reasoning or dependencies across documents.

Our proposed pipeline augments the standard RAG framework with two additional components: a _question decomposition_ module and a _reranking_ module. A comparison between our approach and a naive RAG baseline is illustrated in [Figure 1](https://arxiv.org/html/2507.00355v1#S0.F1 "In Question Decomposition for Retrieval-Augmented Generation"). To address the challenges posed by multi-hop questions, which can degrade retrieval performance in standard RAG, we (i) decompose the original query into a set of simpler sub-queries, (ii) retrieve documents for each sub-query, (iii) merge and deduplicate the retrieved results, and (iv) apply a reranker to filter out noisy or weakly relevant candidates. From this filtered set, only the top-$k$ passages $R'(q)$ are passed to the language model. The full pipeline is described in [Algorithm 1](https://arxiv.org/html/2507.00355v1#alg1 "In 3 RAG with Question Decomposition ‣ Question Decomposition for Retrieval-Augmented Generation").

```
Require: query q, documents D, cutoff k
Ensure:  R'(q): top-k passages relevant to q

1: Q ← {q} ∪ Decompose(q)        ▷ original and decomposed queries
2: C ← ∅                         ▷ global candidate set
3: for all q' ∈ Q do
4:     C ← C ∪ Top-k(q', D)      ▷ add top-k candidates for each query
5: end for
6: C ← Rerank(C)                 ▷ using a pre-trained reranker
7: R'(q) ← Head(C, k)            ▷ retain the highest-scoring k
8: return R'(q)
```

Algorithm 1: Retrieval with question decomposition. Given a complex query $q$, the algorithm first generates sub-queries using an LLM, retrieves documents for each, and aggregates the results. A reranker then filters the merged candidate set, and the top-$k$ passages are selected for downstream generation.
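
Under the assumption that `decompose`, `top_k`, and `score` wrap the LLM decomposer, the dense retriever, and the cross-encoder respectively (all three are placeholders here), the pipeline can be sketched as:

```python
def qd_retrieve(q, docs, k, decompose, top_k, score):
    """Retrieval with question decomposition, following Algorithm 1."""
    queries = [q] + decompose(q)          # Q <- {q} ∪ Decompose(q)
    candidates = []                       # C <- ∅
    for sub_q in queries:
        for d in top_k(sub_q, docs, k):   # C <- C ∪ Top-k(q', D)
            if d not in candidates:       # merge and deduplicate
                candidates.append(d)
    candidates.sort(key=lambda d: score(q, d), reverse=True)  # Rerank(C)
    return candidates[:k]                 # Head(C, k): highest-scoring k
```

Note that the reranker scores candidates against the original query $q$, not the sub-query that retrieved them.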

### 3.1 QD Module

Given a complex question $q$, we define a prompting function $\textsc{Decompose}(q, p)$ that produces a set of sub-queries $\{\tilde{q}_1, \dots, \tilde{q}_n\}$, where $p$ is a fixed natural language prompt provided to an instruction-tuned language model. The number of sub-queries $n$ is not fixed but typically small, depending on how many distinct aspects or reasoning steps are involved in answering $q$. The final set of queries used for retrieval is defined as $Q = \{q\} \cup \textsc{Decompose}(q, p)$, where the original query $q$ is always retained to preserve baseline retrieval performance.
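
As an illustration, the prompting function can be realized with a template and a simple parser for line-separated model output. Both the template and the parser below are our own assumptions, since the exact prompt $p$ is not reproduced in this section:

```python
DECOMPOSE_PROMPT = (
    "Break the following question into at most {n} simple, self-contained "
    "sub-questions, one per line:\n\n{question}"
)

def parse_subqueries(llm_output: str, n_max: int = 5) -> list[str]:
    """Parse the model's line-separated output into sub-queries, dropping
    blank lines and leading list markers such as '1.' or '-'."""
    lines = (l.strip().lstrip("-*0123456789.) ") for l in llm_output.splitlines())
    return [l for l in lines if l][:n_max]
```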

### 3.2 Reranker Module

Decomposing a complex question $q$ into multiple sub-queries $\{\tilde{q}_1, \dots, \tilde{q}_n\}$ naturally increases retrieval coverage but also introduces the risk of noise. Since documents are retrieved independently for each sub-query, some may be overly specific, only partially relevant, or even unrelated to the original question. To address this, we apply a reranking module that scores each retrieved document based on its relevance to the original complex query $q$. This step helps to realign the expanded candidate pool with the user’s initial intent by filtering out documents that, while relevant to a sub-question, do not meaningfully contribute to answering $q$ as a whole. The goal is to retain only passages that clearly address distinct aspects of the original question, improving precision in the final evidence set.

4 Experiments
-------------

We evaluate our proposed question decomposition pipeline on established multi-hop question answering benchmarks, focusing specifically on the retrieval stage. This allows us to isolate and directly measure improvements in evidence selection, independent of downstream generation. Following prior work, we report results on the evaluation split, as gold test labels are not publicly available.

### 4.1 Datasets

We use the following datasets in our experiments:

#### MultiHop-RAG.

MultiHop-RAG (Tang and Yang, [2024](https://arxiv.org/html/2507.00355v1#bib.bib32)) is specifically designed for RAG pipelines and requires aggregating evidence from multiple sources to answer each query. In addition to question–answer pairs, it provides gold evidence annotations, enabling fine-grained evaluation of both retrieval accuracy and multi-hop reasoning. Importantly, the retrieval and generation components are evaluated separately, allowing for a focused analysis of each component and a fair comparison across systems.

#### HotpotQA.

HotpotQA (Yang et al., [2018](https://arxiv.org/html/2507.00355v1#bib.bib43)) is a widely used multi-hop question answering benchmark constructed over Wikipedia. It features questions that explicitly require reasoning over two or more supporting passages. Gold answers and annotated supporting facts are provided, making it suitable for evaluating both retrieval and end-to-end QA performance. In this work, we focus on retrieval accuracy to assess how well different strategies recover the necessary evidence.

### 4.2 Baselines

To assess the individual and combined contributions of question decomposition (QD) and reranking within multi-hop RAG, we evaluate four system configurations:

1. **Naive RAG** is the base setup in which a single query $q$ is embedded, and the top-$k$ most relevant passages are retrieved from the corpus $\mathcal{D}$ using dense retrieval.
2. **RAG + QD** modifies the retrieval stage by introducing question decomposition. The original query $q$ is transformed into a set of sub-queries $\{\tilde{q}_1, \dots, \tilde{q}_n\}$, and retrieval is performed independently for each element of $Q = \{q\} \cup \{\tilde{q}_i\}$. The retrieved results are merged, and the top-$k$ passages are selected based on similarity scores. This setup increases retrieval coverage by capturing information across multiple query aspects.
3. **RAG + Reranker** retains the single-query retrieval approach but adds a reranking step. To support more diverse initial candidates, we retrieve the top-$2k$ passages for the original query, which are then scored by a reranker. The top-$k$ passages according to this score are selected as final input.
4. **RAG + QD + Reranker** combines both components. It first decomposes the query into sub-queries, retrieves documents for each, merges the results, and applies reranking to select the final top-$k$ passages. This configuration aims to improve both evidence coverage and ranking precision in multi-hop QA scenarios.

### 4.3 Evaluation Metrics

We report dataset-specific evaluation metrics in accordance with the protocols defined for each benchmark.

#### MultiHop-RAG.

Following Tang and Yang ([2024](https://arxiv.org/html/2507.00355v1#bib.bib32)), we report the following three retrieval-oriented metrics:

* Hits@$k$ for $k \in \{4, 10\}$, which represents the percentage of questions for which at least one gold evidence passage appears in the top-$k$ retrieved passages.
* MAP@10 (_mean average precision_) computes the average precision at each rank position where a gold passage is retrieved, and then averages this over all queries. We truncate at rank 10.
* MRR@10 (_mean reciprocal rank_) computes the mean of the reciprocal rank of the first correct passage, rewarding systems that surface a gold document as early as possible. We also truncate at rank 10.
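
For concreteness, Hits@$k$ and MRR@$k$ follow directly from these definitions (our own helper functions, not the benchmark's evaluation script):

```python
def hits_at_k(ranked, gold, k):
    """1.0 if at least one gold passage appears in the top-k, else 0.0."""
    return float(any(d in gold for d in ranked[:k]))

def mrr_at_k(ranked, gold, k=10):
    """Reciprocal rank of the first gold passage within the top-k, else 0.0."""
    for rank, d in enumerate(ranked[:k], start=1):
        if d in gold:
            return 1.0 / rank
    return 0.0
```

Both are averaged over all queries to obtain the reported numbers.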

#### HotpotQA.

For HotpotQA, we adopt the official QA-centric evaluation metrics introduced in the original benchmark (Yang et al., [2018](https://arxiv.org/html/2507.00355v1#bib.bib43); Rajpurkar et al., [2016](https://arxiv.org/html/2507.00355v1#bib.bib26)). Results are reported separately for (i) answer accuracy, (ii) supporting fact prediction, and (iii) their joint correctness. The joint metric constitutes a stricter criterion, requiring both the predicted answer and the corresponding supporting evidence to be correct. This provides a more comprehensive assessment of system performance by jointly evaluating generation quality and the relevance of retrieved evidence.

* EM (_exact match_) measures whether the predicted answer exactly matches the reference answer string.
* F1, Precision, Recall measure token-level overlap between the predicted and reference answers, thus allowing for partially correct answers.
* Supporting-Fact EM, F1, Precision, Recall are the same metrics applied to the gold-labeled supporting facts.
* Joint EM, F1, Precision, Recall consider a prediction correct only if both the answer and _all_ supporting facts are correct. This metric captures the system’s ability to jointly generate correct answers and identify the correct supporting evidence.
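
Token-level F1 is the harmonic mean of precision and recall over bag-of-words overlap, as in the SQuAD evaluation script; a simplified version (omitting the official normalization of articles and punctuation) looks like:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a reference answer."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())  # multiset overlap
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```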

### 4.4 Implementation Details

#### Retrieval

We embed each passage chunk using bge-large-en-v1.5 ($d = 1024$) (Xiao et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib38)). The resulting embeddings are stored in a FAISS IndexFlatIP index to enable exact maximum inner product search. This setup ensures that any observed gains are attributable to question decomposition and reranking, rather than approximations introduced by approximate nearest neighbor search (Douze et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib4); Facebookresearch, [2024](https://arxiv.org/html/2507.00355v1#bib.bib5)).

#### Reranker

We rescore the retrieved passages using the bge-reranker-large cross-encoder (Xiao et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib38)). The model outputs a relevance logit for each query–passage pair. We then sort the passages by their scores and retain the top-$k$ passages, which are appended to the prompt for answer generation.

#### Generation Model

We generate answers using Qwen2.5-32B-Instruct (Qwen Team, [2024](https://arxiv.org/html/2507.00355v1#bib.bib25); Yang et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib42)), operating in bfloat16 precision. We use a maximum sequence length of 512 tokens.

#### Software

In our implementation, we use LangChain (LangChain, [2025](https://arxiv.org/html/2507.00355v1#bib.bib15)), Hugging Face Transformers (Wolf et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib36)), and faiss-cpu (Yamaguchi, [2025](https://arxiv.org/html/2507.00355v1#bib.bib40)). All our experiments are executed on NVIDIA A100 GPUs with 80GB of memory.

### 4.5 Hyperparameters

We use the following hyperparameters across all experiments: the number of retrieved passages is fixed at $k = 10$ for all datasets, consistent with the official evaluation settings of both HotpotQA and MultiHop-RAG. Both sub-query generation and answer synthesis are performed with a sampling temperature of 0.8, and we apply nucleus sampling with top-$p = 0.8$.
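
In Hugging Face `generate` terms, these settings roughly correspond to the following keyword arguments (our reading of the setup; in particular, interpreting the 512-token limit as `max_new_tokens` is an assumption):

```python
generation_kwargs = dict(
    do_sample=True,        # enable sampling rather than greedy decoding
    temperature=0.8,       # used for both sub-query generation and answers
    top_p=0.8,             # nucleus sampling
    max_new_tokens=512,    # 512-token generation limit
)
```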

5 Results
---------

### 5.1 MultiHop-RAG

Table 1: Retrieval performance on the MultiHop-RAG eval split. †: We report the best baselines from Tang and Yang ([2024](https://arxiv.org/html/2507.00355v1#bib.bib32)), including text-ada-002 and voyage-002 models with reranking.

We present retrieval results on the MultiHop-RAG dataset in Table [1](https://arxiv.org/html/2507.00355v1#S5.T1 "Table 1 ‣ 5.1 MultiHop-RAG ‣ 5 Results ‣ Question Decomposition for Retrieval-Augmented Generation"). Question decomposition (qd) and reranking (rr) individually improve recall-oriented metrics: qd yields +4.4 percentage points on Hits@4 and +2.9 on Hits@10, while rr achieves a +7.6 point gain on Hits@4. Reranking also substantially improves MAP@10 and MRR@10. Our proposed pipeline, which combines both modules (qd+rr), achieves the strongest results overall, reaching 87.2% Hits@10 and 0.635 MRR@10.

For comparison, the strongest configurations in the original MultiHop-RAG paper (Tang and Yang, [2024](https://arxiv.org/html/2507.00355v1#bib.bib32)) use text-ada-002 (OpenAI, [2022](https://arxiv.org/html/2507.00355v1#bib.bib22)) and voyage-02 (Voyage AI Innovations Inc., [2024](https://arxiv.org/html/2507.00355v1#bib.bib33)) embeddings with the bge-reranker-large reranker. Despite using a smaller embedding model, we demonstrate strong improvements over the reported 74.7% Hits@10 and 0.586 MRR@10: our qd+rr improves Hits@10 by 16.5% and MRR@10 by 8.4%. However, we also note that our approach falls short on MAP@10.

Interestingly, despite the larger retrieval pool from decomposition, MAP@10 also increases (0.322 vs. 0.274 in rr), suggesting that reranking not only filters noise but leverages the broader context to prioritize relevant passages. These findings reinforce the complementary strengths of QD and reranking: decomposition expands coverage, and reranking restores precision.

Table 2: HotpotQA dev results. Upper block: answer metrics; middle: supporting-fact metrics; lower: joint metrics.

### 5.2 HotpotQA

Table [2](https://arxiv.org/html/2507.00355v1#S5.T2 "Table 2 ‣ 5.1 MultiHop-RAG ‣ 5 Results ‣ Question Decomposition for Retrieval-Augmented Generation") presents answer-level, supporting-fact, and joint metrics on the dev split of HotpotQA. (The official test set is hidden; as we do not train new models, we follow standard practice and evaluate on the dev set.) Applying question decomposition (qd) alone yields only marginal improvements over the naive RAG baseline, with answer F1 increasing from 31.3 to 32.3 and EM from 25.4 to 26.1. Reranking (rr) leads to stronger gains (F1: 32.9, EM: 26.4), demonstrating its effectiveness in improving retrieval relevance. The combined system (qd+rr) achieves the best overall results, with the highest answer EM (28.1), F1 (35.0), precision (37.1), and recall (34.8), indicating that improved coverage and ranking together lead to better evidence-grounded answers.

For supporting-fact metrics, qd+rr achieves the highest precision (46.8), despite having lower EM (17.9) and F1 (11.2) compared to rr, which achieves the highest supporting-fact EM (19.6) and F1 (12.9). Interestingly, qd+rr achieves the highest supporting-fact and joint precision (46.8 and 23.1, respectively), even though decomposition typically expands the retrieval pool and might be expected to reduce precision. This suggests that reranking effectively filters out less relevant candidates, even when starting from a broader and potentially noisier set. Moreover, the results indicate that decomposed sub-queries may surface complementary evidence that, after reranking, leads to more complete and better-aligned evidence sets. In some cases, a single document may contain answers to multiple sub-parts of a complex query, allowing the system to retrieve multi-hop evidence more efficiently than anticipated. These findings highlight the strength of combining decomposition with reranking: the former improves coverage, while the latter restores precision.

### 5.3 Ablation: subqueries generated vs. gold evidences

Table [3](https://arxiv.org/html/2507.00355v1#S5.T3 "Table 3 ‣ 5.3 Ablation: subqueries generated vs. gold evidences ‣ 5 Results ‣ Question Decomposition for Retrieval-Augmented Generation") compares the number of gold evidence sentences per query with the number of subqueries produced by the question decomposition module. We instruct the LLM to generate at most 5 subqueries per query in order to keep our experiments strictly zero-shot. Most questions require only two or three supporting facts (e.g., 67.4% of HotpotQA questions require exactly two), yet the LLM almost always generates exactly five subqueries (93.3% on MultiHop-RAG, 98.6% on HotpotQA), matching the prompt limit. However, we note that allowing variable-size decomposition could better align with actual evidence needs.

Table 3: Distribution of required gold evidences vs. sub-queries generated by QD. Rows sum to 100%; buckets below 1% are omitted.

Correlation analysis. Both Pearson and Spearman coefficients are near zero (Table [5](https://arxiv.org/html/2507.00355v1#A1.T5 "Table 5 ‣ Appendix A Additional Ablation Results ‣ Question Decomposition for Retrieval-Augmented Generation")), indicating no relationship between the number of sub-queries and the number of gold evidences. This suggests that the LLM does not aim to predict the number of reasoning steps (or “hops”), but instead produces a diverse set of focused subqueries. Importantly, our goal was not to mirror the gold evidence count, but to ensure broad coverage through over-complete decomposition, increasing the chance of retrieving all relevant evidence. The near-zero correlation scores suggest the model applies a fixed subquery “budget” defined by the prompt, rather than adapting to question complexity.
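
Both coefficients can be reproduced with a few lines of code. A minimal sketch with hand-rolled Pearson and Spearman (Spearman is simply Pearson computed on tie-averaged ranks), assuming two equal-length lists of per-query counts:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """1-based ranks; tied values receive the average of their rank positions."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson on the rank-transformed data."""
    return pearson(ranks(x), ranks(y))
```

Note that when one variable is almost constant, as with the fixed five-subquery budget here, the coefficient is dominated by the few deviating queries, which is consistent with the near-zero values observed.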

![Image 2: Refer to caption](https://arxiv.org/html/2507.00355v1/extracted/6577976/imgs/gold_evidences_vs_num_subqueries.png)

Figure 2: Absolute counts of gold evidences (blue) vs. subqueries generated (orange). Left: MultiHop-RAG; right: HotpotQA.

### 5.4 Efficiency

Table [4](https://arxiv.org/html/2507.00355v1#S5.T4 "Table 4 ‣ 5.4 Efficiency ‣ 5 Results ‣ Question Decomposition for Retrieval-Augmented Generation") reports end-to-end retrieval latency (excluding generation) for 250 MultiHop-RAG queries. While Naive RAG is extremely fast (0.03s/query), adding reranking (rr) increases latency substantially to 0.88s/query due to the cost of scoring and sorting candidate passages with a cross-encoder. The overhead of question decomposition (qd) is 16.7s/query, primarily due to the additional LLM inference required to generate subqueries. When combined, the full qd+rr system reaches 18.9s/query, far slower than the naive RAG baseline. However, once a query has been decomposed, its subqueries can be reused (e.g., through caching), so the decomposition overhead disappears for repeated queries. A practical implementation is trivial: keep a small key-value store whose key is the raw user query and whose value is the list of generated sub-queries; on a cache hit the expensive QD LLM call is skipped entirely. These results highlight a key tradeoff: while qd+rr achieves the best retrieval quality (Section [5.1](https://arxiv.org/html/2507.00355v1#S5.SS1 "5.1 MultiHop-RAG ‣ 5 Results ‣ Question Decomposition for Retrieval-Augmented Generation")), it does so at the cost of increased latency.
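
The key-value store described here can be sketched in a few lines (the class name and the key normalization are illustrative, not from the paper):

```python
class SubqueryCache:
    """Memoize question decomposition so repeated queries skip the LLM call."""

    def __init__(self, decompose_fn):
        self._decompose = decompose_fn   # expensive LLM-backed decomposition
        self._store = {}                 # raw user query -> list of subqueries
        self.hits = 0
        self.misses = 0

    def get(self, query: str) -> list:
        key = query.strip().lower()      # simple normalization of the cache key
        if key in self._store:
            self.hits += 1               # cache hit: no LLM inference needed
        else:
            self.misses += 1
            self._store[key] = self._decompose(query)
        return self._store[key]
```

In a deployment, the in-memory dict would typically be replaced by a shared store (e.g., Redis) with an eviction policy, but the hit/miss logic stays the same.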

Table 4: Retrieval wall-clock times on 250 MultiHop-RAG queries.

6 Related Work
--------------

#### Retrieval-Augmented Generation and Multi-Hop QA.

RAG augments LLMs with access to external information at inference time, addressing their inherent limitations in handling up-to-date or specialized knowledge (Lewis et al., [2020](https://arxiv.org/html/2507.00355v1#bib.bib16)). RAG has shown promise in knowledge-intensive tasks such as open-domain and multi-hop question answering (QA), where single-document retrieval is often insufficient (Yang et al., [2018](https://arxiv.org/html/2507.00355v1#bib.bib43); Joshi et al., [2017](https://arxiv.org/html/2507.00355v1#bib.bib13)). However, RAG performance heavily depends on the quality of retrieved content: irrelevant or misleading passages can significantly impair answer quality (Cho et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib3); Shi et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib30); Yan et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib41)).

#### Question Decomposition for Multi-Hop Retrieval.

To better address multi-hop queries that span multiple evidence sources, recent work has explored decomposing complex questions into simpler subqueries (Feldman and El-Yaniv, [2019](https://arxiv.org/html/2507.00355v1#bib.bib7); Yao et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib44); Fazili et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib6); Xu et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib39); Shao et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib29)), using large language models as synthetic data generators (Golde et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib10); Li and Zhang, [2024](https://arxiv.org/html/2507.00355v1#bib.bib17)). This decomposition strategy allows models to target different aspects of a query independently, thereby facilitating more complete evidence aggregation (Press et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib24)). However, this approach is not without limitations. One notable issue is the "lost-in-retrieval" problem (Zhu et al., [2025](https://arxiv.org/html/2507.00355v1#bib.bib46)), where LLMs fail to match the recall performance of specialized models such as those trained for named entity recognition (Golde et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib11)). Further, many of these approaches rely on sequential subquestion resolution, which introduces latency and increases the risk of cascading errors (Mavi et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib19)). Alternative techniques decompose queries using specialized models or fine-tuned decomposition modules (Min et al., [2019](https://arxiv.org/html/2507.00355v1#bib.bib20); Srinivasan et al., [2022](https://arxiv.org/html/2507.00355v1#bib.bib31); Zhou et al., [2022](https://arxiv.org/html/2507.00355v1#bib.bib45); Wang et al., [2024a](https://arxiv.org/html/2507.00355v1#bib.bib34); Wu et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib37)), limiting their generality. Our work instead adopts a single-step decomposition approach using general-purpose LLMs without task-specific training, ensuring modularity and ease of integration.
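
A minimal sketch of such a single-step decomposition (the prompt wording, the `llm` callable, and the numbered-list output format are illustrative assumptions, not the paper's exact prompt):

```python
import re

# Hypothetical prompt template; the paper's exact wording is not reproduced here.
PROMPT = (
    "Decompose the following question into at most {n} focused sub-questions, "
    "one per line, numbered 1 to {n}.\n\nQuestion: {question}"
)

def decompose(question: str, llm, max_subqueries: int = 5) -> list:
    """One LLM call; parse a numbered-list response into sub-query strings."""
    response = llm(PROMPT.format(n=max_subqueries, question=question))
    subs = []
    for line in response.splitlines():
        m = re.match(r"\s*\d+[.)]\s*(.+)", line)  # accept "1." or "1)" prefixes
        if m:
            subs.append(m.group(1).strip())
    return subs[:max_subqueries]
```

Because the call is made once per question, the approach avoids the sequential resolution (and cascading errors) of iterative decomposition methods.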

#### Reranking for Precision Retrieval.

Reranking methods further refine the retrieval stage by scoring initially retrieved candidates using more expressive models, typically cross-encoders (Nogueira and Cho, [2020](https://arxiv.org/html/2507.00355v1#bib.bib21)). These models evaluate query-document pairs jointly, capturing fine-grained interactions and significantly improving relevance over dual-encoder architectures (Reimers and Gurevych, [2019](https://arxiv.org/html/2507.00355v1#bib.bib27)). Reranking has proven effective in boosting precision for multi-hop and complex QA pipelines (Tang and Yang, [2024](https://arxiv.org/html/2507.00355v1#bib.bib32)). Our approach leverages cross-encoder reranking in conjunction with question decomposition, which together enhance both document coverage and ranking quality.
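
The retrieve-per-subquery, merge, and rerank flow can be sketched as follows; `retrieve` and `score` are left abstract (in practice a dense index and a cross-encoder, e.g. from sentence-transformers, would fill these roles, so this is a structural sketch rather than the paper's implementation):

```python
def rerank_pool(question, subqueries, retrieve, score, k=10, top_n=10):
    """Retrieve top-k per (sub-)query, deduplicate, rerank vs. the original question."""
    pool = {}
    for q in [question] + subqueries:
        for doc_id, text in retrieve(q, k):   # candidate passages per query
            pool[doc_id] = text               # dict union removes duplicates
    # cross-encoder-style scoring: one (question, passage) pair per candidate
    scored = [(score(question, text), doc_id) for doc_id, text in pool.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_n]]
```

Scoring against the original question, rather than the individual subqueries, is what lets the reranker filter out passages that match a subquery but are irrelevant to the overall intent.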

#### Complementary Approaches.

A range of complementary strategies has been proposed to optimize retrieval for complex queries, including adaptive retrieval (Jeong et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib12)), corrective reranking (Yan et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib41)), and self-reflective generation (Asai et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib1)). Techniques such as hypothetical document embeddings (HyDE) (Gao et al., [2022](https://arxiv.org/html/2507.00355v1#bib.bib8)) and query rewriting (Chan et al., [2024](https://arxiv.org/html/2507.00355v1#bib.bib2); Ma et al., [2023](https://arxiv.org/html/2507.00355v1#bib.bib18)) focus on improving the retrieval query itself. While promising, many of these methods involve non-trivial training or model customization. In contrast, our method is lightweight, model-agnostic, and easily deployable within existing RAG architectures.

7 Conclusion
------------

This study examined how LLM-based question decomposition (QD) and cross-encoder reranking influence retrieval-augmented generation for complex and multi-hop question answering. Across four system variants and two datasets, the combination of QD and reranking provided the largest gains, improving both retrieval quality and answer correctness without requiring extra training or domain-specific tuning. Splitting a query into focused sub-queries broadened evidence coverage, while the reranker promoted the most relevant passages, yielding improvements on both benchmark datasets.

The approach is not without downsides, however. If a query is already precise, decomposition can introduce noise, and reranking cannot remove every irrelevant passage. Both modules also add computation, which may be prohibitive in low-latency scenarios. Performance further depends on the quality of the LLM used for sub-query generation and on an appropriate choice of reranker.

Future work. Employing QD only when a query is predicted to need multi-hop reasoning could preserve most of its benefits while cutting overhead. Since both QD and reranking add computational cost that can be limiting in low-latency, real-time deployments, future work could also focus on efficiency-oriented variants, e.g., swapping in smaller instruction models for QD or using lightweight rerankers, to keep response times low without sacrificing accuracy. Additional gains may come from testing alternative LLMs, rerankers, and prompts, and from tuning the number of sub-queries and retrieved passages. Finally, human studies and domain-specific evaluations can deepen our understanding of real-world impact and clarify how generated sub-queries relate to required evidence.

Limitations
-----------

While our approach improves multi-hop retrieval quality, it has several limitations that warrant further attention.

Single-hop and adverse cases. Question decomposition can be counterproductive when the original query is already specific. In such cases, subqueries may introduce noise or distract from the original intent. In rare instances, none of the generated subqueries retrieve stronger evidence than the original query alone.

Prompt and model sensitivity. The quality of subqueries is sensitive to both the prompt design and the underlying LLM. This dependence may require prompt tuning or model selection when adapting the method to new domains or languages, potentially limiting generalization.

Computational overhead. As discussed in §[5.4](https://arxiv.org/html/2507.00355v1#S5.SS4 "5.4 Efficiency ‣ 5 Results ‣ Question Decomposition for Retrieval-Augmented Generation"), generating _M_ subqueries and reranking _M_ × _k_ candidate passages substantially increases latency and GPU requirements. This motivates future work on more efficient decomposition strategies, such as lightweight LLMs, retrieval-aware early stopping, or subquery caching.

Pipeline complexity. Our design adds two separate modules to the standard RAG stack. Although both are plug-and-play, and rerankers are already commonly used in RAG pipelines (Saxena et al., [2025](https://arxiv.org/html/2507.00355v1#bib.bib28)), every extra component increases engineering overhead, latency, and potential points of failure.

Reranker and domain dependence. The observed gains rely on a strong, domain-aligned cross-encoder reranker. When the reranker is mismatched with the retrieval or task domain, the benefits of decomposition may diminish or vanish entirely.

Lack of iterative retrieval. Our pipeline operates in a single-shot manner: subqueries are generated once and not updated based on retrieved evidence. This limits its ability to support adaptive multi-step reasoning, which might be necessary for more complex tasks.

Acknowledgments
---------------

We thank all reviewers for their valuable comments. Jonas Golde is supported by the Bundesministerium für Bildung und Forschung (BMBF) as part of the project “FewTuRe” (project number 01IS24020). Alan Akbik is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Emmy Noether grant “Eidetic Representations of Natural Language” (project number 448414230) and under Germany’s Excellence Strategy “Science of Intelligence” (EXC 2002/1, project number 390523135).

References
----------

* Asai et al. (2023) Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](https://doi.org/10.48550/arXiv.2310.11511). _Preprint_, arXiv:2310.11511.
* Chan et al. (2024) Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, and Jie Fu. 2024. [RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation](https://doi.org/10.48550/arXiv.2404.00610). _Preprint_, arXiv:2404.00610.
* Cho et al. (2023) Sukmin Cho, Jeongyeon Seo, Soyeong Jeong, and Jong C. Park. 2023. [Improving Zero-shot Reader by Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering](https://arxiv.org/abs/2310.17490). _Preprint_, arXiv:2310.17490.
* Douze et al. (2024) Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. [The Faiss library](https://doi.org/10.48550/arXiv.2401.08281). _Preprint_, arXiv:2401.08281.
* Facebookresearch (2024) Facebookresearch. 2024. Faiss indexes. https://github.com/facebookresearch/faiss/wiki/Faiss-indexes.
* Fazili et al. (2024) Barah Fazili, Koustava Goswami, Natwar Modani, and Inderjeet Nair. 2024. [GenSco: Can Question Decomposition based Passage Alignment improve Question Answering?](https://doi.org/10.48550/arXiv.2407.10245). _Preprint_, arXiv:2407.10245.
* Feldman and El-Yaniv (2019) Yair Feldman and Ran El-Yaniv. 2019. [Multi-Hop Paragraph Retrieval for Open-Domain Question Answering](https://doi.org/10.48550/arXiv.1906.06606). _Preprint_, arXiv:1906.06606.
* Gao et al. (2022) Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. [Precise Zero-Shot Dense Retrieval without Relevance Labels](https://doi.org/10.48550/arXiv.2212.10496). _Preprint_, arXiv:2212.10496.
* Glass et al. (2022) Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, and Alfio Gliozzo. 2022. [Re2G: Retrieve, Rerank, Generate](https://doi.org/10.18653/v1/2022.naacl-main.194). In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 2701–2715, Seattle, United States. Association for Computational Linguistics.
* Golde et al. (2023) Jonas Golde, Patrick Haller, Felix Hamborg, Julian Risch, and Alan Akbik. 2023. [Fabricator: An open source toolkit for generating labeled training data with teacher LLMs](https://doi.org/10.18653/v1/2023.emnlp-demo.1). In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_, pages 1–11, Singapore. Association for Computational Linguistics.
* Golde et al. (2024) Jonas Golde, Felix Hamborg, and Alan Akbik. 2024. [Large-scale label interpretation learning for few-shot named entity recognition](https://aclanthology.org/2024.eacl-long.178/). In _Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 2915–2930, St. Julian’s, Malta. Association for Computational Linguistics.
* Jeong et al. (2024) Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, and Jong C. Park. 2024. [Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity](https://doi.org/10.48550/arXiv.2403.14403). _Preprint_, arXiv:2403.14403.
* Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. [TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension](https://doi.org/10.48550/arXiv.1705.03551). _Preprint_, arXiv:1705.03551.
* Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906). _Preprint_, arXiv:2004.04906.
* LangChain (2025) LangChain. 2025. LangChain. https://www.langchain.com/.
* Lewis et al. (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401). Technical Report arXiv:2005.11401, arXiv.
* Li and Zhang (2024) Kunze Li and Yu Zhang. 2024. [Planning first, question second: An LLM-guided method for controllable question generation](https://doi.org/10.18653/v1/2024.findings-acl.280). In _Findings of the Association for Computational Linguistics: ACL 2024_, pages 4715–4729, Bangkok, Thailand. Association for Computational Linguistics.
* Ma et al. (2023) Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023. [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/abs/2305.14283). _Preprint_, arXiv:2305.14283.
* Mavi et al. (2024) Vaibhav Mavi, Anubhav Jangra, and Adam Jatowt. 2024. [Multi-hop Question Answering](https://doi.org/10.48550/arXiv.2204.09140). _Preprint_, arXiv:2204.09140.
* Min et al. (2019) Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. [Multi-hop Reading Comprehension through Question Decomposition and Rescoring](https://doi.org/10.48550/arXiv.1906.02916). _Preprint_, arXiv:1906.02916.
* Nogueira and Cho (2020) Rodrigo Nogueira and Kyunghyun Cho. 2020. [Passage Re-ranking with BERT](https://arxiv.org/abs/1901.04085). Technical Report arXiv:1901.04085, arXiv.
* OpenAI (2022) OpenAI. 2022. New and improved embedding model. https://openai.com/index/new-and-improved-embedding-model/.
* Perez et al. (2020) Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, and Douwe Kiela. 2020. [Unsupervised Question Decomposition for Question Answering](https://doi.org/10.48550/arXiv.2002.09758). _Preprint_, arXiv:2002.09758.
* Press et al. (2023) Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. [Measuring and Narrowing the Compositionality Gap in Language Models](https://doi.org/10.48550/arXiv.2210.03350). _Preprint_, arXiv:2210.03350.
* Qwen Team (2024) Qwen Team. 2024. Qwen2.5: A Party of Foundation Models! https://qwenlm.github.io/blog/qwen2.5/.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://doi.org/10.48550/arXiv.1606.05250). _Preprint_, arXiv:1606.05250.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084). Technical Report arXiv:1908.10084, arXiv.
* Saxena et al. (2025) Yash Saxena, Ankur Padia, Mandar S. Chaudhary, Kalpa Gunaratna, Srinivasan Parthasarathy, and Manas Gaur. 2025. [Ranking Free RAG: Replacing Re-ranking with Selection in RAG for Sensitive Domains](https://doi.org/10.48550/arXiv.2505.16014). _Preprint_, arXiv:2505.16014.
* Shao et al. (2023) Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. [Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy](https://doi.org/10.48550/arXiv.2305.15294). _Preprint_, arXiv:2305.15294.
* Shi et al. (2023) Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, and Denny Zhou. 2023. [Large Language Models Can Be Easily Distracted by Irrelevant Context](https://doi.org/10.48550/arXiv.2302.00093). _Preprint_, arXiv:2302.00093.
* Srinivasan et al. (2022) Krishna Srinivasan, Karthik Raman, Anupam Samanta, Lingrui Liao, Luca Bertelli, and Mike Bendersky. 2022. [QUILL: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation](https://doi.org/10.48550/arXiv.2210.15718). _Preprint_, arXiv:2210.15718.
* Tang and Yang (2024) Yixuan Tang and Yi Yang. 2024. [MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries](https://doi.org/10.48550/arXiv.2401.15391). _Preprint_, arXiv:2401.15391.
* Voyage AI Innovations Inc. (2024) Voyage AI Innovations Inc. 2024. Voyage AI | Home. https://www.voyageai.com/.
* Wang et al. (2024a) Shuting Wang, Xin Yu, Mang Wang, Weipeng Chen, Yutao Zhu, and Zhicheng Dou. 2024a. [RichRAG: Crafting Rich Responses for Multi-faceted Queries in Retrieval-Augmented Generation](https://arxiv.org/abs/2406.12566). _Preprint_, arXiv:2406.12566.
* Wang et al. (2024b) Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, Ruicheng Yin, Changze Lv, Xiaoqing Zheng, and Xuanjing Huang. 2024b. [Searching for Best Practices in Retrieval-Augmented Generation](https://doi.org/10.18653/v1/2024.emnlp-main.981). In _Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing_, pages 17716–17736, Miami, Florida, USA. Association for Computational Linguistics.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, and 3 others. 2020. [HuggingFace’s Transformers: State-of-the-art Natural Language Processing](https://doi.org/10.48550/arXiv.1910.03771). _Preprint_, arXiv:1910.03771.
* Wu et al. (2024) Jian Wu, Linyi Yang, Yuliang Ji, Wenhao Huang, Börje F. Karlsson, and Manabu Okumura. 2024. [GenDec: A robust generative Question-decomposition method for Multi-hop reasoning](https://doi.org/10.48550/arXiv.2402.11166). _Preprint_, arXiv:2402.11166.
* Xiao et al. (2023) Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff, Defu Lian, and Jian-Yun Nie. 2023. [C-Pack: Packed Resources For General Chinese Embeddings](https://doi.org/10.48550/arXiv.2309.07597). _Preprint_, arXiv:2309.07597.
* Xu et al. (2024) Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-Seng Chua. 2024. [Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks](https://doi.org/10.48550/arXiv.2304.14732). _Preprint_, arXiv:2304.14732.
* Yamaguchi (2025) Kota Yamaguchi. 2025. Faiss-cpu: A library for efficient similarity search and clustering of dense vectors.
* Yan et al. (2024) Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, and Zhen-Hua Ling. 2024. [Corrective Retrieval Augmented Generation](https://arxiv.org/abs/2401.15884). Technical Report arXiv:2401.15884, arXiv.
* Yang et al. (2024) An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, and 43 others. 2024. [Qwen2 Technical Report](https://doi.org/10.48550/arXiv.2407.10671). _Preprint_, arXiv:2407.10671.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. [HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering](https://arxiv.org/abs/1809.09600). _Preprint_, arXiv:1809.09600.
* Yao et al. (2023) Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. [ReAct: Synergizing Reasoning and Acting in Language Models](https://doi.org/10.48550/arXiv.2210.03629). _Preprint_, arXiv:2210.03629.
* Zhou et al. (2022) Ben Zhou, Kyle Richardson, Xiaodong Yu, and Dan Roth. 2022. [Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts](https://doi.org/10.48550/arXiv.2210.16865). _Preprint_, arXiv:2210.16865.
* Zhu et al. (2025) Rongzhi Zhu, Xiangyu Liu, Zequn Sun, Yiwei Wang, and Wei Hu. 2025. [Mitigating lost-in-retrieval problems in retrieval augmented multi-hop question answering](https://arxiv.org/abs/2502.14245). _Preprint_, arXiv:2502.14245.

Appendix A Additional Ablation Results
--------------------------------------

Table 5: Correlation between the number of sub-queries and the number of gold evidences per query.