Add link to paper

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +18 -18
README.md CHANGED
@@ -1,25 +1,27 @@
  ---
+ language:
+ - en
  license: apache-2.0
+ size_categories:
+ - 1K<n<10K
  task_categories:
- - question-answering
- - text-retrieval
- language:
- - en
+ - question-answering
+ - text-retrieval
  tags:
- - rag
- - retrieval-augmented-generation
- - multi-hop-reasoning
- - hotpotqa
- - information-retrieval
- - question-answering
- - evaluation
- size_categories:
- - 1K<n<10K
+ - rag
+ - retrieval-augmented-generation
+ - multi-hop-reasoning
+ - hotpotqa
+ - information-retrieval
+ - question-answering
+ - evaluation
  ---

  # StratRAG

- **StratRAG** is a retrieval evaluation dataset for benchmarking Retrieval-Augmented Generation (RAG) systems on multi-hop reasoning tasks. It is derived from [HotpotQA](https://hotpotqa.github.io/) (distractor setting) and structured specifically for evaluating retrieval strategies — including sparse (BM25), dense, and hybrid approaches — in realistic, noisy document pool conditions.
+ **StratRAG** is a retrieval evaluation dataset for benchmarking Retrieval-Augmented Generation (RAG) systems on multi-hop reasoning tasks. It was introduced in the paper [StratRAG: A Multi-Hop Retrieval Evaluation Dataset for Retrieval-Augmented Generation Systems](https://huggingface.co/papers/2604.22757).
+
+ It is derived from [HotpotQA](https://hotpotqa.github.io/) (distractor setting) and structured specifically for evaluating retrieval strategies — including sparse (BM25), dense, and hybrid approaches — in realistic, noisy document pool conditions.

  ---

@@ -95,7 +97,8 @@ print("Number of docs in pool:", len(row["doc_pool"]))

  # Access gold documents directly
  for idx in row["gold_doc_indices"]:
- print(f"\nGold doc [{idx}]:", row["doc_pool"][idx]["text"][:200])
+ print(f"
+ Gold doc [{idx}]:", row["doc_pool"][idx]["text"][:200])
  ```

  ---
@@ -157,9 +160,6 @@ from datasets import load_dataset
  hotpot = load_dataset("hotpot_qa", "distractor")
  ```

- ---
-
-
  ---

  ## Benchmark Results
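
For reference, the gold-document access pattern from the README excerpt reads like this as a minimal runnable sketch. The dataset repo id and split below are placeholders, and `doc_pool` / `gold_doc_indices` are the fields shown in the excerpt above:

```python
from datasets import load_dataset

# Placeholder repo id and split -- substitute the actual StratRAG dataset path on the Hub.
ds = load_dataset("your-org/StratRAG", split="train")

row = ds[0]
print("Number of docs in pool:", len(row["doc_pool"]))

# Print the first 200 characters of each gold document in the pool.
for idx in row["gold_doc_indices"]:
    print(f"\nGold doc [{idx}]:", row["doc_pool"][idx]["text"][:200])
```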