---
license: cc-by-4.0
configs:
  - config_name: default
    data_files:
      - split: test
        path: qrels/test.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---
# 📚 Translated LONG2RAG (MTEB-Style Retrieval Dataset)

## Dataset Summary

This dataset is a **translated version** of the [LONG2RAG benchmark](https://github.com/QZH-777/longrag) (Qi et al., EMNLP Findings 2024), adapted into **MTEB-style retrieval format** for evaluating multilingual **retrieval-augmented generation (RAG)** and **long-context retrieval** systems.

LONG2RAG was originally designed to evaluate how well large language models (LLMs) incorporate key points from retrieved long documents into long-form answers. It includes **280 complex, practical questions** across **10 domains** and **8 question categories**, each paired with **5 retrieved documents** (avg. length ~2,444 words).

This translated version preserves the structure but reformats it into **query–document relevance pairs** suitable for **retrieval evaluation** under the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/collections/mteb/mteb-benchmark-63f5f98f79c33120b8f94d1d).

---

## Supported Tasks and Leaderboards

* **Task Category:** Retrieval  
* **Task:** Given a natural language query, rank candidate documents by relevance.  
* **MTEB Integration:** Compatible with the `mteb` evaluation framework.  
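At its core, the retrieval task reduces to scoring a model's ranked document list against the binary qrels. A minimal, self-contained sketch of two standard metrics (recall@k and MRR) over toy IDs — not actual records from this dataset:

```python
def recall_at_k(ranked_ids, relevant_ids, k=5):
    """Fraction of the relevant documents found in the top-k results."""
    if not relevant_ids:
        return 0.0
    hits = sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids)

def mrr(ranked_ids, relevant_ids):
    """Reciprocal rank of the first relevant document (0 if none retrieved)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# Toy qrels: one query with two relevant documents among five candidates,
# matching this dataset's 5-candidates-per-query layout.
qrels = {"q1": {"d2", "d4"}}
run = {"q1": ["d3", "d2", "d1", "d4", "d5"]}  # a model's ranking

print(recall_at_k(run["q1"], qrels["q1"], k=5))  # 1.0 (both found in top 5)
print(mrr(run["q1"], qrels["q1"]))               # 0.5 (first hit at rank 2)
```

The `mteb` harness computes these (and nDCG@k) automatically; this sketch only illustrates what the binary qrels feed into.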

---

## Languages

* **Original:** English  
* **This release:** Translated into Persian  

---


## Dataset Details

### Queries
- **280** complex, uncontaminated, long-form questions.  

### Corpus
- Retrieved real-world documents (**5 per query**).  

### Relevance Labels
- Binary (**relevant / not relevant**).  
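Concretely, the three JSONL files link together by ID. The field names below (`_id`, `text`, `query-id`, `corpus-id`, `score`) follow common MTEB/BEIR conventions and are an assumption, not verified against this release; the record contents are toy placeholders:

```python
import json

# Hypothetical single-line examples of each file (field names assumed from
# MTEB/BEIR conventions; not actual records from this dataset).
corpus_line = '{"_id": "doc-001", "text": "A retrieved long document ..."}'
query_line = '{"_id": "q-001", "text": "An example long-form question?"}'
qrel_line = '{"query-id": "q-001", "corpus-id": "doc-001", "score": 1}'

corpus = {r["_id"]: r["text"] for r in [json.loads(corpus_line)]}
queries = {r["_id"]: r["text"] for r in [json.loads(query_line)]}

# A qrel row ties one query to one candidate document with a binary label.
qrel = json.loads(qrel_line)
is_relevant = qrel["score"] == 1 and qrel["corpus-id"] in corpus
print(is_relevant)  # True
```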

---

## Domains and Question Categories

### Domains (10)
- AI  
- Biology  
- Economics  
- Film  
- History  
- Music  
- Religion  
- Sports  
- Technology  
- Others  

### Question Categories (8)
- Factual  
- Explanatory  
- Comparative  
- Subjective  
- Methodological  
- Causal  
- Hypothetical  
- Predictive  

---

## Data Splits

- **test**: 280 queries  

Each query is paired with **5 candidate documents**, following the **MTEB retrieval format**.  

---

## Citation

```bibtex
@inproceedings{qi2024long2rag,
  title = {LONG2RAG: Evaluating Long-Context \& Long-Form Retrieval-Augmented Generation with Key Point Recall},
  author = {Qi, Zehan and Xu, Rongwu and Guo, Zhijiang and Wang, Cunxiang and Zhang, Hao and Xu, Wei},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2024},
  year = {2024}
}
```