---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
|
|
# 📚 Translated LONG2RAG (MTEB-Style Retrieval Dataset) |
|
|
|
|
|
## Dataset Summary |
|
|
|
|
|
This dataset is a **translated version** of the [LONG2RAG benchmark](https://github.com/QZH-777/longrag) (Qi et al., EMNLP Findings 2024), adapted into **MTEB-style retrieval format** for evaluating multilingual **retrieval-augmented generation (RAG)** and **long-context retrieval** systems. |
|
|
|
|
|
LONG2RAG was originally designed to evaluate how well large language models (LLMs) incorporate key points from retrieved long documents into long-form answers. It includes **280 complex, practical questions** across **10 domains** and **8 question categories**, each paired with **5 retrieved documents** (avg. length ~2,444 words). |
|
|
|
|
|
This translated version preserves the structure but reformats it into **query–document relevance pairs** suitable for **retrieval evaluation** under the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/collections/mteb/mteb-benchmark-63f5f98f79c33120b8f94d1d). |
|
|
|
|
|
--- |
|
|
|
|
|
## Supported Tasks and Leaderboards |
|
|
|
|
|
* **Task Category:** Retrieval |
|
|
* **Task:** Given a natural language query, rank candidate documents by relevance. |
|
|
* **MTEB Integration:** Compatible with the `mteb` evaluation framework.
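To make the task concrete, here is a minimal sketch of the retrieval setup: rank a query's candidate documents with a scoring function, then score the ranking against binary relevance labels. The word-overlap scorer is a deliberate stand-in for whatever embedding model you would actually evaluate, and the records are illustrative, not drawn from the dataset.

```python
# Toy sketch of the MTEB-style retrieval task: rank candidate documents
# for a query, then check the ranking against binary relevance labels.
# The lexical-overlap scorer is a stand-in for a real embedding model.

def score(query: str, doc: str) -> float:
    """Crude lexical relevance: fraction of query words present in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def recall_at_k(ranked_ids, relevant_ids, k=5):
    """Share of relevant documents found in the top-k of the ranking."""
    hits = len(set(ranked_ids[:k]) & relevant_ids)
    return hits / max(len(relevant_ids), 1)

# Illustrative records (the real data lives in corpus.jsonl / queries.jsonl).
corpus = {
    "d1": "The history of jazz music in America",
    "d2": "How transformers changed natural language processing",
    "d3": "An overview of Persian classical poetry",
}
query = "how did transformers change language processing"
qrels = {"d2"}  # binary labels: d2 is the only relevant document

ranked = sorted(corpus, key=lambda did: score(query, corpus[did]), reverse=True)
print(ranked[0])                        # best-scoring document id
print(recall_at_k(ranked, qrels, k=1))  # 1.0 when the relevant doc ranks first
```

In the real evaluation, `score` would be the cosine similarity between query and document embeddings, and MTEB reports ranking metrics such as nDCG@10 rather than this simplified recall.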
|
|
|
|
|
--- |
|
|
|
|
|
## Languages |
|
|
|
|
|
* **Original:** English |
|
|
* **This release:** Translated into Persian |
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
## Dataset Details |
|
|
|
|
|
### Queries |
|
|
- **280** complex, practical, long-form questions, curated to minimize data contamination.
|
|
|
|
|
### Corpus |
|
|
- Real-world retrieved documents: **5 per query** (1,400 total), averaging ~2,444 words each.
|
|
|
|
|
### Relevance Labels |
|
|
- Binary (**relevant / not relevant**). |
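The sketch below shows how the three files plausibly fit together in the common MTEB/BEIR JSONL layout. Note that the exact field names (`_id`, `title`, `text`, `query-id`, `corpus-id`, `score`) are an assumption based on that convention, not something this card specifies, so inspect the files before relying on them.

```python
import json

# Illustrative records in the common MTEB/BEIR JSONL layout. The field names
# are assumptions based on that convention, not confirmed by this card.
corpus_line = '{"_id": "doc-001", "title": "", "text": "متن سند بازیابی‌شده ..."}'
query_line = '{"_id": "q-001", "text": "پرسش پیچیده و کاربردی ..."}'
qrel_line = '{"query-id": "q-001", "corpus-id": "doc-001", "score": 1}'

doc, query, qrel = (json.loads(s) for s in (corpus_line, query_line, qrel_line))

# A qrel row links one query to one document with a binary relevance score.
assert qrel["query-id"] == query["_id"]
assert qrel["corpus-id"] == doc["_id"]
assert qrel["score"] == 1
```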
|
|
|
|
|
--- |
|
|
|
|
|
## Domains and Question Categories |
|
|
|
|
|
### Domains (10) |
|
|
- AI |
|
|
- Biology |
|
|
- Economics |
|
|
- Film |
|
|
- History |
|
|
- Music |
|
|
- Religion |
|
|
- Sports |
|
|
- Technology |
|
|
- Others |
|
|
|
|
|
### Question Categories (8) |
|
|
- Factual |
|
|
- Explanatory |
|
|
- Comparative |
|
|
- Subjective |
|
|
- Methodological |
|
|
- Causal |
|
|
- Hypothetical |
|
|
- Predictive |
|
|
|
|
|
--- |
|
|
|
|
|
## Data Splits |
|
|
|
|
|
- **test**: 280 queries |
|
|
|
|
|
Each query has **5 candidate documents**, aligned with **MTEB retrieval style**. |
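Assuming `qrels/test.jsonl` holds one row per query–document pair (field names illustrative, as above), a quick consistency check for the five-candidates-per-query structure looks like this:

```python
from collections import Counter

# Sanity-check sketch: every query should map to exactly 5 candidate
# documents. These qrel rows are illustrative stand-ins for the real
# contents of qrels/test.jsonl.
qrels = [
    {"query-id": "q-001", "corpus-id": f"doc-00{i}", "score": 1 if i == 1 else 0}
    for i in range(1, 6)
]

docs_per_query = Counter(row["query-id"] for row in qrels)
assert all(n == 5 for n in docs_per_query.values())
print(docs_per_query)  # Counter({'q-001': 5})
```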
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
|
|
|
```bibtex
@inproceedings{qi2024long2rag,
  title     = {{LONG2RAG}: Evaluating Long-Context \& Long-Form Retrieval-Augmented Generation with Key Point Recall},
  author    = {Qi, Zehan and Xu, Rongwu and Guo, Zhijiang and Wang, Cunxiang and Zhang, Hao and Xu, Wei},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2024},
  year      = {2024}
}
```
|
|