---
configs:
  - config_name: default
    data_files:
      - split: test
        path: qrels/test.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---
## Dataset Summary

**NQ-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, specifically targeting **open-domain question answering**. It is a **translated version** of the original English **Natural Questions (NQ)** dataset and a central component of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard), as part of the **BEIR-Fa** collection.

- **Language(s):** Persian (Farsi)  
- **Task(s):** Retrieval (Question Answering)  
- **Source:** Translated from English NQ using Google Translate  
- **Part of FaMTEB:** Yes — under BEIR-Fa
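The three configs above mirror the usual BEIR file layout. As a minimal sketch of how the files relate (field names such as `_id`, `text`, `query-id`, `corpus-id`, and `score` are assumed from the standard BEIR schema, not confirmed by this card):

```python
import json
import io

# Toy stand-ins for corpus.jsonl, queries.jsonl, and qrels/test.jsonl
# (field names assume the standard BEIR schema).
corpus_jsonl = '{"_id": "doc1", "title": "تهران", "text": "تهران پایتخت ایران است."}\n'
queries_jsonl = '{"_id": "q1", "text": "پایتخت ایران کجاست؟"}\n'
qrels_jsonl = '{"query-id": "q1", "corpus-id": "doc1", "score": 1}\n'

def read_jsonl(text):
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in io.StringIO(text) if line.strip()]

corpus = {d["_id"]: d for d in read_jsonl(corpus_jsonl)}
queries = {q["_id"]: q["text"] for q in read_jsonl(queries_jsonl)}
qrels = {}  # query-id -> {corpus-id: relevance}
for r in read_jsonl(qrels_jsonl):
    qrels.setdefault(r["query-id"], {})[r["corpus-id"]] = r["score"]

print(qrels["q1"])  # {'doc1': 1}
```

The qrels split links each query ID to the corpus passages judged relevant, which is what a retrieval evaluator consumes.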

## Supported Tasks and Leaderboards

This dataset evaluates how well **text embedding models** can retrieve relevant answer passages from Persian Wikipedia in response to **natural language questions**, originally issued to Google Search. Results are benchmarked on the **Persian MTEB Leaderboard** on Hugging Face Spaces (language filter: Persian).
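A typical embedding-based retrieval evaluation of this kind embeds queries and passages and ranks passages by cosine similarity. A hedged sketch with toy vectors (not the actual FaMTEB harness or any particular encoder):

```python
import numpy as np

# Toy embeddings standing in for a real encoder's output
# (3 passages, 2 queries, 4-dimensional vectors).
passage_emb = np.array([
    [1.0, 0.0, 0.0, 0.0],   # doc0
    [0.0, 1.0, 0.0, 0.0],   # doc1
    [0.0, 0.0, 1.0, 0.0],   # doc2
])
query_emb = np.array([
    [0.9, 0.1, 0.0, 0.0],   # q0, closest to doc0
    [0.0, 0.2, 0.9, 0.0],   # q1, closest to doc2
])

def normalize(x):
    """L2-normalize each row so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

scores = normalize(query_emb) @ normalize(passage_emb).T  # (n_queries, n_docs)
ranking = np.argsort(-scores, axis=1)  # best passage first per query

print(ranking[:, 0])  # top-1 passage index per query
```

Standard retrieval metrics (nDCG@10, recall@k) are then computed from such rankings against the qrels.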

## Construction

The construction process included:

- Starting with the **Natural Questions (NQ)** English dataset, containing real user search queries  
- Using the **Google Translate API** to translate both questions and annotated Wikipedia passages into Persian  
- Retaining original query-passage mapping structure for retrieval evaluation

As described in the *FaMTEB* paper, all BEIR-Fa datasets (including NQ-Fa) underwent:

- **BM25 retrieval comparison** between English and Persian  
- **LLM-based translation quality check** using the GEMBA-DA framework  

These evaluations confirmed a **high level of translation quality**.
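The BM25 comparison step can be pictured with a minimal Okapi BM25 scorer. This is a self-contained sketch using common defaults (k1=1.5, b=0.75); the paper's actual retrieval setup is not specified in this card:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against a tokenized query with Okapi BM25."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # df: number of documents containing each term
    df = Counter(t for d in docs_tokens for t in set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "the capital of iran is tehran".split(),
    "paris is the capital of france".split(),
]
scores = bm25_scores("capital of iran".split(), docs)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # 0 — the Iran passage ranks first
```

Running the same queries through BM25 over the English and Persian corpora and comparing the resulting rankings gives a translation-agnostic signal of whether relevance was preserved.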

## Data Splits

Defined in the FaMTEB paper (Table 5):

- **Train:** 0 samples  
- **Dev:** 0 samples  
- **Test:** 2,685,669 samples

**Total:** ~2.69 million examples (according to metadata)