mehran-sarmadi committed
Commit bdae700 · verified · 1 Parent(s): 4f0e7fd

Create README.md

Files changed (1): README.md (+56, -0)

README.md ADDED
@@ -0,0 +1,56 @@
---
configs:
- config_name: default
  data_files:
  - split: train
    path: qrels/train.jsonl
  - split: validation
    path: qrels/validation.jsonl
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
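The three configs above split the data into qrels (relevance judgments), corpus, and queries files, each stored as JSONL. The following is a minimal stdlib sketch of parsing a qrels record; the field names (`query-id`, `corpus-id`, `score`) follow common BEIR conventions and are an assumption, not verified against this dataset's files:

```python
import json

# Hypothetical qrels record in BEIR-style JSONL layout. The field names
# ("query-id", "corpus-id", "score") follow common BEIR conventions and
# are an assumption, not verified against this dataset's files.
sample_qrels_line = '{"query-id": "q1", "corpus-id": "doc42", "score": 1}'

record = json.loads(sample_qrels_line)

# Build a relevance lookup: query-id -> {corpus-id: relevance score}
qrels = {}
qrels.setdefault(record["query-id"], {})[record["corpus-id"]] = record["score"]

print(qrels)  # {'q1': {'doc42': 1}}
```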
## Dataset Summary

**NQ-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, specifically targeting **open-domain question answering**. It is a **translated version** of the original English **Natural Questions (NQ)** dataset and a central component of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard), as part of the **BEIR-Fa** collection.

- **Language(s):** Persian (Farsi)
- **Task(s):** Retrieval (Question Answering)
- **Source:** Translated from English NQ using Google Translate
- **Part of FaMTEB:** Yes (under BEIR-Fa)
## Supported Tasks and Leaderboards

This dataset evaluates how well **text embedding models** retrieve relevant answer passages from Persian Wikipedia in response to **natural language questions** originally issued to Google Search. Results are benchmarked on the **Persian MTEB Leaderboard** on Hugging Face Spaces (language filter: Persian).
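To make the retrieval setup concrete, here is a toy sketch of how an embedding-based retriever is scored: the query and the candidate passages are embedded as vectors, and passages are ranked by cosine similarity to the query. The vectors below are made up for illustration; a real evaluation would use an embedding model's outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for an embedding model's outputs (made up
# for illustration; real embeddings have hundreds of dimensions).
query_vec = [0.9, 0.1, 0.0]
passages = {
    "doc1": [0.8, 0.2, 0.1],  # topically close to the query
    "doc2": [0.0, 0.1, 0.9],  # unrelated passage
}

# Rank passages by similarity to the query, as a retrieval metric would.
ranking = sorted(passages, key=lambda d: cosine(query_vec, passages[d]),
                 reverse=True)
print(ranking)  # ['doc1', 'doc2']
```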
## Construction

The construction process included:

- Starting from the English **Natural Questions (NQ)** dataset, which contains real user search queries
- Using the **Google Translate API** to translate both the questions and the annotated Wikipedia passages into Persian
- Retaining the original query–passage mappings for retrieval evaluation

As described in the *FaMTEB* paper, all BEIR-Fa datasets (including NQ-Fa) underwent:

- a **BM25 retrieval comparison** between the English and Persian versions
- an **LLM-based translation quality check** using the GEMBA-DA framework

These evaluations confirmed a **high level of translation quality**.
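For context on the BM25 comparison step, the sketch below implements a plain Okapi BM25 scorer in stdlib Python with common default parameters (`k1=1.5`, `b=0.75`). It is illustrative only and is not the implementation used in the FaMTEB paper.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with Okapi BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

# Two toy Persian "documents" (whitespace-tokenized for simplicity).
docs = [
    ["تهران", "پایتخت", "ایران", "است"],
    ["پاریس", "پایتخت", "فرانسه", "است"],
]
# Query: "پایتخت ایران" -- only the first document mentions "ایران",
# so it should score higher.
scores = bm25_scores(["پایتخت", "ایران"], docs)
print(scores)
```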
## Data Splits

Defined in the FaMTEB paper (Table 5):

- **Train:** 0 samples
- **Dev:** 0 samples
- **Test:** 2,685,669 samples

**Total:** ~2.69 million examples (according to metadata)