mehran-sarmadi committed · verified
Commit 48f780a · 1 Parent(s): 4f772a4

Create README.md

---
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
## Dataset Summary

**CQADupstack-wordpress-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, focused on identifying **duplicate or semantically equivalent questions** in the domain of WordPress development. It is a **translated version** of the *WordPress Development StackExchange* data from the English **CQADupstack** dataset and is part of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard).

* **Language(s):** Persian (Farsi)
* **Task(s):** Retrieval (Duplicate Question Retrieval)
* **Source:** Translated from CQADupstack-WordPress (BEIR benchmark) using Google Translate
* **Part of FaMTEB:** Yes, under the BEIR-Fa collection

## Supported Tasks and Leaderboards

This dataset is designed to test the ability of **text embedding models** to retrieve semantically similar or duplicate questions from technical user forums. Evaluation results appear on the **Persian MTEB Leaderboard** on Hugging Face Spaces (filter by language: Persian).
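As a minimal sketch of what this retrieval setup involves, the toy example below ranks a small candidate pool by cosine similarity. The vectors are invented stand-ins for real embedding-model outputs, and the question IDs and texts are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: in practice these would come from an embedding
# model applied to Persian question texts; the numbers here are made up.
corpus = {
    "q101": [0.9, 0.1, 0.0],   # e.g. "How do I register a custom post type?"
    "q102": [0.1, 0.8, 0.3],   # e.g. "Why is my cron job not firing?"
    "q103": [0.85, 0.2, 0.1],  # near-duplicate of q101
}
query_vec = [0.88, 0.15, 0.05]  # embedding of the incoming question

# Rank corpus questions by similarity to the query; the top hit should be
# the duplicate (or near-duplicate) question.
ranked = sorted(corpus, key=lambda qid: cosine(query_vec, corpus[qid]), reverse=True)
print(ranked[0])
```

An evaluation harness would repeat this for every test query and score the rankings against the qrels.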
## Construction

The dataset was constructed by:

- Extracting the WordPress subforum data from the English CQADupstack dataset
- Translating it into Persian using the **Google Translate API**
- Preserving the original query-positive pairs for Retrieval task evaluation

As noted in the *FaMTEB* paper, the **BEIR-Fa collection** (which includes this dataset) was evaluated through:

- **BM25 score comparisons**
- The **GEMBA-DA** framework, using LLMs to assess translation accuracy

These validation methods indicated good overall translation quality.
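For readers unfamiliar with the first check: BM25 is a standard lexical ranking function. The sketch below implements the textbook Okapi BM25 scoring formula; it is not the FaMTEB evaluation code itself, and whitespace tokenization plus common default parameters are simplifying assumptions:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # Document frequency for each distinct query term.
    df = {t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for doc in docs_tokens:
        tf = Counter(doc)
        s = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

# Invented English example documents, just to show the mechanics.
docs = [
    "custom post type registration".split(),
    "cron job scheduling issue".split(),
    "register custom post type in plugin".split(),
]
scores = bm25_scores("register custom post type".split(), docs)
best = max(range(len(docs)), key=lambda i: scores[i])
print(best)  # index of the highest-scoring document
```

Comparing such scores across the English source and its Persian translation gives a rough, model-free signal of whether translation preserved term-level relevance.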
## Data Splits

The full CQADupstack-Fa benchmark includes:

- **Train:** 0 samples
- **Dev:** 0 samples
- **Test:** 480,902 samples (across all CQADupstack-Fa datasets)

This WordPress-specific subset includes approximately **49.9k examples**. Individual test splits for this sub-dataset are not separately detailed in the FaMTEB paper; refer to the dataset provider or the Hugging Face dataset card for the exact distribution if needed.
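The three configs in the frontmatter correspond to plain JSONL files. Below is a sketch of how they can be joined for evaluation, written against the usual BEIR-style field names (`_id`, `text`, `query-id`, `corpus-id`, `score`); that schema is an assumption to verify against the actual files, and the sample records are invented:

```python
import json
import pathlib
import tempfile

# Tiny stand-in for the repository layout (corpus.jsonl, queries.jsonl,
# qrels/test.jsonl). Field names follow the common BEIR convention; this
# is an assumption, so check it against the real files before relying on it.
root = pathlib.Path(tempfile.mkdtemp())
(root / "qrels").mkdir()
(root / "corpus.jsonl").write_text(
    json.dumps({"_id": "c1", "text": "sample corpus question"}) + "\n")
(root / "queries.jsonl").write_text(
    json.dumps({"_id": "q1", "text": "sample query question"}) + "\n")
(root / "qrels" / "test.jsonl").write_text(
    json.dumps({"query-id": "q1", "corpus-id": "c1", "score": 1}) + "\n")

def read_jsonl(path):
    """Load a JSONL file into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

corpus = {r["_id"]: r["text"] for r in read_jsonl(root / "corpus.jsonl")}
queries = {r["_id"]: r["text"] for r in read_jsonl(root / "queries.jsonl")}

# Resolve each relevance judgement to its query/document texts.
pairs = [(queries[r["query-id"]], corpus[r["corpus-id"]], r["score"])
         for r in read_jsonl(root / "qrels" / "test.jsonl")]
print(pairs[0])
```

The same files can also be loaded through the `corpus`, `queries`, and `default` configs declared above, using the Hugging Face `datasets` library.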