## Dataset Summary
**CQADupstack-webmasters-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, focusing on identifying **duplicate or semantically similar questions** within community question-answering (CQA) platforms. It is a **translated version** of the *Webmasters StackExchange* data from the English **CQADupstack** dataset and is part of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard).
* **Language(s):** Persian (Farsi)
* **Task(s):** Retrieval (Duplicate Question Retrieval)
* **Source:** Translated from CQADupstack-Webmasters (BEIR benchmark) using Google Translate
* **Part of FaMTEB:** Yes — as part of the BEIR-Fa collection
## Supported Tasks and Leaderboards
The dataset is designed to test **text embedding models' performance** in retrieving **duplicate or semantically equivalent questions** in a technical domain (SEO, webmastering, site performance). It is benchmarked on the **Persian MTEB Leaderboard** (language: Persian).
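
As a toy illustration of the retrieval setup, the sketch below ranks candidate questions against a query by cosine similarity over bag-of-words count vectors. This is only a stand-in: a real evaluation would embed the Persian text with an actual embedding model, and the English strings here are hypothetical placeholders.

```python
import math
from collections import Counter

def bow_vector(text):
    # Toy bag-of-words "embedding": token counts stand in for a real model's vectors.
    return Counter(text.lower().split())

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(c * v[t] for t, c in u.items() if t in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical candidate questions (placeholders for the Persian data).
corpus = {
    "q1": "redirect www to non-www in nginx",   # duplicate of the query below
    "q2": "best way to improve page load speed",
    "q3": "how to set up a robots.txt file",
}

query = "nginx redirect www to non-www"
qv = bow_vector(query)
ranked = sorted(corpus, key=lambda qid: cosine(qv, bow_vector(corpus[qid])),
                reverse=True)
print(ranked)  # the duplicate ("q1") ranks first
```

An embedding-based retriever follows the same shape: replace `bow_vector` with a model encoder and keep the cosine ranking.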
## Construction
This dataset was constructed via:
- Extracting data from the **Webmasters** subforum of StackExchange (from the English CQADupstack dataset)
- Translating the data into Persian using the **Google Translate API**
- Retaining the original query-document relevance judgments (qrels) for retrieval evaluation
As discussed in the *FaMTEB* paper, the entire **BEIR-Fa collection** (including this dataset) was evaluated using:
- **BM25 retrieval score comparison**
- **GEMBA-DA framework** leveraging LLMs to validate translation quality
These assessments indicate that the Persian translations preserve the source content with good fidelity.
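
The BM25 comparison above can be sketched as follows. This is a minimal Okapi BM25 scorer over toy documents (hypothetical stand-ins for the parallel English/Persian corpora), not the paper's actual pipeline.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Minimal Okapi BM25 over whitespace-tokenised documents.
    tokenised = [d.lower().split() for d in docs]
    N = len(tokenised)
    avgdl = sum(len(d) for d in tokenised) / N
    df = Counter()                      # document frequency per term
    for d in tokenised:
        df.update(set(d))
    scores = []
    for d in tokenised:
        tf = Counter(d)                 # term frequency within this document
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "nginx redirect www to non-www",
    "improve page load speed for mobile",
    "configure redirect rules in nginx",
]
scores = bm25_scores("nginx redirect", docs)
print(scores)  # the two nginx/redirect documents outscore the unrelated one
```

Comparing such scores on the English originals and the Persian translations gives a rough signal of whether translation degraded retrievability.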
## Data Splits
The full CQADupstack-Fa collection has the following evaluation splits:
- **Train:** 0 samples
- **Dev:** 0 samples
- **Test:** 480,902 samples (across all domains)
The **Webmasters-specific subset** contains approximately **19.3k examples**; the FaMTEB paper does not report its train/dev/test breakdown separately. For detailed splits, consult the dataset provider or the Hugging Face dataset card.