---
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
## Dataset Summary

**CQADupstack-webmasters-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, focusing on identifying **duplicate or semantically similar questions** within community question-answering (CQA) platforms. It is a **translated version** of the *Webmasters StackExchange* data from the English **CQADupstack** dataset and is part of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard).
* **Language(s):** Persian (Farsi)
* **Task(s):** Retrieval (Duplicate Question Retrieval)
* **Source:** Translated from CQADupstack-Webmasters (BEIR benchmark) using Google Translate
* **Part of FaMTEB:** Yes — as part of the BEIR-Fa collection
## Supported Tasks and Leaderboards

The dataset is designed to test **text embedding models' performance** in retrieving **duplicate or semantically equivalent questions** in a technical domain (SEO, webmastering, site performance). It is benchmarked on the **Persian MTEB Leaderboard** (language: Persian).
## Construction

This dataset was constructed via:

- Extracting data from the **Webmasters** subforum of StackExchange (from the English CQADupstack dataset)
- Translating the data into Persian using the **Google Translate API**
- Retaining the original query-document relevance judgments (qrels) for retrieval evaluation
As discussed in the *FaMTEB* paper, the entire **BEIR-Fa collection** (including this dataset) was evaluated for translation quality using:

- **BM25 retrieval score comparison**
- the **GEMBA-DA framework**, leveraging LLMs to validate translation quality

These assessments indicate good fidelity in the Persian translations.
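For reference, the BM25 scoring behind such comparisons can be sketched as below: a plain Okapi BM25 implementation with the usual defaults k1 = 1.5, b = 0.75 and a smoothed idf. The toy documents are invented for illustration.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    # Document frequency of each term across the corpus.
    df = Counter(t for d in tokenized for t in set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            # Smoothed idf (always non-negative).
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = ["redirect www to non-www", "seo plugins list", "redirect rules for apache"]
scores = bm25_scores("redirect www", docs)  # docs[0] scores highest
```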
## Data Splits

The full CQADupstack-Fa collection has the following evaluation splits:

- **Train:** 0 samples
- **Dev:** 0 samples
- **Test:** 480,902 samples (across all domains)

The **Webmasters-specific subset** contains approximately **19.3k examples**, though **individual splits are not separately provided** in the FaMTEB paper. For detailed splits, consult the dataset provider or the Hugging Face dataset card.
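For downstream evaluation, the qrels file is typically folded into a query-to-relevant-documents mapping. The sketch below parses a few toy JSON-lines records in memory; the field names (`query-id`, `corpus-id`, `score`) follow the common BEIR convention and are an assumption here, so check the actual `qrels/test.jsonl` for the exact schema.

```python
import json
from io import StringIO

# Toy in-memory stand-in for qrels/test.jsonl. The field names below are an
# assumption based on the usual BEIR layout, not confirmed for this dataset.
sample = StringIO(
    '{"query-id": "q1", "corpus-id": "d1", "score": 1}\n'
    '{"query-id": "q1", "corpus-id": "d7", "score": 1}\n'
    '{"query-id": "q2", "corpus-id": "d3", "score": 1}\n'
)

# Fold the flat records into {query_id: {doc_id: relevance}}.
qrels = {}
for line in sample:
    rec = json.loads(line)
    qrels.setdefault(rec["query-id"], {})[rec["corpus-id"]] = rec["score"]

print(qrels["q1"])  # {'d1': 1, 'd7': 1}
```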