---
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---

## Dataset Summary

**CQADupstack-wordpress-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, focused on identifying **duplicate or semantically equivalent questions** in the domain of WordPress development. It is a **translated version** of the *WordPress Development StackExchange* data from the English **CQADupstack** dataset and is part of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard).

* **Language(s):** Persian (Farsi)
* **Task(s):** Retrieval (Duplicate Question Retrieval)
* **Source:** Translated from CQADupstack-WordPress (BEIR benchmark) using Google Translate
* **Part of FaMTEB:** Yes, under the BEIR-Fa collection
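The three configs in the header map to BEIR-style JSONL files (one JSON object per line), loadable via `datasets.load_dataset(<repo_id>, "corpus")` and so on. A minimal sketch of parsing these files with the standard library; the field names (`_id`/`title`/`text` for corpus records, `query-id`/`corpus-id`/`score` for qrels) follow the usual BEIR convention and are an assumption, not confirmed by this card:

```python
import io
import json

def read_jsonl(lines):
    """Parse one JSON object per line, skipping blanks (the corpus.jsonl format)."""
    return [json.loads(line) for line in lines if line.strip()]

# Hypothetical records in the assumed BEIR-style schema; check the actual files.
sample_corpus = io.StringIO(
    '{"_id": "d1", "title": "", "text": "چگونه یک افزونه وردپرس بسازم؟"}\n'
)
sample_qrels = io.StringIO('{"query-id": "q1", "corpus-id": "d1", "score": 1}\n')

# Index documents by id, and qrels by (query id, document id) pairs.
docs = {r["_id"]: r["text"] for r in read_jsonl(sample_corpus)}
qrels = {(r["query-id"], r["corpus-id"]): r["score"] for r in read_jsonl(sample_qrels)}
```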

## Supported Tasks and Leaderboards

This dataset is designed to test the ability of **text embedding models** to retrieve semantically similar or duplicate questions from technical user forums. Evaluation results appear on the **Persian MTEB Leaderboard** on Hugging Face Spaces (filter by language: Persian).

## Construction

The dataset was constructed by:

- Extracting the WordPress subforum data from the English CQADupstack dataset
- Translating it into Persian using the **Google Translate API**
- Preserving original query-positive pairs for Retrieval task evaluation
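The steps above can be sketched as follows. The actual pipeline code is not published in this card, so a stub stands in for the Google Translate API call; the point is that qrels reference document and query *ids* only, so relevance pairs survive translation unchanged:

```python
def translate_fa(text: str) -> str:
    """Stub: in the real pipeline this would call the Google Translate API."""
    return f"<fa>{text}</fa>"  # placeholder output, not a real translation

def build_translated_split(corpus, queries, qrels):
    """Translate texts; qrels map query ids to relevant doc ids, so they
    are carried over as-is (the 'preserving query-positive pairs' step)."""
    corpus_fa = {doc_id: translate_fa(text) for doc_id, text in corpus.items()}
    queries_fa = {qid: translate_fa(text) for qid, text in queries.items()}
    return corpus_fa, queries_fa, qrels

# Toy English split (illustrative, not real data).
corpus = {"d1": "How do I write a WordPress plugin?"}
queries = {"q1": "Creating a WP plugin"}
qrels = {"q1": {"d1"}}
c_fa, q_fa, qrels_fa = build_translated_split(corpus, queries, qrels)
```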

As noted in the *FaMTEB* paper, the **BEIR-Fa collection** (which includes this dataset) was evaluated through:

- **BM25 score comparisons**
- The **GEMBA-DA** framework, which uses LLMs to assess translation accuracy

These validation methods indicated good translation quality overall.
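For context on the BM25 comparison, a minimal generic Okapi BM25 scorer is sketched below (this is not the paper's code; `k1` and `b` use common default values):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against one tokenized query with Okapi BM25."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N          # average doc length
    df = Counter(t for d in docs_tokens for t in set(d))  # document frequencies
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy example: only the first document shares tokens with the query.
docs = [["wordpress", "plugin", "hook"], ["css", "theme", "style"]]
scores = bm25_scores(["wordpress", "plugin"], docs)
```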

## Data Splits

The full CQADupstack-Fa benchmark includes:

- **Train:** 0 samples
- **Dev:** 0 samples
- **Test:** 480,902 samples (across all CQADupstack-Fa datasets)

This WordPress-specific subset contains approximately **49.9k examples**. Per-split counts for this sub-dataset are not reported separately in the FaMTEB paper; refer to the dataset files on Hugging Face for the exact distribution.
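Retrieval results on MTEB-style leaderboards are typically reported as nDCG@10 (this card does not state the metric explicitly, so treat that as an assumption). A minimal binary-relevance sketch of the metric, given a ranked list and the relevant ids from the qrels:

```python
import math

def ndcg_at_k(ranked_ids, relevant, k=10):
    """Binary-relevance nDCG@k: ranked_ids is the retrieved doc ids in rank
    order; relevant is the set of relevant doc ids from the qrels."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc_id in enumerate(ranked_ids[:k]) if doc_id in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

# Toy example: one irrelevant doc ranked second lowers the score below 1.0.
score = ndcg_at_k(["d1", "d9", "d3"], {"d1", "d3"})
```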