---
configs:
  - config_name: default
    data_files:
      - split: test
        path: qrels/test.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---
## Dataset Summary

**CQADupstack-wordpress-Fa** is a Persian (Farsi) dataset created for the **Retrieval** task, focused on identifying **duplicate or semantically equivalent questions** in the domain of WordPress development. It is a **translated version** of the *WordPress Development StackExchange* data from the English **CQADupstack** dataset and is part of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard).

* **Language(s):** Persian (Farsi)  
* **Task(s):** Retrieval (Duplicate Question Retrieval)  
* **Source:** Translated from CQADupstack-WordPress (BEIR benchmark) using Google Translate  
* **Part of FaMTEB:** Yes — under the BEIR-Fa collection

## Supported Tasks and Leaderboards

This dataset is designed to test the ability of **text embedding models** to retrieve semantically similar or duplicate questions from technical user forums. Evaluation results appear on the **Persian MTEB Leaderboard** on Hugging Face Spaces (filter by language: Persian).
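As a toy illustration of the retrieval setup (not the official evaluation code), the sketch below ranks candidate questions by cosine similarity of bag-of-words count vectors; in an actual evaluation, a Persian text embedding model would supply the vectors, and the example texts here are hypothetical English stand-ins:

```python
import math
from collections import Counter

def bow_vector(text):
    """Simple bag-of-words term counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, corpus):
    """Return (doc_id, score) pairs sorted by similarity to the query, best first."""
    q = bow_vector(query)
    scored = [(doc_id, cosine(q, bow_vector(text))) for doc_id, text in corpus.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Hypothetical duplicate-question corpus (illustrative only).
corpus = {
    "d1": "how to add a custom widget in wordpress",
    "d2": "change permalink structure wordpress",
    "d3": "adding custom widgets to a wordpress sidebar",
}
ranking = rank("add a custom widget", corpus)
# ranking[0] is the closest candidate question ("d1" here)
```

Retrieval metrics such as nDCG@10 are then computed from such rankings against the relevance judgments in the qrels split.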

## Construction

The dataset was constructed by:

- Extracting the WordPress subforum data from the English CQADupstack dataset
- Translating it into Persian using the **Google Translate API**
- Preserving original query-positive pairs for Retrieval task evaluation
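BEIR-style retrieval datasets are typically distributed as `corpus.jsonl`, `queries.jsonl`, and a qrels file, matching the three configs declared in this card's frontmatter. The sketch below shows how such records can be joined into query–document pairs; the field names (`_id`, `text`, `query-id`, `corpus-id`, `score`) follow the common BEIR convention and are assumptions to be checked against the actual files:

```python
import json

# Hypothetical records following the common BEIR JSONL layout; field names
# are assumptions and should be verified against the actual data files.
corpus_jsonl = [
    '{"_id": "d1", "title": "", "text": "sample corpus question one"}',
    '{"_id": "d2", "title": "", "text": "sample corpus question two"}',
]
queries_jsonl = [
    '{"_id": "q1", "text": "sample query"}',
]
qrels_jsonl = [
    '{"query-id": "q1", "corpus-id": "d2", "score": 1}',
]

# Index documents and queries by id.
corpus = {r["_id"]: r for r in map(json.loads, corpus_jsonl)}
queries = {r["_id"]: r for r in map(json.loads, queries_jsonl)}

# Resolve each relevance judgment to the actual query and document text.
pairs = []
for rel in map(json.loads, qrels_jsonl):
    q = queries[rel["query-id"]]
    d = corpus[rel["corpus-id"]]
    pairs.append((q["text"], d["text"], rel["score"]))
```

The same join is what a retrieval evaluator performs internally when scoring a model's rankings against the qrels.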

As noted in the *FaMTEB* paper, the **BEIR-Fa collection** (which includes this dataset) was evaluated through:

- **BM25 score comparisons**
- **GEMBA-DA** framework using LLMs to assess translation accuracy

These validation methods indicated good translation quality overall.

## Data Splits

The full CQADupstack-Fa benchmark includes:

- **Train:** 0 samples  
- **Dev:** 0 samples  
- **Test:** 480,902 samples (across all CQADupstack-Fa datasets)

This WordPress-specific subset contains approximately **49.9k examples**. Per-split counts for this sub-dataset are not detailed separately in the FaMTEB paper; refer to the data files in this repository for the exact distribution.