---
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
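The three configs above follow the standard BEIR layout: a corpus file, a queries file, and qrels linking the two. The sketch below shows how the records join into (query, document, relevance) triples; the field names (`_id`, `text`, `title`, `query-id`, `corpus-id`, `score`) are the usual BEIR convention and should be verified against the actual files. The toy records are English stand-ins for the Persian data.

```python
import json

# Toy stand-ins for one line each of corpus.jsonl, queries.jsonl, and
# qrels/test.jsonl. Field names follow the common BEIR convention and
# are assumptions to check against the real files.
corpus_lines = [
    '{"_id": "d1", "title": "pipes", "text": "How do Unix pipes work?"}',
]
query_lines = [
    '{"_id": "q1", "text": "what is a pipe in unix"}',
]
qrel_lines = [
    '{"query-id": "q1", "corpus-id": "d1", "score": 1}',
]

# Index corpus and queries by their ids.
corpus = {r["_id"]: r for r in map(json.loads, corpus_lines)}
queries = {r["_id"]: r for r in map(json.loads, query_lines)}

# Resolve each qrel into a (query text, document text, relevance) triple.
pairs = []
for r in map(json.loads, qrel_lines):
    q = queries[r["query-id"]]
    d = corpus[r["corpus-id"]]
    pairs.append((q["text"], d["text"], r["score"]))

print(pairs[0])
```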
## Dataset Summary
CQADupstack-unix-Fa is a Persian (Farsi) dataset for the retrieval task, focused on duplicate question retrieval. It is a machine translation of the "unix" (Unix & Linux Stack Exchange) subforum from the original English CQADupstack dataset used in the BEIR benchmark, and it is part of the FaMTEB (Farsi Massive Text Embedding Benchmark) under the BEIR-Fa collection.
- Language(s): Persian (Farsi)
- Task(s): Retrieval (Duplicate Question Retrieval)
- Source: Translated from the "unix" Stack Exchange subforum using the Google Translate API
- Part of FaMTEB: Yes — part of the BEIR-Fa collection
## Supported Tasks and Leaderboards
This dataset evaluates models' ability to identify semantically similar or duplicate questions in technical domains (specifically Unix & Linux systems). Performance can be compared on the Persian MTEB Leaderboard (filter by language: Persian).
## Construction
- Translated from the "unix" subforum of the CQADupstack dataset using the Google Translate API
- Part of the broader BEIR benchmark collection targeting community question answering (CQA)
Translation quality was validated using:
- BM25 score comparisons
- LLM-based assessment (GEMBA-DA framework)
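The BM25 comparison checks, roughly, whether lexical retrieval over the translated corpus behaves consistently with the English original. A minimal Okapi BM25 scorer illustrating the scoring being compared (illustrative only; the exact validation setup is described in the FaMTEB paper, and the tokenized toy documents below are hypothetical):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of `query` (a token list) against each doc in `docs`
    (a list of token lists)."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n  # average document length
    # Document frequency of each term.
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores

docs = [
    ["how", "do", "unix", "pipes", "work"],
    ["change", "file", "permissions", "with", "chmod"],
]
print(bm25_scores(["unix", "pipes"], docs))
```

Comparing such scores between the English source and its Persian translation gives a rough, model-free signal of whether translation preserved retrieval-relevant content.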
## Data Splits
As reported in the FaMTEB paper (Table 5):
- Train: 0 samples
- Dev: 0 samples
- Test: included in 480,902 aggregate test samples (across all CQADupstack-Fa datasets)
Approximate total dataset size: 50.1k examples (user-provided figure)