---
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
## Dataset Summary

**ArguAna-Fa** is a Persian (Farsi) dataset designed for the **Retrieval** task, focusing on **argument and counter-argument retrieval**. It is a translated version of the original English **ArguAna** dataset used in the BEIR benchmark and is part of **FaMTEB** (Farsi Massive Text Embedding Benchmark) under the BEIR-Fa suite.

- **Language(s):** Persian (Farsi)
- **Task(s):** Retrieval (Argument Retrieval, Counter-Argument Retrieval)
- **Source:** Translated from the English ArguAna dataset using Google Translate
- **Part of FaMTEB:** Yes, part of the BEIR-Fa collection
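The three configs above correspond to BEIR-style JSONL files (`corpus.jsonl`, `queries.jsonl`, `qrels/test.jsonl`). Below is a minimal standard-library sketch of parsing such lines, assuming BEIR's usual field names (`_id`, `title`, `text` for the corpus and queries; `query-id`, `corpus-id`, `score` for qrels); these field names are an assumption based on BEIR convention, not confirmed by this card:

```python
import json
from io import StringIO

# Hypothetical sample lines mirroring the BEIR JSONL layout (field names assumed).
corpus_jsonl = StringIO('{"_id": "doc1", "title": "", "text": "An example argument."}\n')
queries_jsonl = StringIO('{"_id": "q1", "text": "An example counter-argument query."}\n')
qrels_jsonl = StringIO('{"query-id": "q1", "corpus-id": "doc1", "score": 1}\n')

def read_jsonl(fh):
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in fh if line.strip()]

# Index documents by id, queries by id, and relevance judgments by (query, doc) pair.
corpus = {d["_id"]: d for d in read_jsonl(corpus_jsonl)}
queries = {q["_id"]: q["text"] for q in read_jsonl(queries_jsonl)}
qrels = {(r["query-id"], r["corpus-id"]): r["score"] for r in read_jsonl(qrels_jsonl)}

print(qrels[("q1", "doc1")])  # → 1
```

The same parsing applies whether the files are read from disk or streamed from the Hub.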
## Supported Tasks and Leaderboards

ArguAna-Fa is used to benchmark models on their ability to retrieve relevant **counter-arguments** given an input argument. This tests the **semantic understanding of argumentation** in Persian. Performance can be evaluated on the **Persian MTEB Leaderboard** (filter by language: Persian).
## Construction

- The dataset was created by translating the English ArguAna dataset using the **Google Translate API**
- Originally sourced from online debate portals, focusing on **argumentative reasoning and contrast**

As noted in the FaMTEB paper, the translation quality was evaluated by:

- Comparing **BM25 retrieval scores** between English and Persian
- Using the **GEMBA-DA framework** (LLM-based assessment) to ensure translation accuracy
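As a rough illustration of the BM25-based check described above (a sketch, not the paper's actual evaluation pipeline), one can score queries against the original and the translated corpus with Okapi BM25 and compare the resulting retrieval scores. A minimal self-contained implementation:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    n_docs = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n_docs
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for d in docs_tokens:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        score = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

# Toy documents standing in for corpus passages (illustrative only).
docs = [
    "capital punishment deters crime".split(),
    "free trade benefits all nations".split(),
]
print(bm25_scores("does punishment deter crime".split(), docs))
```

Running the same queries over the English and Persian corpora and comparing aggregate retrieval metrics gives a coarse signal of whether translation degraded retrievability.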
## Data Splits

According to the FaMTEB paper (Table 5):

- **Train:** 0 samples
- **Dev:** 0 samples
- **Test:** 10,080 samples

> Approximate total dataset size: **11.5k examples** (user-provided figure)