---
configs:
- config_name: default
  data_files:
  - split: train
    path: qrels/train.jsonl
  - split: dev
    path: qrels/dev.jsonl
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
## Dataset Summary

**FEVER-Fa** is a Persian (Farsi) dataset designed for the **Retrieval** task. It is a key component of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard) and is a translated version of the original English FEVER dataset. It is tailored to evaluating models on automatic fact-checking: given a claim, a model must retrieve evidential sentences from a pre-processed Wikipedia corpus that support or refute it.

* **Language(s):** Persian (Farsi)
* **Task(s):** Retrieval (Fact Checking, Evidence Retrieval)
* **Source:** Translated from the English [FEVER dataset](https://fever.ai/) using Google Translate.
* **Part of FaMTEB:** Yes (specifically, part of the BEIR-Fa collection within FaMTEB)
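The `corpus`, `queries`, and `default` (qrels) configs are plain JSONL files. A minimal sketch of how they fit together, assuming the standard BEIR-style schema (`_id`/`title`/`text` for documents, `_id`/`text` for queries, `query-id`/`corpus-id`/`score` for qrels) — an assumption based on the BEIR-Fa provenance, shown here on toy in-memory stand-ins rather than the real files:

```python
import json
from io import StringIO

def read_jsonl(fp):
    """Parse one JSON object per line, skipping blank lines."""
    return [json.loads(line) for line in fp if line.strip()]

# Toy stand-ins for corpus.jsonl, queries.jsonl, and qrels/test.jsonl
# (schema is an assumption; field names mirror the BEIR convention).
corpus  = read_jsonl(StringIO('{"_id": "d1", "title": "T", "text": "evidence sentence"}\n'))
queries = read_jsonl(StringIO('{"_id": "q1", "text": "a claim to verify"}\n'))
qrels   = read_jsonl(StringIO('{"query-id": "q1", "corpus-id": "d1", "score": 1}\n'))

# Resolve each relevance judgment to its evidence passage.
docs = {d["_id"]: d for d in corpus}
for judgment in qrels:
    passage = docs[judgment["corpus-id"]]["text"]
```

In practice the three configs would be pulled with `datasets.load_dataset(repo_id, config_name)` and joined the same way via the id fields.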

## Supported Tasks and Leaderboards

This dataset is primarily used to evaluate text embedding models on the **Retrieval** task. Model performance can be benchmarked and compared on the [MTEB Leaderboard on Hugging Face Spaces](https://huggingface.co/spaces/mteb/leaderboard) (filter by language: Persian).
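Retrieval performance on MTEB-style leaderboards is typically reported as nDCG@10. A minimal sketch of that metric with binary relevance judgments — an illustration of the scoring idea, not the leaderboard's exact implementation:

```python
import math

def ndcg_at_k(ranked_ids, relevant, k=10):
    """nDCG@k for one query: ranked_ids is the model's ranking,
    relevant is the set of corpus ids judged relevant in the qrels."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc_id in enumerate(ranked_ids[:k])
              if doc_id in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```

A perfect ranking scores 1.0; pushing the relevant document down the list discounts its gain logarithmically. The benchmark score is the mean over all test queries.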

## Construction

The **FEVER-Fa** dataset was created by machine-translating the original English FEVER (Fact Extraction and VERification) dataset into Persian using the Google Translate API.

As detailed in the "FaMTEB: Massive Text Embedding Benchmark in Persian Language" paper, the quality of the BEIR-Fa collection (of which FEVER-Fa is a part) was evaluated in two ways:
1. Comparing BM25 retrieval scores between the original English versions and the translated Persian versions, which showed comparable performance.
2. Using Large Language Models (LLMs) to assess translation quality directly (the GEMBA-DA framework), which indicated good overall translation quality, competitive with translations produced by other prominent LLMs.
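The BM25 comparison in point 1 works because BM25 is a purely lexical ranking function: it depends only on term frequencies and document lengths, so it can be run unchanged on either language's corpus. A minimal Okapi BM25 sketch over whitespace-tokenized documents (an illustration of the ranking function, not the paper's actual evaluation setup):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized doc against the query terms."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [["persian", "fact", "checking"], ["weather", "report", "today"]]
scores = bm25_scores(["fact", "checking"], docs)
```

Running the same scoring over the English corpus with English queries and over the translated corpus with translated queries, then comparing the resulting retrieval metrics, gives a language-agnostic sanity check on translation quality.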

## Data Splits

The data is split into training and test sets as defined in the FaMTEB paper (Table 5); the development split is empty:

* **Train:** 5,556,643 samples
* **Development (Dev):** 0 samples
* **Test:** 5,424,495 samples