---
configs:
- config_name: default
  data_files:
  - split: train
    path: qrels/train.jsonl
  - split: dev
    path: qrels/dev.jsonl
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
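The three configs above follow the usual BEIR layout: a corpus file, a queries file, and qrels files linking the two. A minimal sketch of joining one relevance judgment back to its query and document text, assuming standard BEIR field names (`_id`/`text` in corpus and queries, `query-id`/`corpus-id`/`score` in qrels — check the actual files) and toy English stand-ins for the Persian content:

```python
import json

# Toy stand-ins for one line each of corpus.jsonl, queries.jsonl, and
# qrels/train.jsonl (field names assumed from the standard BEIR layout).
corpus_line = '{"_id": "doc1", "text": "Index funds spread risk across many stocks."}'
query_line  = '{"_id": "q1", "text": "Are index funds a safe investment?"}'
qrel_line   = '{"query-id": "q1", "corpus-id": "doc1", "score": 1}'

# Index corpus and queries by id.
corpus  = {d["_id"]: d["text"] for d in [json.loads(corpus_line)]}
queries = {q["_id"]: q["text"] for q in [json.loads(query_line)]}

# Resolve a qrel into (query text, document text, relevance score).
qrel = json.loads(qrel_line)
pair = (queries[qrel["query-id"]], corpus[qrel["corpus-id"]], qrel["score"])
```

In practice the same join is done over the `corpus`, `queries`, and `default` configs loaded via `datasets.load_dataset`.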
## Dataset Summary
**FiQA2018-Fa** is a Persian (Farsi) dataset designed for the **Retrieval** task, specifically targeting **opinion-based question answering** in the **financial domain**. It is a translated version of the original English **FiQA 2018** dataset and a core component of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard), under the **BEIR-Fa** collection.
- **Language(s):** Persian (Farsi)
- **Task(s):** Retrieval (Opinion-based Question Answering, Financial QA)
- **Source:** Translated from the English FiQA 2018 dataset using Google Translate
- **Part of FaMTEB:** Yes — under BEIR-Fa
## Supported Tasks and Leaderboards
The dataset evaluates **text embedding models** on their ability to retrieve **relevant financial content** in response to **subjective, opinion-based questions**. Results are benchmarked on the **Persian MTEB Leaderboard** on Hugging Face Spaces (language filter: Persian).
## Construction
Steps in dataset creation:
- Translation of the **original English FiQA 2018** dataset (based on StackExchange "Investment" forum posts) using the **Google Translate API**
- The dataset retains mappings between **user questions** and **relevant opinion-based answers**
As outlined in the *FaMTEB* paper, the BEIR-Fa datasets (including FiQA2018-Fa) underwent:
- **BM25 retrieval comparison** with the original English
- **Translation quality analysis** using the **GEMBA-DA LLM evaluation framework**
These evaluations confirmed **good translation quality** for retrieval benchmarking.
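The BM25 comparison above scores the translated corpus with the same lexical ranking function used on the English original. A self-contained sketch of BM25 scoring on a toy corpus (plain Okapi BM25 with illustrative `k1`/`b` defaults; the paper's exact implementation and parameters are not specified here):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against the query."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # Document frequency of each distinct query term.
    df = {t: sum(1 for d in corpus_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        s = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

corpus = ["dividend stocks pay regular income",
          "growth stocks reinvest profits",
          "bonds pay fixed interest income"]
tokenized = [d.split() for d in corpus]
query = "stocks that pay income".split()

scores = bm25_scores(query, tokenized)
best = max(range(len(corpus)), key=scores.__getitem__)
# The dividend/income document (index 0) matches three query terms
# and ranks first.
```

Running the same queries through BM25 over both the English and the translated corpus, then comparing the resulting rankings, is the kind of sanity check the paper reports.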
## Data Splits
According to the FaMTEB paper (Table 5):
- **Train:** 71,804 samples
- **Dev:** 0 samples
- **Test:** 59,344 samples
**Total:** 131,148 examples