---
language:
- en
license: cc-by-sa-4.0
tags:
- retrieval
- text-retrieval
- beir
- finance
- question-answering
- benchmark
pretty_name: BEIR FiQA (retrieval)
size_categories:
- 10K<n<100K
task_categories:
- text-retrieval
---
# FiQA (BEIR) — Financial QA retrieval
## Dataset description
**FiQA** (*Financial Question Answering*) is a domain-specific benchmark for retrieving passages that answer real financial questions. It was introduced as part of the **WWW 2018** workshop challenge on *Financial Opinion Mining and Question Answering*, motivated by the need for NLP and IR methods that cope with noisy, opinion-rich, domain-specific text in finance.
**BEIR** (*Benchmarking IR*) repackaged FiQA—along with many other public corpora—as a standard **retrieval** benchmark for **zero-shot** evaluation of dense, sparse, and hybrid information retrieval models across heterogeneous tasks and domains. FiQA is one of the **question-focused retrieval** tasks in BEIR: systems must rank passages from a financial Q&A corpus so that human-judged relevant documents appear at the top of the list.
This repository (`orgrctera/beir_fiqa`) provides **train / dev / test** splits in **Parquet** form for use in retrieval evaluation pipelines. Each row is one **query** with **relevance judgments** pointing at corpus document identifiers (as distributed in the BEIR FiQA benchmark).
### Scale and domain (BEIR FiQA)
Typical statistics for the BEIR FiQA setting (corpus + queries + qrels) are on the order of:
- **Tens of thousands** of short **documents** (passages) drawn from English financial community Q&A.
- **Thousands** of **queries** (natural-language financial questions).
- **Binary** (or graded) **qrels** linking each query to one or more relevant document IDs.
Exact counts follow the standard [BEIR FiQA](https://github.com/beir-cellar/beir) release; see the upstream project for version-precise figures.
## Task: retrieval
The task is **ad hoc passage (or document) retrieval**:
1. **Input:** a natural-language **question** (the query).
2. **Output:** a ranked list of **document IDs** from the FiQA corpus (or scores over the full collection), such that **relevant** IDs—according to the official qrels—receive high rank.
Downstream metrics are standard IR metrics (e.g., **nDCG@k**, **Recall@k**, **MRR**), as implemented in BEIR’s evaluation scripts or in frameworks such as Pyserini / BEIR’s own API.
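For intuition, a minimal single-query **nDCG@k** sketch is shown below. This is a plain-Python illustration of the metric, not the official BEIR or pytrec_eval implementation (those should be used for reported numbers); `ranked_ids` and `qrels` are assumed inputs in the shapes described above.

```python
import math

def ndcg_at_k(ranked_ids, qrels, k=10):
    """nDCG@k for a single query.

    ranked_ids: list of corpus doc IDs, best first (system output).
    qrels: dict mapping relevant doc ID -> graded relevance
           (typically 1 in FiQA's binary qrels).
    """
    # Discounted cumulative gain over the top-k ranked documents.
    dcg = sum(
        qrels.get(doc_id, 0) / math.log2(rank + 2)
        for rank, doc_id in enumerate(ranked_ids[:k])
    )
    # Ideal DCG: judged gains placed in the best possible order.
    ideal_gains = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal_gains))
    return dcg / idcg if idcg > 0 else 0.0
```

Corpus-level scores are then the mean of this value across all queries in a split.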
> **Note:** Full retrieval evaluation also requires the **corpus** (passage text keyed by ID). This dataset card describes the **query + qrels** side as prepared for CTERA-style evaluation rows; align corpus IDs with the same BEIR FiQA corpus you use for indexing.
## Data format (this repository)
Each record includes:
| Field | Description |
|--------|-------------|
| `id` | UUID for this example row. |
| `input` | The **query text** (financial question). |
| `expected_output` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow the BEIR qrels convention (typically `1` for relevant in binary settings). |
| `metadata.query_id` | Original BEIR / FiQA query identifier (string). |
| `metadata.split` | Split name: `train`, `dev`, or `test`. |
### Example 1
```json
{
"id": "730c40a0-689d-45de-8044-76a8f6a3b1e1",
"input": "If I go to a seminar held overseas, may I claim my flights on my tax return?",
"expected_output": "[{\"id\": \"324513\", \"score\": 1}, {\"id\": \"351169\", \"score\": 1}, {\"id\": \"104464\", \"score\": 1}]",
"metadata.query_id": "1857",
"metadata.split": "train"
}
```
### Example 2
```json
{
"id": "d47d4e77-7b21-410c-b568-fbd606e79e13",
"input": "When is the right time to buy a new/emerging technology?",
"expected_output": "[{\"id\": \"290647\", \"score\": 1}, {\"id\": \"90238\", \"score\": 1}, {\"id\": \"510872\", \"score\": 1}, {\"id\": \"157480\", \"score\": 1}]",
"metadata.query_id": "3483",
"metadata.split": "train"
}
```
## References
### BEIR benchmark (FiQA as a subset)
**Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych**
*BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models*
NeurIPS 2021 (Datasets and Benchmarks Track).
**Abstract (from arXiv):** *“Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”*
- Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) — also [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ) (NeurIPS 2021 Datasets & Benchmarks); code and data: [BEIR on GitHub](https://github.com/beir-cellar/beir).
### FiQA @ WWW 2018 (original challenge)
The FiQA challenge at **The Web Conference (WWW) 2018** promoted research on **financial opinion mining** and **opinion-aware question answering**, including retrieval-oriented settings over community Q&A style text.
- Challenge overview: [ACM DL entry](https://dl.acm.org/doi/10.1145/3184558.3192301) — *WWW’18 Open Challenge: Financial Opinion Mining and Question Answering* (organizers include Maia, Handschuh, Freitas, and colleagues).
- Community site (historical): [FiQA 2018](https://sites.google.com/view/fiqa/home).
### Related resources
- **BEIR FiQA** mirrors on Hugging Face (raw BEIR layout), e.g. [`BeIR/fiqa`](https://huggingface.co/datasets/BeIR/fiqa), for corpus / queries / qrels in classic BEIR JSONL + TSV form.
- **IRDS** packaging: [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa) exposes FiQA via the ir-datasets tooling.
## Citation
If you use the **BEIR** benchmark or FiQA through BEIR, cite the BEIR paper (BibTeX from the [official repository](https://github.com/beir-cellar/beir)). If you cite the **original FiQA challenge**, use the WWW 2018 challenge publication above.
## License
Content in FiQA traces to public community Q&A sources (see FiQA documentation); Stack Exchange network content is typically shared under **Creative Commons** terms. This card marks **`cc-by-sa-4.0`** as a common choice for Stack Exchange–derived text; verify against your corpus snapshot and upstream terms if compliance is strict.
---
*Dataset card maintained for the `orgrctera/beir_fiqa` Hub repository.*