---
language:
- en
- zh
- ja
- es
- el
license: apache-2.0
size_categories:
- n<1K
task_categories:
- question-answering
- table-question-answering
pretty_name: PolyFiQA-Expert
dataset_info:
features:
- name: task_id
dtype: string
- name: query
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 5184523
num_examples: 76
download_size: 1660815
dataset_size: 5184523
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
tags:
- finance
- multilingual
---
# Dataset Card for PolyFiQA-Expert
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://huggingface.co/collections/TheFinAI/multifinben-6826f6fc4bc13d8af4fab223
- **Repository:** https://huggingface.co/datasets/TheFinAI/polyfiqa-expert
- **Paper:** [MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation](https://huggingface.co/papers/2506.14028)
- **Leaderboard:** https://huggingface.co/spaces/TheFinAI/Open-FinLLM-Leaderboard
### Dataset Summary
**PolyFiQA-Expert** is a multilingual financial question-answering dataset designed to evaluate expert-level financial reasoning in low-resource and multilingual settings. Each instance consists of a task identifier, a query prompt, an associated financial question, and the correct answer. The Expert split emphasizes complex, high-level financial understanding, requiring deeper domain knowledge and nuanced reasoning.
### Supported Tasks and Leaderboards
- **Tasks:**
- question-answering
- table-question-answering
- **Evaluation Metrics:**
- ROUGE-1
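The reported metric is ROUGE-1, the unigram-overlap F1 between a model's answer and the reference answer. A minimal sketch of the computation (whitespace tokenization; the official scorer may differ in tokenization, stemming, and multilingual handling):

```python
from collections import Counter

def rouge1_f(reference: str, hypothesis: str) -> float:
    """Unigram-overlap ROUGE-1 F1 with naive whitespace tokenization.

    Illustrative only; not the exact scorer used by the benchmark."""
    ref = Counter(reference.lower().split())
    hyp = Counter(hypothesis.lower().split())
    overlap = sum((ref & hyp).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 4 of 5 unigrams match in both directions -> P = R = F1 = 0.8
score = rouge1_f("net income rose 12 percent", "net income rose 10 percent")
```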
### Languages
- English (en)
- Chinese (zh)
- Japanese (ja)
- Spanish (es)
- Greek (el)
## Dataset Structure
### Data Instances
Each instance in the dataset contains:
- `task_id`: A unique identifier for the query-task pair.
- `query`: A brief query statement from the financial domain.
- `question`: The full question posed based on the query context.
- `answer`: The correct answer string.
### Data Fields
| Field | Type | Description |
|-----------|--------|----------------------------------------------|
| task_id | string | Unique ID per task |
| query | string | Financial query (short form) |
| question | string | Full natural-language financial question |
| answer | string | Ground-truth answer to the question |
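To make the schema concrete, here is a hypothetical record (all field values below are invented for illustration); in practice the test split is loaded with `datasets.load_dataset("TheFinAI/polyfiqa-expert", split="test")`:

```python
# Hypothetical example matching the schema above; the values are
# invented and do not come from the actual dataset.
record = {
    "task_id": "polyfiqa_expert_001",
    "query": "FY2023 revenue growth",
    "question": (
        "Based on the annual report excerpts, what was the company's "
        "year-over-year revenue growth in FY2023?"
    ),
    "answer": "Revenue grew 8.4% year over year in FY2023.",
}

# All four fields are plain strings, per the feature spec.
assert all(isinstance(v, str) for v in record.values())
```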
### Data Splits
| Split | # Examples | Size (bytes) |
|-------|------------|--------------|
| test | 76 | 5,184,523 |
## Dataset Creation
### Curation Rationale
PolyFiQA-Expert was curated to probe the financial reasoning capabilities of large language models in expert-level scenarios.
### Source Data
#### Initial Data Collection
The source data was drawn from a diverse collection of English financial reports. Questions were derived from real-world financial scenarios and manually adapted to a concise QA format.
#### Source Producers
Data was created by researchers and annotators with backgrounds in finance, NLP, and data curation.
### Annotations
#### Annotation Process
Questions and answers were carefully authored and validated through a multi-round expert annotation process to ensure fidelity and depth.
#### Annotators
A team of finance researchers and data scientists.
### Personal and Sensitive Information
The dataset contains no personal or sensitive information. All content is synthetic or anonymized for safe usage.
## Considerations for Using the Data
### Social Impact of Dataset
PolyFiQA-Expert supports research in multilingual financial NLP and question answering, with applications in risk analysis, regulatory auditing, and financial advising tools.
### Discussion of Biases
- May over-represent English financial contexts.
- Questions emphasize clarity and answerability over real-world ambiguity.
### Other Known Limitations
- Limited size (76 examples).
- Focused on expert-level questions; findings may not generalize to other financial reasoning tasks.
## Additional Information
### Dataset Curators
- The FinAI Team
### Licensing Information
- **License:** Apache License 2.0
### Citation Information
If you use this dataset, please cite:
```bibtex
@misc{peng2025multifinbenmultilingualmultimodaldifficultyaware,
title={MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation},
author={Xueqing Peng and Lingfei Qian and Yan Wang and Ruoyu Xiang and Yueru He and Yang Ren and Mingyang Jiang and Jeff Zhao and Huan He and Yi Han and Yun Feng and Yuechen Jiang and Yupeng Cao and Haohang Li and Yangyang Yu and Xiaoyu Wang and Penglei Gao and Shengyuan Lin and Keyi Wang and Shanshan Yang and Yilun Zhao and Zhiwei Liu and Peng Lu and Jerry Huang and Suyuchen Wang and Triantafillos Papadopoulos and Polydoros Giannouris and Efstathia Soufleri and Nuo Chen and Guojun Xiong and Zhiyang Deng and Yijia Zhao and Mingquan Lin and Meikang Qiu and Kaleb E Smith and Arman Cohan and Xiao-Yang Liu and Jimin Huang and Alejandro Lopez-Lira and Xi Chen and Junichi Tsujii and Jian-Yun Nie and Sophia Ananiadou and Qianqian Xie},
year={2025},
eprint={2506.14028},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.14028},
}
```
### Code to reproduce results
1. Navigate to the evaluation folder:
```bash
cd FinBen/finlm_eval/
```
2. Create and activate a new conda environment:
```bash
conda create -n finben python=3.12
conda activate finben
```
3. Install the required dependencies:
```bash
pip install -e .
pip install -e .[vllm]
```
4. Log in to Hugging Face by setting your token as an environment variable:
```bash
export HF_TOKEN="your_hf_token"
```
5. Model Evaluation
6. Navigate to the FinBen directory:
```bash
cd FinBen/
```
7. Set the VLLM worker multiprocessing method:
```bash
export VLLM_WORKER_MULTIPROC_METHOD="spawn"
```
8. Run evaluation:
**Important notes on evaluation:**
- 0-shot setting: Use `num_fewshot=0` and `lm-eval-results-gr-0shot` as the results repository.
- 5-shot setting: Use `num_fewshot=5` and `lm-eval-results-gr-5shot` as the results repository.
- Base models: Remove `apply_chat_template`.
- Instruction models: Use `apply_chat_template`.
**For `gr` tasks**, execute the following command:
```bash
lm_eval --model vllm \
--model_args "pretrained=meta-llama/Llama-3.2-1B-Instruct,tensor_parallel_size=4,gpu_memory_utilization=0.8,max_model_len=1024" \
--tasks gr \
--num_fewshot 5 \
--batch_size auto \
--output_path results \
--hf_hub_log_args "hub_results_org=TheFinAI,details_repo_name=lm-eval-results-gr-5shot,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False" \
--log_samples \
--apply_chat_template \
--include_path ./tasks
```
**For the `gr_long` task**, execute the following command:
```bash
lm_eval --model vllm \
--model_args "pretrained=Qwen/Qwen2.5-72B-Instruct,tensor_parallel_size=4,gpu_memory_utilization=0.8,max_length=8192" \
--tasks gr_long \
--num_fewshot 5 \
--batch_size auto \
--output_path results \
--hf_hub_log_args "hub_results_org=TheFinAI,details_repo_name=lm-eval-results-gr-5shot,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False" \
--log_samples \
--apply_chat_template \
--include_path ./tasks
```