---
language:
- en
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
dataset_info:
  features:
  - name: query
    dtype: string
  - name: dqc_id
    dtype: string
  - name: answer
    dtype: string
  - name: choices
    sequence: string
  - name: gold
    dtype: int64
  - name: id
    dtype: int64
  splits:
  - name: test
    num_bytes: 46455162
    num_examples: 440
  download_size: 4278211
  dataset_size: 46455162
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
papers:
- title: >-
    FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for
    Evaluating LLMs
  authors:
  - Yan Wang
  - Keyi Wang
  - Shanshan Yang
  - Jaisal Patel
  - Jeff Zhao
  - Fengran Mo
  - Xueqing Peng
  - Lingfei Qian
  - Jimin Huang
  - Guojun Xiong
  - Xiao-Yang Liu
  - Jian-Yun Nie
  url: https://arxiv.org/abs/2510.08886
  conference: arXiv preprint, 2025
tags:
- finance
- auditing
- xbrl
- gaap
- llm
- benchmark
- financial-reasoning
---
# 🧾 FinAuditing Benchmark
This dataset is introduced in the paper
[FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs](https://arxiv.org/abs/2510.08886)
by Yan Wang, Keyi Wang, Shanshan Yang, Jaisal Patel, Jeff Zhao, Fengran Mo, Xueqing Peng, Lingfei Qian, Jimin Huang, Guojun Xiong, Xiao-Yang Liu, and Jian-Yun Nie (2025).
- GitHub repository: https://github.com/The-FinAI/FinAuditing.git
- Evaluation framework: https://github.com/The-FinAI/FinBen
## Overview
FinAuditing is the first taxonomy-aligned, structure-aware, multi-document benchmark for evaluating Large Language Models (LLMs) on financial auditing tasks. Built from real US-GAAP-compliant XBRL filings, FinAuditing defines three complementary subtasks, each targeting a distinct aspect of structured auditing reasoning: FinSM for semantic consistency, FinRE for relational consistency, and FinMR for numerical consistency. It further proposes a unified evaluation framework that integrates retrieval, classification, and reasoning metrics across these subtasks.
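As an illustration of the numerical-consistency idea behind FinMR, the comparison between a reported value and the value recomputed from its calculation components can be sketched as follows. This is a simplified sketch: the function name, weighted-sum formulation, and tolerance are illustrative assumptions, not part of the released benchmark, which derives calculation relations from the US-GAAP taxonomy.

```python
def verify_reported_value(reported: float, components: list[float],
                          weights: list[float], rel_tol: float = 1e-6) -> bool:
    """FinMR-style check (illustrative): recompute a value from its
    calculation components and compare it with the reported value."""
    computed = sum(w * c for w, c in zip(weights, components))
    # Relative comparison so the check works across magnitudes.
    denom = max(abs(reported), abs(computed), 1.0)
    return abs(reported - computed) / denom <= rel_tol

# A total reported as 150.0 with components 100.0 + 50.0 is consistent:
print(verify_reported_value(150.0, [100.0, 50.0], [1.0, 1.0]))  # True
# A total reported as 151.0 is flagged as inconsistent:
print(verify_reported_value(151.0, [100.0, 50.0], [1.0, 1.0]))  # False
```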
## Datasets Released
| Dataset | Description |
|---|---|
| FinSM | Evaluation set for the FinSM subtask of the FinAuditing benchmark. The task follows an information-retrieval paradigm: given a query describing a financial term (either currency or concentration of credit risk), an XBRL filing, and the US-GAAP taxonomy, the model must retrieve the set of mismatched US-GAAP tags. |
| FinRE | Evaluation set for the FinRE subtask of the FinAuditing benchmark. This is a relation-extraction task: given two elements $e_1$ and $e_2$, an XBRL filing, and the US-GAAP taxonomy, the model must classify the relation between the elements into one of three relation-error types. |
| FinMR | Evaluation set for the FinMR subtask of the FinAuditing benchmark. This is a mathematical-reasoning task: given two questions $q_1$ and $q_2$, where $q_1$ asks for the extraction of a reported value and $q_2$ for the calculation of the corresponding real value, along with an XBRL filing and the US-GAAP taxonomy, the model must extract the reported value for a given instance, compute its numeric value, and use the computed value to verify whether the reported value is correct. |
| FinSM_Sub | FinSM subset for ICAIF 2025. |
| FinRE_Sub | FinRE subset for ICAIF 2025. |
| FinMR_Sub | FinMR subset for ICAIF 2025. |
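Each example follows the schema declared in the metadata above (`query`, `dqc_id`, `answer`, `choices`, `gold`, `id`). A minimal sketch of validating one record against that schema is shown below; the sample field values are invented for illustration and do not come from the dataset.

```python
# Field types as declared in the dataset_info metadata.
SCHEMA = {
    "query": str,
    "dqc_id": str,
    "answer": str,
    "choices": list,  # sequence of strings
    "gold": int,      # index into `choices`
    "id": int,
}

def validate_record(record: dict) -> bool:
    """Check that a record has exactly the declared fields and types."""
    if set(record) != set(SCHEMA):
        return False
    if not all(isinstance(record[k], t) for k, t in SCHEMA.items()):
        return False
    # `gold` must index a valid entry in `choices`.
    return 0 <= record["gold"] < len(record["choices"])

sample = {                      # invented values, schema-conformant
    "query": "Which US-GAAP tag matches the described concept?",
    "dqc_id": "DQC_0001",
    "answer": "A",
    "choices": ["A", "B", "C"],
    "gold": 0,
    "id": 0,
}
print(validate_record(sample))  # True
```

In practice the test split would be loaded with `datasets.load_dataset(...)`; the exact Hugging Face repository id is not stated in this card, so it is omitted here.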
## Citation
If you find our benchmark useful, please cite:
```bibtex
@misc{wang2025finauditingfinancialtaxonomystructuredmultidocument,
  title={FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs},
  author={Yan Wang and Keyi Wang and Shanshan Yang and Jaisal Patel and Jeff Zhao and Fengran Mo and Xueqing Peng and Lingfei Qian and Jimin Huang and Guojun Xiong and Xiao-Yang Liu and Jian-Yun Nie},
  year={2025},
  eprint={2510.08886},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.08886},
}
```