---
language:
- en
license: cc-by-4.0
task_categories:
- question-answering
dataset_info:
features:
- name: query
dtype: string
- name: dqc_id
dtype: string
- name: answer
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 35975942
num_examples: 332
download_size: 4787195
dataset_size: 35975942
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
papers:
- title: 'FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs'
authors:
- Yan Wang
- Keyi Wang
- Shanshan Yang
- Jaisal Patel
- Jeff Zhao
- Fengran Mo
- Xueqing Peng
- Lingfei Qian
- Jimin Huang
- Guojun Xiong
- Xiao-Yang Liu
- Jian-Yun Nie
url: https://huggingface.co/papers/2510.08886
conference: arXiv preprint, 2025
tags:
- finance
- auditing
- xbrl
- gaap
- llm
- benchmark
- financial-reasoning
---
# FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs
This dataset is introduced in the paper:
**[FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs](https://huggingface.co/papers/2510.08886)**
by Yan Wang, Keyi Wang, Shanshan Yang, Jaisal Patel, Jeff Zhao, Fengran Mo, Xueqing Peng, Lingfei Qian, Jimin Huang, Guojun Xiong, Xiao-Yang Liu, and Jian-Yun Nie (2025).
**Dataset Repository:** [https://github.com/The-FinAI/FinAuditing](https://github.com/The-FinAI/FinAuditing.git)
**Evaluation Framework:** [https://github.com/The-FinAI/FinBen](https://github.com/The-FinAI/FinBen)
---
## Overview
FinAuditing is the first taxonomy-aligned, structure-aware, multi-document benchmark for evaluating Large Language Models (LLMs) on financial auditing tasks. It addresses the complexity of Generally Accepted Accounting Principles (GAAP) and the hierarchical structure of eXtensible Business Reporting Language (XBRL) filings. Built from real US-GAAP-compliant XBRL filings, FinAuditing defines three complementary subtasks: FinSM for semantic consistency, FinRE for relational consistency, and FinMR for numerical consistency, each targeting a distinct aspect of structured auditing reasoning. The benchmark aims to identify systematic limitations of modern LLMs in taxonomy-grounded financial reasoning and establish a foundation for developing trustworthy, structure-aware, and regulation-aligned financial intelligence systems.
---
## Datasets Released
The FinAuditing benchmark comprises the following sub-datasets, each available on the Hugging Face Hub:
| Dataset | Description |
|------------|----------------|
| [**FinSM**](https://huggingface.co/datasets/TheFinAI/FinSM) | Evaluation set for the FinSM subtask of the FinAuditing benchmark. The task follows an information-retrieval paradigm: given a query describing a financial term (either currency or concentration of credit risk), an XBRL filing, and a US-GAAP taxonomy, the output is the set of mismatched US-GAAP tags after retrieval. |
| [**FinRE**](https://huggingface.co/datasets/TheFinAI/FinRE) | Evaluation set for the FinRE subtask of the FinAuditing benchmark. This is a relation-extraction task: given two specific elements $e_1$ and $e_2$, an XBRL filing, and a US-GAAP taxonomy, the goal is to classify the pair into one of three relation error types. |
| [**FinMR**](https://huggingface.co/datasets/TheFinAI/FinMR) | Evaluation set for the FinMR subtask of the FinAuditing benchmark. This is a mathematical-reasoning task: given two questions $q_1$ and $q_2$, where $q_1$ asks for the extraction of a reported value and $q_2$ for the calculation of the corresponding real value, together with an XBRL filing and a US-GAAP taxonomy, the model must extract the reported value for a given instance and compute its numeric value, which is then used to verify whether the reported value is correct. |
| [**FinSM_Sub**](https://huggingface.co/datasets/TheFinAI/FinSM_Sub) | FinSM subset for ICAIF 2025. |
| [**FinRE_Sub**](https://huggingface.co/datasets/TheFinAI/FinRE_Sub) | FinRE subset for ICAIF 2025. |
| [**FinMR_Sub**](https://huggingface.co/datasets/TheFinAI/FinMR_Sub) | FinMR subset for ICAIF 2025. |
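The `dataset_info` block above declares four fields per record (`query`, `dqc_id`, `answer`, `id`). The sketch below is a minimal, illustrative way to validate records against that schema before running an evaluation; the `check_example` helper and the `sample` record are assumptions for illustration, not part of the official pipeline.

```python
# Schema mirroring the dataset_info block of this card.
FEATURES = {"query": str, "dqc_id": str, "answer": str, "id": int}

def check_example(example: dict) -> bool:
    """Return True if a record has exactly the declared fields with the declared types."""
    return (
        set(example) == set(FEATURES)
        and all(isinstance(example[name], typ) for name, typ in FEATURES.items())
    )

# Illustrative record shaped like one row of the test split (values are made up).
sample = {"query": "Which US-GAAP tags mismatch?", "dqc_id": "DQC_0001", "answer": "[]", "id": 0}
assert check_example(sample)
assert not check_example({"query": "missing fields"})
```

In practice the splits are fetched with `datasets.load_dataset("TheFinAI/FinSM", split="test")` (and likewise for the other sub-datasets), which yields records of exactly this shape.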
---
## Citation
If you find our benchmark useful, please cite:
```bibtex
@misc{wang2025finauditingfinancialtaxonomystructuredmultidocument,
title={FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs},
author={Yan Wang and Keyi Wang and Shanshan Yang and Jaisal Patel and Jeff Zhao and Fengran Mo and Xueqing Peng and Lingfei Qian and Jimin Huang and Guojun Xiong and Xiao-Yang Liu and Jian-Yun Nie},
year={2025},
eprint={2510.08886},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2510.08886},
}
```