---
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- finance
- auditing
- xbrl
- llm-evaluation
dataset_info:
  features:
  - name: query
    dtype: string
  - name: dqc_id
    dtype: string
  - name: answer
    dtype: string
  - name: choices
    sequence: string
  - name: gold
    dtype: int64
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 27602759
    num_examples: 263
  - name: test
    num_bytes: 18855923
    num_examples: 177
  download_size: 4949800
  dataset_size: 46458682
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs

The FinAuditing benchmark was presented in the paper [FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs](https://huggingface.co/papers/2510.08886).

- **Code Repository:** [https://github.com/The-FinAI/FinAuditing.git](https://github.com/The-FinAI/FinAuditing.git)
- **Evaluation Framework:** [https://github.com/The-FinAI/FinBen](https://github.com/The-FinAI/FinBen)

## Overview

FinAuditing is the first taxonomy-aligned, structure-aware, multi-document benchmark designed to evaluate Large Language Models (LLMs) on financial auditing tasks. It addresses the complexity arising from Generally Accepted Accounting Principles (GAAP) and the hierarchical structure of eXtensible Business Reporting Language (XBRL) filings. Built from real US-GAAP-compliant XBRL filings, FinAuditing defines three complementary subtasks, each targeting a distinct aspect of structured auditing reasoning.

### Datasets Released

| 📂 Dataset | 📝 Description |
|------------|----------------|
| [**FinSM**](https://huggingface.co/datasets/TheFinAI/FinSM) | Evaluation set for the FinSM subtask of the FinAuditing benchmark. The task follows an information-retrieval paradigm: given a query describing a financial term (representing either currency or concentration of credit risk), an XBRL filing, and a US-GAAP taxonomy, the output is the set of mismatched US-GAAP tags after retrieval. |
| [**FinRE**](https://huggingface.co/datasets/TheFinAI/FinRE) | Evaluation set for the FinRE subtask of the FinAuditing benchmark. This is a relation extraction task: given two specific elements $e_1$ and $e_2$, an XBRL filing, and a US-GAAP taxonomy, the goal is to classify the relation into one of three relation error types. |
| [**FinMR**](https://huggingface.co/datasets/TheFinAI/FinMR) | Evaluation set for the FinMR subtask of the FinAuditing benchmark. This is a mathematical reasoning task: given two questions $q_1$ and $q_2$, where $q_1$ asks for the extraction of a reported value and $q_2$ asks for the calculation of the corresponding real value, together with an XBRL filing and a US-GAAP taxonomy, the model must extract the reported value for a given instance, compute the numeric value for that instance, and then verify whether the reported value is correct. |
| [**FinSM_Sub**](https://huggingface.co/datasets/TheFinAI/FinSM_Sub) | FinSM subset for ICAIF 2025. |
| [**FinRE_Sub**](https://huggingface.co/datasets/TheFinAI/FinRE_Sub) | FinRE subset for ICAIF 2025. |
| [**FinMR_Sub**](https://huggingface.co/datasets/TheFinAI/FinMR_Sub) | FinMR subset for ICAIF 2025. |

## Citation

If you find our benchmark useful, please cite:

```bibtex
@misc{wang2025finauditingfinancialtaxonomystructuredmultidocument,
  title={FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs},
  author={Yan Wang and Keyi Wang and Shanshan Yang and Jaisal Patel and Jeff Zhao and Fengran Mo and Xueqing Peng and Lingfei Qian and Jimin Huang and Guojun Xiong and Xiao-Yang Liu and Jian-Yun Nie},
  year={2025},
  eprint={2510.08886},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.08886},
}
```