MichaelYang-lyx committed on
Commit 302994a · verified · 1 Parent(s): 400ef26

Update README.md

Files changed (1)
  1. README.md +62 -32
README.md CHANGED
@@ -1,6 +1,7 @@
 
 ---
 language:
- - en
 license: apache-2.0
 task_categories:
 - question-answering
@@ -18,51 +19,81 @@ pretty_name: AIDABench
 
 ## Dataset Summary
 
- As AI-driven document understanding and processing tools become increasingly prevalent, the need for rigorous evaluation standards has grown. **AIDABench** is a comprehensive benchmark designed for evaluating AI systems on complex data analytics tasks in an end-to-end manner.
- It encompasses over 600 diverse document analytical tasks grounded in realistic scenarios, involving heterogeneous data types such as spreadsheets, databases, financial reports, and operational records. The tasks are highly challenging; even human experts require 1-2 hours per question when assisted by AI tools.
 
- *(💡 Suggestion: insert Figure 1 from the paper here to show the overall framework.)*
- ![Overview of AIDABench Framework](./images/figure1_overview.png)
- *Figure 1: Overview of the AIDABench evaluation framework.*
 
- ## Dataset Structure
 
 ### Task Categories
- The dataset comprises three primary capability dimensions covering the end-to-end document processing pipeline:
- * **File Generation (43.3%):** Assesses data wrangling operations like filtering, format normalization, deduplication, and cross-sheet linkage.
- * **Question Answering (QA) (37.5%):** Evaluates analytical operations including summation, mean computation, ranking, and trend analysis.
- * **Data Visualization (19.2%):** Measures the ability to generate and adapt multiple visualization forms (bar, line, pie charts) and style customizations.
 
- *(💡 Suggestion: insert Figure 2 from the paper here to illustrate the inputs and outputs of the three task types.)*
- ![Evaluation Scenarios](./images/figure2_scenarios.png)
- *Figure 2: Example evaluation scenarios for QA, Data Visualization, and File Generation.*
 
 ### Task Complexity
- Tasks are stratified by the number of expert-level reasoning steps required:
- * **Easy (29.5%):** $\le$ 3 steps.
- * **Medium (49.4%):** 4-6 steps.
- * **Hard (21.1%):** > 7 steps.
- * **Cross-file Reasoning:** 27.4% of tasks require joint reasoning over multiple input files (up to 14 files).
 
 ### Data Formats
- Tabular files dominate the distribution (xlsx/csv account for 91.8%), complemented by DOCX and PDF formats to support mixed-type processing.
 
 
 ## Evaluation Framework
 
- All models are evaluated under a unified, tool-augmented protocol where the model receives task instructions and files and can execute arbitrary Python code in a sandboxed environment.
 
- To align with the task categories, AIDABench utilizes three dedicated LLM-based evaluators:
- 1. **QA Evaluator:** A binary judge powered by QwQ-32B.
- 2. **Visualization Evaluator:** Powered by Gemini 3 Pro, scoring both correctness and readability.
- 3. **Spreadsheet File Evaluator:** Powered by Claude Sonnet 4.5, utilizing a coarse-to-fine verification strategy.
 
- *(💡 Suggestion: insert Figure 3 from the paper here to explain how the automated evaluation works.)*
- ![Evaluator Design](./images/figure3_evaluators.png)
- *Figure 3: The design of the three types of evaluators in AIDABench.*
 
 ## Baseline Performance
- Results reveal that complex data analytics tasks remain a significant challenge. The best-performing model (Claude-Sonnet-4.5) achieved only a 59.43 pass@1 score.
 
 ## Citation
 
@@ -75,6 +106,5 @@ If you use this dataset, please cite the original paper:
 journal={arXiv preprint},
 year={2026}
 }
- ```
- Github Repository: https://github.com/MichaelYang-lyx/AIDABench
- ---
 
+ ````md
 ---
 language:
+ - zh
 license: apache-2.0
 task_categories:
 - question-answering
 
 ## Dataset Summary
 
+ **AIDABench** is a benchmark for evaluating AI systems on **end-to-end data analytics over real-world documents**. It contains **600+** diverse analytical tasks grounded in realistic scenarios and spans heterogeneous data sources such as **spreadsheets, databases, financial reports, and operational records**. Tasks are designed to be challenging, often requiring multi-step reasoning and tool use to complete reliably.
 
+ ![Overview of AIDABench Framework](images/figure1_overview.png)
 
+ *Figure 1: Overview of the AIDABench evaluation framework.*
 
+ ## Supported Tasks and Evaluation Targets
+
+ AIDABench focuses on practical document analytics workflows where a model or agent must read files, reason over structured data, and produce a final deliverable.
 
 ### Task Categories
+ The dataset is organized around three primary capability dimensions:
+
+ - **File Generation (43.3%)**
+   Data wrangling and transformation tasks such as filtering, normalization, deduplication, joins, and cross-sheet linkage, with outputs as generated files (e.g., spreadsheets).
+
+ - **Question Answering (QA) (37.5%)**
+   Analytical queries such as aggregation, averages, ranking, comparisons, and trend analysis, with outputs as final answers.
+
+ - **Data Visualization (19.2%)**
+   Chart creation/adaptation tasks (e.g., bar/line/pie) including style requirements and presentation constraints, with outputs as figures or chart files.
+
+ ![Evaluation Scenarios](images/figure2_scenarios.png)
+
+ *Figure 2: Example evaluation scenarios for QA, Data Visualization, and File Generation.*
 
 ### Task Complexity
+
+ Tasks are stratified by the number of expert-level reasoning steps required:
+
+ - **Easy (29.5%)**: ≤ 3 steps
+ - **Medium (49.4%)**: 4–6 steps
+ - **Hard (21.1%)**: ≥ 7 steps
+ - **Cross-file Reasoning**: 27.4% of tasks require reasoning over multiple input files (up to 14 files).
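The tier boundaries above can be captured in a small lookup; a minimal sketch (the function name `difficulty_tier` is illustrative and not part of any released AIDABench code):

```python
def difficulty_tier(num_steps: int) -> str:
    """Map the number of expert-level reasoning steps to an AIDABench tier."""
    if num_steps <= 3:
        return "easy"
    if num_steps <= 6:
        return "medium"
    return "hard"  # 7 or more steps
```

Using ≥ 7 for the hard tier leaves no gap between the medium and hard buckets.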
 
 ### Data Formats
+
+ Most inputs are tabular files (xlsx/csv account for 91.8%), complemented by **DOCX** and **PDF** formats to support mixed-type document processing.
 
 ## Evaluation Framework
 
+ All models are evaluated under a unified **tool-augmented protocol**: the model receives task instructions and associated files, and can execute **arbitrary Python code** within a **sandboxed environment** to complete the task.
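The execution side of such a protocol can be approximated with a minimal subprocess-based runner. This is a sketch under assumptions: the name `run_in_sandbox` is illustrative, and a real harness would add filesystem/network isolation (e.g., containers) and resource limits.

```python
import subprocess
import sys


def run_in_sandbox(code: str, timeout_s: float = 30.0) -> tuple[bool, str]:
    """Execute model-generated Python in a separate process with a timeout.

    Returns (succeeded, captured output). Illustrative only: a production
    harness would also restrict the filesystem and network access.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False, "timed out"
    ok = proc.returncode == 0
    return ok, proc.stdout if ok else proc.stderr


# Example: the agent computes an aggregate over tabular values.
ok, out = run_in_sandbox("print(sum([1200, 850, 950]))")
```

Running generated code out-of-process means a crash or hang in the model's code cannot take down the evaluation loop.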
+
+ To align with the task categories, AIDABench uses three dedicated **LLM-based evaluators**:
+
+ 1. **QA Evaluator**
+    A binary judge that determines whether the produced answer matches the reference (under the benchmark's scoring rules).
+
+ 2. **Visualization Evaluator**
+    Scores both **correctness** and **readability** of generated visualizations.
+
+ 3. **Spreadsheet File Evaluator**
+    Verifies generated spreadsheet outputs with a **coarse-to-fine** strategy, combining structural checks with sampled content validation and task-specific verification.
+
+ ![Evaluator Design](images/figure3_evaluators.png)
+
+ *Figure 3: The design of the three types of evaluators in AIDABench.*
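A coarse-to-fine check of the kind the spreadsheet evaluator describes can be sketched over plain CSV text. This is a simplified, non-LLM illustration: `coarse_to_fine_match` is a hypothetical helper, and the actual evaluator additionally handles xlsx files and task-specific rules.

```python
import csv
import io
import random


def coarse_to_fine_match(generated_csv: str, reference_csv: str,
                         sample_size: int = 50, seed: int = 0) -> bool:
    """Coarse pass: compare header, row count, and row widths.
    Fine pass: compare a random sample of cells rather than every cell.
    Returns True only if no mismatch is found."""
    gen = list(csv.reader(io.StringIO(generated_csv)))
    ref = list(csv.reader(io.StringIO(reference_csv)))
    # Coarse: cheap structural checks reject most wrong outputs early.
    if not gen or not ref or gen[0] != ref[0] or len(gen) != len(ref):
        return False
    if any(len(g) != len(r) for g, r in zip(gen, ref)):
        return False
    # Fine: validate sampled cell contents.
    cells = [(r, c) for r in range(1, len(ref)) for c in range(len(ref[0]))]
    rng = random.Random(seed)
    for r, c in rng.sample(cells, min(sample_size, len(cells))):
        if gen[r][c] != ref[r][c]:
            return False
    return True


reference = "name,total\nalice,3\nbob,5\n"
correct = "name,total\nalice,3\nbob,5\n"
wrong = "name,total\nalice,3\nbob,6\n"
```

Sampling keeps the fine pass cheap on large sheets while the coarse pass guarantees the overall shape is right.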
 
 ## Baseline Performance
+
+ Results indicate that complex, tool-augmented document analytics remains challenging: the best-performing baseline model (**Claude-Sonnet-4.5**) achieves **59.43 pass@1** on AIDABench (see the paper for full settings, model list, and breakdowns).
+
+ ## Intended Uses
+
+ AIDABench is intended for:
+
+ - Evaluating **agents** or **tool-using LLM systems** on realistic document analytics tasks
+ - Benchmarking end-to-end capabilities across **QA**, **file generation**, and **visualization**
+ - Diagnosing failure modes in multi-step, multi-file reasoning over business-like data
+
+ ## Limitations
+
+ - The benchmark targets tool-augmented settings; purely text-only inference may underperform because tasks require code execution and file manipulation.
+ - Automated evaluation relies on LLM judges, which adds compute cost and some scoring variance depending on judge settings.
 
 ## Citation
 
 journal={arXiv preprint},
 year={2026}
 }
+ ````
+ ---