---
language:
- zh
license: apache-2.0
task_categories:
- question-answering
- tabular-classification
- text-generation
tags:
- data-analytics
- agents
- document-understanding
- benchmark
pretty_name: AIDABench
---

# Dataset Card for AIDABench

## Dataset Summary
As AI-driven document understanding and processing tools become increasingly prevalent, the need for rigorous evaluation standards has grown. **AIDABench** is a comprehensive benchmark for evaluating AI systems on complex data analytics tasks in an end-to-end manner.

It encompasses over 600 diverse document analytics tasks grounded in realistic scenarios, involving heterogeneous data types such as spreadsheets, databases, financial reports, and operational records. The tasks are highly challenging: even human experts require 1-2 hours per question when assisted by AI tools.

![Overview of AIDABench Framework](./images/figure1_overview.png)

*Figure 1: Overview of the AIDABench evaluation framework.*
## Dataset Structure

### Task Categories
The dataset comprises three primary capability dimensions covering the end-to-end document processing pipeline:

* **File Generation (43.3%):** Assesses data-wrangling operations such as filtering, format normalization, deduplication, and cross-sheet linkage.
* **Question Answering (QA) (37.5%):** Evaluates analytical operations including summation, mean computation, ranking, and trend analysis.
* **Data Visualization (19.2%):** Measures the ability to generate and adapt multiple visualization forms (bar, line, and pie charts) and style customizations.
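The File Generation operations listed above (filtering, format normalization, deduplication) can be illustrated with a short pandas sketch. The table and column names here are hypothetical examples, not drawn from the benchmark itself.

```python
import io
import pandas as pd

# Hypothetical raw export with inconsistent casing and duplicate rows
raw = io.StringIO(
    "name,dept,salary\n"
    "Alice,Sales,5000\n"
    "alice,Sales,5000\n"
    "Bob,Ops,4200\n"
    "Bob,Ops,4200\n"
)
df = pd.read_csv(raw)

# Format normalization: canonicalize the name column
df["name"] = df["name"].str.title()

# Deduplication: identical rows collapse after normalization
df = df.drop_duplicates()

# Filtering: keep only rows above a salary threshold
df = df[df["salary"] > 4500]

print(df.to_dict("records"))  # → [{'name': 'Alice', 'dept': 'Sales', 'salary': 5000}]
```

Real benchmark tasks chain many such steps, including cross-sheet linkage across multiple input files.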
![Evaluation Scenarios](./images/figure2_scenarios.png)

*Figure 2: Example evaluation scenarios for QA, Data Visualization, and File Generation.*

### Task Complexity
Tasks are stratified by the number of expert-level reasoning steps required:

* **Easy (29.5%):** ≤ 3 steps.
* **Medium (49.4%):** 4-6 steps.
* **Hard (21.1%):** > 7 steps.

In addition, 27.4% of tasks require **cross-file reasoning**, i.e. joint reasoning over multiple input files (up to 14 files).

### Data Formats

Tabular files dominate the distribution (XLSX and CSV together account for 91.8%), complemented by DOCX and PDF formats to support mixed-type processing.
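The complexity tiers above can be sketched as a simple step-count classifier. Note one assumption: the card lists Hard as "> 7 steps", which leaves exactly-7-step tasks unassigned, so this sketch treats 7+ steps as Hard.

```python
def difficulty_tier(steps: int) -> str:
    """Map a task's expert reasoning-step count to an AIDABench-style tier.

    Boundaries follow the card (Easy <= 3, Medium 4-6); treating 7+ as
    Hard is an assumption made here for completeness.
    """
    if steps <= 3:
        return "Easy"
    if steps <= 6:
        return "Medium"
    return "Hard"

print(difficulty_tier(2), difficulty_tier(5), difficulty_tier(9))  # → Easy Medium Hard
```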
## Evaluation Framework

All models are evaluated under a unified, tool-augmented protocol: the model receives the task instructions and input files, and may execute arbitrary Python code in a sandboxed environment.
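One common way to realize such sandboxed execution is to run model-generated code in a separate interpreter process with a timeout. The card does not specify the benchmark's actual sandbox, so the sketch below is illustrative only.

```python
import subprocess
import sys

def run_in_sandbox(code: str, timeout_s: float = 10.0) -> str:
    """Execute untrusted Python in a separate interpreter process.

    Illustrative only: a production sandbox would additionally restrict
    the filesystem, network, and memory, e.g. via containers or seccomp.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,  # kill runaway model-generated code
    )
    return result.stdout

print(run_in_sandbox("print(sum(range(10)))"))  # → 45
```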
To align with the task categories, AIDABench uses three dedicated LLM-based evaluators:

1. **QA Evaluator:** A binary judge powered by QwQ-32B.
2. **Visualization Evaluator:** Powered by Gemini 3 Pro, scoring both correctness and readability.
3. **Spreadsheet File Evaluator:** Powered by Claude Sonnet 4.5, using a coarse-to-fine verification strategy.

![Evaluator Design](./images/figure3_evaluators.png)

*Figure 3: The design of the three types of evaluators in AIDABench.*

## Baseline Performance

Results show that complex data analytics tasks remain a significant challenge: the best-performing model, Claude Sonnet 4.5, achieved a pass@1 score of only 59.43.
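With a single attempt per task, pass@1 reduces to the percentage of tasks solved. A minimal sketch (the helper name and example results are illustrative, not from the benchmark):

```python
def pass_at_1(solved: list[bool]) -> float:
    """Percentage of tasks solved on the first (and only) attempt."""
    return 100.0 * sum(solved) / len(solved)

# Hypothetical per-task pass/fail results over 8 tasks
results = [True, False, True, True, False, True, True, False]
print(pass_at_1(results))  # → 62.5
```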
## Citation

If you use this dataset, please cite the original paper:
```bibtex
@article{yang2026aidabench,
  title={AIDABench: AI Data Analytics Benchmark},
  author={Yang, Yibo and Lei, Fei and Sun, Yixuan and others},
  journal={arXiv preprint},
  year={2026}
}
```

GitHub Repository: https://github.com/MichaelYang-lyx/AIDABench