Update README.md

---
language:
- zh
license: apache-2.0
task_categories:
- question-answering
pretty_name: AIDABench
---

## Dataset Summary

**AIDABench** is a benchmark for evaluating AI systems on **end-to-end data analytics over real-world documents**. It contains **600+** diverse analytical tasks grounded in realistic scenarios and spans heterogeneous data sources such as **spreadsheets, databases, financial reports, and operational records**. Tasks are designed to be challenging, often requiring multi-step reasoning and tool use to complete reliably.



*Figure 1: Overview of the AIDABench evaluation framework.*

## Supported Tasks and Evaluation Targets

AIDABench focuses on practical document analytics workflows in which a model or agent must read files, reason over structured data, and produce a final deliverable.

### Task Categories

The dataset is organized around three primary capability dimensions:

- **File Generation (43.3%)**
  Data wrangling and transformation tasks such as filtering, normalization, deduplication, joins, and cross-sheet linkage, with outputs as generated files (e.g., spreadsheets).
- **Question Answering (QA) (37.5%)**
  Analytical queries such as aggregation, averages, ranking, comparisons, and trend analysis, with outputs as final answers.
- **Data Visualization (19.2%)**
  Chart creation/adaptation tasks (e.g., bar/line/pie) including style requirements and presentation constraints, with outputs as figures or chart files.



*Figure 2: Example evaluation scenarios for QA, Data Visualization, and File Generation.*
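
As a flavor of the File Generation category, a typical wrangling step chains deduplication, filtering, and cross-table linkage. The records below are invented for illustration; real tasks operate on the benchmark's spreadsheet files.

```python
# Illustrative data wrangling in the spirit of AIDABench's File Generation
# tasks: deduplicate, filter, then link two tables on a shared key.
# All data here is made up for the example.
orders = [
    {"order_id": 1, "customer_id": "C1", "amount": 120.0},
    {"order_id": 1, "customer_id": "C1", "amount": 120.0},  # duplicate row
    {"order_id": 2, "customer_id": "C2", "amount": 35.0},
    {"order_id": 3, "customer_id": "C1", "amount": 80.0},
]
customers = [
    {"customer_id": "C1", "region": "North"},
    {"customer_id": "C2", "region": "South"},
]

# 1) Deduplicate on order_id, keeping the first occurrence.
seen, deduped = set(), []
for row in orders:
    if row["order_id"] not in seen:
        seen.add(row["order_id"])
        deduped.append(row)

# 2) Filter: keep orders with amount >= 50.
filtered = [row for row in deduped if row["amount"] >= 50.0]

# 3) Cross-table linkage: attach each customer's region to their orders.
region_by_customer = {c["customer_id"]: c["region"] for c in customers}
linked = [{**row, "region": region_by_customer[row["customer_id"]]} for row in filtered]

print([r["order_id"] for r in linked])  # → [1, 3]
```

In a real task, steps like these would read from and write back to spreadsheet files rather than in-memory lists.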

### Task Complexity

Tasks are stratified by the number of expert-level reasoning steps required:

- **Easy (29.5%)**: ≤ 3 steps
- **Medium (49.4%)**: 4–6 steps
- **Hard (21.1%)**: ≥ 7 steps
- **Cross-file Reasoning**: 27.4% of tasks require reasoning over multiple input files (up to 14 files).

### Data Formats

Most inputs are tabular files (XLSX and CSV dominate), complemented by **DOCX** and **PDF** documents to support mixed-type document processing.
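
A minimal sketch of loading one such tabular input with the standard library (the table contents are invented; real tasks ship `.xlsx`/`.csv` files on disk, with XLSX typically requiring a third-party reader):

```python
import csv
import io

# A tiny CSV table standing in for one of the benchmark's tabular inputs.
raw = """product,units,price
widget,3,9.99
gadget,1,24.50
"""

# csv.DictReader yields one dict per row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(raw)))
revenue = sum(int(r["units"]) * float(r["price"]) for r in rows)
print(f"{revenue:.2f}")  # → 54.47
```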

## Evaluation Framework

All models are evaluated under a unified **tool-augmented protocol**: the model receives task instructions and associated files, and can execute **arbitrary Python code** within a **sandboxed environment** to complete the task.
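
The execution side of such a protocol can be sketched as a minimal harness that runs model-generated code in a separate process. This is an illustrative stand-in, not AIDABench's actual sandbox; `run_in_sandbox` and its signature are invented for this example, and a production sandbox would also restrict the filesystem, network, and resource usage.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(code: str, workdir: Path, timeout_s: int = 30) -> tuple[int, str, str]:
    """Execute model-generated Python in a child process rooted at `workdir`.

    Only process isolation and a wall-clock timeout are enforced here;
    a real sandbox would add filesystem/network/resource restrictions.
    """
    script = workdir / "solution.py"
    script.write_text(code, encoding="utf-8")
    proc = subprocess.run(
        [sys.executable, str(script)],
        cwd=workdir,            # task input files would live here
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return proc.returncode, proc.stdout, proc.stderr

# Example: a toy "task" whose deliverable is a printed answer.
with tempfile.TemporaryDirectory() as d:
    rc, out, err = run_in_sandbox("print(sum([1, 2, 3]))", Path(d))
    print(rc, out.strip())  # → 0 6
```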

To align with task categories, AIDABench uses three dedicated **LLM-based evaluators**:

1. **QA Evaluator**
   A binary judge that determines whether the produced answer matches the reference (under the benchmark’s scoring rules).
2. **Visualization Evaluator**
   Scores both **correctness** and **readability** of generated visualizations.
3. **Spreadsheet File Evaluator**
   Verifies generated spreadsheet outputs with a **coarse-to-fine** strategy, combining structural checks with sampled content validation and task-specific verification.



*Figure 3: The design of the three types of evaluators in AIDABench.*
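
For intuition, the QA evaluator's binary verdict can be approximated deterministically for simple answers. This is a toy stand-in only; AIDABench uses an LLM-based judge, and `answers_match` is invented for this sketch.

```python
import math

def answers_match(predicted: str, reference: str, rel_tol: float = 1e-4) -> bool:
    """Toy binary judge: exact match after normalization, with a tolerant
    comparison for numeric answers. Illustrates binary scoring only; the
    benchmark's actual judge is LLM-based."""
    p, r = predicted.strip().lower(), reference.strip().lower()
    if p == r:
        return True
    try:  # numeric answers: strip separators/percent sign, compare within tolerance
        pf = float(p.replace(",", "").rstrip("%"))
        rf = float(r.replace(",", "").rstrip("%"))
        return math.isclose(pf, rf, rel_tol=rel_tol)
    except ValueError:
        return False

print(answers_match("1,234.50", "1234.5"))  # → True
print(answers_match("Q3", "q3"))            # → True
print(answers_match("42", "41"))            # → False
```

An LLM judge handles the long tail this misses (paraphrases, units, multi-part answers), which is why the benchmark relies on one.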

## Baseline Performance

Results indicate that complex, tool-augmented document analytics remains challenging: the best-performing baseline model (**Claude-Sonnet-4.5**) achieves **59.43 pass@1** on AIDABench (see the paper for full settings, model list, and breakdowns).
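
Under the common single-attempt reading, pass@1 is simply the percentage of tasks solved on the first try; see the paper for the exact protocol (sampling settings, judge configuration). A minimal sketch with hypothetical per-task outcomes:

```python
def pass_at_1(task_passed: list[bool]) -> float:
    """pass@1 as the percentage of tasks solved in a single attempt.
    The per-task booleans would come from the benchmark's evaluators."""
    return 100.0 * sum(task_passed) / len(task_passed)

# Hypothetical results for 8 tasks: 5 solved.
print(pass_at_1([True, True, False, True, False, True, False, True]))  # → 62.5
```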

## Intended Uses

AIDABench is intended for:

- Evaluating **agents** or **tool-using LLM systems** on realistic document analytics tasks
- Benchmarking end-to-end capabilities across **QA**, **file generation**, and **visualization**
- Diagnosing failure modes in multi-step, multi-file reasoning over business-like data

## Limitations

- The benchmark is designed for tool-augmented settings; purely text-only inference may underperform because tasks require code execution and file manipulation.
- Automated evaluation relies on LLM judges, which adds compute cost and introduces a small amount of scoring variance depending on judge settings.

## Citation

If you use this dataset, please cite the original paper:

  journal={arXiv preprint},
  year={2026}
}

---