# AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation
This repository contains the `metadata`, `processed_data`, and `papers` for the **AirQA** dataset, introduced in our paper [**AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation**](https://www.arxiv.org/abs/2509.16952) (accepted to ICLR 2026). Detailed instructions for using the dataset will soon be publicly available in [our official repository](https://github.com/OpenDFM/AirQA).
**AirQA** is a human-annotated, multi-modal, multi-task **A**rtificial **I**ntelligence **R**esearch **Q**uestion **A**nswering dataset comprising 1,246 examples and 13,956 papers, designed to evaluate an agent's research capabilities in realistic scenarios. It is the first dataset to cover multiple question types, and the first to bring function-based evaluation into the QA domain, enabling convenient and systematic assessment of research capabilities.
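Conceptually, instance-level, function-based evaluation means each example ships with its own checker function instead of being scored by one global string-match metric. Below is a minimal sketch of that idea with hypothetical checkers and questions; the actual AirQA evaluation functions are defined in the official repository:

```python
# Hypothetical sketch of function-based (instance-level) evaluation:
# each QA instance carries its own evaluation function.

def eval_exact_year(pred: str) -> bool:
    # This instance's checker: the answer must be exactly "2017".
    return pred.strip() == "2017"

def eval_contains_all(pred: str) -> bool:
    # Another instance's checker: all required terms must appear.
    return all(t in pred.lower() for t in ("transformer", "attention"))

# An instance pairs a question with its evaluation function.
instances = [
    {"question": "In which year was the Transformer paper published?",
     "evaluate": eval_exact_year},
    {"question": "Which architecture and mechanism does "
                 "'Attention Is All You Need' introduce?",
     "evaluate": eval_contains_all},
]

def score(predictions: list[str]) -> float:
    # Fraction of instances whose own checker accepts the prediction.
    return sum(
        inst["evaluate"](p) for inst, p in zip(instances, predictions)
    ) / len(instances)
```

This lets free-form, multi-part, or numeric answers each be judged by criteria appropriate to that question, rather than forcing everything through exact-match scoring.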
## 📂 Folder Structure
```txt
metadata/
...
processed_data/
...
```
Due to Hugging Face's limit on the number of files in a single folder, we packaged `metadata` and `processed_data` into archives.
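After downloading, the archives need to be unpacked before use. A minimal extraction sketch, assuming standard `.tar.gz` archives (the archive filenames in the comment below are assumptions; check the repository file listing for the actual names):

```python
import tarfile
from pathlib import Path

def extract_archive(archive_path: str, dest: str) -> list[str]:
    """Extract a .tar.gz archive into dest and return the member names."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        # On Python 3.12+, consider tar.extractall(dest_dir, filter="data")
        # to reject unsafe paths inside the archive.
        tar.extractall(dest_dir)
        return tar.getnames()

# Hypothetical archive names -- replace with the real files from the repo:
# for name in ["metadata.tar.gz", "processed_data.tar.gz"]:
#     extract_archive(name, ".")
```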
## 📊 Dataset Statistics
Our dataset encompasses papers from 34 volumes, spanning 7 conferences over 16 years. The detailed distribution is summarized below.
</details>
## ✍🏻 Citation
If you find this dataset useful, please cite our work:
```txt
...
```