---
license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: AirQA
---
# AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation
This repository contains the test set, together with the `metadata`, `processed_data`, and `papers`, for the **AirQA** dataset introduced in our paper [**AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation**](https://www.arxiv.org/abs/2509.16952), accepted to ICLR 2026. Detailed instructions for using the dataset will soon be publicly available in [our official repository](https://github.com/OpenDFM/AirQA).
**AirQA** is a human-annotated, multi-modal, multi-task **A**rtificial **I**ntelligence **R**esearch **Q**uestion **A**nswering dataset comprising 1,246 examples and 13,956 papers, designed to evaluate an agent's research capabilities in realistic scenarios. It is the first dataset to cover multiple question types, and the first to bring function-based evaluation into the QA domain, enabling convenient and systematic assessment of research capabilities.
## 🔍 Quick Start
Load the AirQA dataset in one line using Hugging Face `datasets`:
```py
from datasets import load_dataset
dataset = load_dataset("OpenDFM/AirQA")
```
However, we recommend referring to [our official repository](https://github.com/OpenDFM/AirQA) for complete usage instructions, including the data format and evaluation scripts.
## 📂 Folder Structure
```txt
AirQA
├── data/
│   ├── test.parquet       # test set (simple, for minimal usage)
│   ├── test_data.jsonl    # test set (complete, including function-based evaluation)
│   └── uuid2title.json    # mapping from paper UUID to title
├── metadata/
│   ├── 000ab6db-4b65-5dc0-8393-fbc2c05843c8.json
│   └── ...                # more metadata dicts
├── papers/
│   ├── acl2016/
│   │   └── 16c3a7ad-d638-5ebf-a72a-bd58f06c16d7.pdf
│   ├── acl2019/
│   │   └── c7563d97-695f-5c77-8021-334bf2ff9ddb.pdf
│   ├── acl2023/
│   │   ├── 001ab93b-7665-5d56-a28e-eac95d2a9d7e.pdf
│   │   └── ...            # more PDFs published at ACL 2023
│   └── ...                # other sub-folders of paper collections
├── processed_data/
│   ├── 000ab6db-4b65-5dc0-8393-fbc2c05843c8.json  # cached data for PDF parsing
│   └── ...                # more cached data for PDFs
└── README.md
```
Due to Hugging Face's limit on the number of files in a single folder, we packaged `metadata` and `processed_data` into archives.
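Once the files are downloaded locally, the contents of `data/` can be read with the standard library alone. Below is a minimal sketch, assuming the relative paths from the folder structure above; the exact field names of each JSONL example are defined in the official repository, so treat them as assumptions here:

```python
import json

def load_uuid2title(path="data/uuid2title.json"):
    """Load the mapping from paper UUID to paper title."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def load_test_examples(path="data/test_data.jsonl"):
    """Load the complete test set; each JSONL line is one QA example."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

For the Parquet variant (`data/test.parquet`), `load_dataset("OpenDFM/AirQA")` shown in the Quick Start handles loading for you.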
## 📊 Dataset Statistics
Our dataset encompasses papers from 34 volumes, spanning 7 conferences over 16 years. The detailed distribution is summarized below.
<details><summary>👇🏻 Click to view the paper distribution of the dataset</summary>

| Folder | Conference | Year | Collected |
| :----: | :--------: | :--: | :-------: |
| iclr2024 | ICLR | 2024 | 3301 |
| iclr2023 | ICLR | 2023 | 31 |
| iclr2020 | ICLR | 2020 | 1 |
| neurips2024 | NeurIPS | 2024 | 6857 |
| neurips2023 | NeurIPS | 2023 | 73 |
| nips2006 | NeurIPS | 2006 | 1 |
| acl2024 | ACL | 2024 | 161 |
| acl2023 | ACL | 2023 | 3083 |
| acl2019 | ACL | 2019 | 1 |
| acl2016 | ACL | 2016 | 1 |
| emnlp2024 | EMNLP | 2024 | 55 |
| emnlp2023 | EMNLP | 2023 | 52 |
| emnlp2021 | EMNLP | 2021 | 2 |
| emnlp2013 | EMNLP | 2013 | 1 |
| icassp2024 | ICASSP | 2024 | 18 |
| icassp2023 | ICASSP | 2023 | 12 |
| eacl2024 | EACL | 2024 | 1 |
| ijcnlp2023 | IJCNLP | 2023 | 1 |
| arxiv2025 | arXiv | 2025 | 12 |
| arxiv2024 | arXiv | 2024 | 53 |
| arxiv2023 | arXiv | 2023 | 61 |
| arxiv2022 | arXiv | 2022 | 61 |
| arxiv2021 | arXiv | 2021 | 43 |
| arxiv2020 | arXiv | 2020 | 25 |
| arxiv2019 | arXiv | 2019 | 20 |
| arxiv2018 | arXiv | 2018 | 11 |
| arxiv2017 | arXiv | 2017 | 6 |
| arxiv2016 | arXiv | 2016 | 4 |
| arxiv2015 | arXiv | 2015 | 1 |
| arxiv2014 | arXiv | 2014 | 1 |
| arxiv2013 | arXiv | 2013 | 1 |
| arxiv2012 | arXiv | 2012 | 1 |
| arxiv2011 | arXiv | 2011 | 1 |
| uncategorized | - | - | 3 |
| Total | - | - | 13956 |
</details>
## ✍🏻 Citation
If you find this dataset useful, please cite our work:
```bibtex
@misc{huang2025airqacomprehensiveqadataset,
  title={AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation},
  author={Tiancheng Huang and Ruisheng Cao and Yuxin Zhang and Zhangyi Kang and Zijian Wang and Chenrun Wang and Yijie Luo and Hang Zheng and Lirong Qian and Lu Chen and Kai Yu},
  year={2025},
  eprint={2509.16952},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.16952},
}
```