---
license: cc-by-nc-4.0
task_categories:
- text-generation
- graph-ml
- text-classification
- question-answering
- time-series-forecasting
tags:
- benchmarks
- agents
language:
- en
pretty_name: AIRS-Bench
---
# AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents
The AI Research Science Benchmark (AIRS-Bench) quantifies the autonomous research abilities of LLM agents in machine learning. AIRS-Bench comprises 20 tasks drawn from state-of-the-art machine learning papers spanning diverse domains: NLP, code, math, biochemical modelling, and time-series forecasting.

Each task is specified by a ⟨problem, dataset, metric⟩ triplet and a SOTA value. The agent receives the full task specification and is expected to develop a solution that generates predictions on a test set, which are then evaluated and compared against the state-of-the-art (SOTA) score from a published paper.
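The comparison described above can be sketched in a few lines. This is an illustrative snippet, not code from the benchmark: the `TaskSpec` fields and sample values below are hypothetical, and whether a metric is maximized or minimized depends on the task (e.g. Accuracy vs. MeanAbsoluteError).

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    # Illustrative fields only; not the dataset's actual schema.
    problem: str
    dataset: str
    metric: str
    sota: float
    higher_is_better: bool = True  # False for error-style metrics

def beats_sota(spec: TaskSpec, agent_score: float) -> bool:
    """True if the agent's score matches or exceeds the published SOTA."""
    if spec.higher_is_better:
        return agent_score >= spec.sota
    return agent_score <= spec.sota
```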
For full details, see the [paper](https://arxiv.org/abs/2602.06855) and the GitHub repository.
## Dataset Description
This dataset contains the task specification files for the 20 AIRS-Bench tasks, formatted for use with the aira-dojo agentic harness.
### Categories
| Category | # Tasks |
|---|---|
| Text Classification | 2 |
| Question Answering | 4 |
| Text Extraction and Matching | 3 |
| Molecules and Proteins ML | 5 |
| Time Series | 3 |
| Code | 2 |
| Math | 1 |
### Data Fields
| Column | Type | Description |
|---|---|---|
| `task` | string | Task identifier (directory name, e.g. `SentimentAnalysisYelpReviewFullAccuracy`) |
| `category` | string | High-level domain category (e.g. Text Classification, Code) |
| `research_problem` | string | The specific research problem the task addresses |
| `dataset` | string | Hugging Face dataset identifier used for the task |
| `metric` | string | Evaluation metric (e.g. Accuracy, MeanAbsoluteError, Rouge1) |
| `metadata.yaml` | string | Full content of the task metadata file (dataset config, SOTA info, requirements) |
| `project_description.md` | string | The task prompt provided to the agent |
| `prepare.py` | string | Dataset preparation script (creates train/test splits, hides test labels) |
| `evaluate_prepare.py` | string | Evaluation data preparation script (creates test labels for scoring) |
| `evaluate.py` | string | Evaluation script used to score the agent's submission |
| `custom_labels.py` | string | Optional custom label handler for non-standard label formats (empty if unused) |
| `utils.py` | string | Optional shared utilities across task scripts (empty if unused) |
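Since each row stores a task's files as string columns, a task directory can be reconstructed on disk directly from a row. A minimal sketch under that assumption; the sample row below is hypothetical (real rows would come from loading this dataset with the `datasets` library), and the output layout mirrors the column names in the table above:

```python
from pathlib import Path

# Hypothetical row following the schema above; real rows come from
# loading this dataset with the `datasets` library.
row = {
    "task": "SentimentAnalysisYelpReviewFullAccuracy",
    "category": "Text Classification",
    "metadata.yaml": "metric: Accuracy\n",
    "project_description.md": "# Task prompt\n",
    "prepare.py": "print('prepare')\n",
    "evaluate_prepare.py": "",
    "evaluate.py": "print('evaluate')\n",
    "custom_labels.py": "",  # optional columns are empty strings if unused
    "utils.py": "",
}

FILE_COLUMNS = [
    "metadata.yaml", "project_description.md", "prepare.py",
    "evaluate_prepare.py", "evaluate.py", "custom_labels.py", "utils.py",
]

def materialize(row: dict, root: str = "tasks") -> Path:
    """Write each non-empty file column to <root>/<task>/<column name>."""
    task_dir = Path(root) / row["task"]
    task_dir.mkdir(parents=True, exist_ok=True)
    for name in FILE_COLUMNS:
        content = row.get(name, "")
        if content:  # skip optional files left as empty strings
            (task_dir / name).write_text(content)
    return task_dir
```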
## Citation
```bibtex
@article{lupidi2026airsbenchsuitetasksfrontier,
  title={AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents},
  author={Alisia Lupidi and Bhavul Gauri and Thomas Simon Foster and Bassel Al Omari and Despoina Magka and Alberto Pepe and Alexis Audran-Reiss and Muna Aghamelu and Nicolas Baldwin and Lucia Cipolina-Kun and Jean-Christophe Gagnon-Audet and Chee Hau Leow and Sandra Lefdal and Hossam Mossalam and Abhinav Moudgil and Saba Nazir and Emanuel Tewolde and Isabel Urrego and Jordi Armengol Estape and Amar Budhiraja and Gaurav Chaurasia and Abhishek Charnalia and Derek Dunfield and Karen Hambardzumyan and Daniel Izcovich and Martin Josifoski and Ishita Mediratta and Kelvin Niu and Parth Pathak and Michael Shvartsman and Edan Toledo and Anton Protopopov and Roberta Raileanu and Alexander Miller and Tatiana Shavrina and Jakob Foerster and Yoram Bachrach},
  year={2026},
  eprint={2602.06855},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.06855},
}
```
## License
This dataset is released under the CC BY-NC 4.0 license.