---
license: cc0-1.0
task_categories:
- text-generation
tags:
- code
- scientific-computing
configs:
- config_name: msb_type
data_files:
- split: train
path: data/msb_type.jsonl
default: true
- config_name: et_type
data_files:
- split: train
path: data/et_type.jsonl
---
# AInsteinBench
**AInsteinBench** is a benchmark for evaluating AI agents on scientific computing problems. It currently supports coding questions in the **Einstein Toolkit** and **Multi-SWE-bench** formats.
## 📊 Dataset Overview
AInsteinBench provides 244 scientific computing tasks derived from multiple scientific repositories. Every task has been verified by execution and reviewed by domain experts for both software engineering and scientific accuracy. The tasks span numerical relativity, quantum information, molecular dynamics, cheminformatics, and quantum chemistry.
## 📋 Data Fields
The questions are formatted following the AInsteinBench format:
```python
{
"question_id": str, # Unique identifier
"question_type": str, # currently supporting "einstein_toolkit" or "multi_swe_bench"
"description": str, # Task description
"content": dict, # Full problem context
"environment": dict, # environment such as docker images
"answer": dict, # reference answer following the format of the question type
"test": dict, # Test cases and how to run them
"scoring_config": dict # Scoring configuration
}
```
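A lightweight check of these fields can catch malformed records before evaluation. The sketch below is a hypothetical helper, not part of the official dataset tooling; it only mirrors the schema shown above:

```python
# Minimal schema check for an AInsteinBench record (illustrative sketch,
# not an official validator). Field names mirror the schema above.
EXPECTED_TYPES = {
    "question_id": str,
    "question_type": str,
    "description": str,
    "content": dict,
    "environment": dict,
    "answer": dict,
    "test": dict,
    "scoring_config": dict,
}

VALID_QUESTION_TYPES = {"einstein_toolkit", "multi_swe_bench"}


def validate_record(record: dict) -> list[str]:
    """Return a list of problems found; an empty list means well-formed."""
    problems = []
    for field, expected in EXPECTED_TYPES.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    if record.get("question_type") not in VALID_QUESTION_TYPES:
        problems.append("unknown question_type")
    return problems
```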
## 🔬 Data Curation Process
### Multi-SWE-Bench Processing (`msb_type`)
244 tasks drawn from the real-world development of scientific computing repositories:
- **Sources**: OpenMM, PySCF, RDKit, Qiskit, AMReX, Einstein Toolkit (the Einstein Toolkit problems are synthesized; the rest come from real issues and pull requests)
- **Languages**: C++ (~65%), Python (~35%)
- **Tasks**: Bug fixes, feature implementation, code completion
Data is formatted following the Multi-SWE-Bench structure with issue descriptions, patches, and test cases.
### Einstein Toolkit Processing (`et_type`)
The Einstein Toolkit is a collection of C/C++/Fortran codes for general relativistic simulations, organized into packages called "Thorns" that are assembled by the Cactus framework and described by Cactus Configuration Language (CCL) files.
**Thorn Structure**:
```
.
├── doc/                 # Documentation
│   └── documentation.tex
├── src/                 # Source code (one file removed as target)
│   └── *.c, *.cpp, *.f90
├── test/                # Test cases
│   ├── <test_name>/     # Reference outputs
│   └── <test_name>.par  # Test parameters
├── README.md
├── configuration.ccl    # Dependencies
├── interface.ccl        # Shared variables/functions
├── param.ccl            # Parameters
└── schedule.ccl         # Execution scheduling
```
**Problem Definition**: Given an incomplete Thorn (missing one source file), can the model complete it and pass all tests?
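The construction can be sketched as follows: starting from a complete Thorn, one source file is withheld and its contents become the reference answer. This is a simplified illustration with assumed path conventions, not the actual curation code:

```python
import pathlib
import random

SOURCE_SUFFIXES = {".c", ".cpp", ".f90", ".F90"}


def make_problem(thorn_dir: str, seed: int = 0) -> dict:
    """Withhold one file from src/ to form an incomplete-Thorn problem.

    The removed file's contents become the reference answer.
    Simplified sketch, not the actual curation pipeline.
    """
    src = pathlib.Path(thorn_dir) / "src"
    candidates = sorted(p for p in src.iterdir() if p.suffix in SOURCE_SUFFIXES)
    target = random.Random(seed).choice(candidates)
    reference = target.read_text()
    target.unlink()  # the model must reconstruct this file
    return {
        "question_type": "einstein_toolkit",
        "content": {"missing_file": str(target.relative_to(thorn_dir))},
        "answer": {"reference_source": reference},
    }
```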
**Data Curation Pipeline**:
1. Collected ~3,000 candidate problems from open-source Einstein Toolkit Thorns
2. Screened them down to 1,085 problems with runnable tests (these execution-verified problems make up `et_type`)
3. Manually verified and selected 40 problems that require physical reasoning in addition to software engineering skills (these are merged into `msb_type`)
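The funnel above amounts to two successive filters over execution and review results. In this sketch, `tests_ran` and `expert_approved` are illustrative flag names, not fields published with the dataset:

```python
def curate(candidates: list[dict]) -> tuple[list[dict], list[dict]]:
    """Mirror the curation funnel: execution screening, then expert selection.

    `tests_ran` and `expert_approved` are assumed flags produced by earlier
    pipeline stages (illustrative names, not dataset fields).
    """
    runnable = [c for c in candidates if c.get("tests_ran")]
    expert_picks = [c for c in runnable if c.get("expert_approved")]
    return runnable, expert_picks
```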
## 💻 Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the default subset (msb_type)
dataset = load_dataset("ByteDance-Seed/AInsteinBench")

# Load specific subsets
msb_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
et_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")

# Access samples
for sample in dataset['train']:
    print(f"ID: {sample['question_id']}")
    print(f"Type: {sample['question_type']}")
    print(f"Task: {sample['description']}")
```
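Once loaded, records can be routed to the matching harness by their `question_type`. A minimal plain-Python sketch (shown here on synthetic records, so it runs without downloading the dataset):

```python
from collections import defaultdict


def split_by_type(samples) -> dict:
    """Bucket records by their `question_type` field."""
    buckets = defaultdict(list)
    for sample in samples:
        buckets[sample["question_type"]].append(sample)
    return dict(buckets)
```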
### Evaluation
For evaluation scripts and detailed usage, please visit the [AInsteinBench GitHub repository](https://github.com/ByteDance-Seed/AInsteinBench).
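Per-task outcomes from an evaluation run can be aggregated into a resolve rate. This is an illustrative aggregation only; see the repository above for the official scoring:

```python
def resolve_rate(results: dict) -> float:
    """Fraction of tasks resolved, given {question_id: all_tests_passed}.

    Illustrative metric, not the official AInsteinBench scorer.
    """
    if not results:
        return 0.0
    return sum(results.values()) / len(results)
```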
## 📄 License
The dataset is licensed under CC0, subject to any third-party intellectual property rights in the underlying data. The data is adapted from open-source projects; your use of that data must comply with their respective licenses.
**Source Repositories**:
- Einstein Toolkit:
- homepage: https://einsteintoolkit.org
- arrangement: https://bitbucket.org/einsteintoolkit/einsteintoolkit/
- license: GPL-2.0
- GitHub repositories:
  - OpenMM: https://github.com/openmm/openmm (MIT License and GNU Lesser General Public License)
- PySCF: https://github.com/pyscf/pyscf (Apache License 2.0)
  - RDKit: https://github.com/rdkit/rdkit (BSD 3-Clause License)
- Qiskit: https://github.com/Qiskit/qiskit (Apache License 2.0)
- AMReX: https://github.com/AMReX-Codes/amrex (BSD 3-Clause License)
## 🤝 Citation
If you use AInsteinBench in your research, please cite:
```bibtex
@dataset{ainsteinbench2025dataset,
title={AInsteinBench: Benchmarking Coding Agents on Scientific Repositories},
author={ByteDance Seed Team},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/datasets/ByteDance-Seed/AInsteinBench}
}
```