---
license: cc0-1.0
task_categories:
- text-generation
tags:
- code
- scientific-computing
configs:
- config_name: msb_type
  data_files:
  - split: train
    path: data/msb_type.jsonl
  default: true
- config_name: et_type
  data_files:
  - split: train
    path: data/et_type.jsonl
---

# AInsteinBench

**AInsteinBench** is a benchmark for evaluating the ability of AI agents to solve scientific computing problems. It currently supports coding questions in the **Einstein Toolkit** and **Multi-SWE-bench** formats.

## Dataset Overview

AInsteinBench provides 244 scientific computing tasks drawn from multiple scientific repositories. Every task has been verified by execution and reviewed by domain experts for both software engineering and scientific accuracy. The tasks cover numerical relativity, quantum information, molecular dynamics, cheminformatics, and quantum chemistry.

## Data Fields

Each question follows the AInsteinBench format:

```python
{
    "question_id": str,      # Unique identifier
    "question_type": str,    # Currently "einstein_toolkit" or "multi_swe_bench"
    "description": str,      # Task description
    "content": dict,         # Full problem context
    "environment": dict,     # Execution environment, e.g. Docker images
    "answer": dict,          # Reference answer in the format of the question type
    "test": dict,            # Test cases and how to run them
    "scoring_config": dict   # Scoring configuration
}
```
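As a quick sanity check, the schema above can be verified on a loaded record. A minimal sketch — the field names come from the schema above, while the sample dict and its values are purely illustrative:

```python
# Field names documented in the AInsteinBench schema.
EXPECTED_FIELDS = {
    "question_id", "question_type", "description", "content",
    "environment", "answer", "test", "scoring_config",
}

def validate_record(record: dict) -> bool:
    """Return True if the record contains all documented AInsteinBench fields."""
    return EXPECTED_FIELDS.issubset(record.keys())

# Illustrative sample; real records carry full content in the dict fields.
sample = {
    "question_id": "et_0001",
    "question_type": "einstein_toolkit",
    "description": "Complete the missing source file of a Thorn.",
    "content": {}, "environment": {}, "answer": {}, "test": {}, "scoring_config": {},
}
print(validate_record(sample))  # True
```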

## Data Curation Process

### Multi-SWE-Bench Processing (`msb_type`)

244 tasks drawn from the real-world development history of scientific computing repositories:

- **Sources**: OpenMM, PySCF, RDKit, Qiskit, AMReX, Einstein Toolkit (the Einstein Toolkit problems are synthesized; the others come from real issues and pull requests)
- **Languages**: C++ (~65%), Python (~35%)
- **Tasks**: Bug fixes, feature implementation, code completion

Data is formatted following the Multi-SWE-Bench structure, with issue descriptions, patches, and test cases.
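Since each task's `environment` field describes a Docker image, running a task's tests boils down to launching a command inside that image. A hedged sketch — the key name `"image"`, the image tag, and the test command are illustrative assumptions, not the dataset's actual values:

```python
import shlex

def build_docker_cmd(environment: dict, test_cmd: str) -> list[str]:
    """Assemble a `docker run` invocation for a task's test command.

    `environment["image"]` is an assumed key name for illustration only;
    inspect the actual records for the real structure.
    """
    return [
        "docker", "run", "--rm",
        environment["image"],
        "bash", "-lc", test_cmd,
    ]

# Illustrative image tag and test command.
cmd = build_docker_cmd({"image": "ainsteinbench/openmm:latest"}, "pytest -q")
print(shlex.join(cmd))
```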

### Einstein Toolkit Processing (`et_type`)

The Einstein Toolkit is a collection of C/C++/Fortran codes for general relativistic simulations, organized into packages called "Thorns" that are described by Cactus Configuration Language (CCL) files.

**Thorn Structure**:

```
.
├── doc/                  # Documentation
│   └── documentation.tex
├── src/                  # Source code (one file removed as target)
│   └── *.c, *.cpp, *.f90
├── test/                 # Test cases
│   ├── <test_name>/      # Reference outputs
│   └── <test_name>.par   # Test parameters
├── README.md
├── configuration.ccl     # Dependencies
├── interface.ccl         # Shared variables/functions
├── param.ccl             # Parameters
└── schedule.ccl          # Execution scheduling
```
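Whether a Thorn directory carries the expected CCL files can be checked mechanically. A minimal sketch, using the four CCL file names from the tree above (the temporary-directory demo is illustrative):

```python
from pathlib import Path
import tempfile

# The four CCL files listed in the Thorn structure above.
REQUIRED_CCL = {"configuration.ccl", "interface.ccl", "param.ccl", "schedule.ccl"}

def missing_ccl_files(thorn_dir: str) -> set[str]:
    """Return the set of required CCL files absent from a Thorn directory."""
    present = {p.name for p in Path(thorn_dir).glob("*.ccl")}
    return REQUIRED_CCL - present

# Demo on a temporary directory containing only two of the four CCL files.
with tempfile.TemporaryDirectory() as d:
    for name in ("interface.ccl", "param.ccl"):
        (Path(d) / name).touch()
    print(sorted(missing_ccl_files(d)))  # ['configuration.ccl', 'schedule.ccl']
```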

**Problem Definition**: Given an incomplete Thorn (one source file removed), can the model reconstruct the missing file so that all tests pass?

**Data Curation Pipeline**:

1. Collected ~3,000 questions from open-source Einstein Toolkit Thorns
2. Screened down to 1,085 questions with runnable tests (these execution-verified questions are provided in `et_type`)
3. Manually verified and selected 40 questions that evaluate physical reasoning in addition to software engineering skills (merged into `msb_type`)

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the default subset (msb_type)
dataset = load_dataset("ByteDance-Seed/AInsteinBench")

# Load specific subsets
msb_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
et_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")

# Access samples
for sample in dataset['train']:
    print(f"ID: {sample['question_id']}")
    print(f"Type: {sample['question_type']}")
    print(f"Task: {sample['description']}")
```
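Once loaded, samples can be grouped by `question_type` in the usual way. A minimal sketch using an in-memory list standing in for `dataset['train']` (the two records are illustrative):

```python
from collections import Counter

# Stand-in for dataset['train']: two illustrative records.
records = [
    {"question_id": "msb_0001", "question_type": "multi_swe_bench"},
    {"question_id": "et_0001", "question_type": "einstein_toolkit"},
]

counts = Counter(r["question_type"] for r in records)
print(dict(counts))  # {'multi_swe_bench': 1, 'einstein_toolkit': 1}
```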

### Evaluation

For evaluation scripts and detailed usage, please visit the [AInsteinBench GitHub repository](https://github.com/ByteDance-Seed/AInsteinBench).

## License

The dataset is licensed under CC0, subject to any intellectual property rights in the dataset. The data is adapted from open-source projects; your use of that data must comply with their respective licenses.

**Source Repositories**:

- Einstein Toolkit:
  - Homepage: https://einsteintoolkit.org
  - Repository: https://bitbucket.org/einsteintoolkit/einsteintoolkit/
  - License: GPL-2.0
- GitHub repositories:
  - OpenMM: https://github.com/openmm/openmm (MIT License and GNU Lesser General Public License)
  - PySCF: https://github.com/pyscf/pyscf (Apache License 2.0)
  - RDKit: https://github.com/rdkit/rdkit (BSD 3-Clause License)
  - Qiskit: https://github.com/Qiskit/qiskit (Apache License 2.0)
  - AMReX: https://github.com/AMReX-Codes/amrex (BSD 3-Clause License)

## Citation

If you use AInsteinBench in your research, please cite:

```bibtex
@dataset{ainsteinbench2025dataset,
  title={AInsteinBench: Benchmarking Coding Agents on Scientific Repositories},
  author={ByteDance Seed Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ByteDance-Seed/AInsteinBench}
}
```