---
license: cc0-1.0
task_categories:
  - text-generation
tags:
  - code
  - scientific-computing
configs:
  - config_name: msb_type
    data_files:
      - split: train
        path: data/msb_type.jsonl
    default: true
  - config_name: et_type
    data_files:
      - split: train
        path: data/et_type.jsonl
---

AInsteinBench

AInsteinBench is a benchmark for evaluating the ability of AI agents to solve scientific computing problems. It currently supports coding questions in the Einstein Toolkit and Multi-SWE-bench formats.

πŸ“Š Dataset Overview

AInsteinBench provides 244 scientific computing tasks derived from multiple scientific repositories. Each task has been verified by execution and reviewed by domain experts to confirm both its software-engineering and scientific accuracy. The tasks cover numerical relativity, quantum information, molecular dynamics, cheminformatics, and quantum chemistry.

πŸ“‹ Data Fields

Each question follows the AInsteinBench schema:

{
    "question_id": str,        # Unique identifier
    "question_type": str,      # "einstein_toolkit" or "multi_swe_bench"
    "description": str,        # Task description
    "content": dict,           # Full problem context
    "environment": dict,       # Execution environment, e.g. Docker images
    "answer": dict,            # Reference answer in the question type's format
    "test": dict,              # Test cases and how to run them
    "scoring_config": dict     # Scoring configuration
}
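A minimal sketch of checking a JSONL record against this schema (the field names and types come from the table above; the validation helper itself is hypothetical, not part of the dataset tooling):

```python
import json

# Required fields and their expected types, per the AInsteinBench schema.
REQUIRED_FIELDS = {
    "question_id": str,
    "question_type": str,
    "description": str,
    "content": dict,
    "environment": dict,
    "answer": dict,
    "test": dict,
    "scoring_config": dict,
}

def validate_sample(line: str) -> dict:
    """Parse one JSONL line and verify it carries every schema field."""
    sample = json.loads(line)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in sample:
            raise KeyError(f"missing field: {field}")
        if not isinstance(sample[field], expected_type):
            raise TypeError(f"{field} should be {expected_type.__name__}")
    return sample
```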

πŸ”¬ Data Curation Process

Multi-SWE-Bench Processing (msb_type)

244 tasks drawn from the real-world development history of scientific computing repositories:

  • Sources: OpenMM, PySCF, RDKit, Qiskit, AMReX, Einstein Toolkit (Einstein Toolkit problems are synthesized; the others come from real issues and pull requests)
  • Languages: C++ (~65%), Python (~35%)
  • Tasks: Bug fixes, feature implementation, code completion

Data is formatted following the Multi-SWE-Bench structure with issue descriptions, patches, and test cases.

Einstein Toolkit Processing (et_type)

The Einstein Toolkit is a collection of C/C++/Fortran codes for general-relativistic simulations, organized into packages called "Thorns" that are configured through Cactus Configuration Language (CCL) files.

Thorn Structure:

.
β”œβ”€β”€ doc/                       # Documentation
β”‚   └── documentation.tex
β”œβ”€β”€ src/                       # Source code (one file removed as target)
β”‚   └── *.c, *.cpp, *.f90
β”œβ”€β”€ test/                      # Test cases
β”‚   β”œβ”€β”€ <test_name>/           # Reference outputs
β”‚   └── <test_name>.par        # Test parameters
β”œβ”€β”€ README.md
β”œβ”€β”€ configuration.ccl          # Dependencies
β”œβ”€β”€ interface.ccl              # Shared variables/functions
β”œβ”€β”€ param.ccl                  # Parameters
└── schedule.ccl               # Execution scheduling

Problem Definition: Given an incomplete Thorn (missing one source file), can the model complete it and pass all tests?
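The task construction can be sketched as follows. This is a simplified illustration of the curation idea, not the actual pipeline; the helper name and selection policy are hypothetical, and the real pipeline also records the removed file as the reference answer:

```python
import os
import random
import shutil

def make_incomplete_thorn(thorn_dir: str, out_dir: str, seed: int = 0) -> str:
    """Copy a Thorn and delete one source file, which becomes the target.

    Returns the name of the removed file, i.e. the file the model
    must reconstruct so that the Thorn's tests pass again.
    """
    shutil.copytree(thorn_dir, out_dir)
    src_dir = os.path.join(out_dir, "src")
    sources = [f for f in sorted(os.listdir(src_dir))
               if f.endswith((".c", ".cpp", ".f90", ".F90"))]
    target = random.Random(seed).choice(sources)
    os.remove(os.path.join(src_dir, target))
    return target
```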

Data Curation Pipeline:

  1. Collected ~3,000 candidate questions from open-source Einstein Toolkit Thorns
  2. Screened these down to 1,085 questions with runnable tests (these execution-verified questions form et_type)
  3. Manually verified and selected 40 questions that test physical reasoning in addition to software-engineering skills (these are merged into msb_type)

πŸ’» Usage

Loading the Dataset

from datasets import load_dataset

# Load the default subset (msb_type)
dataset = load_dataset("ByteDance-Seed/AInsteinBench")

# Load specific subsets
msb_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
et_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")

# Access samples
for sample in dataset['train']:
    print(f"ID: {sample['question_id']}")
    print(f"Type: {sample['question_type']}")
    print(f"Task: {sample['description']}")

Evaluation

For evaluation scripts and detailed usage, please visit the AInsteinBench GitHub repository.

πŸ“œ License

The dataset is licensed under CC0, subject to any intellectual property rights in the dataset. The data is adapted from open source projects; your use of that data must comply with their respective licenses.

Source Repositories:

🀝 Citation

If you use AInsteinBench in your research, please cite:

@dataset{ainsteinbench2025dataset,
  title={AInsteinBench: Benchmarking Coding Agents on Scientific Repositories},
  author={ByteDance Seed Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ByteDance-Seed/AInsteinBench}
}