---
license: cc0-1.0
task_categories:
- text-generation
tags:
- code
- scientific-computing
configs:
- config_name: msb_type
  data_files:
  - split: train
    path: data/msb_type.jsonl
  default: true
- config_name: et_type
  data_files:
  - split: train
    path: data/et_type.jsonl
---

# AInsteinBench

**AInsteinBench** is a benchmark for evaluating the capabilities of AI agents in solving scientific computing problems. It currently supports coding questions in the **Einstein Toolkit** and **Multi-SWE-bench** formats.

## 📊 Dataset Overview

AInsteinBench provides 244 scientific computing tasks derived from multiple scientific repositories. Every task has been verified by execution and reviewed by domain experts for both software-engineering and scientific accuracy. The tasks cover numerical relativity, quantum information, molecular dynamics, cheminformatics, and quantum chemistry.

## 📋 Data Fields

The questions follow the AInsteinBench format:

```python
{
    "question_id": str,     # Unique identifier
    "question_type": str,   # Currently "einstein_toolkit" or "multi_swe_bench"
    "description": str,     # Task description
    "content": dict,        # Full problem context
    "environment": dict,    # Execution environment, e.g. Docker images
    "answer": dict,         # Reference answer in the question type's format
    "test": dict,           # Test cases and how to run them
    "scoring_config": dict  # Scoring configuration
}
```
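As a quick sanity check before running an agent, a record can be validated against this schema with plain Python. This is a minimal sketch: the field names come from the format above, while the sample record itself is fabricated for illustration.

```python
# Minimal schema check for an AInsteinBench-style record.
EXPECTED_FIELDS = {
    "question_id": str,
    "question_type": str,
    "description": str,
    "content": dict,
    "environment": dict,
    "answer": dict,
    "test": dict,
    "scoring_config": dict,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}")
    if record.get("question_type") not in ("einstein_toolkit", "multi_swe_bench"):
        problems.append("unknown question_type")
    return problems

# Fabricated example record, for illustration only
sample = {
    "question_id": "et_demo_0001",
    "question_type": "einstein_toolkit",
    "description": "Complete the missing source file of a Thorn.",
    "content": {}, "environment": {}, "answer": {},
    "test": {}, "scoring_config": {},
}
assert validate_record(sample) == []
```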

## 🔬 Data Curation Process

### Multi-SWE-Bench Processing (`msb_type`)

244 tasks drawn from the real-world development of scientific computing repositories:
- **Sources**: OpenMM, PySCF, RDKit, Qiskit, AMReX, Einstein Toolkit (the Einstein Toolkit problems are synthesized; the others come from real issues and pull requests)
- **Languages**: C++ (~65%), Python (~35%)
- **Tasks**: Bug fixes, feature implementation, code completion

Data is formatted following the Multi-SWE-Bench structure, with issue descriptions, patches, and test cases.

### Einstein Toolkit Processing (`et_type`)

The Einstein Toolkit is a collection of C/C++/Fortran codes for general relativistic simulations, organized into packages called "Thorns" that are built on the Cactus framework and described by Cactus Configuration Language (CCL) files.

**Thorn Structure**:
```
.
├── doc/                  # Documentation
│   └── documentation.tex
├── src/                  # Source code (one file removed as target)
│   └── *.c, *.cpp, *.f90
├── test/                 # Test cases
│   ├── <test_name>/      # Reference outputs
│   └── <test_name>.par   # Test parameters
├── README.md
├── configuration.ccl     # Dependencies
├── interface.ccl         # Shared variables/functions
├── param.ccl             # Parameters
└── schedule.ccl          # Execution scheduling
```
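For context, the `<test_name>.par` files are plain-text Cactus parameter files. A minimal illustrative sketch (the thorn name `MyThorn` and its `verbose` parameter are hypothetical; `ActiveThorns` and `Cactus::cctk_itlast` are standard Cactus settings):

```
# Hypothetical test parameter file
ActiveThorns = "CoordBase CartGrid3D MyThorn"   # thorns to activate
Cactus::cctk_itlast = 10                        # number of iterations to run
MyThorn::verbose = "yes"                        # hypothetical thorn parameter
```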

**Problem Definition**: Given an incomplete Thorn (missing one source file), can the model complete it and pass all tests?

**Data Curation Pipeline**:
1. Collected ~3,000 questions from open-source Einstein Toolkit Thorns
2. Screened down to 1,085 questions with runnable tests (these execution-verified questions make up `et_type`)
3. Manually verified and selected 40 questions that evaluate physical reasoning in addition to software-engineering skill (merged into `msb_type`)

## 💻 Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the default subset (msb_type)
dataset = load_dataset("ByteDance-Seed/AInsteinBench")

# Load specific subsets
msb_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
et_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")

# Access samples
for sample in dataset['train']:
    print(f"ID: {sample['question_id']}")
    print(f"Type: {sample['question_type']}")
    print(f"Task: {sample['description']}")
```
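To work with one task type at a time, records can be grouped by `question_type`. A small sketch using stand-in records (with a real split, `dataset["train"].filter(...)` from the `datasets` library would do the same job):

```python
# Stand-in records with the two question_type values used by the benchmark;
# in practice these would be rows of dataset["train"], and the IDs are made up.
samples = [
    {"question_id": "et_0001", "question_type": "einstein_toolkit"},
    {"question_id": "msb_0001", "question_type": "multi_swe_bench"},
    {"question_id": "msb_0002", "question_type": "multi_swe_bench"},
]

# Group task IDs by question type
by_type: dict[str, list[str]] = {}
for s in samples:
    by_type.setdefault(s["question_type"], []).append(s["question_id"])
```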

### Evaluation

For evaluation scripts and detailed usage, please visit the [AInsteinBench GitHub repository](https://github.com/ByteDance-Seed/AInsteinBench).

## 📜 License

The dataset is licensed under CC0, subject to any intellectual property rights in the dataset. The data is adapted from open-source projects; your use of that data must comply with their respective licenses.

**Source Repositories**:
- Einstein Toolkit:
  - homepage: https://einsteintoolkit.org
  - arrangement: https://bitbucket.org/einsteintoolkit/einsteintoolkit/
  - license: GPL-2.0
- GitHub repositories:
  - OpenMM: https://github.com/openmm/openmm (MIT License and GNU Lesser General Public License)
  - PySCF: https://github.com/pyscf/pyscf (Apache License 2.0)
  - RDKit: https://github.com/rdkit/rdkit (BSD 3-Clause License)
  - Qiskit: https://github.com/Qiskit/qiskit (Apache License 2.0)
  - AMReX: https://github.com/AMReX-Codes/amrex (BSD 3-Clause License)

## 🤝 Citation

If you use AInsteinBench in your research, please cite:

```bibtex
@dataset{ainsteinbench2025dataset,
  title={AInsteinBench: Benchmarking Coding Agents on Scientific Repositories},
  author={ByteDance Seed Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ByteDance-Seed/AInsteinBench}
}
```