xinshuo committed Β· Commit 6146868 Β· verified Β· 1 Parent(s): 64670d8

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +88 -161
README.md CHANGED
@@ -1,4 +1,10 @@
  ---
  configs:
  - config_name: msb_type
    data_files:
@@ -15,213 +21,134 @@ configs:
    default: true
  ---

- # Scientific Coding Benchmark Dataset

- ## Dataset Description

- This dataset contains scientific coding questions and benchmarks from multiple sources, organized into three configurations:

- - **msb_type** (244 samples): Multi-source Scientific Benchmark questions focusing on software engineering tasks
- - **et_type** (1,085 samples): Einstein Toolkit related coding questions with extensive documentation
- - **scicobench_all** (1,329 samples, **default**): Combined and unified format of all scientific coding questions

- ### Dataset Summary

- The Scientific Coding Benchmark dataset is designed for evaluating AI models on real-world scientific software development tasks. The dataset provides both raw data formats and a unified format for comprehensive evaluation.

- ### Supported Tasks

- - **Scientific Code Generation**: Generate code solutions for scientific computing problems
- - **Software Engineering**: Fix bugs and implement features in scientific software
- - **Documentation Understanding**: Work with complex scientific software documentation
- - **Test-Driven Development**: Write code that passes specific test cases

- ## Dataset Structure
-
- ### Data Configurations
-
- The dataset has three configurations with different schemas:
-
- | Configuration | Samples | Size | Description |
- |---------------|---------|------|-------------|
- | msb_type | 244 | ~15 MB | Software engineering tasks (raw format) |
- | et_type | 1,085 | ~677 MB | Einstein Toolkit tasks (raw format) |
- | **scicobench_all** | 1,329 | ~636 MB | **Unified format (default)** |
-
- **Default Configuration**: `scicobench_all`
-
- ### Data Loading

  ```python
- from datasets import load_dataset
-
- # Load default configuration (scicobench_all - unified format)
- dataset = load_dataset("xinshuo/test_jsonl")
- print(f"Loaded {len(dataset['train'])} samples")
-
- # Load specific configurations
- msb_data = load_dataset("xinshuo/test_jsonl", "msb_type")
- et_data = load_dataset("xinshuo/test_jsonl", "et_type")
- all_data = load_dataset("xinshuo/test_jsonl", "scicobench_all")
-
- # Access a sample
- print(dataset['train'][0])
  ```

- ### Data Fields
-
- #### Unified Format (scicobench_all)

- Each sample contains:

- - `question_id` (string): Unique identifier for the question
- - `question_type` (string): Type/category of the question (MSB or ET)
- - `description` (string): High-level description of the task
- - `content` (string): Detailed question content and requirements
- - `environment` (string): Environment setup and configuration details
- - `answer` (string): Expected answer or solution
- - `test` (string): Test cases and validation logic
- - `scoring_config` (string): Configuration for scoring the solution

- #### MSB Type Format (msb_type)

- Raw format from the Scientific Coding SWE dataset with 33 fields, including:
- - `org`, `repo`, `number`, `state`, `title`, `body`
- - `fix_patch`, `test_patch`, `doc_patch`
- - `language`, `instance_id`
- - `fixed_tests`, `run_result`, `test_patch_result`, `fix_patch_result`
- - And more... (see samples for full schema)

- #### ET Type Format (et_type)

- Raw format from the Einstein Toolkit with 15 fields, including:
- - `thorn_name`, `url`, `configuration`
- - `interface`, `param`, `schedule`
- - `src_filename`, `src_code`
- - `context`, `doc_tex_content`, `readme_content`
- - `combined_doc_context`
- - And more... (see samples for full schema)
-
- ## Source Data

- ### MSB Type Questions (244 samples)

- Derived from the Scientific Coding SWE dataset, focusing on:
  - **Organizations**: einsteintoolkit, openmm, pyscf, rdkit, Qiskit, AMReX-Codes
  - **Languages**: C++ (65%), Python (35%)
  - **Tasks**: Bug fixes, feature implementation, code completion

- ### ET Type Questions (1,085 samples)
-
- Einstein Toolkit code generation tasks including:
- - Thorn implementations
- - Interface definitions
- - Parameter configurations
- - Schedule definitions
- - Extensive documentation context
-
- ### SciCoBench All (1,329 samples)
-
- Unified format combining both question types with standardized structure for comprehensive evaluation.

- ## Usage Examples

- ### Load and Iterate Through Questions

  ```python
  from datasets import load_dataset

- # Load the unified format (recommended)
- dataset = load_dataset("xinshuo/test_jsonl")

  for sample in dataset['train']:
-     print(f"Question ID: {sample['question_id']}")
      print(f"Type: {sample['question_type']}")
-     print(f"Description: {sample['description'][:100]}...")
-     print("-" * 80)
- ```
-
- ### Filter by Question Type
-
- ```python
- # Filter MSB-type questions from the unified dataset
- msb_questions = dataset['train'].filter(lambda x: x['question_type'] == 'MSB')
- print(f"MSB questions: {len(msb_questions)}")
-
- # Filter ET-type questions
- et_questions = dataset['train'].filter(lambda x: x['question_type'] == 'ET')
- print(f"ET questions: {len(et_questions)}")
- ```
-
- ### Work with Raw Formats
-
- ```python
- # Load raw MSB format for detailed metadata
- msb_raw = load_dataset("xinshuo/test_jsonl", "msb_type")
- sample = msb_raw['train'][0]
- print(f"Organization: {sample['org']}")
- print(f"Repository: {sample['repo']}")
- print(f"Fix patch:\n{sample['fix_patch'][:200]}...")
-
- # Load raw ET format for full documentation
- et_raw = load_dataset("xinshuo/test_jsonl", "et_type")
- sample = et_raw['train'][0]
- print(f"Thorn: {sample['thorn_name']}")
- print(f"Documentation: {sample['combined_doc_context'][:200]}...")
  ```

- ## Considerations for Using the Data
-
- ### Use Cases
-
- - Training code generation models for scientific computing
- - Benchmarking AI coding assistants on scientific software tasks
- - Research in automated scientific software development
- - Education in scientific programming practices

- ### Recommended Configuration

- For most use cases, we recommend using the **default `scicobench_all` configuration**, which provides:
- - Unified schema across all questions
- - Standardized fields for easier processing
- - Complete coverage of both MSB and ET question types

- Use the raw configurations (`msb_type` or `et_type`) only if you need:
- - Original metadata and detailed execution results
- - Domain-specific fields
- - Full documentation context

- ### Limitations

- - Focus on specific scientific domains (physics, chemistry, quantum computing)
- - Primarily C++ and Python languages
- - May require domain-specific knowledge to evaluate correctly
- - Different configurations have different schemas

- ## Additional Information

- ### Dataset Curators

- This dataset combines and unifies questions from:
- - xinshuo/Scientific_Coding_SWE_dataset
- - xinshuo/ET_1k

- ### Licensing Information
-
- Please refer to the original datasets for licensing information.
-
- ### Citation Information

  ```bibtex
- @dataset{scicobench,
-   title={Scientific Coding Benchmark Dataset},
-   author={xinshuo},
-   year={2024},
-   url={https://huggingface.co/datasets/xinshuo/test_jsonl}
  }
  ```

- ### Contributions
-
- For questions or issues, please refer to the dataset repository on HuggingFace.
-
 
  ---
+ license: cc0-1.0
+ task_categories:
+ - text-generation
+ tags:
+ - code
+ - scientific-computing
  configs:
  - config_name: msb_type
    data_files:
    default: true
  ---

+ # AInsteinBench

+ **AInsteinBench** is a benchmark for evaluating the capabilities of AI agents on scientific computing problems. It currently includes coding questions in the **Einstein Toolkit** and **Multi-SWE-bench** formats.

+ ## πŸ“Š Dataset Overview

+ AInsteinBench contains 1,329 code generation problems from two sources:

+ | Subset | Samples | Description |
+ |--------|---------|-------------|
+ | **et_type** | 1,085 | Einstein Toolkit code completion tasks |
+ | **msb_type** | 244 | Multi-SWE-bench scientific computing tasks |
+ | **scicobench_all** | 1,329 | Combined dataset in unified format (default) |

+ ### Data Sources

+ 1. **Einstein Toolkit (et_type)**: 1,085 code completion tasks from the Einstein Toolkit codebase
+ 2. **Multi-SWE-Bench (msb_type)**: 244 tasks from Multi-SWE-Bench processing of multiple scientific computing repositories

+ ## πŸ“‹ Data Fields

+ The unified format (`scicobench_all`) contains:

  ```python
+ {
+     "question_id": str,      # Unique identifier
+     "question_type": str,    # "einstein_toolkit" or "multi_swe_bench"
+     "description": str,      # Task description
+     "content": str,          # Full problem context
+     "environment": str,      # Setup requirements
+     "answer": str,           # Expected solution
+     "test": str,             # Test cases
+     "scoring_config": str    # Evaluation configuration
+ }
  ```

+ For detailed field descriptions and raw formats, see the [AInsteinBench repository](https://github.com/ByteDance-Seed/AInsteinBench).
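
+ As a quick illustration of the schema above, here is a minimal sketch that loads one sample and checks it exposes the documented fields (the `train` split name follows the loading examples below):

+ ```python
+ from datasets import load_dataset
+
+ # Sketch only: verify a unified-format sample against the schema documented above.
+ dataset = load_dataset("ByteDance-Seed/AInsteinBench", "scicobench_all")
+ sample = dataset["train"][0]
+
+ expected = {"question_id", "question_type", "description", "content",
+             "environment", "answer", "test", "scoring_config"}
+ assert expected <= set(sample.keys()), f"missing fields: {expected - set(sample)}"
+ print(sample["question_id"], sample["question_type"])
+ ```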
 
 
+ ## πŸ”¬ Data Curation Process

+ ### Einstein Toolkit Processing (et_type)

+ The Einstein Toolkit is a collection of C/C++/Fortran codes for general relativistic simulations, organized into packages called "Thorns" that are built on the Cactus framework and described by Cactus Configuration Language (CCL) files.

+ **Problem Definition**: Given an incomplete Thorn (missing one source file), can the model complete it and pass all tests?

+ **Data Curation Pipeline**:
+ 1. Collected ~3,000 source files from open-source Einstein Toolkit Thorns
+ 2. Screened for 1,000 problems with runnable tests
+ 3. Selected 1,085 problems where models are evaluated on physics correctness, not just compilation

+ **Thorn Structure**:
+ ```
+ .
+ β”œβ”€β”€ doc/                  # Documentation
+ β”‚   └── documentation.tex
+ β”œβ”€β”€ src/                  # Source code (one file removed as target)
+ β”‚   └── *.c, *.cpp, *.f90
+ β”œβ”€β”€ test/                 # Test cases
+ β”‚   β”œβ”€β”€ <test_name>/      # Reference outputs
+ β”‚   └── <test_name>.par   # Test parameters
+ β”œβ”€β”€ README.md
+ β”œβ”€β”€ configuration.ccl     # Dependencies
+ β”œβ”€β”€ interface.ccl         # Shared variables/functions
+ β”œβ”€β”€ param.ccl             # Parameters
+ └── schedule.ccl          # Execution scheduling
+ ```
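
+ The raw `et_type` records carry this layout as plain-text fields. Below is a minimal sketch that maps one record back onto the tree above, assuming the raw `et_type` field names (`thorn_name`, `src_filename`, `interface`, `param`, `schedule`, `configuration`) listed earlier on this card; see the GitHub repository for the authoritative raw schema:

+ ```python
+ from datasets import load_dataset
+
+ # Sketch only: field names are assumed from the raw et_type schema listed
+ # earlier on this card; check an actual sample before relying on them.
+ et_raw = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")
+ record = et_raw["train"][0]
+
+ print(f"Thorn: {record['thorn_name']}")
+ print(f"Held-out source file (under src/): {record['src_filename']}")
+ for ccl in ("interface", "param", "schedule", "configuration"):
+     print(f"{ccl}.ccl: {len(record[ccl])} characters")
+ ```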

+ ### Multi-SWE-Bench Format (msb_type)

+ 244 software engineering tasks from scientific computing repositories:
  - **Organizations**: einsteintoolkit, openmm, pyscf, rdkit, Qiskit, AMReX-Codes
  - **Languages**: C++ (65%), Python (35%)
  - **Tasks**: Bug fixes, feature implementation, code completion

+ Data is formatted following the Multi-SWE-Bench structure with issue descriptions, patches, and test cases.

+ ## πŸ’» Usage

+ ### Loading the Dataset

  ```python
  from datasets import load_dataset

+ # Load the unified format (default)
+ dataset = load_dataset("ByteDance-Seed/AInsteinBench", "scicobench_all")

+ # Load specific subsets
+ et_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")
+ msb_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
+
+ # Access samples
  for sample in dataset['train']:
+     print(f"ID: {sample['question_id']}")
      print(f"Type: {sample['question_type']}")
+     print(f"Task: {sample['description']}")
  ```
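
+ The unified split can also be filtered by `question_type` (the string values follow the schema comment above), and the raw `msb_type` configuration keeps issue- and patch-level metadata. A minimal sketch, assuming the raw field names (`org`, `repo`, `fix_patch`) listed earlier on this card:

+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("ByteDance-Seed/AInsteinBench", "scicobench_all")
+
+ # Filter the unified split by question type (values per the schema above).
+ et_subset = dataset["train"].filter(lambda x: x["question_type"] == "einstein_toolkit")
+ msb_subset = dataset["train"].filter(lambda x: x["question_type"] == "multi_swe_bench")
+ print(f"Einstein Toolkit problems: {len(et_subset)}")
+ print(f"Multi-SWE-bench problems: {len(msb_subset)}")
+
+ # Peek at one raw msb_type record (assumed fields: org, repo, fix_patch;
+ # see the GitHub repository for the authoritative raw schema).
+ msb_raw = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
+ task = msb_raw["train"][0]
+ print(f"{task['org']}/{task['repo']}")
+ print(task["fix_patch"][:200])
+ ```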

+ ### Evaluation

+ For evaluation scripts and detailed usage, please visit the [AInsteinBench GitHub repository](https://github.com/ByteDance-Seed/AInsteinBench).

+ ## πŸ“œ License

+ The dataset is licensed under CC0, subject to any intellectual property rights in the dataset. The data is adapted from open source projects; your use of that data must comply with their respective licenses.

+ **Source Repositories**:
+ - Einstein Toolkit: https://einsteintoolkit.org
+ - Scientific computing repositories: openmm, pyscf, rdkit, Qiskit, AMReX-Codes, einsteintoolkit

+ All source repositories are open source; refer to each project for its specific license terms.
 
 
 
+ ## πŸ”— Links

+ - **GitHub Repository**: https://github.com/ByteDance-Seed/AInsteinBench
+ - **Paper**: Coming soon

+ ## 🀝 Citation

+ If you use AInsteinBench in your research, please cite:

  ```bibtex
+ @dataset{ainsteinbench2025,
+   title={AInsteinBench: A Benchmark for AI Agents in Scientific Computing},
+   author={ByteDance Seed Team},
+   year={2025},
+   publisher={Hugging Face},
+   url={https://huggingface.co/datasets/ByteDance-Seed/AInsteinBench}
  }
  ```
154