# Scientific Coding SWE Dataset (JSONL Format)

## Dataset Description

This dataset contains 244 samples extracted from the [xinshuo/Scientific_Coding_SWE_dataset](https://huggingface.co/datasets/xinshuo/Scientific_Coding_SWE_dataset), specifically the `json_dump` field from each sample. The data is provided in JSONL (JSON Lines) format for easy streaming and processing.

### Dataset Summary

The Scientific Coding SWE (Software Engineering) dataset focuses on real-world code issues and fixes from prominent scientific computing projects. Each sample represents a complete software engineering task, including:

- Issue descriptions
- Code patches (fixes)
- Test cases
- Metadata about the changes
- Execution results

### Supported Tasks

- **Software Engineering Task Resolution**: Understanding and resolving code issues in scientific computing contexts
- **Code Generation**: Generating fixes for missing or incorrect code
- **Test-Driven Development**: Working with test cases and validation
- **Documentation Generation**: Understanding code documentation needs

### Languages

The dataset covers two primary programming languages:

- **C++**: 159 samples (65.2%)
- **Python**: 85 samples (34.8%)

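The language split above is easy to recompute from the JSONL file itself. A minimal sketch — the three-record `demo` list is hypothetical stand-in data, not part of the dataset:

```python
import json
from collections import Counter

def language_counts(jsonl_lines):
    """Tally the `language` field across JSONL records."""
    counts = Counter()
    for line in jsonl_lines:
        line = line.strip()
        if line:
            counts[json.loads(line)["language"]] += 1
    return counts

# Hypothetical stand-in for lines read from dataset_json_dumps.jsonl
demo = [
    '{"language": "cpp"}',
    '{"language": "python"}',
    '{"language": "cpp"}',
]
print(language_counts(demo))  # Counter({'cpp': 2, 'python': 1})
```
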
## Dataset Structure

### Data Format

The dataset is provided as a single JSONL file (`dataset_json_dumps.jsonl`), where each line is a complete JSON object representing one sample.

### Data Fields

Each sample contains the following fields:

- `org` (string): Organization name (e.g., "einsteintoolkit", "Qiskit", "openmm")
- `repo` (string): Repository name
- `number` (int): Issue/PR number
- `state` (string): State of the issue (e.g., "open", "closed")
- `title` (string): Title of the issue/PR
- `body` (string): Description of the issue
- `base` (dict): Base branch information with keys:
  - `label`: Branch label
  - `ref`: Branch reference
  - `sha`: Commit SHA
- `resolved_issues` (list): List of related issues with:
  - `number`: Issue number
  - `title`: Issue title
  - `body`: Issue description
- `fix_patch` (string): Git diff/patch for the fix
- `test_patch` (string): Git diff/patch for tests
- `doc_patch` (string): Git diff/patch for documentation
- `tag` (string): Additional tags
- `number_interval` (string): Number interval information
- `lang` (string): Programming language (cpp, python)
- `language` (string): Same as `lang`
- `instance_id` (string): Unique instance identifier
- `fixed_tests` (dict): Test results for fixed code
- `p2p_tests` (dict): Pass-to-pass tests
- `f2p_tests` (dict): Fail-to-pass tests
- `s2p_tests` (dict): Skip-to-pass tests
- `n2p_tests` (dict): None-to-pass tests
- `run_result` (dict): Execution results with counts of passed/failed/skipped tests
- `test_patch_result` (dict): Test patch execution results
- `fix_patch_result` (dict): Fix patch execution results
- `hints` (string): Hints for solving the issue
- `count_new_files` (int): Number of new files added
- `count_new_entities` (int): Number of new code entities
- `workdir` (string): Working directory path
- `fix_meta_info` (dict): Metadata about the fix
- `test_meta_info` (dict): Metadata about tests
- `doc_meta_info` (dict): Metadata about documentation
- `meta_info` (dict): General metadata
- `pre_commands` (list): Commands to run before testing

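After parsing a line, the schema above can be sanity-checked directly. The heavily truncated record below is hypothetical and only illustrates the field layout:

```python
import json

# Hypothetical, heavily truncated record following the field layout above
record = json.loads("""{
  "org": "openmm",
  "repo": "openmm",
  "number": 1,
  "lang": "cpp",
  "language": "cpp",
  "instance_id": "openmm__openmm_1",
  "run_result": {"passed_count": 3, "failed_count": 0, "skipped_count": 1}
}""")

# `lang` and `language` are documented to hold the same value
assert record["lang"] == record["language"]

# run_result holds passed/failed/skipped counts; their sum is the test total
total = sum(record["run_result"].values())
print(total)  # 4
```
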
### Data Instances

An example instance (truncated for brevity):

```json
{
  "org": "einsteintoolkit",
  "repo": "Cactus",
  "number": 1,
  "state": "open",
  "title": "Add missing LinearExtrapBnd.c in CactusExamples/SampleBoundary",
  "body": "This PR adds the missing source file LinearExtrapBnd.c to complete the CactusExamples/SampleBoundary thorn implementation.",
  "language": "cpp",
  "instance_id": "einsteintoolkit__cactus_1",
  "fix_patch": "diff --git a/arrangements/CactusExamples/SampleBoundary/src/LinearExtrapBnd.c ...",
  "run_result": {
    "passed_count": 1,
    "failed_count": 0,
    "skipped_count": 0
  }
}
```

### Data Splits

This dataset contains a single split with all 244 samples.

## Source Projects

The dataset includes samples from the following scientific computing projects:

| Organization | Count | Description |
|--------------|-------|-------------|
| openmm | 54 | High-performance molecular dynamics simulation |
| pyscf | 47 | Python-based Simulations of Chemistry Framework |
| rdkit | 46 | Cheminformatics and machine learning software |
| einsteintoolkit | 40 | Computational infrastructure for numerical relativity and astrophysics |
| Qiskit | 38 | Open-source quantum computing framework |
| AMReX-Codes | 19 | Block-structured adaptive mesh refinement framework |

## Dataset Creation

### Source Data

This dataset is derived from the [xinshuo/Scientific_Coding_SWE_dataset](https://huggingface.co/datasets/xinshuo/Scientific_Coding_SWE_dataset), which curates software engineering tasks from real GitHub issues and pull requests in scientific computing projects.

### Data Collection Process

The data was extracted by:

1. Loading the original dataset using the Hugging Face `datasets` library
2. Extracting only the `json_dump` field from each sample
3. Converting to JSONL format for efficient streaming and processing

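The three steps above can be sketched as follows. This is an illustrative reconstruction, not the exact extraction script; the `demo` rows are hypothetical stand-ins for the original dataset rows, and the split name in the comment is an assumption:

```python
import json
import os
import tempfile

def dump_jsonl(samples, path):
    """Write each sample's `json_dump` string as one JSONL line."""
    with open(path, "w") as f:
        for sample in samples:
            f.write(sample["json_dump"].rstrip("\n") + "\n")

# Hypothetical stand-in rows; in practice they would come from something like:
#   from datasets import load_dataset
#   samples = load_dataset("xinshuo/Scientific_Coding_SWE_dataset", split="train")
demo = [{"json_dump": json.dumps({"org": "openmm", "number": i})} for i in range(3)]

path = os.path.join(tempfile.gettempdir(), "dataset_json_dumps.jsonl")
dump_jsonl(demo, path)

with open(path) as f:
    lines = f.readlines()
print(len(lines))  # 3
```
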
## Considerations for Using the Data

### Social Impact of Dataset

This dataset can be used to:

- Train and evaluate AI models for scientific software engineering tasks
- Improve automated code generation and bug fixing in scientific computing
- Understand common patterns in scientific software development

### Discussion of Biases

The dataset may have the following biases:

- Overrepresentation of certain programming languages (C++ and Python)
- Focus on specific scientific domains (physics, chemistry, quantum computing)
- Representation limited to open-source projects with public GitHub repositories

## Additional Information

### Dataset Curators

This JSONL version was created from the original xinshuo/Scientific_Coding_SWE_dataset.

### Licensing Information

Please refer to the original dataset for licensing information.

### Citation Information

If you use this dataset, please cite the original source:

```bibtex
@dataset{scientific_coding_swe_dataset,
  title={Scientific Coding SWE Dataset},
  author={xinshuo},
  year={2024},
  url={https://huggingface.co/datasets/xinshuo/Scientific_Coding_SWE_dataset}
}
```

### Contributions

For questions or issues with this JSONL format, please refer to the dataset repository.

## Usage Example

```python
import json

# Read and process the dataset line by line
with open('dataset_json_dumps.jsonl', 'r') as f:
    for line in f:
        sample = json.loads(line)
        print(f"Issue: {sample['title']}")
        print(f"Language: {sample['language']}")
        print(f"Organization: {sample['org']}")
        print("---")
```

Or load it with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("xinshuo/test_jsonl", data_files="dataset_json_dumps.jsonl")
```