helloadhavan committed (verified) d5d4095 · 1 parent: 4f099ec

Update README.md

Files changed (1): README.md (+162 −51)
---
license: mit
configs:
- config_name: default
  data_files:
  - split: split1
    path: data/split1-*
  - split: split2
    path: data/split2-*
  - split: split3
    path: data/split3-*
  - split: split4
    path: data/split4-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: issue_number
    dtype: int64
  - name: issue_title
    dtype: string
  - name: issue_body
    dtype: string
  - name: commit_sha
    dtype: string
  - name: files
    list:
    - name: filename
      dtype: string
    - name: patch
      dtype: string
    - name: additions
      dtype: int64
    - name: deletions
      dtype: int64
  splits:
  - name: split1
    num_bytes: 50886134
    num_examples: 2000
  - name: split2
    num_bytes: 53889478
    num_examples: 2000
  - name: split3
    num_bytes: 58002025
    num_examples: 2000
  - name: split4
    num_bytes: 52200575
    num_examples: 1984
  download_size: 210310215
  dataset_size: 214978212
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: GitHub Issues Dataset
size_categories:
- 1K<n<10K
---

# GitHub Issues + Fixes Dataset

A **curated, high-signal dataset** of GitHub issues collected from **25 popular open-source repositories**.
Each example pairs a real GitHub issue with the **exact code changes (diffs)** that resolved it.

The dataset is designed for:
- **Automated bug fixing**
- **LLM-based code agents**
- **Issue → patch generation**
- **Program repair research**

---

## How the data was extracted

The data was collected using the **GitHub REST API** and processed into a structured format.

To maintain quality and usefulness:
- Only **closed issues** were considered
- Each issue must have a **clearly associated fix**
- Fixes are stored as **unified diffs** extracted from the resolving commit
- Low-signal issues (questions, duplicates, discussions) were filtered out
- Issues without meaningful code changes were excluded

Each row represents **one issue–fix pair**.

---
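The shaping and filtering steps above can be sketched as a small transform: given an issue and the `files` array returned by the GitHub REST API's commit endpoint (`GET /repos/{owner}/{repo}/commits/{sha}`), keep only files that carry a textual patch and emit one row in the dataset's schema. This is an illustrative sketch, not the actual pipeline; the function name is made up.

```python
def build_row(repo, issue, commit_sha, commit_files):
    """Shape one issue–fix pair into the dataset schema.

    `commit_files` is the `files` array from the GitHub REST API's
    commit endpoint. Entries without a `patch` field (e.g. binary or
    oversized files) are dropped so every row keeps a usable unified diff.
    """
    files = [
        {
            "filename": f["filename"],
            "patch": f["patch"],
            "additions": f["additions"],
            "deletions": f["deletions"],
        }
        for f in commit_files
        if f.get("patch")  # skip files with no textual diff
    ]
    if not files:
        return None  # no meaningful code change -> issue is excluded
    return {
        "repo": repo,
        "issue_number": issue["number"],
        "issue_title": issue["title"],
        "issue_body": issue.get("body") or "",
        "commit_sha": commit_sha,
        "files": files,
    }
```

Issues whose resolving commit touches only binary assets yield `None` here, which matches the "no meaningful code changes" exclusion rule.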

## Dataset structure

Each dataset entry has the following schema:

```json
{
  "repo": "owner/repository",
  "issue_number": 12345,
  "issue_title": "Short description of the problem",
  "issue_body": "Full issue discussion and problem description",
  "commit_sha": "abcdef123456...",
  "files": [
    {
      "filename": "path/to/file.ext",
      "patch": "unified diff showing the fix",
      "additions": 10,
      "deletions": 2
    }
  ]
}
```

| Field | Description |
| ------------------- | -------------------------------------------- |
| `repo` | GitHub repository where the issue originated |
| `issue_number` | Original GitHub issue number |
| `issue_title` | Title of the issue |
| `issue_body` | Full issue description and context |
| `commit_sha` | Commit that fixed the issue |
| `files` | List of modified files |
| `files[].filename` | Path of the modified file |
| `files[].patch` | Unified diff representing the fix |
| `files[].additions` | Number of added lines |
| `files[].deletions` | Number of removed lines |

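The field types in the table above can be checked mechanically when consuming rows. A minimal validator sketch (the constant and function names are illustrative, not part of the dataset):

```python
# Expected types for top-level fields and per-file entries,
# mirroring the schema table above.
ROW_FIELDS = {
    "repo": str,
    "issue_number": int,
    "issue_title": str,
    "issue_body": str,
    "commit_sha": str,
    "files": list,
}
FILE_FIELDS = {
    "filename": str,
    "patch": str,
    "additions": int,
    "deletions": int,
}

def validate_row(row):
    """Return True if `row` matches the dataset schema exactly."""
    if set(row) != set(ROW_FIELDS):
        return False
    if not all(isinstance(row[k], t) for k, t in ROW_FIELDS.items()):
        return False
    return all(
        set(f) == set(FILE_FIELDS)
        and all(isinstance(f[k], t) for k, t in FILE_FIELDS.items())
        for f in row["files"]
    )
```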

## Supported languages

The dataset contains fixes across multiple programming languages, including (but not limited to):

* C / C++
* Python
* JavaScript / TypeScript
* Rust
* Go
* Java
* Assembly

Language distribution varies by repository.

## Intended use cases

This dataset is well suited for:

* Training models to generate code patches from issue descriptions
* Evaluating LLM reasoning over real-world bug reports
* Building autonomous debugging or refactoring agents
* Research on program repair, code synthesis, and software maintenance

It is **not** intended for:

* Issue classification
* Sentiment analysis
* Chatbot fine-tuning without code generation

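For the issue → patch use case, a row can be folded into a prompt/target training pair. A minimal sketch; the template wording is illustrative and not prescribed by the dataset:

```python
def to_prompt_target(row):
    """Fold one issue–fix row into a (prompt, target) pair.

    The prompt carries the issue context; the target concatenates the
    unified diffs from the fixing commit, one section per file.
    """
    prompt = (
        f"Repository: {row['repo']}\n"
        f"Issue #{row['issue_number']}: {row['issue_title']}\n\n"
        f"{row['issue_body']}\n\n"
        "Produce a unified diff that fixes this issue."
    )
    target = "\n".join(
        f"--- {f['filename']}\n{f['patch']}" for f in row["files"]
    )
    return prompt, target
```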
## Limitations

* The dataset reflects real-world noise from GitHub issues
* Issue descriptions vary widely in clarity and detail
* Some fixes involve refactoring or design changes rather than minimal patches
* No guarantee that all fixes are optimal or follow best practice

> **<span style="color:red;font-size:1.25rem">Warning</span>**: This dataset currently covers issues from 9 of the 25 repos (~8k rows). It is expected to grow to roughly 50k rows and 2 GB in size.