---
license: mit
configs:
- config_name: default
  data_files:
  - split: split1
    path: data/split1-*
  - split: split2
    path: data/split2-*
  - split: split3
    path: data/split3-*
  - split: split4
    path: data/split4-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: pr_number
    dtype: int64
  - name: title
    dtype: string
  - name: body
    dtype: string
  - name: buggy_commit
    dtype: string
  - name: fix_commit
    dtype: string
  - name: buggy_distance
    dtype: int64
  - name: confidence
    dtype: string
  - name: files
    list:
    - name: filename
      dtype: string
    - name: patch
      dtype: string
    - name: additions
      dtype: int64
    - name: deletions
      dtype: int64
  splits:
  - name: split1
    num_bytes: 34024342
    num_examples: 6000
  - name: split2
    num_bytes: 31467405
    num_examples: 6000
  - name: split3
    num_bytes: 34011489
    num_examples: 6000
  - name: split4
    num_bytes: 34819671
    num_examples: 6000
  download_size: 50269862
  dataset_size: 134322907
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: Github issues dataset
size_categories:
- 10K<n<100K
---
# GitHub Issues + Fixes Dataset
A **curated, high-signal dataset** of GitHub issues collected from **25 popular open-source repositories**.
Each example pairs a real GitHub issue with the **exact code changes (diffs)** that resolved it.
The dataset is designed for:
- **Automated bug fixing**
- **LLM-based code agents**
- **Issue → patch generation**
- **Program repair research**
---
## How the data was extracted
The data was collected using the **GitHub REST API** and processed into a structured format.
To maintain quality and usefulness:
- Only **closed issues** were considered
- Each issue must have a **clearly associated fix**
- Fixes are stored as **unified diffs** extracted from the resolving commit
- Low-signal issues (questions, duplicates, discussions) were filtered out
- Issues without meaningful code changes were excluded
Each row represents **one issue–fix pair**.
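
The collection pipeline described above can be sketched as follows. The endpoint path and the `state=closed` filter are real GitHub REST API conventions, but the closing-keyword heuristic for linking commits to issues is an assumption for illustration, not the exact extraction code:

```python
# Sketch of the issue/fix linking step, assuming the GitHub REST API.
# The closing-keyword regex is a common heuristic, not the exact
# filter used to build this dataset.
import re

API_ROOT = "https://api.github.com"

def closed_issues_url(repo: str, page: int = 1) -> str:
    """URL listing closed issues for a repo (paginated, 100 per page)."""
    return f"{API_ROOT}/repos/{repo}/issues?state=closed&per_page=100&page={page}"

def linked_issue_numbers(commit_message: str) -> list[int]:
    """Issue numbers referenced by closing keywords in a commit message."""
    pattern = r"\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)"
    return [int(n) for n in re.findall(pattern, commit_message, flags=re.IGNORECASE)]
```

A commit whose message contains, say, `Fixes #123` would be treated as the resolving commit for issue 123; commits with no such reference are skipped, which is one way low-signal issues get filtered out.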
---
## Dataset structure
Each dataset entry has the following schema:
```json
{
  "repo": "owner/repository",
  "pr_number": 12345,
  "title": "Short description of the problem",
  "body": "Full issue discussion and problem description",
  "buggy_commit": "abcdef123456...",
  "fix_commit": "123456abcdef...",
  "buggy_distance": 1,
  "confidence": "high",
  "files": [
    {
      "filename": "path/to/file.ext",
      "patch": "unified diff showing the fix",
      "additions": 10,
      "deletions": 2
    }
  ]
}
```
| Field               | Description                                           |
| ------------------- | ----------------------------------------------------- |
| `repo`              | GitHub repository where the issue originated          |
| `pr_number`         | Number of the pull request that resolved the issue    |
| `title`             | Title of the issue                                    |
| `body`              | Full issue description and context                    |
| `buggy_commit`      | Commit representing the buggy (pre-fix) state         |
| `fix_commit`        | Commit that fixed the issue                           |
| `buggy_distance`    | Distance (in commits) between buggy and fix commits   |
| `confidence`        | Confidence level of the issue–fix association         |
| `files`             | List of modified files                                |
| `files[].filename`  | Path of the modified file                             |
| `files[].patch`     | Unified diff representing the fix                     |
| `files[].additions` | Number of added lines                                 |
| `files[].deletions` | Number of removed lines                               |
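A row can be loaded and inspected with the Hugging Face `datasets` library. The `summarize` helper below is hypothetical, added only to show how the per-file `additions`/`deletions` counts compose into a quick overview of a fix:

```python
# Sketch: load one split and summarize a row. Assumes the `datasets`
# package is installed; uncomment the load_dataset lines to fetch
# real rows instead of the sample dict.
# from datasets import load_dataset
# ds = load_dataset("helloadhavan/github_issues", split="split1")
# sample = ds[0]

def summarize(row: dict) -> str:
    """One-line summary of an issue-fix pair (hypothetical helper)."""
    touched = len(row["files"])
    added = sum(f["additions"] for f in row["files"])
    removed = sum(f["deletions"] for f in row["files"])
    return f'{row["repo"]}#{row["pr_number"]}: {touched} file(s), +{added}/-{removed}'

sample = {
    "repo": "owner/repository",
    "pr_number": 12345,
    "files": [
        {"filename": "path/to/file.ext", "patch": "...", "additions": 10, "deletions": 2}
    ],
}
print(summarize(sample))  # owner/repository#12345: 1 file(s), +10/-2
```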
## Supported languages
The dataset contains fixes across multiple programming languages, including (but not limited to):
* C / C++
* Python
* JavaScript / TypeScript
* Rust
* Go
* Java
* Assembly (very rare)
Language distribution varies by repository.
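
Since language is not stored as a field, one rough way to estimate the distribution is to map file extensions in `files` to languages. The extension-to-language table below is an assumption for illustration, not part of the dataset:

```python
# Sketch: approximate per-language counts from file extensions.
# The EXT_TO_LANG mapping is an illustrative assumption.
from collections import Counter
import os

EXT_TO_LANG = {
    ".c": "C", ".h": "C", ".cpp": "C++", ".hpp": "C++",
    ".py": "Python", ".js": "JavaScript", ".ts": "TypeScript",
    ".rs": "Rust", ".go": "Go", ".java": "Java", ".s": "Assembly",
}

def language_counts(rows) -> Counter:
    """Count modified files per (estimated) language."""
    counts = Counter()
    for row in rows:
        for f in row["files"]:
            ext = os.path.splitext(f["filename"])[1].lower()
            counts[EXT_TO_LANG.get(ext, "Other")] += 1
    return counts

rows = [{"files": [{"filename": "lib/util.py"}, {"filename": "src/main.rs"}]}]
print(language_counts(rows))  # Counter({'Python': 1, 'Rust': 1})
```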
## Intended use cases
This dataset is well-suited for:
* Training models to generate code patches from issue descriptions
* Evaluating LLM reasoning over real-world bug reports
* Building autonomous debugging or refactoring agents
* Research on program repair, code synthesis, and software maintenance
It is **not** intended for:
* Issue classification
* Sentiment analysis
* Chatbot fine-tuning without code generation
## Limitations
* The dataset reflects real-world noise from GitHub issues
* Issue descriptions vary widely in clarity and detail
* Some fixes involve refactoring or design changes rather than minimal patches
* No guarantee that all fixes are optimal or best practice
> **<span style="color:red;font-size:1.25rem">Warning</span>**: This dataset currently contains issues from 10 of the 25 repositories (24,000 rows across the four splits). It is expected to grow to roughly 50k rows and about 2 GB in size.