---
license: mit
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: eval
        path: data/eval-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: repo
      dtype: string
    - name: fix_commit
      dtype: string
    - name: buggy_commit
      dtype: string
    - name: message
      dtype: string
    - name: files
      list:
        - name: path
          dtype: string
        - name: patch
          dtype: string
        - name: additions
          dtype: int64
        - name: deletions
          dtype: int64
        - name: language
          dtype: string
    - name: timestamp
      dtype: timestamp[s]
  splits:
    - name: train
      num_bytes: 1561640639
      num_examples: 115096
    - name: eval
      num_bytes: 29054081
      num_examples: 3000
    - name: test
      num_bytes: 29054081
      num_examples: 3000
  download_size: 549629363
  dataset_size: 1619748801
task_categories:
  - text-generation
  - summarization
language:
  - en
tags:
  - code
pretty_name: GitHub issues dataset
size_categories:
  - 100K<n<1M
---

# GitHub Pull Request Bug–Fix Dataset

Kaggle url

A curated, high-signal dataset of real-world software bugs and fixes collected from 25 popular open-source GitHub repositories.
Each entry corresponds to a single bug–fix commit pair and combines contextual metadata with the exact code changes (unified diffs) that fixed the bug.

This dataset is designed for:

- Automated program repair
- Bug-fix patch generation
- LLM-based code and debugging agents
- Empirical software engineering research

## How to use

Install the `datasets` Python library:

```bash
pip install datasets
```

Here is a copy-paste example:

```python
from datasets import load_dataset

# Load all splits
dataset = load_dataset("helloadhavan/github_issues")
print(dataset)

# Pick the train split and inspect a single example
example = dataset["train"][0]

print("Repository:", example["repo"])
print("Buggy commit:", example["buggy_commit"])
print("Fix commit:", example["fix_commit"])
print("Message:", example["message"])
print("Timestamp:", example["timestamp"])

print("\nModified files:")
for f in example["files"]:
    print("-", f["path"], f["language"])

# Filter examples by programming language
def contains_assembly_file(example):
    return any(f["language"] == "Assembly" for f in example["files"])

assembly_fixes = dataset["train"].filter(contains_assembly_file)
print("Assembly-related fixes:", len(assembly_fixes))
```

## Data collection methodology

Data was collected from GitHub repositories by identifying commit pairs that represent a bug-introducing version and its corresponding fix commit.

The dataset was constructed and post-processed to ensure high signal and usability:

- Only commits representing bug fixes or correctness changes were included
- Each example explicitly links a buggy commit to the corresponding fix commit
- Repository metadata is preserved for traceability
- Code changes are stored as unified diffs at the file level
- Commits that only perform refactoring, formatting, or non-functional changes were excluded
- Entries without meaningful code changes were filtered out

Each dataset row represents one bug–fix commit pair, rather than a pull request.
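The filtering described above can be sketched as a commit-message heuristic. This is an illustrative approximation only; the dataset's actual selection rules are not published, and the keyword lists below are assumptions:

```python
import re

# Assumed keyword lists, in the spirit of the filtering described above.
# The real scraper's rules are not published.
FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|patch|resolve[sd]?)\b", re.I)
EXCLUDE_PATTERN = re.compile(r"\b(refactor\w*|format\w*|style|typo|cleanup)\b", re.I)

def looks_like_bug_fix(message: str) -> bool:
    """Return True if a commit message looks like a bug fix and not a
    pure refactoring/formatting change (hypothetical helper)."""
    return bool(FIX_PATTERN.search(message)) and not EXCLUDE_PATTERN.search(message)

print(looks_like_bug_fix("Fix off-by-one error in parser"))  # True
print(looks_like_bug_fix("Refactor helper functions"))       # False
```

Real pipelines typically combine such message heuristics with checks on the diff itself, since commit messages alone are noisy.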


## Dataset schema

Each entry in the dataset follows the schema below:

```json
{
  "repo": "owner/repository",
  "buggy_commit": "abcdef123456...",
  "fix_commit": "fedcba654321...",
  "message": "Commit message describing the fix",
  "timestamp": "YYYY-MM-DDTHH:MM:SSZ",
  "files": [
    {
      "path": "path/to/file.ext",
      "patch": "unified diff representing the fix",
      "additions": 10,
      "deletions": 2,
      "language": "Programming language inferred from file extension"
    }
  ]
}
```
| Field | Description |
| --- | --- |
| `repo` | GitHub repository containing the fix |
| `buggy_commit` | Commit introducing or containing the bug |
| `fix_commit` | Commit that fixes the bug |
| `message` | Commit message associated with the fix |
| `timestamp` | Timestamp of the fix commit (ISO 8601 format) |
| `files` | List of files modified by the fix |
| `files[].path` | Path to the modified file |
| `files[].patch` | Unified diff containing the code changes |
| `files[].additions` | Number of lines added |
| `files[].deletions` | Number of lines removed |
| `files[].language` | Programming language inferred from the file extension |
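The `additions` and `deletions` fields can be recomputed directly from the `patch` text. A minimal sketch (`count_changes` is a hypothetical helper, not part of the dataset or the `datasets` library):

```python
def count_changes(patch: str):
    """Count added/removed lines in a unified diff, mirroring the
    `additions`/`deletions` fields in the schema."""
    additions = deletions = 0
    for line in patch.splitlines():
        # Skip the "---"/"+++" file headers; count only changed lines.
        if line.startswith("+") and not line.startswith("+++"):
            additions += 1
        elif line.startswith("-") and not line.startswith("---"):
            deletions += 1
    return additions, deletions

sample_patch = """\
--- a/util.py
+++ b/util.py
@@ -1,3 +1,3 @@
 def add(a, b):
-    return a - b  # bug
+    return a + b  # fix
"""
print(count_changes(sample_patch))  # (1, 1)
```

This is useful as a sanity check that a row's stored counts match its patch.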

## Supported languages

The dataset contains fixes across multiple programming languages, including (but not limited to):

- C / C++
- Python
- JavaScript / TypeScript
- Rust
- Go
- Java
- Assembly (very rare)

Language distribution varies by repository.
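Per-file language labels are inferred from file extensions. A minimal sketch of how such inference might work; the `EXT_TO_LANG` table below is an assumed, partial mapping, not the dataset's actual one:

```python
import os

# Assumed, partial extension-to-language map for illustration only.
EXT_TO_LANG = {
    ".c": "C", ".cpp": "C++", ".py": "Python", ".js": "JavaScript",
    ".ts": "TypeScript", ".rs": "Rust", ".go": "Go", ".java": "Java",
    ".s": "Assembly",
}

def infer_language(path: str) -> str:
    """Map a file path to a language name via its extension."""
    _, ext = os.path.splitext(path)
    return EXT_TO_LANG.get(ext.lower(), "Unknown")

print(infer_language("src/parser.rs"))  # Rust
print(infer_language("boot/start.S"))   # Assembly
```

Extension-based inference is cheap but imperfect (e.g. `.h` headers are ambiguous between C and C++), which is worth keeping in mind when aggregating by language.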

## Intended use cases

This dataset is well-suited for:

- Training models to generate patches from real pull request context
- Studying bug-fix patterns across large codebases
- Building autonomous debugging or repair agents
- Research in program repair, code synthesis, and software maintenance

It is not intended for:

- Pull request classification or triage
- Sentiment analysis

## Limitations

- The dataset reflects real-world noise from GitHub pull requests
- Buggy commit identification is heuristic and may be imperfect
- Some fixes involve refactoring or design changes rather than minimal patches
- There is no guarantee that fixes represent optimal or best-practice solutions

**Note:** Due to a bug in the scraper code, roughly 109k samples were collected instead of the planned 50k.