---
license: mit
task_categories:
- text-generation
language:
- en
- code
tags:
- code-review
- code-generation
- software-engineering
- pull-requests
- github
size_categories:
- 100K<n<1M
---
# Code Review Diffs
A large-scale dataset of (before_code, reviewer_comment, after_code) triplets extracted from merged pull requests on top GitHub repositories.
## Dataset Description
Each row captures a moment where a human code reviewer left an inline comment on a pull request, and the author subsequently modified the code in response. This provides a natural signal for training models to:
- Generate code review comments given a code diff
- Apply review feedback by modifying code based on reviewer suggestions
- Understand code quality patterns across languages and projects
### Key Features
- 118K+ triplets from 562 top GitHub repositories
- 20+ programming languages (Python, TypeScript, Go, C++, Rust, JavaScript, C#, Java, Kotlin, Swift, and more)
- Quality-filtered: bot comments, noise ("LGTM", "+1"), and auto-generated content removed
- Chunk-focused: ~50 lines of context around the reviewed code, not entire files
- Permissive licenses only: all source repos use MIT, Apache-2.0, BSD, or similar licenses
- Verified changes: only includes triplets where the code chunk actually changed after the review
## Schema
| Column | Type | Description |
|---|---|---|
| `before_code` | string | ~50 lines of code around the comment, before the fix |
| `reviewer_comment` | string | The inline review comment text |
| `after_code` | string | ~50 lines of code around the comment, after the fix |
| `diff_context` | string | The PR diff hunk where the comment was placed |
| `file_path` | string | File path within the repo |
| `comment_line` | int | Line number where the comment was placed |
| `language` | string | Programming language |
| `quality_score` | float | Comment quality score (0.0-1.0) |
| `comment_length` | int | Character count of the reviewer comment |
| `before_lines` | int | Line count of the before code |
| `after_lines` | int | Line count of the after code |
| `pr_title` | string | Pull request title |
| `pr_number` | int | PR number |
| `repo_name` | string | Full repo name (owner/repo) |
| `repo_stars` | int | GitHub stars |
| `repo_language` | string | Primary repo language |
| `reviewer_username` | string | Reviewer's GitHub username |
| `author_username` | string | PR author's GitHub username |
## Usage

```python
from datasets import load_dataset

ds = load_dataset("ronantakizawa/github-codereview")

# Get a training example
example = ds["train"][0]
print(f"Review comment: {example['reviewer_comment']}")
print(f"Language: {example['language']}")
print(f"Before:\n{example['before_code'][:200]}")
print(f"After:\n{example['after_code'][:200]}")
```

### Filter by language

```python
python_reviews = ds["train"].filter(lambda x: x["language"] == "Python")
```

### Filter by quality

```python
high_quality = ds["train"].filter(lambda x: x["quality_score"] >= 0.3)
```
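Triplets can be turned into supervised pairs for either task the dataset targets. A minimal sketch for the comment-generation direction is below; the prompt/response format and the function name are illustrative choices, not part of the dataset:

```python
# Sketch: convert one dataset row into a (prompt, response) pair for
# training a review-comment generator. The prompt template here is an
# assumption; only the column names come from the dataset schema.
def to_comment_generation_example(row: dict) -> dict:
    prompt = (
        f"File: {row['file_path']} ({row['language']})\n"
        f"Code under review:\n{row['before_code']}\n\n"
        "Write an inline review comment for this code:"
    )
    return {"prompt": prompt, "response": row["reviewer_comment"]}

# examples = ds["train"].map(to_comment_generation_example)
```

Swapping `before_code`/`reviewer_comment` for the prompt and `after_code` for the response gives the feedback-application direction instead.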
## Collection Methodology

1. Repo selection: Top 10,000 GitHub repos by stars with permissive licenses, from ronantakizawa/github-top-projects
2. PR discovery: Paginate merged PRs, filter bot authors, fetch inline review comments
3. Comment filtering: Remove bots, noise patterns, auto-generated comments, non-English text, non-code files, and reply comments
4. Triplet extraction: Fetch file contents at the review commit (before) and the PR head (after), then extract focused chunks around the comment line
5. Change verification: Keep only triplets where the code chunk around the comment actually changed
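The extraction and verification steps can be sketched as follows. The window size (~25 lines on each side) and the helper names are assumptions for illustration; the actual pipeline is not published in this card:

```python
# Sketch of triplet extraction + change verification (steps 4-5).
# CONTEXT controls the ~50-line chunk size; the exact value used to
# build the dataset is an assumption here.
CONTEXT = 25  # lines kept on each side of the comment line

def extract_chunk(file_text: str, comment_line: int) -> str:
    """Return the lines around a 1-indexed comment line."""
    lines = file_text.splitlines()
    start = max(0, comment_line - 1 - CONTEXT)
    end = min(len(lines), comment_line - 1 + CONTEXT + 1)
    return "\n".join(lines[start:end])

def make_triplet(before_text: str, after_text: str,
                 comment: str, comment_line: int):
    before_chunk = extract_chunk(before_text, comment_line)
    after_chunk = extract_chunk(after_text, comment_line)
    if before_chunk == after_chunk:
        return None  # change verification: drop unchanged chunks
    return {"before_code": before_chunk,
            "reviewer_comment": comment,
            "after_code": after_chunk}
```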
## Quality Filters Applied
- Bot authors excluded (dependabot, renovate, codecov, etc.)
- Comments < 30 characters excluded
- Noise patterns excluded (LGTM, +1, emoji-only, etc.)
- Auto-generated comments excluded (coverage reports, CI output)
- Non-English comments excluded (ASCII ratio < 70%)
- Non-source-code files excluded
- Reply comments excluded (only top-level review comments)
- Files > 512 KB excluded
- PRs with < 10 or > 500 changed lines excluded
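A minimal sketch of the comment-level filters above; the specific bot list, noise patterns, and the regex are illustrative assumptions, while the length threshold (30 characters) and ASCII ratio (70%) come from the list:

```python
import re

# Hedged sketch of the comment filters. BOTS and NOISE are illustrative
# subsets; the thresholds (30 chars, 70% ASCII) match the card above.
BOTS = {"dependabot[bot]", "renovate[bot]", "codecov[bot]"}
NOISE = re.compile(r"^\s*(lgtm|\+1|ship it|nit)\s*[.!]*\s*$", re.IGNORECASE)

def keep_comment(body: str, author: str) -> bool:
    if author in BOTS:
        return False                      # bot authors excluded
    if len(body) < 30:
        return False                      # too short to be substantive
    if NOISE.match(body):
        return False                      # LGTM / +1 style noise
    ascii_ratio = sum(c.isascii() for c in body) / max(len(body), 1)
    if ascii_ratio < 0.7:
        return False                      # likely non-English
    return True
```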
## Splits
| Split | Percentage | Description |
|---|---|---|
| train | 90% | Training data |
| test | 5% | Test data |
| validation | 5% | Validation data |
Splits are deterministic by repository — all triplets from the same repo appear in the same split.
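A repo-level deterministic split can be reproduced by hashing the repo name into a percentage bucket. The hashing scheme below is an assumption; the published splits may use a different one, but the property is the same: every triplet from a given repo lands in the same split.

```python
import hashlib

# Sketch of a deterministic, repository-level 90/5/5 split.
# The SHA-256-based bucketing is an illustrative choice.
def split_for(repo_name: str) -> str:
    bucket = int(hashlib.sha256(repo_name.encode()).hexdigest(), 16) % 100
    if bucket < 90:
        return "train"
    if bucket < 95:
        return "test"
    return "validation"
```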
## Limitations
- Review comments may address style/formatting rather than substantive logic changes
- The "after" code may include changes unrelated to the specific review comment
- Line number alignment may be imprecise when multiple commits occur between review and merge
- Some `original_commit_id` SHAs may be unavailable due to force-pushes; in these cases, the PR merge base is used as the "before" state
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{takizawa2026codereviewdiffs,
  title={Code Review Diffs: A Large-Scale Dataset of Review-Driven Code Changes},
  author={Takizawa, Ronan},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ronantakizawa/github-codereview}
}
```