| --- |
| license: odc-by |
| task_categories: |
| - text-generation |
| - text-classification |
| - feature-extraction |
| language: |
| - en |
| - mul |
| pretty_name: OpenGitHub |
| size_categories: |
| - 100M<n<1B |
| tags: |
| - github |
| - events |
| - open-source |
| - gharchive |
| - code |
| - software-engineering |
| - social |
| configs: |
| - config_name: pushes |
| data_files: "data/pushes/**/*.parquet" |
| - config_name: issues |
| data_files: "data/issues/**/*.parquet" |
| - config_name: issue_comments |
| data_files: "data/issue_comments/**/*.parquet" |
| - config_name: pull_requests |
| data_files: "data/pull_requests/**/*.parquet" |
| - config_name: pr_reviews |
| data_files: "data/pr_reviews/**/*.parquet" |
| - config_name: pr_review_comments |
| data_files: "data/pr_review_comments/**/*.parquet" |
| - config_name: stars |
| data_files: "data/stars/**/*.parquet" |
| - config_name: forks |
| data_files: "data/forks/**/*.parquet" |
| - config_name: creates |
| data_files: "data/creates/**/*.parquet" |
| - config_name: deletes |
| data_files: "data/deletes/**/*.parquet" |
| - config_name: releases |
| data_files: "data/releases/**/*.parquet" |
| - config_name: commit_comments |
| data_files: "data/commit_comments/**/*.parquet" |
| - config_name: wiki_pages |
| data_files: "data/wiki_pages/**/*.parquet" |
| - config_name: members |
| data_files: "data/members/**/*.parquet" |
| - config_name: public_events |
| data_files: "data/public_events/**/*.parquet" |
| - config_name: discussions |
| data_files: "data/discussions/**/*.parquet" |
| - config_name: live |
| data_files: "today/raw/**/*.parquet" |
| --- |
| |
| # OpenGitHub |
|
|
| ## What is it? |
|
|
| This dataset contains every public event on GitHub: every push, pull request, issue, star, fork, code review, release, and discussion across all public repositories. GitHub is the world's largest software development platform, home to over 200 million repositories and the daily work of tens of millions of developers, from individual open-source contributors to the engineering teams behind the most widely used software on earth. |
|
|
| The archive currently spans from **2011-02-12** to **2015-04-15** (1,481 days with data), totaling **304,484,380 events** across 16 fully structured Parquet tables. New events are fetched directly from the GitHub Events API every few seconds and committed as 5-minute Parquet blocks through an automated live pipeline, so the dataset stays current with GitHub itself. |
|
|
| We believe this is the most complete and regularly updated structured mirror of public GitHub activity available on Hugging Face. The original 103.9 GB of raw GH Archive NDJSON has been parsed, flattened, and compressed into 23.1 GB of Zstd-compressed Parquet. Every nested JSON field is expanded into typed columns — no JSON parsing needed downstream. The data is partitioned as `data/TABLE/YYYY/MM/DD.parquet`, making it straightforward to query with DuckDB, load with the `datasets` library, or process with any tool that reads Parquet. |
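For example, the files covering a date range can be enumerated directly from the layout. A minimal sketch (stdlib only; the table name and dates are illustrative):

```python
from datetime import date, timedelta

def partition_paths(table: str, start: date, end: date) -> list[str]:
    """Daily Parquet paths for `table` between start and end (inclusive),
    following the data/TABLE/YYYY/MM/DD.parquet layout."""
    paths = []
    day = start
    while day <= end:
        paths.append(f"data/{table}/{day:%Y}/{day:%m}/{day:%d}.parquet")
        day += timedelta(days=1)
    return paths

print(partition_paths("stars", date(2014, 12, 31), date(2015, 1, 1)))
# → ['data/stars/2014/12/31.parquet', 'data/stars/2015/01/01.parquet']
```

The resulting list can be passed to DuckDB's `read_parquet` or to `data_files=` in `load_dataset` to restrict a scan to exactly those days instead of globbing the whole table.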
|
|
| The underlying data comes from [GH Archive](https://www.gharchive.org/), created by [Ilya Grigorik](https://www.igvita.com/), which has been recording every public GitHub event via the [Events API](https://docs.github.com/en/rest/activity/events) since 2011. Released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/). |
|
|
| ## Live data (today) |
|
|
| Events from today are captured in near-real-time from the GitHub Events API and stored as 5-minute blocks in `today/raw/YYYY/MM/DD/HHMM.parquet`. Each block contains a generic event record with the full JSON payload preserved for later processing. Live blocks are committed to this dataset within minutes of the events occurring. |
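The block a given event lands in follows directly from its timestamp. A sketch assuming blocks are floored to 5-minute boundaries (the exact boundary convention is an assumption):

```python
from datetime import datetime, timezone

def live_block_path(ts: datetime) -> str:
    """Path of the 5-minute live block containing an event timestamp."""
    minute = ts.minute - ts.minute % 5  # floor to the block boundary
    return f"today/raw/{ts:%Y/%m/%d}/{ts.hour:02d}{minute:02d}.parquet"

ts = datetime(2026, 3, 29, 13, 7, 42, tzinfo=timezone.utc)
print(live_block_path(ts))  # → today/raw/2026/03/29/1305.parquet
```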
|
|
| **2026-03-29** — 3,277,622 events in 165 blocks |
|
|
| ``` |
| 00:00 ████████████████████████░░░░░░ 216.2K |
| 01:00 ████████████████████████░░░░░░ 222.5K |
| 02:00 ██████████████████████████░░░░ 238.7K |
| 03:00 █████████████████████████░░░░░ 230.2K |
| 04:00 ██████████████████████████░░░░ 237.8K |
| 05:00 ████████████████████████████░░ 258.6K |
| 06:00 ███████████████████████████░░░ 246.3K |
| 07:00 ██████████████████████████░░░░ 240.5K |
| 08:00 ███████████████████████████░░░ 246.9K |
| 09:00 ███████████████████████████░░░ 243.6K |
| 10:00 ████████████████████████████░░ 254.2K |
| 11:00 ███████████████████████████░░░ 250.9K |
| 12:00 ██████████████████████████████ 269.8K |
| 13:00 ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 24.1K |
| 14:00 ██████████░░░░░░░░░░░░░░░░░░░░ 97.2K |
| 15:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 16:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 17:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 18:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 19:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 20:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 21:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 22:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| 23:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0 |
| ``` |
|
|
| Live capture spans **2 days** (2026-03-28 → 2026-03-29) — **9,267,624 events** in 4,156 blocks. |
|
|
|
|
| ### Live event schema |
|
|
| | Column | Type | Description | |
| |---|---|---| |
| | `event_id` | string | Unique GitHub event ID | |
| | `event_type` | string | Event type (PushEvent, IssuesEvent, etc.) | |
| | `created_at` | timestamp | When the event occurred | |
| | `actor_id` | int64 | User ID | |
| | `actor_login` | string | Username | |
| | `repo_id` | int64 | Repository ID | |
| | `repo_name` | string | Full repository name (owner/repo) | |
| | `org_id` | int64 | Organization ID (0 if personal) | |
| | `org_login` | string | Organization login | |
| | `action` | string | Event action (opened, closed, started, etc.) | |
| | `number` | int32 | Issue/PR number | |
| | `payload_json` | string | Full event payload as JSON | |
|
|
| ```python |
| # Query today's live events with DuckDB. |
| # Run: uv run live_events.py |
| import duckdb |
| |
| duckdb.sql(""" |
| SELECT event_type, COUNT(*) as n |
| FROM read_parquet('hf://datasets/open-index/open-github/today/raw/**/*.parquet') |
| GROUP BY event_type ORDER BY n DESC |
| """).show() |
| ``` |
|
|
| ## Events per year |
|
|
| ``` |
| 2011 ████░░░░░░░░░░░░░░░░░░░░░░░░░░ 18.0M |
| 2012 ██████████░░░░░░░░░░░░░░░░░░░░ 42.5M |
| 2013 ██████████████████░░░░░░░░░░░░ 76.2M |
| 2014 ██████████████████████████████ 124.6M |
| 2015 ██████████░░░░░░░░░░░░░░░░░░░░ 43.2M |
| ``` |
|
|
| | Year | Days | Events | Avg/Day | Raw Input | Parquet Output | Download | Process | Upload | |
| |------|-----:|-------:|--------:|----------:|---------------:|---------:|--------:|-------:| |
| | 2011 | 323 | 18,032,583 | 55,828 | 3.5 GB | 76.6 MB | 47m29s | 52m20s | 33m17s | |
| | 2012 | 366 | 42,537,517 | 116,222 | 11.5 GB | 161.5 MB | 1h19m | 2h30m | 35m15s | |
| | 2013 | 353 | 76,154,264 | 215,734 | 23.2 GB | 4.1 GB | 2h33m | 8h52m | 2h34m | |
| | 2014 | 348 | 124,581,314 | 357,992 | 51.9 GB | 11.8 GB | 4h13m | 20h37m | 5h05m | |
| | 2015 | 91 | 43,178,702 | 474,491 | 13.8 GB | 7.1 GB | 59m16s | 2h15m | 2h10m | |
|
|
|
|
| ### Pushes per year |
|
|
| Pushes are the most common event type, representing roughly half of all GitHub activity. Each push can contain multiple commits. Bots (Dependabot, Renovate, CI pipelines) account for a significant share. |
|
|
| ``` |
| 2011 ████░░░░░░░░░░░░░░░░░░░░░░░░░░ 8.7M |
| 2012 █████████░░░░░░░░░░░░░░░░░░░░░ 20.6M |
| 2013 ██████████████████░░░░░░░░░░░░ 38.9M |
| 2014 ██████████████████████████████ 63.1M |
| 2015 ██████████░░░░░░░░░░░░░░░░░░░░ 21.2M |
| ``` |
|
|
|
|
| ```sql |
| -- Top 20 repos by push volume this year. |
| -- Run: duckdb -c ".read pushes_top_repos.sql" |
| SELECT repo_name, COUNT(*) as pushes, SUM(size) as commits |
| FROM read_parquet('hf://datasets/open-index/open-github/data/pushes/2026/**/*.parquet') |
| GROUP BY repo_name ORDER BY pushes DESC LIMIT 20; |
| ``` |
|
|
| ### Issues per year |
|
|
| Issue events track the full lifecycle: opened, closed, reopened, labeled, assigned, and more. Use the `action` column to filter by lifecycle stage. |
|
|
| ``` |
| 2011 ████░░░░░░░░░░░░░░░░░░░░░░░░░░ 982.1K |
| 2012 █████████░░░░░░░░░░░░░░░░░░░░░ 2.4M |
| 2013 ██████████████████░░░░░░░░░░░░ 4.4M |
| 2014 ██████████████████████████████ 7.3M |
| 2015 █████████░░░░░░░░░░░░░░░░░░░░░ 2.2M |
| ``` |
|
|
|
|
| ```sql |
| -- Repos with the most issues opened vs closed this year. |
| -- Run: duckdb -c ".read issues_top_repos.sql" |
| SELECT repo_name, |
| COUNT(*) FILTER (WHERE action = 'opened') as opened, |
| COUNT(*) FILTER (WHERE action = 'closed') as closed |
| FROM read_parquet('hf://datasets/open-index/open-github/data/issues/2026/**/*.parquet') |
| GROUP BY repo_name ORDER BY opened DESC LIMIT 20; |
| ``` |
|
|
| ### Pull requests per year |
|
|
| Pull request events cover the full review cycle: opened, closed, reopened, review_requested, and synchronize (new commits pushed). The `merged` field indicates whether a PR was merged when closed — merges surface as `closed` events with `merged = true`. |
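Because merge status rides on the close event, a merge rate needs only the `action` and `merged` columns. A minimal sketch over hypothetical rows shaped like this table:

```python
# Hypothetical rows mimicking the pull_requests schema (action, merged)
rows = [
    {"action": "opened", "merged": False},
    {"action": "closed", "merged": True},
    {"action": "closed", "merged": False},
    {"action": "closed", "merged": True},
]

# Only close events carry a final merge verdict
closed = [r for r in rows if r["action"] == "closed"]
merge_rate = sum(r["merged"] for r in closed) / len(closed)
print(f"merge rate: {merge_rate:.0%}")  # → merge rate: 67%
```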
|
|
| ``` |
| 2011 ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 475.6K |
| 2012 █████████░░░░░░░░░░░░░░░░░░░░░ 1.8M |
| 2013 ████████████████░░░░░░░░░░░░░░ 3.0M |
| 2014 ██████████████████████████████ 5.5M |
| 2015 ███████████░░░░░░░░░░░░░░░░░░░ 2.1M |
| ``` |
|
|
|
|
| ```sql |
| -- Top repos by merged PRs this year. |
| -- Run: duckdb -c ".read prs_top_merged.sql" |
| SELECT repo_name, COUNT(*) as merged_prs |
| FROM read_parquet('hf://datasets/open-index/open-github/data/pull_requests/2026/**/*.parquet') |
| WHERE action = 'closed' AND merged = true |
| GROUP BY repo_name ORDER BY merged_prs DESC LIMIT 20; |
| ``` |
|
|
| ### Stars per year |
|
|
| Stars (WatchEvent in the GitHub API) reflect community interest and discovery. Starring patterns often correlate with Hacker News, Reddit, or Twitter posts. For 2012–2014 events, `repo_language`, `repo_stars_count`, and `repo_forks_count` are populated from the legacy Timeline API repository snapshot. |
|
|
| ``` |
| 2011 ████░░░░░░░░░░░░░░░░░░░░░░░░░░ 1.8M |
| 2012 ██████████░░░░░░░░░░░░░░░░░░░░ 4.0M |
| 2013 ███████████████████░░░░░░░░░░░ 7.1M |
| 2014 ██████████████████████████████ 11.2M |
| 2015 ██████████░░░░░░░░░░░░░░░░░░░░ 3.9M |
| ``` |
|
|
|
|
| ```sql |
| -- Most starred repos this year. |
| -- Run: duckdb -c ".read stars_top_repos.sql" |
| SELECT repo_name, COUNT(*) as stars |
| FROM read_parquet('hf://datasets/open-index/open-github/data/stars/2026/**/*.parquet') |
| GROUP BY repo_name ORDER BY stars DESC LIMIT 20; |
| ``` |
|
|
| ## Quick start |
|
|
| ### Python (`datasets`) |
|
|
| ```python |
| # Quick-start: load OpenGitHub data with the Hugging Face datasets library. |
| # Run: uv run quickstart_datasets.py |
| from datasets import load_dataset |
| |
| # Stream all stars without downloading everything |
| ds = load_dataset("open-index/open-github", "stars", streaming=True) |
| for row in ds["train"]: |
| print(row["repo_name"], row["actor_login"], row["created_at"]) |
| break # remove to stream all |
| |
| # Load a specific month of issues |
| ds = load_dataset("open-index/open-github", "issues", |
| data_files="data/issues/2026/03/*.parquet") |
| print(f"March 2026 issues: {len(ds['train'])}") |
| |
| # Load all pull requests into memory |
| ds = load_dataset("open-index/open-github", "pull_requests") |
| print(f"Total PRs: {len(ds['train'])}") |
| |
| # Query today's live events |
| ds = load_dataset("open-index/open-github", "live", streaming=True) |
| for row in ds["train"]: |
| print(row["event_type"], row["repo_name"], row["created_at"]) |
| break # remove to stream all |
| ``` |
|
|
| ### DuckDB |
|
|
| ```sql |
| -- Quick-start DuckDB queries for the OpenGitHub dataset. |
| -- Run: duckdb -c ".read quickstart.sql" |
| |
| -- Top 20 most-starred repos this year |
| SELECT repo_name, COUNT(*) as stars |
| FROM read_parquet('hf://datasets/open-index/open-github/data/stars/2026/**/*.parquet') |
| GROUP BY repo_name ORDER BY stars DESC LIMIT 20; |
| |
| -- Most active PR reviewers (approvals only) |
| SELECT actor_login, COUNT(*) as approvals |
| FROM read_parquet('hf://datasets/open-index/open-github/data/pr_reviews/2026/**/*.parquet') |
| WHERE review_state = 'approved' |
| GROUP BY actor_login ORDER BY approvals DESC LIMIT 20; |
| |
| -- Issue open/close rates by repo |
| SELECT repo_name, |
| COUNT(*) FILTER (WHERE action = 'opened') as opened, |
| COUNT(*) FILTER (WHERE action = 'closed') as closed, |
| ROUND(COUNT(*) FILTER (WHERE action = 'closed') * 100.0 / |
| NULLIF(COUNT(*) FILTER (WHERE action = 'opened'), 0), 1) as close_pct |
| FROM read_parquet('hf://datasets/open-index/open-github/data/issues/2026/**/*.parquet') |
| WHERE is_pull_request = false |
| GROUP BY repo_name HAVING opened >= 10 |
| ORDER BY opened DESC LIMIT 20; |
| |
| -- Full activity timeline for a repo (one month) |
| -- union_by_name=true lets tables with different payload columns be scanned together |
| SELECT event_type, created_at, actor_login |
| FROM read_parquet('hf://datasets/open-index/open-github/data/*/2026/03/*.parquet', union_by_name=true) |
| WHERE repo_name = 'golang/go' |
| ORDER BY created_at DESC LIMIT 100; |
| ``` |
|
|
| ### Bulk download (`huggingface_hub`) |
| |
| ```python |
| # Download OpenGitHub data locally with huggingface_hub. |
| # Run: uv run quickstart_download.py |
| # For faster downloads: HF_HUB_ENABLE_HF_TRANSFER=1 uv run quickstart_download.py |
| from huggingface_hub import snapshot_download |
|
|
| # Download only stars data |
| snapshot_download("open-index/open-github", repo_type="dataset", |
| local_dir="./open-github/", |
| allow_patterns="data/stars/**/*.parquet") |
| |
| # Download a specific repo's data across all tables |
| # snapshot_download("open-index/open-github", repo_type="dataset", |
| # local_dir="./open-github/", |
| # allow_patterns="data/*/2026/03/*.parquet") |
| ``` |
| |
| For faster downloads, install the extra with `pip install "huggingface_hub[hf_transfer]"` and set `HF_HUB_ENABLE_HF_TRANSFER=1`. |
| |
| ## Schema |
| |
| ### Event envelope (shared across all 16 tables) |
| |
| Every row includes these columns: |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `event_id` | string | Unique GitHub event ID | |
| | `event_type` | string | GitHub event type (e.g. `PushEvent`, `IssuesEvent`) | |
| | `created_at` | timestamp | Event timestamp (UTC) | |
| | `actor_id` | int64 | User ID of the actor | |
| | `actor_login` | string | Username of the actor | |
| | `repo_id` | int64 | Repository ID | |
| | `repo_name` | string | Full repository name (`owner/repo`) | |
| | `org_id` | int64 | Organization ID (0 if personal repo) | |
| | `org_login` | string | Organization login | |
| |
| ### Per-table payload fields |
| |
| #### `pushes`.PushEvent |
| |
| Git push events, typically the highest volume table (~50% of all events). Each push includes the full list of commits with SHA, message, and author. |
| |
| **Processing:** Each `PushEvent` produces one row. The `commits` field is a Parquet LIST of structs with fields `sha`, `message`, `author_name`, `author_email`, `distinct`, `url`. All other fields are flattened directly from `payload.*`. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `push_id` | int64 | Unique push identifier | |
| | `ref` | string | Git ref (e.g. `refs/heads/main`) | |
| | `head` | string | SHA after push | |
| | `before` | string | SHA before push | |
| | `size` | int32 | Total commits in push | |
| | `distinct_size` | int32 | Distinct (new) commits | |
| | `commits` | list\<struct\> | Commit list: `[{sha, message, author_name, author_email, distinct, url}]` | |
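In Python (e.g. after loading with `datasets` or converting to pandas), the `commits` LIST column arrives as a list of dicts and can be exploded into one record per commit. A sketch over a hypothetical push row (all field values illustrative):

```python
push = {  # hypothetical row from the pushes table
    "repo_name": "octocat/hello-world",
    "push_id": 42,
    "commits": [
        {"sha": "a1b2c3", "message": "fix typo", "author_name": "Mona", "distinct": True},
        {"sha": "d4e5f6", "message": "add tests", "author_name": "Mona", "distinct": True},
    ],
}

# One record per commit, carrying the push's envelope fields along
commit_rows = [
    {"repo_name": push["repo_name"], "push_id": push["push_id"], **commit}
    for commit in push["commits"]
]
print(len(commit_rows), commit_rows[0]["sha"])  # → 2 a1b2c3
```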
| |
| #### `issues`.IssuesEvent |
| |
| Issue lifecycle events: opened, closed, reopened, edited, labeled, assigned, milestoned, and more. Contains the full issue snapshot at event time. |
| |
| **Processing:** Flattened from `payload.issue.*`. Nested objects like `issue.user` become `user_login`, `issue.milestone` becomes `milestone_id`/`milestone_title`. Labels and assignees are Parquet LIST columns. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | opened, closed, reopened, labeled, etc. | |
| | `issue_id` | int64 | Issue ID | |
| | `issue_number` | int32 | Issue number | |
| | `title` | string | Issue title | |
| | `body` | string | Issue body (markdown) | |
| | `state` | string | open or closed | |
| | `locked` | bool | Whether comments are locked | |
| | `comments_count` | int32 | Comment count | |
| | `user_login` | string | Author username | |
| | `user_id` | int64 | Author user ID | |
| | `assignee_login` | string | Primary assignee | |
| | `milestone_title` | string | Milestone name | |
| | `labels` | list\<string\> | Label names | |
| | `assignees` | list\<string\> | Assignee logins | |
| | `reactions_total` | int32 | Total reactions | |
| | `issue_created_at` | timestamp | When the issue was created | |
| | `issue_closed_at` | timestamp | When closed (null if open) | |
| |
| #### `issue_comments`.IssueCommentEvent |
| |
| Comments on issues and pull requests. Each event contains both the comment and a summary of the parent issue. |
| |
| **Processing:** Flattened from `payload.comment.*` and `payload.issue.*`. Comment reactions are flattened from `comment.reactions.*`. The parent issue fields are prefixed with `issue_` for context. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | created, edited, or deleted | |
| | `comment_id` | int64 | Comment ID | |
| | `comment_body` | string | Comment text (markdown) | |
| | `comment_user_login` | string | Comment author | |
| | `comment_created_at` | timestamp | Comment timestamp | |
| | `issue_number` | int32 | Parent issue/PR number | |
| | `issue_title` | string | Parent issue/PR title | |
| | `issue_state` | string | Parent state (open/closed) | |
| | `reactions_total` | int32 | Total reactions on comment | |
| |
| #### `pull_requests`.PullRequestEvent |
| |
| Pull request lifecycle: opened, closed, reopened, labeled, review_requested, synchronize, and more. The richest table, containing diff stats, merge status, head/base refs, and full PR metadata. |
| |
| **Processing:** Deeply flattened from `payload.pull_request.*`. Branch refs like `head.ref`, `head.sha`, `base.ref` become `head_ref`, `head_sha`, `base_ref`. Repository info from `head.repo` and `base.repo` become `head_repo_full_name`, `base_repo_full_name`. Labels and reviewers are Parquet LIST columns. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | opened, closed, reopened, synchronize, etc. | |
| | `pr_id` | int64 | PR ID | |
| | `pr_number` | int32 | PR number | |
| | `title` | string | PR title | |
| | `body` | string | PR body (markdown) | |
| | `state` | string | open or closed | |
| | `merged` | bool | Whether merged | |
| | `draft` | bool | Whether a draft PR | |
| | `commits_count` | int32 | Commit count | |
| | `additions` | int32 | Lines added | |
| | `deletions` | int32 | Lines deleted | |
| | `changed_files` | int32 | Files changed | |
| | `user_login` | string | Author username | |
| | `head_ref` | string | Source branch | |
| | `head_sha` | string | Source commit SHA | |
| | `base_ref` | string | Target branch | |
| | `head_repo_full_name` | string | Source repo | |
| | `base_repo_full_name` | string | Target repo | |
| | `merged_by_login` | string | Who merged | |
| | `pr_created_at` | timestamp | When the PR was opened | |
| | `pr_merged_at` | timestamp | When merged (null if not merged) | |
| | `labels` | list\<string\> | Label names | |
| | `requested_reviewers` | list\<string\> | Requested reviewer logins | |
| | `reactions_total` | int32 | Total reactions | |
| |
| #### `pr_reviews`.PullRequestReviewEvent |
| |
| Code review submissions: approved, changes_requested, commented, or dismissed. Each review is one row. |
| |
| **Processing:** Flattened from `payload.review.*` and `payload.pull_request.*`. The review state (approved/changes_requested/commented/dismissed) is the most useful field for analyzing review patterns. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | submitted, dismissed | |
| | `review_id` | int64 | Review ID | |
| | `review_state` | string | approved, changes_requested, commented, dismissed | |
| | `review_body` | string | Review body text | |
| | `review_submitted_at` | timestamp | Review timestamp | |
| | `review_user_login` | string | Reviewer username | |
| | `review_commit_id` | string | Commit SHA reviewed | |
| | `pr_id` | int64 | PR ID | |
| | `pr_number` | int32 | PR number | |
| | `pr_title` | string | PR title | |
| |
| #### `pr_review_comments`.PullRequestReviewCommentEvent |
| |
| Line-level comments on pull request diffs. Includes the diff hunk for context and threading via `in_reply_to_id`. |
| |
| **Processing:** Flattened from `payload.comment.*` and `payload.pull_request.*`. The `diff_hunk` field contains the surrounding diff context. Thread replies reference the parent comment via `in_reply_to_id`. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | created | |
| | `comment_id` | int64 | Comment ID | |
| | `comment_body` | string | Comment text | |
| | `diff_hunk` | string | Diff context | |
| | `path` | string | File path | |
| | `line` | int32 | Line number | |
| | `side` | string | LEFT or RIGHT | |
| | `in_reply_to_id` | int64 | Parent comment (threads) | |
| | `comment_user_login` | string | Author | |
| | `comment_created_at` | timestamp | Comment timestamp | |
| | `pr_number` | int32 | PR number | |
| | `reactions_total` | int32 | Total reactions | |
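Review threads can be reconstructed from `in_reply_to_id` alone, assuming replies reference the thread's root comment (GitHub's usual convention; a null or 0 value marks a root). A sketch over hypothetical rows:

```python
from collections import defaultdict

comments = [  # hypothetical rows from pr_review_comments
    {"comment_id": 1, "in_reply_to_id": None, "comment_body": "nit: rename this"},
    {"comment_id": 2, "in_reply_to_id": 1, "comment_body": "done"},
    {"comment_id": 3, "in_reply_to_id": None, "comment_body": "missing null check"},
]

# Group each comment under its thread root: replies carry the root's ID,
# root comments use their own ID (the `or` also handles 0 as "no parent").
threads = defaultdict(list)
for c in comments:
    threads[c["in_reply_to_id"] or c["comment_id"]].append(c)

print({root: len(cs) for root, cs in threads.items()})  # → {1: 2, 3: 1}
```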
| |
| #### `stars`.WatchEvent |
| |
| Repository star events: who starred which repo, and when. A GitHub API quirk: the event is called `WatchEvent`, but it records starring. The action is always `"started"`, so it is not stored. |
| |
| **Processing:** The WatchEvent payload carries no useful fields — all signal is in the event envelope (actor, repo, timestamp). For 2012–2014 events the legacy Timeline API included a full repository snapshot, so `repo_language`, `repo_stars_count`, `repo_forks_count`, `repo_description`, and `repo_is_fork` are populated for that era. `actor_type` is also populated from the legacy `actor_attributes` object. For 2015+ events those fields are empty; `actor_avatar_url` is populated instead. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `actor_avatar_url` | string | Actor avatar URL (2015+) | |
| | `actor_type` | string | `User` or `Organization` (2012–2014 only) | |
| | `repo_description` | string | Repo description at star time (2012–2014 only) | |
| | `repo_language` | string | Primary language (2012–2014 only) | |
| | `repo_stars_count` | int32 | Star count at star time (2012–2014 only) | |
| | `repo_forks_count` | int32 | Fork count at star time (2012–2014 only) | |
| | `repo_is_fork` | bool | Whether the starred repo is a fork (2012–2014 only) | |
| |
| #### `forks`.ForkEvent |
| |
| Repository fork events. Contains metadata about the newly created fork, including its language, license, and star count at fork time. |
| |
| **Processing:** Flattened from `payload.forkee.*`. The forkee is the newly created repository. Owner info from `forkee.owner` becomes `forkee_owner_login`. License from `forkee.license` becomes `forkee_license_key`. Topics are a Parquet LIST column. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `forkee_id` | int64 | Forked repo ID | |
| | `forkee_full_name` | string | Fork full name (owner/repo) | |
| | `forkee_language` | string | Primary language | |
| | `forkee_stars_count` | int32 | Stars at fork time | |
| | `forkee_forks_count` | int32 | Forks at fork time | |
| | `forkee_owner_login` | string | Fork owner | |
| | `forkee_description` | string | Fork description | |
| | `forkee_license_key` | string | License SPDX key | |
| | `forkee_topics` | list\<string\> | Repository topics | |
| | `forkee_created_at` | timestamp | Fork creation time | |
| |
| #### `creates`.CreateEvent |
| |
| Branch, tag, or repository creation. The `ref_type` field distinguishes between them. |
| |
| **Processing:** Direct mapping from `payload.*` fields. When `ref_type` is `"repository"`, the `ref` field is null and `description` contains the repo description. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `ref` | string | Ref name (branch/tag name, null for repos) | |
| | `ref_type` | string | `branch`, `tag`, or `repository` | |
| | `master_branch` | string | Default branch name | |
| | `description` | string | Repo description (repo creates only) | |
| | `pusher_type` | string | User type | |
| |
| #### `deletes`.DeleteEvent |
| |
| Branch or tag deletion. Repositories cannot be deleted via the Events API. |
| |
| **Processing:** Direct mapping from `payload.*` fields. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `ref` | string | Deleted ref name | |
| | `ref_type` | string | `branch` or `tag` | |
| | `pusher_type` | string | User type | |
| |
| #### `releases`.ReleaseEvent |
| |
| Release publication events. Contains the full release metadata including tag, release notes, and assets. |
| |
| **Processing:** Flattened from `payload.release.*`. Author info from `release.author` becomes `release_author_login`. Assets are a Parquet LIST of structs. Reactions flattened from `release.reactions.*`. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | published, edited, etc. | |
| | `release_id` | int64 | Release ID | |
| | `tag_name` | string | Git tag | |
| | `name` | string | Release title | |
| | `body` | string | Release notes (markdown) | |
| | `draft` | bool | Draft release | |
| | `prerelease` | bool | Pre-release | |
| | `release_created_at` | timestamp | Creation time | |
| | `release_published_at` | timestamp | Publication time | |
| | `release_author_login` | string | Author | |
| | `assets_count` | int32 | Number of assets | |
| | `assets` | list\<struct\> | Assets: `[{name, label, content_type, state, size, download_count}]` | |
| | `reactions_total` | int32 | Total reactions | |
| |
| #### `commit_comments`.CommitCommentEvent |
| |
| Comments on specific commits. Can be on a specific file and line, or on the commit as a whole. |
| |
| **Processing:** Flattened from `payload.comment.*`. When the comment is on a specific file, `path` and `line` are populated. Reactions flattened from `comment.reactions.*`. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `comment_id` | int64 | Comment ID | |
| | `commit_id` | string | Commit SHA | |
| | `comment_body` | string | Comment text | |
| | `path` | string | File path (line comments) | |
| | `line` | int32 | Line number | |
| | `position` | int32 | Diff position | |
| | `comment_user_login` | string | Author | |
| | `comment_created_at` | timestamp | Comment timestamp | |
| | `reactions_total` | int32 | Total reactions | |
| |
| #### `wiki_pages`.GollumEvent |
| |
| Wiki page creates and edits. A single `GollumEvent` can contain multiple page changes, so we emit **one row per page** (not per event). |
| |
| **Processing:** The `payload.pages` array is unpacked: each page in the array produces a separate row, all sharing the same event envelope. This means one GitHub event can generate multiple rows. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `page_name` | string | Page slug | |
| | `title` | string | Page title | |
| | `action` | string | `created` or `edited` | |
| | `sha` | string | Page revision SHA | |
| | `summary` | string | Edit summary | |
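The per-page unpacking can be illustrated on a hypothetical raw event (structure follows the Events API; values are illustrative):

```python
event = {  # hypothetical raw GollumEvent
    "id": "123",
    "actor_login": "mona",
    "repo_name": "octocat/wiki-demo",
    "payload": {"pages": [
        {"page_name": "Home", "action": "edited", "sha": "abc"},
        {"page_name": "FAQ", "action": "created", "sha": "def"},
    ]},
}

# One output row per page; every row shares the same event envelope.
rows = [
    {"event_id": event["id"], "actor_login": event["actor_login"],
     "repo_name": event["repo_name"], **page}
    for page in event["payload"]["pages"]
]
print(len(rows))  # → 2
```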
| |
| #### `members`.MemberEvent |
| |
| Collaborator additions to repositories. |
| |
| **Processing:** Flattened from `payload.member.*`. The actor is who added the member; the member fields describe who was added. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | `added` | |
| | `member_id` | int64 | Added user's ID | |
| | `member_login` | string | Added user's username | |
| | `member_type` | string | User type | |
| |
| #### `public_events`.PublicEvent |
| |
| Repository visibility changes from private to public. The simplest table, containing only the event envelope (who, which repo, when) with no additional payload columns. |
| |
| **Processing:** No payload fields are extracted. The event envelope alone captures the relevant information. |
| |
| #### `discussions`.DiscussionEvent |
| |
| GitHub Discussions lifecycle: created, answered, category_changed, labeled, and more. Includes category, answer status, and full discussion metadata. |
| |
| **Processing:** Flattened from `payload.discussion.*`. Category info from `discussion.category` becomes `category_name`/`category_slug`/`category_emoji`. Answer info becomes `answer_html_url`/`answer_chosen_at`. Labels are a Parquet LIST column. Reactions flattened from `discussion.reactions.*`. |
| |
| | Column | Type | Description | |
| |---|---|---| |
| | `action` | string | created, answered, category_changed, etc. | |
| | `discussion_number` | int32 | Discussion number | |
| | `title` | string | Discussion title | |
| | `body` | string | Discussion body (markdown) | |
| | `state` | string | Discussion state | |
| | `comments_count` | int32 | Comment count | |
| | `user_login` | string | Author | |
| | `category_name` | string | Category name | |
| | `category_slug` | string | Category slug | |
| | `discussion_created_at` | timestamp | When created | |
| | `answer_chosen_at` | timestamp | When answer was accepted (null if none) | |
| | `labels` | list\<string\> | Label names | |
| | `reactions_total` | int32 | Total reactions | |
| |
| |
| ## Per-table breakdown |
| |
| | Table | GitHub Event | Events | % | Description | |
| |-------|-------------|-------:|---:|-------------| |
| | `pushes` | PushEvent | 152,400,030 | 50.1% | Git pushes with commits | |
| | `issues` | IssuesEvent | 17,290,572 | 5.7% | Issue lifecycle events | |
| | `issue_comments` | IssueCommentEvent | 26,990,903 | 8.9% | Comments on issues/PRs | |
| | `pull_requests` | PullRequestEvent | 12,906,380 | 4.2% | PR lifecycle events | |
| | `pr_review_comments` | PullRequestReviewCommentEvent | 3,369,021 | 1.1% | Line-level PR comments | |
| | `stars` | WatchEvent | 28,074,327 | 9.2% | Repository stars | |
| | `forks` | ForkEvent | 10,646,202 | 3.5% | Repository forks | |
| | `creates` | CreateEvent | 38,232,637 | 12.6% | Branch/tag/repo creation | |
| | `deletes` | DeleteEvent | 4,426,487 | 1.5% | Branch/tag deletion | |
| | `releases` | ReleaseEvent | 529,438 | 0.2% | Release publications | |
| | `commit_comments` | CommitCommentEvent | 2,615,240 | 0.9% | Comments on commits | |
| | `wiki_pages` | GollumEvent | 4,450,303 | 1.5% | Wiki page edits | |
| | `members` | MemberEvent | 368,818 | 0.1% | Collaborator additions | |
| | `public_events` | PublicEvent | 286,945 | 0.1% | Repo made public | |
| |
| ## How it's built |
| |
| The pipeline has two modes that work together: |
| |
| **Archive mode** processes historical GH Archive hourly dumps in a single pass per file: download the `.json.gz`, decompress and parse each JSON line, route by event type to one of 16 handlers, flatten nested JSON into typed columns, write to Parquet with Zstd compression, and publish daily to Hugging Face. |
| |
| **Live mode** captures events directly from the GitHub Events API in near-real-time. Multiple API tokens poll concurrently with adaptive pagination (up to 300 events per cycle). Events are deduplicated by ID, bucketed into 5-minute blocks by their `created_at` timestamp, and written as Parquet files. Each block is pushed to Hugging Face immediately after writing. On each hour boundary, the corresponding GH Archive file is downloaded and merged into the typed daily tables for complete coverage. |
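The dedupe-and-bucket step can be sketched in a few lines. The production pipeline is written in Go; this in-memory Python sketch assumes ISO 8601 `created_at` strings and floor-to-boundary bucketing:

```python
from datetime import datetime

seen: set[str] = set()              # event IDs already ingested
blocks: dict[str, list[dict]] = {}  # 5-minute block key -> events

def ingest(event: dict) -> None:
    """Dedupe by event ID, then bucket by created_at into a 5-minute block."""
    if event["event_id"] in seen:
        return
    seen.add(event["event_id"])
    ts = datetime.fromisoformat(event["created_at"])
    key = f"{ts:%Y/%m/%d}/{ts.hour:02d}{ts.minute - ts.minute % 5:02d}"
    blocks.setdefault(key, []).append(event)

for e in [
    {"event_id": "1", "created_at": "2026-03-29T13:07:42+00:00"},
    {"event_id": "1", "created_at": "2026-03-29T13:07:42+00:00"},  # duplicate
    {"event_id": "2", "created_at": "2026-03-29T13:11:03+00:00"},
]:
    ingest(e)

print({k: len(v) for k, v in blocks.items()})
# → {'2026/03/29/1305': 1, '2026/03/29/1310': 1}
```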
| |
| All scalar fields are fully flattened into typed columns. Variable-length arrays (commits, labels, assets, topics, assignees) are stored as native Parquet LIST columns — no JSON strings. All `*_at` timestamp fields use the Parquet TIMESTAMP type (UTC microsecond precision), so DuckDB, pandas, Spark, and the Hugging Face viewer all read them as native datetimes. |
| |
| No events are filtered. Every public event captured by GH Archive appears in the corresponding table. Events with parse errors are logged and skipped (typically less than 0.01%). |
| |
| ## Known limitations |
| |
| - **Full coverage starts 2015-01-01.** Events from 2011-02-12 to 2014-12-31 are included but parsed from the deprecated Timeline API format, which has less detail for some event types. |
| - **Bot activity.** A significant fraction of events (especially pushes and issues) are generated by bots such as Dependabot, Renovate, and CI systems. No bot filtering is applied. |
| - **Event lag.** GH Archive captures events with a small delay (roughly minutes). Events during GitHub outages may be missing. |
| - **Pre-2015 limitations.** IssuesEvent and IssueCommentEvent from 2012–2014 contain only integer IDs (no title, body, or state) because the legacy Timeline API did not include full objects in event payloads. |
| |
| ## Personal information |
| |
| All data was already public on GitHub. Usernames, user IDs, and repository information are included as they appear in the GitHub Events API. Email addresses may appear in commit metadata within PushEvent payloads (from public git commit objects). No private repository data is present. |
| |
| ## License |
| |
| Released under the **[Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/)**. The underlying data is sourced from the public GitHub Events API via GH Archive. GitHub's Terms of Service apply to the original data. |
| |
| ## Credits |
| |
| - **[GH Archive](https://www.gharchive.org/)** by [Ilya Grigorik](https://www.igvita.com/), the foundational project that has recorded every public GitHub event since 2011 |
| - **[GitHub Events API](https://docs.github.com/en/rest/activity/events)**, the source data stream |
| - Built with [Apache Parquet](https://parquet.apache.org/) (Go), published via the [Hugging Face Hub](https://huggingface.co/) |
| |
| ## Contact |
| |
| Questions, feedback, or issues? Open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-github/discussions). |
| |