---
license: other
license_name: permissive-mixed
license_link: LICENSE
task_categories:
- text-generation
- fill-mask
- feature-extraction
language:
- en
tags:
- code
- github
- ai-training
- llm
- fine-tuning
- code-generation
- python
- javascript
- typescript
- rust
- go
- bigcode-standard
- stack-v2-methodology
- commercial-safe
- pii-scrubbed
- license-audited
pretty_name: HSH Intelligence — GitHub Code AI Training Corpus (5K Sample)
size_categories:
- 1K<n<10K
---

# HSH Intelligence — GitHub Code AI Training Corpus

**5,000-record sample of the HSH Intelligence GitHub Code AI Training Corpus.**

A curated, production-grade sample of source code from top-tier public GitHub repositories — engineered for large language model training, fine-tuning, and code understanding research.

The full corpus contains **5.6 TB** of source code (**211 million+ files**, **7.05 billion lines**) across **14 production languages**.

---

## 10/10 Quality Checks

This sample passes all 10 industry-standard quality checks following **BigCode / The Stack v2** production methodology.

| # | Check | Tool | Result |
|---|---|---|---|
| 1 | License compliance | scancode-toolkit 32.5.0 | 0% copyleft |
| 2 | Secret detection | gitleaks 8.18.4 | 0 leaks |
| 3 | Near-duplicate removal | MinHash LSH (256-perm, 5-gram, 0.9 threshold) | 0% duplicates |
| 4 | Code complexity | radon 6.0.1 | 3.92 avg cyclomatic |
| 5 | Token diversity | tiktoken cl100k_base (GPT-4) | 63,712 unique tokens |
| 6 | Statistical balance | Custom audit | 1K per language |
| 7 | Benchmark contamination | vs HumanEval (164) + MBPP (500) | 0 matches |
| 8 | PII beyond secrets | Custom regex + Luhn validation | 0 real PII |
| 9 | Syntax validation | Babel parser, syn 2.0, tsc, ast, gofmt | 98.0% parseable |
| 10 | Repo legitimacy | GitHub REST API verification | 100% verified |

**Reference:** Methodology follows [BigCode / The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2) production standards.

Full audit certificate: [`QUALITY_CERTIFICATE.json`](./QUALITY_CERTIFICATE.json)
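
As a concrete illustration of check 3, below is a minimal sketch of near-duplicate detection with MinHash LSH at the settings quoted in the table (256 permutations, 5-gram shingles, 0.9 threshold), using the `datasketch` library named under Methodology; the sample records are hypothetical stand-ins for the `code` field.

```python
# Minimal sketch of check 3: near-duplicate detection with MinHash LSH
# (256 permutations, 5-gram shingles, 0.9 Jaccard threshold).
# The records below are hypothetical stand-ins for real `code` values.
from datasketch import MinHash, MinHashLSH

def shingles(text, n=5):
    """Yield n-gram word shingles from source text."""
    words = text.split()
    for i in range(max(len(words) - n + 1, 1)):
        yield " ".join(words[i:i + n])

def minhash(text, num_perm=256):
    m = MinHash(num_perm=num_perm)
    for shingle in shingles(text):
        m.update(shingle.encode("utf-8"))
    return m

base = "def fib(n):\n    a, b = 0, 1\n    for _ in range(n):\n        a, b = b, a + b\n    return a\n"
records = {
    "rec-1": base,
    "rec-2": base + "#memo",  # one extra token (~0.94 Jaccard): usually flagged
    "rec-3": "class Stack:\n    def push(self, item):\n        self.items.append(item)\n",
}

lsh = MinHashLSH(threshold=0.9, num_perm=256)
for key, code in records.items():
    m = minhash(code)
    near_dupes = lsh.query(m)   # previously indexed keys above the threshold
    if near_dupes:
        print(f"{key} is a near-duplicate of {near_dupes}; dropped")
    else:
        lsh.insert(key, m)      # keep only the first copy seen
```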

---

## Sample Specifications

| Metric | Value |
|---|---|
| Records | 5,000 (curated subset) |
| Languages | 5 (Python, JavaScript, TypeScript, Go, Rust) |
| Records per language | 1,000 (perfectly balanced) |
| Unique repositories | 1,499 verified active on GitHub |
| Format | Apache Parquet (zstd compression) + CSV |
| Schema | 19 fields per record |
| Size | 13.4 MB (Parquet) / 49.9 MB (CSV) |
| License coverage | 100% commercial-safe (MIT, Apache-2.0, BSD, ISC) |
| PII status | Fully scrubbed (zero secrets, emails, IPs, SSNs) |
| Syntax validation | 98.0% parseable (industry standard: ≥ 95%) |

### Repository Quality

- 56.1% from repos with 10,000+ GitHub stars
- 6.1% archived repos (still valid, just not actively maintained)
- 0.0% deleted repos
- Top repos include: `facebook/react`, `ollama/ollama`, `django/django`, `AUTOMATIC1111/stable-diffusion-webui`

---

## Full Corpus Specifications

| Metric | Value |
|---|---|
| Total dataset size | 5.6 TB (raw) / 391 GB (Parquet, compressed) |
| Total records | 211 million+ code files |
| Total lines of code | 7.05 billion |
| Unique repositories | 3,710+ permissive-license repos |
| Programming languages | 14 production languages |
| Updates | Daily incremental |

**Languages covered:** Python, JavaScript, TypeScript, Go, Rust, Java, C++, Ruby, Swift, Kotlin, PHP, C#, Scala, Solidity

---

## License Coverage (Commercial-Safe Only)

| License | Status | Notes |
|---|---|---|
| MIT | INCLUDED | Most permissive |
| Apache-2.0 | INCLUDED | Permissive with patent grant |
| BSD-2-Clause | INCLUDED | Permissive |
| BSD-3-Clause | INCLUDED | Permissive |
| ISC | INCLUDED | Permissive |
| GPL-2.0 / GPL-3.0 | EXCLUDED | Copyleft |
| AGPL-3.0 | EXCLUDED | Strong copyleft |
| LGPL-2.1 / LGPL-3.0 | EXCLUDED | Copyleft |
| No license / Proprietary | EXCLUDED | Default copyright |

License detection performed using **scancode-toolkit 32.5.0** with per-file SPDX classification.
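
Because every record carries its SPDX identifier in the `license` field (see the schema below), the commercial-safe guarantee can be re-verified downstream. A minimal sketch, assuming the `license` values match the SPDX identifiers listed above:

```python
# Minimal sketch: re-check that every record's SPDX identifier is on the
# commercial-safe allowlist above. Assumes `license` holds plain SPDX ids.
from datasets import load_dataset

ALLOWED = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC"}

ds = load_dataset("HSH-Intelligence/github-code-corpus-sample", split="train")
unexpected = set(ds["license"]) - ALLOWED
print("Unexpected licenses:", sorted(unexpected) or "none")
```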

---

## Schema (19 Fields)

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique record identifier (sha256-prefixed) |
| `language` | string | Detected programming language |
| `repo_owner` | string | GitHub username or organization |
| `repo_name` | string | Repository name |
| `repo_stars` | integer | GitHub star count |
| `repo_forks` | integer | GitHub fork count |
| `repo_description` | string | Repository description |
| `repo_topics` | list[string] | GitHub repo topics |
| `license` | string | SPDX license identifier |
| `file_path` | string | Relative path within repo |
| `file_name` | string | Filename with extension |
| `file_size` | integer | File size in bytes |
| `code` | string | Raw source code content (PII-scrubbed) |
| `word_count` | integer | Total word count |
| `char_count` | integer | Character count |
| `line_count` | integer | Total lines of code |
| `data_quality_score` | float | Composite quality score (0.0–1.0) |
| `timestamp` | timestamp | Record creation timestamp |
| `scrubbed` | boolean | PII scrubbing flag (always `True`) |
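
To check the 19-field layout against the table above without loading any data, the Parquet footer can be inspected directly. A minimal sketch with pyarrow, assuming the Parquet file has been downloaded locally under the name used in the pandas example below:

```python
# Minimal sketch: inspect only the Parquet footer to compare field names
# and types against the 19-field schema table above (no data is loaded).
import pyarrow.parquet as pq

schema = pq.read_schema("github_code_sample_5000.parquet")
print(f"{len(schema.names)} fields")              # expected: 19
for name, dtype in zip(schema.names, schema.types):
    print(f"{name}: {dtype}")
```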

---

## Quick Start

### Load with Hugging Face Datasets

```python
from datasets import load_dataset

ds = load_dataset("HSH-Intelligence/github-code-corpus-sample")
print(ds)
print(ds["train"][0])

# Filter to high-quality Python only
python_only = ds["train"].filter(
    lambda x: x["language"] == "Python" and x["data_quality_score"] >= 0.95
)
print(f"High-quality Python records: {len(python_only)}")
```

### Load directly with pandas

```python
import pandas as pd

df = pd.read_parquet(
    "hf://datasets/HSH-Intelligence/github-code-corpus-sample/github_code_sample_5000.parquet"
)
print(df.head())
print(f"Total records: {len(df):,}")
print(f"Languages: {df['language'].value_counts()}")
print(f"Top repos: {df['repo_name'].value_counts().head(10)}")
```

---

## Live API Demo

Try the full corpus via the live API sandbox (no signup required):

```bash
curl -H "X-API-Key: demo-key-12345" \
  "https://api.hshintelligence.com/api/v1/github-code-corpus?language=Rust&license=MIT&page_size=5"
```

Returns real Parquet records with full metadata: code, license, repo stars, quality score, and commit history. The free tier is limited to 2 files (~18 records); the full corpus is delivered via a Backblaze B2 download link after purchase.
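
The same request can be issued programmatically. A minimal Python equivalent of the curl call above, reusing its endpoint, demo key, and query parameters (the JSON response shape is not documented here, so the raw payload is printed):

```python
# Minimal sketch: the curl demo above, issued from Python with requests.
# Endpoint, demo key, and parameters are copied from the example; the
# response field names are not documented here, so the raw JSON is printed.
import requests

resp = requests.get(
    "https://api.hshintelligence.com/api/v1/github-code-corpus",
    headers={"X-API-Key": "demo-key-12345"},
    params={"language": "Rust", "license": "MIT", "page_size": 5},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```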

**API documentation:** Use the [interactive docs](https://api.hshintelligence.com/docs) — click any endpoint, click "Try it out", paste the demo key, and run live queries.

**Live endpoint:** https://api.hshintelligence.com/api/v1/github-code-corpus

Or run the interactive Google Colab notebook:
https://links.hshintelligence.com/github-demo

---

## Use Cases

- **LLM pre-training** — multi-language code corpus for foundation models
- **Code completion fine-tuning** — Copilot-style models
- **Code search and retrieval** — embedding training
- **Code understanding research** — academic benchmarks
- **Vertical AI** — domain-specific code assistants
- **Benchmark-safe evaluation** — zero contamination vs HumanEval/MBPP

---

## Why This Corpus

| vs. Alternative | HSH Intelligence Edge |
|---|---|
| The Stack v2 | Per-file license audit + provenance trail + 10-check quality verification |
| Common Crawl code | Pre-filtered, deduplicated, syntax-validated, PII-scrubbed |
| Custom GitHub scraping | Saves 4+ months of engineering work |
| Internal datasets | EU AI Act Article 10 compliance ready |
| Generic samples | Industry-standard 10/10 quality checks documented |

---

## Compliance & Provenance

- **EU AI Act Article 10** ready (training data governance)
- **GDPR** safe (zero PII verified)
- **CCPA** safe (no California resident data)
- **HIPAA** considerations addressed (no medical data)
- Per-record license audit trail
- Source attribution retained (`repo_owner`, `repo_name`)
- Quality scoring per record
- Zero PII (emails, phones, IPs, SSNs, credit cards verified)
- Zero secrets (API keys, tokens, credentials verified via gitleaks)
- Zero benchmark contamination (HumanEval, MBPP verified)

---

## Methodology

This dataset follows **BigCode / The Stack v2** production methodology with additional quality gates.

### Tools Used

| Category | Tools |
|---|---|
| License detection | scancode-toolkit |
| Secret scanning | gitleaks |
| Deduplication | datasketch MinHash LSH |
| Complexity analysis | radon |
| Tokenization | tiktoken (cl100k_base) |
| Syntax validation | Babel parser, syn 2.0, tsc, Python ast, gofmt |
| Repo verification | GitHub REST API v3 |
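
As an illustration of how check 9 applies to one of the five sample languages, the sketch below validates a record's `code` field with Python's built-in `ast` module, the validator the table lists for Python; the snippets are illustrative:

```python
# Minimal sketch of check 9 for Python records: a file counts as
# "parseable" if the built-in ast module accepts it without SyntaxError.
import ast

def is_parseable(source: str) -> bool:
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(is_parseable("def add(x, y):\n    return x + y"))   # True
print(is_parseable("def add(x, y) return x + y"))          # False (missing colon)
```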

### Quality Thresholds

- License compliance: < 0.1% copyleft (achieved: 0%)
- Secret leaks: 0 tolerance (achieved: 0)
- Near-duplicates: < 5% (achieved: 0%)
- PII: 0 tolerance (achieved: 0)
- Syntax validation: ≥ 95% parseable (achieved: 98%)
- Repo legitimacy: < 1% deleted (achieved: 0%)
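
The contamination gate (check 7) can be spot-checked against the public benchmarks. A minimal sketch using exact-substring matching against the HumanEval prompts (`openai_humaneval` is assumed as the Hugging Face Hub id; exact matching is a simplification, not necessarily the audit's own procedure):

```python
# Minimal sketch of check 7: flag any record whose code contains a
# HumanEval prompt verbatim. Exact-substring matching is a simplification
# of a full contamination audit; `openai_humaneval` is the assumed Hub id.
from datasets import load_dataset

corpus = load_dataset("HSH-Intelligence/github-code-corpus-sample", split="train")
humaneval = load_dataset("openai_humaneval", split="test")   # 164 problems

prompts = [p.strip() for p in humaneval["prompt"]]
hits = [rec["id"] for rec in corpus if any(p in rec["code"] for p in prompts)]
print(f"Contaminated records: {len(hits)}")   # expected: 0
```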

Full quality certificate: [`QUALITY_CERTIFICATE.json`](./QUALITY_CERTIFICATE.json)

---

## Full Corpus Access

This is a **5,000-record evaluation sample**. The full corpus is available via commercial license:

| Tier | Records | Languages | Format |
|---|---|---|---|
| Sample (this dataset) | 5,000 | 5 | Parquet + CSV |
| Standard | 10M+ | 14 | Parquet |
| Enterprise | 211M+ (full) | 14 | Parquet (+JSONL on request) |

**Delivery options:**
- Cloud signed URL (Backblaze B2, AWS S3)
- Cross-cloud transfer (AWS, GCP, Azure)
- SFTP delivery for on-prem
- Daily incremental updates (Enterprise tier)

**Custom subsets available:** Filter by language, license, repo stars, complexity, or quality threshold.

**Licensing:** 1-year non-exclusive commercial license.

---

## Contact

- **Email:** sales@healingsunhaven.com
- **Website:** https://www.hshintelligence.com
- **Live API:** https://api.hshintelligence.com
- **Documentation:** https://links.hshintelligence.com/github-docs
- **Demo Colab:** https://links.hshintelligence.com/github-demo

---

## About HSH Intelligence

**HSH Intelligence** is the Data Division of **Healing Sun Haven LLC**, building production-grade AI training datasets and B2B intelligence products.

We engineer datasets across AI training, B2B intelligence, and decision support — purpose-built for frontier AI labs and enterprise teams who demand industry-standard quality verification.

---

*This dataset is provided for evaluation purposes. The full 5.6 TB corpus is available under commercial license. Quality audit certificate, license documentation, and provenance trail included with all enterprise contracts.*

Audit date: 2026-05-07 | Methodology reference: BigCode / The Stack v2 | Full quality report: `QUALITY_CERTIFICATE.json`