---
license: other
license_name: permissive-mixed
license_link: LICENSE
task_categories:
- text-generation
- fill-mask
- feature-extraction
language:
- en
tags:
- code
- github
- ai-training
- llm
- fine-tuning
- code-generation
- python
- javascript
- typescript
- rust
- go
- bigcode-standard
- stack-v2-methodology
- commercial-safe
- pii-scrubbed
- license-audited
pretty_name: HSH Intelligence — GitHub Code AI Training Corpus (5K Sample)
size_categories:
- 1K<n<10K
---

# HSH Intelligence — GitHub Code AI Training Corpus (5K Sample)

A 5,000-record, license-audited evaluation sample of a multi-language GitHub code corpus (Python, JavaScript, TypeScript, Rust, Go), built with BigCode / The Stack v2 methodology.

## How to Use

### Load with the `datasets` library

```python
from datasets import load_dataset

ds = load_dataset("HSH-Intelligence/github-code-corpus-sample", split="train")

# Filter to high-quality Python records
python_only = ds.filter(
    lambda x: x["language"] == "Python" and x["quality_score"] >= 0.95
)
print(f"High-quality Python records: {len(python_only)}")
```

### Load directly with pandas

```python
import pandas as pd

df = pd.read_parquet(
    "hf://datasets/HSH-Intelligence/github-code-corpus-sample/github_code_sample_5000.parquet"
)
print(df.head())
print(f"Total records: {len(df):,}")
print(f"Languages: {df['language'].value_counts()}")
print(f"Top repos: {df['repo_name'].value_counts().head(10)}")
```

---

## Live API Demo

Try the full corpus via the live API sandbox (no signup required):

```bash
curl -H "X-API-Key: demo-key-12345" \
  "https://api.hshintelligence.com/api/v1/github-code-corpus?language=Rust&license=MIT&page_size=5"
```

Returns real Parquet records with full metadata: code, license, repo stars, quality score, commit history. The free tier is limited to 2 files (~18 records); the full corpus is delivered via a Backblaze B2 download link after purchase.

**API documentation:** Use the [interactive docs](https://api.hshintelligence.com/docs) — click any endpoint, click "Try it out", paste the demo key, and run live queries.
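The same sandbox query can be issued from Python with only the standard library (endpoint, demo key, and parameters taken from the curl example above); response handling is omitted here since the payload shape is not documented in this card:

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://api.hshintelligence.com/api/v1/github-code-corpus"

# Same query as the curl example above.
params = {"language": "Rust", "license": "MIT", "page_size": 5}
url = f"{BASE}?{urlencode(params)}"
req = Request(url, headers={"X-API-Key": "demo-key-12345"})

print(req.full_url)
# https://api.hshintelligence.com/api/v1/github-code-corpus?language=Rust&license=MIT&page_size=5
# urllib.request.urlopen(req) would perform the live call
```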
**Live endpoint:** https://api.hshintelligence.com/api/v1/github-code-corpus

Or run the interactive Google Colab notebook: https://links.hshintelligence.com/github-demo

---

## Use Cases

- **LLM pre-training** — multi-language code corpus for foundation models
- **Code completion fine-tuning** — Copilot-style models
- **Code search and retrieval** — embedding training
- **Code understanding research** — academic benchmarks
- **Vertical AI** — domain-specific code assistants
- **Benchmark-safe evaluation** — zero contamination vs HumanEval/MBPP

---

## Why This Corpus

| vs. Alternative | HSH Intelligence Edge |
|---|---|
| The Stack v2 | Per-file license audit + provenance trail + 10-check quality verification |
| Common Crawl code | Pre-filtered, deduplicated, syntax-validated, PII-scrubbed |
| Custom GitHub scraping | Saves 4+ months of engineering work |
| Internal datasets | EU AI Act Article 10 compliance ready |
| Generic samples | Industry-standard 10/10 quality checks documented |

---

## Compliance & Provenance

- **EU AI Act Article 10** ready (training data governance)
- **GDPR** safe (zero PII verified)
- **CCPA** safe (no California resident data)
- **HIPAA** considerations addressed (no medical data)
- Per-record license audit trail
- Source attribution retained (`repo_owner`, `repo_name`)
- Quality scoring per record
- Zero PII (emails, phones, IPs, SSNs, credit cards verified)
- Zero secrets (API keys, tokens, credentials verified via gitleaks)
- Zero benchmark contamination (HumanEval, MBPP verified)

---

## Methodology

This dataset follows the **BigCode / The Stack v2** production methodology with additional quality gates.
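One of those gates, syntax validation, can be sketched for the Python subset using the standard-library `ast` parser (the helper name is illustrative; the other languages use the parsers listed in the tools table):

```python
import ast


def is_parseable(source: str) -> bool:
    """Return True if the record's source code parses as valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


# Two toy records: one valid, one with a syntax error.
records = [
    "def add(a, b):\n    return a + b\n",
    "def broken(:\n    pass\n",
]
print([is_parseable(r) for r in records])  # [True, False]
```

A corpus-level gate like "≥ 95% parseable" is then just the mean of this check over all records of a given language.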
### Tools Used

| Category | Tools |
|---|---|
| License detection | scancode-toolkit |
| Secret scanning | gitleaks |
| Deduplication | datasketch MinHash LSH |
| Complexity analysis | radon |
| Tokenization | tiktoken (cl100k_base) |
| Syntax validation | Babel parser, syn 2.0, tsc, Python ast, gofmt |
| Repo verification | GitHub REST API v3 |

### Quality Thresholds

| Check | Threshold | Achieved |
|---|---|---|
| License compliance (copyleft share) | < 0.1% | 0% |
| Secret leaks | 0 tolerance | 0 |
| Near-duplicates | < 5% | 0% |
| PII | 0 tolerance | 0 |
| Syntax validation (parseable) | ≥ 95% | 98% |
| Repo legitimacy (deleted repos) | < 1% | 0% |

Full quality certificate: [`QUALITY_CERTIFICATE.json`](./QUALITY_CERTIFICATE.json)

---

## Full Corpus Access

This is a **5,000-record evaluation sample**. The full corpus is available under a commercial license:

| Tier | Records | Languages | Format |
|---|---|---|---|
| Sample (this dataset) | 5,000 | 5 | Parquet + CSV |
| Standard | 10M+ | 14 | Parquet |
| Enterprise | 211M+ (full) | 14 | Parquet (+JSONL on request) |

**Delivery options:**

- Cloud signed URL (Backblaze B2, AWS S3)
- Cross-cloud transfer (AWS, GCP, Azure)
- SFTP delivery for on-prem
- Daily incremental updates (Enterprise tier)

**Custom subsets available:** filter by language, license, repo stars, complexity, or quality threshold.

**Licensing:** 1-year non-exclusive commercial license.

---

## Contact

- **Email:** sales@healingsunhaven.com
- **Website:** https://www.hshintelligence.com
- **Live API:** https://api.hshintelligence.com
- **Documentation:** https://links.hshintelligence.com/github-docs
- **Demo Colab:** https://links.hshintelligence.com/github-demo

---

## About HSH Intelligence

**HSH Intelligence** is the Data Division of **Healing Sun Haven LLC**, building production-grade AI training datasets and B2B intelligence products.
We engineer datasets across AI training, B2B intelligence, and decision-support — purpose-built for frontier AI labs and enterprise teams who demand industry-standard quality verification.

---

*This dataset is provided for evaluation purposes. The full 5.6 TB corpus is available under commercial license. Quality audit certificate, license documentation, and provenance trail included with all enterprise contracts.*

Audit date: 2026-05-07 | Methodology reference: BigCode / The Stack v2 | Full quality report: QUALITY_CERTIFICATE.json