---
dataset_name: CodeReality
pretty_name: 'CodeReality: Deliberately Noisy Code Dataset'
tags:
  - code
  - bigcode
  - software-engineering
  - robustness
  - noisy-dataset
  - evaluation
size_categories:
  - 1TB<n<10TB
task_categories:
  - text-generation
  - text-classification
  - text-retrieval
  - other
language:
  - en
license: other
configs:
  - config_name: default
    data_files:
      - eval/subset/*.jsonl
---

CodeReality: Large-Scale Deliberately Noisy Code Dataset


⚠️ Important Limitations

⚠️ Not Enterprise-Ready: This dataset is deliberately noisy and designed for research only. It contains mixed/unknown licenses, possible secrets, potential security vulnerabilities, duplicate code, and experimental repositories, and it requires substantial preprocessing for production use.

Use at your own risk - this is a research dataset for robustness testing and data curation method development.

Overview

CodeReality-1T is a large-scale, deliberately noisy code repository dataset designed for robust AI research. It contains 397,475 repositories across 21 detected programming languages in 3.05 TB of uncompressed data, curated specifically to test robustness, data curation methods, and real-world code understanding.

Key Features

  • Complete Coverage: 100% analysis of all 397,475 repositories (no sampling)
  • BigCode Compliant: Meets all community standards for transparency and reproducibility
  • Deliberately Noisy: Includes duplicates, incomplete code, and experimental projects
  • Rich Metadata: Enhanced Blueprint metadata with cross-domain classification
  • Professional Grade: 63.7-hour comprehensive analysis with open source tools

Quick Start

Dataset Structure

codereality-1t/
├── data/                    # Main dataset location reference
│   ├── README.md           # Data access instructions
│   └── manifest.json      # Integrity verification
├── analysis/               # Analysis results
│   ├── dataset_index.json  # Complete file index (29MB)
│   └── metrics.json        # Comprehensive analysis results
├── docs/                   # Documentation
│   ├── DATASET_CARD.md     # Comprehensive dataset card
│   └── LICENSE.md          # Licensing information
└── eval/                   # Evaluation subset (in progress)
    └── subset/             # Curated 15GB research subset

Loading the Dataset

import json
import os

# Load dataset index
with open('analysis/dataset_index.json', 'r') as f:
    index = json.load(f)

print(f"Files: {len(index['files'])}")
print(f"Repositories: {sum(f['repository_count'] for f in index['files'])}")

# Access data files (data_dir points at the local copy of the unified dataset)
data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
for file_info in index['files'][:5]:  # First 5 files
    file_path = os.path.join(data_dir, file_info['path'])
    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo_data = json.loads(line)
            print(f"Repository: {repo_data.get('name', 'Unknown')}")
            break  # Just first repo from each file
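Because the archives are deliberately noisy, individual JSONL lines can be malformed. A minimal defensive reader is a useful companion to the snippet above (`iter_repos` is a hypothetical helper sketched here, not part of the dataset tooling):

```python
import json

def iter_repos(path, max_errors=None):
    """Yield repository records from a JSONL archive, skipping malformed lines."""
    errors = 0
    with open(path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                errors += 1
                if max_errors is not None and errors > max_errors:
                    raise
```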

Dataset Statistics

Scale

  • Total Repositories: 397,475
  • Total Files: 52,692 JSONL archives
  • Total Size: 3.05 TB uncompressed
  • Languages Detected: 21
  • Analysis Coverage: 100% (no sampling)

Language Distribution (Top 10)

| Language   | Repositories | Percentage |
|------------|--------------|------------|
| Unknown    | 389,941      | 98.1%      |
| Python     | 4,738        | 1.2%       |
| Shell      | 4,505        | 1.1%       |
| C          | 3,969        | 1.0%       |
| C++        | 3,339        | 0.8%       |
| HTML       | 2,487        | 0.6%       |
| JavaScript | 2,394        | 0.6%       |
| Go         | 2,110        | 0.5%       |
| Java       | 2,026        | 0.5%       |
| CSS        | 1,655        | 0.4%       |

Duplicate Analysis

  • Exact Duplicates: 0% exact SHA256 duplicates detected across file-level content
  • Semantic Duplicates: ~18% estimated semantic duplicates and forks, preserved by design
  • Research Value: duplicates intentionally maintained for real-world code distribution studies
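A duplicate check of this kind can be sketched with content hashing (illustrative only; the actual analysis pipeline is not published in this README):

```python
import hashlib

def content_digest(text):
    # Normalize line endings so platform differences don't mask exact duplicates
    normalized = text.replace('\r\n', '\n')
    return hashlib.sha256(normalized.encode('utf-8')).hexdigest()

def find_exact_duplicates(files):
    """Map SHA256 digest -> file ids that share byte-identical content."""
    seen = {}
    for file_id, text in files.items():
        seen.setdefault(content_digest(text), []).append(file_id)
    return {d: ids for d, ids in seen.items() if len(ids) > 1}
```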

License Analysis

  • License Detection: 0% detection rate (a design decision for noisy-dataset research)
  • Unknown Licenses: 96.4% of repositories marked as "Unknown" by design
  • Research Purpose: preserved to test license detection systems and curation methods
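One way to exercise this as a research target is naive marker-based license detection. This is a deliberately simple sketch (real detectors such as scancode-toolkit are far more thorough):

```python
# Marker phrases that must all appear for a match (illustrative, not exhaustive)
LICENSE_MARKERS = {
    'MIT': ['mit license', 'permission is hereby granted, free of charge'],
    'Apache-2.0': ['apache license', 'version 2.0'],
    'GPL-3.0': ['gnu general public license', 'version 3'],
}

def detect_license(text):
    """Return an SPDX-style id if all markers for a license appear, else 'Unknown'."""
    lowered = text.lower()
    for spdx, markers in LICENSE_MARKERS.items():
        if all(m in lowered for m in markers):
            return spdx
    return 'Unknown'
```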

Security Analysis

⚠️ Security Warning: Dataset contains potential secrets

  • Password patterns: 1,231,942 occurrences
  • Token patterns: 353,266 occurrences
  • Secret patterns: 71,778 occurrences
  • API key patterns: 4,899 occurrences
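Counts like these typically come from pattern scanning. The exact patterns behind the numbers above are not specified, so the regexes below are illustrative only:

```python
import re

# Hypothetical patterns for the categories reported above
SECRET_PATTERNS = {
    'password': re.compile(r'password\s*[:=]\s*\S+', re.IGNORECASE),
    'token':    re.compile(r'token\s*[:=]\s*\S+', re.IGNORECASE),
    'api_key':  re.compile(r'api[_-]?key\s*[:=]\s*\S+', re.IGNORECASE),
}

def scan_secrets(text):
    """Count occurrences of each secret-like pattern in a blob of text."""
    return {name: len(p.findall(text)) for name, p in SECRET_PATTERNS.items()}
```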

Research Applications

Primary Use Cases

  1. Code LLM Robustness: Testing model performance on noisy, real-world data
  2. Data Curation Research: Developing automated filtering and cleaning methods
  3. License Detection: Training and evaluating license classification systems
  4. Bug-Fix Studies: Before/after commit analysis for automated debugging
  5. Cross-Language Analysis: Multi-language repository understanding

Evaluation Subset & Benchmarks

A curated 19GB evaluation subset is now available for standardized benchmarks:

  • 323 files containing 2,049 repositories
  • Research value scoring with diversity sampling
  • Cross-language implementations and multi-repo analysis
  • Complete build system configurations
  • Enhanced metadata with commit history and issue tracking
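Diversity sampling of this kind can be approximated by stratifying on repository metadata. A sketch, assuming per-repository language labels (the actual curation script and research-value scoring function are not published in this README):

```python
import random

def stratified_sample(repos, key, per_group, seed=0):
    """Sample up to per_group repositories from each group (e.g. language)
    to build a diverse subset. Illustrative, not the actual curation script."""
    random.seed(seed)
    groups = {}
    for repo in repos:
        groups.setdefault(repo.get(key, 'Unknown'), []).append(repo)
    subset = []
    for members in groups.values():
        subset.extend(random.sample(members, min(per_group, len(members))))
    return subset
```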

Demonstration Benchmarks available in eval/benchmarks/:

  • License Detection: Automated license classification evaluation
  • Code Completion: Pass@k metrics for code generation models
  • Extensible Framework: Easy to add new evaluation tasks
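Pass@k is conventionally computed with the unbiased estimator from the Codex paper (Chen et al., 2021). Whether the bundled benchmark uses exactly this form is not stated here, so treat this as a reference sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k completions
    sampled (without replacement) from n generated, c correct, passes."""
    if n - c < k:
        return 1.0  # every size-k sample must contain a correct completion
    return 1.0 - comb(n - c, k) / comb(n, k)
```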

Benchmarks & Results

📊 Baseline Performance

Demonstration benchmark results are available in eval/results/.

🏃 Quick Start Benchmarking

cd eval/benchmarks
python3 license_detection_benchmark.py    # License classification
python3 code_completion_benchmark.py      # Code generation Pass@k

Note: These are demonstration baselines, not production-ready models. Results show the expected challenges of deliberately noisy data.


Usage Guidelines

✅ Recommended Uses

  • Academic research and education
  • Robustness testing of code models
  • Development of data curation methods
  • License detection research
  • Security pattern analysis

❌ Important Limitations

  • No Commercial Use without individual license verification
  • Research Only: Many repositories have unknown licensing
  • Security Risk: Contains potential secrets and vulnerabilities
  • Deliberately Noisy: Requires preprocessing for most applications

Documentation

| Document     | Description                              |
|--------------|------------------------------------------|
| Dataset Card | Comprehensive dataset documentation      |
| License      | Licensing terms and legal considerations |
| Data README  | Data access and usage instructions       |

Verification

Verify dataset integrity:

# Check file counts
python3 -c "
import json
with open('analysis/dataset_index.json', 'r') as f:
    idx = json.load(f)
    print(f'Files: {len(idx[\"files\"])}')
    print(f'Repositories: {sum(f[\"repository_count\"] for f in idx[\"files\"])}')
"

# Expected output:
# Files: 52692
# Repositories: 397475
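The manifest.json file can back a deeper integrity check. A sketch assuming a simple `{'files': [{'path': ..., 'sha256': ...}]}` schema (the actual manifest format may differ):

```python
import hashlib
import json
import os

def verify_manifest(manifest_path, root='.'):
    """Return the paths whose SHA256 digest does not match the manifest.
    Assumes a {'files': [{'path': ..., 'sha256': ...}]} schema (hypothetical)."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    mismatches = []
    for entry in manifest['files']:
        h = hashlib.sha256()
        with open(os.path.join(root, entry['path']), 'rb') as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b''):  # stream 1 MiB at a time
                h.update(chunk)
        if h.hexdigest() != entry['sha256']:
            mismatches.append(entry['path'])
    return mismatches
```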

Citation

@misc{codereality2025,
  title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/vinsblack}},
  note={Version 1.0.0}
}

Community Contributions

We welcome community contributions to improve CodeReality-1T:

🛠️ Data Curation Scripts

  • Contribute filtering and cleaning scripts for the noisy dataset
  • Share deduplication algorithms and quality improvement tools
  • Submit license detection and classification improvements

📊 New Benchmarks

  • Add evaluation tasks beyond license detection and code completion
  • Contribute cross-language analysis benchmarks
  • Share bug detection and security analysis evaluations

📈 Future Versions

  • v1.1.0: Enhanced evaluation subset with community feedback
  • v1.2.0: Improved license detection and filtering tools
  • v2.0.0: Community-curated clean variant with quality filters

🤝 How to Contribute

Community contributions are actively welcomed and encouraged! Help improve the largest deliberately noisy code dataset.

🎯 Priority Contribution Areas:

  • Data Curation: Cleaning scripts, deduplication algorithms, quality filters
  • Benchmarks: New evaluation tasks, improved baselines, framework implementations
  • Analysis Tools: Visualization, statistics, metadata enhancement
  • Documentation: Usage examples, tutorials, case studies

📋 Contribution Process:

  1. Check Issues for current needs and coordination
  2. Create feature branch for your contribution
  3. Submit pull request with detailed description and testing
  4. Engage in community review and discussions

💡 Join the Community: Share your research, tools, and insights using CodeReality-1T!

Support