---
dataset_name: "CodeReality-EvalSubset"
pretty_name: "CodeReality: Evaluation Subset - Deliberately Noisy Code Dataset"
tags:
- code
- software-engineering
- robustness
- noisy-dataset
- evaluation-subset
- research-dataset
- code-understanding
size_categories:
- 10GB<n<100GB
task_categories:
- text-generation
- text-classification
- text-retrieval
- fill-mask
- other
language:
- en
- code
license: other
configs:
- config_name: default
data_files: "data_csv/*.csv"
---
# CodeReality: Evaluation Subset - Deliberately Noisy Code Dataset

## ⚠️ Important Limitations
> **⚠️ Not Enterprise-Ready**: This dataset is deliberately noisy and designed for research only. Contains mixed/unknown licenses, possible secrets, potential security vulnerabilities, duplicate code, and experimental repositories. **Requires substantial preprocessing for production use.**
>
> **Use at your own risk** - this is a research dataset for robustness testing and data curation method development.
## Overview
**CodeReality Evaluation Subset** is a curated research subset extracted from the complete CodeReality dataset (3.05TB, 397,475 repositories). This subset contains **2,049 repositories** in **19GB** of data, specifically selected for standardized evaluation and benchmarking of code understanding models on deliberately noisy data.
For access to the complete 3.05TB dataset, please contact vincenzo.gallo77@hotmail.com.
### Key Features
- ✅ **Curated Selection**: Research value scoring with diversity sampling from 397,475 repositories
- ✅ **Research Grade**: Comprehensive analysis with transparent methodology
- ✅ **Deliberately Noisy**: Includes duplicates, incomplete code, and experimental projects
- ✅ **Rich Metadata**: Enhanced Blueprint metadata with cross-domain classification
- ✅ **Professional Grade**: 63.7-hour comprehensive analysis with open source tools
## Quick Start
### Dataset Structure
```
codereality-1t/
├── data_csv/                   # Evaluation subset data (CSV format, 2,049 repositories)
│   ├── codereality_unified.csv # Main dataset file with unified schema
│   └── metadata.json           # Dataset metadata and column information
├── analysis/                   # Analysis results and tools
│   ├── dataset_index.json      # File index and metadata
│   └── metrics.json            # Analysis results
├── docs/                       # Documentation
│   ├── DATASET_CARD.md         # Comprehensive dataset card
│   └── LICENSE.md              # Licensing information
├── benchmarks/                 # Benchmarking scripts and frameworks
├── results/                    # Evaluation results and metrics
├── Notebook/                   # Analysis notebooks and visualizations
├── eval_metadata.json          # Evaluation metadata and statistics
└── eval_subset_stats.json      # Statistical analysis of the subset
```
### Loading the Dataset
**This dataset has been converted to CSV format with a unified schema** to ensure compatibility with Hugging Face's dataset viewer and to eliminate the schema inconsistencies present in the original JSONL format. The options below show how to load it.
**Option 1: Standard Hugging Face Datasets (Recommended)**
```python
from datasets import load_dataset
# Load the complete dataset
dataset = load_dataset("vinsblack/CodeReality")
# Access the data
print(f"Total samples: {len(dataset['train'])}")
print(f"Columns: {dataset['train'].column_names}")
# Sample record
sample = dataset['train'][0]
print(f"Repository: {sample['repo_name']}")
print(f"Language: {sample['primary_language']}")
print(f"Quality Score: {sample['quality_score']}")
```
**Option 2: Direct CSV Access**
```python
import glob

import pandas as pd
from huggingface_hub import snapshot_download

# Download the dataset snapshot locally
repo_path = snapshot_download(repo_id="vinsblack/CodeReality", repo_type="dataset")

# Load and concatenate all CSV files
csv_files = glob.glob(f"{repo_path}/data_csv/*.csv")
df = pd.concat([pd.read_csv(f) for f in csv_files], ignore_index=True)
print(f"Total records: {len(df)}")
print(f"Columns: {list(df.columns)}")
```
**Option 3: Metadata and Analysis**
```python
import json
import os

# Load evaluation subset metadata
with open('eval_metadata.json', 'r') as f:
    metadata = json.load(f)

print(f"Subset: {metadata['eval_subset_info']['name']}")
print(f"Files: {metadata['subset_statistics']['total_files']}")
print(f"Repositories: {metadata['subset_statistics']['estimated_repositories']}")
print(f"Size: {metadata['subset_statistics']['total_size_gb']} GB")

# Access evaluation data files
data_dir = "data/"  # Local evaluation subset data
for filename in os.listdir(data_dir)[:5]:  # First 5 files
    file_path = os.path.join(data_dir, filename)
    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo_data = json.loads(line)
            print(f"Repository: {repo_data.get('name', 'Unknown')}")
            break  # Just first repo from each file
```
## Dataset Statistics
### Evaluation Subset Scale
- **Total Repositories**: 2,049 (curated from 397,475)
- **Total Files**: 323 JSONL archives
- **Total Size**: 19GB uncompressed
- **Languages Detected**: Multiple (JavaScript, Python, Java, C/C++, mixed)
- **Selection**: Research value scoring with diversity sampling
- **Source Dataset**: CodeReality complete dataset (3.05TB)
### Language Distribution (Top 10, measured on the full 397,475-repository source dataset)
| Language | Repositories | Percentage |
|----------|-------------|------------|
| Unknown | 389,941 | 98.1% |
| Python | 4,738 | 1.2% |
| Shell | 4,505 | 1.1% |
| C | 3,969 | 1.0% |
| C++ | 3,339 | 0.8% |
| HTML | 2,487 | 0.6% |
| JavaScript | 2,394 | 0.6% |
| Go | 2,110 | 0.5% |
| Java | 2,026 | 0.5% |
| CSS | 1,655 | 0.4% |
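For a quick sanity check, the distribution can be recomputed over the subset itself. A minimal sketch, assuming the unified CSV exposes the `primary_language` column used in the loading examples above:
```python
import pandas as pd

# Path taken from the dataset structure above
df = pd.read_csv("data_csv/codereality_unified.csv")

# Count repositories per language (column name from the loading examples)
counts = df["primary_language"].value_counts()
share = (counts / len(df) * 100).round(1)

for lang, n in counts.head(10).items():
    print(f"{lang:<12} {n:>6} {share[lang]:>5}%")
```
Note that the table above reflects the full source dataset, so the subset's own distribution will differ.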
### Duplicate Analysis
- **Exact Duplicates**: 0% exact SHA256 duplicates detected across file-level content
- **Semantic Duplicates**: ~18% estimated semantic duplicates and forks, preserved by design
- **Research Value**: duplicates intentionally maintained for real-world code distribution studies
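A minimal sketch of the exact-duplicate check described above, hashing serialized repository records with SHA256; the `files` and `name` fields are assumptions about the JSONL schema, not confirmed names:
```python
import hashlib
import json
import os
from collections import defaultdict

# Map content hash -> repository names sharing identical content
hashes = defaultdict(list)

data_dir = "data/"  # Local evaluation subset data
for filename in os.listdir(data_dir):
    path = os.path.join(data_dir, filename)
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        for line in f:
            record = json.loads(line)
            # Hash the file contents; `files` is an assumed field name
            payload = json.dumps(record.get("files", ""), sort_keys=True)
            digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
            hashes[digest].append(record.get("name", "Unknown"))

duplicate_groups = {h: names for h, names in hashes.items() if len(names) > 1}
print(f"Exact-duplicate groups: {len(duplicate_groups)}")
```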
### License Analysis
- **License Detection**: 0% detection rate (a design decision for noisy-dataset research)
- **Unknown Licenses**: 96.4% of repositories marked as "Unknown" by design
- **Research Purpose**: preserved to test license detection systems and curation methods
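Because license metadata is deliberately absent, the data is a natural testbed for detectors. A naive keyword baseline, as a sketch (the phrases and function are illustrative, not the benchmark's actual implementation):
```python
import re

# Telltale phrases for a few common licenses (illustrative only)
LICENSE_PATTERNS = {
    "MIT": re.compile(r"Permission is hereby granted, free of charge", re.I),
    "Apache-2.0": re.compile(r"Apache License,?\s+Version 2\.0", re.I),
    "GPL-3.0": re.compile(r"GNU GENERAL PUBLIC LICENSE\s+Version 3", re.I),
    "BSD": re.compile(r"Redistribution and use in source and binary forms", re.I),
}

def detect_license(text: str) -> str:
    """Return the first matching license identifier, else 'Unknown'."""
    for spdx_id, pattern in LICENSE_PATTERNS.items():
        if pattern.search(text):
            return spdx_id
    return "Unknown"

print(detect_license("Permission is hereby granted, free of charge, ..."))  # MIT
```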
### Security Analysis
⚠️ **Security Warning**: Dataset contains potential secrets
- Password patterns: 1,231,942 occurrences
- Token patterns: 353,266 occurrences
- Secret patterns: 71,778 occurrences
- API key patterns: 4,899 occurrences
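These counts come from lexical pattern matching. A hedged sketch of that kind of scan (the regexes are illustrative, not the exact patterns used in the analysis):
```python
import re

# Illustrative secret patterns; the original analysis may use different ones
PATTERNS = {
    "password": re.compile(r"password\s*[:=]\s*\S+", re.I),
    "token": re.compile(r"token\s*[:=]\s*\S+", re.I),
    "secret": re.compile(r"secret\s*[:=]\s*\S+", re.I),
    "api_key": re.compile(r"api[_-]?key\s*[:=]\s*\S+", re.I),
}

def count_secret_patterns(text: str) -> dict:
    """Count occurrences of each pattern family in a blob of code."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

print(count_secret_patterns('db_password = "hunter2"\napi_key: abc123'))
```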
## Research Applications
### Primary Use Cases
1. **Code LLM Robustness**: Testing model performance on noisy, real-world data
2. **Data Curation Research**: Developing automated filtering and cleaning methods
3. **License Detection**: Training and evaluating license classification systems
4. **Bug-Fix Studies**: Before/after commit analysis for automated debugging
5. **Cross-Language Analysis**: Multi-language repository understanding
### About This Evaluation Subset
This repository contains the **19GB evaluation subset** designed for standardized benchmarks:
- **323 files** containing **2,049 repositories**
- Research value scoring with diversity sampling
- Cross-language implementations and multi-repo analysis
- Complete build system configurations
- Enhanced metadata with commit history and issue tracking
**Note**: The complete 3.05TB CodeReality dataset with all 397,475 repositories is available separately. Contact vincenzo.gallo77@hotmail.com for access to the full dataset.
**Demonstration Benchmarks** available in `eval/benchmarks/`:
- **License Detection**: Automated license classification evaluation
- **Code Completion**: Pass@k metrics for code generation models
- **Extensible Framework**: Easy to add new evaluation tasks
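As a rough illustration of what extending the framework might look like, here is a hypothetical task skeleton; the class name and interface are assumptions, not the actual API in `eval/benchmarks/`:
```python
import json

class EvalTask:
    """Hypothetical skeleton for a new evaluation task."""

    name = "my_new_task"

    def load_samples(self, path: str):
        # Stream JSONL records from an evaluation subset file
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            for line in f:
                yield json.loads(line)

    def score(self, prediction: str, reference: str) -> float:
        # Replace with a task-appropriate metric (exact match here)
        return float(prediction.strip() == reference.strip())
```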
## Benchmarks & Results
### 📊 **Baseline Performance**
Demonstration benchmark results are available in `eval/results/`:
- **License Detection**: 9.8% accuracy baseline ([`license_detection_sample_results.json`](eval/results/license_detection_sample_results.json)) - a challenging baseline
- **Code Completion**: 14.2% Pass@1, 34.6% Pass@5 ([`code_completion_sample_results.json`](eval/results/code_completion_sample_results.json)) - reflecting the noisy-data challenge
- **Framework Scaffolds**: bug detection and cross-language translation, ready for community implementation
- **Complete Analysis**: [`benchmark_summary.csv`](eval/results/benchmark_summary.csv) - all metrics in one place for comparison and research use
### 🏃 **Quick Start Benchmarking**
```bash
cd eval/benchmarks
python3 license_detection_benchmark.py   # License classification
python3 code_completion_benchmark.py     # Code generation Pass@k
```
**Note**: These are demonstration baselines, not production-ready models. Results reflect the expected difficulty of deliberately noisy data.
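For reference, the Pass@k numbers above can be reproduced from raw samples with the standard unbiased estimator, given `n` completions per problem of which `c` pass (a sketch, not the benchmark script itself):
```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: probability that at least one of k sampled
    completions passes, given n samples with c correct."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 20 completions per problem, 3 of them correct
print(round(pass_at_k(20, 3, 1), 3))  # 0.15
print(round(pass_at_k(20, 3, 5), 3))  # 0.601 - more tries, higher odds
```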
## Usage Guidelines
### ✅ Recommended Uses
- Academic research and education
- Robustness testing of code models
- Development of data curation methods
- License detection research
- Security pattern analysis
### ❌ Important Limitations
- **No Commercial Use** without individual license verification
- **Research Only**: Many repositories have unknown licensing
- **Security Risk**: Contains potential secrets and vulnerabilities
- **Deliberately Noisy**: Requires preprocessing for most applications
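As a starting point for that preprocessing, a minimal pandas sketch; `quality_score` appears in the loading examples above, while the `content` column and the 0.5 threshold are assumptions for illustration:
```python
import pandas as pd

df = pd.read_csv("data_csv/codereality_unified.csv")

# Keep rows above an (arbitrary) quality threshold
filtered = df[df["quality_score"] >= 0.5]

# Drop rows matching the naive secret patterns sketched earlier;
# `content` is an assumed column name
if "content" in filtered.columns:
    secret_like = filtered["content"].str.contains(
        r"(?:password|token|secret|api[_-]?key)\s*[:=]", case=False, na=False
    )
    filtered = filtered[~secret_like]

print(f"Kept {len(filtered)} of {len(df)} rows after filtering")
```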
## ⚠️ Important: Dataset vs Evaluation Subset
**This repository contains the 19GB evaluation subset only.** Some files within this repository (such as `docs/DATASET_CARD.md`, notebooks in `Notebook/`, and analysis results) reference or describe the complete 3.05TB CodeReality dataset. This is intentional for research context and documentation completeness.
### What's in this repository:
- ✅ **Evaluation subset data**: 19GB, 2,049 repositories in `data/` directory
- ✅ **Analysis tools and scripts**: For working with both subset and full dataset
- ✅ **Documentation**: Describes both the subset and the complete dataset methodology
- ✅ **Benchmarks**: Ready to use with the evaluation subset
### Complete Dataset Access (3.05TB):
- 📧 **Contact**: vincenzo.gallo77@hotmail.com for access to the full dataset
- 📊 **Full Scale**: 397,475 repositories across 21 programming languages
- 🗂️ **Size**: 3.05TB uncompressed, 52,692 JSONL files
#### Who Should Use the Complete Dataset:
- 🎯 **Large-scale ML researchers** training foundation models on massive code corpora
- 🏢 **Enterprise teams** developing production code understanding systems
- 🔬 **Academic institutions** conducting comprehensive code analysis studies
- 📊 **Data scientists** performing statistical analysis on repository distributions
- 🛠️ **Tool developers** building large-scale code curation and filtering systems
#### Advantages of Complete Dataset vs Evaluation Subset:
| Feature | Evaluation Subset (19GB) | Complete Dataset (3.05TB) |
|---------|-------------------------|---------------------------|
| **Repositories** | 2,049 curated | 397,475 complete coverage |
| **Use Case** | Benchmarking & evaluation | Large-scale training & research |
| **Data Quality** | High (curated selection) | Mixed (deliberately noisy) |
| **Languages** | Multi-language focused | 21+ languages comprehensive |
| **Setup Time** | Immediate | Requires infrastructure planning |
| **Best For** | Model evaluation, testing | Model training, comprehensive analysis |
#### Choose Complete Dataset When:
- ✅ Training large language models requiring massive code corpora
- ✅ Developing data curation algorithms at scale
- ✅ Studying real-world code distribution patterns
- ✅ Building production-grade code understanding systems
- ✅ Researching cross-language programming patterns
- ✅ Creating comprehensive code quality metrics
#### Choose Evaluation Subset When:
- ✅ Benchmarking existing models
- ✅ Quick prototyping and testing
- ✅ Learning to work with noisy code datasets
- ✅ Limited storage or computational resources
- ✅ Focused evaluation on curated, high-value repositories
## Configuration Files (YAML)
The project includes comprehensive YAML configuration files for easy programmatic access:
| Configuration File | Description |
|-------------------|-------------|
| [`dataset-config.yaml`](dataset-config.yaml) | Main dataset metadata and structure |
| [`analysis-config.yaml`](analysis-config.yaml) | Analysis methodology and results |
| [`benchmarks-config.yaml`](benchmarks-config.yaml) | Benchmarking framework configuration |
### Using Configuration Files
```python
import yaml

# Load dataset configuration
with open('dataset-config.yaml', 'r') as f:
    dataset_config = yaml.safe_load(f)

print(f"Dataset: {dataset_config['dataset']['name']}")
print(f"Version: {dataset_config['dataset']['version']}")
print(f"Total repositories: {dataset_config['dataset']['metadata']['total_repositories']}")

# Load analysis configuration
with open('analysis-config.yaml', 'r') as f:
    analysis_config = yaml.safe_load(f)

print(f"Analysis time: {analysis_config['analysis']['methodology']['total_time_hours']} hours")
print(f"Coverage: {analysis_config['analysis']['methodology']['coverage_percentage']}%")

# Load benchmarks configuration
with open('benchmarks-config.yaml', 'r') as f:
    benchmarks_config = yaml.safe_load(f)

for benchmark in benchmarks_config['benchmarks']['available_benchmarks']:
    print(f"Benchmark: {benchmark}")
```
## Documentation
| Document | Description |
|----------|-------------|
| [Dataset Card](docs/DATASET_CARD.md) | Comprehensive dataset documentation |
| [License](docs/LICENSE.md) | Licensing terms and legal considerations |
| [Data README](data/README.md) | Data access and usage instructions |
## Verification
Verify dataset integrity:
```bash
# Check evaluation subset counts
python3 -c "
import json
with open('eval_metadata.json', 'r') as f:
    metadata = json.load(f)
print(f'Files: {metadata[\"subset_statistics\"][\"total_files\"]}')
print(f'Repositories: {metadata[\"subset_statistics\"][\"estimated_repositories\"]}')
print(f'Size: {metadata[\"subset_statistics\"][\"total_size_gb\"]} GB')
"
# Expected output:
# Files: 323
# Repositories: 2049
# Size: 19.0 GB
```
## Citation
```bibtex
@misc{codereality2025,
  title={CodeReality Evaluation Subset: A Curated Research Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  note={Version 1.0.0 - Evaluation Subset (19GB from 3.05TB source)}
}
```
## Community Contributions
We welcome community contributions to improve CodeReality-1T:
### 🛠️ **Data Curation Scripts**
- Contribute filtering and cleaning scripts for the noisy dataset
- Share deduplication algorithms and quality improvement tools
- Submit license detection and classification improvements
### 📊 **New Benchmarks**
- Add evaluation tasks beyond license detection and code completion
- Contribute cross-language analysis benchmarks
- Share bug detection and security analysis evaluations
### 📈 **Future Versions**
- **v1.1.0**: Enhanced evaluation subset with community feedback
- **v1.2.0**: Improved license detection and filtering tools
- **v2.0.0**: Community-curated clean variant with quality filters
### 🤝 **How to Contribute**
**Community contributions are actively welcomed and encouraged!** Help improve the largest deliberately noisy code dataset.
**🎯 Priority Contribution Areas**:
- **Data Curation**: Cleaning scripts, deduplication algorithms, quality filters
- **Benchmarks**: New evaluation tasks, improved baselines, framework implementations
- **Analysis Tools**: Visualization, statistics, metadata enhancement
- **Documentation**: Usage examples, tutorials, case studies
**📋 Contribution Process**:
1. Clone the repository locally
2. Review existing analysis in the `analysis/` directory
3. Develop improvements or new features
4. Test your contributions thoroughly
5. Submit your improvements via standard collaboration methods
**💡 Join the Community**: Share your research, tools, and insights using CodeReality!
## Support & Access
### Evaluation Subset (This Repository)
- **Documentation**: See `docs/` directory for comprehensive information
- **Analysis**: Check `analysis/` directory for current research insights
- **Usage**: All benchmarks and tools work directly with the 19GB subset
### Complete Dataset Access (3.05TB)
- **🔗 Full Dataset Request**: Contact vincenzo.gallo77@hotmail.com
- **📋 Include in your request**:
- Research purpose and intended use
- Institutional affiliation (if applicable)
- Technical requirements and storage capacity
- **⚡ Response time**: Typically within 24-48 hours
### General Support
- **Technical Questions**: vincenzo.gallo77@hotmail.com
- **Documentation Issues**: Check `docs/` directory first
- **Benchmark Problems**: Review `benchmarks/` and `results/` directories
---
*Dataset created using transparent research methodology with complete reproducibility. Analysis completed in 63.7 hours with 100% coverage and no sampling.*