vinsblack committed · verified
Commit cd6673c · 1 Parent(s): c7eff8e

Update README.md

Files changed (1):
  1. README.md +192 -69

README.md CHANGED
@@ -1,37 +1,44 @@
  ---
- dataset_name: "CodeReality"
- pretty_name: "CodeReality: Deliberately Noisy Code Dataset"
  tags:
- - code
- - bigcode
- - software-engineering
- - robustness
- - noisy-dataset
- - evaluation
  size_categories:
- - 10GB<n<100GB
- - 1TB<n<10TB
  task_categories:
- - text-generation
- - text-classification
- - text-retrieval
- - other
  language:
- - en
  license: other
  configs:
- - config_name: default
-   data_files:
-   - eval/subset/*.jsonl
  ---

-
- # CodeReality: Large-Scale Deliberately Noisy Code Dataset

  ![Dataset Status](https://img.shields.io/badge/status-complete-brightgreen)
- ![Size](https://img.shields.io/badge/size-3.05TB-orange)
- ![Repositories](https://img.shields.io/badge/repositories-397,475-red)

  ## ⚠️ Important Limitations

@@ -41,11 +48,11 @@ configs:

  ## Overview

- **CodeReality-1T** is a large-scale, deliberately noisy code repository dataset designed for robust AI research. Contains **397,475 repositories** across **21 programming languages** in **3.05 TB** of uncompressed data, specifically curated to test robustness, data curation methods, and real-world code understanding.

  ### Key Features
- - ✅ **Complete Coverage**: 100% analysis of all 397,475 repositories (no sampling)
- - ✅ **BigCode Compliant**: Meets all community standards for transparency and reproducibility
  - ✅ **Deliberately Noisy**: Includes duplicates, incomplete code, and experimental projects
  - ✅ **Rich Metadata**: Enhanced Blueprint metadata with cross-domain classification
  - ✅ **Professional Grade**: 63.7-hour comprehensive analysis with open source tools
@@ -55,17 +62,22 @@ configs:
  ### Dataset Structure
  ```
  codereality-1t/
- ├── data/                   # Main dataset location reference
  │   ├── README.md           # Data access instructions
- │   └── manifest.json       # Integrity verification
- ├── analysis/               # Analysis results
- │   ├── dataset_index.json  # Complete file index (29MB)
- │   └── metrics.json        # Comprehensive analysis results
  ├── docs/                   # Documentation
  │   ├── DATASET_CARD.md     # Comprehensive dataset card
  │   └── LICENSE.md          # Licensing information
- └── eval/                   # Evaluation subset (in progress)
-     └── subset/             # Curated 15GB research subset
  ```

  ### Loading the Dataset
@@ -74,17 +86,19 @@ codereality-1t/
  import json
  import os

- # Load dataset index
- with open('analysis/dataset_index.json', 'r') as f:
-     index = json.load(f)

- print(f"Files: {len(index['files'])}")
- print(f"Repositories: {sum(f['repository_count'] for f in index['files'])}")

- # Access data files
- data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
- for file_info in index['files'][:5]:  # First 5 files
-     file_path = file_info['path']
      with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
          for line in f:
              repo_data = json.loads(line)
@@ -94,12 +108,13 @@ for file_info in index['files'][:5]:  # First 5 files

  ## Dataset Statistics

- ### Scale
- - **Total Repositories**: 397,475
- - **Total Files**: 52,692 JSONL archives
- - **Total Size**: 3.05 TB uncompressed
- - **Languages Detected**: 21
- - **Analysis Coverage**: 100% (no sampling)

  ### Language Distribution (Top 10)
  | Language | Repositories | Percentage |
@@ -141,14 +156,16 @@ for file_info in index['files'][:5]:  # First 5 files
  4. **Bug-Fix Studies**: Before/after commit analysis for automated debugging
  5. **Cross-Language Analysis**: Multi-language repository understanding

- ### Evaluation Subset & Benchmarks
- A curated **19GB evaluation subset** is now available for standardized benchmarks:
  - **323 files** containing **2,049 repositories**
  - Research value scoring with diversity sampling
  - Cross-language implementations and multi-repo analysis
  - Complete build system configurations
  - Enhanced metadata with commit history and issue tracking

  **Demonstration Benchmarks** available in `eval/benchmarks/`:
  - **License Detection**: Automated license classification evaluation
  - **Code Completion**: Pass@k metrics for code generation models
@@ -191,6 +208,91 @@ python3 code_completion_benchmark.py  # Code generation Pass@k
  - **Security Risk**: Contains potential secrets and vulnerabilities
  - **Deliberately Noisy**: Requires preprocessing for most applications

  ## Documentation

  | Document | Description |
@@ -203,30 +305,30 @@ python3 code_completion_benchmark.py  # Code generation Pass@k

  Verify dataset integrity:
  ```bash
- # Check file counts
  python3 -c "
  import json
- with open('analysis/dataset_index.json', 'r') as f:
-     idx = json.load(f)
-     print(f'Files: {len(idx[\"files\"])}')
-     print(f'Repositories: {sum(f[\"repository_count\"] for f in idx[\"files\"])}')
  "

  # Expected output:
- # Files: 52692
- # Repositories: 397475
  ```

  ## Citation

  ```bibtex
  @misc{codereality2025,
-   title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
    author={Vincenzo Gallo},
    year={2025},
-   publisher={Hugging Face},
-   howpublished={\url{https://huggingface.co/vinsblack}},
-   note={Version 1.0.0}
  }
  ```

@@ -259,17 +361,38 @@ We welcome community contributions to improve CodeReality-1T:
  - **Documentation**: Usage examples, tutorials, case studies

  **📋 Contribution Process**:
- 1. Check Issues for current needs and coordination2
- 2. Create feature branch for your contribution
- 3. Submit pull request with detailed description and testing
- 4. Engage in community review and discussions

- **💡 Join the Community**: Share your research, tools, and insights using CodeReality-1T!

- ## Support

- - **Documentation**: See `docs/` directory
- - **Contact**: vincenzo.gallo77@hotmail.com

- ---

  ---
+ dataset_name: "CodeReality-EvalSubset"
+ pretty_name: "CodeReality: Evaluation Subset - Deliberately Noisy Code Dataset"
  tags:
+ - code
+ - software-engineering
+ - robustness
+ - noisy-dataset
+ - evaluation-subset
+ - research-dataset
+ - code-understanding
  size_categories:
+ - 10GB<n<100GB
  task_categories:
+ - text-generation
+ - text-classification
+ - text-retrieval
+ - code-completion
+ - license-detection
+ - cross-language-analysis
  language:
+ - en
+ - code
  license: other
  configs:
+ - config_name: default
+   data_files:
+   - "data/*.jsonl"
+ - config_name: benchmarks
+   data_files:
+   - "benchmarks/*.py"
+   - "results/*.json"
  ---

+ # CodeReality: Evaluation Subset - Deliberately Noisy Code Dataset

  ![Dataset Status](https://img.shields.io/badge/status-complete-brightgreen)
+ ![Size](https://img.shields.io/badge/size-19GB-orange)
+ ![Repositories](https://img.shields.io/badge/repositories-2,049-red)
+ ![Subset](https://img.shields.io/badge/type-evaluation_subset-purple)

  ## ⚠️ Important Limitations

  ## Overview

+ **CodeReality Evaluation Subset** is a curated research subset extracted from the complete CodeReality dataset (3.05TB, 397,475 repositories). This subset contains **2,049 repositories** in **19GB** of data, specifically selected for standardized evaluation and benchmarking of code understanding models on deliberately noisy data.

  ### Key Features
+ - ✅ **Curated Selection**: Research value scoring with diversity sampling from 397,475 repositories
+ - ✅ **Research Grade**: Comprehensive analysis with transparent methodology
  - ✅ **Deliberately Noisy**: Includes duplicates, incomplete code, and experimental projects
  - ✅ **Rich Metadata**: Enhanced Blueprint metadata with cross-domain classification
  - ✅ **Professional Grade**: 63.7-hour comprehensive analysis with open source tools

  ### Dataset Structure
  ```
  codereality-1t/
+ ├── data/                    # Evaluation subset data (19GB, 323 JSONL files)
  │   ├── README.md            # Data access instructions
+ │   └── [323 JSONL files]    # Repository data in JSON Lines format
+ ├── analysis/                # Analysis results and tools
+ │   ├── dataset_index.json   # File index and metadata
+ │   └── metrics.json         # Analysis results
  ├── docs/                    # Documentation
  │   ├── DATASET_CARD.md      # Comprehensive dataset card
  │   └── LICENSE.md           # Licensing information
+ ├── benchmarks/              # Benchmarking scripts and frameworks
+ ├── results/                 # Evaluation results and metrics
+ ├── Notebook/                # Analysis notebooks and visualizations
+ ├── eval_metadata.json       # Evaluation metadata and statistics
+ ├── dataset-config.yaml      # Main dataset configuration
+ ├── analysis-config.yaml     # Analysis methodology and results configuration
+ └── benchmarks-config.yaml   # Benchmarking framework configuration
  ```

  ### Loading the Dataset

  import json
  import os

+ # Load evaluation subset metadata
+ with open('eval_metadata.json', 'r') as f:
+     metadata = json.load(f)

+ print(f"Subset: {metadata['eval_subset_info']['name']}")
+ print(f"Files: {metadata['subset_statistics']['total_files']}")
+ print(f"Repositories: {metadata['subset_statistics']['estimated_repositories']}")
+ print(f"Size: {metadata['subset_statistics']['total_size_gb']} GB")

+ # Access evaluation data files
+ data_dir = "data/"  # Local evaluation subset data
+ for filename in os.listdir(data_dir)[:5]:  # First 5 files
+     file_path = os.path.join(data_dir, filename)
      with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
          for line in f:
              repo_data = json.loads(line)

  ## Dataset Statistics

+ ### Evaluation Subset Scale
+ - **Total Repositories**: 2,049 (curated from 397,475)
+ - **Total Files**: 323 JSONL archives
+ - **Total Size**: 19GB uncompressed
+ - **Languages Detected**: Multiple (JavaScript, Python, Java, C/C++, mixed)
+ - **Selection**: Research value scoring with diversity sampling
+ - **Source Dataset**: CodeReality complete dataset (3.05TB)

  ### Language Distribution (Top 10)
  | Language | Repositories | Percentage |

  4. **Bug-Fix Studies**: Before/after commit analysis for automated debugging
  5. **Cross-Language Analysis**: Multi-language repository understanding

+ ### About This Evaluation Subset
+ This repository contains the **19GB evaluation subset** designed for standardized benchmarks:
  - **323 files** containing **2,049 repositories**
  - Research value scoring with diversity sampling
  - Cross-language implementations and multi-repo analysis
  - Complete build system configurations
  - Enhanced metadata with commit history and issue tracking

+ **Note**: The complete 3.05TB CodeReality dataset with all 397,475 repositories is available separately. Contact vincenzo.gallo77@hotmail.com for access to the full dataset.
+
  **Demonstration Benchmarks** available in `eval/benchmarks/`:
  - **License Detection**: Automated license classification evaluation
  - **Code Completion**: Pass@k metrics for code generation models

  - **Security Risk**: Contains potential secrets and vulnerabilities
  - **Deliberately Noisy**: Requires preprocessing for most applications

+ ## ⚠️ Important: Dataset vs Evaluation Subset
+
+ **This repository contains the 19GB evaluation subset only.** Some files within this repository (such as `docs/DATASET_CARD.md`, notebooks in `Notebook/`, and analysis results) reference or describe the complete 3.05TB CodeReality dataset. This is intentional, for research context and documentation completeness.
+
+ ### What's in this repository:
+ - ✅ **Evaluation subset data**: 19GB, 2,049 repositories in the `data/` directory
+ - ✅ **Analysis tools and scripts**: For working with both the subset and the full dataset
+ - ✅ **Documentation**: Describes both the subset and the complete dataset methodology
+ - ✅ **Benchmarks**: Ready to use with the evaluation subset
+
+ ### Complete Dataset Access (3.05TB):
+ - 📧 **Contact**: vincenzo.gallo77@hotmail.com for access to the full dataset
+ - 📊 **Full Scale**: 397,475 repositories across 21 programming languages
+ - 🗂️ **Size**: 3.05TB uncompressed, 52,692 JSONL files
+
+ #### Who Should Use the Complete Dataset:
+ - 🎯 **Large-scale ML researchers** training foundation models on massive code corpora
+ - 🏢 **Enterprise teams** developing production code understanding systems
+ - 🔬 **Academic institutions** conducting comprehensive code analysis studies
+ - 📊 **Data scientists** performing statistical analysis on repository distributions
+ - 🛠️ **Tool developers** building large-scale code curation and filtering systems
+
+ #### Advantages of the Complete Dataset vs the Evaluation Subset:
+ | Feature | Evaluation Subset (19GB) | Complete Dataset (3.05TB) |
+ |---------|--------------------------|---------------------------|
+ | **Repositories** | 2,049 curated | 397,475 complete coverage |
+ | **Use Case** | Benchmarking & evaluation | Large-scale training & research |
+ | **Data Quality** | High (curated selection) | Mixed (deliberately noisy) |
+ | **Languages** | Multi-language focused | 21+ languages comprehensive |
+ | **Setup Time** | Immediate | Requires infrastructure planning |
+ | **Best For** | Model evaluation, testing | Model training, comprehensive analysis |
+
+ #### Choose the Complete Dataset When:
+ - ✅ Training large language models requiring massive code corpora
+ - ✅ Developing data curation algorithms at scale
+ - ✅ Studying real-world code distribution patterns
+ - ✅ Building production-grade code understanding systems
+ - ✅ Researching cross-language programming patterns
+ - ✅ Creating comprehensive code quality metrics
+
+ #### Choose the Evaluation Subset When:
+ - ✅ Benchmarking existing models
+ - ✅ Quick prototyping and testing
+ - ✅ Learning to work with noisy code datasets
+ - ✅ Limited storage or computational resources
+ - ✅ Focused evaluation on curated, high-value repositories
+
+ ## Configuration Files (YAML)
+
+ The project includes comprehensive YAML configuration files for easy programmatic access:
+
+ | Configuration File | Description |
+ |-------------------|-------------|
+ | [`dataset-config.yaml`](dataset-config.yaml) | Main dataset metadata and structure |
+ | [`analysis-config.yaml`](analysis-config.yaml) | Analysis methodology and results |
+ | [`benchmarks-config.yaml`](benchmarks-config.yaml) | Benchmarking framework configuration |
+
+ ### Using Configuration Files
+
+ ```python
+ import yaml
+
+ # Load dataset configuration
+ with open('dataset-config.yaml', 'r') as f:
+     dataset_config = yaml.safe_load(f)
+
+ print(f"Dataset: {dataset_config['dataset']['name']}")
+ print(f"Version: {dataset_config['dataset']['version']}")
+ print(f"Total repositories: {dataset_config['dataset']['metadata']['total_repositories']}")
+
+ # Load analysis configuration
+ with open('analysis-config.yaml', 'r') as f:
+     analysis_config = yaml.safe_load(f)
+
+ print(f"Analysis time: {analysis_config['analysis']['methodology']['total_time_hours']} hours")
+ print(f"Coverage: {analysis_config['analysis']['methodology']['coverage_percentage']}%")
+
+ # Load benchmarks configuration
+ with open('benchmarks-config.yaml', 'r') as f:
+     benchmarks_config = yaml.safe_load(f)
+
+ for benchmark in benchmarks_config['benchmarks']['available_benchmarks']:
+     print(f"Benchmark: {benchmark}")
+ ```
+
  ## Documentation

  | Document | Description |

  Verify dataset integrity:
  ```bash
+ # Check evaluation subset counts
  python3 -c "
  import json
+ with open('eval_metadata.json', 'r') as f:
+     metadata = json.load(f)
+     print(f'Files: {metadata[\"subset_statistics\"][\"total_files\"]}')
+     print(f'Repositories: {metadata[\"subset_statistics\"][\"estimated_repositories\"]}')
+     print(f'Size: {metadata[\"subset_statistics\"][\"total_size_gb\"]} GB')
  "

  # Expected output:
+ # Files: 323
+ # Repositories: 2049
+ # Size: 19.0 GB
  ```

  ## Citation

  ```bibtex
  @misc{codereality2025,
+   title={CodeReality Evaluation Subset: A Curated Research Dataset for Robust Code Understanding},
    author={Vincenzo Gallo},
    year={2025},
+   note={Version 1.0.0 - Evaluation Subset (19GB from 3.05TB source)}
  }
  ```

  - **Documentation**: Usage examples, tutorials, case studies

  **📋 Contribution Process**:
+ 1. Clone the repository locally
+ 2. Review existing analysis in the `analysis/` directory
+ 3. Develop improvements or new features
+ 4. Test your contributions thoroughly
+ 5. Submit your improvements via standard collaboration methods
+
+ **💡 Join the Community**: Share your research, tools, and insights using CodeReality!
+
+ ## Support & Access
+
+ ### Evaluation Subset (This Repository)
+ - **Documentation**: See the `docs/` directory for comprehensive information
+ - **Analysis**: Check the `analysis/` directory for current research insights
+ - **Usage**: All benchmarks and tools work directly with the 19GB subset
+
+ ### Complete Dataset Access (3.05TB)
+ - **🔗 Full Dataset Request**: Contact vincenzo.gallo77@hotmail.com
+ - **📋 Include in your request**:
+   - Research purpose and intended use
+   - Institutional affiliation (if applicable)
+   - Technical requirements and storage capacity
+ - **⚡ Response time**: Typically within 24-48 hours
+
+ ### General Support
+ - **Technical Questions**: vincenzo.gallo77@hotmail.com
+ - **Documentation Issues**: Check the `docs/` directory first
+ - **Benchmark Problems**: Review the `benchmarks/` and `results/` directories

+ ---
+
+ *Dataset created using a transparent research methodology with complete reproducibility. Analysis completed in 63.7 hours with 100% coverage and no sampling.*
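The README sections added in this commit describe the same JSONL access pattern twice (once via `eval_metadata.json`, once by walking `data/`). For a reader trying the change out, those steps can be folded into one self-contained sketch. The `data/*.jsonl` layout (one JSON object per line, one repository per object) comes from the README itself; the `iter_repositories` helper and the `name` field are illustrative assumptions, since the diff does not show the record schema:

```python
import json
import os

def iter_repositories(data_dir="data", max_files=5):
    """Yield repository records from the subset's JSONL files (one JSON object per line)."""
    for filename in sorted(os.listdir(data_dir))[:max_files]:  # first few files, as in the README
        if not filename.endswith(".jsonl"):
            continue
        path = os.path.join(data_dir, filename)
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            for line in f:
                line = line.strip()
                if line:  # skip blank lines; the dataset is deliberately noisy
                    yield json.loads(line)

if __name__ == "__main__" and os.path.isdir("data"):
    for repo in iter_repositories():
        # "name" is a hypothetical field; consult the actual records for real keys
        print(repo.get("name", "<unknown>"))
```

Pointing `data_dir` at the repository's `data/` directory should be all that is needed, assuming the layout matches the README's description.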