---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- code
- en
license: other
multilinguality:
- multilingual
pretty_name: Google Code Archive Dataset
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
tags:
- code
- google-code
- archive
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/*.parquet"
  default: true
dataset_info:
  features:
  - name: code
    dtype: string
  - name: repo_name
    dtype: string
  - name: path
    dtype: string
  - name: language
    dtype: string
  - name: license
    dtype: string
  - name: size
    dtype: int64
---
# Google Code Archive Dataset
## Dataset Description
This dataset was compiled from the [Google Code Archive](https://code.google.com/archive/), a preserved snapshot of projects hosted on Google Code, Google's open-source project hosting service that operated from 2006 to 2016. As one of the major code hosting platforms of its era, Google Code hosted hundreds of thousands of open-source projects, and the archive provides a unique historical record of open-source development during a formative period of modern software engineering.
### Dataset Summary
| Statistic | Value |
|-----------|-------|
| **Total Files** | 65,825,565 |
| **Total Repositories** | 488,618 |
| **Total Size** | 47 GB (compressed Parquet) |
| **Programming Languages** | 454 |
| **File Format** | Parquet with Zstd compression (71 files) |
### Key Features
- **Historical open-source corpus**: Contains code from over 488K repositories hosted on Google Code during 2006-2016
- **Diverse language coverage**: Spans 454 programming languages identified by [go-enry](https://github.com/go-enry/go-enry) (based on GitHub Linguist rules)
- **Rich metadata**: Includes repository name, file path, detected language, license information, and file size
- **Quality filtered**: Extensive filtering to remove vendor code, build artifacts, generated files, and low-quality content
- **Era-specific patterns**: Captures coding conventions and library usage from an earlier era of software development (2006-2016)
### Languages
The dataset includes 454 programming languages. The top 30 languages by file count:
| Rank | Language | File Count |
|------|----------|------------|
| 1 | Java | 16,331,993 |
| 2 | PHP | 12,764,574 |
| 3 | HTML | 5,705,184 |
| 4 | C++ | 5,090,685 |
| 5 | JavaScript | 4,937,765 |
| 6 | C | 4,179,202 |
| 7 | C# | 3,872,245 |
| 8 | Python | 2,207,240 |
| 9 | CSS | 1,697,385 |
| 10 | Objective-C | 1,186,050 |
| 11 | Shell | 639,183 |
| 12 | Java Server Pages | 541,498 |
| 13 | ActionScript | 540,557 |
| 14 | Makefile | 481,563 |
| 15 | ASP.NET | 381,389 |
| 16 | Smarty | 339,555 |
| 17 | Ruby | 331,743 |
| 18 | Go | 316,427 |
| 19 | Perl | 307,960 |
| 20 | Vim Script | 216,236 |
| 21 | Lua | 215,226 |
| 22 | HTML+PHP | 150,781 |
| 23 | HTML+Razor | 149,131 |
| 24 | MATLAB | 145,686 |
| 25 | Batchfile | 138,523 |
| 26 | Pascal | 135,992 |
| 27 | Visual Basic .NET | 118,732 |
| 28 | TeX | 110,379 |
| 29 | Less | 98,221 |
| 30 | Unix Assembly | 94,758 |
### Licenses
The dataset includes files from repositories with various licenses as specified in the Google Code Archive:
| License | File Count |
|---------|------------|
| Apache License 2.0 (asf20) | 21,568,143 |
| GNU GPL v3 (gpl3) | 14,843,470 |
| GNU GPL v2 (gpl2) | 6,824,185 |
| Other Open Source (oos) | 5,433,436 |
| MIT License (mit) | 4,754,567 |
| GNU LGPL (lgpl) | 4,073,137 |
| BSD License (bsd) | 3,787,348 |
| Artistic License (art) | 1,910,047 |
| Eclipse Public License (epl) | 1,587,289 |
| Mozilla Public License 1.1 (mpl11) | 580,102 |
| Multiple Licenses (multiple) | 372,457 |
| Google Summer of Code (gsoc) | 63,292 |
| Public Domain (publicdomain) | 28,092 |
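The `license` field stores the short Google Code identifiers shown in parentheses above. For downstream license-aware filtering, it can be convenient to map these codes to SPDX-style identifiers. The mapping below is a sketch and partly an assumption: the archive does not record exact license versions for every category (for example, `bsd` and `lgpl` cover several variants), and `oos`, `multiple`, and `gsoc` have no single SPDX equivalent.

```python
from typing import Optional

# Hypothetical mapping from Google Code license codes (the values of this
# dataset's `license` field) to approximate SPDX identifiers. The SPDX side
# is an assumption where the archive did not record a precise version.
GOOGLE_CODE_TO_SPDX = {
    "asf20": "Apache-2.0",
    "gpl3": "GPL-3.0-only",
    "gpl2": "GPL-2.0-only",
    "mit": "MIT",
    "lgpl": "LGPL-2.1-or-later",   # version not specified by the archive
    "bsd": "BSD-3-Clause",         # archive does not distinguish 2-/3-clause
    "art": "Artistic-2.0",
    "epl": "EPL-1.0",
    "mpl11": "MPL-1.1",
    "publicdomain": "Unlicense",   # closest SPDX analogue, not exact
}

def spdx_for(license_code: str) -> Optional[str]:
    """Return an approximate SPDX identifier, or None for ambiguous
    categories such as 'oos', 'multiple', or 'gsoc'."""
    return GOOGLE_CODE_TO_SPDX.get(license_code)
```

Records whose code maps to `None` should be handled case by case rather than assumed to carry any particular license.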
## Dataset Structure
### Data Fields
| Field | Type | Description |
|-------|------|-------------|
| `code` | string | Content of the source file (UTF-8 encoded) |
| `repo_name` | string | Name of the Google Code project |
| `path` | string | Path of the file within the repository (relative to repo root) |
| `language` | string | Programming language as identified by [go-enry](https://github.com/go-enry/go-enry) |
| `license` | string | License of the repository (Google Code license identifier) |
| `size` | int64 | Size of the source file in bytes |
### Data Format
- **Format**: Apache Parquet with Zstd compression
- **File Structure**: 71 files (`google_code_0000.parquet` to `google_code_0070.parquet`)
### Data Splits
All examples are in the train split. There is no validation or test split.
### Example Data Point
```python
{
'code': 'public class HundredIntegers {\n\tpublic static void main (String[] args) {\n\t\tfor (int i = 1; i<=100; i++) {\n\t\t\tSystem.out.println(i);\n\t\t}\n\t}\n}',
'repo_name': '100integers',
'path': 'HundredIntegers.java',
'language': 'Java',
'license': 'epl',
'size': 147
}
```
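When ingesting records, it can help to sanity-check them against the schema above. The snippet below is a minimal illustrative validator (the `EXPECTED_SCHEMA` dict and `validate_record` helper are not part of any dataset tooling):

```python
# Python-level types corresponding to the dataset's six fields.
EXPECTED_SCHEMA = {
    "code": str,
    "repo_name": str,
    "path": str,
    "language": str,
    "license": str,
    "size": int,
}

def validate_record(record: dict) -> bool:
    """Check field presence and Python-level types of one data point."""
    if set(record) != set(EXPECTED_SCHEMA):
        return False
    return all(isinstance(record[k], t) for k, t in EXPECTED_SCHEMA.items())

example = {
    "code": "public class HundredIntegers { /* ... */ }",
    "repo_name": "100integers",
    "path": "HundredIntegers.java",
    "language": "Java",
    "license": "epl",
    "size": 147,
}
assert validate_record(example)
```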
## Dataset Creation
### Pipeline Overview
The dataset was created through a multi-stage pipeline:
1. **Project Discovery**: Fetching project metadata from the Google Code Archive
2. **Source Filtering**: Selecting projects that have source code available (`hasSource: true`)
3. **Archive Downloading**: Downloading source archives from the Google Code Archive storage
4. **Content Extraction**: Extracting and filtering source code files
5. **Parquet Generation**: Writing filtered records to Parquet shards with Zstd compression
### Language Detection
Programming languages are detected using [go-enry](https://github.com/go-enry/go-enry), a Go port of GitHub's Linguist library. Only files classified as **Programming** or **Markup** language types are included (Data and Prose types are excluded).
### License Detection
Licenses are obtained directly from the Google Code Archive project metadata. The archive preserves the original license selection made by project owners when creating their repositories on Google Code.
### File Filtering
Extensive filtering is applied to ensure data quality:
#### Size Limits
| Limit | Value |
|-------|-------|
| Max repository archive size | 64 MB |
| Max single file size | 2 MB |
| Max line length | 1,000 characters |
#### Excluded Directories
- **Configuration**: `.git/`, `.github/`, `.gitlab/`, `.vscode/`, `.idea/`, `.vs/`, `.settings/`, `.eclipse/`, `.project/`, `.metadata/`
- **Vendor/Dependencies**: `node_modules/`, `bower_components/`, `jspm_packages/`, `vendor/`, `third_party/`, `3rdparty/`, `external/`, `packages/`, `deps/`, `lib/vendor/`, `target/dependency/`, `Pods/`
- **Build Output**: `build/`, `dist/`, `out/`, `bin/`, `target/`, `release/`, `debug/`, `.next/`, `.nuxt/`, `_site/`, `_build/`, `__pycache__/`, `.pytest_cache/`, `cmake-build-*`, `.gradle/`, `.maven/`
#### Excluded Files
- **Lock Files**: `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`, `Gemfile.lock`, `Cargo.lock`, `poetry.lock`, `Pipfile.lock`, `composer.lock`, `go.sum`, `mix.lock`
- **Minified Files**: Any file containing `.min.` in the name
- **Binary Files**: `.exe`, `.dll`, `.so`, `.dylib`, `.a`, `.lib`, `.o`, `.obj`, `.jar`, `.war`, `.ear`, `.class`, `.pyc`, `.pyo`, `.wasm`, `.bin`, `.dat`, `.pdf`, `.doc`, `.docx`, `.xls`, `.xlsx`, `.ppt`, `.pptx`, `.zip`, `.tar`, `.gz`, `.bz2`, `.7z`, `.rar`, `.jpg`, `.jpeg`, `.png`, `.gif`, `.bmp`, `.ico`, `.svg`, `.mp3`, `.mp4`, `.avi`, `.mov`, `.wav`, `.flac`, `.ttf`, `.otf`, `.woff`, `.woff2`, `.eot`
- **System Files**: `.DS_Store`, `thumbs.db`
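The path and filename rules above can be sketched as a single predicate. This is an illustrative re-implementation, not the pipeline's actual code, and the rule sets are abbreviated to a few representatives from each list:

```python
import posixpath

# Abbreviated versions of the exclusion lists documented above.
EXCLUDED_DIRS = {".git", ".idea", "node_modules", "vendor", "third_party",
                 "build", "dist", "target", "__pycache__"}
LOCK_FILES = {"package-lock.json", "yarn.lock", "Cargo.lock", "go.sum"}
BINARY_EXTS = {".exe", ".dll", ".so", ".jar", ".class", ".png", ".zip", ".pdf"}
SYSTEM_FILES = {".DS_Store", "thumbs.db"}

def is_excluded(path: str) -> bool:
    """Return True if a repo-relative path matches any exclusion rule."""
    parts = path.split("/")
    # Any ancestor directory on the exclusion list rejects the file.
    if any(p in EXCLUDED_DIRS for p in parts[:-1]):
        return True
    name = parts[-1]
    if name in LOCK_FILES or name in SYSTEM_FILES:
        return True
    # Minified assets carry ".min." somewhere in the filename.
    if ".min." in name:
        return True
    _, ext = posixpath.splitext(name)
    return ext.lower() in BINARY_EXTS

assert is_excluded("vendor/lib/util.js")
assert is_excluded("app.min.js")
assert not is_excluded("src/Main.java")
```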
#### Content Filtering
- **UTF-8 Validation**: Files must be valid UTF-8 encoded text
- **Binary Detection**: Files detected as binary by go-enry are excluded
- **Generated Files**: Files with generation markers in the first 500 bytes are excluded:
- `generated by`, `do not edit`, `auto-generated`, `autogenerated`, `@generated`, `<auto-generated`
- **Empty Files**: Files that are empty or contain only whitespace are excluded
- **Long Lines**: Files with any line exceeding 1,000 characters are excluded
- **go-enry Filters**: Additional filtering using go-enry's `IsVendor()`, `IsImage()`, `IsDotFile()`, `IsTest()`, and `IsGenerated()` functions
- **Documentation-only Repos**: Repositories containing only documentation files (no actual code) are skipped
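Most of the content checks above can be expressed as a single byte-level predicate. The sketch below covers the UTF-8, empty-file, generation-marker, and line-length rules (the go-enry and documentation-only checks need external tooling); function and constant names are illustrative:

```python
# Generation markers searched for in the first 500 bytes, per the list above.
GENERATION_MARKERS = (
    b"generated by", b"do not edit", b"auto-generated",
    b"autogenerated", b"@generated", b"<auto-generated",
)
MAX_LINE_LEN = 1_000

def passes_content_filters(raw: bytes) -> bool:
    """Illustrative version of the content checks described above."""
    # Must decode as valid UTF-8 text.
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False
    # Empty or whitespace-only files are excluded.
    if not text.strip():
        return False
    # Generation markers within the first 500 bytes (case-insensitive).
    head = raw[:500].lower()
    if any(marker in head for marker in GENERATION_MARKERS):
        return False
    # No line may exceed the maximum length.
    if any(len(line) > MAX_LINE_LEN for line in text.splitlines()):
        return False
    return True

assert passes_content_filters(b"int main(void) { return 0; }\n")
assert not passes_content_filters(b"// DO NOT EDIT\nint x;\n")
assert not passes_content_filters(b"   \n")
```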
### Source Data
All data originates from the [Google Code Archive](https://code.google.com/archive/), which preserves projects hosted on Google Code before its shutdown in January 2016.
## Considerations for Using the Data
### Historical Context
This dataset represents code from 2006-2016 and may contain:
- Outdated coding patterns and deprecated APIs
- Legacy library dependencies that are no longer maintained
- Security vulnerabilities that have since been discovered and patched
- Code written for older language versions (Python 2, older Java versions, etc.)
Users should be aware that this code reflects historical practices and may not represent modern best practices.
### Personal and Sensitive Information
The dataset may contain:
- Email addresses in code comments or configuration files
- API keys or credentials that were accidentally committed
- Personal information in comments or documentation
Users should exercise caution and implement appropriate filtering when using this data.
### Licensing Information
This dataset is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. The license field in each data point indicates the license of the source repository.