---
license: odc-by
task_categories:
- text-generation
language:
- code
size_categories:
- 1B<n<10B
tags:
- curated
- starcoderdata
- code
- structured-data
- multi-task-filter
---
# StarCoderData Curated
A curated subset of [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata)
optimised for training a 500M parameter model focused on structured data output
(JSON generation, function calling, schema compliance).
## Dataset Summary
- **Total code files:** 5,203,508
- **Total tokens:** 3.9B (target: 3.5B)
- **Classifier-scored files:** 1,553,596 (1.7B tokens)
- **Non-classified files:** 3,649,912 (2.2B tokens) — filtered by heuristics, not the classifier
- **Source:** bigcode/starcoderdata
- **Classifier:** [mdonigian/code-curator-v1](https://huggingface.co/mdonigian/code-curator-v1) (UniXcoder-base, multi-task)
- **Curation:** Per-language-slice filtering + compression ratio pre-filter + MinHash deduplication
## Filtering Strategy
Different language groups need different curation approaches. Not every slice
goes through the GPU classifier — schema languages and GitHub issues are filtered
with cheaper heuristics because the classifier was trained on general-purpose code
and isn't the right tool for inherently structured formats.
**All slices** share these pre-filters:
- zlib compression ratio < 0.10 (catches extreme repetition)
- MinHash LSH deduplication (128 perms, 5-line shingles, 0.7 Jaccard threshold)
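
The compression-ratio pre-filter is simple enough to sketch in a few lines of Python. This is an illustrative reading of the rule above (drop files whose zlib ratio falls below 0.10); the helper names are ours, not the pipeline's:

```python
import zlib

def compression_ratio(text: str) -> float:
    """Ratio of zlib-compressed size to raw size; highly repetitive text compresses well."""
    raw = text.encode("utf-8")
    if not raw:
        return 1.0
    return len(zlib.compress(raw)) / len(raw)

def passes_prefilter(text: str, min_ratio: float = 0.10) -> bool:
    # Files that compress below the 0.10 ratio are treated as extreme repetition and dropped.
    return compression_ratio(text) >= min_ratio

# A pathological file of one repeated line compresses far below 0.10:
assert not passes_prefilter("import os\n" * 5000)
```

Note that short, varied files can have ratios above 1.0 (zlib overhead exceeds the savings); only heavily repetitive content lands under the threshold.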
### Classifier-Scored Slices (relevance_filter)
These languages were scored by the multi-task classifier. Files were ranked by
structured-data relevance, filtered to keep only those with relevance ≥ 2.0
and quality ≥ 1.5, and then sampled down to the per-slice token budget:
- **TypeScript**: ~600M tokens — strong type system, filter by SD relevance ≥ 2
- **Python**: ~600M tokens — filter by SD relevance ≥ 2
- **Rust/Go/Java**: ~600M tokens — strongly typed, filter by SD relevance ≥ 2
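
Under the stated thresholds, the per-slice filter can be sketched as follows. Field names follow the dataset schema; the greedy highest-relevance-first budget fill is an assumption about how the down-sampling worked:

```python
def relevance_filter(rows, token_budget, min_relevance=2.0, min_quality=1.5):
    """Keep classifier-scored files meeting both thresholds, highest SD relevance
    first, until the per-slice token budget is exhausted (assumed greedy fill)."""
    eligible = [r for r in rows
                if r["structured_data"] >= min_relevance and r["quality"] >= min_quality]
    eligible.sort(key=lambda r: r["structured_data"], reverse=True)
    kept, used = [], 0
    for r in eligible:
        if used + r["token_count"] > token_budget:
            break
        kept.append(r)
        used += r["token_count"]
    return kept
```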
### Non-Classified Slices
These languages were **not** run through the classifier. Their `quality`,
`structured_data`, and `content_type` columns contain default placeholder values
(0.0 / "unclassified") and should be ignored:
- **Schema languages** (JSON/YAML/SQL/protobuf/thrift/XSLT): ~800M tokens — inherently structured data formats; quality floor + random sample to budget
- **GitHub Issues** (technical): ~500M tokens — keyword filter matching structured-data topics (JSON, schema, API, protobuf, gRPC, etc.)
- **General code** (78 other languages): ~1B tokens — random sample for language diversity; quality floor only
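
The GitHub-issues keyword filter can be sketched like so. The keyword list mixes terms named above (JSON, schema, API, protobuf, gRPC) with assumed additions, and a real filter would likely match on word boundaries rather than raw substrings:

```python
# Terms from the card plus assumed extras; lowercase for case-insensitive matching.
STRUCTURED_DATA_KEYWORDS = {"json", "schema", "api", "protobuf", "grpc",
                            "yaml", "serialization"}

def matches_keywords(text: str) -> bool:
    """Naive substring match against structured-data topics.
    (Substring matching can false-positive, e.g. 'api' inside 'capita';
    a production filter would tokenize first.)"""
    lowered = text.lower()
    return any(kw in lowered for kw in STRUCTURED_DATA_KEYWORDS)
```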
## Language Slice Distribution
| Slice | Strategy | Languages | Target | Actual | % of Target |
|-------|----------|-----------|--------|--------|-------------|
| schema_languages | light_filter | json, yaml, sql, protocol-buffer +2 more | 800M | 799M | 99.9% |
| typescript | relevance_filter | typescript | 600M | 598M | 99.7% |
| python | relevance_filter | python | 600M | 594M | 99.1% |
| rust_go_java | relevance_filter | rust, go, java | 600M | 485M | 80.8% |
| github_issues | keyword_filter | github-issues-filtered-structured | 500M | 426M | 85.2% |
| general_code | light_filter | ada, agda, alloy, antlr +74 more | 1000M | 999M | 99.9% |
## Classifier-Scored Slices — Detail
The quality and structured data scores below apply **only** to the 1,553,596 files
(1.7B tokens) that went through the classifier. Non-classified slices
are excluded from these statistics.
| Slice | Files | Tokens | Avg Quality | Avg SD Relevance |
|-------|-------|--------|-------------|------------------|
| typescript | 841,426 | 598M | 3.81 | 2.88 |
| python | 567,721 | 594M | 3.71 | 2.73 |
| rust_go_java | 144,438 | 485M | 3.97 | 3.07 |
### Content Group Distribution (classifier-scored files only)
| Group | % of Classified Tokens | Tokens | Files |
|-------|-----------------------|--------|-------|
| Library/Package | 64.3% | 1,079,381,502 | 1,075,672 |
| Application | 3.4% | 56,360,750 | 118,200 |
| Script/CLI | 1.1% | 17,853,662 | 24,871 |
| Test Code | 5.5% | 91,370,763 | 48,003 |
| Config/Data/Generated/Other | 25.8% | 432,393,146 | 286,850 |
### Structured Data Relevance (classifier-scored files only)
Structured data relevance is the classifier's strongest signal (Spearman 0.81
on held-out data). SD2+ files contain significant structured data patterns
(API endpoints, JSON parsing, schema definitions, etc.).
Quality mean: 3.79, Median: 3.88.
| Level | Range | Target % | Actual % | Files |
|-------|-------|----------|----------|-------|
| SD0 | [0.0, 0.5) | 10.0% | 0.0% | 0 |
| SD1 | [0.5, 1.5) | 20.0% | 0.0% | 0 |
| SD2 | [1.5, 2.5) | 35.0% | 3.2% | 49,213 |
| SD3 | [2.5, 3.5) | 35.0% | 96.8% | 1,504,383 |
### Quality Distribution (classifier-scored files only)
| Level | Description | Files |
|-------|-------------|-------|
| 1 | Broken/gibberish | 0 |
| 2 | Functional but poor | 42,668 |
| 3 | Decent | 129,945 |
| 4 | Good | 1,380,674 |
| 5 | Excellent | 309 |
## Non-Classified Slices — Detail
These slices were filtered using heuristics. The classifier columns (`quality`,
`structured_data`, `content_type`) are set to defaults and **do not reflect
actual code quality** — the filtering was done by other means:
| Slice | Strategy | Files | Tokens | How Filtered |
|-------|----------|-------|--------|-------------|
| schema_languages | light_filter | 2,203,233 | 799M | Quality floor (≥1.5) + token budget, randomly sampled |
| github_issues | keyword_filter | 485,384 | 426M | Keyword match for structured-data topics + quality floor |
| general_code | light_filter | 961,295 | 999M | Quality floor (≥1.5) + token budget, randomly sampled |
## Programming Languages
| Language | % Tokens | Files |
|----------|----------|-------|
| typescript | 15.3% | 841,426 |
| python | 15.2% | 567,721 |
| github-issues-filtered-structured | 10.9% | 485,384 |
| markdown | 8.9% | 351,728 |
| json | 8.7% | 1,124,326 |
| go | 8.5% | 73,899 |
| sql | 5.9% | 121,035 |
| javascript | 5.8% | 281,216 |
| yaml | 5.8% | 957,872 |
| java | 3.2% | 57,787 |
| c-sharp | 3.0% | 114,063 |
| html | 2.9% | 53,527 |
| c | 2.8% | 75,899 |
| haskell | 2.2% | 84,862 |
| rust | 0.7% | 12,752 |
## Token Count Distribution
| Percentile | Tokens |
|------------|--------|
| P10 | 55 |
| P25 | 111 |
| P50 (median) | 255 |
| P75 | 631 |
| P90 | 1,416 |
| Mean | 749 |
## Schema
Each row contains:
| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Source code text |
| `lang` | string | Programming language |
| `size` | int | File size in bytes |
| `token_count` | int | Estimated token count (size // 4) |
| `quality` | float | Code quality score 1-5 (**classifier-scored slices only**; 0.0 for non-classified) |
| `structured_data` | float | Structured data relevance 0-3 (**classifier-scored slices only**; 0.0 for non-classified) |
| `content_type` | string | Content type — 9 classes (**classifier-scored slices only**; "unclassified" for non-classified) |
| `language_slice` | string | Language slice name (use this to filter by curation strategy) |
| `relevance_score` | float | Composite relevance score (**classifier-scored slices only**; 0.0 for non-classified) |
> **Tip:** To work with only classifier-scored data, filter on
> `language_slice` in `{"typescript", "python", "rust_go_java"}`.
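
A minimal predicate for that filter, in plain Python (with the `datasets` library, the same function can be passed to `Dataset.filter`):

```python
CLASSIFIER_SCORED = {"typescript", "python", "rust_go_java"}

def is_classifier_scored(row: dict) -> bool:
    """True for rows whose quality/structured_data columns carry real scores
    rather than the 0.0 / "unclassified" placeholders."""
    return row["language_slice"] in CLASSIFIER_SCORED

# Toy rows illustrating the split:
rows = [
    {"language_slice": "python", "quality": 3.7},
    {"language_slice": "schema_languages", "quality": 0.0},
]
assert [r["language_slice"] for r in rows if is_classifier_scored(r)] == ["python"]
```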
## Methodology
1. **Download:** All language folders from `bigcode/starcoderdata`.
2. **Classification:** Multi-task UniXcoder-base model (3 heads: quality, SD relevance,
content type) runs on TypeScript, Python, Rust, Go, and Java.
Schema languages, GitHub issues, and general code skip this step.
3. **Pre-filtering:** zlib compression ratio filter removes repetitive boilerplate
before GPU inference.
4. **Filtering:** Per-slice strategy — relevance-based ranking for classified languages,
keyword matching for GitHub issues, random sampling for schema/general code. All
slices enforce a quality floor.
5. **Deduplication:** MinHash LSH (128 perms, 5-line shingles, 0.7 Jaccard threshold).
Highest-relevance file kept from each cluster.
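
A pure-Python sketch of the MinHash scheme with the stated settings (128 permutations, 5-line shingles, 0.7 Jaccard threshold). The production pipeline likely uses an LSH library; salting MD5 per "permutation" here stands in for true hash permutations:

```python
import hashlib

NUM_PERMS = 128  # matches the 128-permutation setting above

def shingles(text: str, k: int = 5):
    """5-line shingles over non-blank, stripped lines."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if len(lines) < k:
        return {"\n".join(lines)}
    return {"\n".join(lines[i:i + k]) for i in range(len(lines) - k + 1)}

def minhash_signature(text: str):
    """One min-hash per simulated permutation (salted MD5, truncated to 64 bits)."""
    sig = []
    for perm in range(NUM_PERMS):
        salt = str(perm).encode()
        sig.append(min(
            int.from_bytes(hashlib.md5(salt + s.encode()).digest()[:8], "big")
            for s in shingles(text)))
    return sig

def estimated_jaccard(sig_a, sig_b) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERMS

# Pairs whose estimated Jaccard exceeds 0.7 would be clustered together,
# keeping only the highest-relevance member of each cluster.
```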