---
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- text-generation
tags:
- pretraining
- educational
- pedagogical
- synthetic
- sutra
- multi-domain
- 10B
pretty_name: Sutra 10B Pretraining Dataset
---

# Sutra 10B Pretraining Dataset

A high-quality pedagogical dataset for LLM pretraining, containing 10,193,029 educational entries totaling just over 10 billion tokens. It is the largest dataset in the Sutra series and is intended to demonstrate that dense, curated data can deliver best-in-class pretraining performance for small language models.

## Dataset Description

This dataset was generated using the Sutra framework, which creates structured educational content optimized for language model pretraining. Each entry is designed to maximize learning efficiency through:

- **Clear pedagogical structure**: Content follows proven educational patterns
- **Cross-domain connections**: Concepts are linked across disciplines
- **Varied complexity levels**: From foundational (level 1) to advanced (level 10)
- **Quality-controlled generation**: All entries meet minimum quality thresholds
- **Diverse content types**: 33 different pedagogical formats
- **Rich metadata**: Every entry is annotated with 13 structured fields

## Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Entries | 10,193,029 |
| Total Tokens | 10,218,677,925 |
| Avg. Tokens/Entry | 1,002 |
| Avg. Quality Score | 0.701 |
| Tokenizer | SmolLM2 (HuggingFaceTB/SmolLM2-135M) |

### Domain Distribution

| Domain | Entries | Tokens | Percentage |
|--------|---------|--------|------------|
| interdisciplinary | 3,561,052 | 3,570.0M | 34.9% |
| technology | 2,154,481 | 2,159.9M | 21.1% |
| science | 1,456,708 | 1,460.3M | 14.3% |
| social_studies | 862,288 | 864.4M | 8.5% |
| mathematics | 830,414 | 832.5M | 8.1% |
| life_skills | 559,667 | 561.1M | 5.5% |
| arts_and_creativity | 455,738 | 456.9M | 4.5% |
| language_arts | 235,957 | 236.5M | 2.3% |
| philosophy_and_ethics | 76,724 | 76.9M | 0.8% |

### Content Type Distribution (Top 15)

| Content Type | Count | Percentage |
|--------------|-------|------------|
| historical_context | 3,082,957 | 30.2% |
| concept_introduction | 928,244 | 9.1% |
| data_analysis | 776,495 | 7.6% |
| worked_examples | 697,861 | 6.8% |
| problem_set | 676,977 | 6.6% |
| tutorial | 620,163 | 6.1% |
| technical_documentation | 520,246 | 5.1% |
| research_summary | 494,023 | 4.8% |
| code_implementation | 473,056 | 4.6% |
| practical_application | 438,157 | 4.3% |
| creative_writing | 337,065 | 3.3% |
| reasoning_demonstration | 227,343 | 2.2% |
| qa_pairs | 200,076 | 2.0% |
| ethical_analysis | 157,882 | 1.5% |
| experiment_design | 141,859 | 1.4% |

## Data Sources

This dataset combines multiple high-quality sources:

| Source | Description | Approximate Tokens |
|--------|-------------|--------------------|
| Sutra 7.5B | Core pedagogical dataset generated with the Sutra framework | ~7.5B |
| Sutra 3B (unique) | Additional skill-tagged educational content | ~0.5B |
| Sutra 1B (unique) | Metadata-rich pedagogical entries | ~0.3B |
| Nemotron-CC-Math v1 | High-quality mathematical content (NVIDIA) | ~1.0B |
| OpenWebMath | Mathematical web content | ~0.5B |
| Wikipedia (English) | Encyclopedic knowledge | ~0.9B |
| Cosmopedia | Synthetic educational content (multiple subsets) | ~1.0B |
| FineWeb-Edu | High-quality educational web content | ~0.5B |

## Data Fields

Each entry contains 13 structured fields:

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier (UUID) |
| `concept_name` | string | The concept being taught (2-5 words) |
| `domain` | string | Primary knowledge domain (9 domains) |
| `content_type` | string | Type of pedagogical content (33 types) |
| `text` | string | The main educational content |
| `quality_score` | float | Quality assessment score (0.0-1.0) |
| `information_density` | string | Measure of information per token (low/medium/high) |
| `complexity_level` | integer | Difficulty level (1-10) |
| `token_count` | integer | Number of tokens (SmolLM2 tokenizer) |
| `prerequisites` | list[string] | Required prior-knowledge concepts |
| `builds_to` | list[string] | Advanced concepts this entry enables |
| `cross_domain_connections` | list[string] | Related knowledge domains |
| `quality_assessment` | object | Multi-dimensional quality scores |

### Quality Assessment Sub-fields

| Sub-field | Type | Description |
|-----------|------|-------------|
| `clarity` | float | How clear and readable the content is (0.0-1.0) |
| `accuracy` | float | Factual correctness (0.0-1.0) |
| `pedagogy` | float | Educational structure quality (0.0-1.0) |
| `engagement` | float | How engaging the content is (0.0-1.0) |
| `depth` | float | Depth of coverage (0.0-1.0) |
| `creativity` | float | Creative presentation (0.0-1.0) |

### Valid Domains (9)

`mathematics`, `science`, `technology`, `language_arts`, `social_studies`, `arts_and_creativity`, `life_skills`, `philosophy_and_ethics`, `interdisciplinary`

### Valid Content Types (33)

`concept_introduction`, `reasoning_demonstration`, `code_implementation`, `technical_documentation`, `tutorial`, `cross_domain_bridge`, `worked_examples`, `qa_pairs`, `common_misconceptions`, `meta_learning`, `synthesis`, `prerequisite_scaffolding`, `code_explanation`, `diagnostic_assessment`, `code_debugging`, `historical_context`, `research_summary`, `problem_set`, `case_study`, `analogy`, `experiment_design`, `proof`, `algorithm_analysis`, `data_analysis`, `ethical_analysis`, `comparative_analysis`, `creative_writing`, `debate_argument`, `practical_application`, `thought_experiment`, `visualization`, `system_design`, `review_summary`

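The documented field ranges lend themselves to a quick sanity check. The sketch below is illustrative only: the `validate_entry` helper and the example values are invented for this card, and only a few of the 13 fields are checked.

```python
# Domains copied from the "Valid Domains (9)" list above.
VALID_DOMAINS = {
    "mathematics", "science", "technology", "language_arts", "social_studies",
    "arts_and_creativity", "life_skills", "philosophy_and_ethics", "interdisciplinary",
}

def validate_entry(e):
    """Check the documented ranges/types for a few key fields of one entry."""
    assert e["domain"] in VALID_DOMAINS
    assert 0.0 <= e["quality_score"] <= 1.0
    assert 1 <= e["complexity_level"] <= 10
    assert isinstance(e["prerequisites"], list)
    # All six quality_assessment sub-scores share the 0.0-1.0 scale.
    for score in e["quality_assessment"].values():
        assert 0.0 <= score <= 1.0
    return True
```
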
## Data Cleaning

The dataset underwent comprehensive cleaning:

- **Deduplication**: Exact duplicates removed across all sources via SHA-256 content hashing
- **Quality Filtering**: Entries with a quality_score below 0.3 removed
- **Length Filtering**: Entries shorter than 50 tokens or longer than 65,536 tokens removed
- **Garbage Detection**: Repetitive content, control characters, and non-English content filtered out
- **Field Validation**: All 13 fields validated and normalized

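The deduplication and threshold filters above can be sketched as follows. This is a minimal illustration, not the actual pipeline; the `clean` helper is invented here, with thresholds taken from the list above.

```python
import hashlib

MIN_QUALITY = 0.3          # quality_score floor
MIN_TOKENS, MAX_TOKENS = 50, 65_536  # accepted token_count range

def clean(entries):
    """Exact-dedup via SHA-256 of the text, then quality and length filters."""
    seen = set()
    kept = []
    for e in entries:
        digest = hashlib.sha256(e["text"].encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate of an earlier entry
        seen.add(digest)
        if e["quality_score"] < MIN_QUALITY:
            continue  # below the quality floor
        if not (MIN_TOKENS <= e["token_count"] <= MAX_TOKENS):
            continue  # outside the accepted length range
        kept.append(e)
    return kept
```

The real pipeline additionally runs garbage detection and field normalization, which are omitted here.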
## Metadata Generation

Metadata was generated using a combination of approaches:

- Entries from Sutra-1B retained their existing rich metadata
- All other entries (~95%) were labeled with keyword-based heuristics
- Domain and content type were assigned via pattern matching and text analysis
- Quality scores were computed from text statistics (vocabulary diversity, structure, length)
- Token counts were computed with the SmolLM2 tokenizer for accuracy

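A keyword-based heuristic of the kind described above might look roughly like this. It is only a sketch of the approach: the keyword lists, the `classify_domain` helper, and the fallback rule are all invented for illustration, not taken from the actual classifier.

```python
import re

# Hypothetical keyword lists; the real classifier's vocabulary is not published.
DOMAIN_KEYWORDS = {
    "mathematics": {"theorem", "equation", "integral", "proof", "algebra"},
    "science": {"experiment", "hypothesis", "molecule", "cell", "energy"},
    "technology": {"algorithm", "software", "network", "database", "compiler"},
}

def classify_domain(text, default="interdisciplinary"):
    """Pick the domain whose keywords appear most often; fall back to default."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {d: len(words & kw) for d, kw in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default
```
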
## Usage

```python
from datasets import load_dataset

# Load the full dataset
ds = load_dataset("codelion/sutra-10B", split="train")

# Stream for large-scale training
ds = load_dataset("codelion/sutra-10B", split="train", streaming=True)

# Filter by domain
math_ds = ds.filter(lambda x: x["domain"] == "mathematics")

# Filter by quality
high_quality = ds.filter(lambda x: x["quality_score"] > 0.7)

# Filter by complexity
beginner = ds.filter(lambda x: x["complexity_level"] <= 3)
```

## Scaling Trajectory

The Sutra dataset series demonstrates the impact of dataset scaling:

| Dataset | Tokens | Avg. Benchmark Change vs. Sutra-1B |
|---------|--------|------------------------------------|
| Sutra-1B | 1B | Baseline |
| Sutra-3B | 3B | +2.1% |
| Sutra-5B | 5B | +3.5% |
| Sutra-7.5B | 7.5B | +4.8% |
| **Sutra-10B** | **10B** | **TBD** |

## Intended Use

This dataset is designed for:

- **LLM Pretraining**: High-quality educational content for foundational model training
- **Domain-specific fine-tuning**: Subset by domain for specialized training
- **Educational AI research**: Studying pedagogical content generation
- **Curriculum learning**: Progressive complexity for staged training
- **Small model optimization**: Demonstrating data quality > quantity for small LMs

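For the curriculum-learning use case, the `complexity_level` field (1-10) supports a simple staged split. A minimal sketch on in-memory entries (the `curriculum_stages` helper and the stage boundaries are illustrative choices; real training would stream the dataset as shown in Usage):

```python
def curriculum_stages(entries, bounds=((1, 3), (4, 7), (8, 10))):
    """Partition entries into ordered training stages by complexity_level."""
    stages = [[] for _ in bounds]
    for e in entries:
        for i, (lo, hi) in enumerate(bounds):
            if lo <= e["complexity_level"] <= hi:
                stages[i].append(e)
                break  # each entry lands in exactly one stage
    return stages
```

Training would then consume the stages in order, moving from foundational to advanced content.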
## Related Datasets

- [sutra-1B](https://huggingface.co/datasets/codelion/sutra-1B): 1B-token pretraining dataset
- [sutra-100M](https://huggingface.co/datasets/codelion/sutra-100M): 100M-token subset
- [sutra-30k-seeds](https://huggingface.co/datasets/codelion/sutra-30k-seeds): Instruction prompts for post-training
- [sutra-magpie-sft](https://huggingface.co/datasets/codelion/sutra-magpie-sft): SFT dataset

## License

Apache 2.0

## Citation

```bibtex
@dataset{sutra-10b,
  title={Sutra 10B: A Pedagogical Pretraining Dataset},
  author={Sutra Team},
  year={2026},
  url={https://huggingface.co/datasets/codelion/sutra-10B}
}
```