---
language:
  - en
license: apache-2.0
size_categories:
  - 10M<n<100M
task_categories:
  - text-generation
tags:
  - pretraining
  - educational
  - pedagogical
  - synthetic
  - sutra
  - multi-domain
  - 10B
pretty_name: Sutra 10B Pretraining Dataset
---

# Sutra 10B Pretraining Dataset

A high-quality pedagogical dataset for LLM pretraining, containing 10,193,029 educational entries totaling over 10 billion tokens. It is the largest dataset in the Sutra series and is intended to demonstrate that dense, curated data can deliver best-in-class pretraining performance for small language models.

## Dataset Description

This dataset was generated using the Sutra framework, which creates structured educational content optimized for language model pretraining. Each entry is designed to maximize learning efficiency through:

- **Clear pedagogical structure:** content follows proven educational patterns
- **Cross-domain connections:** concepts are linked across disciplines
- **Varied complexity levels:** from foundational (level 1) to advanced (level 10)
- **Quality-controlled generation:** all entries meet minimum quality thresholds
- **Diverse content types:** 33 different pedagogical formats
- **Rich metadata:** every entry is annotated with 13 structured fields

## Dataset Statistics

| Metric | Value |
|---|---|
| Total Entries | 10,193,029 |
| Total Tokens | 10,218,677,925 |
| Avg Tokens/Entry | 1,002 |
| Avg Quality Score | 0.701 |
| Tokenizer | SmolLM2 (HuggingFaceTB/SmolLM2-135M) |

## Domain Distribution

| Domain | Entries | Tokens | Percentage |
|---|---|---|---|
| interdisciplinary | 3,561,052 | 3,570.0M | 34.9% |
| technology | 2,154,481 | 2,159.9M | 21.1% |
| science | 1,456,708 | 1,460.3M | 14.3% |
| social_studies | 862,288 | 864.4M | 8.5% |
| mathematics | 830,414 | 832.5M | 8.1% |
| life_skills | 559,667 | 561.1M | 5.5% |
| arts_and_creativity | 455,738 | 456.9M | 4.5% |
| language_arts | 235,957 | 236.5M | 2.3% |
| philosophy_and_ethics | 76,724 | 76.9M | 0.8% |

## Content Type Distribution (Top 15)

| Content Type | Count | Percentage |
|---|---|---|
| historical_context | 3,082,957 | 30.2% |
| concept_introduction | 928,244 | 9.1% |
| data_analysis | 776,495 | 7.6% |
| worked_examples | 697,861 | 6.8% |
| problem_set | 676,977 | 6.6% |
| tutorial | 620,163 | 6.1% |
| technical_documentation | 520,246 | 5.1% |
| research_summary | 494,023 | 4.8% |
| code_implementation | 473,056 | 4.6% |
| practical_application | 438,157 | 4.3% |
| creative_writing | 337,065 | 3.3% |
| reasoning_demonstration | 227,343 | 2.2% |
| qa_pairs | 200,076 | 2.0% |
| ethical_analysis | 157,882 | 1.5% |
| experiment_design | 141,859 | 1.4% |

## Data Sources

Sutra-10B was created by scaling the recipe used for Sutra-1B from 1 billion to 10 billion tokens. The core pedagogical content was generated with the Sutra framework, then mixed with several high-quality open datasets for diversity:

| Source | Description | Approximate Tokens |
|---|---|---|
| Sutra (core) | Pedagogical content generated with the Sutra framework, scaled from the 1B recipe | ~7.8B |
| Nemotron-CC-Math v1 | High-quality mathematical content (NVIDIA) | ~0.5B |
| OpenWebMath | Mathematical web content | ~0.5B |
| Wikipedia (English) | Encyclopedic knowledge | ~0.5B |
| Cosmopedia | Synthetic educational content (multiple subsets) | ~0.5B |
| FineWeb-Edu | High-quality educational web content | ~0.5B |

## Data Fields

Each entry contains 13 structured fields:

| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier (UUID) |
| concept_name | string | The concept being taught (2-5 words) |
| domain | string | Primary knowledge domain (9 domains) |
| content_type | string | Type of pedagogical content (33 types) |
| text | string | The main educational content |
| quality_score | float | Quality assessment score (0.0-1.0) |
| information_density | string | Measure of information per token (low/medium/high) |
| complexity_level | integer | Difficulty level (1-10) |
| token_count | integer | Number of tokens (SmolLM2 tokenizer) |
| prerequisites | list[string] | Required prior knowledge concepts |
| builds_to | list[string] | Advanced concepts this enables |
| cross_domain_connections | list[string] | Related knowledge domains |
| quality_assessment | object | Multi-dimensional quality scores |

### Quality Assessment Sub-fields

| Sub-field | Type | Description |
|---|---|---|
| clarity | float | How clear and readable (0.0-1.0) |
| accuracy | float | Factual correctness (0.0-1.0) |
| pedagogy | float | Educational structure quality (0.0-1.0) |
| engagement | float | How engaging the content is (0.0-1.0) |
| depth | float | Depth of coverage (0.0-1.0) |
| creativity | float | Creative presentation (0.0-1.0) |
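The schema above can be checked with a small validator. This is a sketch under the field list in this card; the thresholds come from the table, and any sample record you feed it would be hypothetical, not real dataset content.

```python
# Minimal schema check for a Sutra-10B entry, based on the 13 fields
# and quality sub-fields documented above. A sketch, not an official tool.

EXPECTED_FIELDS = {
    "id": str, "concept_name": str, "domain": str, "content_type": str,
    "text": str, "quality_score": float, "information_density": str,
    "complexity_level": int, "token_count": int, "prerequisites": list,
    "builds_to": list, "cross_domain_connections": list,
    "quality_assessment": dict,
}
QUALITY_SUBFIELDS = {"clarity", "accuracy", "pedagogy",
                     "engagement", "depth", "creativity"}

def validate_entry(entry: dict) -> list:
    """Return a list of schema problems (empty list means the entry passes)."""
    problems = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in entry:
            problems.append("missing field: " + field)
        elif not isinstance(entry[field], ftype):
            problems.append("wrong type for " + field)
    qa = entry.get("quality_assessment", {})
    if isinstance(qa, dict):
        for sub in sorted(QUALITY_SUBFIELDS - qa.keys()):
            problems.append("missing quality sub-field: " + sub)
    if not 1 <= entry.get("complexity_level", 0) <= 10:
        problems.append("complexity_level out of range 1-10")
    if not 0.0 <= entry.get("quality_score", -1.0) <= 1.0:
        problems.append("quality_score out of range 0.0-1.0")
    return problems
```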

### Valid Domains (9)

`mathematics`, `science`, `technology`, `language_arts`, `social_studies`, `arts_and_creativity`, `life_skills`, `philosophy_and_ethics`, `interdisciplinary`

### Valid Content Types (33)

`concept_introduction`, `reasoning_demonstration`, `code_implementation`, `technical_documentation`, `tutorial`, `cross_domain_bridge`, `worked_examples`, `qa_pairs`, `common_misconceptions`, `meta_learning`, `synthesis`, `prerequisite_scaffolding`, `code_explanation`, `diagnostic_assessment`, `code_debugging`, `historical_context`, `research_summary`, `problem_set`, `case_study`, `analogy`, `experiment_design`, `proof`, `algorithm_analysis`, `data_analysis`, `ethical_analysis`, `comparative_analysis`, `creative_writing`, `debate_argument`, `practical_application`, `thought_experiment`, `visualization`, `system_design`, `review_summary`

## Data Cleaning

The dataset underwent comprehensive cleaning:

- **Deduplication:** SHA-256 hash-based exact duplicate removal across all sources
- **Quality Filtering:** entries below quality_score 0.3 removed
- **Length Filtering:** entries shorter than 50 tokens or longer than 65,536 tokens removed
- **Garbage Detection:** repetitive content, control characters, and non-English content filtered
- **Field Validation:** all 13 fields validated and normalized
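The deduplication and filtering steps above can be sketched in a few lines; the thresholds mirror the ones listed, but this is an illustrative sketch, not the production pipeline.

```python
import hashlib

# Thresholds taken from the cleaning steps described above.
MIN_TOKENS, MAX_TOKENS = 50, 65_536
MIN_QUALITY = 0.3

def clean(entries):
    """Exact-dedupe by SHA-256 of the text, then apply the quality and
    length filters. Entries are assumed to be dicts with the card's fields."""
    seen = set()
    kept = []
    for e in entries:
        digest = hashlib.sha256(e["text"].encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate across sources
        seen.add(digest)
        if e["quality_score"] < MIN_QUALITY:
            continue  # below minimum quality threshold
        if not MIN_TOKENS <= e["token_count"] <= MAX_TOKENS:
            continue  # too short or too long
        kept.append(e)
    return kept
```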

## Metadata Generation

Metadata was generated using heuristic keyword-based classification:

- Domain and content type classification via pattern matching and text analysis
- Quality scores computed from text statistics (vocabulary diversity, structure, length)
- Token counts computed with the SmolLM2 tokenizer for accuracy
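The keyword-based domain classification above works roughly as follows. The keyword lists here are invented for the example; the heuristics actually used to build the dataset are more extensive.

```python
# Illustrative sketch of keyword-based domain classification.
# These keyword sets are ASSUMPTIONS for the example, not the real ones.
DOMAIN_KEYWORDS = {
    "mathematics": {"theorem", "equation", "integral", "proof"},
    "technology": {"algorithm", "compiler", "network", "database"},
    "science": {"experiment", "hypothesis", "molecule", "cell"},
}

def classify_domain(text: str, default: str = "interdisciplinary") -> str:
    """Pick the domain whose keywords appear most often in the text;
    fall back to the default when nothing matches."""
    words = text.lower().split()
    scores = {d: sum(w in kws for w in words)
              for d, kws in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default
```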

## Usage

```python
from datasets import load_dataset

# Load the full dataset
ds = load_dataset("codelion/sutra-10B", split="train")

# Stream for large-scale training
ds = load_dataset("codelion/sutra-10B", split="train", streaming=True)

# Filter by domain
math_ds = ds.filter(lambda x: x["domain"] == "mathematics")

# Filter by quality
high_quality = ds.filter(lambda x: x["quality_score"] > 0.7)

# Filter by complexity
beginner = ds.filter(lambda x: x["complexity_level"] <= 3)
```

## Scaling Trajectory

Sutra-10B is the largest dataset in the Sutra series, scaling the original 1B recipe by 10x. When evaluated on SmolLM2-70M (69M parameters), benchmark performance remains consistent across the 1B and 10B scales, suggesting that a model of this size has reached its capacity ceiling; larger models are expected to benefit more from the additional data and diversity.

## Intended Use

This dataset is designed for:

- **LLM pretraining:** high-quality educational content for foundational model training
- **Domain-specific fine-tuning:** subset by domain for specialized training
- **Educational AI research:** studying pedagogical content generation
- **Curriculum learning:** progressive complexity for staged training
- **Small model optimization:** demonstrating that data quality beats quantity for small LMs
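The curriculum-learning use case above can be sketched by bucketing entries on `complexity_level` so training progresses from easy to hard. The cutoff values here are illustrative assumptions, not part of the dataset card.

```python
def curriculum_stages(entries, cutoffs=(3, 7, 10)):
    """Split entries into staged buckets by complexity_level (1-10).
    Each entry goes into the first stage whose cutoff it fits under.
    The cutoffs are ILLUSTRATIVE, not prescribed by the dataset."""
    stages = [[] for _ in cutoffs]
    for e in entries:
        for i, cut in enumerate(cutoffs):
            if e["complexity_level"] <= cut:
                stages[i].append(e)
                break
    return stages
```

Training can then iterate over `stages` in order, optionally replaying earlier stages to avoid forgetting.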

## Related Datasets

## Citation

```bibtex
@article{sharma2026sutra,
  title={Scaling Pedagogical Pretraining: From Optimal Mixing to 10 Billion Tokens},
  author={Sharma, Asankhaya},
  year={2026},
  url={https://huggingface.co/blog/codelion/scaling-pedagogical-pretraining-10-billion-tokens}
}
```

## License

Apache 2.0