---
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
---

# Sutra-10B

Sutra-10B is a 10B-token pedagogical pretraining dataset of educational content, the largest release in the Sutra series.

## Usage

```python
from datasets import load_dataset

# Load the dataset
ds = load_dataset("codelion/sutra-10B", split="train")

# Filter by quality score
high_quality = ds.filter(lambda x: x["quality_score"] > 0.7)

# Filter by complexity
beginner = ds.filter(lambda x: x["complexity_level"] <= 3)
```

## Scaling Trajectory

Sutra-10B is the largest dataset in the Sutra series, scaling the original 1B recipe by 10x. When evaluated by pretraining SmolLM2-70M (69M parameters), benchmark performance remains consistent across scales, suggesting a model of this size has reached its capacity ceiling. Larger models are expected to benefit more from the additional data and diversity.

## Intended Use

This dataset is designed for:

- **LLM pretraining**: High-quality educational content for foundational model training
- **Domain-specific fine-tuning**: Subset by domain for specialized training
- **Educational AI research**: Studying pedagogical content generation
- **Curriculum learning**: Progressive complexity for staged training (see the sketch at the end of this card)
- **Small model optimization**: Demonstrating data quality > quantity for small LMs

## Related Datasets

- [sutra-1B](https://huggingface.co/datasets/codelion/sutra-1B): 1B-token pretraining dataset
- [sutra-100M](https://huggingface.co/datasets/codelion/sutra-100M): 100M-token subset
- [sutra-30k-seeds](https://huggingface.co/datasets/codelion/sutra-30k-seeds): Instruction prompts for post-training
- [sutra-magpie-sft](https://huggingface.co/datasets/codelion/sutra-magpie-sft): SFT dataset

## Citation

```bibtex
@misc{sharma2026sutra,
  title={Scaling Pedagogical Pretraining: From Optimal Mixing to 10 Billion Tokens},
  author={Sharma, Asankhaya},
  year={2026},
  url={https://huggingface.co/blog/codelion/scaling-pedagogical-pretraining-10-billion-tokens}
}
```

## License

Apache 2.0
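
## Curriculum Learning Example

As a companion to the curriculum-learning use case above, the sketch below stages training over the `complexity_level` field from the usage example. It is a minimal illustration, not part of the documented recipe: the stage boundaries, batch size, and the `train_on` placeholder are all hypothetical choices.

```python
from datasets import load_dataset

# Hypothetical stage boundaries over complexity_level; the actual
# value range and useful cut points would need to be checked first.
STAGES = [(0, 3), (4, 6), (7, 10)]

ds = load_dataset("codelion/sutra-10B", split="train")

def train_on(batch):
    """Placeholder for one optimization step over a batch of examples."""
    pass

for lo, hi in STAGES:
    # Bind lo/hi as defaults so each stage filters on its own bounds.
    stage = ds.filter(lambda x, lo=lo, hi=hi: lo <= x["complexity_level"] <= hi)
    for batch in stage.iter(batch_size=1024):
        train_on(batch)
```

Starting on the lowest complexity band and widening it in later stages is the standard staged-curriculum pattern; in practice the schedule would be tuned against the dataset's actual `complexity_level` distribution.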