---
license: apache-2.0
task_categories:
- text-generation
language:
- en
- zh
pretty_name: MiniModel Pretraining Corpus
---

# Dataset Card for MiniModel Pretraining Corpus

This dataset is a curated, tokenized pretraining mixture designed specifically for training **MiniModel**-series small language models. It was tokenized with the **Mistral-7B-Instruct-v0.3 tokenizer** (vocab size: 32,768), which is included in the [MiniModel-200M-Base repository](https://huggingface.co/xTimeCrystal/MiniModel-200M-Base).

For the **training code**, **data-loading utilities**, and the training script needed for full reproducibility, see the official GitHub repository:
🔗 [https://github.com/xTimeCrystal/MiniModel/tree/main](https://github.com/xTimeCrystal/MiniModel/tree/main)

## Dataset Details

### Dataset Description

- **Curated by:** xTimeCrystal
- **Languages:** English, Chinese, Python (code)
- **License:** Apache 2.0
- **Intended use:** Pretraining efficient small language models (e.g., MiniModel-200M-Base)
- **Token count:** ~10 billion tokens

This corpus combines high-quality educational and general-purpose text sources, filtered and balanced to maximize learning efficiency in low-compute training regimes.

### Source Data Composition

The dataset is a weighted mixture of the following sources, with weights measured by token count (a sampling sketch appears under Usage Sketches below):

- **70%** [`openbmb/Ultra-FineWeb`](https://huggingface.co/datasets/openbmb/Ultra-FineWeb) (English subset)
- **20%** [`openbmb/Ultra-FineWeb`](https://huggingface.co/datasets/openbmb/Ultra-FineWeb) (Chinese subset)
- **5%** [`Avelina/python-edu-cleaned`](https://huggingface.co/datasets/Avelina/python-edu-cleaned)
- **5%** [`HuggingFaceTB/finemath`](https://huggingface.co/datasets/HuggingFaceTB/finemath)

All source datasets are publicly available and compatible with the Apache 2.0 license.

### Preprocessing

- Tokenized with the **Mistral-7B-Instruct-v0.3 tokenizer**
- Sequences were packed with a bin-packing algorithm to minimize padding (final padding < 5%; a generic sketch appears under Usage Sketches below)
- Maximum sequence length: 2048 tokens
- No deduplication beyond source-level filtering

> 💡 **Note**: The tokenizer, training configuration, and data-loading pipeline are provided in the [GitHub repo](https://github.com/xTimeCrystal/MiniModel/tree/main) for full reproducibility.
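## Usage Sketches

The snippets below are illustrative sketches only; the canonical tokenizer, training configuration, and data-loading pipeline live in the [GitHub repo](https://github.com/xTimeCrystal/MiniModel/tree/main).

### Loading the tokenizer

A minimal sketch of the tokenization step, assuming the Mistral-7B-Instruct-v0.3 tokenizer files shipped in the [MiniModel-200M-Base repository](https://huggingface.co/xTimeCrystal/MiniModel-200M-Base) load through `transformers.AutoTokenizer` (that the repo id works as a tokenizer source is an assumption, not something this card guarantees):

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer bundled with MiniModel-200M-Base loads
# directly via AutoTokenizer from that repo id.
tokenizer = AutoTokenizer.from_pretrained("xTimeCrystal/MiniModel-200M-Base")

print(tokenizer.vocab_size)  # expected to be 32768 per this card

ids = tokenizer.encode("A small model trained on a curated 10B-token mixture.")
print(len(ids), ids[:8])
```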
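### Sampling the source mixture

The 70/20/5/5 split above is defined by token count. As a hedged illustration of how such a mixture can be realized, the sketch below draws sources in proportion to their weights; the source keys are hypothetical shorthand, and per-document sampling only approximates a per-token budget when document lengths are comparable across sources:

```python
import random

# Target mixture by token count (from the composition list above).
MIXTURE = {
    "ultra_fineweb_en": 0.70,
    "ultra_fineweb_zh": 0.20,
    "python_edu_cleaned": 0.05,
    "finemath": 0.05,
}

def sample_source(rng: random.Random) -> str:
    """Pick a source with probability proportional to its mixture weight."""
    sources, weights = zip(*MIXTURE.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # roughly 7000 / 2000 / 500 / 500
```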
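### Packing sequences into fixed-length bins

The preprocessing step describes bin packing that keeps padding under 5% at a 2048-token maximum length; the exact variant is not specified here. Below is a minimal first-fit-decreasing sketch of the general technique (all names are hypothetical, and a real pipeline would also track document boundaries for the attention mask):

```python
import random

def pack_sequences(seqs: list[list[int]], max_len: int = 2048) -> list[list[int]]:
    """First-fit decreasing bin packing: place each sequence into the first
    bin with enough remaining room, opening a new bin when none fits.
    Sequences longer than max_len must be split upstream."""
    bins: list[list[int]] = []
    room: list[int] = []  # remaining capacity of each bin
    for seq in sorted(seqs, key=len, reverse=True):
        for i, free in enumerate(room):
            if len(seq) <= free:
                bins[i].extend(seq)
                room[i] -= len(seq)
                break
        else:  # no existing bin fits: open a new one
            bins.append(list(seq))
            room.append(max_len - len(seq))
    return bins

# Toy check of the padding fraction after packing.
rng = random.Random(0)
seqs = [[0] * rng.randint(100, 1500) for _ in range(1_000)]
packed = pack_sequences(seqs)
used = sum(len(b) for b in packed)
pad = sum(2048 - len(b) for b in packed)
print(f"bins={len(packed)}, padding={pad / (used + pad):.1%}")
```

First-fit decreasing is a classic heuristic that typically leaves little slack when sequence lengths vary; it is shown here only to make the "padding < 5%" figure concrete, not as the repo's actual implementation.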