---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
  - zh
pretty_name: MiniModel Pretraining Corpus
---

# Dataset Card for MiniModel Pretraining Corpus

This dataset is a curated, tokenized pretraining mixture designed specifically for training MiniModel-series small language models. It was tokenized using the Mistral-7B-Instruct-v0.3 tokenizer (vocab size: 32,768), which is included in the MiniModel-200M-Base repository.

For training code, data loading utilities, and full reproducibility (including the training script), see the official GitHub repository:
🔗 https://github.com/xTimeCrystal/MiniModel/tree/main
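
For reference, here is a minimal sketch of loading the same tokenizer through Hugging Face `transformers`. It assumes `transformers` is installed and that you can access the `mistralai/Mistral-7B-Instruct-v0.3` repository; the copy bundled with MiniModel-200M-Base should behave identically.

```python
from transformers import AutoTokenizer

# Load the Mistral-7B-Instruct-v0.3 tokenizer (assumes access to the repo;
# the tokenizer shipped with MiniModel-200M-Base should be equivalent).
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

print(tokenizer.vocab_size)  # expected: 32768, per the card above

# Tokenize a short sample the same way the corpus was tokenized.
ids = tokenizer("Hello, world!").input_ids
print(ids)
```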

## Dataset Details

### Dataset Description

- **Curated by:** xTimeCrystal
- **Languages:** English, Chinese, and Python (code)
- **License:** Apache 2.0
- **Intended use:** Pretraining efficient small language models (e.g., MiniModel-200M-Base)
- **Token count:** ~10 billion tokens

This corpus combines high-quality educational and general-purpose text sources, filtered and balanced to maximize learning efficiency in low-compute training regimes.

### Source Data Composition

The dataset is a weighted mixture of the following sources (by token count):

All source datasets are publicly available and compatible with the Apache 2.0 license.

### Preprocessing

- Tokenized with the Mistral-7B-Instruct-v0.3 tokenizer
- Sequences were packed using a bin-packing algorithm to minimize padding (final padding < 5%); see the sketch after this list
- Maximum sequence length: 2048 tokens
- No deduplication beyond source-level filtering
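
The packing step is conceptually simple. Below is an illustrative sketch of a greedy first-fit packing pass over already-tokenized documents; this is not the repository's exact implementation, and the function and variable names are hypothetical.

```python
from typing import Iterable

MAX_LEN = 2048  # maximum sequence length used for this corpus


def pack_documents(docs: Iterable[list[int]], max_len: int = MAX_LEN) -> list[list[int]]:
    """Greedy first-fit packing: put each tokenized document into the first bin
    with enough room, opening a new bin when none fits. Documents longer than
    max_len are split into max_len-sized chunks first. In practice a separator
    or EOS token is usually inserted between documents; omitted here for brevity."""
    bins: list[list[int]] = []
    for doc in docs:
        chunks = [doc[i:i + max_len] for i in range(0, len(doc), max_len)]
        for chunk in chunks:
            for b in bins:
                if len(b) + len(chunk) <= max_len:
                    b.extend(chunk)
                    break
            else:
                bins.append(list(chunk))
    return bins


def padding_fraction(bins: list[list[int]], max_len: int = MAX_LEN) -> float:
    """Fraction of positions that would be padding once every bin is padded
    to max_len (the card targets < 5%)."""
    total = max_len * len(bins)
    used = sum(len(b) for b in bins)
    return 1.0 - used / total
```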

💡 Note: The tokenizer, training configuration, and data-loading pipeline are provided in the GitHub repo for full reproducibility.
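
As a starting point, the snippet below streams the corpus from the Hub and inspects the first record. The split name and column layout are assumptions; check the repository's data files and the GitHub data-loading utilities for the authoritative format.

```python
from datasets import load_dataset

# Stream the corpus without downloading all ~10B tokens up front.
# The "train" split and the field names are assumptions; adjust to the actual files.
ds = load_dataset("xTimeCrystal/TinyCorpus-v2", split="train", streaming=True)

first = next(iter(ds))
print(first.keys())  # inspect the available columns (e.g., packed token IDs)
```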