xTimeCrystal committed
Commit 6e2567a · verified · 1 Parent(s): c9cf1a9

Update README.md

Files changed (1)
  1. README.md +48 -3
README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-generation
+ language:
+ - en
+ - zh
+ pretty_name: MiniModel Pretraining Corpus
+ ---
+
+ # Dataset Card for MiniModel Pretraining Corpus
+
+ This dataset is a curated, tokenized pretraining mixture designed specifically for training **MiniModel**-series small language models. It was tokenized using the **Mistral-7B-Instruct-v0.3 tokenizer** (vocab size: 32,768), which is included in the [MiniModel-200M-Base repository](https://huggingface.co/xTimeCrystal/MiniModel-200M-Base).
+
+ For **training code**, **data loading utilities**, and full reproducibility (including the training script), see the official GitHub repository:
+ 🔗 [https://github.com/xTimeCrystal/MiniModel/tree/main](https://github.com/xTimeCrystal/MiniModel/tree/main)
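+
+ As a quick orientation, the sketch below loads the tokenizer and round-trips a sample string. It assumes the tokenizer files ship with the MiniModel-200M-Base model repo (as stated above); this is an illustrative snippet, not the official loading code.
+
+ ```python
+ # Hedged sketch: load the Mistral-7B-Instruct-v0.3 tokenizer bundled with
+ # the MiniModel-200M-Base repo (repo id taken from the link above).
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("xTimeCrystal/MiniModel-200M-Base")
+ print(tokenizer.vocab_size)  # expected: 32768, per this card
+
+ ids = tokenizer("MiniModel is a small language model.")["input_ids"]
+ print(ids)
+ print(tokenizer.decode(ids))
+ ```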
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ - **Curated by:** xTimeCrystal
+ - **Languages:** English and Chinese text, plus Python source code
+ - **License:** Apache 2.0
+ - **Intended use:** Pretraining efficient small language models (e.g., MiniModel-200M-Base)
+ - **Token count:** ~10 billion tokens
+
+ This corpus combines high-quality educational and general-purpose text sources, filtered and balanced to maximize learning efficiency in low-compute training regimes.
+
+ ### Source Data Composition
+
+ The dataset is a weighted mixture of the following sources (by token count):
+
+ - **70%** [`openbmb/Ultra-FineWeb`](https://huggingface.co/datasets/openbmb/Ultra-FineWeb) (English subset)
+ - **20%** [`openbmb/Ultra-FineWeb`](https://huggingface.co/datasets/openbmb/Ultra-FineWeb) (Chinese subset)
+ - **5%** [`Avelina/python-edu-cleaned`](https://huggingface.co/datasets/Avelina/python-edu-cleaned)
+ - **5%** [`HuggingFaceTB/finemath`](https://huggingface.co/datasets/HuggingFaceTB/finemath)
+
+ All source datasets are publicly available and compatible with the Apache 2.0 license.
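+
+ For illustration, a mixture like this could be approximated with 🤗 Datasets' `interleave_datasets` before tokenization. This is a hedged sketch, not the pipeline used to build this corpus; the config and split names (`"en"`, `"zh"`, `"finemath-4plus"`, `"train"`) are assumptions.
+
+ ```python
+ # Hedged sketch: sample raw documents in roughly the 70/20/5/5 ratio above.
+ # Config/split names are assumptions, not taken from this card.
+ from datasets import load_dataset, interleave_datasets
+
+ sources = [
+     load_dataset("openbmb/Ultra-FineWeb", "en", split="train", streaming=True),
+     load_dataset("openbmb/Ultra-FineWeb", "zh", split="train", streaming=True),
+     load_dataset("Avelina/python-edu-cleaned", split="train", streaming=True),
+     load_dataset("HuggingFaceTB/finemath", "finemath-4plus", split="train", streaming=True),
+ ]
+
+ mixture = interleave_datasets(
+     sources,
+     probabilities=[0.70, 0.20, 0.05, 0.05],
+     seed=42,
+     stopping_strategy="all_exhausted",
+ )
+ ```
+
+ Note that `probabilities` controls document-level sampling, while the percentages above are by token count, so matching the ratios exactly would require token-aware weighting.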
+
+ ### Preprocessing
+
+ - Tokenized with the **Mistral-7B-Instruct-v0.3 tokenizer**
+ - Sequences were packed using a bin-packing algorithm to minimize padding (final padding < 5%); see the sketch after this list
+ - Maximum sequence length: 2048 tokens
+ - No deduplication beyond source-level filtering
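+
+ To make the packing step concrete, here is a minimal first-fit-decreasing sketch of packing tokenized documents into 2048-token rows. The actual packing code lives in the GitHub repo; the specific algorithm (first-fit decreasing) and pad id here are assumptions for illustration.
+
+ ```python
+ # Hedged sketch: pack variable-length token sequences into fixed 2048-token
+ # bins via first-fit decreasing, padding only the leftover tail of each bin.
+ MAX_LEN = 2048
+ PAD_ID = 0  # assumed pad id; the real pipeline's pad token may differ
+
+ def pack_sequences(seqs: list[list[int]]) -> list[list[int]]:
+     bins: list[list[int]] = []
+     # First-fit decreasing: place the longest sequences first.
+     for seq in sorted(seqs, key=len, reverse=True):
+         seq = seq[:MAX_LEN]  # truncate anything longer than one bin
+         for b in bins:
+             if len(b) + len(seq) <= MAX_LEN:
+                 b.extend(seq)
+                 break
+         else:
+             bins.append(list(seq))
+     # Pad each bin to exactly MAX_LEN.
+     return [b + [PAD_ID] * (MAX_LEN - len(b)) for b in bins]
+
+ rows = pack_sequences([[1] * 1500, [2] * 600, [3] * 500, [4] * 40])
+ print([row.count(PAD_ID) for row in rows])  # padding per 2048-token row
+ ```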
+
+ > 💡 **Note**: The tokenizer, training configuration, and data-loading pipeline are provided in the [GitHub repo](https://github.com/xTimeCrystal/MiniModel/tree/main) for full reproducibility.