ZhouChuYue committed on
Commit 24a0dff · 1 Parent(s): 0a8cadc

Update README: Refine experimental results, setup details, and L0 parser comparison

Files changed (1)
  1. README.md +47 -8
README.md CHANGED
@@ -63,8 +63,8 @@ Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** achie
 
  ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
 
- - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math)**: Large-scale high-quality mathematical pre-training dataset, containing 159.4B tokens of web mathematical corpus. (**<-- you are here**)
- - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset, containing 37.1B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
 
  ## 🏗️ Data Processing Pipeline
 
@@ -128,15 +128,29 @@ Natural web data is mostly declarative text. To enhance the model's instruction
 
  ## 📈 Experimental Results
 
- We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was conducted with a training volume of **100 billion Tokens**, allowing for comprehensive verification of data performance within a parameter range with controllable computational efficiency. We used the Lighteval library for model evaluation, and all evaluation metrics are based on **Zero-Shot** settings. Evaluation benchmarks include:
 
- - **Mathematical Reasoning:** GSM8K, MATH, R-Bench, Math-Bench
- - **Code Generation:** HumanEval, MBPP
- - **Comprehensive Knowledge:** MMLU, MMLU-STEM
 
- ### L0 Parser Comparison
 
- Based on data from the same source, we re-parsed the raw HTML using different parsers and applied the same L1 cleaning operators to all parsed results. Each cleaned dataset was then used for independent training to directly compare the effects of parsing strategies:
 
  | Parser | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
@@ -147,6 +161,17 @@ Based on data from the same source, we re-parsed the raw HTML using different pa
  | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
 
  ### Full Evaluation Results
 
@@ -166,6 +191,20 @@ We used a single dataset for independent training to directly compare the effect
  - **L3 Synthesis Layer**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), [GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
  - **Seed Data**: [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath), [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
 
  ## 📜 License
 
  This project is licensed under the [Apache 2.0](./LICENSE) license.
 
 
  ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
 
+ - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math)**: Large-scale high-quality mathematical pre-training dataset, containing 170.5B tokens of web mathematical corpus. (**<-- you are here**)
+ - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset, containing 88B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
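
A minimal sketch of pulling a few records from the L1 corpus with 🤗 Datasets; the default configuration and the `train` split are assumptions here, so check the dataset card for the actual config and split names.

```python
# Sketch: stream a few records from UltraData-Math (L1).
# Assumptions: default config and a "train" split; verify against the dataset card.
from datasets import load_dataset

ds = load_dataset("openbmb/UltraData-Math", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example)  # inspect the available fields
    if i >= 2:
        break
```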
 
  ## 🏗️ Data Processing Pipeline
 
 
  ## 📈 Experimental Results
 
+ We used the **MiniCPM-1.2B** model architecture and the **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was trained on **100 billion tokens** using the **Decay Verification** method (annealing from a base model pre-trained on 1.3T tokens). Models were evaluated with the Lighteval library on the following benchmarks:
 
+ - **Mathematical Reasoning:** GSM8K (4-shot), MATH (4-shot), Math-Bench
+ - **Code Generation:** HumanEval (0-shot), MBPP (3-shot)
+ - **Comprehensive Knowledge:** MMLU (5-shot), MMLU-STEM (5-shot)
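
The "Average" columns in the result tables below are consistent with an unweighted mean over these per-benchmark scores; the short sketch below reproduces the 41.29 reported for the magic-html + w3m parser baseline (the equal-weight averaging rule is our reading of the tables, not an official script).

```python
# Sketch: recompute an "Average" cell as the unweighted mean of per-benchmark scores.
# Scores are taken from the magic-html + w3m row of the parser table below;
# the equal-weight averaging is an assumption inferred from the reported numbers.
scores = {
    "MMLU": 51.23,       # 5-shot
    "GSM8K": 51.63,      # 4-shot
    "HumanEval": 26.83,  # 0-shot
    "MATH": 26.58,       # 4-shot
    "MBPP": 45.02,       # 3-shot
    "MMLU-STEM": 46.45,  # 5-shot
}
average = sum(scores.values()) / len(scores)
print(f"Average = {average:.2f}")  # -> Average = 41.29
```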
 
+ ### 🔧 Experimental Setup
 
+ Following the **Decay Verification** method described in the paper, we evaluated data quality by continuing the pre-training of a **MiniCPM-1.2B** base model (trained on 1.3T tokens) with the candidate data mixed in:
+
+ | Hyperparameter | Value | Description |
+ | :--- | :--- | :--- |
+ | **Base Model** | MiniCPM-1.2B | Pre-trained on 1.3T tokens of the MiniCPM-4 corpus |
+ | **Training Tokens** | ~100B | 20,000 steps |
+ | **Data Mixture** | 30% Target / 70% Base | 30% UltraData-Math + 70% general data |
+ | **Sequence Length** | 4096 | |
+ | **Global Batch Size** | 1280 | Micro-batch size 10 |
+ | **Learning Rate** | 7.5e-4 $\to$ 3.75e-5 | Exponential decay |
+ | **Optimizer** | AdamW | with Maximal Update Parameterization (µP) |
+
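
As a quick sanity check on the table above, the sketch below recomputes the token budget and traces the learning-rate decay; the geometric interpolation between the two listed learning rates is an assumption about the schedule's exact shape.

```python
# Sketch: sanity-check the ~100B-token budget and trace the exponential LR decay
# implied by the setup table. The geometric interpolation between the listed
# endpoints is an assumption; the actual schedule may differ in detail.
GLOBAL_BATCH = 1280
SEQ_LEN = 4096
STEPS = 20_000
LR_START, LR_END = 7.5e-4, 3.75e-5

total_tokens = GLOBAL_BATCH * SEQ_LEN * STEPS
print(f"total tokens ≈ {total_tokens / 1e9:.1f}B")  # ≈ 104.9B, i.e. ~100B

def lr_at(step: int) -> float:
    """Exponential (geometric) decay from LR_START to LR_END over STEPS."""
    return LR_START * (LR_END / LR_START) ** (step / STEPS)

for s in (0, 10_000, 20_000):
    print(f"step {s:>6}: lr = {lr_at(s):.2e}")
```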
+ ### Effectiveness of L0 Parsing Strategy
+
+ To compare parsing strategies fairly, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source with each parser and **applied the same L1 cleaning operators to all baselines**. This comparison therefore isolates the **overall benefit of our L0 Parser + L1 Filtering pipeline** against other parsers under identical cleaning conditions (a minimal harness sketch follows the table below).
 
  | Parser | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
  | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
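
As referenced above, a purely illustrative harness for this protocol: every function name below is a placeholder (not the actual pipeline API); the point is only that each parser sees the same raw HTML and each parsed result passes through identical L1 cleaning before training.

```python
# Hypothetical sketch of the parser comparison: same raw HTML into each parser,
# then identical L1 cleaning, yielding one training corpus per parser.
# Every name here is a placeholder for illustration, not the real pipeline API.
from typing import Callable, Dict, List
import re

def clean_l1(text: str) -> str:
    """Stand-in for the shared L1 cleaning operators applied to every baseline."""
    return " ".join(text.split())

def strip_tags(html: str) -> str:
    """Trivial stand-in for a real HTML-to-text parser such as those in the table."""
    return re.sub(r"<[^>]+>", " ", html)

def build_corpus(docs: List[str], parser: Callable[[str], str]) -> List[str]:
    """Parse every document with one parser, then apply the same L1 cleaning."""
    return [clean_l1(parser(html)) for html in docs]

raw_html = ["<p>Prime factorization: 12 = 2^2 * 3.</p>"]  # toy input
parsers: Dict[str, Callable[[str], str]] = {"toy-parser": strip_tags}

corpora = {name: build_corpus(raw_html, p) for name, p in parsers.items()}
print(corpora)  # each corpus would then feed an identical decay-verification run
```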
 
+ ### Pipeline Effectiveness (L1 vs L2 vs L3)
+
+ To validate the effectiveness of our L0-L3 hierarchical framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.
+
+ | Dataset | Average | MMLU | ARC-E | ARC-C | BBH | CSQA | Hella. | OBQA | PIQA | SIQA | Wino. | MATH | GSM8K | MBPP | HumanEval | CMMLU | C-Eval | Avg_ZH |
+ | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ | **UltraData-Math-L1** | 48.39 | 51.41 | 54.50 | 37.29 | 37.75 | 60.44 | 58.02 | 41.60 | 74.21 | 41.71 | 57.14 | 27.78 | 54.66 | 44.71 | 29.88 | 51.28 | 51.89 | 51.59 |
+ | **UltraData-Math-L2** | 48.59 | 50.93 | 55.20 | 36.95 | 39.27 | 60.20 | 57.52 | 39.80 | 74.48 | 44.73 | 57.77 | 29.20 | 52.92 | 44.50 | 32.32 | 51.13 | 50.55 | 50.84 |
+ | **UltraData-Math-L3** | **52.23** | **51.67** | **59.79** | **38.98** | **43.62** | **61.18** | **58.27** | **57.00** | **74.76** | 43.35 | **59.04** | **37.02** | **61.79** | **49.27** | **32.93** | **52.87** | **54.08** | **53.48** |
+
+ *Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH, GSM8K) and general capabilities.*
 
  ### Full Evaluation Results
 
  - **L3 Synthesis Layer**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), [GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
  - **Seed Data**: [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath), [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
 
+ ## 📖 Citation
+
+ If you find **UltraData-Math** useful in your research, please consider citing:
+
+ ```bibtex
+ @misc{ultradata-math,
+   title={UltraData-Math},
+   author={Chuyue Zhou and Hongya Lv and Xinle Lin and Yudong Wang and Jie Zhou and Hengyu Zhao and Junshao Guo and Xueren Zhang and Shuaikang Xue and Zhiyuan Liu},
+   year={2026},
+   url={https://huggingface.co/datasets/openbmb/UltraData-Math},
+   publisher={Hugging Face}
+ }
+ ```
+
  ## 📜 License
 
  This project is licensed under the [Apache 2.0](./LICENSE) license.