Datasets: ZhouChuYue
Cursor committed · Commit 4a41aee · Parent(s): 98dbb6e

Update README: Simplify experimental setup, unify table column order, remove commonsense benchmarks

README.md CHANGED
@@ -135,44 +135,33 @@ We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer fo
- **Mathematical Reasoning:** GSM8K (4-shot), MATH (4-shot), Math-Bench
- **Code Generation:** HumanEval (0-shot), MBPP (3-shot)
- **Comprehensive Knowledge:** MMLU (5-shot), MMLU-STEM (5-shot)
- - **Commonsense Reasoning:** ARC-E/C (0-shot), BBH (3-shot), CommonSenseQA (8-shot), HellaSwag (0-shot), OpenBookQA (0-shot), PIQA (0-shot), SIQA (0-shot), Winogrande (0-shot)

### 🔧 Experimental Setup

- | Hyperparameter | Value | Description |
- | :--- | :--- | :--- |
- | **Base Model** | MiniCPM-1.2B | Pre-trained on 1.3T MiniCPM-4 corpus |
- | **Training Tokens** | ~100B | 20,000 steps |
- | **Data Mixture** | 30% Target / 70% Base | 30% UltraData-Math + 70% General Data |
- | **Sequence Length** | 4096 | |
- | **Global Batch Size** | 1280 | Micro batch size 10 |
- | **Learning Rate** | 7.5e-4 $\to$ 3.75e-5 | Exponential decay |
- | **Optimizer** | AdamW | with Maximal Update Parameterization (µP) |
+ We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens) with **~100B tokens** (30% target data + 70% general data).

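For reference, here is a minimal sketch of the schedule implied by the hyperparameters above: documents drawn at a fixed 30/70 target/general ratio, with the learning rate decaying exponentially from 7.5e-4 to 3.75e-5 over the 20,000 steps. This is an illustration of the stated setup, not the authors' training code; the function names are ours.

```python
import random

# Hyperparameters from the (removed) setup table above.
LR_START, LR_END = 7.5e-4, 3.75e-5  # exponential-decay endpoints
TOTAL_STEPS = 20_000                # ~100B tokens at 4096 tokens x 1280 sequences/step
TARGET_RATIO = 0.30                 # 30% UltraData-Math, 70% general data

def lr_at(step: int) -> float:
    """Exponential decay: lr(t) = lr0 * (lr_end / lr0) ** (t / T)."""
    return LR_START * (LR_END / LR_START) ** (step / TOTAL_STEPS)

def sample_source(rng: random.Random) -> str:
    """Draw the corpus for the next document under the 30/70 mixture."""
    return "UltraData-Math" if rng.random() < TARGET_RATIO else "general"

for step in (0, 10_000, 20_000):
    print(f"step {step:>6}: lr = {lr_at(step):.2e}")
# step      0: lr = 7.50e-04
# step  10000: lr = 1.68e-04
# step  20000: lr = 3.75e-05

rng = random.Random(0)
mix = [sample_source(rng) for _ in range(10_000)]
print(mix.count("UltraData-Math") / len(mix))  # ≈ 0.30
```
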
### Effectiveness of L0 Parsing Strategy
To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers and **applied the same L1 cleaning operators to all baselines**. This comparison demonstrates the **overall benefit of our L0 Parser + L1 Filtering pipeline** against other parsers under identical cleaning conditions.

+ | Parser | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 46.76 | **28.72** | 54.97 | 47.10 | **31.71** |
+ | trafilatura + w3m | 42.33 | 50.95 | 45.52 | 27.64 | 54.51 | **47.93** | 27.44 |
+ | trafilatura | 42.44 | 51.42 | 46.62 | 28.08 | **56.03** | 45.64 | 26.83 |
+ | Megamath | 42.32 | **51.46** | **46.81** | 26.04 | 54.06 | 45.64 | 29.88 |
+ | magic-html + w3m | 41.29 | 51.23 | 46.45 | 26.58 | 51.63 | 45.02 | 26.83 |

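As a quick sanity check (ours, not part of the README): the Average column is the unweighted mean of the six benchmark columns, up to rounding.

```python
# Scores hard-coded from the parser table above: (reported average, six benchmark scores).
rows = {
    "UltraData-Math-L0-Parser": (43.44, [51.41, 46.76, 28.72, 54.97, 47.10, 31.71]),
    "trafilatura + w3m":        (42.33, [50.95, 45.52, 27.64, 54.51, 47.93, 27.44]),
    "trafilatura":              (42.44, [51.42, 46.62, 28.08, 56.03, 45.64, 26.83]),
    "Megamath":                 (42.32, [51.46, 46.81, 26.04, 54.06, 45.64, 29.88]),
    "magic-html + w3m":         (41.29, [51.23, 46.45, 26.58, 51.63, 45.02, 26.83]),
}
for parser, (reported, scores) in rows.items():
    mean = sum(scores) / len(scores)
    assert abs(mean - reported) < 0.01, (parser, mean)  # agree to within rounding
```
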
### Pipeline Effectiveness (L1 vs L2 vs L3)
To validate the effectiveness of our L0-L3 hierarchical framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

+ | Dataset | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ | **UltraData-Math-L1** | 42.31 | 51.41 | 45.44 | 27.78 | 54.66 | 44.71 | 29.88 |
+ | **UltraData-Math-L2** | 42.57 | 50.93 | 45.52 | 29.20 | 52.92 | 44.50 | 32.32 |
+ | **UltraData-Math-L3** | **46.44** | **51.67** | **45.93** | **37.02** | **61.79** | **49.27** | **32.93** |

*Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH, GSM8K) and general capabilities.*
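
To make the note concrete, the per-benchmark gains of L3 over L1 can be read off the table; a small script (ours, with the scores hard-coded from above):

```python
# Gain of UltraData-Math-L3 over L1 on each benchmark (values from the table above).
benchmarks = ["MMLU", "MMLU-STEM", "Math", "GSM8K", "MBPP", "HumanEval"]
l1 = [51.41, 45.44, 27.78, 54.66, 44.71, 29.88]
l3 = [51.67, 45.93, 37.02, 61.79, 49.27, 32.93]
for name, v1, v3 in zip(benchmarks, l1, l3):
    print(f"{name:>9}: {v1:.2f} -> {v3:.2f}  (+{v3 - v1:.2f})")
# The largest jumps are Math (+9.24) and GSM8K (+7.13), with solid gains on code tasks too.
```
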

@@ -180,13 +169,13 @@ To validate the effectiveness of our L0-L3 hierarchical framework, we conducted

We trained on each data source independently (one dataset per run) to directly compare their effects:

+ | Model | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval | R-bench | Math-bench |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | **UltraData-Math (Ours)** | **43.79** | 51.67 | 45.93 | **37.02** | **61.79** | **49.27** | 32.93 | 23.38 | **48.33** |
+ | Nemotron-cc 4plus mind | 43.45 | 52.09 | 45.99 | 35.96 | 59.97 | 48.03 | 34.76 | **23.51** | 47.25 |
+ | Nemotron-cc 4plus | 42.62 | 51.96 | 45.67 | 33.40 | 58.45 | 46.47 | **35.37** | 22.74 | 46.92 |
+ | MegaMath-Web-Pro | 41.38 | **53.16** | **47.15** | 32.12 | 56.71 | 47.10 | 31.71 | 21.23 | 41.83 |
+ | FineMath-4+ | 40.51 | 50.90 | 44.98 | 29.84 | 56.25 | 48.96 | 29.88 | 18.93 | 44.33 |

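Note that the six shared columns of **UltraData-Math (Ours)** match the L3 row above, yet the Average differs (43.79 vs 46.44): this table averages over eight benchmarks, adding R-bench and Math-bench. A quick check (ours, with the scores hard-coded from the tables):

```python
shared = [51.67, 45.93, 37.02, 61.79, 49.27, 32.93]  # MMLU ... HumanEval
print(round(sum(shared) / 6, 2))                     # ≈ 46.44, the L3 average above
full = shared + [23.38, 48.33]                       # + R-bench, Math-bench
print(round(sum(full) / 8, 2))                       # ≈ 43.79, the average reported here
```
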
## ❤️ Acknowledgements