ZhouChuYue committed
Commit 7f39997 · Parent: 6a38a85

Merge L1-L3 result note into description paragraph

Files changed (2):
  1. README.md (+1 −3)
  2. README_ZH.md (+1 −3)
README.md CHANGED
@@ -178,14 +178,12 @@ To fairly compare different parsing strategies, we conducted experiments on a da
 
 ### Pipeline Effectiveness (L1 vs L2 vs L3)
 
-To validate the effectiveness of our L0-L3 tiered framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.
+To validate the effectiveness of our L0-L3 tiered framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**. Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.
 
 <div align="center">
 <img src="assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
 </div>
 
-*Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.*
-
 ### Full Evaluation Results
 
 To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:
README_ZH.md CHANGED
@@ -139,14 +139,12 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synt
 
 ### 流水线有效性(L1 vs L2 vs L3)
 
-为验证 L0-L3 分级框架的有效性,我们对使用不同层级 UltraData-Math 训练的模型进行了消融实验。与上文 L0 解析器对比(使用 2023-2024 子集)不同,以下结果基于**全量数据集**。
+为验证 L0-L3 分级框架的有效性,我们对使用不同层级 UltraData-Math 训练的模型进行了消融实验。与上文 L0 解析器对比(使用 2023-2024 子集)不同,以下结果基于**全量数据集**。结果表明,更高层级的数据(L3)显著提升了数学推理能力(MATH500、GSM8K)及通用能力。
 
 <div align="center">
 <img src="assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
 </div>
 
-*注:结果表明,更高层级的数据(L3)显著提升了数学推理能力(MATH500、GSM8K)及通用能力。*
-
 ### 完整评测结果
 
 为与现有公开数学预训练数据集进行对比,我们使用相同的模型架构和训练预算(~100B tokens)在每个数据集上独立训练模型。基线包括 [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1)、[MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath) 和 [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)。所有模型在相同条件下评估以确保公平对比: