Datasets: openbmb/UltraData-Math

ZhouChuYue committed · commit 3559ea7 · 1 parent: bffeb3e

Update README: replace experiment tables with images, update benchmark list

Files changed:
- README.md (+12 -21)
- README_ZH.md (+14 -23)
- assets/ultradata-math-full-comparison.png (+3 -0)
- assets/ultradata-math-l0-parser-comparison.png (+3 -0)
- assets/ultradata-math-l1l2l3-comparison.png (+3 -0)
README.md CHANGED

@@ -162,32 +162,27 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synt

 We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens with the **MiniCPM3-4B** tokenizer) with **~100B tokens** (30% target data + 70% general data). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:

-- **Mathematical Reasoning:**
+- **Mathematical Reasoning:** MATH500, GSM8K, Math-Bench, R-Bench-Math
 - **Code Generation:** HumanEval, MBPP
-- **
+- **English:** MMLU, ARC-E, ARC-C, BBH, CSQA, HellaSwag, OBQA, PIQA, SIQA, WinoGrande
+- **Chinese:** CMMLU, C-Eval

 ### Effectiveness of L0 Parsing Strategy

 To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers. This comparison demonstrates the **effectiveness of our L0 Parser** against other parsers.

-
-
-
-| trafilatura + w3m | 42.33 | 50.95 | 45.52 | 27.64 | 54.51 | **47.93** | 27.44 |
-| trafilatura | 42.44 | 51.42 | 46.62 | 28.08 | **56.03** | 45.64 | 26.83 |
-| Megamath | 42.32 | **51.46** | **46.81** | 26.04 | 54.06 | 45.64 | 29.88 |
-| magic-html + w3m | 41.29 | 51.23 | 46.45 | 26.58 | 51.63 | 45.02 | 26.83 |
+<div align="center">
+<img src="assets/ultradata-math-l0-parser-comparison.png" width="700"/>
+</div>

 ### Pipeline Effectiveness (L1 vs L2 vs L3)

 To validate the effectiveness of our L0-L3 tiered framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

-
-
-
-| **UltraData-Math-L2** | 42.57 | 50.93 | 45.52 | 29.20 | 52.92 | 44.50 | 32.32 |
-| **UltraData-Math-L3** | **46.44** | **51.67** | **45.93** | **37.02** | **61.79** | **49.27** | **32.93** |
+<div align="center">
+<img src="assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
+</div>

 *Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.*

@@ -195,13 +190,9 @@ To validate the effectiveness of our L0-L3 tiered framework, we conducted ablati

 To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:

-
-
-
-| Nemotron-cc 4plus mind | 43.45 | 52.09 | 45.99 | 35.96 | 59.97 | 48.03 | 34.76 | **23.51** | 47.25 |
-| Nemotron-cc 4plus | 42.62 | 51.96 | 45.67 | 33.40 | 58.45 | 46.47 | **35.37** | 22.74 | 46.92 |
-| MegaMath-Web-Pro | 41.38 | **53.16** | **47.15** | 32.12 | 56.71 | 47.10 | 31.71 | 21.23 | 41.83 |
-| FineMath-4+ | 40.51 | 50.90 | 44.98 | 29.84 | 56.25 | 48.96 | 29.88 | 18.93 | 44.33 |
+<div align="center">
+<img src="assets/ultradata-math-full-comparison.png" width="700"/>
+</div>

 ## ❤️ Acknowledgements

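For context, the `ds = load_dataset(...)` call in the hunk headers above comes from the README's usage section, with the config name cut off by the diff view (`UltraData-Math-L3-Conversation-Synt`). A minimal sketch of loading a subset with the Hugging Face `datasets` library; the config string below is a placeholder, and the exact names are listed on the dataset card:

```python
# Minimal loading sketch. "UltraData-Math-L1" is a placeholder config name;
# the real config strings (including the L3 conversation config truncated
# in the hunk header above) are listed on the dataset card.
from datasets import load_dataset

ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L1",
                  split="train", streaming=True)
for example in ds.take(2):  # stream a few records without a full download
    print(example)
```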
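The Decay Verification recipe described in the diff mixes 30% target data with 70% general data for the ~100B-token continued pre-training run. The mixing code is not part of this commit; a rough sketch of that sampling ratio with `datasets.interleave_datasets`, where the general corpus and the `text` column are stand-ins chosen for illustration:

```python
# Rough sketch of the 30% target / 70% general mixture from the Decay
# Verification setup. The general corpus below is a stand-in; the actual
# general data used in the experiments is not named in the README. The
# README's ratio is over tokens, so a faithful reproduction would need
# length-aware sampling rather than per-example probabilities.
from datasets import load_dataset, interleave_datasets

target = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L1",  # placeholder config
                      split="train", streaming=True)
general = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)  # stand-in corpus

# Keep only a shared column so the two schemas are compatible;
# "text" is an assumption about the field name, adjust to the actual schema.
target = target.select_columns(["text"])
general = general.select_columns(["text"])

mixed = interleave_datasets([target, general], probabilities=[0.3, 0.7], seed=42)
```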
README_ZH.md CHANGED

@@ -18,7 +18,7 @@
 - **Data quality:** existing datasets generally lack a systematic quality-grading mechanism, so high-value mathematical content is mixed with low-quality noise.
 - **Data diversity:** mainstream datasets are drawn largely from textbooks or competition problem banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a single format and struggles to cover diverse needs such as multi-turn dialogue and varied styles of expression.

-To address these issues, we present ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning. The dataset is built on [UltraData](https://
+To address these issues, we present ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning. The dataset is built on the L0-L4 tiered data-management framework of [UltraData](https://ultradata.openbmb.cn/blog/position-paper) and comprises four progressive tiers:

 - **L0 Raw Data Tier:** a math parser developed on top of *magic-html*, combined with layout-preserving *w3m* rendering and a multi-level fallback strategy, normalizing MathML, KaTeX, and AsciiMath into LaTeX.
 - **L1 Filtered Data Tier:** noise cleaned with heuristic rules, plus document-level deduplication.

@@ -35,7 +35,7 @@

 ## 🏗️ Data Processing Pipeline

-To overcome the quality and diversity limitations of existing math datasets, we established a fine-grained grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the [UltraData](https://
+To overcome the quality and diversity limitations of existing math datasets, we established a fine-grained grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 tiered data-management framework** proposed in the [UltraData](https://ultradata.openbmb.cn/blog/position-paper) position paper, using standardized tier definitions to manage mathematical data assets in an orderly way and move them efficiently between stages. Each tier represents higher data purity and mathematical value, along with a correspondingly finer degree of processing.

 <div align="center">
 <img src="assets/ultradata-math-pipeline.png" width="900"/>

@@ -123,32 +123,27 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synt

 We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens with the **MiniCPM3-4B** tokenizer) with **~100B tokens** (30% target data + 70% general data). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:

-- **Mathematical Reasoning:** GSM8K,
+- **Mathematical Reasoning:** MATH500, GSM8K, Math-Bench, R-Bench-Math
 - **Code Generation:** HumanEval, MBPP
-
-
+- **English:** MMLU, ARC-E, ARC-C, BBH, CSQA, HellaSwag, OBQA, PIQA, SIQA, WinoGrande
+- **Chinese:** CMMLU, C-Eval

 ### Effectiveness of L0 Parsing Strategy

 To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers. This comparison demonstrates the **effectiveness of our L0 Parser** against other parsers.

-
-
-
-| trafilatura + w3m | 42.33 | 50.95 | 45.52 | 27.64 | 54.51 | **47.93** | 27.44 |
-| trafilatura | 42.44 | 51.42 | 46.62 | 28.08 | **56.03** | 45.64 | 26.83 |
-| Megamath | 42.32 | **51.46** | **46.81** | 26.04 | 54.06 | 45.64 | 29.88 |
-| magic-html + w3m | 41.29 | 51.23 | 46.45 | 26.58 | 51.63 | 45.02 | 26.83 |
+<div align="center">
+<img src="assets/ultradata-math-l0-parser-comparison.png" width="700"/>
+</div>

 ### Pipeline Effectiveness (L1 vs L2 vs L3)

 To validate the effectiveness of our L0-L3 tiered framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

-
-
-
-| **UltraData-Math-L2** | 42.57 | 50.93 | 45.52 | 29.20 | 52.92 | 44.50 | 32.32 |
-| **UltraData-Math-L3** | **46.44** | **51.67** | **45.93** | **37.02** | **61.79** | **49.27** | **32.93** |
+<div align="center">
+<img src="assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
+</div>

 *Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.*

@@ -156,13 +151,9 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synt

 To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:

-
-
-
-| Nemotron-cc 4plus mind | 43.45 | 52.09 | 45.99 | 35.96 | 59.97 | 48.03 | 34.76 | **23.51** | 47.25 |
-| Nemotron-cc 4plus | 42.62 | 51.96 | 45.67 | 33.40 | 58.45 | 46.47 | **35.37** | 22.74 | 46.92 |
-| MegaMath-Web-Pro | 41.38 | **53.16** | **47.15** | 32.12 | 56.71 | 47.10 | 31.71 | 21.23 | 41.83 |
-| FineMath-4+ | 40.51 | 50.90 | 44.98 | 29.84 | 56.25 | 48.96 | 29.88 | 18.93 | 44.33 |
+<div align="center">
+<img src="assets/ultradata-math-full-comparison.png" width="700"/>
+</div>

 ## ❤️ Acknowledgements

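The L0 tier described in the diff above pairs a *magic-html*-based parser with layout-preserving *w3m* rendering and a multi-level fallback strategy. The parser itself is not part of this commit; a rough sketch of the extract-then-fallback idea, assuming magic-html's documented `GeneralExtractor` interface and omitting the MathML/KaTeX/AsciiMath-to-LaTeX normalization:

```python
# Rough sketch of an extract-then-fallback chain in the spirit of the L0
# parser described above. Assumes GeneralExtractor.extract returns a dict
# whose "html" field holds the extracted main content; the math-to-LaTeX
# normalization of the real parser is not reproduced here.
import subprocess

from magic_html import GeneralExtractor  # pip install magic-html

def extract_main_text(html: str, url: str = "") -> str:
    try:
        result = GeneralExtractor().extract(html, base_url=url)
        if result.get("html"):
            return result["html"]
    except Exception:
        pass  # fall through to the layout-preserving fallback
    # Fallback: `w3m -dump` renders the page to text while keeping table
    # and line layout, which matters for math-heavy pages.
    proc = subprocess.run(
        ["w3m", "-dump", "-T", "text/html"],
        input=html.encode("utf-8"),
        capture_output=True,
        check=True,
    )
    return proc.stdout.decode("utf-8", errors="replace")
```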
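The L1 tier above specifies heuristic cleaning plus document-level deduplication, without naming the algorithm. A minimal exact-match sketch (normalize whitespace and case, hash, drop repeats); production pipelines typically layer near-duplicate detection such as MinHash on top:

```python
# Minimal exact-match document dedup, illustrating the document-level
# deduplication named for the L1 tier; the actual algorithm used for
# UltraData-Math is not specified in the README.
import hashlib
from typing import Iterable, Iterator

def dedup_documents(docs: Iterable[str]) -> Iterator[str]:
    seen: set[str] = set()
    for doc in docs:
        # Normalize whitespace and case so trivial variants collapse
        # to the same fingerprint.
        normalized = " ".join(doc.split()).lower()
        key = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield doc
```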
assets/ultradata-math-full-comparison.png ADDED (Git LFS)

assets/ultradata-math-l0-parser-comparison.png ADDED (Git LFS)

assets/ultradata-math-l1l2l3-comparison.png ADDED (Git LFS)