ZhouChuYue committed
Commit 8e8d4fe · 1 Parent(s): e3b69c3

Update README: Merge experimental setup into results section

Files changed (2):
  1. README.md +2 -6
  2. README_ZH.md +1 -5
README.md CHANGED
@@ -133,16 +133,12 @@ Natural web data is mostly declarative text, lacking structured reasoning steps
 
 ## 📈 Experimental Results
 
-We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was conducted with a training volume of **100 billion Tokens**, using the **Decay Verification** method (annealing from a 1.3T base model). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:
+We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens with **MiniCPM3-4B** tokenizer) with **~100B tokens** (30% target data + 70% general data). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:
 
 - **Mathematical Reasoning:** GSM8K, MATH500, Math-Bench, R-Bench-Math
 - **Code Generation:** HumanEval, MBPP
 - **Comprehensive Knowledge:** MMLU, MMLU-STEM
 
-### 🔧 Experimental Setup
-
-We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens) with **~100B tokens** (30% target data + 70% general data).
-
 ### Effectiveness of L0 Parsing Strategy
 
 To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers and **applied the same L1 cleaning operators to all baselines**. This comparison demonstrates the **overall benefit of our L0 Parser + L1 Filtering pipeline** against other parsers under identical cleaning conditions.
@@ -193,7 +189,7 @@ If you find **UltraData-Math** useful in your research, please consider citing:
 ```bibtex
 @misc{ultradata-math,
   title={UltraData-Math},
-  author={Chuyue Zhou and Hongya Lv and Xinle Lin and Yudong Wang and Jie Zhou and Hengyu Zhao and Junshao Guo and Xueren Zhang and Shuaikang Xue and Zhiyuan Liu},
+  author={UltraData Team},
   year={2026},
   url={https://huggingface.co/datasets/openbmb/UltraData-Math},
   publisher={Hugging Face}
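
For context on the merged setup paragraph: training mixes 30% target data with 70% general data across the ~100B-token budget. Below is a minimal sketch of that kind of ratio-based mixture sampling, assuming two hypothetical batch iterators; it is an illustration, not the team's actual training stack.

```python
# Sketch of 30/70 mixture sampling over two batch streams (illustration only).
# `target_batches` / `general_batches` are hypothetical iterators over
# pre-tokenized batches; the real MiniCPM training pipeline is not public.
import random

TARGET_RATIO = 0.30  # share of the ~100B-token budget drawn from target data

def mix_batches(target_batches, general_batches, ratio=TARGET_RATIO, seed=0):
    """Yield batches so that roughly `ratio` of them come from the target stream."""
    rng = random.Random(seed)
    while True:
        stream = target_batches if rng.random() < ratio else general_batches
        batch = next(stream, None)  # stop once either stream is exhausted
        if batch is None:
            return
        yield batch

# Usage: mixed = mix_batches(iter(target_data), iter(general_data))
```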
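
Evaluation runs through [OpenCompass](https://github.com/open-compass/opencompass). A hedged sketch of a benchmark config follows, using OpenCompass's documented `read_base` pattern; the dataset module paths and the model path are assumptions (paths vary across OpenCompass versions, and the annealed checkpoint is not a released model ID).

```python
# eval_ultradata.py -- hypothetical OpenCompass config sketch.
# Dataset import paths follow OpenCompass's `read_base` convention but may
# differ by version; treat module names and the model path as assumptions.
from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM

with read_base():
    from opencompass.configs.datasets.gsm8k.gsm8k_gen import gsm8k_datasets
    from opencompass.configs.datasets.math.math_gen import math_datasets
    from opencompass.configs.datasets.humaneval.humaneval_gen import humaneval_datasets
    from opencompass.configs.datasets.mbpp.mbpp_gen import mbpp_datasets
    from opencompass.configs.datasets.mmlu.mmlu_gen import mmlu_datasets

datasets = (gsm8k_datasets + math_datasets + humaneval_datasets
            + mbpp_datasets + mmlu_datasets)

models = [dict(
    type=HuggingFaceCausalLM,
    abbr='minicpm-1.2b-decay',           # annealed checkpoint under test
    path='/path/to/decayed-checkpoint',  # placeholder, not a released model
    max_out_len=512,
    batch_size=8,
    run_cfg=dict(num_gpus=1),
)]
```

The config would then be launched with `opencompass eval_ultradata.py` (or `python run.py eval_ultradata.py` on older checkouts).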
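
The parsing comparison in the results section keeps L1 cleaning fixed while swapping the L0 parser. A toy harness illustrating that control structure is sketched below; trafilatura and resiliparse stand in for the unnamed baseline parsers, and `l1_clean` is a placeholder, since the actual L1 operators are not shown in this commit.

```python
# Toy harness for the "same L1 cleaning, different L0 parsers" comparison.
# trafilatura / resiliparse are stand-ins for the baseline parsers;
# `l1_clean` is a placeholder, not the real L1 operator set.
import re

import trafilatura
from resiliparse.extract.html2text import extract_plain_text

def l1_clean(text: str) -> str:
    """Placeholder cleaning pass: collapse whitespace, drop empty lines."""
    text = re.sub(r"[ \t]+", " ", text)
    lines = (line.strip() for line in text.splitlines())
    return "\n".join(line for line in lines if line)

def parse_with_all(html: str) -> dict[str, str]:
    """Extract text with each baseline parser, then apply identical cleaning."""
    raw_outputs = {
        "trafilatura": trafilatura.extract(html) or "",
        "resiliparse": extract_plain_text(html),
    }
    return {name: l1_clean(text) for name, text in raw_outputs.items()}
```
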
README_ZH.md CHANGED
@@ -98,16 +98,12 @@ Although L1 data is cleanly formatted, its content quality is uneven. The L2 stage introduces
 
 ## 📈 Experimental Results
 
-We used the **MiniCPM-1.2B** model architecture and the **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was trained on **100 billion tokens**, using the **Decay Verification** method (annealing from a base model pre-trained on 1.3T tokens). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:
+We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (with the **MiniCPM3-4B** tokenizer, pre-trained on 1.3T tokens) on **~100B tokens** (30% target data + 70% general data). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:
 
 - **Mathematical Reasoning:** GSM8K, MATH500, Math-Bench, R-Bench-Math
 - **Code Generation:** HumanEval, MBPP
 - **Comprehensive Knowledge:** MMLU, MMLU-STEM
 
-### 🔧 Experimental Setup
-
-We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens) on **~100B tokens** (30% target data + 70% general data).
-
 ### Effectiveness of the L0 Parsing Strategy
 
 To fairly compare different parsing strategies, we ran experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML with different parsers and **applied the same L1 cleaning operators to all baselines**. This comparison demonstrates the **combined benefit of our L0 Parser + L1 Filtering pipeline**.