ZhouChuYue committed on
Commit 9cd8f42 · 1 Parent(s): b835cc9

language switch

Files changed (3)
  1. README.md +66 -67
  2. README_EN.md +0 -156
  3. README_ZH.md +158 -0
README.md CHANGED
@@ -27,106 +27,106 @@ tags:
27
 
28
  <div align="center">
29
 
30
- [🤗 数据集](https://huggingface.co/datasets/openbmb/UltraData-Math-L1) | [💻 代码仓库](https://github.com/UltraData-OpenBMB/UltraData-Math)
31
 
32
 
33
  </div>
34
 
35
- ## 📚 简介
36
 
37
- 高质量预训练数据对提升大语言模型的数学推理能力至关重要。然而,现有数学预训练数据构建方案存在以下不足:
38
 
39
- - **HTML 解析层面**:通用提取器(如 trafilatura、readability)主要面向新闻/文章场景设计,对数学公式等内容缺乏专门处理,常导致公式结构破坏或丢失;同时论坛类页面的数学讨论部分,难以完整提取。
40
- - **数据质量层面**:现有数据集普遍缺乏系统的质量分级机制,高价值数学内容与低质噪声混杂。
41
- - **数据多样性层面**:主流数据集多源自教科书或竞赛题库,缺少真实网页中的数学讨论与应用场景;合成数据格式单一,难以覆盖多轮对话、多风格表达等多样化需求。
42
 
43
- 针对上述问题,我们提出 ***UltraData-Math***——一个面向数学推理任务的大规模高质量预训练数据集。本数据集基于 [Ultra-Data](xxx) 的 L0-L4 分级数据处理框架开发,包含四个递进层级:
44
 
45
- - **L0 原始数据层**:基于 *magic-html* 开发数学解析器,结合 *w3m* 布局保持渲染与多级回退策略,将 MathML、KaTeX、AsciiMath 标准化为 LaTeX 格式
46
- - **L1 过滤数据层**:通过启发式规则清洗噪声并进行文档级去重
47
- - **L2 精筛数据层**:使用闭源大模型标注种子数据并蒸馏至轻量 Embedding 分类器,实现全量语料的高效质量分级
48
- - **L3 合成数据层**:基于多模型集成生成 Q&A、多轮对话、多风格改写、知识接地教材等多种格式的合成数据
49
 
50
- 实验表明,在 MiniCPM-1B 架构上,***UltraData-Math*** 在 MATH 基准上达到 **37.02** 分,相较 Nemotron-CC 4plus 提升 **+3.62** 分;在 GSM8K 上达到 **61.79** 分,提升 **+3.34** 分,同时保持代码生成与通用知识能力。
51
 
52
- ***UltraData-Math*** 已应用于 [MiniCPM 系列](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) 模型的数学预训练。
53
 
54
- - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math-L1)**: 大规模高质量数学预训练数据集,包含 159.4B tokens 的网页数学语料。(**<-- you are here**)
55
- - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: 高质量合成数学数据集,包含 37.1B tokens 的多格式合成数据(Q&A、多轮对话、知识教材等)。
56
 
57
- ## 🏗️ 数据处理流水线
58
 
59
- 为突破现有数学数据集在质量与多样性上的局限,我们建立了一套以"数学内容完整性"和"信息密度"为核心的精细化分级标准。***UltraData-Math*** 采用了 UltraData观点论文提出的 **L0-L4 数据分级体系**,通过标准化的层级定义,实现数学数据资产的有序管理与高效流转。每一级都代表了更高的数据纯度与数学价值,同时也对应着更精细的加工程度。
60
 
61
  <div align="center">
62
  <img src="assets/ultradata-math-pipeline.png" width="900"/>
63
  </div>
64
 
65
- ### L0:原始数据解析与标准化
66
 
67
- **目标**:解决通用 HTML 解析器对数学公式支持不佳的问题,最大限度保留网页中的数学语义。
68
 
69
- L0 阶段主要处理从 Common Crawl 等来源获取的原始网页数据。针对数学网页的特殊性,我们开发了专用的解析策略,而非直接使用通用的 trafilatura 或 readability。
70
 
71
- - **统一解析模式**:自动识别页面类型,尽可能保证完整内容提取。
72
- - **多级回退策略**:为了防止解析失败导致数据丢失,我们实施了多级回退机制,确保在结构化解析失败时仍能捕获文本内容。
73
- - **数学公式标准化**:我们将网页中不同的数学表达统一转换为标准的 LaTeX 格式,实现了数据格式的归一化,便于模型统一学习。
74
 
75
- ### L1:启发式清洗与过滤
76
 
77
- **目标**:去除格式噪声,提升数据的可读性和规范性。
78
 
79
- 在获取了包含完整数学公式的文本后,我们通过一系列启发式规则对 L0 数据进行清洗:
80
 
81
- - **格式修复**:
82
- - 清理不可见字符、乱码及非自然的连续换行。
83
- - 移除导航栏、页脚、广告弹窗及"阅读更多"等无关网页噪音。
84
- - **内容过滤**:
85
- - *长度过滤*:移除过短的文本片段,这些片段通常缺乏上下文,难以支持有效的数学推理训练。
86
- - *语言识别*:确保数据集主要由高质量的英文及中文数学内容组成。
87
- - *文档去重*:在文档级别进行去重,防止重复内容对模型训练造成偏差。
88
 
89
- ### L2:基于质量模型的精选
90
 
91
- **目标**:从海量数据中识别出具有高价值的核心语料。
92
 
93
- L1 数据虽然格式整洁,但内容质量参差不齐。L2 阶段引入了基于模型的质量评估体系:
94
 
95
- - **种子数据标注**:使用闭源大模型对一部分种子数据进行多维度打分,关注多个维度。
96
- - **分类器训练与蒸馏**:基于标注数据训练轻量级的 Embedding 分类器,使其具备识别高价值数学内容的能力。
97
- - **全量推理**:使用训练好的分类器对 L1 数据进行全量打分与筛选。
98
- - *保留*:包含详细解题步骤、数学概念解释、高水平学术讨论的内容。
99
- - *剔除*:简单的名词堆砌、无意义的数字列表、低幼或非数学领域的噪声。
100
 
101
- ### L3:合成与增强数据
102
 
103
- **目标**:通过合成数据弥补自然语料在格式和场景上的单一性,强化模型的推理链条(CoT)能力。
104
 
105
- 自然网页数据多为陈述性文本,为了增强模型的指令遵循和多轮交互能力,我们构建了 L3 合成数据层:
106
 
107
- - **Q&A 对生成**:利用高性能模型将陈述性文档改写为"问题-回答"对,构建QA风格的数据。
108
- - **多轮对话合成**:模拟"老师-学生"的辅导场景,生成包含追问、纠错、引导的多轮对话数据。
109
- - **多风格改写**:将单一来源的数据改写为多种风格(如教科书严谨风格、竞赛解题风格、科普直观风格),提升模型的泛化能力。
110
- - **知识点教材生成**:基于特定知识点生成系统化的教材类内容,确保模型掌握核心数学概念。
111
 
112
- | 数据集 | # Tokens | # Documents |
113
  |:---|:---:|:---:|
114
  | UltraData-Math-L1 | 159.4B | 85.56M |
115
  | UltraData-Math-L3 | 37.1B | 31.87M |
116
 
117
- ## 📈 实验结果
118
 
119
- 我们采用 **MiniCPM-1.2B** 模型架构与**MiniCPM3-4B**分词器进行实验验证。每个实验均在 **1000 亿 Token** 的训练量下进行,从而能在计算效率可控的参数范围内对数据性能进行全面验证。我们使用Lighteval库进行模型评估,所有评估指标均基于 **Zero-Shot** 设置。评估基准包括:
120
 
121
- - **数学推理:** GSM8K、MATH、R-Bench、Math-Bench
122
- - **代码生成:** HumanEval、MBPP
123
- - **综合知识:** MMLU、MMLU-STEM
124
 
125
- ### L0 解析器消融实验
126
 
127
- 基于相同来源数据,我们使用不同解析器分别提取后独立训练,直接对比解析策略的效果差异:
128
 
129
- | 解析器 | 平均分 | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
130
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
131
  | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 54.97 | **31.71** | **28.72** | 47.10 | 46.76 |
132
  | trafilatura + w3m | 42.33 | 50.95 | 54.51 | 27.44 | 27.64 | 47.93 | 45.52 |
@@ -134,11 +134,11 @@ L1 数据虽然格式整洁,但内容质量参差不齐。L2 阶段引入了
134
  | Megamath | 42.32 | 51.46 | 54.06 | 29.88 | 26.04 | 45.64 | 46.81 |
135
  | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
136
 
137
- ### 完整评测结果
138
 
139
- 我们使用单一数据集进行独立训练,以直接对比不同数据源的效果:
140
 
141
- | 模型 | 平均分 | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem | R-bench | Math-bench |
142
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
143
  | **UltraData-Math (Ours)** | **43.79** | 51.67 | **61.79** | 32.93 | **37.02** | **49.27** | 45.93 | 23.38 | **48.33** |
144
  | Nemotron-cc 4plus mind | 43.45 | 52.09 | 59.97 | 34.76 | 35.96 | 48.03 | 45.99 | 23.51 | 47.25 |
@@ -146,13 +146,12 @@ L1 数据虽然格式整洁,但内容质量参差不齐。L2 阶段引入了
146
  | MegaMath-Web-Pro | 41.38 | 53.16 | 56.71 | 31.71 | 32.12 | 47.10 | 47.15 | 21.23 | 41.83 |
147
  | FineMath-4+ | 40.51 | 50.90 | 56.25 | 29.88 | 29.84 | 48.96 | 44.98 | 18.93 | 44.33 |
148
 
 
149
 
150
- ## ❤️ 致谢
 
 
151
 
152
- - **L0 解析层**:[magic-html](https://github.com/opendatalab/magic-html)、[w3m](http://w3m.sourceforge.net/)、[trafilatura](https://github.com/adbar/trafilatura)
153
- - **L3 合成层**:[Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)、[Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)、[GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
154
- - **种子数据**:[Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1)、[MegaMath](https://huggingface.co/datasets/LLM360/MegaMath)、[FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
155
 
156
- ## 📜 许可证
157
-
158
- 本项目基于 [Apache 2.0](./LICENSE) 许可证发布。
 
27
 
28
  <div align="center">
29
 
30
+ [🤗 Dataset](https://huggingface.co/datasets/openbmb/UltraData-Math-L1) | [💻 Code Repository](https://github.com/UltraData-OpenBMB/UltraData-Math) | [🇨🇳 中文 README](README_ZH.md)
31
 
32
 
33
  </div>
34
 
35
+ ## 📚 Introduction
36
 
37
+ High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of Large Language Models (LLMs). However, existing approaches to constructing mathematical pre-training data have the following shortcomings:
38
 
39
+ - **HTML Parsing**: General extractors (such as trafilatura and readability) are designed mainly for news and article pages and lack specialized handling of mathematical formulas, often breaking or dropping formula structure; mathematical discussions on forum-style pages are also difficult to extract completely.
40
+ - **Data Quality**: Existing datasets generally lack a systematic quality grading mechanism, so high-value mathematical content is mixed with low-quality noise.
41
+ - **Data Diversity**: Mainstream datasets mostly originate from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a single format and fails to cover needs such as multi-turn dialogue and multi-style expression.
42
 
43
+ To address these issues, we propose ***UltraData-Math***—a large-scale high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [Ultra-Data](xxx) L0-L4 hierarchical data processing framework, containing four progressive levels:
44
 
45
+ - **L0 Raw Data Layer**: A mathematics-aware parser built on *magic-html*, combined with *w3m* layout-preserving rendering and multi-level fallback strategies, standardizes MathML, KaTeX, and AsciiMath into LaTeX.
46
+ - **L1 Filtered Data Layer**: Cleans noise through heuristic rules and performs document-level deduplication.
47
+ - **L2 Selected Data Layer**: Uses closed-source large models to annotate seed data and distills it into a lightweight Embedding classifier to achieve efficient quality grading of the full corpus.
48
+ - **L3 Synthetic Data Layer**: Generates synthetic data in various formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks using a multi-model ensemble.
49
 
50
+ Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** achieves a score of **37.02** on the MATH benchmark, an improvement of **+3.62** compared to Nemotron-CC 4plus; it achieves **61.79** on GSM8K, an improvement of **+3.34**, while maintaining code generation and general knowledge capabilities.
51
 
52
+ ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
53
 
54
+ - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math-L1)**: Large-scale high-quality mathematical pre-training dataset, containing 159.4B tokens of web mathematical corpus. (**<-- you are here**)
55
+ - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset, containing 37.1B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
56
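Both subsets can be streamed straight from the Hub with the `datasets` library. A minimal sketch follows; the `train` split name and the record layout are assumptions, so check the dataset viewer for the actual schema:

```python
from datasets import load_dataset

# Stream a few records from UltraData-Math-L1 without downloading the full corpus.
ds = load_dataset("openbmb/UltraData-Math-L1", split="train", streaming=True)

for i, record in enumerate(ds):
    print(record)  # parsed web documents with formulas normalized to LaTeX
    if i == 2:
        break
```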
 
57
+ ## 🏗️ Data Processing Pipeline
58
 
59
+ To overcome the quality and diversity limitations of existing mathematical datasets, we established a fine-grained grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Data Grading System** proposed in the UltraData position paper: standardized level definitions enable orderly management and efficient flow of mathematical data assets, and each successive level represents higher data purity and mathematical value along with a greater degree of processing.
60
 
61
  <div align="center">
62
  <img src="assets/ultradata-math-pipeline.png" width="900"/>
63
  </div>
64
 
65
+ ### L0: Raw Data Parsing and Standardization
66
 
67
+ **Goal**: Address the poor support of general HTML parsers for mathematical formulas and maximize the preservation of mathematical semantics in web pages.
68
 
69
+ The L0 phase mainly processes raw web data obtained from sources such as Common Crawl. Given the specificity of mathematical web pages, we developed specialized parsing strategies instead of directly using general ones like trafilatura or readability.
70
 
71
+ - **Unified Parsing Mode**: Automatically identifies page types to extract content as completely as possible.
72
+ - **Multi-level Fallback Strategy**: To prevent data loss due to parsing failures, we implemented a multi-level fallback mechanism to ensure text content is captured even if structured parsing fails.
73
+ - **Mathematical Formula Standardization**: We unified different mathematical expressions in web pages into standard LaTeX format, achieving data format normalization for unified model learning.
74
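The fallback logic can be pictured as a chain of progressively simpler extractors. The sketch below is illustrative only: it substitutes trafilatura and BeautifulSoup for the magic-html/w3m based parser actually used in L0, and the length gate is an assumed heuristic.

```python
import trafilatura
from bs4 import BeautifulSoup


def extract_with_fallback(html: str) -> str | None:
    """Structure-aware extraction first, with progressively simpler fallbacks."""
    # Level 1: structure-aware extraction (stand-in for the math-aware parser).
    text = trafilatura.extract(html, include_formatting=True)
    if text and len(text.strip()) > 200:
        return text

    # Level 2: strip scripts/styles and keep all visible text, so documents are
    # not lost entirely when structured parsing fails.
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    text = soup.get_text(separator="\n", strip=True)
    return text or None
```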
 
75
+ ### L1: Heuristic Cleaning and Filtering
76
 
77
+ **Goal**: Remove format noise and improve data readability and standardization.
78
 
79
+ After obtaining text containing complete mathematical formulas, we clean the L0 data through a series of heuristic rules:
80
 
81
+ - **Format Repair**:
82
+ - Clean invisible characters, garbled text, and unnatural continuous line breaks.
83
+ - Remove irrelevant web noise such as navigation bars, footers, ad pop-ups, and "read more".
84
+ - **Content Filtering**:
85
+ - *Length Filtering*: Remove overly short text fragments, which usually lack context and cannot support effective mathematical reasoning training.
86
+ - *Language Identification*: Ensure the dataset is composed mainly of high-quality English and Chinese mathematical content.
87
+ - *Document Deduplication*: Perform deduplication at the document level to prevent duplicate content from biasing model training.
88
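A minimal sketch of these heuristics, using only the standard library: the invisible-character set, the 200-character cutoff, and the exact-hash dedup are illustrative assumptions, and language identification (e.g. with a fastText model) is treated as an external step.

```python
import hashlib
import re

MIN_CHARS = 200  # illustrative cutoff; the actual threshold is not published


def repair_format(text: str) -> str:
    """Remove invisible characters and collapse unnatural runs of blank lines."""
    text = re.sub(r"[\u200b\u200e\ufeff]", "", text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()


def keep_document(text: str, lang: str) -> bool:
    """Length and language gates; `lang` comes from an external language identifier."""
    return len(text) >= MIN_CHARS and lang in {"en", "zh"}


def deduplicate(docs: list[str]) -> list[str]:
    """Document-level exact dedup via content hashing (near-duplicate detection
    would need MinHash/LSH and is omitted here)."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.md5(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept
```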
 
89
+ ### L2: Selection Based on Quality Models
90
 
91
+ **Goal**: Identify core corpora with high value from massive data.
92
 
93
+ Although L1 data has a clean format, the content quality varies. The L2 phase introduces a model-based quality assessment system:
94
 
95
+ - **Seed Data Annotation**: Use closed-source large models to score a portion of seed data across multiple dimensions.
96
+ - **Classifier Training and Distillation**: Train lightweight Embedding classifiers based on annotated data to equip them with the ability to identify high-value mathematical content.
97
+ - **Full-scale Inference**: Use the trained classifier to score and screen L1 data in full.
98
+ - *Retention*: Content containing detailed problem-solving steps, mathematical concept explanations, and high-level academic discussions.
99
+ - *Exclusion*: Bare keyword lists, meaningless strings of numbers, overly elementary content, and noise from non-mathematical domains.
100
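The annotate-distill-infer loop can be sketched with off-the-shelf components; the embedding backbone, the logistic-regression head, and the 0.5 keep threshold below are assumptions for illustration, not the undisclosed models used in L2.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative backbone
classifier = LogisticRegression(max_iter=1000)      # lightweight head


def distill_quality_filter(seed_texts: list[str], seed_labels: list[int]) -> None:
    """Fit the lightweight classifier on LLM-annotated seed data (1 = high value)."""
    classifier.fit(embedder.encode(seed_texts), seed_labels)


def select_high_quality(texts: list[str], threshold: float = 0.5) -> list[str]:
    """Full-scale inference: keep documents scored above the quality threshold."""
    probs = classifier.predict_proba(embedder.encode(texts))[:, 1]
    return [t for t, p in zip(texts, probs) if p >= threshold]
```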
 
101
+ ### L3: Synthetic and Augmented Data
102
 
103
+ **Goal**: Use synthetic data to compensate for the limited formats and scenarios of natural corpora and to strengthen the model's chain-of-thought (CoT) reasoning.
104
 
105
+ Natural web data is mostly declarative text. To enhance the model's instruction following and multi-turn interaction capabilities, we built the L3 synthetic data layer:
106
 
107
+ - **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into "Question-Answer" pairs, constructing QA-style data.
108
+ - **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios to generate multi-turn dialogue data containing follow-up questions, corrections, and guidance.
109
+ - **Multi-style Rewriting**: Rewrite single-source data into multiple styles (such as rigorous textbook style, competition problem-solving style, intuitive popular science style) to improve model generalization.
110
+ - **Knowledge Point Textbook Generation**: Generate systematic textbook-like content based on specific knowledge points to ensure the model masters core mathematical concepts.
111
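As a rough illustration of the Q&A rewriting step, the sketch below sends a document to one of the acknowledged ensemble models through an OpenAI-compatible endpoint; the endpoint URL, prompt wording, and sampling settings are assumptions, not the released recipe.

```python
from openai import OpenAI

# Assumes Qwen2.5-72B-Instruct (one of the acknowledged models) is served behind
# an OpenAI-compatible endpoint, e.g. via vLLM; adjust base_url/model as needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

PROMPT = (
    "Rewrite the following passage as a question-answer pair. "
    "Show step-by-step reasoning in the answer and keep all LaTeX formulas intact.\n\n{doc}"
)


def synthesize_qa(document: str) -> str:
    """Turn a declarative document into a QA-style training sample."""
    response = client.chat.completions.create(
        model="Qwen2.5-72B-Instruct",
        messages=[{"role": "user", "content": PROMPT.format(doc=document)}],
        temperature=0.7,
    )
    return response.choices[0].message.content
```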
 
112
+ | Dataset | # Tokens | # Documents |
113
  |:---|:---:|:---:|
114
  | UltraData-Math-L1 | 159.4B | 85.56M |
115
  | UltraData-Math-L3 | 37.1B | 31.87M |
116
 
117
+ ## 📈 Experimental Results
118
 
119
+ We used the **MiniCPM-1.2B** model architecture and the **MiniCPM3-4B** tokenizer for experimental validation. Each experiment was trained on **100 billion tokens**, allowing a thorough comparison of data quality at a manageable computational cost. Models were evaluated with the Lighteval library, and all metrics are reported in a **zero-shot** setting. Evaluation benchmarks include:
120
 
121
+ - **Mathematical Reasoning:** GSM8K, MATH, R-Bench, Math-Bench
122
+ - **Code Generation:** HumanEval, MBPP
123
+ - **Comprehensive Knowledge:** MMLU, MMLU-STEM
124
 
125
+ ### L0 Parser Ablation Study
126
 
127
+ Starting from the same source data, we extracted text with each parser and trained a separate model on each output, directly comparing the parsing strategies:
128
 
129
+ | Parser | Average | MMLU | GSM8K | HumanEval | MATH | MBPP | MMLU-STEM |
130
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
131
  | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 54.97 | **31.71** | **28.72** | 47.10 | 46.76 |
132
  | trafilatura + w3m | 42.33 | 50.95 | 54.51 | 27.44 | 27.64 | 47.93 | 45.52 |
 
134
  | Megamath | 42.32 | 51.46 | 54.06 | 29.88 | 26.04 | 45.64 | 46.81 |
135
  | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
136
 
137
+ ### Full Evaluation Results
138
 
139
+ We trained on each dataset individually to directly compare the different data sources:
140
 
141
+ | Model | Average | MMLU | GSM8K | HumanEval | MATH | MBPP | MMLU-STEM | R-Bench | Math-Bench |
142
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
143
  | **UltraData-Math (Ours)** | **43.79** | 51.67 | **61.79** | 32.93 | **37.02** | **49.27** | 45.93 | 23.38 | **48.33** |
144
  | Nemotron-cc 4plus mind | 43.45 | 52.09 | 59.97 | 34.76 | 35.96 | 48.03 | 45.99 | 23.51 | 47.25 |
 
146
  | MegaMath-Web-Pro | 41.38 | 53.16 | 56.71 | 31.71 | 32.12 | 47.10 | 47.15 | 21.23 | 41.83 |
147
  | FineMath-4+ | 40.51 | 50.90 | 56.25 | 29.88 | 29.84 | 48.96 | 44.98 | 18.93 | 44.33 |
148
 
149
+ ## ❤️ Acknowledgements
150
 
151
+ - **L0 Parsing Layer**: [magic-html](https://github.com/opendatalab/magic-html), [w3m](http://w3m.sourceforge.net/), [trafilatura](https://github.com/adbar/trafilatura)
152
+ - **L3 Synthesis Layer**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), [GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
153
+ - **Seed Data**: [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath), [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
154
 
155
+ ## 📜 License
 
 
156
 
157
+ This project is licensed under the [Apache 2.0](./LICENSE) license.
 
 
README_EN.md DELETED
@@ -1,156 +0,0 @@
1
- ---
2
- language:
3
- - en
4
- - zh
5
- license: apache-2.0
6
- size_categories:
7
- - 100B<n<1T
8
- task_categories:
9
- - text-generation
10
- pretty_name: UltraData-Math
11
- arxiv: xxxx.xxxxx
12
- tags:
13
- - llm
14
- - pretraining
15
- - math
16
- - data-synthesis
17
- - data-filtering
18
- - high-quality
19
- - mathematical-reasoning
20
- ---
21
-
22
- # UltraData-Math
23
-
24
- <div align="center">
25
- <img src="assets/ultradata-math-logo.png" width="600"/>
26
- </div>
27
-
28
- <div align="center">
29
-
30
- [🤗 Dataset](https://huggingface.co/datasets/openbmb/UltraData-Math-L1) | [💻 Code Repository](https://github.com/UltraData-OpenBMB/UltraData-Math)
31
-
32
- </div>
33
-
34
- ## 📚 Introduction
35
-
36
- High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of Large Language Models (LLMs). However, existing mathematical pre-training data construction schemes have the following shortcomings:
37
-
38
- - **HTML Parsing Level**: General extractors (such as trafilatura, readability) are mainly designed for news/article scenarios, lacking specialized processing for mathematical formulas and other content, often leading to formula structure destruction or loss; meanwhile, mathematical discussions on forum-like pages are difficult to extract completely.
39
- - **Data Quality Level**: Existing datasets generally lack a systematic quality grading mechanism, with high-value mathematical content mixed with low-quality noise.
40
- - **Data Diversity Level**: Mainstream datasets mostly originate from textbooks or competition question banks, lacking mathematical discussions and application scenarios in real web pages; synthetic data formats are single, difficult to cover diverse needs such as multi-turn dialogues and multi-style expressions.
41
-
42
- To address these issues, we propose ***UltraData-Math***—a large-scale high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [Ultra-Data](xxx) L0-L4 hierarchical data processing framework, containing four progressive levels:
43
-
44
- - **L0 Raw Data Layer**: Developed a mathematical parser based on *magic-html*, combined with *w3m* layout preservation rendering and multi-level fallback strategies, standardizing MathML, KaTeX, and AsciiMath into LaTeX format.
45
- - **L1 Filtered Data Layer**: Cleans noise through heuristic rules and performs document-level deduplication.
46
- - **L2 Selected Data Layer**: Uses closed-source large models to annotate seed data and distills it into a lightweight Embedding classifier to achieve efficient quality grading of the full corpus.
47
- - **L3 Synthetic Data Layer**: Generates synthetic data in various formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks based on multi-model ensemble.
48
-
49
- Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** achieves a score of **37.02** on the MATH benchmark, an improvement of **+3.62** compared to Nemotron-CC 4plus; it achieves **61.79** on GSM8K, an improvement of **+3.34**, while maintaining code generation and general knowledge capabilities.
50
-
51
- ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
52
-
53
- - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math-L1)**: Large-scale high-quality mathematical pre-training dataset, containing 159.4B tokens of web mathematical corpus. (**<-- you are here**)
54
- - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset, containing 37.1B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
55
-
56
- ## 🏗️ Data Processing Pipeline
57
-
58
- To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Data Grading System** proposed by the UltraData position paper. Through standardized level definitions, it achieves orderly management and efficient flow of mathematical data assets. Each level represents higher data purity and mathematical value, while also corresponding to a more refined degree of processing.
59
-
60
- <div align="center">
61
- <img src="assets/ultradata-math-pipeline.png" width="900"/>
62
- </div>
63
-
64
- ### L0: Raw Data Parsing and Standardization
65
-
66
- **Goal**: Address the poor support of general HTML parsers for mathematical formulas and maximize the preservation of mathematical semantics in web pages.
67
-
68
- The L0 phase mainly processes raw web data obtained from sources such as Common Crawl. Given the specificity of mathematical web pages, we developed specialized parsing strategies instead of directly using general ones like trafilatura or readability.
69
-
70
- - **Unified Parsing Mode**: Automatically identifies page types to ensure complete content extraction as much as possible.
71
- - **Multi-level Fallback Strategy**: To prevent data loss due to parsing failures, we implemented a multi-level fallback mechanism to ensure text content is captured even if structured parsing fails.
72
- - **Mathematical Formula Standardization**: We unified different mathematical expressions in web pages into standard LaTeX format, achieving data format normalization for unified model learning.
73
-
74
- ### L1: Heuristic Cleaning and Filtering
75
-
76
- **Goal**: Remove format noise and improve data readability and standardization.
77
-
78
- After obtaining text containing complete mathematical formulas, we clean the L0 data through a series of heuristic rules:
79
-
80
- - **Format Repair**:
81
- - Clean invisible characters, garbled text, and unnatural continuous line breaks.
82
- - Remove irrelevant web noise such as navigation bars, footers, ad pop-ups, and "read more".
83
- - **Content Filtering**:
84
- - *Length Filtering*: Remove overly short text fragments, which usually lack context and are difficult to support effective mathematical reasoning training.
85
- - *Language Identification*: Ensure the dataset is composed mainly of high-quality English and Chinese mathematical content.
86
- - *Document Deduplication*: Perform deduplication at the document level to prevent duplicate content from biasing model training.
87
-
88
- ### L2: Selection Based on Quality Models
89
-
90
- **Goal**: Identify core corpora with high value from massive data.
91
-
92
- Although L1 data has a clean format, the content quality varies. The L2 phase introduces a model-based quality assessment system:
93
-
94
- - **Seed Data Annotation**: Use closed-source large models to score a portion of seed data across multiple dimensions.
95
- - **Classifier Training and Distillation**: Train lightweight Embedding classifiers based on annotated data to equip them with the ability to identify high-value mathematical content.
96
- - **Full-scale Inference**: Use the trained classifier to score and screen L1 data in full.
97
- - *Retention*: Content containing detailed problem-solving steps, mathematical concept explanations, and high-level academic discussions.
98
- - *Exclusion*: Simple stacking of nouns, meaningless lists of numbers, juvenile content, or noise from non-mathematical fields.
99
-
100
- ### L3: Synthetic and Augmented Data
101
-
102
- **Goal**: Compensate for the singularity of natural corpora in format and scenarios through synthetic data, enhancing the model's Chain of Thought (CoT) capabilities.
103
-
104
- Natural web data is mostly declarative text. To enhance the model's instruction following and multi-turn interaction capabilities, we built the L3 synthetic data layer:
105
-
106
- - **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into "Question-Answer" pairs, constructing QA-style data.
107
- - **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios to generate multi-turn dialogue data containing follow-up questions, corrections, and guidance.
108
- - **Multi-style Rewriting**: Rewrite single-source data into multiple styles (such as rigorous textbook style, competition problem-solving style, intuitive popular science style) to improve model generalization.
109
- - **Knowledge Point Textbook Generation**: Generate systematic textbook-like content based on specific knowledge points to ensure the model masters core mathematical concepts.
110
-
111
- | Dataset | # Tokens | # Documents |
112
- |:---|:---:|:---:|
113
- | UltraData-Math-L1 | 159.4B | 85.56M |
114
- | UltraData-Math-L3 | 37.1B | 31.87M |
115
-
116
- ## 📈 Experimental Results
117
-
118
- We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was conducted with a training volume of **100 billion Tokens**, allowing for comprehensive verification of data performance within a parameter range with controllable computational efficiency. We used the Lighteval library for model evaluation, and all evaluation metrics are based on **Zero-Shot** settings. Evaluation benchmarks include:
119
-
120
- - **Mathematical Reasoning:** GSM8K, MATH, R-Bench, Math-Bench
121
- - **Code Generation:** HumanEval, MBPP
122
- - **Comprehensive Knowledge:** MMLU, MMLU-STEM
123
-
124
- ### L0 Parser Ablation Study
125
-
126
- Based on data from the same source, we used different parsers for extraction and trained independently to directly compare the effects of parsing strategies:
127
-
128
- | Parser | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
129
- |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
130
- | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 54.97 | **31.71** | **28.72** | 47.10 | 46.76 |
131
- | trafilatura + w3m | 42.33 | 50.95 | 54.51 | 27.44 | 27.64 | 47.93 | 45.52 |
132
- | trafilatura | 42.44 | 51.42 | 56.03 | 26.83 | 28.08 | 45.64 | 46.62 |
133
- | Megamath | 42.32 | 51.46 | 54.06 | 29.88 | 26.04 | 45.64 | 46.81 |
134
- | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
135
-
136
- ### Full Evaluation Results
137
-
138
- We used a single dataset for independent training to directly compare the effects of different data sources:
139
-
140
- | Model | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem | R-bench | Math-bench |
141
- |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
142
- | **UltraData-Math (Ours)** | **43.79** | 51.67 | **61.79** | 32.93 | **37.02** | **49.27** | 45.93 | 23.38 | **48.33** |
143
- | Nemotron-cc 4plus mind | 43.45 | 52.09 | 59.97 | 34.76 | 35.96 | 48.03 | 45.99 | 23.51 | 47.25 |
144
- | Nemotron-cc 4plus | 42.62 | 51.96 | 58.45 | 35.37 | 33.40 | 46.47 | 45.67 | 22.74 | 46.92 |
145
- | MegaMath-Web-Pro | 41.38 | 53.16 | 56.71 | 31.71 | 32.12 | 47.10 | 47.15 | 21.23 | 41.83 |
146
- | FineMath-4+ | 40.51 | 50.90 | 56.25 | 29.88 | 29.84 | 48.96 | 44.98 | 18.93 | 44.33 |
147
-
148
- ## ❤️ Acknowledgements
149
-
150
- - **L0 Parsing Layer**: [magic-html](https://github.com/opendatalab/magic-html), [w3m](http://w3m.sourceforge.net/), [trafilatura](https://github.com/adbar/trafilatura)
151
- - **L3 Synthesis Layer**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), [GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
152
- - **Seed Data**: [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath), [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
153
-
154
- ## 📜 License
155
-
156
- This project is licensed under the [Apache 2.0](./LICENSE) license.

README_ZH.md ADDED
@@ -0,0 +1,158 @@
1
+ ---
2
+ language:
3
+ - en
4
+ - zh
5
+ license: apache-2.0
6
+ size_categories:
7
+ - 100B<n<1T
8
+ task_categories:
9
+ - text-generation
10
+ pretty_name: UltraData-Math
11
+ arxiv: xxxx.xxxxx
12
+ tags:
13
+ - llm
14
+ - pretraining
15
+ - math
16
+ - data-synthesis
17
+ - data-filtering
18
+ - high-quality
19
+ - mathematical-reasoning
20
+ ---
21
+
22
+ # UltraData-Math
23
+
24
+ <div align="center">
25
+ <img src="assets/ultradata-math-logo.png" width="600"/>
26
+ </div>
27
+
28
+ <div align="center">
29
+
30
+ [🤗 数据集](https://huggingface.co/datasets/openbmb/UltraData-Math-L1) | [💻 代码仓库](https://github.com/UltraData-OpenBMB/UltraData-Math) | [🇺🇸 English README](README.md)
31
+
32
+
33
+ </div>
34
+
35
+ ## 📚 简介
36
+
37
+ 高质量预训练数据对提升大语言模型的数学推理能力至关重要。然而,现有数学预训练数据构建方案存在以下不足:
38
+
39
+ - **HTML 解析层面**:通用提取器(如 trafilatura、readability)主要面向新闻/文章场景设计,对数学公式等内容缺乏专门处理,常导致公式结构破坏或丢失;同时论坛类页面的数学讨论部分,难以完整提取。
40
+ - **数据质量层面**:现有数据集普遍缺乏系统的质量分级机制,高价值数学内容与低质噪声混杂。
41
+ - **数据多样性层面**:主流数据集多源自教科书或竞赛题库,缺少真实网页中的数学讨论与应用场景;合成数据格式单一,难以覆盖多轮对话、多风格表达等多样化需求。
42
+
43
+ 针对上述问题,我们提出 ***UltraData-Math***——一个面向数学推理任务的大规模高质量预训练数据集。本数据集基于 [Ultra-Data](xxx) 的 L0-L4 分级数据处理框架开发,包含四个递进层级:
44
+
45
+ - **L0 原始数据层**:基于 *magic-html* 开发数学解析器,结合 *w3m* 布局保持渲染与多级回退策略,将 MathML、KaTeX、AsciiMath 标准化为 LaTeX 格式
46
+ - **L1 过滤数据层**:通过启发式规则清洗噪声并进行文档级去重
47
+ - **L2 精筛数据层**:使用闭源大模型标注种子数据并蒸馏至轻量 Embedding 分类器,实现全量语料的高效质量分级
48
+ - **L3 合成数据层**:基于多模型集成生成 Q&A、多轮对话、多风格改写、知识接地教材等多种格式的合成数据
49
+
50
+ 实验表明,在 MiniCPM-1B 架构上,***UltraData-Math*** 在 MATH 基准上达到 **37.02** 分,相较 Nemotron-CC 4plus 提升 **+3.62** 分;在 GSM8K 上达到 **61.79** 分,提升 **+3.34** 分,同时保持代码生成与通用知识能力。
51
+
52
+ ***UltraData-Math*** 已应用于 [MiniCPM 系列](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) 模型的数学预训练。
53
+
54
+ - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math-L1)**: 大规模高质量数学预训练数据集,包含 159.4B tokens 的网页数学语料。(**<-- you are here**)
55
+ - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: 高质量合成数学数据集,包含 37.1B tokens 的多格式合成数据(Q&A、多轮对话、知识教材等)。
56
+
57
+ ## 🏗️ 数据处理流水线
58
+
59
+ 为突破现有数学数据集在质量与多样性上的局限,我们建立了一套以"数学内容完整性"和"信息密度"为核心的精细化分级标准。***UltraData-Math*** 采用了 UltraData观点论文提出的 **L0-L4 数据分级体系**,通过标准化的层级定义,实现数学数据资产的有序管理与高效流转。每一级都代表了更高的数据纯度与数学价值,同时也对应着更精细的加工程度。
60
+
61
+ <div align="center">
62
+ <img src="assets/ultradata-math-pipeline.png" width="900"/>
63
+ </div>
64
+
65
+ ### L0:原始数据解析与标准化
66
+
67
+ **目标**:解决通用 HTML 解析器对数学公式支持不佳的问题,最大限度保留网页中的数学语义。
68
+
69
+ L0 阶段主要处理从 Common Crawl 等来源获取的原始网页数据。针对数学网页的特殊性,我们开发了专用的解析策略,而非直接使用通用的 trafilatura 或 readability。
70
+
71
+ - **统一解析模式**:自动识别页面类型,尽可能保证完整内容提取。
72
+ - **多级回退策略**:为了防止解析失败导致数据丢失,我们实施了多级回退机制,确保在结构化解析失败时仍能捕获文本内容。
73
+ - **数学公式标准化**:我们将网页中不同的数学表达统一转换为标准的 LaTeX 格式,实现了数据格式的归一化,便于模型统一学习。
74
+
75
+ ### L1:启发式清洗与过滤
76
+
77
+ **目标**:去除格式噪声,提升数据的可读性和规范性。
78
+
79
+ 在获取了包含完整数学公式的文本后,我们通过一系列启发式规则对 L0 数据进行清洗:
80
+
81
+ - **格式修复**:
82
+ - 清理不可见字符、乱码及非自然的连续换行。
83
+ - 移除导航栏、页脚、广告弹窗及"阅读更多"等无关网页噪音。
84
+ - **内容过滤**:
85
+ - *长度过滤*:移除过短的文本片段,这些片段通常缺乏上下文,难以支持有效的数学推理训练。
86
+ - *语言识别*:确保数据集主要由高质量的英文及中文数学内容组成。
87
+ - *文档去重*:在文档级别进行去重,防止重复内容对模型训练造成偏差。
88
+
89
+ ### L2:基于质量模型的精选
90
+
91
+ **目标**:从海量数据中识别出具有高价值的核心语料。
92
+
93
+ L1 数据虽然格式整洁,但内容质量参差不齐。L2 阶段引入了基于模型的质量评估体系:
94
+
95
+ - **种子数据标注**:使用闭源大模型对一部分种子数据进行多维度打分,关注多个维度。
96
+ - **分类器训练与蒸馏**:基于标注数据训练轻量级的 Embedding 分类器,使其具备识别高价值数学内容的能力。
97
+ - **全量推理**:使用训练好的分类器对 L1 数据进行全量打分与筛选。
98
+ - *保留*:包含详细解题步骤、数学概念解释、高水平学术讨论的内容。
99
+ - *剔除*:简单的名词堆砌、无意义的数字列表、低幼或非数学领域的噪声。
100
+
101
+ ### L3:合成与增强数据
102
+
103
+ **目标**:通过合成数据弥补自然语料在格式和场景上的单一性,强化模型的推理链条(CoT)能力。
104
+
105
+ 自然网页数据多为陈述性文本,为了增强模型的指令遵循和多轮交互能力,我们构建了 L3 合成数据层:
106
+
107
+ - **Q&A 对生成**:利用高性能模型将陈述性文档改写为"问题-回答"对,构建QA风格的数据。
108
+ - **多轮对话合成**:模拟"老师-学生"的辅导场景,生成包含追问、纠错、引导的多轮对话数据。
109
+ - **多风格改写**:将单一来源的数据改写为多种风格(如教科书严谨风格、竞赛解题风格、科普直观风格),提升模型的泛化能力。
110
+ - **知识点教材生成**:基于特定知识点生成系统化的教材类内容,确保模型掌握核心数学概念。
111
+
112
+ | 数据集 | # Tokens | # Documents |
113
+ |:---|:---:|:---:|
114
+ | UltraData-Math-L1 | 159.4B | 85.56M |
115
+ | UltraData-Math-L3 | 37.1B | 31.87M |
116
+
117
+ ## 📈 实验结果
118
+
119
+ 我们采用 **MiniCPM-1.2B** 模型架构与**MiniCPM3-4B**分词器进行实验验证。每个实验均在 **1000 亿 Token** 的训练量下进行,从而能在计算效率可控的参数范围内对数据性能进行全面验证。我们使用Lighteval库进行模型评估,所有评估指标均基于 **Zero-Shot** 设置。评估基准包括:
120
+
121
+ - **数学推理:** GSM8K、MATH、R-Bench、Math-Bench
122
+ - **代码生成:** HumanEval、MBPP
123
+ - **综合知识:** MMLU、MMLU-STEM
124
+
125
+ ### L0 解析器消融实验
126
+
127
+ 基于相同来源数据,我们使用不同解析器分别提取后独立训练,直接对比解析策略的效果差异:
128
+
129
+ | 解析器 | 平均分 | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
130
+ |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
131
+ | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 54.97 | **31.71** | **28.72** | 47.10 | 46.76 |
132
+ | trafilatura + w3m | 42.33 | 50.95 | 54.51 | 27.44 | 27.64 | 47.93 | 45.52 |
133
+ | trafilatura | 42.44 | 51.42 | 56.03 | 26.83 | 28.08 | 45.64 | 46.62 |
134
+ | Megamath | 42.32 | 51.46 | 54.06 | 29.88 | 26.04 | 45.64 | 46.81 |
135
+ | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
136
+
137
+ ### 完整评测结果
138
+
139
+ 我们使用单一数据集进行独立训练,以直接对比不同数据源的效果:
140
+
141
+ | 模型 | 平均分 | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem | R-bench | Math-bench |
142
+ |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
143
+ | **UltraData-Math (Ours)** | **43.79** | 51.67 | **61.79** | 32.93 | **37.02** | **49.27** | 45.93 | 23.38 | **48.33** |
144
+ | Nemotron-cc 4plus mind | 43.45 | 52.09 | 59.97 | 34.76 | 35.96 | 48.03 | 45.99 | 23.51 | 47.25 |
145
+ | Nemotron-cc 4plus | 42.62 | 51.96 | 58.45 | 35.37 | 33.40 | 46.47 | 45.67 | 22.74 | 46.92 |
146
+ | MegaMath-Web-Pro | 41.38 | 53.16 | 56.71 | 31.71 | 32.12 | 47.10 | 47.15 | 21.23 | 41.83 |
147
+ | FineMath-4+ | 40.51 | 50.90 | 56.25 | 29.88 | 29.84 | 48.96 | 44.98 | 18.93 | 44.33 |
148
+
149
+
150
+ ## ❤️ 致谢
151
+
152
+ - **L0 解析层**:[magic-html](https://github.com/opendatalab/magic-html)、[w3m](http://w3m.sourceforge.net/)、[trafilatura](https://github.com/adbar/trafilatura)
153
+ - **L3 合成层**:[Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)、[Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)、[GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
154
+ - **种子数据**:[Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1)、[MegaMath](https://huggingface.co/datasets/LLM360/MegaMath)、[FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
155
+
156
+ ## 📜 许可证
157
+
158
+ 本项目基于 [Apache 2.0](./LICENSE) 许可证发布。