ZhouChuYue committed · Commit f8e047f · Parent: 8ce3c7a

Update README: fix links, rename Layer labels, and update UltraData reference URL

- Fix README_ZH.md link to use full HF URL
- Remove redundant "Layer" from L0-L3 labels
- Replace "extractors" with "parsers" for consistency
- Update UltraData paper link to ultradata.openbmb.cn
- README.md +11 -9
- README_ZH.md +2 -0
README.md
CHANGED
</div>

<p align="center">
<a href="https://huggingface.co/datasets/openbmb/UltraData-Math">🤗 Dataset</a> | <a href="https://github.com/UltraData-OpenBMB/UltraData-Math">💻 Source Code</a> | <a href="https://huggingface.co/datasets/openbmb/UltraData-Math/blob/main/README_ZH.md">🇨🇳 中文 README</a>
</p>
## 📚 Introduction

***UltraData-Math*** is a large-scale, high-quality mathematical pre-training dataset totaling **290B+ tokens** across three progressive tiers: **L1** (a 170.5B-token web corpus), **L2** (33.7B quality-selected tokens), and **L3** (88B multi-format refined tokens). It is designed to systematically enhance mathematical reasoning in LLMs and has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm4) models.

High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing schemes for constructing mathematical pre-training data have the following shortcomings:
- **HTML Parsing**: General parsers (such as trafilatura and readability) are designed mainly for news and article extraction and lack specialized handling of mathematical formulas, often destroying or dropping formula structure; mathematical discussions on forum-style pages are also difficult to extract completely.
- **Data Quality**: Existing datasets generally lack a systematic quality-grading mechanism, so high-value mathematical content is mixed with low-quality noise.
- **Data Diversity**: Mainstream datasets mostly originate from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; meanwhile, synthetic data comes in a single format that struggles to cover diverse needs such as multi-turn dialogues and multiple writing styles.

To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. The dataset is built on the [UltraData](https://ultradata.openbmb.cn/blog/position-paper) L0-L4 Tiered Data Management Framework and contains four progressive levels:

- **L0 Raw Data**: A mathematical parser built on *magic-html*, combined with *w3m* layout-preserving rendering and multi-level fallback strategies, standardizes MathML, KaTeX, and AsciiMath into LaTeX.
- **L1 Filtered Data**: Heuristic rules clean out noise, followed by document-level deduplication.
- **L2 Selected Data**: Proprietary large models annotate seed data, which is distilled into a lightweight embedding classifier for efficient quality grading of the full corpus.
- **L3 Refined Data**: Rewriting, synthetic generation, and refinement produce structured content with clear reasoning in formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.
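To make the L0 formula-standardization idea concrete, here is a minimal sketch that converts a small subset of presentation MathML into LaTeX. This is an illustration only: the function name, the handled tags, and the recursion strategy are assumptions for this example, while the actual UltraData-Math-Parser is built on *magic-html* and also covers KaTeX and AsciiMath.

```python
# Hypothetical sketch of MathML -> LaTeX normalization (not the project's API).
# Element names (mfrac, msup, ...) follow the MathML standard; everything else
# is illustrative.
import xml.etree.ElementTree as ET

def mathml_to_latex(node: ET.Element) -> str:
    """Recursively render a presentation-MathML element as LaTeX."""
    tag = node.tag.split('}')[-1]  # drop any XML namespace prefix
    kids = [mathml_to_latex(child) for child in node]
    text = (node.text or '').strip()
    if tag in ('mi', 'mn', 'mo'):       # identifiers, numbers, operators
        return text
    if tag == 'mfrac':                  # <mfrac>numerator denominator</mfrac>
        return r'\frac{%s}{%s}' % (kids[0], kids[1])
    if tag == 'msup':                   # base and exponent
        return '%s^{%s}' % (kids[0], kids[1])
    if tag == 'msub':                   # base and subscript
        return '%s_{%s}' % (kids[0], kids[1])
    if tag == 'msqrt':
        return r'\sqrt{%s}' % ''.join(kids)
    # Fallback: concatenate children (covers <math>, <mrow>, unknown tags).
    return text + ''.join(kids)

xml = '<math><mfrac><msup><mi>x</mi><mn>2</mn></msup><mn>2</mn></mfrac></math>'
print(mathml_to_latex(ET.fromstring(xml)))  # \frac{x^{2}}{2}
```

A production parser would additionally need to handle namespaced documents, operator spacing, and malformed markup, which is exactly where the multi-level fallback strategy described below comes in.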

Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** reaches **37.02** on the MATH500 benchmark, an improvement of **+3.62** over Nemotron-CC 4plus, and **61.79** on GSM8K, an improvement of **+3.34**, while preserving code-generation and general-knowledge capabilities.

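The L2 selection step above amounts to distilling expensive LLM quality labels on a small seed set into a cheap classifier over document embeddings, which then grades the full corpus. The sketch below illustrates that pattern only: the 2-d "embeddings", the labels, and the tiny perceptron are toy stand-ins, not the proprietary models or classifier the dataset actually uses.

```python
# Toy stand-in for the L2 distillation idea: train a cheap linear classifier
# on LLM-labeled seed embeddings, then use it to grade unseen documents.

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Tiny linear classifier standing in for the distilled quality grader."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred                      # 0 when already correct
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def grade(doc_vec, w, b):
    """1 = keep (high quality), 0 = discard."""
    return 1 if sum(wj * xj for wj, xj in zip(w, doc_vec)) + b > 0 else 0

# Seed set: 2-d "embeddings" whose first coordinate is a toy proxy for math
# density; labels are what an LLM annotator might assign (1 = high quality).
seed_X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]]
seed_y = [1, 1, 0, 0]
w, b = train_perceptron(seed_X, seed_y)
print(grade([0.85, 0.2], w, b))  # 1
```

The design point is the cost asymmetry: the LLM annotates only the seed set, while the distilled classifier runs over hundreds of billions of tokens.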
## 🏗️ Data Processing Pipeline

To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed in the [UltraData](https://ultradata.openbmb.cn/blog/position-paper) paper: standardized level definitions enable orderly management and efficient flow of mathematical data assets, and each successive level represents higher data purity and mathematical value as well as a more refined degree of processing.

<div align="center">
<img src="assets/ultradata-math-pipeline.png" width="900"/>

**Goal**: Address the poor support of general HTML parsers for mathematical formulas and preserve as much of the mathematical semantics of web pages as possible.

The L0 phase processes raw web data obtained from sources such as Common Crawl. Given the particularities of mathematical web pages, we develop specialized parsing strategies in the [UltraData-Math-Parser](https://github.com/UltraData-OpenBMB/UltraData-Math/tree/main/UltraData-Math-L0-Parser) rather than relying directly on general parsers such as trafilatura or readability.

- **Unified Parsing Mode**: Automatically identifies page types to extract content as completely as possible.
- **Multi-level Fallback Strategy**: To prevent data loss from parsing failures, a multi-level fallback mechanism ensures that text content is still captured even when structured parsing fails.
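The fallback strategy above can be sketched as a chain of progressively simpler stages, where each stage is tried only if the previous one fails or returns nothing. This is a hedged illustration: the stage functions and their names are assumptions for the example, not the parser's actual code, and the last-resort stage here is just a bare tag-stripper.

```python
# Illustrative multi-level fallback chain (not the project's actual code):
# specialized math parsing first, then a generic text extraction last resort.
from html.parser import HTMLParser

def parse_math_page(html: str) -> str:
    # Stand-in for the specialized, math-aware parser; here it always fails
    # so the example exercises the fallback path.
    raise ValueError("math-aware parsing failed")

class _TextOnly(HTMLParser):
    """Last-resort fallback: strip tags and keep visible text."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    p = _TextOnly()
    p.feed(html)
    return ' '.join(p.chunks)

def parse_with_fallback(html: str) -> str:
    for stage in (parse_math_page, extract_text):
        try:
            result = stage(html)
            if result:            # empty output counts as a failure too
                return result
        except Exception:
            continue              # fall through to the next, simpler stage
    return ''                     # nothing recoverable

print(parse_with_fallback('<p>Solve <b>x + 1 = 2</b>.</p>'))
```

The ordering matters: richer stages preserve more mathematical structure, so they run first, and the plain-text stage only guarantees that no page is silently dropped.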
README_ZH.md
CHANGED

## 📚 Introduction

***UltraData-Math*** is a large-scale, high-quality pre-training dataset for mathematical reasoning, totaling **290B+ tokens** across three progressive tiers: **L1** (a 170.5B-token web corpus), **L2** (33.7B quality-selected tokens), and **L3** (88B multi-format refined tokens). It is designed to systematically improve the mathematical reasoning capabilities of large language models and has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.

High-quality pre-training data is crucial for improving the mathematical reasoning capabilities of large language models. However, existing schemes for constructing mathematical pre-training data have the following shortcomings:

- **HTML Parsing**: General extractors (such as trafilatura and readability) are designed mainly for news and article pages and lack specialized handling of mathematical formulas, often destroying or dropping formula structure; mathematical discussions on forum-style pages are also difficult to extract completely.