Commit 16cc0da by ZhouChuYue (parent: 371de48)

Update README: improve wording, add Quick Start section

- Use HTML anchor tags in centered div for better HF rendering
- Lowercase "large language models", "embedding", "chain-of-thought"
- Replace "closed-source" with "proprietary"
- Remove "Level" from shortcoming bullet points
- Unify verb tense to present tense
- Change "scenarios" to "parsing", "hierarchical" to "tiered"
- Add transition sentence before dataset table
- Add Quick Start section with HuggingFace usage examples

README.md (changed):
<div align="center">

<a href="https://huggingface.co/datasets/openbmb/UltraData-Math">🤗 Dataset</a> | <a href="https://github.com/UltraData-OpenBMB/UltraData-Math">💻 Code Repository</a> | <a href="README_ZH.md">🇨🇳 中文 README</a>

</div>

## 📚 Introduction

High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing mathematical pre-training data construction schemes have the following shortcomings:

- **HTML Parsing**: General extractors (such as trafilatura and readability) are designed mainly for news and article parsing and lack specialized handling of mathematical formulas, often destroying or losing formula structure; meanwhile, mathematical discussions on forum-style pages are difficult to extract completely.
- **Data Quality**: Existing datasets generally lack a systematic quality grading mechanism, leaving high-value mathematical content mixed with low-quality noise.
- **Data Diversity**: Mainstream datasets mostly originate from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a single format and struggles to cover diverse needs such as multi-turn dialogues and multi-style expression.

To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. The dataset is built on the [UltraData](xxx) L0-L4 Tiered Data Management Framework and contains four progressive levels:
- **L0 Raw Data Layer**: Develops a mathematical parser based on *magic-html*, combined with *w3m* layout-preserving rendering and multi-level fallback strategies, standardizing MathML, KaTeX, and AsciiMath into LaTeX format.
- **L1 Filtered Data Layer**: Cleans noise through heuristic rules and performs document-level deduplication.
- **L2 Selected Data Layer**: Uses proprietary large models to annotate seed data and distills the annotations into a lightweight embedding classifier for efficient quality grading of the full corpus.
- **L3 Refined Data Layer**: Produces structured content with clear reasoning through rewriting, synthetic generation, and refinement in formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.

Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** achieves **37.02** on the MATH500 benchmark, an improvement of **+3.62** over Nemotron-CC 4plus, and **61.79** on GSM8K, an improvement of **+3.34**, while maintaining code generation and general knowledge capabilities.

***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.

- **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math)**: Large-scale, high-quality mathematical pre-training dataset containing 170.5B tokens of web mathematical corpus.
- **[UltraData-Math-L2](https://huggingface.co/datasets/openbmb/UltraData-Math-L2)**: High-quality mathematical pre-training dataset selected by the quality model, containing 33.7B tokens of high-quality web mathematical corpus.
- **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: Refined mathematical dataset containing 88B tokens of multi-format refined data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
**Goal**: Address the poor support of general HTML parsers for mathematical formulas and maximize the preservation of mathematical semantics in web pages.

The L0 phase mainly processes raw web data obtained from sources such as Common Crawl. Given the specificity of mathematical web pages, we develop specialized parsing strategies instead of directly using general extractors such as trafilatura or readability.

- **Unified Parsing Mode**: Automatically identifies page types to ensure content is extracted as completely as possible.
- **Multi-level Fallback Strategy**: To prevent data loss from parsing failures, we implement a multi-level fallback mechanism that captures text content even when structured parsing fails.
- **Mathematical Formula Standardization**: We unify the different mathematical expressions found in web pages into standard LaTeX, normalizing the data format for unified model learning.
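The fallback idea can be sketched as a simple parser chain. The sketch below is illustrative only: the real pipeline uses a magic-html-based math parser and w3m layout rendering at these positions, whereas `structured_parser` and `layout_parser` here are hypothetical stand-ins that simulate failure modes.

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Last-resort fallback: keep only the text nodes of the page."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def strip_tags(html: str) -> str:
    p = _TextExtractor()
    p.feed(html)
    return " ".join(c.strip() for c in p.chunks if c.strip())

def extract_with_fallback(html: str, parsers) -> tuple[str, str]:
    """Run parsers in priority order; fall back on failure or empty output."""
    for name, parser in parsers:
        try:
            text = parser(html)
        except Exception:
            text = None  # a parser crash triggers the next fallback level
        if text and text.strip():
            return name, text
    # Final safety net: plain text, so the document is never lost entirely.
    return "plain-text", strip_tags(html)

# Hypothetical stand-ins for the higher-priority parsers.
def structured_parser(html: str):
    raise ValueError("structure not recognised")  # simulate a parse failure

def layout_parser(html: str):
    return ""  # simulate empty output

name, text = extract_with_fallback(
    "<p>Solve $x^2 = 4$.</p>",
    [("structured", structured_parser), ("layout", layout_parser)],
)
print(name, text)  # falls through to the plain-text level
```

Each level trades fidelity for robustness: structured parsing preserves the most mathematical semantics, and the plain-text net merely guarantees no document is dropped.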
### L1: Heuristic Cleaning and Filtering

Although L1 data has a clean format, content quality varies. The L2 phase introduces a model-based quality assessment system:

- **Seed Data Annotation**: Use proprietary large models to score a portion of seed data across multiple dimensions.
- **Classifier Training and Distillation**: Train lightweight embedding classifiers on the annotated data so they can identify high-value mathematical content.
- **Full-scale Inference**: Use the trained classifier to score and filter the full L1 data.
- *Retention*: Content containing detailed problem-solving steps, explanations of mathematical concepts, and high-level academic discussion.
- *Exclusion*: Simple stacking of nouns, meaningless lists of numbers, juvenile content, or noise from non-mathematical fields.
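The distillation step can be pictured as fitting a small classification head on fixed document embeddings, using teacher (LLM) scores as labels. This is a minimal, self-contained sketch with toy 3-dimensional "embeddings"; the actual classifier architecture, embedding model, and scoring dimensions are not specified here and are assumptions.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=200):
    """Tiny logistic-regression head over fixed document embeddings."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - t  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(w, b, x) -> float:
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy embeddings; the teacher model labels 1 = high-value math, 0 = noise.
X = [[0.9, 0.8, 0.1], [0.8, 0.9, 0.2], [0.1, 0.2, 0.9], [0.2, 0.1, 0.8]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
keep = [score(w, b, x) > 0.5 for x in X]
print(keep)
```

Once trained, such a head is cheap enough to score every L1 document, which is what makes full-corpus quality grading tractable.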
**Goal**: Produce structured content with clear reasoning and explicit educational intent through rewriting, synthetic generation, and refinement, achieving textbook-quality standards and ensuring maximum learnability.

Natural web data is mostly declarative text, lacking structured reasoning steps and diverse pedagogical formats. To enhance the model's chain-of-thought (CoT) capabilities and multi-turn interaction skills, we build the L3 refined data layer with the [UltraData-Math-Generator](https://github.com/UltraData-OpenBMB/UltraData-Math/tree/main/UltraData-Math-L3-Generator):

- **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into "Question-Answer" pairs, constructing QA-style data with explicit reasoning steps.
- **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios to generate multi-turn dialogue data containing follow-up questions, corrections, and guidance.
- **Knowledge Point Textbook Generation**: Generate systematic, textbook-like content for specific knowledge points to ensure the model masters core mathematical concepts.
- **Format Repair and Enhancement**: Fix formatting issues in the source data (e.g., broken LaTeX formulas, inconsistent notation) and improve content coherence to reach textbook-quality standards.
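A Q&A rewriting step of this kind is typically driven by a prompt template. The template below is a hypothetical illustration of the idea, not the one shipped with UltraData-Math-Generator; the actual prompts live in that repository and may differ substantially.

```python
# Hypothetical prompt template for declarative-to-QA rewriting.
QA_REWRITE_PROMPT = """\
You are given a declarative mathematical document. Rewrite it as a list of
question-answer pairs. Each answer must show explicit, step-by-step reasoning
before stating the final result.

Document:
{document}

Output format:
Q: <question>
A: <step-by-step reasoning, then the final answer>
"""

def build_qa_prompt(document: str) -> str:
    """Fill the template with one source document."""
    return QA_REWRITE_PROMPT.format(document=document.strip())

prompt = build_qa_prompt("The roots of $x^2 - 5x + 6 = 0$ are $2$ and $3$.")
print(prompt)
```

The generated prompt would then be sent to a high-performance model, and its output parsed back into QA-style training documents.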
Based on the above methodology, we produce the following ***UltraData-Math*** datasets:

| Dataset | # Tokens | # Documents |
|:---|:---:|:---:|
| UltraData-Math-L1 | 170.5B | 85.6M |
| UltraData-Math-L2-preview | 33.7B | 14.98M |
| UltraData-Math-L3 | 88B | 81.4M |
## 🚀 Quick Start

You can load the dataset directly from Hugging Face:

```python
from datasets import load_dataset

# Load UltraData-Math-L1
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L1")

# Load UltraData-Math-L2-preview
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L2-preview")

# Load UltraData-Math-L3 (default: Conversation-Synthetic)
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synthetic")

# Other L3 configs:
# - UltraData-Math-L3-Multi-Style-Synthetic
# - UltraData-Math-L3-QA-Synthetic
# - UltraData-Math-L3-Textbook-Exercise-Synthetic
```
## 📈 Experimental Results

We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens with the **MiniCPM3-4B** tokenizer) on **~100B tokens** (30% target data + 70% general data). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:
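The 30%/70% mix can be realized by sampling each training example from the target or general stream with fixed probability. The sketch below is a simplified, stdlib-only illustration; the production training setup (batch-level vs. example-level mixing, exact schedulers) is not described in this card and is an assumption.

```python
import itertools
import random

def mix_streams(target, general, target_ratio=0.30, n=10_000, seed=0):
    """Sample a training stream with a fixed target/general mixing ratio."""
    rng = random.Random(seed)
    t_it, g_it = iter(target), iter(general)
    out = []
    for _ in range(n):
        # Draw from the target corpus with probability `target_ratio`.
        out.append(next(t_it) if rng.random() < target_ratio else next(g_it))
    return out

# Stand-ins for the two corpora: infinite iterators of tagged documents.
target = itertools.repeat("math")
general = itertools.repeat("general")

stream = mix_streams(target, general)
ratio = stream.count("math") / len(stream)
print(round(ratio, 2))  # close to 0.30
```

Probabilistic mixing keeps the expected ratio stable without needing to pre-shard the corpora, which matters when the two streams have very different sizes.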
### Pipeline Effectiveness (L1 vs L2 vs L3)

To validate the effectiveness of our L0-L3 tiered framework, we conducted ablation studies comparing models trained on the different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

| Dataset | Average | MMLU | MMLU-STEM | MATH500 | GSM8K | MBPP | HumanEval |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |