Datasets: ZhouChuYue
Commit 7e29ed8 · Parent: ce9708b
Committed by ZhouChuYue
Update README: link UltraData-Math-Generator to HF Space, add UltraData collection URL

README.md (changed)
@@ -51,7 +51,7 @@ High-quality pre-training data is crucial for enhancing the mathematical reasoni
 - **Data Quality**: Existing datasets generally lack a systematic quality grading mechanism, leaving high-value mathematical content mixed with low-quality noise.
 - **Data Diversity**: Mainstream datasets mostly originate from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a single format, making it difficult to cover diverse needs such as multi-turn dialogues and multi-style expressions.
 
-To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [UltraData](
+To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [UltraData](https://huggingface.co/collections/openbmb/ultradata) L0-L4 Tiered Data Management Framework and contains four progressive levels:
 
 - **L0 Raw Data Layer**: Builds a mathematical parser on *magic-html*, combined with *w3m* layout-preserving rendering and multi-level fallback strategies, standardizing MathML, KaTeX, and AsciiMath into LaTeX format.
 - **L1 Filtered Data Layer**: Cleans noise with heuristic rules and performs document-level deduplication.

@@ -68,7 +68,7 @@ Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** ach
 
 ## 🏗️ Data Processing Pipeline
 
-To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed by the [UltraData](
+To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed by the [UltraData](https://huggingface.co/collections/openbmb/ultradata) paper. Through standardized level definitions, it achieves orderly management and efficient flow of mathematical data assets; each successive level represents higher data purity and mathematical value and corresponds to more refined processing.
 
 <div align="center">
 <img src="assets/ultradata-math-pipeline.png" width="900"/>

@@ -114,7 +114,7 @@ Although L1 data has a clean format, the content quality varies. The L2 phase in
 
 **Goal**: Produce structured content with clear reasoning and explicit educational intent through rewriting, synthetic generation, and refinement, achieving textbook-quality standards and ensuring maximum learnability.
 
-Natural web data is mostly declarative text, lacking structured reasoning steps and diverse pedagogical formats. To enhance the model's chain-of-thought (CoT) capabilities and multi-turn interaction skills, we build the L3 refined data layer through the [UltraData-Math-Generator](https://
+Natural web data is mostly declarative text, lacking structured reasoning steps and diverse pedagogical formats. To enhance the model's chain-of-thought (CoT) capabilities and multi-turn interaction skills, we build the L3 refined data layer with the [UltraData-Math-Generator](https://huggingface.co/spaces/openbmb/UltraData-Math-L3-Generator):
 
 - **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into question-answer pairs, constructing QA-style data with explicit reasoning steps.
 - **Multi-turn Dialogue Synthesis**: Simulate teacher-student tutoring scenarios to generate multi-turn dialogues containing follow-up questions, corrections, and guidance.
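The L0 layer described in the README excerpt standardizes MathML, KaTeX, and AsciiMath into LaTeX. As an illustration of that normalization idea only, here is a minimal sketch that converts a tiny MathML subset to LaTeX; the real parser builds on *magic-html* and *w3m*, and the function name and supported tags below are assumptions, not the pipeline's actual code.

```python
import xml.etree.ElementTree as ET

def mathml_to_latex(node: ET.Element) -> str:
    """Recursively convert a small MathML subset (mi/mn/mo, msup, mfrac,
    mrow, math) into LaTeX. Purely illustrative of the L0 idea."""
    tag = node.tag.split("}")[-1]  # drop any XML namespace prefix
    children = list(node)
    if tag in ("mi", "mn", "mo"):          # identifiers, numbers, operators
        return (node.text or "").strip()
    if tag == "msup":                       # base^{exponent}
        base, exp = (mathml_to_latex(c) for c in children)
        return f"{base}^{{{exp}}}"
    if tag == "mfrac":                      # \frac{numerator}{denominator}
        num, den = (mathml_to_latex(c) for c in children)
        return f"\\frac{{{num}}}{{{den}}}"
    # mrow, math, and unknown containers: concatenate converted children
    return "".join(mathml_to_latex(c) for c in children)

src = ("<math><mfrac><mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>"
       "<msup><mi>y</mi><mn>2</mn></msup></mfrac></math>")
print(mathml_to_latex(ET.fromstring(src)))  # \frac{x+1}{y^{2}}
```

A production parser would also need the fallback strategies the README mentions (e.g. for malformed markup), which this sketch omits.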
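The L1 layer pairs heuristic noise filtering with document-level deduplication. A minimal sketch of both steps, assuming illustrative thresholds and exact-hash dedup; the dataset's actual rules (and any fuzzy-matching such as MinHash) are not specified in the README:

```python
import hashlib
import re

# Hypothetical signal for "mathematical content": LaTeX commands and operators.
MATH_TOKENS = re.compile(r"(\\frac|\\sum|\\int|[=+\-*/^<>])")

def keep_document(text: str, min_chars: int = 200, min_math_hits: int = 3) -> bool:
    """Heuristic filter sketch: drop very short pages and pages with almost
    no mathematical symbols. Thresholds are illustrative only."""
    return len(text) >= min_chars and len(MATH_TOKENS.findall(text)) >= min_math_hits

def dedup(docs: list[str]) -> list[str]:
    """Document-level exact deduplication via a content hash, keeping the
    first occurrence of each document."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept
```

In practice the two stages would run in sequence over the L0 output: filter first to shrink the corpus, then deduplicate what survives.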
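The L3 steps above both amount to prompting a strong model to rewrite declarative documents into pedagogical formats. A hedged sketch of how such generation records might be assembled; the prompt wording and the `generate` stub are hypothetical, and the UltraData-Math-Generator Space may work quite differently:

```python
# Prompt templates are illustrative assumptions, not the Space's actual prompts.
QA_PROMPT = (
    "Rewrite the following passage as a question-answer pair. "
    "The answer must show explicit step-by-step reasoning.\n\nPassage:\n{doc}"
)

DIALOGUE_PROMPT = (
    "Turn the following passage into a multi-turn teacher-student dialogue "
    "with follow-up questions, corrections, and guidance.\n\nPassage:\n{doc}"
)

def generate(prompt: str) -> str:
    """Placeholder for a call to a high-performance LLM (API client not shown)."""
    raise NotImplementedError

def build_l3_records(doc: str) -> list[dict]:
    """Produce one QA-style and one dialogue-style generation request per
    declarative source document."""
    return [
        {"type": "qa", "prompt": QA_PROMPT.format(doc=doc)},
        {"type": "dialogue", "prompt": DIALOGUE_PROMPT.format(doc=doc)},
    ]
```

Each record would then be sent through `generate` and the responses validated (e.g. checking that reasoning steps are present) before joining the L3 layer.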