ZhouChuYue committed
Commit 85fdde4 · 1 Parent(s): 1f9c99d

Update README: Refine L3 description with format repair and textbook-quality standards

Files changed (1): README.md (+5 −4)
README.md CHANGED
@@ -67,7 +67,7 @@ Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** ach
 
 - **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math)**: Large-scale high-quality mathematical pre-training dataset, containing 170.5B tokens of web mathematical corpus. (**<-- you are here**)
 - **[UltraData-Math-L2](https://huggingface.co/datasets/openbmb/UltraData-Math-L2)**: High-quality mathematical pre-training dataset selected by the quality model, containing 33.7B tokens of high-quality web mathematical corpus.
-- **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset, containing 88B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
+- **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality refined mathematical dataset, containing 88B tokens of multi-format refined data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
 
 ## 🏗️ Data Processing Pipeline
 
@@ -115,14 +115,15 @@ Although L1 data has a clean format, the content quality varies. The L2 phase in
 
 ### L3: Refined Data
 
-**Goal**: Compensate for the singularity of natural corpora in format and scenarios through rewriting, synthetic generation, and refinement, enhancing the model's Chain of Thought (CoT) capabilities.
+**Goal**: Produce structured content with clear reasoning and explicit educational intent through rewriting, synthetic generation, and refinement, achieving textbook-quality standards and ensuring maximum learnability.
 
-Natural web data is mostly declarative text. To enhance the model's instruction following and multi-turn interaction capabilities, we built the L3 synthetic data layer:
+Natural web data is mostly declarative text, lacking structured reasoning steps and diverse pedagogical formats. To enhance the model's Chain of Thought (CoT) capabilities and multi-turn interaction skills, we built the L3 refined data layer through the [UltraData-Math-Generator](https://github.com/UltraData-OpenBMB/UltraData-Math/tree/main/UltraData-Math-L3-Generator):
 
-- **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into "Question-Answer" pairs, constructing QA-style data.
+- **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into "Question-Answer" pairs, constructing QA-style data with explicit reasoning steps.
 - **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios to generate multi-turn dialogue data containing follow-up questions, corrections, and guidance.
 - **Multi-style Rewriting**: Rewrite single-source data into multiple styles (such as rigorous textbook style, competition problem-solving style, intuitive popular science style) to improve model generalization.
 - **Knowledge Point Textbook Generation**: Generate systematic textbook-like content based on specific knowledge points to ensure the model masters core mathematical concepts.
+- **Format Repair and Enhancement**: Fix formatting issues in the source data (e.g., broken LaTeX formulas, inconsistent notation) and enhance content coherence to achieve textbook-quality standards.
 
 | Dataset | # Tokens | # Documents |
 |:---|:---:|:---:|
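The "Format Repair and Enhancement" step described in the new bullet can be sketched as a simple heuristic pass over web-scraped math text. This is a hypothetical illustration, not the actual UltraData-Math-Generator code: the `repair_latex` function and its two rules (normalizing `\( … \)` / `\[ … \]` delimiters to dollar-sign style, and closing a line with an unbalanced `$`) are assumptions about what such a repair pass might do.

```python
import re

def repair_latex(text: str) -> str:
    """Hypothetical sketch of a format-repair pass for web math corpora.

    Handles two common extraction artifacts:
    - \( ... \) and \[ ... \] delimiters normalized to $ ... $ / $$ ... $$
    - a line with an odd number of $ signs (an unclosed inline formula)
      gets a closing $ appended
    """
    # Normalize inline and display delimiters to dollar-sign style.
    text = re.sub(r"\\\((.+?)\\\)", r"$\1$", text)
    text = re.sub(r"\\\[(.+?)\\\]", r"$$\1$$", text, flags=re.DOTALL)

    repaired_lines = []
    for line in text.splitlines():
        # An odd count of $ on a line suggests an unclosed formula.
        if line.count("$") % 2 == 1:
            line += "$"
        repaired_lines.append(line)
    return "\n".join(repaired_lines)
```

A real pipeline would likely combine rule-based passes like this with model-based rewriting to restore notation consistency, as the bullet's "enhance content coherence" wording suggests.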