ZhouChuYue committed · Commit 0a8cadc · 1 Parent(s): 60f48fe

Update README: rename repository references to UltraData-Math

Files changed (1): README.md (+10 −8)
README.md CHANGED

```diff
@@ -18,8 +18,6 @@ tags:
 - high-quality
 - mathematical-reasoning
 configs:
-- config_name: UltraData-Math-L1
-  data_files: "data/UltraData-Math-L1/**/*.parquet"
 - config_name: UltraData-Math-L3-Conversation-Synthetic
   data_files: "data/UltraData-Math-L3/Conversation-Synthetic/*.parquet"
 - config_name: UltraData-Math-L3-Multi-Style-Synthetic
@@ -28,6 +26,8 @@ configs:
   data_files: "data/UltraData-Math-L3/QA-Synthetic/*.parquet"
 - config_name: UltraData-Math-L3-Textbook-Exercise-Synthetic
   data_files: "data/UltraData-Math-L3/Textbook-Exercise-Synthetic/*.parquet"
+- config_name: UltraData-Math-L1
+  data_files: "data/UltraData-Math-L1/**/*.parquet"
 default_config_name: UltraData-Math-L3-Conversation-Synthetic
 ---
 
@@ -39,7 +39,7 @@ default_config_name: UltraData-Math-L3-Conversation-Synthetic
 
 <div align="center">
 
-[🤗 Dataset](https://huggingface.co/datasets/openbmb/UltraData-Math-L1) | [💻 Code Repository](https://github.com/UltraData-OpenBMB/UltraData-Math) | [🇨🇳 中文 README](README_ZH.md)
+[🤗 Dataset](https://huggingface.co/datasets/openbmb/UltraData-Math) | [💻 Code Repository](https://github.com/UltraData-OpenBMB/UltraData-Math) | [🇨🇳 中文 README](README_ZH.md)
 
 
 </div>
@@ -63,7 +63,7 @@ Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** achie
 
 ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
 
-- **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math-L1)**: Large-scale high-quality mathematical pre-training dataset, containing 159.4B tokens of web mathematical corpus. (**<-- you are here**)
+- **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math)**: Large-scale high-quality mathematical pre-training dataset, containing 159.4B tokens of web mathematical corpus. (**<-- you are here**)
 - **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset, containing 37.1B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
 
 ## 🏗️ Data Processing Pipeline
@@ -123,8 +123,8 @@ Natural web data is mostly declarative text. To enhance the model's instruction
 
 | Dataset | # Tokens | # Documents |
 |:---|:---:|:---:|
-| UltraData-Math-L1 | 159.4B | 85.56M |
-| UltraData-Math-L3 | 37.1B | 31.87M |
+| UltraData-Math-L1 | 170.5B | 85.6M |
+| UltraData-Math-L3 | 88B | 81.4M |
 
 ## 📈 Experimental Results
 
@@ -134,9 +134,9 @@ We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer fo
 - **Code Generation:** HumanEval, MBPP
 - **Comprehensive Knowledge:** MMLU, MMLU-STEM
 
-### L0 Parser Ablation Study
+### L0 Parser Comparison
 
-Based on data from the same source, we used different parsers for extraction and trained independently to directly compare the effects of parsing strategies:
+Based on data from the same source, we re-parsed the raw HTML using different parsers and applied the same L1 cleaning operators to all parsed results. Each cleaned dataset was then used for independent training to directly compare the effects of parsing strategies:
 
 | Parser | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
 |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
@@ -146,6 +146,8 @@ Based on data from the same source, we used different parsers for extraction and
 | Megamath | 42.32 | **51.46** | 54.06 | 29.88 | 26.04 | 45.64 | **46.81** |
 | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |
 
+
+
 ### Full Evaluation Results
 
 We used a single dataset for independent training to directly compare the effects of different data sources:
```
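The `data_files` patterns in the config block above follow standard glob semantics: the recursive `**/*.parquet` pattern added for `UltraData-Math-L1` matches Parquet shards at any directory depth, while the single-`*` patterns of the L3 configs match only files in the named directory itself. A minimal sketch of the difference using Python's `glob` module (the file layout below is hypothetical, for illustration only):

```python
import glob
import os
import tempfile

# Hypothetical directory tree mirroring the shape of the dataset's data/ folder.
with tempfile.TemporaryDirectory() as root:
    for rel in [
        "data/L1/shard-00.parquet",          # directly under L1
        "data/L1/2024/shard-01.parquet",     # nested one level deeper
        "data/L3/Conversation-Synthetic/shard-00.parquet",
    ]:
        path = os.path.join(root, rel)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        open(path, "w").close()

    # "**/*.parquet" with recursive=True matches at any depth,
    # including zero intermediate directories.
    deep = glob.glob(os.path.join(root, "data/L1/**/*.parquet"), recursive=True)

    # A single "*" matches only the immediate directory level.
    flat = glob.glob(os.path.join(root, "data/L3/Conversation-Synthetic/*.parquet"))

    print(len(deep), len(flat))  # 2 1
```

This is why the L1 config can use one pattern over an arbitrarily sharded tree, whereas each L3 subset points at a single flat directory.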