ZhouChuYue committed on
Commit b448841 · 1 Parent(s): 3559ea7

Update README: align benchmark names and categories with paper

Files changed (2)
  1. README.md +4 -4
  2. README_ZH.md +3 -3
README.md CHANGED
@@ -162,10 +162,10 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synt

 We evaluated data quality using the **Decay Verification** method: continuing pre-training of a **MiniCPM-1.2B** base model (pre-trained on 1.3T tokens with **MiniCPM3-4B** tokenizer) with **~100B tokens** (30% target data + 70% general data). We used [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:

- - **Mathematical Reasoning:** MATH500, GSM8K, Math-Bench, R-Bench-Math
- - **Code Generation:** HumanEval, MBPP
- - **English:** MMLU, ARC-E, ARC-C, BBH, CSQA, HellaSwag, OBQA, PIQA, SIQA, WinoGrande
- - **Chinese:** CMMLU, C-Eval
+ - **General English:** MMLU, ARC-E, ARC-C, BigBench Hard (BBH), CommonSenseQA, HellaSwag, OpenbookQA, PIQA, SIQA, Winogrande
+ - **General Chinese:** C-Eval, CMMLU
+ - **Math Reasoning:** MATH500, GSM8K, Math-Bench, R-Bench-Math
+ - **Code Reasoning:** MBPP, HumanEval

 ### Effectiveness of L0 Parsing Strategy

README_ZH.md CHANGED
@@ -123,10 +123,10 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synt

 我们使用 **衰减验证(Decay Verification)** 方法评估数据质量:在 **MiniCPM-1.2B** 基座模型(使用 **MiniCPM3-4B** 分词器,预训练 1.3T tokens)上继续训练 **~100B tokens**(30% 目标数据 + 70% 通用数据)。我们使用 [OpenCompass](https://github.com/open-compass/opencompass) 作为评估框架。评估基准包括:

+ - **通用英文:** MMLU、ARC-E、ARC-C、BigBench Hard (BBH)、CommonSenseQA、HellaSwag、OpenbookQA、PIQA、SIQA、Winogrande
+ - **通用中文:** C-Eval、CMMLU
 - **数学推理:** MATH500、GSM8K、Math-Bench、R-Bench-Math
- - **代码生成:** HumanEval、MBPP
- - **英文:** MMLU、ARC-E、ARC-C、BBH、CSQA、HellaSwag、OBQA、PIQA、SIQA、WinoGrande
- - **中文:** CMMLU、C-Eval
+ - **代码推理:** MBPP、HumanEval

 ### L0 解析策略有效性
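
The hunk headers above show a `load_dataset` call from the dataset card as surrounding context. A minimal sketch of that usage follows; the configuration name is truncated in the diff, so the placeholder below is illustrative and should be replaced with an actual subset name from the dataset card.

```python
from datasets import load_dataset

# Minimal sketch (assumption): "SUBSET_NAME" is a placeholder for the
# UltraData-Math configuration whose full name is truncated in the hunk
# header above; substitute a real subset name from the dataset card.
ds = load_dataset("openbmb/UltraData-Math", "SUBSET_NAME")
print(ds)  # DatasetDict listing the available splits for the chosen subset
```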