ZhouChuYue committed on
Commit 6e09812 · 1 Parent(s): 24a0dff

Update README: Remove Avg_ZH column from results table

Files changed (1):
  1. README.md +5 -5
README.md CHANGED

```diff
@@ -165,11 +165,11 @@ To fairly compare different parsing strategies, we conducted experiments on a da
 
 To validate the effectiveness of our L0-L3 hierarchical framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.
 
-| Dataset | Average | MMLU | ARC-E | ARC-C | BBH | CSQA | Hella. | OBQA | PIQA | SIQA | Wino. | Math | GSM8K | MBPP | HumanEval | CMMLU | C-Eval | Avg_ZH |
-| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-| **UltraData-Math-L1** | 48.39 | 51.41 | 54.50 | 37.29 | 37.75 | 60.44 | 58.02 | 41.60 | 74.21 | 41.71 | 57.14 | 27.78 | 54.66 | 44.71 | 29.88 | 51.28 | 51.89 | 51.59 |
-| **UltraData-Math-L2** | 48.59 | 50.93 | 55.20 | 36.95 | 39.27 | 60.20 | 57.52 | 39.80 | 74.48 | 44.73 | 57.77 | 29.20 | 52.92 | 44.50 | 32.32 | 51.13 | 50.55 | 50.84 |
-| **UltraData-Math-L3** | **52.23** | **51.67** | **59.79** | **38.98** | **43.62** | **61.18** | **58.27** | **57.00** | **74.76** | 43.35 | **59.04** | **37.02** | **61.79** | **49.27** | **32.93** | **52.87** | **54.08** | **53.48** |
+| Dataset | Average | MMLU | ARC-E | ARC-C | BBH | CSQA | Hella. | OBQA | PIQA | SIQA | Wino. | Math | GSM8K | MBPP | HumanEval | CMMLU | C-Eval |
+| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| **UltraData-Math-L1** | 48.39 | 51.41 | 54.50 | 37.29 | 37.75 | 60.44 | 58.02 | 41.60 | 74.21 | 41.71 | 57.14 | 27.78 | 54.66 | 44.71 | 29.88 | 51.28 | 51.89 |
+| **UltraData-Math-L2** | 48.59 | 50.93 | 55.20 | 36.95 | 39.27 | 60.20 | 57.52 | 39.80 | 74.48 | 44.73 | 57.77 | 29.20 | 52.92 | 44.50 | 32.32 | 51.13 | 50.55 |
+| **UltraData-Math-L3** | **52.23** | **51.67** | **59.79** | **38.98** | **43.62** | **61.18** | **58.27** | **57.00** | **74.76** | 43.35 | **59.04** | **37.02** | **61.79** | **49.27** | **32.93** | **52.87** | **54.08** |
 
 *Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH, GSM8K) and general capabilities.*
 
```
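The removed Avg_ZH column is redundant with data the table already carries: its deleted values match the mean of the CMMLU and C-Eval columns to within rounding, and the Average column matches the mean of the 16 benchmark scores. A minimal sanity-check sketch (a hypothetical script, not part of the repository; `ROWS` and `check` are names introduced here for illustration):

```python
# Hypothetical sanity check on the diff above: for each row, "Average"
# should be the mean of the 16 benchmark columns, and the removed
# Avg_ZH should be the mean of CMMLU and C-Eval, both within rounding.

# Per row: (16 benchmark scores in table order ending with CMMLU, C-Eval,
#           reported Average, reported Avg_ZH from the removed column)
ROWS = {
    "UltraData-Math-L1": ([51.41, 54.50, 37.29, 37.75, 60.44, 58.02, 41.60,
                           74.21, 41.71, 57.14, 27.78, 54.66, 44.71, 29.88,
                           51.28, 51.89], 48.39, 51.59),
    "UltraData-Math-L2": ([50.93, 55.20, 36.95, 39.27, 60.20, 57.52, 39.80,
                           74.48, 44.73, 57.77, 29.20, 52.92, 44.50, 32.32,
                           51.13, 50.55], 48.59, 50.84),
    "UltraData-Math-L3": ([51.67, 59.79, 38.98, 43.62, 61.18, 58.27, 57.00,
                           74.76, 43.35, 59.04, 37.02, 61.79, 49.27, 32.93,
                           52.87, 54.08], 52.23, 53.48),
}

def check(scores, reported_avg, reported_avg_zh, tol=0.01):
    """True if both reported aggregates agree with the raw scores."""
    avg = sum(scores) / len(scores)
    avg_zh = (scores[-2] + scores[-1]) / 2  # CMMLU and C-Eval
    return abs(avg - reported_avg) < tol and abs(avg_zh - reported_avg_zh) < tol

assert all(check(s, a, z) for s, a, z in ROWS.values())
print("all rows consistent")
```

Since Avg_ZH can always be recomputed from CMMLU and C-Eval, dropping the column loses no information.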