### Benchmark Results Table

The table below summarizes evaluation results across mathematical tasks and original capabilities.

| **Model**         | **MH**  | **M**   | **G8K** | **M-Avg** | **ARC** | **GPQA** | **MLU** | **MLUP** | **O-Avg** | **Overall** |
|-------------------|---------|---------|---------|-----------|---------|----------|---------|----------|-----------|-------------|
| Llama3.1-8B-Inst  | 23.7    | 50.9    | 85.6    | 52.1      | 83.4    | 29.9     | 72.4    | 46.7     | 60.5      | 56.3        |
| OpenMath2-Llama3  | 38.4    | 64.1    | 90.3    | 64.3      | 45.8    | 1.3      | 4.5     | 19.5     | 12.9      | 38.6        |
| **Full Tune**     | **38.5**| **63.7**| 90.2    | **63.9**  | 58.2    | 1.1      | 7.3     | 23.5     | 16.5      | 40.1        |
| Partial Tune      | 36.4    | 61.4    | 89.0    | 61.8      | 66.2    | 6.0      | 25.7    | 30.9     | 29.3      | 45.6        |
| Stack Exp.        | 35.6    | 61.0    | 90.8    | 61.8      | 69.3    | 18.8     | 61.8    | 43.1     | 53.3      | 57.6        |
| Hybrid Exp.       | 34.4    | 61.1    | 90.1    | 61.5      | **81.8**| **25.9** | 67.2    | **43.9** | 57.1      | 59.3        |
| **Control LLM***  | 38.1    | 62.7    | **90.4**| 63.2      | 79.7    | 25.2     | **68.1**| 43.6     | **57.2**  | **60.2**    |

---

### Explanation

- **MH**: MathHard
- **M**: Math
- **G8K**: GSM8K
- **M-Avg**: Math average across MathHard, Math, and GSM8K
- **ARC**: ARC benchmark
- **GPQA**: General knowledge QA
- **MLU**: MMLU (Massive Multitask Language Understanding)
- **MLUP**: MMLU Pro
- **O-Avg**: Original capability average across ARC, GPQA, MMLU, and MMLU Pro
- **Overall**: Combined average across all tasks
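As a quick sanity check on the table's arithmetic, the **Overall** column is consistent (up to rounding) with the plain mean of the **M-Avg** and **O-Avg** columns. The sketch below illustrates that reading; it is not the project's evaluation code, and the per-benchmark sub-averages may use a different (e.g. sample-weighted) aggregation internally.

```python
# Illustration only: Overall appears to be the mean of M-Avg and O-Avg
# (up to rounding). Values are copied from the table above.
rows = {
    "Llama3.1-8B-Inst": (52.1, 60.5),  # (M-Avg, O-Avg)
    "Hybrid Exp.": (61.5, 57.1),
    "Control LLM*": (63.2, 57.2),
}

for model, (m_avg, o_avg) in rows.items():
    overall = round((m_avg + o_avg) / 2, 1)
    print(f"{model}: Overall = {overall}")
```

Running this reproduces 56.3, 59.3, and 60.2 for the three rows shown, matching the table.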