Update README.md
The main aim behind this model is to build one that performs well at reasoning, conversation, and coding. AlphaMonarch performs impressively on reasoning and conversation tasks; merging AlphaMonarch with a coding model yielded MonarchCoder-7B, which scores better on the OpenLLM, Nous, and HumanEval benchmarks, although [MonarchCoder-2x7B](https://huggingface.co/abideen/MonarchCoder-MoE-2x7B) performs better still.
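MonarchCoder-7B was produced with a slerp (spherical linear interpolation) merge via LazyMergekit. As a rough illustration of what slerp does per weight tensor, here is a minimal NumPy sketch; the `slerp` helper below is illustrative only and is not LazyMergekit's actual implementation:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between two weight tensors v0 and v1 at t in [0, 1]."""
    v0f, v1f = v0.ravel(), v1.ravel()
    # Measure the angle between the two tensors via their unit vectors.
    n0 = v0f / (np.linalg.norm(v0f) + eps)
    n1 = v1f / (np.linalg.norm(v1f) + eps)
    theta = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))
    if theta < eps:
        # Near-parallel tensors: slerp degenerates to linear interpolation.
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    # Weight each endpoint so the interpolation follows the arc, not the chord.
    out = (np.sin((1 - t) * theta) / s) * v0f + (np.sin(t * theta) / s) * v1f
    return out.reshape(v0.shape)
```

In an actual merge, a function like this would be applied layer by layer to corresponding parameter tensors of the two models, with `t` controlling how much of each model survives in the result.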
## 🏆 Evaluation results

| Metric                            | MonarchCoder-MoE-2x7B | MonarchCoder-7B | AlphaMonarch |
|-----------------------------------|-----------------------|-----------------|--------------|
| Avg.                              | 74.23                 | 71.17           | 75.99        |
| HumanEval                         | 41.15                 | 39.02           | 34.14        |
| HumanEval+                        | 29.87                 | 31.70           | 29.26        |
| MBPP                              | 40.60                 | *               | *            |
| AI2 Reasoning Challenge (25-Shot) | 70.99                 | 68.52           | 73.04        |
| HellaSwag (10-Shot)               | 87.99                 | 87.30           | 89.18        |
| MMLU (5-Shot)                     | 65.11                 | 64.65           | 64.40        |
| TruthfulQA (0-shot)               | 71.25                 | 61.21           | 77.91        |
| Winogrande (5-shot)               | 80.66                 | 80.19           | 84.69        |
| GSM8k (5-shot)                    | 69.37                 | 65.13           | 66.72        |
## 🧩 Configuration