Adding Evaluation Results #1
by leaderboard-pr-bot · opened

README.md CHANGED
@@ -14,4 +14,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 # smol-3b
 
-See how open weights instead of open source feel like!
+See how open weights instead of open source feel like!
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rishiraj__smol-3b)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |50.27|
+|AI2 Reasoning Challenge (25-Shot)|46.33|
+|HellaSwag (10-Shot)              |68.23|
+|MMLU (5-Shot)                    |46.33|
+|TruthfulQA (0-shot)              |50.73|
+|Winogrande (5-shot)              |65.35|
+|GSM8k (5-shot)                   |24.64|
+
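As a quick sanity check, the "Avg." row in the added table can be reproduced from the six individual benchmark scores. This is a minimal sketch; that the leaderboard average is the plain arithmetic mean rounded to two decimals is an assumption, not stated in the PR itself.

```python
# Benchmark scores copied from the table added in this PR.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 46.33,
    "HellaSwag (10-Shot)": 68.23,
    "MMLU (5-Shot)": 46.33,
    "TruthfulQA (0-shot)": 50.73,
    "Winogrande (5-shot)": 65.35,
    "GSM8k (5-shot)": 24.64,
}

# Assumed convention: "Avg." is the unweighted mean, rounded to 2 decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 50.27, matching the table's "Avg." row
```

The computed value agrees with the 50.27 reported in the diff, which supports reading "Avg." as an unweighted mean over the six benchmarks.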