Adding Evaluation Results
#1 by leaderboard-pr-bot - opened

README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 license: gemma
-base_model: tanliboy/zephyr-gemma-2-9b-sft
 tags:
 - alignment-handbook
 - trl
@@ -9,6 +8,7 @@ tags:
 - trl
 - dpo
 - generated_from_trainer
+base_model: tanliboy/zephyr-gemma-2-9b-sft
 datasets:
 - HuggingFaceH4/ultrafeedback_binarized
 model-index:
@@ -81,3 +81,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.3.1+cu121
 - Datasets 2.19.1
 - Tokenizers 0.19.1
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tanliboy__lambda-gemma-2-9b-dpo)
+
+|Metric             |Value|
+|-------------------|----:|
+|Avg.               |21.34|
+|IFEval (0-Shot)    |45.01|
+|BBH (3-Shot)       |35.55|
+|MATH Lvl 5 (4-Shot)| 0.00|
+|GPQA (0-shot)      | 8.50|
+|MuSR (0-shot)      | 7.94|
+|MMLU-PRO (5-shot)  |31.02|
+
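As a sanity check on the added table, the `Avg.` row appears to be the arithmetic mean of the six benchmark scores (an assumption about how the Open LLM Leaderboard computes its aggregate, not something stated in this PR); a minimal sketch:

```python
# Hypothetical check: recompute the "Avg." metric from the six benchmark
# scores in the PR's results table, assuming a plain arithmetic mean.
scores = {
    "IFEval (0-Shot)": 45.01,
    "BBH (3-Shot)": 35.55,
    "MATH Lvl 5 (4-Shot)": 0.00,
    "GPQA (0-shot)": 8.50,
    "MuSR (0-shot)": 7.94,
    "MMLU-PRO (5-shot)": 31.02,
}

# Mean of the six scores, rounded to two decimals like the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 21.34, matching the Avg. row
```

Under that assumption the numbers are internally consistent: the six scores sum to 128.02, and 128.02 / 6 ≈ 21.34.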