Adding Evaluation Results
#1 by leaderboard-pr-bot - opened
README.md CHANGED
```diff
@@ -40,4 +40,17 @@ For more information about PandaLM, pls check out [our github](https://github.co
 journal = {GitHub repository},
 howpublished = {\url{https://github.com/WeOpenML/PandaLM}},
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WeOpenML__PandaLM-Alpaca-7B-v1)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 41.31 |
+| ARC (25-shot) | 50.85 |
+| HellaSwag (10-shot) | 77.36 |
+| MMLU (5-shot) | 35.91 |
+| TruthfulQA (0-shot) | 36.63 |
+| Winogrande (5-shot) | 71.9 |
+| GSM8K (5-shot) | 0.91 |
+| DROP (3-shot) | 15.61 |
```
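The Avg. row appears to be the unweighted mean of the seven task scores. A quick sanity check in Python, using only the numbers copied from the table above (nothing fetched from the leaderboard):

```python
# Per-task scores copied from the table added by this PR.
scores = {
    "ARC (25-shot)": 50.85,
    "HellaSwag (10-shot)": 77.36,
    "MMLU (5-shot)": 35.91,
    "TruthfulQA (0-shot)": 36.63,
    "Winogrande (5-shot)": 71.9,
    "GSM8K (5-shot)": 0.91,
    "DROP (3-shot)": 15.61,
}

# Unweighted mean over the seven tasks, rounded to two decimals
# as the leaderboard reports it.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 41.31, matching the Avg. row above
```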
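The per-sample predictions behind these numbers live in the linked details dataset, which can be pulled with the `datasets` library. A minimal sketch: `get_dataset_config_names` and `load_dataset` are standard `datasets` API, but the config layout of this particular repo is an assumption, so the sketch lists the configs rather than guessing a name:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_WeOpenML__PandaLM-Alpaca-7B-v1"

# Details repos typically hold one config per benchmark run;
# list what is actually available instead of hard-coding a name.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first listed config as an example (hypothetical choice,
# not a known config name from the source).
ds = load_dataset(repo, configs[0])
print(ds)
```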
|