Adding Evaluation Results
#2 by leaderboard-pr-bot - opened

README.md CHANGED
@@ -113,3 +113,17 @@ To use this model, you need to [recover](https://github.com/thunlp/UltraChat/tre
 User: user input<eos_token>
 Assistant:
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__UltraLM-13B-fp16)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 48.05 |
+| ARC (25-shot)        | 57.59 |
+| HellaSwag (10-shot)  | 80.2  |
+| MMLU (5-shot)        | 51.85 |
+| TruthfulQA (0-shot)  | 51.56 |
+| Winogrande (5-shot)  | 75.85 |
+| GSM8K (5-shot)       | 10.69 |
+| DROP (3-shot)        | 8.59  |
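
For reference, the "Avg." row appears to be the plain arithmetic mean of the seven benchmark scores in the added table. A minimal Python sketch (values copied from the table above) checks the arithmetic:

```python
# Sketch: verify that "Avg." matches the mean of the seven benchmark scores.
scores = {
    "ARC (25-shot)": 57.59,
    "HellaSwag (10-shot)": 80.2,
    "MMLU (5-shot)": 51.85,
    "TruthfulQA (0-shot)": 51.56,
    "Winogrande (5-shot)": 75.85,
    "GSM8K (5-shot)": 10.69,
    "DROP (3-shot)": 8.59,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 48.05, matching the "Avg." row in the table
```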