Adding Evaluation Results #2
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -28,3 +28,17 @@ response_ids = outputs[0][input_ids_len:]
 response = tokenizer.decode(response_ids)
 print(response)
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WangZeJun__bloom-820m-chat)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 26.55 |
+| ARC (25-shot)         | 23.38 |
+| HellaSwag (10-shot)   | 34.16 |
+| MMLU (5-shot)         | 25.98 |
+| TruthfulQA (0-shot)   | 40.32 |
+| Winogrande (5-shot)   | 53.2  |
+| GSM8K (5-shot)        | 0.0   |
+| DROP (3-shot)         | 8.85  |
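
As a quick sanity check on the table this PR adds: the Avg. row on the Open LLM Leaderboard is the mean of the per-task scores. A minimal Python sketch, using the values copied from the table above:

```python
# Recompute the leaderboard average from the per-task scores in the PR's table.
scores = {
    "ARC (25-shot)": 23.38,
    "HellaSwag (10-shot)": 34.16,
    "MMLU (5-shot)": 25.98,
    "TruthfulQA (0-shot)": 40.32,
    "Winogrande (5-shot)": 53.2,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 8.85,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> 26.56 from the rounded scores
```

The rounded per-task scores give 26.56 rather than the reported 26.55, presumably because the leaderboard averages the unrounded results before rounding for display.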
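
For the per-example records behind these summary numbers, the linked details dataset can be loaded with `datasets`. This is a sketch only: the config name (`harness_winogrande_5`) and the `latest` split follow the leaderboard's usual details-dataset layout and are assumptions, not something stated in this PR.

```python
from datasets import load_dataset

# Assumed config and split names, per the leaderboard's details-dataset convention.
details = load_dataset(
    "open-llm-leaderboard/details_WangZeJun__bloom-820m-chat",
    "harness_winogrande_5",
    split="latest",
)
print(details[0])  # one per-example evaluation record
```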