leaderboard-pr-bot committed
Commit b67040a · 1 Parent(s): f85d91f

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
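For anyone who wants to inspect the per-benchmark outputs behind the numbers added in the diff below, the leaderboard publishes a details dataset per model (linked in the diff). A minimal sketch of loading it with the `datasets` library; the `harness_winogrande_5` config name and `latest` split follow the leaderboard's usual convention and are assumptions here, not confirmed by this PR:

```python
from datasets import load_dataset

# Assumed convention: one config per benchmark run
# (e.g. "harness_winogrande_5") and a "latest" split
# pointing at the most recent evaluation.
details = load_dataset(
    "open-llm-leaderboard/details_euclaise__falcon_1b_stage1",
    "harness_winogrande_5",
    split="latest",
)
print(details)
```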

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -58,3 +58,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu117
 - Datasets 2.14.5
 - Tokenizers 0.13.3
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_euclaise__falcon_1b_stage1)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 32.77                     |
+| ARC (25-shot)         | 35.15                     |
+| HellaSwag (10-shot)   | 62.4                      |
+| MMLU (5-shot)         | 24.47                     |
+| TruthfulQA (0-shot)   | 40.0                      |
+| Winogrande (5-shot)   | 61.48                     |
+| GSM8K (5-shot)        | 0.0                       |
+| DROP (3-shot)         | 5.89                      |
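As a quick sanity check, the Avg. row is consistent with a plain arithmetic mean of the seven benchmark scores; a minimal sketch, with the values copied from the table above:

```python
# Scores from the table above (Avg. row excluded).
scores = {
    "ARC (25-shot)": 35.15,
    "HellaSwag (10-shot)": 62.4,
    "MMLU (5-shot)": 24.47,
    "TruthfulQA (0-shot)": 40.0,
    "Winogrande (5-shot)": 61.48,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 5.89,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 32.77, matching the Avg. row
```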