# Tulpar-7b-v1
---
license: llama2
---

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 49.94 |
| ARC (25-shot)       | 57.0  |
| HellaSwag (10-shot) | 79.69 |
| MMLU (5-shot)       | 51.33 |
| TruthfulQA (0-shot) | 51.83 |
| Winogrande (5-shot) | 72.45 |
| GSM8K (5-shot)      | 0.68  |
| DROP (3-shot)       | 36.58 |
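The "Avg." figure appears to be the unweighted mean of the seven benchmark scores. A minimal sketch (not part of the leaderboard tooling) that reproduces it:

```python
# Hypothetical check: the reported Avg. as the unweighted mean
# of the seven benchmark scores, rounded to two decimals.
scores = {
    "ARC (25-shot)": 57.0,
    "HellaSwag (10-shot)": 79.69,
    "MMLU (5-shot)": 51.33,
    "TruthfulQA (0-shot)": 51.83,
    "Winogrande (5-shot)": 72.45,
    "GSM8K (5-shot)": 0.68,
    "DROP (3-shot)": 36.58,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 49.94
```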