# test-22B
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 32.8  |
| ARC (25-shot)        | 39.42 |
| HellaSwag (10-shot)  | 64.51 |
| MMLU (5-shot)        | 27.13 |
| TruthfulQA (0-shot)  | 37.13 |
| Winogrande (5-shot)  | 57.7  |
| GSM8K (5-shot)       | 0.38  |
| DROP (3-shot)        | 3.32  |
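The "Avg." row appears to be the unweighted mean of the seven benchmark scores; a minimal sketch to verify that (the dictionary below just restates the table values):

```python
# Scores from the table above; "Avg." should be their unweighted mean.
scores = {
    "ARC (25-shot)": 39.42,
    "HellaSwag (10-shot)": 64.51,
    "MMLU (5-shot)": 27.13,
    "TruthfulQA (0-shot)": 37.13,
    "Winogrande (5-shot)": 57.7,
    "GSM8K (5-shot)": 0.38,
    "DROP (3-shot)": 3.32,
}

# Unweighted mean, rounded to one decimal as reported on the leaderboard.
avg = round(sum(scores.values()) / len(scores), 1)
print(avg)  # 32.8
```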