# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 43.56 |
| ARC (25-shot)        | 53.24 |
| HellaSwag (10-shot)  | 78.78 |
| MMLU (5-shot)        | 46.61 |
| TruthfulQA (0-shot)  | 39.17 |
| Winogrande (5-shot)  | 73.8  |
| GSM8K (5-shot)       | 7.66  |
| DROP (3-shot)        | 5.66  |
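As a sanity check, the reported average is the unweighted mean of the seven benchmark scores. A minimal sketch (variable names are illustrative, not part of the leaderboard tooling):

```python
# Benchmark scores as reported in the table above.
scores = {
    "ARC (25-shot)": 53.24,
    "HellaSwag (10-shot)": 78.78,
    "MMLU (5-shot)": 46.61,
    "TruthfulQA (0-shot)": 39.17,
    "Winogrande (5-shot)": 73.8,
    "GSM8K (5-shot)": 7.66,
    "DROP (3-shot)": 5.66,
}

# Unweighted mean, rounded to two decimals as shown on the leaderboard.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 43.56
```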