# test_llama2_7b

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 43.43 |
| ARC (25-shot)        | 53.07 |
| HellaSwag (10-shot)  | 78.57 |
| MMLU (5-shot)        | 46.86 |
| TruthfulQA (0-shot)  | 38.75 |
| Winogrande (5-shot)  | 74.03 |
| GSM8K (5-shot)       |  7.13 |
| DROP (3-shot)        |  5.61 |
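The Avg. row appears to be the unweighted arithmetic mean of the seven benchmark scores (an assumption about the leaderboard's aggregation, not something stated in this card). A minimal sketch to verify:

```python
# Assumption: "Avg." is the plain mean of the seven per-benchmark scores.
scores = {
    "ARC (25-shot)": 53.07,
    "HellaSwag (10-shot)": 78.57,
    "MMLU (5-shot)": 46.86,
    "TruthfulQA (0-shot)": 38.75,
    "Winogrande (5-shot)": 74.03,
    "GSM8K (5-shot)": 7.13,
    "DROP (3-shot)": 5.61,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 43.43, matching the Avg. row above
```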