# test_llama2_ko_7b

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 25.7  |
| ARC (25-shot)        | 29.95 |
| HellaSwag (10-shot)  | 26.94 |
| MMLU (5-shot)        | 25.62 |
| TruthfulQA (0-shot)  | 49.03 |
| Winogrande (5-shot)  | 48.38 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 0.0   |
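
The reported average is the unweighted arithmetic mean of the seven benchmark scores. A minimal sketch to verify it (variable names here are illustrative, not part of the leaderboard tooling):

```python
# Per-benchmark scores from the table above.
scores = {
    "ARC (25-shot)": 29.95,
    "HellaSwag (10-shot)": 26.94,
    "MMLU (5-shot)": 25.62,
    "TruthfulQA (0-shot)": 49.03,
    "Winogrande (5-shot)": 48.38,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 0.0,
}

# Unweighted mean across all benchmarks, rounded to one decimal.
avg = round(sum(scores.values()) / len(scores), 1)
print(avg)  # 25.7
```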