
# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 43.99 |
| ARC (25-shot)        | 51.45 |
| HellaSwag (10-shot)  | 76.77 |
| MMLU (5-shot)        | 40.61 |
| TruthfulQA (0-shot)  | 44.34 |
| Winogrande (5-shot)  | 69.77 |
| GSM8K (5-shot)       |  3.41 |
| DROP (3-shot)        | 21.59 |
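
The Avg. row appears to be the unweighted mean of the seven benchmark scores above; a minimal check in Python (values copied from the table):

```python
# Benchmark scores from the table above (Open LLM Leaderboard results).
scores = {
    "ARC (25-shot)": 51.45,
    "HellaSwag (10-shot)": 76.77,
    "MMLU (5-shot)": 40.61,
    "TruthfulQA (0-shot)": 44.34,
    "Winogrande (5-shot)": 69.77,
    "GSM8K (5-shot)": 3.41,
    "DROP (3-shot)": 21.59,
}

# Simple mean over the seven benchmarks; matches the reported Avg. of 43.99.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")
```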