# Flash-Llama-3B
---
license: mit
---

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 35.13 |
| ARC (25-shot) | 40.1 |
| HellaSwag (10-shot) | 71.56 |
| MMLU (5-shot) | 26.88 |
| TruthfulQA (0-shot) | 34.74 |
| Winogrande (5-shot) | 66.61 |
| GSM8K (5-shot) | 0.91 |
| DROP (3-shot) | 5.13 |