# Stable-Vicuna-13B
**License:** gpl-3.0

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 42.09 |
| ARC (25-shot)        | 53.41 |
| HellaSwag (10-shot)  | 78.57 |
| MMLU (5-shot)        | 50.37 |
| TruthfulQA (0-shot)  | 48.36 |
| Winogrande (5-shot)  | 56.99 |
| GSM8K (5-shot)       |  0.0  |
| DROP (3-shot)        |  6.94 |
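As a quick sanity check (an illustrative snippet, not part of the evaluation harness), the reported average is simply the unweighted mean of the seven per-benchmark scores:

```python
# Illustrative only: reproduce the "Avg." row from the seven scores above.
scores = {
    "ARC (25-shot)": 53.41,
    "HellaSwag (10-shot)": 78.57,
    "MMLU (5-shot)": 50.37,
    "TruthfulQA (0-shot)": 48.36,
    "Winogrande (5-shot)": 56.99,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 6.94,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 42.09
```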