# GPT_Large_Quantized
---
license: unknown
---

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 25.03 |
| ARC (25-shot) | 27.05 |
| HellaSwag (10-shot) | 26.29 |
| MMLU (5-shot) | 24.12 |
| TruthfulQA (0-shot) | 48.46 |
| Winogrande (5-shot) | 49.33 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.0 |