---
license: bigscience-bloom-rail-1.0
---

# Bloom_1b_Quantized

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.45 |
| ARC (25-shot)       | 27.73 |
| HellaSwag (10-shot) | 42.83 |
| MMLU (5-shot)       | 26.28 |
| TruthfulQA (0-shot) | 41.82 |
| Winogrande (5-shot) | 55.64 |
| GSM8K (5-shot)      |  0.15 |
| DROP (3-shot)       |  4.71 |