# Evaluation (Standardized)
## Goal
Make results comparable across community fine-tunes.
## Recommended metrics
- HumanEval pass@1
- Optional: MBPP pass@1
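For reference, pass@k is usually computed with the unbiased estimator from the HumanEval paper: generate `n` samples per problem, count the `c` that pass the tests, and estimate the probability that at least one of `k` drawn samples passes. A minimal sketch (the function name `pass_at_k` is just illustrative):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated for a problem
    c: number of samples that passed the tests
    k: the k in pass@k
    """
    if n - c < k:
        # Fewer than k failures exist, so any k-sample draw
        # must include at least one passing sample.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Average pass_at_k over all problems to get the benchmark score.
```

With `n = 1` (one sample per problem, the common pass@1 setup) this reduces to the plain fraction of problems solved.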
## Suggested tool
Use lm-evaluation-harness (or your preferred harness) to run HumanEval, and report the following settings alongside your scores:
- base model
- training recipe (full / LoRA / QLoRA)
- sequence length
- epochs
- hardware
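A typical invocation looks roughly like the sketch below; exact flag names and the HumanEval task identifier vary across harness versions, so check `lm_eval --tasks list` for your install (the model path is a placeholder):

```shell
# Sketch only: verify flags against your lm-evaluation-harness version.
lm_eval \
  --model hf \
  --model_args pretrained=your-org/your-finetune \
  --tasks humaneval \
  --batch_size 8
```

Reporting the full command used makes runs reproducible even when harness defaults change between releases.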