# Evaluation (Standardized)

## Goal

Make results comparable across community fine-tunes.

## Recommended metrics

- HumanEval pass@1
- Optional: MBPP pass@1
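For reference, pass@k is usually computed with the unbiased estimator from the HumanEval paper: generate `n` samples per problem, count the `c` that pass the tests, and estimate the probability that at least one of `k` drawn samples passes. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (HumanEval paper).

    n: total samples generated per problem
    c: number of samples that passed the unit tests
    k: evaluation budget (k=1 for pass@1)
    """
    if n - c < k:
        # Fewer failures than the budget: some draw must contain a pass.
        return 1.0
    # P(all k drawn samples fail) = C(n-c, k) / C(n, k)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Average pass_at_k over all problems to get the reported score.
```

With `n = 1` this reduces to plain accuracy, which is why pass@1 is often reported from a single greedy sample.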
## Suggested tool

Use lm-evaluation-harness (or your preferred harness) to run HumanEval, and report the following settings alongside your scores:

- base model
- training recipe (full / LoRA / QLoRA)
- sequence length
- epochs
- hardware
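A sketch of a harness invocation, assuming the v0.4-style `lm_eval` CLI and a hypothetical model name (exact flags vary by harness version; recent versions gate code-execution tasks behind an explicit opt-in flag):

```shell
# "your-org/your-finetune" is a placeholder Hugging Face model ID.
lm_eval --model hf \
  --model_args pretrained=your-org/your-finetune \
  --tasks humaneval \
  --batch_size 8 \
  --confirm_run_unsafe_code
```

Keep the sampling settings (temperature, number of samples) identical across runs you intend to compare, and include them in your report along with the settings listed above.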