# HyperScholar-OmniPython-50K / EVAL_README.md

## Evaluation (Standardized)

### Goal

Make results comparable across community fine-tunes by standardizing the benchmark, the metric, and the settings each report includes.

### Recommended metrics

- HumanEval pass@1
- MBPP pass@1 (optional)
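For reference, pass@k is usually computed with the unbiased estimator from the HumanEval benchmark: generate `n` samples per problem, count the `c` that pass the tests, and estimate the chance that at least one of `k` draws passes. A minimal sketch (the function name is ours, not from this repo):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn from n generations (c of which are correct) passes."""
    if n - c < k:
        # Every size-k draw must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the simple fraction c / n:
print(pass_at_k(10, 3, 1))  # ≈ 0.3
```

Reporting pass@1 from a single greedy sample per problem (n=1, temperature 0) is the most common convention and is what most harnesses do by default.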

### Suggested tool

Use lm-evaluation-harness (or your preferred harness) to run HumanEval, and report these settings alongside your score:

- base model
- training recipe (full fine-tune / LoRA / QLoRA)
- sequence length
- number of epochs
- hardware
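A run might look like the sketch below. The model path is a placeholder, and exact flag names can vary between lm-evaluation-harness versions (recent versions require an explicit opt-in flag because HumanEval executes generated code), so check `lm_eval --help` for your installed release:

```shell
# Sketch only: placeholder model name; verify flags against your
# installed lm-evaluation-harness version.
pip install lm-eval

lm_eval \
  --model hf \
  --model_args pretrained=your-org/your-finetune,dtype=bfloat16 \
  --tasks humaneval \
  --batch_size 8 \
  --confirm_run_unsafe_code
```

Run HumanEval in a sandboxed environment: the benchmark executes model-generated Python against the reference tests.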