---
dataset_name: BenchmarkResults-Migration
license: mit
---
# BenchmarkResults-Migration
This dataset contains benchmark evaluation results for a single selected checkpoint from the MyAwesomeModel training run.
- Selected checkpoint: `step_1000`
- Eval accuracy from checkpoint config:
## Benchmarks and Scores
The table below lists each benchmark and the score produced by running `evaluation/eval.py` on the selected checkpoint. Scores are shown to three decimal places; benchmarks marked N/A did not produce a score.

| Benchmark | Score |
|---|---|
| Math Reasoning | 0.550 |
| Logical Reasoning | 0.819 |
| Common Sense | 0.700 |
| Reading Comprehension | 0.644 |
| Question Answering | 0.792 |
| Text Classification | N/A |
| Sentiment Analysis | 0.607 |
| Code Generation | N/A |
| Creative Writing | 0.676 |
| Dialogue Generation | N/A |
| Summarization | 0.828 |
| Translation | 0.679 |
| Knowledge Retrieval | 0.736 |
| Instruction Following | 0.575 |
| Safety Evaluation | 0.553 |
**Source:** This dataset was generated by running `evaluation/eval.py` in the repository and packaging the results into a CSV file.
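For downstream use, the packaged CSV can be loaded with the Python standard library. The sketch below assumes a two-column `benchmark,score` layout with the literal string `N/A` for unscored benchmarks; the column names and layout are assumptions about the packaging, not taken from the repository.

```python
import csv
import io

# In-memory sample matching a few rows of the score table above;
# in practice, open the packaged CSV file instead.
SAMPLE = """benchmark,score
Math Reasoning,0.550
Text Classification,N/A
Summarization,0.828
"""

def load_scores(fp):
    """Parse benchmark rows, mapping 'N/A' to None and scores to float."""
    scores = {}
    for row in csv.DictReader(fp):
        raw = row["score"]
        scores[row["benchmark"]] = None if raw == "N/A" else float(raw)
    return scores

scores = load_scores(io.StringIO(SAMPLE))
print(scores["Math Reasoning"])       # 0.55
print(scores["Text Classification"])  # None
```

Mapping `N/A` to `None` rather than `0.0` keeps unscored benchmarks distinguishable from genuinely zero scores when aggregating results.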