# BenchmarkResults-Migration
This dataset contains benchmark evaluation results for a single selected checkpoint from the MyAwesomeModel training run.
- Selected checkpoint: `step_1000`
- Eval accuracy from checkpoint config: not recorded (`null` in the source data)
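The results can be loaded directly with pandas. The sketch below assumes the packaged CSV is named `benchmark_results.csv`; the exact filename is not specified in this card, so adjust it to the file actually shipped with the dataset.

```python
import pandas as pd

# NOTE: "benchmark_results.csv" is an assumed filename; replace it with the
# CSV file included in this dataset.
df = pd.read_csv("benchmark_results.csv")

# The file contains a single row for the selected checkpoint (step_1000).
row = df.iloc[0]
print(row["step_number"])        # 1000
print(row["Logical Reasoning"])  # 0.819
```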
## Benchmarks and Scores
The list below gives each benchmark and the score produced by running `evaluation/eval.py` on the selected checkpoint. Scores are shown to three decimal places; benchmarks without a score are marked N/A.
- Math Reasoning: 0.550
- Logical Reasoning: 0.819
- Common Sense: 0.700
- Reading Comprehension: 0.644
- Question Answering: 0.792
- Text Classification: N/A
- Sentiment Analysis: 0.607
- Code Generation: N/A
- Creative Writing: 0.676
- Dialogue Generation: N/A
- Summarization: 0.828
- Translation: 0.679
- Knowledge Retrieval: 0.736
- Instruction Following: 0.575
- Safety Evaluation: 0.553
**Source:** This dataset was generated by running `evaluation/eval.py` in the repository and packaging the results into a CSV file.
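As an illustration of that packaging step, the sketch below writes a dictionary of benchmark scores (keyed by the benchmark names listed above) to a single-row CSV. The helper name, signature, and output path are hypothetical and are not the actual interface of `evaluation/eval.py`.

```python
import csv

def package_results(step_number, eval_accuracy, scores,
                    out_path="benchmark_results.csv"):
    """Write one checkpoint's benchmark scores to a single-row CSV.

    `scores` maps benchmark names (e.g. "Math Reasoning") to floats, or None
    for benchmarks that were not evaluated. Hypothetical helper, shown only
    to illustrate the CSV layout of this dataset.
    """
    fieldnames = ["step_number", "eval_accuracy"] + list(scores.keys())
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerow({"step_number": step_number,
                         "eval_accuracy": eval_accuracy,
                         **scores})

# Example call using a subset of the scores reported in this card:
package_results(
    step_number=1000,
    eval_accuracy=None,
    scores={"Math Reasoning": 0.550, "Logical Reasoning": 0.819,
            "Common Sense": 0.700, "Safety Evaluation": 0.553},
)
```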