FuryAssassin committed
Commit cf62c66 · verified · 1 Parent(s): aa49af1

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +22 -47
README.md CHANGED
@@ -1,58 +1,33 @@
 ---
-annotations_creators:
-- no-annotation
-language_creators:
-- no-language
+dataset_name: BenchmarkResults-Migration
 license: mit
-multilinguality: monolingual
-pretty_name: BenchmarkResults-Migration
-
-dataset_size: 1
 ---
 
 # BenchmarkResults-Migration
 
-This dataset contains a single-row CSV with evaluation results for the chosen model checkpoint migrated from the original repository.
+This dataset contains benchmark evaluation results for a single selected checkpoint from the MyAwesomeModel training run.
 
-## Dataset summary
-
-- Repository: MyAwesomeModel evaluation migration
-- Selected checkpoint: step_1000
-- eval_accuracy (from checkpoint config): N/A
-- Number of benchmarks: 15
-- Generated on: 2026-02-13 19:32:06 UTC
-
-## Benchmarks (columns)
-
-The CSV contains the following columns, in order:
-
-- step_number: integer checkpoint step
-- eval_accuracy: eval_accuracy value from checkpoint config (if available)
+Selected checkpoint: step_1000
+Eval accuracy from checkpoint config:
 
-## Benchmarks and scores
+## Benchmarks and Scores
 
-- Math Reasoning
-- Logical Reasoning
-- Common Sense
-- Reading Comprehension
-- Question Answering
-- Text Classification
-- Sentiment Analysis
-- Code Generation
-- Creative Writing
-- Dialogue Generation
-- Summarization
-- Translation
-- Knowledge Retrieval
-- Instruction Following
-- Safety Evaluation
+The table below lists each benchmark and the score produced by running evaluation/eval.py on the selected checkpoint. Scores are shown with three decimal places.
+
+- Math Reasoning: 0.550
+- Logical Reasoning: 0.819
+- Common Sense: 0.700
+- Reading Comprehension: 0.644
+- Question Answering: 0.792
+- Text Classification: N/A
+- Sentiment Analysis: 0.607
+- Code Generation: N/A
+- Creative Writing: 0.676
+- Dialogue Generation: N/A
+- Summarization: 0.828
+- Translation: 0.679
+- Knowledge Retrieval: 0.736
+- Instruction Following: 0.575
+- Safety Evaluation: 0.553
 
-## Usage
-
-You can download the CSV file directly from the dataset repository and load it with pandas:
-
-```python
-import pandas as pd
-df = pd.read_csv("https://huggingface.co/datasets/FuryAssassin/BenchmarkResults-Migration/resolve/main/benchmark_results.csv")
-print(df)
-```
+Source: This dataset was generated by running evaluation/eval.py in the repository and packaging the results into a CSV file.
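
The new README removes the old Usage section along with its pandas snippet. A minimal sketch of consuming these results with pandas, assuming the packaged CSV has `benchmark` and `score` columns and uses the literal string `N/A` for unscored benchmarks (neither is confirmed by the commit; the scores below are inlined from the README list so the example is self-contained):

```python
import io
import pandas as pd

# Scores as listed in the new README; "N/A" marks benchmarks
# for which evaluation/eval.py produced no numeric score.
# Column names (benchmark, score) are an assumption, not taken
# from the actual CSV in the repository.
csv_text = """benchmark,score
Math Reasoning,0.550
Logical Reasoning,0.819
Common Sense,0.700
Reading Comprehension,0.644
Question Answering,0.792
Text Classification,N/A
Sentiment Analysis,0.607
Code Generation,N/A
Creative Writing,0.676
Dialogue Generation,N/A
Summarization,0.828
Translation,0.679
Knowledge Retrieval,0.736
Instruction Following,0.575
Safety Evaluation,0.553
"""

# na_values turns the literal "N/A" strings into NaN,
# so the score column parses as float instead of object.
df = pd.read_csv(io.StringIO(csv_text), na_values=["N/A"])

print(f"{df['score'].notna().sum()} of {len(df)} benchmarks scored")
print(f"mean score over scored benchmarks: {df['score'].mean():.3f}")
```

To read the actual file instead, replace `io.StringIO(csv_text)` with the dataset's `resolve/main` URL for the CSV; `na_values` keeps the `N/A` rows from breaking the numeric dtype either way.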