Adding Evaluation Results

#1
Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
 license: mit
-base_model: gpt2
 tags:
 - generated_from_trainer
 metrics:
 - accuracy
+base_model: gpt2
 model-index:
 - name: artgpt
   results: []
@@ -55,3 +55,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.1.2+cu121
 - Datasets 2.16.0
 - Tokenizers 0.15.0
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |29.18|
+|AI2 Reasoning Challenge (25-Shot)|25.85|
+|HellaSwag (10-Shot) |31.40|
+|MMLU (5-Shot) |26.07|
+|TruthfulQA (0-shot) |40.75|
+|Winogrande (5-shot) |50.99|
+|GSM8k (5-shot) | 0.00|
+
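The Avg. row in the added table is consistent with a plain arithmetic mean of the six benchmark scores; a minimal sketch checking that, assuming nothing more than an unweighted mean is involved:

```python
# Quick check that the reported Avg. (29.18) equals the arithmetic mean
# of the six Open LLM Leaderboard benchmark scores listed in the table.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 25.85,
    "HellaSwag (10-Shot)": 31.40,
    "MMLU (5-Shot)": 26.07,
    "TruthfulQA (0-shot)": 40.75,
    "Winogrande (5-shot)": 50.99,
    "GSM8k (5-shot)": 0.00,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 29.18
```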
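The "Detailed results" link points at a per-task details dataset on the Hub. A hedged sketch of pulling it down with the `datasets` library; the config discovery is generic, but the `"latest"` split name is an assumption about how the leaderboard details repos are usually organized and may differ:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel"

# List the per-task configs published for this model; the exact config names
# are set by the evaluation harness, so we print them instead of hard-coding one.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first config. The "latest" split is an assumption; fall back to the
# split names reported by the loading error or dataset viewer if it differs.
details = load_dataset(repo, configs[0], split="latest")
print(details)
```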