Adding Evaluation Results

#2
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -68,3 +68,17 @@ hf-causal-experimental (pretrained=winglian/basilisk-4b,use_accelerate=True,trus
  | | |acc_norm|0.6921|± |0.0108|
  |winogrande | 0|acc |0.5399|± |0.0140|
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_winglian__basilisk-4b)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 27.26 |
+ | ARC (25-shot) | 25.85 |
+ | HellaSwag (10-shot) | 39.6 |
+ | MMLU (5-shot) | 24.61 |
+ | TruthfulQA (0-shot) | 43.74 |
+ | Winogrande (5-shot) | 53.12 |
+ | GSM8K (5-shot) | 0.0 |
+ | DROP (3-shot) | 3.89 |
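
For readers who want to go past the aggregate table, below is a minimal sketch of pulling the underlying per-task records from the linked details dataset with the `datasets` library; the config layout (one config per harness task) and row contents are assumptions, so check the dataset card for the exact names.

```python
# Sketch: inspecting the detailed leaderboard results linked above.
# Assumes the `datasets` library is installed and the details dataset
# exposes one config per evaluated task (an assumption -- verify on the
# dataset card before relying on specific config names).
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_winglian__basilisk-4b"

# List the available configs (typically one per harness task/few-shot setting).
configs = get_dataset_config_names(repo)
print(configs)

# Load one config; its rows hold the per-example records behind the
# aggregate numbers shown in the table above.
details = load_dataset(repo, configs[0])
print(details)
```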