lbourdois committed
Commit 05967ed · verified · Parent: f68b294

Improve language tag

Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +355 -343
README.md CHANGED
@@ -1,344 +1,356 @@
1
- ---
2
- license: other
3
- license_name: qwen
4
- license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
5
- language:
6
- - en
7
- pipeline_tag: text-generation
8
- base_model:
9
- - Qwen/Qwen2.5-72B-Instruct
10
- model-index:
11
- - name: Qwen2.5-95B-Instruct
12
- results:
13
- - task:
14
- type: text-generation
15
- name: Text Generation
16
- dataset:
17
- name: IFEval (0-Shot)
18
- type: HuggingFaceH4/ifeval
19
- args:
20
- num_few_shot: 0
21
- metrics:
22
- - type: inst_level_strict_acc and prompt_level_strict_acc
23
- value: 84.31
24
- name: strict accuracy
25
- source:
26
- url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
27
- name: Open LLM Leaderboard
28
- - task:
29
- type: text-generation
30
- name: Text Generation
31
- dataset:
32
- name: BBH (3-Shot)
33
- type: BBH
34
- args:
35
- num_few_shot: 3
36
- metrics:
37
- - type: acc_norm
38
- value: 58.53
39
- name: normalized accuracy
40
- source:
41
- url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
42
- name: Open LLM Leaderboard
43
- - task:
44
- type: text-generation
45
- name: Text Generation
46
- dataset:
47
- name: MATH Lvl 5 (4-Shot)
48
- type: hendrycks/competition_math
49
- args:
50
- num_few_shot: 4
51
- metrics:
52
- - type: exact_match
53
- value: 6.04
54
- name: exact match
55
- source:
56
- url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
57
- name: Open LLM Leaderboard
58
- - task:
59
- type: text-generation
60
- name: Text Generation
61
- dataset:
62
- name: GPQA (0-shot)
63
- type: Idavidrein/gpqa
64
- args:
65
- num_few_shot: 0
66
- metrics:
67
- - type: acc_norm
68
- value: 15.21
69
- name: acc_norm
70
- source:
71
- url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
72
- name: Open LLM Leaderboard
73
- - task:
74
- type: text-generation
75
- name: Text Generation
76
- dataset:
77
- name: MuSR (0-shot)
78
- type: TAUR-Lab/MuSR
79
- args:
80
- num_few_shot: 0
81
- metrics:
82
- - type: acc_norm
83
- value: 13.61
84
- name: acc_norm
85
- source:
86
- url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
87
- name: Open LLM Leaderboard
88
- - task:
89
- type: text-generation
90
- name: Text Generation
91
- dataset:
92
- name: MMLU-PRO (5-shot)
93
- type: TIGER-Lab/MMLU-Pro
94
- config: main
95
- split: test
96
- args:
97
- num_few_shot: 5
98
- metrics:
99
- - type: acc
100
- value: 46.85
101
- name: accuracy
102
- source:
103
- url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
104
- name: Open LLM Leaderboard
105
- tags:
106
- - chat
107
- ---
108
-
109
- # Qwen2.5-95B-Instruct
110
-
111
- Qwen2.5-95B-Instruct is a [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
112
-
113
- The layer ranges chosen for this merge were inspired by a rough layer similarity analysis of [ssmits/Falcon2-5.5B-multilingual](https://huggingface.co/ssmits/Falcon2-5.5B-multilingual). Layer similarity analysis examines the outputs of different layers in a neural network to determine how similar or different they are, which helps identify the layers that contribute most to the model's performance. In the case of the Falcon-11B model, this analysis across multiple languages revealed that the first half of the layers was more important for maintaining performance. The same analysis can also guide where to slice the model and where to add extra layers for better next-token prediction, potentially yielding a more creative and capable architecture.
114
-
115
- - [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
116
- - [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
117
- - [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct)
118
-
119
- Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating the original model, to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit, and to [Maxime Labonne](https://huggingface.co/mlabonne) for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.
120
-
121
- ## 🔍 Applications
122
-
123
- This model is likely best suited to creative writing tasks. It uses the Qwen chat template with a default context window of 128K tokens.
124
-
125
- The model may be more creative than the base 72B model and could even outperform it on some tasks.
126
-
127
- ## ⚡ Quantized models
128
-
129
- To be quantized.
130
-
131
- * **GGUF**: [Link to GGUF model]
132
- * **EXL2**: [Link to EXL2 model]
133
- * **mlx**: [Link to mlx model]
134
-
135
- ## 🏆 Evaluation
136
- This model has yet to be thoroughly evaluated. It is expected to excel in creative writing and more but may have limitations in other tasks.
137
- Use it with caution and don't expect it to outperform state-of-the-art models outside of specific creative use cases.
138
-
139
- Once the model has been more thoroughly tested, this section will be updated with:
140
-
141
- * Links to evaluation threads on social media platforms
142
- * Examples of the model's performance in creative writing tasks
143
- * Comparisons with other large language models in various applications
144
- * Community feedback and use cases
145
-
146
- We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations.
147
-
148
- ## 🧩 Configuration
149
-
150
- ```yaml
151
- slices:
152
- - sources:
153
-   - layer_range: [0, 10]
154
-     model: Qwen/Qwen2.5-72B-Instruct
155
- - sources:
156
-   - layer_range: [5, 15]
157
-     model: Qwen/Qwen2.5-72B-Instruct
158
- - sources:
159
-   - layer_range: [10, 20]
160
-     model: Qwen/Qwen2.5-72B-Instruct
161
- - sources:
162
-   - layer_range: [15, 25]
163
-     model: Qwen/Qwen2.5-72B-Instruct
164
- - sources:
165
-   - layer_range: [20, 30]
166
-     model: Qwen/Qwen2.5-72B-Instruct
167
- - sources:
168
-   - layer_range: [25, 80]
169
-     model: Qwen/Qwen2.5-72B-Instruct
170
- dtype: bfloat16
171
- merge_method: passthrough
172
- ```
173
-
174
- ## 💻 Usage
175
-
176
- ```python
177
- # Install dependencies first: pip install -qU transformers accelerate
178
-
179
- from transformers import AutoTokenizer
180
- import transformers
181
- import torch
182
-
183
- model = "ssmits/Qwen2.5-95B-Instruct"
184
- messages = [{"role": "user", "content": "What is a large language model?"}]
185
-
186
- tokenizer = AutoTokenizer.from_pretrained(model)
187
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
188
- pipeline = transformers.pipeline(
189
- "text-generation",
190
- model=model,
191
- torch_dtype=torch.float16,
192
- device_map="auto",
193
- )
194
-
195
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
196
- print(outputs[0]["generated_text"])
197
- ```
198
-
199
- ## 🏆 Evaluation
200
-
201
- Initial benchmarks show interesting performance characteristics compared to the 72B model:
202
-
203
- ### Strengths
204
- The 95B model shows notable improvements in:
205
-
206
- 1. **Mathematical Reasoning**
207
- - Up to 5.83x improvement in algebra tasks
208
- - 3.33x improvement in pre-algebra
209
- - Consistent gains across geometry, number theory, and probability tasks
210
- - Overall stronger performance in complex mathematical reasoning
211
-
212
- 2. **Spatial & Object Understanding**
213
- - 11% improvement in object placement tasks
214
- - 7% better at tabular data interpretation
215
- - Enhanced performance in logical deduction with multiple objects
216
-
217
- 3. **Complex Language Tasks**
218
- - 4% improvement in disambiguation tasks
219
- - 2% better at movie recommendations
220
- - Slight improvements in hyperbaton (complex word order) tasks
221
-
222
- 4. **Creative & Analytical Reasoning**
223
- - 10% improvement in murder mystery solving
224
- - Better performance in tasks requiring creative problem-solving
225
-
226
- ### Areas for Consideration
227
- While the model shows improvements in specific areas, users should note that the 72B model still performs better in many general language and reasoning tasks. The 95B version appears to excel particularly in mathematical and spatial reasoning while maintaining comparable performance in other areas.
228
-
229
- ### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
230
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ssmits__Qwen2.5-95B-Instruct)
231
-
232
- | Metric |Value|
233
- |-------------------|----:|
234
- |Avg. |37.43|
235
- |IFEval (0-Shot) |84.31|
236
- |BBH (3-Shot) |58.53|
237
- |MATH Lvl 5 (4-Shot)| 6.04|
238
- |GPQA (0-shot) |15.21|
239
- |MuSR (0-shot) |13.61|
240
- |MMLU-PRO (5-shot) |46.85|
241
-
242
-
243
- | Key | 72b Result | 95b Result | Difference | Which is Higher | Multiplier |
244
- |:--------------------------------------------------------------------------|-------------:|-------------:|-------------:|:------------------|:-------------|
245
- | leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
246
- | leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
247
- | leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
248
- | leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | N/A |
249
- | leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
250
- | leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
251
- | leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
252
- | leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
253
- | leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
254
- | leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
255
- | leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
256
- | leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
257
- | leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
258
- | leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
259
- | leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
260
- | leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
261
- | leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
262
- | leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
263
- | leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
264
- | leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | N/A |
265
- | leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
266
- | leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | N/A |
267
- | leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | N/A |
268
- | leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
269
- | leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
270
- | leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
271
- | leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
272
- | leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
273
- | leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
274
- | leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
275
- | leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
276
- | leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
277
- | leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
278
- | leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
279
- | leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
280
- | leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
281
- | leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
282
- | leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
283
- | all.leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
284
- | all.leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
285
- | all.leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
286
- | all.leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | N/A |
287
- | all.leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
288
- | all.leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
289
- | all.leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
290
- | all.leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
291
- | all.leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
292
- | all.leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
293
- | all.leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
294
- | all.leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
295
- | all.leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
296
- | all.leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
297
- | all.leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
298
- | all.leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
299
- | all.leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
300
- | all.leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
301
- | all.leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
302
- | all.leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | N/A |
303
- | all.leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
304
- | all.leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | N/A |
305
- | all.leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | N/A |
306
- | all.leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
307
- | all.leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
308
- | all.leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
309
- | all.leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
310
- | all.leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
311
- | all.leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
312
- | all.leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
313
- | all.leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
314
- | all.leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
315
- | all.leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
316
- | all.leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
317
- | all.leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
318
- | all.leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
319
- | all.leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
320
- | all.leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
321
- | all.leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
322
- | all.leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
323
- | all.leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
324
- | all.leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
325
- | all.leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
326
- | all.leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
327
- | all.leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
328
- | all.leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
329
- | all.leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
330
- | all.leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
331
- | all.leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
332
- | all.leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
333
- | leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
334
- | leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
335
- | leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
336
- | leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
337
- | leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
338
- | leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
339
- | leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
340
- | leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
341
- | leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
342
- | leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
343
- | leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
344
  | leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
 
1
+ ---
2
+ license: other
3
+ license_name: qwen
4
+ license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
5
+ language:
6
+ - zho
7
+ - eng
8
+ - fra
9
+ - spa
10
+ - por
11
+ - deu
12
+ - ita
13
+ - rus
14
+ - jpn
15
+ - kor
16
+ - vie
17
+ - tha
18
+ - ara
19
+ pipeline_tag: text-generation
20
+ base_model:
21
+ - Qwen/Qwen2.5-72B-Instruct
22
+ tags:
23
+ - chat
24
+ model-index:
25
+ - name: Qwen2.5-95B-Instruct
26
+ results:
27
+ - task:
28
+ type: text-generation
29
+ name: Text Generation
30
+ dataset:
31
+ name: IFEval (0-Shot)
32
+ type: HuggingFaceH4/ifeval
33
+ args:
34
+ num_few_shot: 0
35
+ metrics:
36
+ - type: inst_level_strict_acc and prompt_level_strict_acc
37
+ value: 84.31
38
+ name: strict accuracy
39
+ source:
40
+ url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
41
+ name: Open LLM Leaderboard
42
+ - task:
43
+ type: text-generation
44
+ name: Text Generation
45
+ dataset:
46
+ name: BBH (3-Shot)
47
+ type: BBH
48
+ args:
49
+ num_few_shot: 3
50
+ metrics:
51
+ - type: acc_norm
52
+ value: 58.53
53
+ name: normalized accuracy
54
+ source:
55
+ url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
56
+ name: Open LLM Leaderboard
57
+ - task:
58
+ type: text-generation
59
+ name: Text Generation
60
+ dataset:
61
+ name: MATH Lvl 5 (4-Shot)
62
+ type: hendrycks/competition_math
63
+ args:
64
+ num_few_shot: 4
65
+ metrics:
66
+ - type: exact_match
67
+ value: 6.04
68
+ name: exact match
69
+ source:
70
+ url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
71
+ name: Open LLM Leaderboard
72
+ - task:
73
+ type: text-generation
74
+ name: Text Generation
75
+ dataset:
76
+ name: GPQA (0-shot)
77
+ type: Idavidrein/gpqa
78
+ args:
79
+ num_few_shot: 0
80
+ metrics:
81
+ - type: acc_norm
82
+ value: 15.21
83
+ name: acc_norm
84
+ source:
85
+ url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
86
+ name: Open LLM Leaderboard
87
+ - task:
88
+ type: text-generation
89
+ name: Text Generation
90
+ dataset:
91
+ name: MuSR (0-shot)
92
+ type: TAUR-Lab/MuSR
93
+ args:
94
+ num_few_shot: 0
95
+ metrics:
96
+ - type: acc_norm
97
+ value: 13.61
98
+ name: acc_norm
99
+ source:
100
+ url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
101
+ name: Open LLM Leaderboard
102
+ - task:
103
+ type: text-generation
104
+ name: Text Generation
105
+ dataset:
106
+ name: MMLU-PRO (5-shot)
107
+ type: TIGER-Lab/MMLU-Pro
108
+ config: main
109
+ split: test
110
+ args:
111
+ num_few_shot: 5
112
+ metrics:
113
+ - type: acc
114
+ value: 46.85
115
+ name: accuracy
116
+ source:
117
+ url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ssmits/Qwen2.5-95B-Instruct
118
+ name: Open LLM Leaderboard
119
+ ---
120
+
121
+ # Qwen2.5-95B-Instruct
122
+
123
+ Qwen2.5-95B-Instruct is a [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
124
+
125
+ The layer ranges chosen for this merge were inspired by a rough layer similarity analysis of [ssmits/Falcon2-5.5B-multilingual](https://huggingface.co/ssmits/Falcon2-5.5B-multilingual). Layer similarity analysis examines the outputs of different layers in a neural network to determine how similar or different they are, which helps identify the layers that contribute most to the model's performance. In the case of the Falcon-11B model, this analysis across multiple languages revealed that the first half of the layers was more important for maintaining performance. The same analysis can also guide where to slice the model and where to add extra layers for better next-token prediction, potentially yielding a more creative and capable architecture.
126
+
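+ The following is a minimal, illustrative sketch of this kind of layer similarity analysis, not the exact procedure used for this merge: it compares the hidden states produced by consecutive decoder layers on a sample prompt, and ranges of layers whose outputs are nearly identical are natural candidates for slicing and duplication in a passthrough merge.
+
+ ```python
+ # Illustrative layer-similarity probe. The model id and prompt are placeholders;
+ # a smaller causal LM can be substituted for a quick test.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Qwen/Qwen2.5-72B-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ text = "Layer similarity analysis compares the outputs of adjacent layers."
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ with torch.no_grad():
+     # hidden_states: embedding output followed by one tensor per decoder layer
+     hidden = model(**inputs, output_hidden_states=True).hidden_states
+
+ for i in range(1, len(hidden) - 1):
+     a = hidden[i].flatten().float()
+     b = hidden[i + 1].flatten().float()
+     cos = torch.nn.functional.cosine_similarity(a, b, dim=0).item()
+     print(f"layer {i:2d} -> {i + 1:2d}: cosine similarity {cos:.4f}")
+ ```
+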
127
+ - [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
128
+ - [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
129
+ - [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct)
130
+
131
+ Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating the original model, to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit, and to [Maxime Labonne](https://huggingface.co/mlabonne) for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.
132
+
133
+ ## 🔍 Applications
134
+
135
+ This model is likely best suited to creative writing tasks. It uses the Qwen chat template with a default context window of 128K tokens.
136
+
137
+ The model may be more creative than the base 72B model and could even outperform it on some tasks.
138
+
139
+ ## ⚡ Quantized models
140
+
141
+ To be quantized.
142
+
143
+ * **GGUF**: [Link to GGUF model]
144
+ * **EXL2**: [Link to EXL2 model]
145
+ * **mlx**: [Link to mlx model]
146
+
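+ Until quantized builds are published, the model can also be loaded in 4-bit on the fly with bitsandbytes. This is a minimal sketch; the quantization settings shown are generic defaults rather than values tuned for this merge, and substantial VRAM is still required.
+
+ ```python
+ # Illustrative on-the-fly 4-bit loading with bitsandbytes; the settings below
+ # are generic defaults, not values tuned specifically for this model.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ model_id = "ssmits/Qwen2.5-95B-Instruct"
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ ```
+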
147
+ ## 🏆 Evaluation
148
+ This model has yet to be thoroughly evaluated. It is expected to excel in creative writing and more but may have limitations in other tasks.
149
+ Use it with caution and don't expect it to outperform state-of-the-art models outside of specific creative use cases.
150
+
151
+ Once the model has been more thoroughly tested, this section will be updated with:
152
+
153
+ * Links to evaluation threads on social media platforms
154
+ * Examples of the model's performance in creative writing tasks
155
+ * Comparisons with other large language models in various applications
156
+ * Community feedback and use cases
157
+
158
+ We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations.
159
+
160
+ ## 🧩 Configuration
161
+
162
+ ```yaml
163
+ slices:
164
+ - sources:
165
+   - layer_range: [0, 10]
166
+     model: Qwen/Qwen2.5-72B-Instruct
167
+ - sources:
168
+   - layer_range: [5, 15]
169
+     model: Qwen/Qwen2.5-72B-Instruct
170
+ - sources:
171
+   - layer_range: [10, 20]
172
+     model: Qwen/Qwen2.5-72B-Instruct
173
+ - sources:
174
+   - layer_range: [15, 25]
175
+     model: Qwen/Qwen2.5-72B-Instruct
176
+ - sources:
177
+   - layer_range: [20, 30]
178
+     model: Qwen/Qwen2.5-72B-Instruct
179
+ - sources:
180
+   - layer_range: [25, 80]
181
+     model: Qwen/Qwen2.5-72B-Instruct
182
+ dtype: bfloat16
183
+ merge_method: passthrough
184
+ ```
185
+
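+ As a rough sketch of how a configuration like this can be executed: save the YAML above to a file (here assumed to be named `config.yaml`) and pass it to MergeKit's `mergekit-yaml` command, wrapped below in Python for convenience. The output directory name and the optional `--cuda` flag are only examples.
+
+ ```python
+ # Rough sketch: run the merge via MergeKit's mergekit-yaml command-line tool.
+ # Assumes mergekit is installed and the configuration is saved as config.yaml.
+ import subprocess
+
+ subprocess.run(
+     ["mergekit-yaml", "config.yaml", "./Qwen2.5-95B-Instruct", "--cuda"],
+     check=True,
+ )
+ ```
+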
186
+ ## 💻 Usage
187
+
188
+ ```python
189
+ # Install dependencies first: pip install -qU transformers accelerate
190
+
191
+ from transformers import AutoTokenizer
192
+ import transformers
193
+ import torch
194
+
195
+ model = "ssmits/Qwen2.5-95B-Instruct"
196
+ messages = [{"role": "user", "content": "What is a large language model?"}]
197
+
198
+ tokenizer = AutoTokenizer.from_pretrained(model)
199
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
200
+ pipeline = transformers.pipeline(
201
+ "text-generation",
202
+ model=model,
203
+ torch_dtype=torch.float16,
204
+ device_map="auto",
205
+ )
206
+
207
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
208
+ print(outputs[0]["generated_text"])
209
+ ```
210
+
211
+ ## 🏆 Evaluation
212
+
213
+ Initial benchmarks show interesting performance characteristics compared to the 72B model:
214
+
215
+ ### Strengths
216
+ The 95B model shows notable improvements in:
217
+
218
+ 1. **Mathematical Reasoning**
219
+ - Up to 5.83x improvement in algebra tasks
220
+ - 3.33x improvement in pre-algebra
221
+ - Consistent gains across geometry, number theory, and probability tasks
222
+ - Overall stronger performance in complex mathematical reasoning
223
+
224
+ 2. **Spatial & Object Understanding**
225
+ - 11% improvement in object placement tasks
226
+ - 7% better at tabular data interpretation
227
+ - Enhanced performance in logical deduction with multiple objects
228
+
229
+ 3. **Complex Language Tasks**
230
+ - 4% improvement in disambiguation tasks
231
+ - 2% better at movie recommendations
232
+ - Slight improvements in hyperbaton (complex word order) tasks
233
+
234
+ 4. **Creative & Analytical Reasoning**
235
+ - 10% improvement in murder mystery solving
236
+ - Better performance in tasks requiring creative problem-solving
237
+
238
+ ### Areas for Consideration
239
+ While the model shows improvements in specific areas, users should note that the 72B model still performs better in many general language and reasoning tasks. The 95B version appears to excel particularly in mathematical and spatial reasoning while maintaining comparable performance in other areas.
240
+
241
+ ### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
242
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ssmits__Qwen2.5-95B-Instruct)
243
+
244
+ | Metric |Value|
245
+ |-------------------|----:|
246
+ |Avg. |37.43|
247
+ |IFEval (0-Shot) |84.31|
248
+ |BBH (3-Shot) |58.53|
249
+ |MATH Lvl 5 (4-Shot)| 6.04|
250
+ |GPQA (0-shot) |15.21|
251
+ |MuSR (0-shot) |13.61|
252
+ |MMLU-PRO (5-shot) |46.85|
253
+
254
+
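+ In the comparison table below, `Difference` is the 95B score minus the 72B score, and `Multiplier` is the 95B score divided by the 72B score (marked N/A where the 72B score is 0). A small sketch of the calculation, using a few rounded values taken from the table:
+
+ ```python
+ # How the Difference and Multiplier columns are derived from the two scores.
+ # The inputs here are the rounded values printed in the table, so the last
+ # digit may differ slightly for rows computed from unrounded results.
+ def compare(score_72b, score_95b):
+     diff = score_95b - score_72b
+     ratio = "N/A" if score_72b == 0 else f"{score_95b / score_72b:.2f}"
+     return diff, ratio
+
+ rows = [
+     ("leaderboard_musr.acc_norm,none", 0.419, 0.427),
+     ("leaderboard_bbh_penguins_in_a_table.acc_norm,none", 0.719, 0.767),
+     ("leaderboard_math_geometry_hard.exact_match,none", 0.0, 0.008),
+ ]
+ for key, r72, r95 in rows:
+     diff, ratio = compare(r72, r95)
+     print(f"{key}: difference {diff:+.3f}, multiplier {ratio}")
+ ```
+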
255
+ | Key | 72b Result | 95b Result | Difference | Which is Higher | Multiplier |
256
+ |:--------------------------------------------------------------------------|-------------:|-------------:|-------------:|:------------------|:-------------|
257
+ | leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
258
+ | leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
259
+ | leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
260
+ | leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | N/A |
261
+ | leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
262
+ | leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
263
+ | leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
264
+ | leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
265
+ | leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
266
+ | leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
267
+ | leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
268
+ | leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
269
+ | leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
270
+ | leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
271
+ | leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
272
+ | leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
273
+ | leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
274
+ | leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
275
+ | leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
276
+ | leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | N/A |
277
+ | leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
278
+ | leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | N/A |
279
+ | leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | N/A |
280
+ | leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
281
+ | leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
282
+ | leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
283
+ | leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
284
+ | leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
285
+ | leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
286
+ | leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
287
+ | leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
288
+ | leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
289
+ | leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
290
+ | leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
291
+ | leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
292
+ | leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
293
+ | leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
294
+ | leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
295
+ | all.leaderboard_musr.acc_norm,none | 0.419 | 0.427 | 0.008 | 95b | 1.02 |
296
+ | all.leaderboard_bbh_sports_understanding.acc_norm,none | 0.892 | 0.876 | -0.016 | 72b | 0.98 |
297
+ | all.leaderboard_bbh_logical_deduction_three_objects.acc_norm,none | 0.94 | 0.928 | -0.012 | 72b | 0.99 |
298
+ | all.leaderboard_math_geometry_hard.exact_match,none | 0 | 0.008 | 0.008 | 95b | N/A |
299
+ | all.leaderboard_gpqa.acc_norm,none | 0.375 | 0.364 | -0.011 | 72b | 0.97 |
300
+ | all.leaderboard_math_hard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
301
+ | all.leaderboard.exact_match,none | 0.012 | 0.06 | 0.048 | 95b | 5.00 |
302
+ | all.leaderboard.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
303
+ | all.leaderboard.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
304
+ | all.leaderboard.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
305
+ | all.leaderboard.acc_norm,none | 0.641 | 0.622 | -0.020 | 72b | 0.97 |
306
+ | all.leaderboard.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
307
+ | all.leaderboard.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
308
+ | all.leaderboard_bbh_causal_judgement.acc_norm,none | 0.668 | 0.663 | -0.005 | 72b | 0.99 |
309
+ | all.leaderboard_bbh_salient_translation_error_detection.acc_norm,none | 0.668 | 0.588 | -0.080 | 72b | 0.88 |
310
+ | all.leaderboard_gpqa_extended.acc_norm,none | 0.372 | 0.364 | -0.007 | 72b | 0.98 |
311
+ | all.leaderboard_math_prealgebra_hard.exact_match,none | 0.047 | 0.155 | 0.109 | 95b | 3.33 |
312
+ | all.leaderboard_math_algebra_hard.exact_match,none | 0.02 | 0.114 | 0.094 | 95b | 5.83 |
313
+ | all.leaderboard_bbh_boolean_expressions.acc_norm,none | 0.936 | 0.92 | -0.016 | 72b | 0.98 |
314
+ | all.leaderboard_math_num_theory_hard.exact_match,none | 0 | 0.058 | 0.058 | 95b | N/A |
315
+ | all.leaderboard_bbh_movie_recommendation.acc_norm,none | 0.768 | 0.78 | 0.012 | 95b | 1.02 |
316
+ | all.leaderboard_math_counting_and_prob_hard.exact_match,none | 0 | 0.024 | 0.024 | 95b | N/A |
317
+ | all.leaderboard_math_intermediate_algebra_hard.exact_match,none | 0 | 0.004 | 0.004 | 95b | N/A |
318
+ | all.leaderboard_ifeval.prompt_level_strict_acc,none | 0.839 | 0.813 | -0.026 | 72b | 0.97 |
319
+ | all.leaderboard_ifeval.inst_level_strict_acc,none | 0.888 | 0.873 | -0.016 | 72b | 0.98 |
320
+ | all.leaderboard_ifeval.inst_level_loose_acc,none | 0.904 | 0.891 | -0.013 | 72b | 0.99 |
321
+ | all.leaderboard_ifeval.prompt_level_loose_acc,none | 0.861 | 0.839 | -0.022 | 72b | 0.97 |
322
+ | all.leaderboard_bbh_snarks.acc_norm,none | 0.927 | 0.904 | -0.022 | 72b | 0.98 |
323
+ | all.leaderboard_bbh_web_of_lies.acc_norm,none | 0.676 | 0.616 | -0.060 | 72b | 0.91 |
324
+ | all.leaderboard_bbh_penguins_in_a_table.acc_norm,none | 0.719 | 0.767 | 0.048 | 95b | 1.07 |
325
+ | all.leaderboard_bbh_hyperbaton.acc_norm,none | 0.892 | 0.9 | 0.008 | 95b | 1.01 |
326
+ | all.leaderboard_bbh_object_counting.acc_norm,none | 0.612 | 0.544 | -0.068 | 72b | 0.89 |
327
+ | all.leaderboard_musr_object_placements.acc_norm,none | 0.258 | 0.285 | 0.027 | 95b | 1.11 |
328
+ | all.leaderboard_bbh_logical_deduction_five_objects.acc_norm,none | 0.704 | 0.592 | -0.112 | 72b | 0.84 |
329
+ | all.leaderboard_musr_team_allocation.acc_norm,none | 0.456 | 0.396 | -0.060 | 72b | 0.87 |
330
+ | all.leaderboard_bbh_navigate.acc_norm,none | 0.832 | 0.788 | -0.044 | 72b | 0.95 |
331
+ | all.leaderboard_bbh_tracking_shuffled_objects_seven_objects.acc_norm,none | 0.34 | 0.304 | -0.036 | 72b | 0.89 |
332
+ | all.leaderboard_bbh_formal_fallacies.acc_norm,none | 0.776 | 0.756 | -0.020 | 72b | 0.97 |
333
+ | all.leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
334
+ | all.leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
335
+ | all.leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
336
+ | all.leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
337
+ | all.leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
338
+ | all.leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
339
+ | all.leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
340
+ | all.leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
341
+ | all.leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
342
+ | all.leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
343
+ | all.leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
344
+ | all.leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |
345
+ | leaderboard_gpqa_main.acc_norm,none | 0.375 | 0.355 | -0.020 | 72b | 0.95 |
346
+ | leaderboard_bbh_disambiguation_qa.acc_norm,none | 0.744 | 0.772 | 0.028 | 95b | 1.04 |
347
+ | leaderboard_bbh_tracking_shuffled_objects_five_objects.acc_norm,none | 0.32 | 0.284 | -0.036 | 72b | 0.89 |
348
+ | leaderboard_bbh_date_understanding.acc_norm,none | 0.784 | 0.764 | -0.020 | 72b | 0.97 |
349
+ | leaderboard_bbh_geometric_shapes.acc_norm,none | 0.464 | 0.412 | -0.052 | 72b | 0.89 |
350
+ | leaderboard_bbh_reasoning_about_colored_objects.acc_norm,none | 0.864 | 0.84 | -0.024 | 72b | 0.97 |
351
+ | leaderboard_musr_murder_mysteries.acc_norm,none | 0.548 | 0.604 | 0.056 | 95b | 1.10 |
352
+ | leaderboard_bbh_ruin_names.acc_norm,none | 0.888 | 0.86 | -0.028 | 72b | 0.97 |
353
+ | leaderboard_bbh_logical_deduction_seven_objects.acc_norm,none | 0.644 | 0.664 | 0.020 | 95b | 1.03 |
354
+ | leaderboard_bbh.acc_norm,none | 0.726 | 0.701 | -0.025 | 72b | 0.97 |
355
+ | leaderboard_bbh_temporal_sequences.acc_norm,none | 0.996 | 0.968 | -0.028 | 72b | 0.97 |
356
  | leaderboard_mmlu_pro.acc,none | 0.563 | 0.522 | -0.041 | 72b | 0.93 |