Add library_name and pipeline_tag

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +235 -0
README.md CHANGED
@@ -1,6 +1,19 @@
+---
+license: llama2
+library_name: transformers
+pipeline_tag: text-generation
+base_model:
+- unsloth/llama-2-13b
+- vanillaOVO/WizardMath-13B-V1.0
+- WizardLMTeam/WizardLM-13B-V1.2
+tags:
+- merge
+---
+
 ---
 license: llama2
+library_name: transformers
+pipeline_tag: text-generation
 base_model:
 - unsloth/llama-2-13b
 - vanillaOVO/WizardMath-13B-V1.0
@@ -19,3 +32,225 @@ This repository includes one of the checkpoints used in the paper "Activation-In
 - **AIM:** True
 
 Benchmark results and paper details can be found at the official [GitHub](https://github.com/ahnobari/ActivationInformedMerging.git).
+
+# Usage
+You can reproduce our experiments using the provided code. Below we detail how to replicate each step.
+
+## Merging Models
+If you wish to merge the models yourself instead of using the provided checkpoints, you can do so with the provided `merge.py` script. For example, to perform DARE-TIES merging on the Code, Math, and Instruction-Tuned models, run:
+
+```bash
+python merge.py --method dare_ties --base_model unsloth/llama-2-13b --models_to_merge WizardLMTeam/WizardLM-13B-V1.2,vanillaOVO/WizardMath-13B-V1.0,layoric/llama-2-13b-code-alpaca --save_path ./DARE_TIES_InstructMathCode
+```
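+
+To make concrete what these merges compute, the sketch below shows the plain task-arithmetic rule that DARE-TIES and the other methods build on: each expert contributes its delta ("task vector") from the base model, and the deltas are combined back onto the base weights. This is a minimal illustration under that assumption, not the repo's `merge.py` (which additionally applies DARE dropout and TIES sign election):
+
+```python
+import torch
+
+def task_arithmetic_merge(base_sd, expert_sds, lam=1.0):
+    """theta_merged = theta_base + lam * sum_i (theta_i - theta_base)."""
+    merged = {}
+    for name, base_w in base_sd.items():
+        delta = sum(sd[name] - base_w for sd in expert_sds)  # summed task vectors
+        merged[name] = base_w + lam * delta
+    return merged
+
+# Toy usage with two "experts" over a single 2x2 weight.
+base = {"w": torch.zeros(2, 2)}
+experts = [{"w": torch.ones(2, 2)}, {"w": -0.5 * torch.ones(2, 2)}]
+print(task_arithmetic_merge(base, experts)["w"])  # 0.5 everywhere
+```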
+
+## Evaluating Models on Benchmarks
+Once you have the checkpoints you want to test, you can run the `evaluate_model.py` script to benchmark the model. For example, to run the benchmarks on the model merged above:
+
+```bash
+python evaluate_model.py --model ./DARE_TIES_InstructMathCode
+```
+
+or, to use one of the provided checkpoints:
+
+```bash
+python evaluate_model.py --model ahn1376/DARETies___Code-Math-Instruction_Tuned
+```
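+
+Since the model card now declares `library_name: transformers` and `pipeline_tag: text-generation`, the provided checkpoints can also be loaded directly with `transformers` for a quick sanity check (a minimal sketch; the prompt and generation settings are illustrative):
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+repo = "ahn1376/DARETies___Code-Math-Instruction_Tuned"
+tokenizer = AutoTokenizer.from_pretrained(repo)
+model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")
+
+inputs = tokenizer("Question: What is 12 * 17?\nAnswer:", return_tensors="pt").to(model.device)
+outputs = model.generate(**inputs, max_new_tokens=64)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```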
+
+## Applying AIM to a Merged Model
+If you want to apply AIM to any merged model, you will need to provide both the merged checkpoint and the base model checkpoint. The only hyperparameter in AIM is $\omega$, which we recommend setting between $0.2$ and $0.6$. We set it to $0.4$ for the experiments in our paper, but in some cases lower values (more relaxation) will yield better results. Below is how you can apply AIM to the checkpoint produced by the command above:
+
+```bash
+python performAIM.py --merged_model ./DARE_TIES_InstructMathCode --pretrained_model_name unsloth/llama-2-13b --omega 0.4 --save_path ./DARE_TIES_AIM_InstructMathCode
+```
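+
+Since the best $\omega$ can vary by model population, a simple sweep over the recommended range is one way to pick it (a sketch reusing the script above; save paths are illustrative):
+
+```python
+import subprocess
+
+# Sweep the recommended omega range and save one AIM checkpoint per value.
+for omega in (0.2, 0.3, 0.4, 0.5, 0.6):
+    subprocess.run([
+        "python", "performAIM.py",
+        "--merged_model", "./DARE_TIES_InstructMathCode",
+        "--pretrained_model_name", "unsloth/llama-2-13b",
+        "--omega", str(omega),
+        "--save_path", f"./DARE_TIES_AIM_omega_{omega}",
+    ], check=True)
+```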
+
+# Summary of Findings
+We find that, for nearly all merging methods we tested, applying AIM improves performance, pushes out the Pareto front of the resulting model population, and achieves the highest benchmark scores. The figure below shows how decreasing $\omega$ (more AIM relaxation) leads to further improvements in some models; HV Gain is the hypervolume gained by adding the model to the population of models used for merging (higher is better):
+
+<img width="600px" alt="HV Gain as a function of omega across merging methods" src="https://github.com/user-attachments/assets/5cd5119e-a292-45d4-972f-b2dd6febf6f8" />
+
+We can observe this better by visualizing some of the Pareto fronts for different model populations:
+
+<img width="100%" alt="Pareto fronts for different model populations" src="https://github.com/user-attachments/assets/5d88a71e-16ca-4f71-84f7-6e8de96ea69a" />
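+
+To make the HV Gain metric concrete, here is a minimal two-objective sketch, assuming maximization with a reference point at the origin; the actual metric is computed over all benchmarks, and scores may be normalized first, so treat this as illustrative only:
+
+```python
+def hypervolume_2d(points, ref=(0.0, 0.0)):
+    """Area dominated by `points` (maximization) above the reference point."""
+    pts = sorted(points, key=lambda p: p[0], reverse=True)  # sweep by first objective
+    hv, prev_y = 0.0, ref[1]
+    for x, y in pts:
+        if y > prev_y:  # only non-dominated points add area
+            hv += (x - ref[0]) * (y - prev_y)
+            prev_y = y
+    return hv
+
+def hv_gain(population, candidate, ref=(0.0, 0.0)):
+    """Hypervolume added by `candidate` on top of the existing population."""
+    return hypervolume_2d(population + [candidate], ref) - hypervolume_2d(population, ref)
+
+# Toy example with (HumanEval, MATH) scores taken from the tables below.
+parents = [(26.83, 7.50), (15.24, 13.10)]   # Instruction Tuned, Math
+merged = (19.51, 11.60)                     # DARE-TIES three-way merge with AIM
+print(round(hv_gain(parents, merged), 2))   # positive: the merge expands the front
+```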
+
+Overall, the results of our experiments are as follows for the different tests. Parenthesized values are relative changes versus the corresponding non-AIM row; for example, 29.27 vs. 26.83 on HumanEval is (29.27 - 26.83) / 26.83 = +9.09%.
+
+## Base Models
+
+| Method | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
+|--------|----------|-----|-----------|------|------|------|-------|--------|---------|
+| - | Base | - | 17.07 | 27.80 | 52.18 | 0.70 | 4.20 | 25.10 | - |
+| - | Code | - | 17.07 | 31.60 | 52.91 | 6.00 | 24.10 | 26.25 | - |
+| - | Instruction Tuned | - | **26.83** | **34.80** | **53.41** | 7.50 | 43.40 | **35.67** | - |
+| - | Math | - | 15.24 | 27.60 | 51.89 | **13.10** | **59.10** | 21.58 | - |
+
+## Merged Models
+
+### DARE Task Arithmetic
+
+| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
+|----------|-----|-----------|------|------|------|-------|--------|---------|
+| Code + Instruction Tuned | No | 26.83 | 34.40 | 53.53 | 8.40 | 45.80 | 33.42 | 0.27 |
+| | Yes | 29.27 (+9.09%) | 36.00 (+4.65%) | 54.18 (+1.21%) | 8.30 (-1.19%) | 46.20 (+0.87%) | 32.00 (-4.25%) | 0.28 (+2.49%) |
+| Code + Math | No | 16.46 | 28.60 | 51.96 | 15.10 | 64.70 | 22.02 | 0.23 |
+| | Yes | 15.85 (-3.71%) | 29.60 (+3.50%) | 52.50 (+1.04%) | 14.80 (-1.99%) | 64.10 (-0.93%) | 21.91 (-0.50%) | 0.23 (-1.65%) |
+| Instruction Tuned + Math | No | 5.49 | 19.00 | 51.08 | 9.80 | 54.30 | 32.35 | 0.18 |
+| | Yes | 12.20 (+122.22%) | 28.20 (+48.42%) | 52.72 (+3.21%) | 12.90 (+31.63%) | 62.20 (+14.55%) | 31.96 (-1.21%) | 0.26 (+40.71%) |
+| Code + Instruction Tuned + Math | No | 11.59 | 19.60 | 50.89 | 9.10 | 49.70 | 33.20 | 0.16 |
+| | Yes | 15.85 (+36.76%) | 27.00 (+37.76%) | 52.59 (+3.34%) | 12.20 (+34.07%) | 60.70 (+22.13%) | 33.59 (+1.17%) | 0.23 (+40.59%) |
+
+### DARE Ties
+
+| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
+|----------|-----|-----------|------|------|------|-------|--------|---------|
+| Code + Instruction Tuned | No | 30.49 | 35.20 | 53.40 | 8.60 | 46.20 | 33.28 | 0.28 |
+| | Yes | **30.49** | **36.80** (+4.55%) | 54.02 (+1.16%) | 8.60 | 47.20 (+2.16%) | 33.16 (-0.36%) | 0.29 (+1.63%) |
+| Code + Math | No | 17.07 | 27.40 | 51.92 | 14.90 | 63.60 | 22.53 | 0.23 |
+| | Yes | 17.68 (+3.57%) | 29.00 (+5.84%) | 52.61 (+1.33%) | 15.20 (+2.01%) | 63.90 (+0.47%) | 21.10 (-6.35%) | 0.24 (+4.00%) |
+| Instruction Tuned + Math | No | 8.54 | 23.80 | 51.39 | 9.20 | 54.10 | 33.89 | 0.20 |
+| | Yes | 15.85 (+85.60%) | 30.20 (+26.89%) | 52.89 (+2.92%) | 11.60 (+26.09%) | 57.80 (+6.84%) | 35.63 (+5.13%) | 0.26 (+31.22%) |
+| Code + Instruction Tuned + Math | No | 13.41 | 21.20 | 51.15 | 8.70 | 51.50 | 35.75 | 0.17 |
+| | Yes | 19.51 (+45.49%) | 28.60 (+34.91%) | 52.63 (+2.89%) | 11.60 (+33.33%) | 57.00 (+10.68%) | **36.20** (+1.26%) | 0.24 (+41.28%) |
+
+### Task Arithmetic
+
+| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
+|----------|-----|-----------|------|------|------|-------|--------|---------|
+| Code + Instruction Tuned | No | 29.27 | 33.80 | 53.44 | 8.60 | 47.10 | 31.60 | 0.28 |
+| | Yes | 29.88 (+2.08%) | 35.80 (+5.92%) | 54.12 (+1.27%) | 7.80 (-9.30%) | 46.60 (-1.06%) | 32.01 (+1.30%) | 0.28 (+0.61%) |
+| Code + Math | No | 18.29 | 28.60 | 52.10 | 15.00 | 64.70 | 21.92 | 0.24 |
+| | Yes | 17.68 (-3.34%) | 29.20 (+2.10%) | 52.52 (+0.81%) | 14.60 (-2.67%) | 64.50 (-0.31%) | 21.54 (-1.73%) | 0.24 (-2.65%) |
+| Instruction Tuned + Math | No | 4.27 | 20.20 | 51.50 | 10.00 | 54.20 | 31.31 | 0.18 |
+| | Yes | 8.54 (+100.00%) | 26.40 (+30.69%) | 52.83 (+2.58%) | 12.80 (+28.00%) | 61.30 (+13.10%) | 32.62 (+4.18%) | 0.24 (+34.52%) |
+| Code + Instruction Tuned + Math | No | 11.59 | 19.60 | 51.20 | 9.00 | 52.70 | 32.87 | 0.16 |
+| | Yes | 15.24 (+31.49%) | 27.40 (+39.80%) | 52.63 (+2.79%) | 12.00 (+33.33%) | 58.10 (+10.25%) | 33.91 (+3.16%) | 0.22 (+31.97%) |
+
+### Ties Merging
+
+| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
+|----------|-----|-----------|------|------|------|-------|--------|---------|
+| Code + Instruction Tuned | No | 16.46 | 23.60 | 52.70 | 2.70 | 5.40 | 24.48 | 0.00 |
+| | Yes | 15.24 (-7.41%) | 24.20 (+2.54%) | 53.15 (+0.85%) | 2.60 (-3.70%) | 5.20 (-3.70%) | 22.87 (-6.58%) | 0.05 (+inf%) |
+| Code + Math | No | 15.85 | 26.80 | 51.86 | 14.30 | 62.60 | 21.63 | 0.20 |
+| | Yes | 15.85 | 28.60 (+6.72%) | 52.29 (+0.83%) | **15.30** (+6.99%) | 63.80 (+1.92%) | 22.64 (+4.67%) | 0.23 (+13.55%) |
+| Instruction Tuned + Math | No | 28.05 | 34.60 | 54.45 | 8.70 | 44.70 | 34.04 | 0.23 |
+| | Yes | 27.44 (-2.17%) | 35.00 (+1.16%) | 54.74 (+0.53%) | 9.30 (+6.90%) | 46.10 (+3.13%) | 34.51 (+1.38%) | 0.25 (+6.38%) |
+| Code + Instruction Tuned + Math | No | 21.34 | 29.20 | 53.97 | 6.30 | 29.20 | 26.95 | 0.11 |
+| | Yes | 20.73 (-2.86%) | 29.20 | 54.46 (+0.91%) | 5.70 (-9.52%) | 23.70 (-18.84%) | 25.98 (-3.60%) | 0.11 (+4.33%) |
+
+### WIDEN
+
+| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
+|----------|-----|-----------|------|------|------|-------|--------|---------|
+| Code + Instruction Tuned | No | 26.22 | 35.60 | 54.90 | 8.30 | 45.00 | 30.42 | 0.27 |
+| | Yes | 25.61 (-2.33%) | 34.60 (-2.81%) | 54.97 (+0.13%) | 8.20 (-1.20%) | 44.10 (-2.00%) | 31.60 (+3.88%) | 0.26 (-0.93%) |
+| Code + Math | No | 17.07 | 29.40 | 53.35 | 14.20 | 64.40 | 24.02 | 0.24 |
+| | Yes | 17.07 | 29.60 (+0.68%) | 53.36 (+0.02%) | 14.30 (+0.70%) | 62.20 (-3.42%) | 23.95 (-0.29%) | 0.24 (-1.22%) |
+| Instruction Tuned + Math | No | 24.39 | 30.40 | 54.20 | 14.60 | 66.00 | 30.82 | 0.30 |
+| | Yes | 23.78 (-2.50%) | 32.00 (+5.26%) | 54.69 (+0.90%) | 15.10 (+3.42%) | **68.20** (+3.33%) | 31.23 (+1.33%) | **0.31** (+2.54%) |
+| Code + Instruction Tuned + Math | No | 25.00 | 33.20 | 54.58 | 13.50 | 64.20 | 31.44 | 0.29 |
+| | Yes | 26.83 (+7.32%) | 32.80 (-1.20%) | **54.98** (+0.73%) | 14.40 (+6.67%) | 64.00 (-0.31%) | 32.82 (+4.39%) | 0.30 (+4.70%) |
155
+
156
+ # Usage
157
+ You can re-deo the experiments we have here using the provided code. Below we detail how to replicate the experiments.
158
+
159
+ ## Merging Models
160
+ If you wish to merge the models yourself instead of using the provided checkpoints you can do so with the `merge.py` script provided. For example to perform DARE Ties merging on the Code, Math and Instruction Tuned models you can run:
161
+
162
+ ```bash
163
+ python merge.py --method dare_ties --base_model unsloth/llama-2-13b --models_to_merge WizardLMTeam/WizardLM-13B-V1.2,vanillaOVO/WizardMath-13B-V1.0,layoric/llama-2-13b-code-alpaca --save_path ./DARE_TIES_InstructMathCode
164
+ ```
165
+
166
+ ## Evaluating Models on Benchmarks
167
+ Once you have the checkpoints you want to test you can run the `evaluate_model.py` script to run the benchamrks on the model. For example to run the benchmarks on the model merged above you can run:
168
+
169
+
170
+ ```bash
171
+ python evaluate_model.py --model ./DARE_TIES_InstructMathCode
172
+ ```
173
+
174
+ or if you wanted to use the provided checkpoints:
175
+
176
+ ```bash
177
+ python evaluate_model.py --model ahn1376/DARETies___Code-Math-Instruction_Tuned
178
+ ```
179
+
180
+ ## Applying AIM to A Merged Model
181
+ If you want to apply AIM to any merged model you will need to provide the merged checkpoint as well as the base model checkpoint. The only hyper-parameter in AIM is $\omega$, which we recommend to be set between $0.2-0.6$ we set this to $0.4$ for the experiments in our paper, but in some cases lower values (more relaxation) will yeild better results. Below is how you can apply AIM to the checkpoint the code above makes:
182
+
183
+ ```bash
184
+ python performAIM.py --merged_model ./DARE_TIES_InstructMathCode --pretrained_model_name unsloth/llama-2-13b --omega 0.4 --save_path ./DARE_TIES_AIM_InstructMathCode
185
+ ```
186
+
187
+
188
+ # Summary of Findings
189
+ We find that in basically all merging methods we tested applying AIM improves performance and pushed the pareto front of the resulting model population and achieves the highest scrores in benchmarks. The figure below shows how with decreasing $\omega$ (more AIM relaxation) leads to further improvements in some models (HV gain is the hypervolume gained by adding the model to the population models used for merging (more is better)):
190
+
191
+ <img width="600px" alt="Screenshot 2025-02-04 at 10 15 38 AM" src="https://github.com/user-attachments/assets/5cd5119e-a292-45d4-972f-b2dd6febf6f8" />
192
+
193
+ We can observe this better by visualizing some of the pareto fronts for different model populations:
194
+
195
+ <img width="100%" alt="Screenshot 2025-02-04 at 10 22 25 AM" src="https://github.com/user-attachments/assets/5d88a71e-16ca-4f71-84f7-6e8de96ea69a" />
196
+
197
+ Overall the results of our experiments are as follows for the different tests:
198
+
199
+ ## Base Models
200
+
201
+ | Method | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
202
+ |--------|----------|-----|-----------|------|------|------|-------|---------|----------|
203
+ | - | Base | - | 17.07 | 27.80 | 52.18 | 0.70 | 4.20 | 25.10 | - |
204
+ | - | Code | - | 17.07 | 31.60 | 52.91 | 6.00 | 24.10 | 26.25 | - |
205
+ | - | Instruction Tuned | - | **26.83** | **34.80** | **53.41** | 7.50 | 43.40 | **35.67** | - |
206
+ | - | Math | - | 15.24 | 27.60 | 51.89 | **13.10** | **59.10** | 21.58 | - |
207
+
208
+ ## Merged Models
209
+
210
+ ### DARE Task Arithmetic
211
+
212
+ | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
213
+ |----------|-----|-----------|------|------|------|-------|---------|----------|
214
+ | Code + Instruction Tuned | No | 26.83 | 34.40 | 53.53 | 8.40 | 45.80 | 33.42 | 0.27 |
215
+ | | Yes | 29.27 (+9.09%) | 36.00 (+4.65%) | 54.18 (+1.21%) | 8.30 (-1.19%) | 46.20 (+0.87%) | 32.00 (-4.25%) | 0.28 (+2.49%) |
216
+ | Code + Math | No | 16.46 | 28.60 | 51.96 | 15.10 | 64.70 | 22.02 | 0.23 |
217
+ | | Yes | 15.85 (-3.71%) | 29.60 (+3.50%) | 52.50 (+1.04%) | 14.80 (-1.99%) | 64.10 (-0.93%) | 21.91 (-0.50%) | 0.23 (-1.65%) |
218
+ | Instruction Tuned + Math | No | 5.49 | 19.00 | 51.08 | 9.80 | 54.30 | 32.35 | 0.18 |
219
+ | | Yes | 12.20 (+122.22%) | 28.20 (+48.42%) | 52.72 (+3.21%) | 12.90 (+31.63%) | 62.20 (+14.55%) | 31.96 (-1.21%) | 0.26 (+40.71%) |
220
+ | Code + Instruction Tuned + Math | No | 11.59 | 19.60 | 50.89 | 9.10 | 49.70 | 33.20 | 0.16 |
221
+ | | Yes | 15.85 (+36.76%) | 27.00 (+37.76%) | 52.59 (+3.34%) | 12.20 (+34.07%) | 60.70 (+22.13%) | 33.59 (+1.17%) | 0.23 (+40.59%) |
222
+
223
+ ### DARE Ties
224
+
225
+ | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
226
+ |----------|-----|-----------|------|------|------|-------|---------|----------|
227
+ | Code + Instruction Tuned | No | 30.49 | 35.20 | 53.40 | 8.60 | 46.20 | 33.28 | 0.28 |
228
+ | | Yes | **30.49** | **36.80** (+4.55%) | 54.02 (+1.16%) | 8.60 | 47.20 (+2.16%) | 33.16 (-0.36%) | 0.29 (+1.63%) |
229
+ | Code + Math | No | 17.07 | 27.40 | 51.92 | 14.90 | 63.60 | 22.53 | 0.23 |
230
+ | | Yes | 17.68 (+3.57%) | 29.00 (+5.84%) | 52.61 (+1.33%) | 15.20 (+2.01%) | 63.90 (+0.47%) | 21.10 (-6.35%) | 0.24 (+4.00%) |
231
+ | Instruction Tuned + Math | No | 8.54 | 23.80 | 51.39 | 9.20 | 54.10 | 33.89 | 0.20 |
232
+ | | Yes | 15.85 (+85.60%) | 30.20 (+26.89%) | 52.89 (+2.92%) | 11.60 (+26.09%) | 57.80 (+6.84%) | 35.63 (+5.13%) | 0.26 (+31.22%) |
233
+ | Code + Instruction Tuned + Math | No | 13.41 | 21.20 | 51.15 | 8.70 | 51.50 | 35.75 | 0.17 |
234
+ | | Yes | 19.51 (+45.49%) | 28.60 (+34.91%) | 52.63 (+2.89%) | 11.60 (+33.33%) | 57.00 (+10.68%) | **36.20** (+1.26%) | 0.24 (+41.28%) |
235
+
236
+ ### Task Arithmetic
237
+
238
+ | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
239
+ |----------|-----|-----------|------|------|------|-------|---------|----------|
240
+ | Code + Instruction Tuned | No | 29.27 | 33.80 | 53.44 | 8.60 | 47.10 | 31.60 | 0.28 |
241
+ | | Yes | 29.88 (+2.08%) | 35.80 (+5.92%) | 54.12 (+1.27%) | 7.80 (-9.30%) | 46.60 (-1.06%) | 32.01 (+1.30%) | 0.28 (+0.61%) |
242
+ | Code + Math | No | 18.29 | 28.60 | 52.10 | 15.00 | 64.70 | 21.92 | 0.24 |
243
+ | | Yes | 17.68 (-3.34%) | 29.20 (+2.10%) | 52.52 (+0.81%) | 14.60 (-2.67%) | 64.50 (-0.31%) | 21.54 (-1.73%) | 0.24 (-2.65%) |
244
+ | Instruction Tuned + Math | No | 4.27 | 20.20 | 51.50 | 10.00 | 54.20 | 31.31 | 0.18 |
245
+ | | Yes | 8.54 (+100.00%) | 26.40 (+30.69%) | 52.83 (+2.58%) | 12.80 (+28.00%) | 61.30 (+13.10%) | 32.62 (+4.18%) | 0.24 (+34.52%) |
246
+ | Code + Instruction Tuned + Math | No | 11.59 | 19.60 | 51.20 | 9.00 | 52.70 | 32.87 | 0.16 |
247
+ | | Yes | 15.24 (+31.49%) | 27.40 (+39.80%) | 52.63 (+2.79%) | 12.00 (+33.33%) | 58.10 (+10.25%) | 33.91 (+3.16%) | 0.22 (+31.97%) |
248
+
249
+ ### Ties Merging
250
+
251
+ | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
252
+ |----------|-----|-----------|------|------|------|-------|---------|----------|
253
+ | Code + Instruction Tuned | No | 16.46 | 23.60 | 52.70 | 2.70 | 5.40 | 24.48 | 0.00 |
254
+ | | Yes | 15.24 (-7.41%) | 24.20 (+2.54%) | 53.15 (+0.85%) | 2.60 (-3.70%) | 5.20 (-3.70%) | 22.87 (-6.58%) | 0.05 (+inf%) |
255
+ | Code + Math | No | 15.85 | 26.80 | 51.86 | 14.30 | 62.60 | 21.63 | 0.20 |
256
+ | | Yes | 15.85 | 28.60 (+6.72%) | 52.29 (+0.83%) | **15.30** (+6.99%) |