Paper: [Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482)
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
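Linear merging is a weighted average of the corresponding parameter tensors across the input models, following the model-soups recipe cited above. As a rough illustration, here is a toy sketch of the idea (not mergekit's actual implementation; `linear_merge` is a hypothetical helper):

```python
import torch

def linear_merge(state_dicts, weights, dtype=torch.bfloat16):
    """Toy weighted average of parameter tensors, illustrating the linear method.

    mergekit's linear method normalizes weights to sum to 1 by default
    (normalize: true); this sketch does the same.
    """
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        # Accumulate in float32 for numerical stability, then cast to the target dtype.
        acc = torch.zeros_like(state_dicts[0][key], dtype=torch.float32)
        for sd, w in zip(state_dicts, weights):
            acc += (w / total) * sd[key].to(torch.float32)
        merged[key] = acc.to(dtype)
    return merged

# With this card's weights (1.0 and 0.0) the average reduces to the first
# model's tensors exactly, so the merged weights match google/gemma-3-4b-pt.
a = {"w": torch.ones(2, 2)}
b = {"w": torch.full((2, 2), 3.0)}
print(linear_merge([a, b], [1.0, 0.0], dtype=torch.float32)["w"])  # tensor of ones
```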
The following models were included in the merge:
* [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt)
* [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it)
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
dtype: bfloat16
models:
  - model: google/gemma-3-4b-pt
    parameters:
      weight: 1.0
  - model: google/gemma-3-4b-it
    parameters:
      weight: 0.0
tokenizer:
  source: google/gemma-3-4b-it
  tokens:
    <start_of_turn>:
      source: google/gemma-3-4b-it
      force: true
    <end_of_turn>:
      source: google/gemma-3-4b-it
      force: true
```
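With weights of 1.0 and 0.0, the merged parameters are numerically identical to google/gemma-3-4b-pt; the practical effect of the merge is the `tokenizer` section, which takes the vocabulary from the instruction-tuned model and force-copies its embeddings for the `<start_of_turn>` and `<end_of_turn>` chat-turn tokens. To reproduce the merge, save the configuration as `config.yaml` and run `mergekit-yaml config.yaml ./output-model-directory`.

Below is a minimal sketch of chat-style generation with the merged model. It assumes a recent transformers release whose text-generation pipeline accepts chat messages; `./output-model-directory` is the hypothetical output path from the command above:

```python
from transformers import pipeline

# Load the merged model; the path is the hypothetical mergekit output directory.
pipe = pipeline(
    "text-generation",
    model="./output-model-directory",
    torch_dtype="bfloat16",
    device_map="auto",
)

# The forced <start_of_turn>/<end_of_turn> tokens let the instruction-tuned
# chat template frame the prompt even though the weights come from the pt model.
messages = [{"role": "user", "content": "Summarize model soups in one sentence."}]
result = pipe(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```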