Paper: [Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482) (arXiv:2203.05482)
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
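For intuition, the linear method computes a weighted average of corresponding weight tensors across the source models, rescaling the weights to sum to one when `normalize` is set. The snippet below is a minimal illustrative sketch of that computation, not mergekit's actual implementation; the checkpoint paths in the usage comment are placeholders.

```python
# Minimal sketch of a linear weight merge ("model soup"); illustrative only,
# not mergekit's implementation.
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Return a state dict whose tensors are the weighted average of the inputs."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]  # rescale weights to sum to 1
    merged = {}
    for name in state_dicts[0]:
        # Accumulate the weighted sum of the matching tensor from each model.
        merged[name] = sum(w * sd[name].to(torch.float32)
                           for sd, w in zip(state_dicts, weights))
    return merged

# Hypothetical usage: average two fine-tuned checkpoints with equal weight.
# soup = linear_merge([torch.load("model_a.bin"), torch.load("model_b.bin")], [1.0, 1.0])
```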
The following models were included in the merge:
* [Sakalti/light-7b-beta](https://huggingface.co/Sakalti/light-7b-beta)
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: linear
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 14]
    model: Sakalti/light-7b-beta
    parameters:
      weight: 1.0
  - layer_range: [14, 28]
    model: Sakalti/light-7b-beta
    parameters:
      weight: 1.0
```
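With `normalize: 1.0`, the per-source weights are rescaled to sum to one before averaging. Assuming mergekit is installed (`pip install mergekit`), a config like the one above is typically applied with the `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yaml ./merged`. The resulting directory can then be loaded like any Hugging Face checkpoint; a brief sketch, where `./merged` is a placeholder for the local output path:

```python
# Sketch of loading and sampling from the merged model; "./merged" is a
# placeholder for the merge output directory, not a published repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./merged")
model = AutoModelForCausalLM.from_pretrained("./merged", torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```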