Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ([arXiv:2203.05482](https://arxiv.org/abs/2203.05482))
This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method, with Qwen/Qwen2.5-0.5B-Instruct as the base.
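Conceptually, a linear merge is just a per-tensor weighted average of the checkpoints. The sketch below is a hypothetical illustration, not mergekit's implementation: it assumes the two checkpoints share identical tensor names and shapes (both are 0.5B Qwen2-architecture models) and that the weights are normalized to sum to 1, so 2.0 and 1.0 become coefficients 2/3 and 1/3.

```python
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model_b = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

weights = [2.0, 1.0]
total = sum(weights)  # normalize so the coefficients sum to 1

merged = model_a.state_dict()
state_b = model_b.state_dict()
with torch.no_grad():
    for name, tensor in merged.items():
        # Per-tensor weighted average: (2.0 * A + 1.0 * B) / 3.0
        merged[name] = (weights[0] * tensor + weights[1] * state_b[name]) / total

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-qwen-0.5b")  # hypothetical output path
```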
The following models were included in the merge:
* [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct)
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-0.5B-Instruct
    parameters:
      weight: 2.0
  - model: Qwen/Qwen2-0.5B-Instruct
    parameters:
      weight: 1.0
merge_method: linear
base_model: Qwen/Qwen2.5-0.5B-Instruct
parameters:
  int8_mask: true
dtype: bfloat16
```
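To reproduce the merge, save the configuration above as, say, `config.yaml` and run mergekit's CLI entry point, `mergekit-yaml config.yaml ./merged-model` (the output directory name is arbitrary). The result loads like any other Hugging Face checkpoint; a minimal usage sketch, assuming the merged weights were written to `./merged-model`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint produced by mergekit.
tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained("./merged-model")

# Qwen instruct models expect their chat template.
messages = [{"role": "user", "content": "Summarize what a model merge is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```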