Based on the paper "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time" (arXiv:2203.05482).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method, which computes a weighted average of the corresponding parameter tensors of the input models; a sketch of the computation follows the model list below.
The following models were included in the merge:

- mllm-dev/gpt2_f_experiment_0_drug_data_new_run
- mllm-dev/gpt2_f_experiment_1_drug_data_new_run
- mllm-dev/gpt2_f_experiment_2_drug_data_new_run
- mllm-dev/gpt2_f_experiment_3_drug_data_new_run
- mllm-dev/gpt2_f_experiment_4_drug_data_new_run
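Conceptually, with every weight set to 1.0 (as in the configuration below), the linear method reduces to a uniform average of each parameter tensor across the models. Below is a minimal sketch of that computation in plain PyTorch, not mergekit itself; it assumes the five repositories listed above are available on the Hub and share the same architecture:

```python
import torch
from transformers import AutoModelForCausalLM

# The five fine-tuned checkpoints listed above.
names = [f"mllm-dev/gpt2_f_experiment_{i}_drug_data_new_run" for i in range(5)]
state_dicts = [AutoModelForCausalLM.from_pretrained(n).state_dict() for n in names]

# With every weight set to 1.0, the normalized linear merge reduces to a
# plain element-wise mean of each parameter tensor across the models.
averaged = {
    key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    for key in state_dicts[0]
}

# Load the averaged parameters back into one copy of the architecture.
merged = AutoModelForCausalLM.from_pretrained(names[0])
merged.load_state_dict(averaged)
merged.half()  # cast to float16, matching `dtype: float16` in the config
```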
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: linear
slices:
- sources:
  - layer_range: [0, 12]
    model: mllm-dev/gpt2_f_experiment_0_drug_data_new_run
    parameters:
      weight: 1.0
  - layer_range: [0, 12]
    model: mllm-dev/gpt2_f_experiment_1_drug_data_new_run
    parameters:
      weight: 1.0
  - layer_range: [0, 12]
    model: mllm-dev/gpt2_f_experiment_2_drug_data_new_run
    parameters:
      weight: 1.0
  - layer_range: [0, 12]
    model: mllm-dev/gpt2_f_experiment_3_drug_data_new_run
    parameters:
      weight: 1.0
  - layer_range: [0, 12]
    model: mllm-dev/gpt2_f_experiment_4_drug_data_new_run
    parameters:
      weight: 1.0
```
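To reproduce the merge, the configuration above can be saved to a file (config.yml is a hypothetical name) and passed to mergekit, either through the mergekit-yaml command-line tool or through its Python API. A minimal sketch assuming the API surface shown in mergekit's README (MergeConfiguration, MergeOptions, run_merge), which may differ across versions:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result to a placeholder output directory.
run_merge(
    config,
    out_path="./merged-gpt2",
    options=MergeOptions(copy_tokenizer=True),
)
```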