This is a merge of pre-trained language models created using mergekit.
An experimental merge of the legendary L3-8B-Stheno with Fizzarolli's Rosier. The aim is to improve Stheno's "ball-rolling" capabilities and reduce its awkwardness with more niche content. For a first attempt, I'm surprised at how well it's doing so far, but given that this is literally my first LLM project ever, you should probably temper your expectations.
This model was merged using the linear merge method (the weight-averaging approach described in the Model Soups paper, arXiv:2203.05482).
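For intuition, a linear merge is just a weighted average of the two models' parameter tensors. The sketch below is illustrative only, not mergekit's actual implementation; the function name and the use of plain PyTorch state dicts are assumptions for the example.

```python
import torch

def linear_merge(state_dicts, weights, normalize=True, out_dtype=torch.float16):
    """Weighted average of parameter tensors across models.

    Hypothetical sketch of what `merge_method: linear` computes;
    the real mergekit handles sharded checkpoints, tokenizers, etc.
    """
    if normalize:
        # `normalize: true` rescales the weights to sum to 1,
        # so 0.5 / 0.5 stays an even 50/50 average.
        total = sum(weights)
        weights = [w / total for w in weights]

    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32 for precision, then store in float16
        # to match `dtype: float16` in the config below.
        acc = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(out_dtype)
    return merged
```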
The following models were included in the merge:

* Sao10K/L3-8B-Stheno-v3.2
* Fizzarolli/L3-8b-Rosier-v1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      weight: 0.5
  - model: Fizzarolli/L3-8b-Rosier-v1
    parameters:
      weight: 0.5
merge_method: linear
parameters:
  normalize: true
dtype: float16
```
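If mergekit is installed, a configuration like this can generally be applied with its `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./output-model-directory` (the config filename and output path here are placeholders); check the mergekit documentation for the exact invocation and available flags.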