Related paper: *Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time* ([arXiv:2203.05482](https://arxiv.org/abs/2203.05482))
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Jebadiah/Aria-ruby-v3", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Jebadiah/Aria-ruby-v3", trust_remote_code=True)
```

This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method; a sketch of what that computes follows the configuration below.

The following models were included in the merge:

- Jebadiah/Aria-diamond-v2
- Jebadiah/Tess-coder-ruby-p7

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Jebadiah/Aria-diamond-v2
    parameters:
      weight: 6.6
  - model: Jebadiah/Tess-coder-ruby-p7
    parameters:
      weight: 0.06
merge_method: linear
dtype: float16
```
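For intuition, the linear method is an element-wise weighted average of corresponding parameter tensors, in the spirit of the model soups paper cited above. Below is a minimal sketch of that computation, not mergekit's actual implementation; it assumes both models share an architecture and parameter names, and the default weight normalization is an assumption about mergekit's linear method, not something stated in this card.

```python
# Minimal sketch of a linear weight merge (illustrative only,
# not mergekit's actual implementation).
import torch
from transformers import AutoModelForCausalLM

# Merge weights from the YAML above. Assumption: mergekit's linear
# method normalizes weights by default, so 6.6 / 0.06 become ~0.991 / ~0.009.
raw_weights = {
    "Jebadiah/Aria-diamond-v2": 6.6,
    "Jebadiah/Tess-coder-ruby-p7": 0.06,
}
total = sum(raw_weights.values())
coeffs = {name: w / total for name, w in raw_weights.items()}

# Load both endpoints in float16, matching the dtype in the YAML.
state_dicts = {
    name: AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.float16, trust_remote_code=True
    ).state_dict()
    for name in raw_weights
}

# Weighted average of every parameter tensor:
# theta_merged = sum_i coeff_i * theta_i
merged = {
    key: sum(coeffs[name] * sd[key].float() for name, sd in state_dicts.items())
    .to(torch.float16)
    for key in next(iter(state_dicts.values()))
}
```

With these weights, the normalized coefficients work out to roughly 0.991 and 0.009, so the merge is dominated by Aria-diamond-v2 with a small contribution from Tess-coder-ruby-p7.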
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Jebadiah/Aria-ruby-v3", trust_remote_code=True)
```
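A quick smoke test of the pipeline might look like the following; the prompt and generation settings are illustrative choices, not from the model card:

```python
# Illustrative call; the prompt and max_new_tokens are assumptions.
result = pipe("Write a Ruby method that reverses a string:", max_new_tokens=128)
print(result[0]["generated_text"])
```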