Model Stock: All we need is just a few fine-tuned models
Paper: [arXiv:2403.19522](https://arxiv.org/abs/2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with nbeerbower/Llama-3.1-Nemotron-lorablated-70B as the base.
The following models were included in the merge:

* NexesMess/Llama_3.x_70b_Tess_Dolphin_128K_v1.02
* mlabonne/Hermes-3-Llama-3.1-70B-lorablated
* migtissera/Tess-3-Llama-3.1-70B
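For context, Model Stock averages the fine-tuned weights and then interpolates back toward the base weights layer by layer, with the interpolation ratio derived from the angle between the models' task vectors. Below is a minimal sketch of that per-layer rule, using the ratio t = k·cosθ / (1 + (k−1)·cosθ) from the paper; estimating cosθ as the mean pairwise cosine similarity is an assumption here, and mergekit's actual implementation may differ in detail:

```python
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """One layer of a Model Stock-style merge: interpolate between the average
    of the fine-tuned weights and the base weights, with the ratio t driven by
    the angle between the task vectors (w_i - w_0)."""
    k = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]  # task vectors
    # Estimate cos(theta) as the mean pairwise cosine similarity (an assumption;
    # the paper observes the angle is roughly constant per layer).
    pairs = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(pairs).mean()
    # Interpolation ratio from the paper: t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base

# Tiny usage example with random tensors standing in for one layer's weights:
base = torch.randn(16, 16)
models = [base + 0.1 * torch.randn(16, 16) for _ in range(4)]
merged = model_stock_layer(base, models)
```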
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
    parameters:
      weight: 1.0
  - model: NexesMess/Llama_3.x_70b_Tess_Dolphin_128K_v1.02
    parameters:
      weight: 1.0
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
  smooth: false
  allow_negative_weights: false
chat_template: auto
tokenizer:
  source: union
```
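Assuming the configuration above is saved locally (the filename model_stock.yaml and the output path below are placeholders for this example), a merge like this one can typically be reproduced with mergekit's Python API; exact option names may vary between mergekit versions, so treat this as a sketch rather than a definitive recipe:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "model_stock.yaml"  # the YAML configuration shown above
OUTPUT_PATH = "./merged-model"    # where the merged weights will be written

# Parse and validate the merge configuration.
with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; this downloads the constituent models if they are not cached.
run_merge(
    config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # emit the union tokenizer alongside the weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The same config can also be run from the command line with mergekit's mergekit-yaml entry point.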