Paper: [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with models/dolphin-2.9.3-mistral-nemo-12b as the base.
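Model Stock estimates a single interpolation ratio from the geometry of the fine-tuned weights: the more closely the task vectors (fine-tuned weights minus base weights) point in the same direction, the more weight the average of the fine-tuned models receives relative to the base. The sketch below illustrates this idea for one weight tensor; it is not mergekit's implementation, and the function name and per-tensor pairwise averaging are illustrative assumptions.

```python
import torch


def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Per-tensor sketch of the Model Stock merge (arXiv:2403.19522).

    Interpolates between the base weights and the average of k >= 2
    fine-tuned weights, with the ratio derived from the mean pairwise
    cosine similarity of the task vectors (deltas from the base).
    """
    k = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]

    # Mean pairwise cosine similarity between task vectors.
    cos_sum, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            cos_sum += torch.nn.functional.cosine_similarity(
                deltas[i], deltas[j], dim=0
            ).item()
            pairs += 1
    cos_theta = cos_sum / pairs

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

Intuitively, when the fine-tuned models agree (cos θ near 1), t approaches 1 and the merge is close to their plain average; when they disagree, the result is pulled back toward the base.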
The following models were included in the merge:

* models/Astra-v1-12B
* models/Chronos-Gold-12B-1.0
* models/Dans-PersonalityEngine-V1.1.0-12b
* models/Eleusis-12B
* models/Muse-12B
* models/Pantheon-RP-1.6-12b-Nemo
* models/Pygmalion-3-12B
* models/Wayfarer-2-12B
* models/writing-roleplay-20k-context-nemo-12b-v1.0
* models/Zinakha-12B
The following YAML configuration was used to produce this model:
```yaml
base_model: models/dolphin-2.9.3-mistral-nemo-12b
dtype: bfloat16
merge_method: model_stock
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 40]
            model: models/Astra-v1-12B
          - layer_range: [0, 40]
            model: models/Chronos-Gold-12B-1.0
          - layer_range: [0, 40]
            model: models/dolphin-2.9.3-mistral-nemo-12b
          - layer_range: [0, 40]
            model: models/Dans-PersonalityEngine-V1.1.0-12b
          - layer_range: [0, 40]
            model: models/Eleusis-12B
          - layer_range: [0, 40]
            model: models/Muse-12B
          - layer_range: [0, 40]
            model: models/Pantheon-RP-1.6-12b-Nemo
          - layer_range: [0, 40]
            model: models/Pygmalion-3-12B
          - layer_range: [0, 40]
            model: models/Wayfarer-2-12B
          - layer_range: [0, 40]
            model: models/writing-roleplay-20k-context-nemo-12b-v1.0
          - layer_range: [0, 40]
            model: models/Zinakha-12B
parameters:
  normalize: 1.0
tokenizer_source: models/Chronos-Gold-12B-1.0
```
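A configuration like this is executed with mergekit's `mergekit-yaml` command (for example, `mergekit-yaml config.yaml ./merged`). Assuming the merge output is written to a local directory (`./merged` below is a placeholder path), the result loads like any other Transformers checkpoint; the dtype matches the `bfloat16` setting above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged" is a placeholder for the merge output directory.
tokenizer = AutoTokenizer.from_pretrained("./merged")
model = AutoModelForCausalLM.from_pretrained("./merged", torch_dtype=torch.bfloat16)

prompt = "Hello"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```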