Based on the paper "Model Stock: All we need is just a few fine-tuned models" (arXiv:2403.19522).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method using kainatq/Kainoverse-7b-v0.1 as a base.
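For each layer, Model Stock interpolates between the average of the fine-tuned weights and the pre-trained base weights, with a ratio derived from the angle between the fine-tuned checkpoints' weight deltas. A minimal NumPy sketch of this rule, assuming the paper's ratio t = k·cosθ / (1 + (k−1)·cosθ) and estimating cosθ as the mean pairwise cosine between deltas (the function name and this estimator are illustrative choices, not mergekit's exact implementation):

```python
import numpy as np

def model_stock_merge(w0, finetuned):
    """Merge fine-tuned weight tensors toward the base via Model Stock-style interpolation."""
    # Deltas of each fine-tuned model from the shared base.
    deltas = [w - w0 for w in finetuned]
    k = len(deltas)
    # Estimate cos(theta) as the mean pairwise cosine similarity between deltas.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_vals.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cos_vals))
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Blend the average fine-tuned weights with the base weights.
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * w0
```

When the fine-tuned deltas agree (cosθ → 1), t → 1 and the merge reduces to plain averaging; when they are near-orthogonal (cosθ → 0), t → 0 and the merge stays close to the base model.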
The following models were included in the merge:
* ResplendentAI/DaturaCookie_7B
* icefog72/IceDrunkenCherryRP-7b
* ChaoticNeutrals/RP_Vision_7B
* Endevor/InfinityRP-v1-7B
* MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp
* icefog72/IceCocoaRP-7b
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: kainatq/Kainoverse-7b-v0.1
parameters:
models:
  - model: ResplendentAI/DaturaCookie_7B
  - model: icefog72/IceDrunkenCherryRP-7b
  - model: ChaoticNeutrals/RP_Vision_7B
  - model: Endevor/InfinityRP-v1-7B
  - model: MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.2-slerp
  - model: icefog72/IceCocoaRP-7b
dtype: bfloat16
```