A simple merge of my different models, then fine-tuned for 500 steps using LoRA on the oasst1 dataset. Mergekit info below:
# stacked_model

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:

- simonko912/oasst2-llama
- simonko912/oasst-big-llama
- simonko912/oasst1-llama

### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  # oasst2-llama repeated 3×
  - sources:
      - model: simonko912/oasst2-llama
        layer_range: [0, 12]
  - sources:
      - model: simonko912/oasst2-llama
        layer_range: [0, 12]
  - sources:
      - model: simonko912/oasst2-llama
        layer_range: [0, 12]
  # oasst-big-llama
  - sources:
      - model: simonko912/oasst-big-llama
        layer_range: [0, 12]
  # oasst1-llama
  - sources:
      - model: simonko912/oasst1-llama
        layer_range: [0, 12]
merge_method: passthrough
dtype: bfloat16
```
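Because the passthrough method simply concatenates the listed slices (no weight averaging), the depth of the merged model is the sum of the slice lengths. A minimal sketch of that arithmetic, assuming `layer_range` is a half-open `[start, end)` interval as in typical mergekit configs:

```python
# Slice definitions from the YAML config above.
# Assumption: layer_range [0, 12] selects layers 0..11, i.e. 12 layers.
slices = [
    ("simonko912/oasst2-llama", (0, 12)),
    ("simonko912/oasst2-llama", (0, 12)),
    ("simonko912/oasst2-llama", (0, 12)),
    ("simonko912/oasst-big-llama", (0, 12)),
    ("simonko912/oasst1-llama", (0, 12)),
]

# Passthrough stacks the slices back to back, so depths add up.
total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 60
```

So the stacked model ends up with 60 transformer layers: five 12-layer blocks laid end to end, three of them copies of the same oasst2-llama slice.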
Model tree for simonko912/oasst-max-v1