This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the SLERP merge method, with SicariusSicariiStuff/Assistant_Pepe_8B as the base model.
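SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the great-circle arc between them rather than along a straight line, which preserves the magnitude geometry of the weights better than plain averaging. The sketch below illustrates the idea; it is a simplified illustration, not mergekit's exact implementation, and the `slerp` helper is hypothetical:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Flattens both tensors, interpolates along the arc between them, and
    falls back to plain linear interpolation when the vectors are nearly
    parallel and the arc formula is numerically unstable.
    """
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()

    # Angle between the two weight vectors, computed on normalized copies.
    v0_unit = v0_flat / (v0_flat.norm() + eps)
    v1_unit = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(v0_unit @ v1_unit, -1.0, 1.0)
    theta = torch.arccos(dot)

    if theta.abs() < 1e-4:
        # Nearly colinear vectors: ordinary lerp is accurate enough.
        merged = (1.0 - t) * v0_flat + t * v1_flat
    else:
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * v0_flat \
               + (torch.sin(t * theta) / sin_theta) * v1_flat

    return merged.reshape(v0.shape).to(v0.dtype)
```

With `t = 0.5`, as in the configuration below, each merged tensor sits at the midpoint of the arc between the two source tensors.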
The following models were included in the merge:

* EldritchLabs/Cthulhu-8B-v1.4
* SicariusSicariiStuff/Assistant_Pepe_8B
The following YAML configuration was used to produce this model:
```yaml
base_model: SicariusSicariiStuff/Assistant_Pepe_8B
architecture: MistralForCausalLM
merge_method: slerp
dtype: float32        # compute the merge in full precision
out_dtype: bfloat16   # write the merged weights in bfloat16
slices:
  - sources:
      - model: EldritchLabs/Cthulhu-8B-v1.4
        layer_range: [0, 32]
      - model: SicariusSicariiStuff/Assistant_Pepe_8B
        layer_range: [0, 32]
parameters:
  t: 0.5              # interpolation factor: equal weight to both models
tokenizer:
  source: union       # vocabulary merged from both models' tokenizers
#chat_template: auto
```
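A configuration like this can be run either with the `mergekit-yaml` command-line tool or through mergekit's Python API. Below is a sketch using the Python API; the config and output paths are placeholders:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (path is a placeholder).
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",  # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # write the union tokenizer to the output
    ),
)
```

The output directory then contains the merged weights in bfloat16 (per `out_dtype`) plus the union tokenizer, ready to load with `transformers`.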