# merge1
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Passthrough merge method, which stacks layer slices taken from the source models without blending their weights.
### Models Merged

The following models were included in the merge:

- NeuraLakeAi/iSA-03-Mini-3B-Hybrid-Preview
- EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2
### Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
dtype: bfloat16
tokenizer_source: EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2
# base_model: huihui-ai/Hermes-3-Llama-3.2-3B-abliterated
slices:
  - sources:
      - model: NeuraLakeAi/iSA-03-Mini-3B-Hybrid-Preview
        layer_range: [0, 16]
        parameters:
          weight: 0.7
  - sources:
      - model: merge
        layer_range: [12, 22]
        parameters:
          weight: 0.65
  - sources:
      - model: EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2
        layer_range: [18, 28]
        parameters:
          weight: 0.8
```
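As a minimal sketch of how the slices above determine the output model's depth, assuming mergekit's half-open `layer_range` convention (`[0, 16]` selects layers 0 through 15), the passthrough merge simply concatenates the selected layer spans in order:

```python
# Hypothetical illustration (not part of mergekit): count the layers the
# passthrough config above would stack into the merged model.
slices = [
    ("NeuraLakeAi/iSA-03-Mini-3B-Hybrid-Preview", (0, 16)),
    ("merge", (12, 22)),
    ("EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2", (18, 28)),
]

# Each half-open range [start, end) contributes end - start layers.
total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 16 + 10 + 10 = 36
```

Note that adjacent slices overlap in their source layer indices (e.g. layers 12-15 appear in both the first and second slice); passthrough keeps both copies, which is why the merged model is deeper than any single 28-layer source.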