This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with meta-llama/Meta-Llama-3-8B-Instruct as the base.
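For intuition, task arithmetic treats each fine-tuned model as a task vector (its weight delta from the base) and adds a weighted sum of those vectors back onto the base. Below is a minimal sketch of the idea using hypothetical state dicts; it is not mergekit's actual implementation, which also handles tokenizer alignment, gradients over layers, and optional normalization.

```python
import torch

def task_arithmetic_merge(base, finetuned, weights):
    """Merge via task vectors: merged = base + sum_i w_i * (model_i - base).

    base:      state dict of the base model
    finetuned: list of state dicts, one per fine-tuned model
    weights:   scalar weight w_i for each fine-tuned model
    """
    merged = {}
    for name, base_param in base.items():
        # Accumulate the weighted task vectors in float32 for stability.
        delta = torch.zeros_like(base_param, dtype=torch.float32)
        for model, w in zip(finetuned, weights):
            delta += w * (model[name].float() - base_param.float())
        # Add the combined delta back onto the base, keeping the original dtype.
        merged[name] = (base_param.float() + delta).to(base_param.dtype)
    return merged
```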
The following models were included in the merge:
* ChaoticNeutrals/Templar_v1_8B
* ChaoticNeutrals/Hathor_RP-v.01-L3-8B
* Sao10K/L3-8B-Stheno-v3.2
* ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
* ChaoticNeutrals/Domain-Fusion-L3-8B
The following YAML configuration was used to produce this model:
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: -0.5
  - model: ChaoticNeutrals/Templar_v1_8B
    parameters:
      weight: [0.5, 0.125, 0.125, -0.125, -0.125]
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      weight: [0.125, 0.5, -0.125, -0.125, 0.125]
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      weight: [-0.125, 0.125, 0.5, 0.125, -0.125]
  - model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
    parameters:
      weight: [0.125, -0.125, -0.125, 0.5, 0.125]
  - model: ChaoticNeutrals/Domain-Fusion-L3-8B
    parameters:
      weight: [-0.125, -0.125, 0.125, 0.125, 0.5]
merge_method: task_arithmetic
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
dtype: bfloat16
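A note on the list-valued weights: in mergekit, a list for a parameter is treated as a gradient of anchor values interpolated across the model's layer depth, so each donor's 0.5 anchor places its strongest influence in a different band of layers. The sketch below approximates that expansion; the `expand_gradient` helper is hypothetical, and `num_layers=32` assumes the Llama-3-8B layer count.

```python
import numpy as np

def expand_gradient(anchors, num_layers=32):
    """Linearly interpolate anchor weights over layer depth.

    Rough approximation of mergekit's expansion of a list-valued
    parameter into one weight per transformer layer.
    """
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

# Templar_v1_8B's schedule: strongly positive in the early layers,
# fading to slightly negative near the output layers.
print(expand_gradient([0.5, 0.125, 0.125, -0.125, -0.125]).round(3))
```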