Paper: [Editing Models with Task Arithmetic](https://arxiv.org/abs/2212.04089)
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Kquant03/Samlagast-7B-bf16")
model = AutoModelForCausalLM.from_pretrained("Kquant03/Samlagast-7B-bf16")
```

This is a merge of pre-trained language models created using mergekit.
This model was merged using the task arithmetic merge method, with [paulml/NeuralOmniBeagleMBX-v3-7B](https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B) as the base.
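Conceptually, task arithmetic represents each fine-tuned model as a task vector, the element-wise difference between its weights and the base model's weights, and merges by adding a weighted sum of those vectors back onto the base. Below is a minimal sketch of that idea on plain state dicts; it is illustrative only, not mergekit's implementation, which additionally handles sharded checkpoints, dtype casting, and options like `int8_mask` in the config below. Treating `normalize` as rescaling the weights to sum to 1 is an assumption.

```python
import torch

def task_arithmetic_merge(base_sd, tuned_sds, weights, normalize=True):
    # Assumed reading of `normalize: true`: rescale weights to sum to 1.
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base in base_sd.items():
        # Task vector for each model: fine-tuned weights minus base weights.
        delta = sum(w * (sd[name] - base) for sd, w in zip(tuned_sds, weights))
        merged[name] = base + delta
    return merged

# Toy check with small tensors standing in for model weights.
base = {"w": torch.zeros(2, 2)}
tuned = [{"w": torch.ones(2, 2)}, {"w": 2 * torch.ones(2, 2)}]
print(task_arithmetic_merge(base, tuned, weights=[1, 1])["w"])  # all entries 1.5
```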
The following models were included in the merge:

* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [FelixChao/Faraday-7B](https://huggingface.co/FelixChao/Faraday-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      weight: 1
  - model: FelixChao/Faraday-7B
    parameters:
      weight: 1
  - model: flemmingmiguel/MBX-7B-v3
    parameters:
      weight: 1
  - model: paulml/NeuralOmniBeagleMBX-v3-7B
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: paulml/NeuralOmniBeagleMBX-v3-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
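To reproduce the merge, save this configuration to a file and run it through mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./Samlagast-7B` (the output path is illustrative; available flags for GPU use and dtype vary by mergekit version).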
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Kquant03/Samlagast-7B-bf16")
```
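For example, a quick smoke test (the prompt and sampling settings are arbitrary choices):

```python
# Generate a short completion; parameters here are illustrative, not tuned.
result = pipe(
    "Briefly explain what model merging is.",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```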