Based on the paper [Editing Models with Task Arithmetic](https://huggingface.co/papers/2212.04089).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the task arithmetic merge method using jeiku/Rosa_v1_3B as a base.
The following models were included in the merge:

* jeiku/Rosa_v1_3B+jeiku/No_Robots_Alpaca_StableLM
* jeiku/Rosa_v1_3B+jeiku/Toxic_DPO_StableLM
* jeiku/Rosa_v1_3B+jeiku/Alpaca_128_StableLM
* jeiku/Rosa_v1_3B+jeiku/Everything_v3_128_StableLM
* jeiku/Rosa_v1_3B+jeiku/Futa_Erotica_StableLM
* jeiku/Rosa_v1_3B+jeiku/Gnosis_256_StableLM
* jeiku/Rosa_v1_3B+jeiku/Humiliation_StableLM
* jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_128_StableLM
* jeiku/Rosa_v1_3B+jeiku/PIPPA_128_StableLM
* jeiku/Rosa_v1_3B+jeiku/LimaRP_StableLM
* jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_RP_128_StableLM
* jeiku/Rosa_v1_3B+jeiku/Bluemoon_cleaned_StableLM
* jeiku/Rosa_v1_3B+jeiku/RPGPT_StableLM
The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
base_model: jeiku/Rosa_v1_3B
parameters:
  normalize: true
models:
  - model: jeiku/Rosa_v1_3B+jeiku/No_Robots_Alpaca_StableLM
    parameters:
      weight: 0.5
  - model: jeiku/Rosa_v1_3B+jeiku/Toxic_DPO_StableLM
    parameters:
      weight: 0.5
  - model: jeiku/Rosa_v1_3B+jeiku/Alpaca_128_StableLM
    parameters:
      weight: 0.4
  - model: jeiku/Rosa_v1_3B+jeiku/Everything_v3_128_StableLM
    parameters:
      weight: 0.4
  - model: jeiku/Rosa_v1_3B+jeiku/Futa_Erotica_StableLM
    parameters:
      weight: 1
  - model: jeiku/Rosa_v1_3B+jeiku/Gnosis_256_StableLM
    parameters:
      weight: 1
  - model: jeiku/Rosa_v1_3B+jeiku/Humiliation_StableLM
    parameters:
      weight: 1
  - model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_128_StableLM
    parameters:
      weight: 0.8
  - model: jeiku/Rosa_v1_3B+jeiku/PIPPA_128_StableLM
    parameters:
      weight: 0.4
  - model: jeiku/Rosa_v1_3B+jeiku/LimaRP_StableLM
    parameters:
      weight: 0.7
  - model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_RP_128_StableLM
    parameters:
      weight: 0.6
  - model: jeiku/Rosa_v1_3B+jeiku/Bluemoon_cleaned_StableLM
    parameters:
      weight: 0.8
  - model: jeiku/Rosa_v1_3B+jeiku/RPGPT_StableLM
    parameters:
      weight: 0.4
dtype: float16
```
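Conceptually, task arithmetic forms a "task vector" for each fine-tuned model (its weights minus the base weights), scales each vector by its merge weight, sums them, and adds the result back onto the base. A minimal pure-Python sketch of that idea follows; it is illustrative only, not mergekit's implementation, and the interpretation of `normalize: true` as dividing by the sum of the weights is an assumption:

```python
def task_arithmetic(base, finetuned, weights, normalize=True):
    """Toy task-arithmetic merge over dicts of parameter lists.

    base:      {param_name: [floats]} for the base model
    finetuned: list of dicts with the same keys, one per fine-tuned model
    weights:   per-model scaling factors (the `weight` values in the YAML)
    """
    merged = {}
    for name, base_param in base.items():
        out = []
        for i, b in enumerate(base_param):
            # Task vector component: fine-tuned value minus base value,
            # scaled by that model's merge weight, summed over all models.
            delta = sum(w * (ft[name][i] - b) for ft, w in zip(finetuned, weights))
            if normalize:
                # Assumed meaning of `normalize: true`: rescale by the
                # total weight so large weight sums do not blow up the merge.
                delta /= sum(weights)
            out.append(b + delta)
        merged[name] = out
    return merged
```

For example, merging two models with weights 0.5 each moves every parameter to the weight-averaged offset from the base.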
Available GGUF quantizations: 2-bit, 4-bit, 6-bit, and 16-bit.
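Lower-bit files trade precision for size by storing weights as small integers with a shared per-block scale. A toy sketch of symmetric k-bit block quantization, purely to illustrate the trade-off (the real GGUF formats are more sophisticated):

```python
def quantize_block(values, bits):
    # Symmetric quantization: map floats to signed integers in
    # [-(2**(bits-1) - 1), 2**(bits-1) - 1] using one scale per block.
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax or 1.0
    quants = [round(v / scale) for v in values]
    return scale, quants

def dequantize_block(scale, quants):
    # Reconstruct approximate floats; error is bounded by the scale.
    return [q * scale for q in quants]
```

Fewer bits means a coarser grid of representable values, which is why the 2-bit file is much smaller but less faithful than the 6-bit one.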
Example usage with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jeiku/Filet_3B_GGUF",
    filename="",  # fill in the desired GGUF quantization file from the repo
)

# Chat with the model using OpenAI-style messages.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
```