---
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
- mlabonne/Daredevil-7B
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
- mlabonne/Daredevil-7B
---

# WONMSeverusDevilv2-TIES
|
|
WONMSeverusDevilv2-TIES is a TIES merge of the following models, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B](https://huggingface.co/jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B)
* [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B)
|
|
## 🧩 Configuration
|
|
```yaml
models:
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: [1, 0.7, 0.1]
      weight: [0, 0.3, 0.7, 1]
  - model: jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
    parameters:
      density: [1, 0.7, 0.3]
      weight: [0, 0.25, 0.5, 1]
  - model: mlabonne/Daredevil-7B
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: [0.35, 0.65]
        - value: 0
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
  normalize: true
  t:
    - filter: lm_head
      value: [0.55]
    - filter: embed_tokens
      value: [0.7]
    - filter: self_attn
      value: [0.65, 0.35]
    - filter: mlp
      value: [0.35, 0.65]
    - filter: layernorm
      value: [0.4, 0.6]
    - filter: modelnorm
      value: [0.6]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
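The list-valued parameters in this config are mergekit gradients: each list is interpolated linearly across the layer stack, so `density: [1, 0.7, 0.1]`, for example, retains nearly all task-vector parameters in the earliest layers and prunes most of them near the output. To reproduce the merge outside the Colab notebook, mergekit's documented Python entry point can be driven roughly as follows; this is a minimal sketch, assuming `mergekit` is installed and the configuration above is saved as `config.yaml` (both paths are placeholders):

```python
# Sketch: run the TIES merge locally via mergekit's Python API (pip install mergekit).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above into a validated merge config.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./WONMSeverusDevilv2-TIES",         # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=True,              # reduce peak memory while reading checkpoints
    ),
)
```

The `mergekit-yaml` CLI wraps the same entry point, so an equivalent command-line invocation works as well.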
|
|
## 💻 Usage
|
|
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jsfs11/WONMSeverusDevilv2-TIES"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the conversation with the model's chat template, then generate.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
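For more explicit control over tokenization and generation than the pipeline wrapper offers, the model can also be loaded directly through `AutoModelForCausalLM`; a minimal sketch, assuming enough GPU memory for the fp16 weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "jsfs11/WONMSeverusDevilv2-TIES"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Tokenize the chat-formatted prompt and generate a completion.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a large language model?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```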