---
base_model:
- m-a-p/neo_7b
tags:
- merge
- mergekit
- lazymergekit
- m-a-p/neo_7b
---
# Neo_7b-merge3

Neo_7b-merge3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b)
* [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b)

It is a self-merge that redistributes neo_7b's layers: within each group of four consecutive layers, the fourth layer is slerp-merged (t ≈ 0.33) into each of the first three, compressing the 28 source layers into 21.

## 🧩 Configuration

```yaml
slices:
  # Group 1: layers 0-2, each slerped with layer 3
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [0, 1]
    - model: m-a-p/neo_7b
      layer_range: [3, 4]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [1, 2]
    - model: m-a-p/neo_7b
      layer_range: [3, 4]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [2, 3]
    - model: m-a-p/neo_7b
      layer_range: [3, 4]
  # Group 2: layers 4-6, each slerped with layer 7
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [4, 5]
    - model: m-a-p/neo_7b
      layer_range: [7, 8]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [5, 6]
    - model: m-a-p/neo_7b
      layer_range: [7, 8]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [6, 7]
    - model: m-a-p/neo_7b
      layer_range: [7, 8]
  # Group 3: layers 8-10, each slerped with layer 11
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [8, 9]
    - model: m-a-p/neo_7b
      layer_range: [11, 12]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [9, 10]
    - model: m-a-p/neo_7b
      layer_range: [11, 12]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [10, 11]
    - model: m-a-p/neo_7b
      layer_range: [11, 12]
  # Group 4: layers 12-14, each slerped with layer 15
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [12, 13]
    - model: m-a-p/neo_7b
      layer_range: [15, 16]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [13, 14]
    - model: m-a-p/neo_7b
      layer_range: [15, 16]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [14, 15]
    - model: m-a-p/neo_7b
      layer_range: [15, 16]
  # Group 5: layers 16-18, each slerped with layer 19
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [16, 17]
    - model: m-a-p/neo_7b
      layer_range: [19, 20]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [17, 18]
    - model: m-a-p/neo_7b
      layer_range: [19, 20]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [18, 19]
    - model: m-a-p/neo_7b
      layer_range: [19, 20]
  # Group 6: layers 20-22, each slerped with layer 23
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [20, 21]
    - model: m-a-p/neo_7b
      layer_range: [23, 24]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [21, 22]
    - model: m-a-p/neo_7b
      layer_range: [23, 24]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [22, 23]
    - model: m-a-p/neo_7b
      layer_range: [23, 24]
  # Group 7 (last group): layers 24-26, each slerped with layer 27
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [24, 25]
    - model: m-a-p/neo_7b
      layer_range: [27, 28]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [25, 26]
    - model: m-a-p/neo_7b
      layer_range: [27, 28]
  - sources:
    - model: m-a-p/neo_7b
      layer_range: [26, 27]
    - model: m-a-p/neo_7b
      layer_range: [27, 28]
merge_method: slerp
base_model: m-a-p/neo_7b
parameters:
  t: 0.3333 # interpolate each layer 1/3 of the way toward its group's 4th layer
dtype: bfloat16
output_path: ./merged_redistributed_neo_7b
```

Note that mergekit's `layer_range` is half-open, `[start, end)`, so each single layer is written as `[i, i + 1]`; a range like `[3, 3]` would select no layers at all.
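
As a point of reference, the sketch below shows the kind of spherical linear interpolation (slerp) that the merge applies tensor-by-tensor to each slice pair, using the config's `t = 0.3333`. It is a minimal illustration, not mergekit's exact code: the colinearity threshold for the linear-interpolation fallback and the toy tensor shapes are assumptions.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors (illustrative sketch)."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Measure the angle between the two weight vectors via their unit directions
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    dot = torch.clamp(a_n @ b_n, -1.0, 1.0)
    if dot.abs() > 0.9995:  # nearly colinear: plain lerp is numerically safer
        return (1 - t) * v0 + t * v1
    omega = torch.arccos(dot)   # angle between the two weight vectors
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)

# With t = 0.3333, each merged layer keeps ~2/3 of layer i and takes ~1/3 of
# the group's 4th layer, matching the comment in the config above.
w_layer_i = torch.randn(8, 8)   # stand-in for a layer-i weight tensor
w_layer_3 = torch.randn(8, 8)   # stand-in for the group's 4th-layer tensor
merged = slerp(0.3333, w_layer_i, w_layer_3)
```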
## 💻 Usage

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "DewEfresh/Neo_7b-merge3"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate with sampling
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
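
The merge itself was computed in `bfloat16` (the `dtype` in the config above), while the snippet loads the model in `float16`; on hardware with bfloat16 support, passing `torch_dtype=torch.bfloat16` instead is an equally reasonable choice.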