---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural4gsm8k
- nlpguy/AlloyIngotNeoX
- automerger/OgnoExperiment27-7B
- vanillaOVO/supermario_v4
base_model:
- Kukedlc/Neural4gsm8k
- nlpguy/AlloyIngotNeoX
- automerger/OgnoExperiment27-7B
- vanillaOVO/supermario_v4
---

# NeuralTopBench-7B-ties

NeuralTopBench-7B-ties is a DARE-TIES merge of the following models, built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) on top of the base model [CultriX/NeuralTrix-bf16](https://huggingface.co/CultriX/NeuralTrix-bf16):
* [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k)
* [nlpguy/AlloyIngotNeoX](https://huggingface.co/nlpguy/AlloyIngotNeoX)
* [automerger/OgnoExperiment27-7B](https://huggingface.co/automerger/OgnoExperiment27-7B)
* [vanillaOVO/supermario_v4](https://huggingface.co/vanillaOVO/supermario_v4)

## 🧩 Configuration

```yaml
models:
  - model: CultriX/NeuralTrix-bf16
    # no parameters necessary for base model
  - model: Kukedlc/Neural4gsm8k
    parameters:
      weight: 0.3
      density: 0.5
  - model: nlpguy/AlloyIngotNeoX
    parameters:
      weight: 0.2
      density: 0.5
  - model: automerger/OgnoExperiment27-7B
    parameters:
      weight: 0.2
      density: 0.5
  - model: vanillaOVO/supermario_v4
    parameters:
      weight: 0.3
      density: 0.5
merge_method: dare_ties
base_model: CultriX/NeuralTrix-bf16
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
```
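
The configuration above can be applied with the mergekit CLI. A minimal sketch, assuming a recent `mergekit` install and enough disk space for the base model plus the four source checkpoints (the output directory name is arbitrary):

```python
# Save the YAML above as config.yaml, then run the mergekit CLI.
!pip install -qU mergekit

# --copy-tokenizer copies the base model's tokenizer into the output;
# --lazy-unpickle reduces peak memory while loading the checkpoints.
!mergekit-yaml config.yaml ./NeuralTopBench-7B-ties --copy-tokenizer --lazy-unpickle
```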
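
For intuition about the `weight` and `density` parameters: with `dare_ties`, each model's delta from the base is randomly sparsified to roughly `density` of its entries and rescaled to preserve its expected value, a TIES-style sign consensus is applied across models, and the surviving deltas are summed with the given weights. Below is a minimal, illustrative sketch of just the DARE drop-and-rescale step; it is not mergekit's actual implementation:

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep ~`density` of the delta entries at random, rescale the survivors.

    `delta` is a fine-tuned model's weights minus the base model's weights.
    Dividing by `density` keeps the expected value of the delta unchanged.
    """
    mask = torch.rand_like(delta) < density  # keep each entry with probability `density`
    return delta * mask / density

# Example: sparsify a toy delta tensor at density 0.5, as in the config above
delta = torch.randn(4, 4)
sparse_delta = dare_drop_and_rescale(delta, density=0.5)
```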

## 💻 Usage - Streaming

```python
# Requirements
!pip install -qU transformers accelerate bitsandbytes

# Imports & settings
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import warnings
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"
warnings.filterwarnings('ignore')

# Model & tokenizer, loaded in 4-bit via bitsandbytes
MODEL_NAME = 'Kukedlc/NeuralTopBench-7B-ties'
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda:0', load_in_4bit=True)
tok = AutoTokenizer.from_pretrained(MODEL_NAME)

# Inference
prompt = "I want you to generate a theory that unites quantum mechanics with the theory of relativity and cosmic consciousness\n"
inputs = tok([prompt], return_tensors="pt").to('cuda')
streamer = TextStreamer(tok)

# generate() returns token ids as usual; the streamer also prints the decoded text to stdout as it is produced.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, do_sample=True, num_beams=1, top_p=0.9, temperature=0.7)
```
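
The raw prompt above bypasses the model's chat template, so instruction following usually improves if the request is wrapped with `apply_chat_template` first. A minimal variant reusing `model`, `tok`, and `streamer` from the cell above (the example question is only a placeholder):

```python
# Wrap the request in the model's chat template before streaming
messages = [{"role": "user", "content": "Explain the DARE-TIES merge method in two sentences."}]
chat_inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to('cuda')
_ = model.generate(chat_inputs, streamer=streamer, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.7)
```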

## 💻 Usage - Classic

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = 'Kukedlc/NeuralTopBench-7B-ties'

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
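
Note that `generated_text` includes the prompt itself; to print only the model's reply, slice it off with `print(outputs[0]["generated_text"][len(prompt):])`.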