---
base_model:
- google/gemma-2b-it
- google/codegemma-2b
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- google/gemma-2b-it
- google/codegemma-2b
---
# gemma-2x2b

gemma-2x2b is a Mixture of Experts (MoE) model made from the following models using [Mergekit](https://github.com/arcee-ai/mergekit):

* [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it)
* [google/codegemma-2b](https://huggingface.co/google/codegemma-2b)

## 🧩 Configuration
```yaml
base_model: google/gemma-2b-it
experts:
  - source_model: google/gemma-2b-it
    positive_prompts:
      - "chat"
  - source_model: google/codegemma-2b
    positive_prompts:
      - "code"
experts_per_token: 2
gate_mode: hidden
```
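
To reproduce the merge, this config can be fed to mergekit's MoE entry point. A minimal sketch, assuming a notebook environment and that the config is saved as `config.yaml`; the output directory name is just an example:

```python
# Install mergekit and run the MoE merge from the config above.
!pip install -qU mergekit
!mergekit-moe config.yaml ./gemma-2x2b --copy-tokenizer
```

With `gate_mode: hidden`, mergekit initializes the router from hidden-state representations of each expert's `positive_prompts`, so chat-like inputs are steered toward gemma-2b-it and code-like inputs toward codegemma-2b.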
## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mgv99/gemma-2x2b"

tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit with a text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
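
On recent transformers versions, passing `load_in_4bit` through `model_kwargs` may emit a deprecation warning; the quantization settings can be given explicitly instead. A minimal sketch of the equivalent loading path, assuming bitsandbytes is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Explicit 4-bit quantization config, equivalent to load_in_4bit=True above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("mgv99/gemma-2x2b")
model = AutoModelForCausalLM.from_pretrained(
    "mgv99/gemma-2x2b",
    quantization_config=bnb_config,
    device_map="auto",
)
```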