---
base_model:
- openchat/openchat-3.5-1210
- beowolx/CodeNinja-1.0-OpenChat-7B
license: mit
tags:
- moe
- frankenmoe
- merge
- mergekit
- openchat/openchat-3.5-1210
- beowolx/CodeNinja-1.0-OpenChat-7B
---

# prueba-moe2

prueba-moe2 is a Mixture of Experts (MoE) model built from the following experts using [Mergekit](https://github.com/arcee-ai/mergekit):
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)

## 🧩 Configuration

```yaml
base_model: mlabonne/Marcoro14-7B-slerp
experts:
  - positive_prompts:
      - chat
      - assistant
      - tell me
      - explain
      - what is
    source_model: openchat/openchat-3.5-1210
  - positive_prompts:
      - code
      - python
      - javascript
      - programming
      - algorithm
    source_model: beowolx/CodeNinja-1.0-OpenChat-7B
experts_per_token: 2
gate_mode: cheap_embed
```
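
The configuration above can be handed to Mergekit's MoE entry point to build the merged checkpoint. The snippet below is a minimal sketch, not part of the original card: it assumes the YAML is saved as `config.yaml` and that the `mergekit-moe` command is available in your environment; exact flags and output layout may vary between Mergekit versions.

```python
# Minimal sketch (assumption, not from the original card): build the MoE merge
# from the YAML above, saved locally as config.yaml.
!pip install -qU git+https://github.com/arcee-ai/mergekit.git

# mergekit-moe reads the config and writes the merged model to ./merge
!mergekit-moe config.yaml merge
```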

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mgv99/prueba-moe2"

# Load the tokenizer and build a text-generation pipeline with the model
# quantized to 4-bit so it fits on a single consumer GPU.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then generate.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
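
The text returned by the pipeline includes the prompt by default. As a hypothetical follow-up (not part of the original card), the same pipeline can be reused for a second turn by appending the assistant's reply to `messages` and re-applying the chat template:

```python
# Hypothetical follow-up turn (assumption, not from the original card).
# Strip the prompt from the previous output, record it as the assistant's
# reply, then ask a new question through the same chat template.
reply = outputs[0]["generated_text"][len(prompt):]
messages += [
    {"role": "assistant", "content": reply},
    {"role": "user", "content": "Give one task where routing between a chat expert and a code expert helps."},
]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"][len(prompt):])
```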