---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Corianas/Microllama_Char_88k_step
base_model:
- Corianas/Microllama_Char_88k_step
- Corianas/Microllama_Char_88k_step
---

# microchar_moe

microchar_moe is a Mixture of Experts (MoE) model built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) from the following experts (two copies of the same base model):
* [Corianas/Microllama_Char_88k_step](https://huggingface.co/Corianas/Microllama_Char_88k_step)
* [Corianas/Microllama_Char_88k_step](https://huggingface.co/Corianas/Microllama_Char_88k_step)

## 🧩 Configuration

```yaml
base_model: Corianas/Microllama_Char_88k_step
gate_mode: random # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
## (optional)
# experts_per_token: 2
experts:
  - source_model: Corianas/Microllama_Char_88k_step
    positive_prompts:
      - ""
    ## (optional)
    # negative_prompts:
    #   - "This is a prompt expert_model_1 should not be used for"
  - source_model: Corianas/Microllama_Char_88k_step
    positive_prompts:
      - ""
```
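
To reproduce the merge locally, the configuration above can be saved to a file and passed to mergekit's MoE script. Below is a minimal sketch, assuming `mergekit` is installed (`pip install mergekit`) and provides the `mergekit-moe` entry point; the file and output directory names are illustrative:

```python
# Minimal sketch (assumption: `mergekit` is installed and ships the
# `mergekit-moe` CLI; "config.yaml" and "microchar_moe" are illustrative names).
import subprocess
from pathlib import Path

config = """\
base_model: Corianas/Microllama_Char_88k_step
gate_mode: random
dtype: bfloat16
experts:
  - source_model: Corianas/Microllama_Char_88k_step
    positive_prompts:
      - ""
  - source_model: Corianas/Microllama_Char_88k_step
    positive_prompts:
      - ""
"""
Path("config.yaml").write_text(config)

# mergekit-moe <config> <output-dir> builds the merged MoE checkpoint.
subprocess.run(["mergekit-moe", "config.yaml", "microchar_moe"], check=True)
```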

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Corianas/microchar_moe"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
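
Because the underlying Microllama_Char_88k_step experts are character-level models, a chat template may not be defined for this checkpoint, in which case the `apply_chat_template` call above may not behave as expected. A minimal plain-text alternative (the prompt is illustrative):

```python
# Minimal sketch: plain-text generation without a chat template. Assumes the
# merged checkpoint loads with standard `transformers`; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Corianas/microchar_moe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Encode a raw prompt and sample a continuation.
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```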