Mixture of Experts
Collection
MoE models made using mergekit and LazyMergekit: https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb#scrollTo=d5mYzDo1q96y
How to use mlabonne/Beyonder-4x7b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="mlabonne/Beyonder-4x7b")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("mlabonne/Beyonder-4x7b")
model = AutoModelForCausalLM.from_pretrained("mlabonne/Beyonder-4x7b")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
How to use mlabonne/Beyonder-4x7b with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "mlabonne/Beyonder-4x7b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlabonne/Beyonder-4x7b",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
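The vLLM endpoint above is OpenAI-compatible, so it can also be called from Python. A minimal sketch, assuming the server started above is listening on localhost:8000 and the openai client package is installed (pip install openai); the api_key value is a placeholder since vLLM does not check it by default:
# Query the local vLLM server through the OpenAI Python client
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="mlabonne/Beyonder-4x7b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)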
How to use mlabonne/Beyonder-4x7b with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "mlabonne/Beyonder-4x7b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlabonne/Beyonder-4x7b",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "mlabonne/Beyonder-4x7b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlabonne/Beyonder-4x7b",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
How to use mlabonne/Beyonder-4x7b with Docker Model Runner:
docker model run hf.co/mlabonne/Beyonder-4x7b
This model is a Mixture of Experts (MoE) made with mergekit (mixtral branch). It uses the following base models:
base_model: openchat/openchat-3.5-1210
gate_mode: hidden
experts:
- source_model: openchat/openchat-3.5-1210
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
negative_prompts:
- "storywriting"
- "mathematics"
- "reasoning"
- "code"
- "programming"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
negative_prompts:
- "chat"
- "assistant"
- "storywriting"
- "mathematics"
- "reasoning"
- source_model: maywell/PiVoT-0.1-Starling-LM-RP
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
negative_prompts:
- "chat"
- "assistant"
- "code"
- "programming"
- "mathematics"
- "reasoning"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
negative_prompts:
- "chat"
- "assistant"
- "code"
- "programming"
- "storywriting"
Inference example with transformers (4-bit quantization via bitsandbytes):
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Beyonder-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
Output:
A Mixture of Experts (MoE) is a neural network architecture that combines the strengths of multiple expert networks to make predictions. It leverages the idea of ensemble learning, where multiple models work together to improve performance. In each MoE, a gating network is used to select the most relevant expert for the input. The final output is a weighted combination of the expert outputs, determined by the gating network's predictions.