Instructions for using microsoft/GRIN-MoE with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use microsoft/GRIN-MoE with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/GRIN-MoE", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/GRIN-MoE", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("microsoft/GRIN-MoE", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use microsoft/GRIN-MoE with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "microsoft/GRIN-MoE"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "microsoft/GRIN-MoE",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
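Since the server exposes an OpenAI-compatible API, you can also call it from Python; a minimal sketch, assuming the default host and port from the command above and the `openai` package installed:

```python
# Minimal sketch: query the local vLLM server via its OpenAI-compatible API.
# Assumes the server was started as shown above (default port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused locally
response = client.chat.completions.create(
    model="microsoft/GRIN-MoE",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```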
Use Docker

```sh
docker model run hf.co/microsoft/GRIN-MoE
```
- SGLang
How to use microsoft/GRIN-MoE with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "microsoft/GRIN-MoE" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "microsoft/GRIN-MoE",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
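The same endpoint can be called from Python; a minimal sketch using `requests`, mirroring the curl call above under the same server defaults:

```python
# Minimal sketch: POST to the SGLang server's OpenAI-compatible endpoint
# (assumes the server was started as shown above, default port 30000).
import requests

payload = {
    "model": "microsoft/GRIN-MoE",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```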
Use Docker images

```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "microsoft/GRIN-MoE" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "microsoft/GRIN-MoE",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use microsoft/GRIN-MoE with Docker Model Runner:
```sh
docker model run hf.co/microsoft/GRIN-MoE
```
Question about sparsemixer
Hi there!
First of all - great work! :)
I'm experimenting with sparsemixer on a different model architecture, and I'm looking at the sparsemixer code for two reasons:
- to make it work with DeepSpeed (DeepSpeed hangs after a few steps during code testing)
- to make it work with top_k>2
I have 3 questions if you don't mind:
- So, there are two almost identical blocks of code in sparsemixer that could be put in a loop, I guess. The only difference is line 834 (the first block, for the first expert):

```python
torch.rand_like(max_scores) > 0.75 # Heun's third-order method: f(x) - f(0) = .25 f'(x) + .75 f'(x/3.)
```

and line 881 (for the second expert):

```python
torch.rand_like(max_scores).uniform_() > 0.75 # Heun's third-order method: f(x) - f(0) = .25 f'(x) + .75 f'(x/3.)
```

Is this extra draw from a uniform distribution added there for any particular reason? It samples twice now: once in the `rand_like` call and again in the `uniform_()` call (see the sketch after these questions).
- Have you tested it with DeepSpeed ZeRO-3? It may be an issue with how I integrated sparsemixer into my experiment, but the same model with softmax + top-k routing trains just fine (the modeling code contains a workaround for hangs with ZeRO-3; I understand what the problem is).
- Are there any additional considerations to make it work with top_k>2, or is top_k=2 simply how it was implemented for this experiment with the model you trained?
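For reference, a minimal sketch of the double draw in question 1 (`max_scores` here is just a stand-in tensor):

```python
# torch.rand_like() already returns a fresh uniform [0, 1) sample; calling
# .uniform_() on the result immediately overwrites it in place with a second
# draw, so the first draw is discarded.
import torch

max_scores = torch.zeros(4)  # stand-in for the router's max scores
first = torch.rand_like(max_scores) > 0.75              # line 834 pattern: one draw
second = torch.rand_like(max_scores).uniform_() > 0.75  # line 881 pattern: two draws
# Both are statistically equivalent; the second just wastes one RNG draw.
```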
Thank you
- There is no particular reason; the second `.uniform_()` could be removed.
- Yes, we tried that. We ended up using ZeRO-1 + PP + activation checkpointing, which yields the best throughput (much better than ZeRO-3).
- Yes, there are:
- We model the sampling of the top-k experts as iterative sampling, which brings issues. For example, if k is 4 and the four experts you get are a, b, c, d, then sampling a->b->c->d and d->c->b->a yields the same set of experts, but the two orders are treated separately. This complicates the gradient computation (see the permutation sketch below).
- When there are multiple activated experts, it is a little tricky to integrate the first-order and third-order estimators. To start, I would recommend using the 1st-order estimator only (set `mask_for_one` to `1.0` in all cases); a sketch of this setup follows.
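To make the overcounting concrete, a quick check that all orderings of four experts collapse to a single set (the expert names are just placeholders):

```python
# 4! = 24 sequential sampling orders, but only one set of activated experts.
from itertools import permutations

experts = ("a", "b", "c", "d")
orderings = list(permutations(experts))
distinct_sets = {frozenset(p) for p in orderings}
print(len(orderings), len(distinct_sets))  # prints: 24 1
```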
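And a minimal sketch of what "1st-order estimator only" can look like for a single expert draw. This is a generic straight-through-style routine, not the GRIN-MoE code; `first_order_route` and its shapes are illustrative assumptions:

```python
# Generic first-order (straight-through-style) gradient estimator for one
# sampled expert: the forward pass uses the hard one-hot selection (analogous
# to fixing mask_for_one to 1.0), while the backward pass routes the gradient
# through the softmax probabilities.
import torch
import torch.nn.functional as F

def first_order_route(logits: torch.Tensor):
    """logits: (batch, num_experts) router logits (illustrative shape)."""
    probs = F.softmax(logits, dim=-1)
    idx = torch.multinomial(probs, num_samples=1)      # (batch, 1) sampled expert
    hard = F.one_hot(idx.squeeze(-1), logits.size(-1)).to(probs.dtype)
    # Straight-through: hard one-hot in forward, softmax gradient in backward.
    return hard + probs - probs.detach(), idx
```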