Instructions to use microsoft/MAI-DS-R1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use microsoft/MAI-DS-R1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/MAI-DS-R1", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/MAI-DS-R1", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("microsoft/MAI-DS-R1", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use microsoft/MAI-DS-R1 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "microsoft/MAI-DS-R1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/MAI-DS-R1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/microsoft/MAI-DS-R1
```
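Since the server speaks the OpenAI-compatible chat-completions protocol, any HTTP client can call it, not just curl. A minimal Python sketch follows; it assumes the vLLM server from the step above is running on `localhost:8000`. To keep it runnable without a live server, the example only builds and prints the request payload, with the actual send shown in comments:

```python
import json
import urllib.request

def build_chat_request(prompt, model="microsoft/MAI-DS-R1"):
    # Payload shape matches the OpenAI chat-completions API that
    # the vLLM server exposes at /v1/chat/completions.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("What is the capital of France?")
print(json.dumps(payload, indent=2))

# To actually send it (requires the running server from the previous step):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     body = json.loads(resp.read())
#     print(body["choices"][0]["message"]["content"])
```

The same payload works unchanged against the SGLang server below; only the port differs (30000 instead of 8000).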
- SGLang
How to use microsoft/MAI-DS-R1 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "microsoft/MAI-DS-R1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/MAI-DS-R1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "microsoft/MAI-DS-R1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/MAI-DS-R1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use microsoft/MAI-DS-R1 with Docker Model Runner:
```shell
docker model run hf.co/microsoft/MAI-DS-R1
```
Came for the model, stayed for the bots
I thought I would check in to see if the bots were going nuts about this model like they did over Perplexity's R1 1776. They didn't disappoint. I fail to understand why different alignments for open-source models, tailored to different cultures, get everyone so wound up. There is plenty of room and freedom to pick the alignment that suits you best. If you want R1's original alignment, then please enjoy that excellent model and the work they did. If you want Microsoft's alignment, use that. Many models have had alignment changes or removals. But for some reason there is a group of accounts that flips out every time someone alters or removes alignment values on R1. Why? It's a big world with lots of different perspectives. Live and let live, people.
This demonstrates the appeal of open-source models: anyone can fine-tune them based on their cultural context or values, aligning them with local cultural norms and legal standards.
I agree, I think it's great and really the best way forward. I just hope everyone can calm down a bit and not get into these little squabbles over "censorship standards" on models. Just release base models as open as you can and let people align them to meet their needs. Like you, I think that's the appeal of open models versus closed ones, and it's the direction we should be heading. You should be able to choose alignment, not have it forced on you.