Instructions to use ByteDance/Ouro-1.4B-Thinking with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ByteDance/Ouro-1.4B-Thinking with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ByteDance/Ouro-1.4B-Thinking", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("ByteDance/Ouro-1.4B-Thinking", trust_remote_code=True, dtype="auto")
```
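The pipeline handles chat templating automatically; for direct model use, a minimal sketch that pairs the model with its tokenizer and a generate call (the prompt and max_new_tokens here are illustrative choices, not settings from the model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance/Ouro-1.4B-Thinking"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, dtype="auto")

messages = [{"role": "user", "content": "Who are you?"}]
# Build the ChatML prompt and tokenize it
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```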
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ByteDance/Ouro-1.4B-Thinking with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ByteDance/Ouro-1.4B-Thinking"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ByteDance/Ouro-1.4B-Thinking",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
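Because the endpoint is OpenAI-compatible, the same server can also be called from Python with the openai client (the api_key value is a placeholder; vLLM does not require one by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the api_key is a dummy value
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ByteDance/Ouro-1.4B-Thinking",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```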
- SGLang
How to use ByteDance/Ouro-1.4B-Thinking with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ByteDance/Ouro-1.4B-Thinking" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ByteDance/Ouro-1.4B-Thinking",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
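SGLang's endpoint is likewise OpenAI-compatible, so the Python openai client works here too; this sketch streams tokens as they are generated (the api_key value is a placeholder):

```python
from openai import OpenAI

# Point the client at the local SGLang server; the api_key is a dummy value
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# stream=True yields chunks as they arrive instead of one final message
stream = client.chat.completions.create(
    model="ByteDance/Ouro-1.4B-Thinking",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```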
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ByteDance/Ouro-1.4B-Thinking" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ByteDance/Ouro-1.4B-Thinking",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use ByteDance/Ouro-1.4B-Thinking with Docker Model Runner:
docker model run hf.co/ByteDance/Ouro-1.4B-Thinking
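Once the model is pulled, Docker Model Runner exposes an OpenAI-compatible endpoint. A hedged sketch, assuming host TCP access is enabled on Docker's documented default port 12434 (the port and base path may differ by Docker version, so check your Model Runner settings):

```python
from openai import OpenAI

# Assumes Docker Model Runner's host TCP access is enabled (default port 12434);
# the /engines/v1 base path follows Docker's docs and may vary by version.
# The local endpoint does not check the api_key, so any non-empty string works.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

response = client.chat.completions.create(
    model="hf.co/ByteDance/Ouro-1.4B-Thinking",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```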
Fix bos/eos token IDs + add enable_thinking to chat template
Summary
Fix token configuration and add enable_thinking support for the chat template.
Token ID fixes
The current bos_token and eos_token are both set to <|endoftext|> (id=0), which is incorrect for a ChatML-style model. This PR fixes them to match the actual chat format:
| Field | Before | After |
|---|---|---|
| bos_token | <\|endoftext\|> (0) | <\|im_start\|> (1) |
| eos_token | <\|endoftext\|> (0) | <\|im_end\|> (2) |
| bos_token_id | 0 | 1 |
| eos_token_id | 0 | 2 |
These are also set in config.json.
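A quick way to verify the change (a sketch; the expected values in the comment assume this PR is applied):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ByteDance/Ouro-1.4B-Thinking", trust_remote_code=True)
# After this PR: <|im_start|> 1 and <|im_end|> 2
print(tokenizer.bos_token, tokenizer.bos_token_id)
print(tokenizer.eos_token, tokenizer.eos_token_id)
```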
enable_thinking chat template support
The Ouro-Thinking model enters chain-of-thought reasoning mode when <think> is prepended to the assistant turn, but the current chat template does not support triggering this.
This PR adds enable_thinking parameter support (following the convention from Qwen3 and DeepSeek-R1), so that:
```python
tokenizer.apply_chat_template(messages, add_generation_prompt=True, enable_thinking=True)
# produces: ...<|im_start|>assistant\n<think>\n
```
Backward compatible: default behavior is unchanged (no <think> unless explicitly requested).
This also enables proper integration with vLLM and lm-eval-harness, which pass enable_thinking=True to apply_chat_template().
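End to end, a thinking-mode generation with the updated template looks roughly like this (a sketch; the question and max_new_tokens are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance/Ouro-1.4B-Thinking"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, dtype="auto")

messages = [{"role": "user", "content": "What is 17 * 24?"}]
# enable_thinking=True appends <think>\n after the assistant header,
# steering the model into its chain-of-thought mode
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, enable_thinking=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
```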
Thanks to @sirorezka for identifying the bos/eos/pad token ID issues in PRs #2, #3. This PR bundles those fixes together with the enable_thinking chat template change.
Thank you so much!