Text Generation
Transformers
Safetensors
English
qwen3
math
reasoning
agent
qwen
grpo
reinforcement-learning
conversational
text-generation-inference
Instructions for using Intel/deepmath-v1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Intel/deepmath-v1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Intel/deepmath-v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Intel/deepmath-v1")
model = AutoModelForCausalLM.from_pretrained("Intel/deepmath-v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Intel/deepmath-v1 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Intel/deepmath-v1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/deepmath-v1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/Intel/deepmath-v1
```
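The server above exposes an OpenAI-compatible API, so it can also be called from Python instead of curl. A minimal sketch using only the standard library; `build_chat_request` is a hypothetical helper name, and it assumes the vLLM server from the previous step is running on `localhost:8000`:

```python
import json
import urllib.request


def build_chat_request(model, messages, base_url="http://localhost:8000"):
    """Build an OpenAI-compatible chat completion request object."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )


req = build_chat_request(
    "Intel/deepmath-v1",
    [{"role": "user", "content": "What is the capital of France?"}],
)
# With the server running, urllib.request.urlopen(req) returns the JSON
# completion; the call is omitted here so the snippet runs offline.
```

The same request shape works against the SGLang server below by changing `base_url` to port 30000.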
- SGLang
How to use Intel/deepmath-v1 with SGLang:
Install from pip and serve model
# Install SGLang from pip: pip install sglang # Start the SGLang server: python3 -m sglang.launch_server \ --model-path "Intel/deepmath-v1" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "Intel/deepmath-v1", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker images
docker run --gpus all \ --shm-size 32g \ -p 30000:30000 \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HF_TOKEN=<secret>" \ --ipc=host \ lmsysorg/sglang:latest \ python3 -m sglang.launch_server \ --model-path "Intel/deepmath-v1" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "Intel/deepmath-v1", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }' - Docker Model Runner
How to use Intel/deepmath-v1 with Docker Model Runner:
```shell
docker model run hf.co/Intel/deepmath-v1
```
# DeepMath: A Lightweight Math Reasoning Agent

<img src="https://cdn-uploads.huggingface.co/production/uploads/62d93cd728f9c86a4031562e/ndb_WmPavW1MONAjsGpYT.jpeg" style="width:600px" alt="An LLM is using a calculator to answer questions." />

## Model Description

**DeepMath** is a 4B parameter mathematical reasoning model that combines a fine-tuned LLM with a sandboxed Python executor. Built on [Qwen3-4B Thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) and trained with **GRPO (Group Relative Policy Optimization)**, DeepMath generates concise Python snippets for computational steps instead of verbose text explanations, significantly reducing errors and output length.

- **Developed by:** Intel AI Labs
- **Model type:** Causal language model with agent capabilities

## Model Architecture

DeepMath uses a LoRA adapter fine-tuned on top of Qwen3-4B Thinking with the following components:

- **Agent Interface:** Outputs special tokens for Python code execution during reasoning
- **Executor:** Sandboxed Python environment with allow-listed modules