---
base_model:
- Qwen/Qwen3-4B
tags:
- transformers
- qwen3
- R1
- THINK
license: apache-2.0
language:
- en
---
# Qwen3-R1 4B
## GGUF Versions
- [Qwen3-R1-4B-GGUF](https://huggingface.co/mradermacher/Qwen3-R1-4B-GGUF)
- [Qwen3-R1-4B-i1-GGUF](https://huggingface.co/mradermacher/Qwen3-R1-4B-i1-GGUF)
## Model Description
**Qwen3-R1 Series** is a math- and reasoning-focused fine-tune of Qwen3-4B, optimized for math problems and hard question-answering tasks.
## Model Details
- **Developed by:** Ali-Yaser
- **Model type:** GRPO-trained reasoning ("thinking") model
- **Base Model:** Qwen/Qwen3-4B
- **Model Size:** 4B parameters
- **License:** Apache 2.0
- **Language(s):** English
- **Finetuned from:** Qwen3-4B
## Quick Start
### Installation
The easiest way to serve the model is with vLLM.

```shell
# Install vLLM from pip:
pip install vllm
```
Then download and serve the model:

```shell
# Download and serve the model (exposes an OpenAI-compatible API on port 8000):
vllm serve "Ali-Yaser/Qwen3-R1-4B"
```
Finally, query the server. For example:

```shell
# Call the server using curl:
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Ali-Yaser/Qwen3-R1-4B",
"messages": [
{
"role": "user",
"content": "1+434x434+10x22=?"
}
]
}'
```
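The same request can be made from Python with only the standard library. This is a minimal sketch of a chat-completions call against the local vLLM endpoint (the URL and port assume the default `vllm serve` settings above):

```python
# Send a chat-completions request to a local vLLM server using only
# the standard library (default endpoint of `vllm serve` assumed).
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"


def build_request(prompt: str, model: str = "Ali-Yaser/Qwen3-R1-4B") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def ask(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("1+434x434+10x22=?"))
```

For reference, reading the `x` in the example prompt as multiplication, the expected answer is 1 + 434·434 + 10·22 = 188577.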