---
base_model:
- Qwen/Qwen3-8B
tags:
- transformers
- qwen3
- R1
- THINK
license: apache-2.0
language:
- en
---
<img src="https://i.imgur.com/vo0dm9p.jpeg" width="710"/>
# Qwen3-R1 8B 🚀
<div align="center">
[Model](https://huggingface.co/Ali-Yaser/Qwen3-R1-8B)
[License](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md)
[Base Model](https://huggingface.co/Qwen/Qwen3-8B)
</div>
## Model Description
**Qwen3-R1 Series** is a math- and reasoning-focused fine-tune of Qwen3-8B, optimized for mathematics and hard question-answering tasks.
## 📊 Model Details
- **Developed by:** Ali-Yaser
- **Model type:** GRPO-trained reasoning ("thinking") model
- **Base Model:** Qwen/Qwen3-8B
- **Model Size:** 8B parameters
- **License:** Apache 2.0
- **Language(s):** English
- **Finetuned from:** Qwen3-8B
## 🚀 Quick Start
### Installation
This model can be served with vLLM. Install it from PyPI:
```shell
# Install vLLM from pip:
pip install vllm
```
Then download and serve the model:
```shell
# Download the model and serve it with an OpenAI-compatible API:
vllm serve "Ali-Yaser/Qwen3-R1-8B"
```
Once the server is running, you can query it. For example:
```shell
# Call the server using curl:
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Ali-Yaser/Qwen3-R1-8B",
"messages": [
{
"role": "user",
"content": "1+434334434+10x22=?"
}
]
}'
```
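For programmatic access, here is a minimal Python sketch that sends the same chat request using only the standard library. It assumes the vLLM server from the previous step is running on `localhost:8000`; the helper names `build_request` and `ask` are illustrative, not part of vLLM.

```python
import json
import urllib.request

def build_request(prompt, model="Ali-Yaser/Qwen3-R1-8B"):
    """Build the JSON payload for the /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt, base_url="http://localhost:8000"):
    """POST a chat request to the vLLM server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI-compatible response nests the answer under choices[0].
    return body["choices"][0]["message"]["content"]
```

Call `ask("1+434334434+10x22=?")` to reproduce the `curl` example above.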