---
base_model:
- Qwen/Qwen3-4B
tags:
- transformers
- qwen3
- R1
- THİNK
license: apache-2.0
language:
- en
---

# Qwen3-R1 4B 🚀

GGUF versions:
- https://huggingface.co/mradermacher/Qwen3-R1-4B-GGUF
- https://huggingface.co/mradermacher/Qwen3-R1-4B-i1-GGUF
[![Model Size](https://img.shields.io/badge/Model%20Size-4B-red)](https://huggingface.co/Ali-Yaser/Qwen3-R1-4B) [![License](https://img.shields.io/badge/License-Apache%202.0-green)](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) [![Base Model](https://img.shields.io/badge/Base-Qwen3--4B-orange)](https://huggingface.co/Qwen/Qwen3-4B)
## Model Description

**Qwen3-R1 Series** is a math- and reasoning-focused fine-tune of Qwen3-4B, optimized for mathematics and hard question-answering tasks.

## 📊 Model Details

- **Developed by:** Ali-Yaser
- **Model type:** GRPO thinker
- **Base Model:** Qwen/Qwen3-4B
- **Model Size:** 4B parameters
- **License:** Apache 2.0
- **Language(s):** English
- **Finetuned from:** Qwen3-4B

## 🚀 Quick Start

### Installation

The model is served with vLLM:

```
# Install vLLM from pip:
pip install vllm
```

Download and serve the model:

```
# Load and run the model:
vllm serve "Ali-Yaser/Qwen3-R1-4B"
```

Then query the running server, for example:

```
# Call the server using curl:
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Ali-Yaser/Qwen3-R1-4B",
		"messages": [
			{
				"role": "user",
				"content": "1+434x434+10x22=?"
			}
		]
	}'
```
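Since vLLM exposes an OpenAI-compatible HTTP API, the same request can be made from Python. Below is a minimal sketch using only the standard library; the helper names `build_payload` and `ask` are illustrative, not part of vLLM:

```python
import json
import urllib.request

def build_payload(prompt, model="Ali-Yaser/Qwen3-R1-4B"):
    # Same JSON body as the curl example above.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt, base_url="http://localhost:8000"):
    # POST to vLLM's OpenAI-compatible chat completions endpoint.
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Return the assistant's reply text.
    return body["choices"][0]["message"]["content"]
```

With `vllm serve` running locally, `ask("1+434x434+10x22=?")` returns the model's answer as a string.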
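For reference, reading `x` in the example prompt as multiplication, the expected answer can be checked locally:

```python
# 1 + 434*434 + 10*22, interpreting "x" as multiplication
answer = 1 + 434 * 434 + 10 * 22
print(answer)  # → 188577
```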