Instructions for using CalmState/Qwen-3-4b-Polyglot-r2 with libraries, inference servers, notebooks, and local apps.

## Libraries

### Transformers

How to use CalmState/Qwen-3-4b-Polyglot-r2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CalmState/Qwen-3-4b-Polyglot-r2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CalmState/Qwen-3-4b-Polyglot-r2")
model = AutoModelForCausalLM.from_pretrained("CalmState/Qwen-3-4b-Polyglot-r2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
## Local Apps
### vLLM

How to use CalmState/Qwen-3-4b-Polyglot-r2 with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "CalmState/Qwen-3-4b-Polyglot-r2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CalmState/Qwen-3-4b-Polyglot-r2",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/CalmState/Qwen-3-4b-Polyglot-r2
```
### SGLang

How to use CalmState/Qwen-3-4b-Polyglot-r2 with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "CalmState/Qwen-3-4b-Polyglot-r2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CalmState/Qwen-3-4b-Polyglot-r2",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "CalmState/Qwen-3-4b-Polyglot-r2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CalmState/Qwen-3-4b-Polyglot-r2",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

### Unsloth Studio
How to use CalmState/Qwen-3-4b-Polyglot-r2 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL):

```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for CalmState/Qwen-3-4b-Polyglot-r2 to start chatting
```

Install Unsloth Studio (Windows):

```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for CalmState/Qwen-3-4b-Polyglot-r2 to start chatting
```

Using Hugging Face Spaces for Unsloth:

```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for CalmState/Qwen-3-4b-Polyglot-r2 to start chatting
```

Load the model with FastModel:

```python
# Install first: pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="CalmState/Qwen-3-4b-Polyglot-r2",
    max_seq_length=2048,
)
```

### Docker Model Runner
How to use CalmState/Qwen-3-4b-Polyglot-r2 with Docker Model Runner:
```shell
docker model run hf.co/CalmState/Qwen-3-4b-Polyglot-r2
```
# Polyglot r2
This model is a specialized fine-tune of Qwen-3 4b, optimized specifically for the Polyglot Air desktop application workflow. This version was trained on a dataset approximately 4 times larger than its predecessor, significantly expanding its range of commands and improving the accuracy of its responses.
It has been trained to strictly adhere to suffix-based commands, ensuring instant, clean, and direct responses without conversational filler, hallucinations, or "Here is the translation" preambles.
## 🎯 Purpose
Default chat models often struggle with context switching when given a short string followed by a command (e.g., `text::en`): they may explain the translation or start chatting with the user.
This model is fine-tuned to treat the `::suffix` as a strict instruction code.
- Input: `O arquivo não pode ser salvo porque o disco está cheio::en`
- Output: `The file cannot be saved because the disk is full`
## ⚡ Suffix Command Table
Use these suffixes at the end of your selected text to trigger specific transformations.
### Translation

| Suffix | Action | Example Input | Model Output |
|---|---|---|---|
| `::en` | Translate to English | `Bom dia::en` | Good morning |
| `::pt` | Translate to Portuguese (Portugal) | `Good morning::pt` | Bom dia |
| `::ptbr` | Translate to Portuguese (Brazil) | `The bus is coming::ptbr` | O ônibus está vindo |
| `::es` | Translate to Spanish | `Hello friend::es` | Hola amigo |
| `::zh` | Translate to Chinese (Simplified) | `Hello::zh` | 你好 |
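With the model served behind an OpenAI-compatible endpoint (as in the vLLM instructions above), a suffix command is just an ordinary chat message. A minimal sketch, assuming the server is listening on `localhost:8000`; the network call is kept separate so the payload builder can be used on its own:

```python
import json
from urllib import request

def translation_payload(text: str, lang: str,
                        model: str = "CalmState/Qwen-3-4b-Polyglot-r2") -> dict:
    """Build an OpenAI-style chat payload for a ::<lang> translation command."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": f"{text}::{lang}"}],
    }

def send(payload: dict,
         url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """POST the payload and return the model's reply text."""
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = translation_payload("Bom dia", "en")
print(payload["messages"][0]["content"])  # Bom dia::en

# With a vLLM server running:
# print(send(payload))
```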
### Tone & Style Adjustment

| Suffix | Action | Example Input | Model Output |
|---|---|---|---|
| `::fix` | Fix spelling and grammar | `i goes to skool yesterday::fix` | I went to school yesterday |
| `::formal` | Rewrite in a professional/formal tone | `Hey, send me that file asap::formal` | Could you please send me that file as soon as possible? |
| `::casual` | Rewrite in a casual tone | `I acknowledge receipt of your message::casual` | Got your message, thanks! |
| `::informal` | Rewrite in a slang/informal tone | `That is very interesting::informal` | That's pretty cool |
| `::polite` | Make polite | `I need the report now.::polite` | Could you please provide the report when you have a moment? |
| `::technical` | Make technical | `The computer is broken.::technical` | The CPU is experiencing a critical hardware failure. |
| `::creative` | Make creative | `The sun set.::creative` | The sun painted the sky in hues of fire and gold. |
| `::business` | Make business-oriented | `We should sell more.::business` | We must optimize our sales funnel to increase revenue. |
| `::news` | Rewrite in a news style | `A car crashed on the main street.::news` | A vehicular collision occurred on Main Street earlier today. |
| `::social` | Rewrite in a social media style | `I'm launching a new product.::social` | 🔥 BIG NEWS! So excited to launch our new product! #launch #new |
### Content & Structure Manipulation

| Suffix | Action | Example Input | Model Output |
|---|---|---|---|
| `::summarize` | Summarize the text | `[Long Text]::summarize` | [Concise Summary] |
| `::expand` | Expand / add details | `The project was a success.::expand` | The project was a resounding success, exceeding all initial expectations. |
| `::elaborate` | Elaborate / add details | `She was happy.::elaborate` | A wide smile spread across her face, and her eyes sparkled with joy. |
| `::simplify` | Simplify the text | `The confluence of factors precipitated this.::simplify` | Many things caused this to happen. |
| `::concise` | Make concise | `Due to the fact that it was raining, we left.::concise` | We left because it was raining. |
| `::toQuestion` | Transform into a question | `The report is due tomorrow.::toQuestion` | Is the report due tomorrow? |
| `::toStatement` | Transform into a statement | `Should we start the project?::toStatement` | We should start the project. |
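The full command set above can be kept in one place client-side, e.g. to validate a suffix before sending the prompt to the model (a sketch only; the model itself needs no such table):

```python
# All suffixes from the Translation, Tone & Style, and Content & Structure tables.
SUFFIXES = {
    # Translation
    "en", "pt", "ptbr", "es", "zh",
    # Tone & style
    "fix", "formal", "casual", "informal", "polite",
    "technical", "creative", "business", "news", "social",
    # Content & structure
    "summarize", "expand", "elaborate", "simplify", "concise",
    "toQuestion", "toStatement",
}

def is_valid_command(prompt: str) -> bool:
    """True if the prompt ends with a known ::suffix command."""
    text, sep, suffix = prompt.rpartition("::")
    return bool(sep) and suffix in SUFFIXES

print(is_valid_command("Bom dia::en"))     # True
print(is_valid_command("Hello::klingon"))  # False
```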
## 🚀 How to use with Polyglot Air
- Download this model (or the GGUF version if available).
- Add it to your Ollama library.
- Open Polyglot Air.
- Go to Settings > Model and select this model.
- Enjoy seamless, instruction-following transformations.
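Step 2 above (adding the model to your Ollama library) can be done with a minimal Modelfile, assuming you have downloaded a GGUF file of this model (the filename below is hypothetical; adjust it to your download):

```
# Modelfile -- point FROM at your downloaded GGUF file
FROM ./Qwen-3-4b-Polyglot-r2.gguf
```

Then `ollama create polyglot-r2 -f Modelfile` registers the model under a name of your choosing, and `ollama run polyglot-r2 "Bom dia::en"` runs a suffix command against it.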
## Uploaded finetuned model
- Developed by: CalmState
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-4B-Instruct-2507
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.