How to use with llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/qwen3:
# Run inference directly in the terminal:
llama-cli -hf cortexso/qwen3:
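Once started, llama-server exposes an OpenAI-compatible HTTP API, listening on http://localhost:8080 by default. A minimal sketch of a chat completion request against that default address:
# Query the local OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello! Who are you?"}]}'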
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/qwen3:
# Run inference directly in the terminal:
llama-cli -hf cortexso/qwen3:
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/qwen3:
# Run inference directly in the terminal:
./llama-cli -hf cortexso/qwen3:
Build from source
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/qwen3:
# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/qwen3:
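The default CMake build targets the CPU (Metal is enabled automatically on macOS). For NVIDIA GPUs, recent llama.cpp releases accept a CUDA option at configure time; a sketch, assuming the CUDA toolkit is installed:
# Optional: reconfigure and rebuild with CUDA offload (flag name per recent llama.cpp releases):
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli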
Use Docker
docker model run hf.co/cortexso/qwen3:
Overview

The Qwen team developed and released the Qwen3 series, a state-of-the-art family of language models optimized for advanced reasoning, dialogue, instruction following, and agentic use cases. Qwen3 introduces switchable thinking/non-thinking modes, long-context capabilities, and broad multilingual support, all while maintaining high efficiency and performance.

The Qwen3 models span several sizes and support seamless reasoning, complex tool use, and detailed multi-turn conversation, making them well suited to applications such as research assistants, code generation, and enterprise chatbots.
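
In practice, the thinking/non-thinking switch can be driven from the prompt itself: the Qwen3 documentation describes /think and /no_think soft switches that can be appended to a message. A sketch against a local llama-server running one of these GGUF builds, assuming default server settings:

# Ask for a direct answer with the reasoning trace disabled via Qwen3's soft switch:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is 17 * 24? /no_think"}]}'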

Variants

Qwen3

| No | Variant       | Branch  | Cortex CLI command       |
|----|---------------|---------|--------------------------|
| 1  | Qwen3-0.6B    | 0.6b    | cortex run qwen3:0.6b    |
| 2  | Qwen3-1.7B    | 1.7b    | cortex run qwen3:1.7b    |
| 3  | Qwen3-4B      | 4b      | cortex run qwen3:4b      |
| 4  | Qwen3-8B      | 8b      | cortex run qwen3:8b      |
| 5  | Qwen3-14B     | 14b     | cortex run qwen3:14b     |
| 6  | Qwen3-32B     | 32b     | cortex run qwen3:32b     |
| 7  | Qwen3-30B-A3B | 30b-a3b | cortex run qwen3:30b-a3b |

Each branch contains multiple quantized GGUF versions:

  • Qwen3-0.6B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
  • Qwen3-1.7B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
  • Qwen3-4B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
  • Qwen3-8B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
  • Qwen3-14B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
  • Qwen3-32B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
  • Qwen3-30B-A3B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide
  2. Search for the model in the Jan Model Hub:
    cortexso/qwen3

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide
  2. Run the model with:
    cortex run qwen3
    
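Cortex also fronts the model with an OpenAI-compatible API server; per the Cortex docs it listens on localhost:39281 by default, though the port and the model tag below are assumptions that may vary by version. A sketch:

# Query Cortex's OpenAI-compatible endpoint (default port per Cortex docs; may vary):
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3:0.6b", "messages": [{"role": "user", "content": "Hello"}]}'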

Credits
