Instructions for using FGFGFGGDFGDFGSD/Qwen3 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use FGFGFGGDFGDFGSD/Qwen3 with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="FGFGFGGDFGDFGSD/Qwen3",
    filename="model-f16.gguf",
)

llm.create_chat_completion(
    messages=[
        # Illustrative chat input; replace with your own messages
        {"role": "user", "content": "Hello! Who are you?"}
    ]
)
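create_chat_completion returns an OpenAI-style response dictionary. A minimal follow-up sketch, reusing the llm object created above, for printing the generated reply (the max_tokens value here is an illustrative choice):

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
    max_tokens=256,  # illustrative cap on generated tokens
)
# The reply text is nested OpenAI-style: choices -> message -> content
print(response["choices"][0]["message"]["content"])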
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use FGFGFGGDFGDFGSD/Qwen3 with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf FGFGFGGDFGDFGSD/Qwen3:F16

# Run inference directly in the terminal:
llama-cli -hf FGFGFGGDFGDFGSD/Qwen3:F16
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf FGFGFGGDFGDFGSD/Qwen3:F16

# Run inference directly in the terminal:
llama-cli -hf FGFGFGGDFGDFGSD/Qwen3:F16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf FGFGFGGDFGDFGSD/Qwen3:F16

# Run inference directly in the terminal:
./llama-cli -hf FGFGFGGDFGDFGSD/Qwen3:F16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf FGFGFGGDFGDFGSD/Qwen3:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf FGFGFGGDFGDFGSD/Qwen3:F16
Use Docker
docker model run hf.co/FGFGFGGDFGDFGSD/Qwen3:F16
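Once llama-server is running (started with any of the llama-server commands above), it serves an OpenAI-compatible HTTP API, by default on port 8080. A minimal Python sketch for querying it with the requests library, assuming the default host and port:

import requests

# llama-server's OpenAI-compatible chat endpoint (default host/port assumed)
url = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "FGFGFGGDFGDFGSD/Qwen3:F16",  # the single model loaded by the server
    "messages": [{"role": "user", "content": "Hello! Who are you?"}],
    "max_tokens": 256,
}

response = requests.post(url, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])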
- LM Studio
- Jan
- Ollama
How to use FGFGFGGDFGDFGSD/Qwen3 with Ollama:
ollama run hf.co/FGFGFGGDFGDFGSD/Qwen3:F16
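Ollama also exposes an OpenAI-compatible API on its default port 11434, so the model can be called from Python as well. A minimal sketch, assuming the model has already been pulled with the command above and that Ollama is running locally on its default port:

import requests

# Ollama's OpenAI-compatible chat endpoint (default port 11434 assumed)
url = "http://localhost:11434/v1/chat/completions"

payload = {
    # Same model reference used with `ollama run`
    "model": "hf.co/FGFGFGGDFGDFGSD/Qwen3:F16",
    "messages": [{"role": "user", "content": "Hello! Who are you?"}],
}

response = requests.post(url, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])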
- Unsloth Studio
How to use FGFGFGGDFGDFGSD/Qwen3 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for FGFGFGGDFGDFGSD/Qwen3 to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for FGFGFGGDFGDFGSD/Qwen3 to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for FGFGFGGDFGDFGSD/Qwen3 to start chatting
- Pi
How to use FGFGFGGDFGDFGSD/Qwen3 with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf FGFGFGGDFGDFGSD/Qwen3:F16
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "FGFGFGGDFGDFGSD/Qwen3:F16" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use FGFGFGGDFGDFGSD/Qwen3 with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf FGFGFGGDFGDFGSD/Qwen3:F16
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default FGFGFGGDFGDFGSD/Qwen3:F16
Run Hermes
hermes
- Docker Model Runner
How to use FGFGFGGDFGDFGSD/Qwen3 with Docker Model Runner:
docker model run hf.co/FGFGFGGDFGDFGSD/Qwen3:F16
- Lemonade
How to use FGFGFGGDFGDFGSD/Qwen3 with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull FGFGFGGDFGDFGSD/Qwen3:F16
Run and chat with the model
lemonade run user.Qwen3-F16
List all available models
lemonade list
Tokenizer configuration file (7,927 bytes):
{
"add_prefix_space": false,
"backend": "tokenizers",
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"is_local": true,
"model_max_length": 40960,
"pad_token": "<|PAD_TOKEN|>",
"padding_side": "right",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151665": {
"content": "<tool_response>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151666": {
"content": "</tool_response>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151667": {
"content": "<think>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151668": {
"content": "</think>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": false
},
"151669": {
"content": "<|PAD_TOKEN|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false,
"special": true
}
},
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %} {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n"
}
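The chat_template above is what turns a list of chat messages into the <|im_start|>/<|im_end|> prompt format the model expects. A minimal sketch of rendering it with Hugging Face transformers, assuming the repository also hosts the matching tokenizer files alongside this configuration:

from transformers import AutoTokenizer

# Load the tokenizer (assumes tokenizer/vocab files are present in the repo)
tokenizer = AutoTokenizer.from_pretrained("FGFGFGGDFGDFGSD/Qwen3")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello! Who are you?"},
]

# Render the ChatML-style prompt defined by chat_template; with
# add_generation_prompt=True it ends with "<|im_start|>assistant\n"
# so the model continues as the assistant
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)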