```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenPipe/Qwen3-14B-Instruct")
model = AutoModelForCausalLM.from_pretrained("OpenPipe/Qwen3-14B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
## Qwen3-14B-Instruct Highlights
OpenPipe/Qwen3-14B-Instruct is a finetuning-friendly instruct variant of Qwen3-14B. The Qwen3 release does not include a 14B Instruct (non-thinking) model; this fork introduces an updated chat template that makes Qwen3-14B non-thinking by default and highly compatible with OpenPipe and other finetuning frameworks.
The default Qwen3 chat template does not render `<think></think>` tags on previous assistant messages, which can lead to inconsistencies between training and generation. This version resolves that issue by adding `<think></think>` tags to all assistant messages in both the prompt and generation templates, ensuring a consistent message format during training and inference.
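To illustrate the consistency issue, here is a minimal sketch (not the actual Jinja template, and `normalize_assistant_turns` is a hypothetical helper) of what the fork's template effectively does: every assistant turn carries an explicit empty `<think></think>` block, so the rendered history looks the same at training time and at inference time.

```python
# Sketch only: mimic the template behavior of prepending an empty
# <think></think> block to every assistant turn that lacks one.
def normalize_assistant_turns(messages):
    normalized = []
    for msg in messages:
        if msg["role"] == "assistant" and not msg["content"].lstrip().startswith("<think>"):
            # Prepend the empty thinking block the template would render
            msg = {**msg, "content": "<think>\n\n</think>\n\n" + msg["content"]}
        normalized.append(msg)
    return normalized

history = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello, how can I help?"},
    {"role": "user", "content": "Who are you?"},
]
for turn in normalize_assistant_turns(history):
    print(turn["role"], repr(turn["content"]))
```

Because the check skips turns that already start with `<think>`, applying it twice leaves the history unchanged, which mirrors why a template that renders the tags on every pass keeps training and generation aligned.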
The model retains the strong general capabilities of Qwen3-14B while providing a more finetuning-friendly chat template.
## Model Overview
Qwen3-14B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 14.8B
- Number of Parameters (Non-Embedding): 13.2B
- Number of Layers: 40
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: 32,768 tokens natively and 131,072 tokens with YaRN
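Extending the native 32,768-token context to 131,072 tokens is done with YaRN RoPE scaling. A minimal sketch of the config entry, assuming a scaling factor of 4.0 (131,072 / 32,768; the exact values you should use come from the Qwen3 documentation):

```python
# Assumed YaRN rope_scaling settings for extending the context window.
native_ctx = 32_768
factor = 4.0  # assumption: target_ctx / native_ctx

rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": native_ctx,
}

print(int(native_ctx * factor))  # extended context length

# This dict would go into the model's config (e.g. config.json), for example:
# model = AutoModelForCausalLM.from_pretrained(
#     "OpenPipe/Qwen3-14B-Instruct", rope_scaling=rope_scaling
# )
```

Note that static YaRN scaling applies at all sequence lengths, so it is generally recommended to enable it only when long-context inference is actually needed.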
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
## Model tree for OpenPipe/Qwen3-14B-Instruct

Base model: Qwen/Qwen3-14B-Base
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OpenPipe/Qwen3-14B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```