---
license: mit
base_model: microsoft/Phi-4-reasoning
tags:
  - phi
  - phi4
  - reasoning
  - awq
  - int4
  - quantized
library_name: transformers
pipeline_tag: text-generation
quantization_config:
  quant_method: awq
  bits: 4
  group_size: 128
  zero_point: true
  version: gemm
---

# Phi-4-Reasoning AWQ

AWQ INT4 quantization of `microsoft/Phi-4-reasoning` (14B parameters).

## Quantization Details

| Parameter | Value |
|-----------|-------|
| Method | AWQ (Activation-Aware Weight Quantization) |
| Bit width | 4-bit weights, 16-bit activations (W4A16) |
| Group size | 128 |
| Zero point | Enabled |
| Kernel | GEMM |
| Calibration | 512 WikiText-2 samples |
| Source model | `microsoft/Phi-4-reasoning` (~28 GB FP16) |
| Quantized size | ~8.6 GB |
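The settings above map directly onto autoawq's `quant_config` dictionary. A sketch of that mapping (the key names follow the autoawq library; this is an assumption about how the checkpoint was produced, not taken from this repo):

```python
# AWQ settings from the table above, in autoawq's quant_config format
# (assumed reconstruction; key names per the autoawq library).
quant_config = {
    "zero_point": True,   # asymmetric quantization with a zero point
    "q_group_size": 128,  # one scale/zero-point per group of 128 weights
    "w_bit": 4,           # 4-bit weights; activations stay FP16 (W4A16)
    "version": "GEMM",    # GEMM kernel variant
}
```

With autoawq installed, a reproduction run would load the FP16 source model, then pass this dict to `model.quantize(tokenizer, quant_config=quant_config)` and save the result with `model.save_quantized(...)`.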

## Usage with vLLM

```bash
vllm serve steadyflow/Phi-4-reasoning-AWQ \
  --quantization awq \
  --gpu-memory-utilization 0.90 \
  --max-model-len 4096 \
  --enable-reasoning \
  --reasoning-parser deepseek_r1
```

The `--enable-reasoning --reasoning-parser deepseek_r1` flags separate the chain-of-thought (`reasoning_content` field) from the final answer (`content` field) in the OpenAI-compatible API response.
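With the parser enabled, a chat completion carries both fields side by side. A minimal sketch of reading them from a raw response payload (the payload here is a hand-written stand-in, not actual server output; field names follow vLLM's reasoning-parser output format):

```python
import json

# Hand-written stand-in for a vLLM OpenAI-compatible chat completion
# response with the reasoning parser enabled (illustrative values only).
raw = json.dumps({
    "choices": [{
        "message": {
            "role": "assistant",
            "reasoning_content": "First, factor the expression...",
            "content": "The answer is 42.",
        }
    }]
})

message = json.loads(raw)["choices"][0]["message"]
chain_of_thought = message.get("reasoning_content")  # CoT, tags stripped
final_answer = message["content"]                    # user-facing answer only
```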

**Known issue:** Some vLLM versions have bugs with Phi-4 reasoning output (infinite loops, missing `<think>` tags). If you encounter this, try setting `VLLM_USE_V1=0`, or omit the reasoning flags and parse the `<think>...</think>` tags from the output yourself.
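If the flags are omitted, the tags arrive inline in the completion text. A small sketch of splitting them out manually (the sample string is illustrative):

```python
import re

# Illustrative raw completion containing inline <think> tags.
raw_output = (
    "<think>Check small cases: 2, 3, 5 all work.</think>"
    "The claim holds for all primes below 7."
)

match = re.search(r"<think>(.*?)</think>", raw_output, flags=re.DOTALL)
reasoning = match.group(1).strip() if match else ""
# Everything after the closing tag is the final answer; fall back to the
# whole string if the model never emitted the tags.
answer = raw_output[match.end():].strip() if match else raw_output.strip()
```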

At ~8.6 GB, the quantized model fits on a single NVIDIA T4 GPU (16 GB VRAM).

## Recommended Inference Parameters

Per Microsoft's model card, use these parameters for best reasoning quality:

```python
temperature = 0.8
top_p = 0.95
do_sample = True
max_new_tokens = 32768  # increase for complex reasoning chains
```
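Against the vLLM server above, the same parameters go into the OpenAI-compatible request body. A sketch (the `max_tokens` value here is capped to fit the server's `--max-model-len 4096`; the placeholder strings are illustrative):

```python
import json

# OpenAI-compatible /v1/chat/completions request body using the
# recommended sampling parameters (sampling is implied by temperature > 0,
# so there is no do_sample field in the HTTP API).
payload = {
    "model": "steadyflow/Phi-4-reasoning-AWQ",
    "messages": [
        {"role": "system", "content": "<system prompt below>"},
        {"role": "user", "content": "Your question here"},
    ],
    "temperature": 0.8,
    "top_p": 0.95,
    "max_tokens": 2048,  # keep prompt + completion under --max-model-len 4096
}
body = json.dumps(payload)
```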

## Required System Prompt

Phi-4 reasoning models require a specific system prompt to activate structured reasoning with `<think>` tags:

```text
Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:
```

## Usage with Transformers

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model = AutoAWQForCausalLM.from_quantized("steadyflow/Phi-4-reasoning-AWQ")
tokenizer = AutoTokenizer.from_pretrained("steadyflow/Phi-4-reasoning-AWQ")

messages = [
    {"role": "system", "content": "<system prompt above>"},
    {"role": "user", "content": "Your question here"},
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(
    inputs.to("cuda"),  # AWQ models load on GPU; move inputs to match
    max_new_tokens=4096,
    temperature=0.8,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```