steadyflow committed on
Commit 4b0a1cd · verified · 1 Parent(s): 02ed7b0

Update model card: add system prompt, inference params, reasoning parser flags

Files changed (1): README.md (+34 -3)
README.md CHANGED
@@ -22,8 +22,6 @@ quantization_config:
 
 AWQ INT4 quantization of [microsoft/Phi-4-reasoning](https://huggingface.co/microsoft/Phi-4-reasoning) (14B parameters).
 
- Calibrated with WikiText-2 for general-purpose use.
-
 ## Quantization Details
 
 | Parameter | Value |
@@ -43,11 +41,36 @@ Calibrated with WikiText-2 for general-purpose use.
 vllm serve steadyflow/Phi-4-reasoning-AWQ \
   --quantization awq \
   --gpu-memory-utilization 0.90 \
- --max-model-len 4096
+ --max-model-len 4096 \
+ --enable-reasoning \
+ --reasoning-parser deepseek_r1
 ```
 
+ The `--enable-reasoning --reasoning-parser deepseek_r1` flags separate the chain-of-thought (`reasoning_content` field) from the final answer (`content` field) in the OpenAI-compatible API response.
+
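For reference, a minimal client-side sketch of reading the two fields, assuming the server above is running with the `vllm serve` defaults (`http://localhost:8000/v1`) and the `openai` Python package is installed; the user question is illustrative:

```python
# Minimal sketch: query the vLLM OpenAI-compatible server started above.
# Assumes: vllm serve defaults (http://localhost:8000/v1), `pip install openai`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="steadyflow/Phi-4-reasoning-AWQ",
    messages=[
        {"role": "system", "content": "<system prompt from the section below>"},
        {"role": "user", "content": "How many primes are less than 100?"},
    ],
    temperature=0.8,
    top_p=0.95,
)

message = response.choices[0].message
print("reasoning:", message.reasoning_content)  # chain-of-thought, separated by the parser
print("answer:", message.content)               # final answer only
```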
+ > **Known issue:** Some vLLM versions have bugs with Phi-4 reasoning output (infinite loops, missing `<think>` tags). If you encounter this, try setting `VLLM_USE_V1=0` or omit the reasoning flags and parse `<think>...</think>` tags from the output yourself.
+
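If you take the fallback route (no reasoning flags), a small sketch of parsing the tags yourself; it assumes at most one `<think>...</think>` block per response, which is what the system prompt below requests:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw model output into (chain_of_thought, final_answer).

    Fallback for when the server-side reasoning parser is not used;
    assumes at most one <think>...</think> block in the output.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()  # no tags: treat the whole output as the answer
    return match.group(1).strip(), text[match.end():].strip()
```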
 Fits on a single NVIDIA T4 GPU (16GB VRAM).
 
+ ## Recommended Inference Parameters
+
+ Per Microsoft's model card, use these parameters for best reasoning quality:
+
+ ```python
+ temperature = 0.8
+ top_p = 0.95
+ do_sample = True
+ max_new_tokens = 32768  # increase for complex reasoning chains
+ ```
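As a sketch of carrying these over to `transformers`, the same values map directly onto a `GenerationConfig` (the names above are already the standard `generate()` keyword arguments):

```python
from transformers import GenerationConfig

# Recommended sampling settings bundled for reuse; pass as
# model.generate(..., generation_config=generation_config).
generation_config = GenerationConfig(
    temperature=0.8,
    top_p=0.95,
    do_sample=True,
    max_new_tokens=32768,  # increase for complex reasoning chains
)
```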
+
+ ## Required System Prompt
+
+ Phi-4 reasoning models require a specific system prompt to activate structured reasoning with `<think>` tags:
+
+ ```
+ Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:
+ ```
+
 ## Usage with Transformers
 
 ```python
@@ -56,4 +79,12 @@ from transformers import AutoTokenizer
 
 model = AutoAWQForCausalLM.from_quantized("steadyflow/Phi-4-reasoning-AWQ")
 tokenizer = AutoTokenizer.from_pretrained("steadyflow/Phi-4-reasoning-AWQ")
+
+ messages = [
+     {"role": "system", "content": "<system prompt above>"},
+     {"role": "user", "content": "Your question here"},
+ ]
+ inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
+ outputs = model.generate(inputs.to(model.device), max_new_tokens=4096, temperature=0.8, top_p=0.95, do_sample=True)
+ print(tokenizer.decode(outputs[0]))
 ```
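A small follow-up, if you only want the generated continuation rather than the echoed prompt (standard `transformers` tensor slicing; names as in the block above, `split_reasoning` from the earlier sketch):

```python
# Decode only the tokens generated after the prompt, then split the
# chain-of-thought from the final answer.
generated = outputs[0][inputs.shape[-1]:]
text = tokenizer.decode(generated, skip_special_tokens=True)
thought, answer = split_reasoning(text)
print(answer)
```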