Instructions to use steampunque/phi-4-MP-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use steampunque/phi-4-MP-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="steampunque/phi-4-MP-GGUF",
    filename="phi-4.Q4_K_H.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use steampunque/phi-4-MP-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf steampunque/phi-4-MP-GGUF

# Run inference directly in the terminal:
llama-cli -hf steampunque/phi-4-MP-GGUF
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf steampunque/phi-4-MP-GGUF

# Run inference directly in the terminal:
llama-cli -hf steampunque/phi-4-MP-GGUF
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf steampunque/phi-4-MP-GGUF

# Run inference directly in the terminal:
./llama-cli -hf steampunque/phi-4-MP-GGUF
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf steampunque/phi-4-MP-GGUF

# Run inference directly in the terminal:
./build/bin/llama-cli -hf steampunque/phi-4-MP-GGUF
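Whichever install method you use, llama-server exposes an OpenAI-compatible API once it is running. The request below is a minimal sketch, assuming the default host and port (localhost:8080); the prompt text is just an example.

# Query the running server's OpenAI-compatible chat endpoint (default port assumed)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Give me a one-line summary of phi-4."}]}'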
Use Docker
docker model run hf.co/steampunque/phi-4-MP-GGUF
- LM Studio
- Jan
- Ollama
How to use steampunque/phi-4-MP-GGUF with Ollama:
ollama run hf.co/steampunque/phi-4-MP-GGUF
- Unsloth Studio
How to use steampunque/phi-4-MP-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for steampunque/phi-4-MP-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for steampunque/phi-4-MP-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for steampunque/phi-4-MP-GGUF to start chatting
- Docker Model Runner
How to use steampunque/phi-4-MP-GGUF with Docker Model Runner:
docker model run hf.co/steampunque/phi-4-MP-GGUF
- Lemonade
How to use steampunque/phi-4-MP-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull steampunque/phi-4-MP-GGUF
Run and chat with the model
lemonade run user.phi-4-MP-GGUF-{{QUANT_TAG}}
List all available models
lemonade list
Mixed Precision GGUF layer quantization of phi-4 by microsoft
Original model: https://huggingface.co/microsoft/phi-4
The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This quant is sized at roughly Q4_K_M bits per weight. Only K quants are employed, to avoid the slow processing of IQ quants on CPUs and older GPUs. For this file the Q4_K_H layer quants are as follows:
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K
LAYER_TYPES='[
[0 ,"Q5_K_M"],[1 ,"Q5_K_S"],[2 ,"Q4_K_L"],[3 ,"Q4_K_M"],[4 ,"Q4_K_S"],[5 ,"Q4_K_S"],[6 ,"Q4_K_S"],[7 ,"Q3_K_L"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q4_K_S"],[25,"Q3_K_L"],[26,"Q4_K_S"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q4_K_M"],[34,"Q4_K_M"],[35,"Q4_K_L"],[36,"Q5_K_S"],[37,"Q5_K_M"],[38,"Q5_K_L"],[39,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| Q4_K_M | 9.05e9 | 6.7 | - |
| Q4_K_H | 8.65e9 | 6.8 | Hybrid quant with Q6_K embedding Q6_K output |
The quant was optimized for strong reasoning performance across a curated set of test prompts.
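The evaluation text behind the PPL column is not specified here; as a general sketch, perplexity for a GGUF file can be measured with llama.cpp's llama-perplexity tool against any raw text corpus (the test file name below is a placeholder, and absolute values depend on the corpus and context length used).

# Measure perplexity of the quant on a raw text file
./build/bin/llama-perplexity -m phi-4.Q4_K_H.gguf -f wiki.test.raw -c 2048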
Usage:
The unique feature of phi-4 is strong reasoning without any RL methods having been used in the instruct training. Instead it was pretrained on a highly curated set of (apparently) very high quality data (e.g. strong textbooks) to provide an inherently strong reasoning ability. In tests it does perform extremely well on reasoning, exhibiting much higher than typical "common sense", with none of the tedious and inefficient overthinking endemic to most RL-trained models.
The layer quants for this model were evaluated on a set of test/eval prompts using greedy sampling. The quant/model shows extremely good performance on reasoning problems. It is mostly useless for code prompts, however: even though it doesn't score terribly badly on code evals, it cannot reliably generate working code for even simple tasks.
The model can be speculatively decoded using Qwen2.5 0.5B Instruct as a draft model if the inference engine supports dynamic vocab translation between draft and target models. Approximate generation performance using a downstream speculator with llama.cpp on an RTX 4070 is shown below (a sketch of a possible invocation follows the table):
| Quant | KV cache type | Draft tokens | KV cache | gen tok/s | Comment |
|---|---|---|---|---|---|
| Q4_K_H | F16 | 0 | 16k | 49 | No draft |
| Q4_K_H | F16 | 4 | 14k | 72 | Spec 4 |
| Q4_K_H | Q8_0 | 4 | 16k | 71 | Spec 4 |
| Q4_K_H | Q8_0 | 3 | 16k | 72 | Spec 3 |
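As a sketch of the setup above: llama.cpp drives downstream speculation through its draft-model flags, though mainline builds require draft and target vocabularies to be compatible, so the vocab translation described above may need a patched build. The draft file name below is a placeholder.

# Hypothetical speculative-decoding run: Q8_0 KV cache, 16k context, 4 draft tokens per step
./build/bin/llama-server -m phi-4.Q4_K_H.gguf \
  -md qwen2.5-0.5b-instruct-q8_0.gguf \
  --draft-max 4 -c 16384 -ctk q8_0 -ctv q8_0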
Benchmarks:
General model benchmarks for the model are given here: https://huggingface.co/spaces/steampunque/benchlm
Download the file below:
| Link | Type | Size (bytes) | Notes |
|---|---|---|---|
| phi-4.Q4_K_H.gguf | Q4_K_H | 8.65e9 | 0.4B smaller than Q4_K_M with much better performance |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository.