# Phi-4 Multimodal – Quantized GGUF + Omni Projector
This repository provides pre-converted GGUF weights for running microsoft/Phi-4-multimodal-instruct with a quantized language model and a multimodal projector (mmproj) on top of a specialized llama.cpp fork.
- GitHub (code + server setup): Ahmed-Shayan-Arsalan/Phi4-multimodal-Quantisized-Llama.cpp
The goal is to make Phi‑4 multimodal practical to run locally for text, vision, and audio tasks. All weights here are format conversions of the original Microsoft model and do not introduce new training data.
## Files in This Repository
- `phi4-mm-Q4_K_M.gguf`: Quantized Phi‑4 multimodal language model (LLM).
  - Quantization: Q4_K_M (4‑bit, group-wise).
  - Usage: loaded as the main `-m` model in llama.cpp.
- `phi4-mm-omni.gguf`: Multimodal projector (mmproj).
  - Contents: Vision encoder (SigLIP/NaViT-style) and audio Conformer encoder.
  - Precision: Stored in F16 / F32 to preserve multimodal quality.
  - Usage: loaded as the `--mmproj` / `-mm` model in llama.cpp.
- Optional variants: `phi4-mm-f16.gguf` (unquantized reference) and `phi4-mm-vision-q8.gguf` (alternative quantization). A download sketch for the two main files follows this list.
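If you only need the two main files, they can be fetched with the `huggingface_hub` Python package (assumed installed via `pip install huggingface_hub`). A minimal sketch:

```python
# Minimal sketch: download the quantized LLM and the mmproj file from this repo.
from huggingface_hub import hf_hub_download

repo_id = "ShayanCyan/phi4-multimodal-quantisized-gguf"
llm_path = hf_hub_download(repo_id, "phi4-mm-Q4_K_M.gguf")
mmproj_path = hf_hub_download(repo_id, "phi4-mm-omni.gguf")

print("LLM:", llm_path)
print("mmproj:", mmproj_path)
```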
## Intended Use
These GGUF files are designed for:
- Local inference with llama.cpp or compatible runtimes.
- Research and experimentation on multimodal reasoning.
- Prototyping agents that consume text, images, and audio.
Not intended for:
- Training from scratch.
- Any use violating the original Microsoft Phi-4 License.
## How These GGUFs Were Created
### 1. Download the Base Model
```bash
git lfs install
git clone https://huggingface.co/microsoft/Phi-4-multimodal-instruct phi-4-multimodal
```
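If git-lfs is inconvenient, the same checkpoint can also be pulled with the `huggingface_hub` Python package (assumed installed); a minimal sketch:

```python
# Minimal sketch: download the base HF checkpoint without git-lfs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="microsoft/Phi-4-multimodal-instruct",
    local_dir="phi-4-multimodal",  # same target directory as the git clone above
)
print(local_dir)
```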
### 2. Export the Text LLM to GGUF
```bash
python convert_hf_to_gguf.py \
  /path/to/phi-4-multimodal \
  --outtype f16 \
  --outfile phi4-mm-f16.gguf
```
### 3. Quantize the LLM
```bash
./build/bin/llama-quantize \
  phi4-mm-f16.gguf \
  phi4-mm-Q4_K_M.gguf \
  Q4_K_M
```
### 4. Export the Multimodal Projector (mmproj)
To extract the vision and audio encoders into a separate GGUF:
```bash
python convert_hf_to_gguf.py \
  /path/to/phi-4-multimodal \
  --mmproj \
  --outtype f16 \
  --outfile phi4-mm-omni.gguf
```
Technical note: a custom `MmprojModel` path in the conversion script maps tensors from `model.embed_tokens_extend.*` to the CLIP-style and Conformer layouts expected by the llama.cpp runtime.
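As an illustration only (the real mapping lives in the fork's `convert_hf_to_gguf.py`), a prefix-based remapping of this kind could be sketched as below; the exact HF and GGUF prefixes here are assumptions, not the fork's verbatim names:

```python
# Hypothetical sketch of an HF -> mmproj tensor-name remapping for Phi-4 multimodal.
# The prefixes are illustrative assumptions; the fork's MmprojModel defines the real ones.
PREFIX_MAP = {
    "model.embed_tokens_extend.image_embed.": "v.",  # vision (CLIP-style) tensors
    "model.embed_tokens_extend.audio_embed.": "a.",  # audio (Conformer) tensors
}

def remap_tensor_name(hf_name: str) -> str | None:
    """Return the mmproj tensor name, or None if the tensor stays in the LLM GGUF."""
    for hf_prefix, gguf_prefix in PREFIX_MAP.items():
        if hf_name.startswith(hf_prefix):
            return gguf_prefix + hf_name[len(hf_prefix):]
    return None

# Example:
# remap_tensor_name("model.embed_tokens_extend.image_embed.encoder.layers.0.q_proj.weight")
# -> "v.encoder.layers.0.q_proj.weight"
```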
---
## How to Use (llama.cpp)
### Server Mode (Recommended)
This exposes an OpenAI-style HTTP API supporting multimodal prompts.
```bash
./build/bin/llama-server \
  -m /path/to/phi4-mm-Q4_K_M.gguf \
  -mm /path/to/phi4-mm-omni.gguf \
  --host 0.0.0.0 \
  --port 8080
```
- Vision: send `image_url` content parts or MTMD markers (see the request sketch below).
- Audio: send audio content as described in the multimodal documentation.
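A minimal vision request sketch against that endpoint (assuming the server above is running locally and exposes the standard `/v1/chat/completions` route):

```python
# Minimal sketch: send a local image to the OpenAI-style chat endpoint as a data URI.
import base64
import requests

with open("chart.png", "rb") as f:  # any local image
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=300)
print(resp.json()["choices"][0]["message"]["content"])
```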
### CLI Mode
```bash
./build/bin/llama-cli \
  -m /path/to/phi4-mm-Q4_K_M.gguf \
  -mm /path/to/phi4-mm-omni.gguf \
  --color \
  --prompt "Explain this image in detail:"
```
---
## Example Capabilities
- Text: Instruction following, reasoning, coding, multi‑turn chat.
- Vision: Visual question answering (VQA), captioning, document/chart understanding.
- Audio: Automatic speech recognition (ASR), speech translation (e.g. EN → FR), and summarization (where the Conformer path is enabled); see the request sketch after this list.
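If the fork's server accepts audio through the OpenAI-style `input_audio` content part (upstream llama-server does for its audio-capable models; treat this as an assumption for this fork), an ASR request has the same shape as the vision sketch above, with the image part replaced by an audio part:

```python
# Replaces the image_url part in the vision sketch above; the input_audio format
# is an assumption based on upstream llama-server's OpenAI-style convention.
import base64

with open("speech.wav", "rb") as f:  # any short WAV clip
    audio_b64 = base64.b64encode(f.read()).decode()

audio_part = {
    "type": "input_audio",
    "input_audio": {"data": audio_b64, "format": "wav"},
}
```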
## Limitations & Risks
- Hallucinations: May misinterpret content or hallucinate facts.
- Verification: Not suitable for medical, legal, or safety-critical decisions without human verification.
- Compliance: You must comply with the original Microsoft license.
## Acknowledgements
- Base model: microsoft/Phi-4-multimodal-instruct
- Serving stack: llama.cpp and its contributors.
- Special thanks to the Microsoft Phi-4 team for the underlying pretraining.