---
license: other
license_name: phi4-model-license
license_link: https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/LICENSE
language:
  - en
  - ur
  - de
  - es
  - tr
  - fr
  - it
base_model:
  - microsoft/Phi-4-multimodal-instruct
tags:
  - phi
  - phi4-multimodal
  - quantized
  - visual-question-answering
  - speech-translation
  - speech-summarization
  - audio
  - vision
  - gguf
library_name: other
pipeline_tag: image-to-text
---

# Phi-4 Multimodal – Quantized GGUF + Omni Projector

This repository provides pre-converted GGUF weights for running microsoft/Phi-4-multimodal-instruct with a quantized language model and a multimodal projector (mmproj) on top of a specialized llama.cpp fork.

The goal is to make Phi‑4 multimodal practical to run locally for text, vision, and audio tasks. All weights here are format conversions of the original Microsoft model and do not introduce new training data.


## Files in This Repository

- `phi4-mm-Q4_K_M.gguf`: Quantized Phi‑4 multimodal language model (LLM).
  - Quantization: Q4_K_M (4‑bit, group-wise K-quant).
  - Usage: pass as the main `-m` model in llama.cpp.
- `phi4-mm-omni.gguf`: Multimodal projector (mmproj).
  - Contents: vision encoder (SigLIP/NaViT-style) and Conformer audio encoder.
  - Precision: stored in F16/F32 to preserve multimodal quality.
  - Usage: pass via `--mmproj` in llama.cpp.
- Optional variants: `phi4-mm-f16.gguf` (unquantized reference), `phi4-mm-vision-q8.gguf` (alternative quantization).

## Intended Use

These GGUF files are designed for:

- Local inference with llama.cpp or compatible runtimes.
- Research and experimentation on multimodal reasoning.
- Prototyping agents that consume text, images, and audio.

Not intended for:

- Medical, legal, or other safety-critical decision-making without human verification.
- Any use that violates the terms of the original Microsoft license.

## How These GGUFs Were Created

### 1. Download the Base Model

```bash
git lfs install
git clone https://huggingface.co/microsoft/Phi-4-multimodal-instruct phi-4-multimodal
```


### 2. Export the Text LLM to GGUF

```bash
python convert_hf_to_gguf.py \
  /path/to/phi-4-multimodal \
  --outtype f16 \
  --outfile phi4-mm-f16.gguf
```

### 3. Quantize the LLM

```bash
./build/bin/llama-quantize \
  phi4-mm-f16.gguf \
  phi4-mm-Q4_K_M.gguf \
  Q4_K_M
```
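Before loading the result, you can sanity-check it: every GGUF file begins with the ASCII magic `GGUF` followed by a little-endian uint32 format version. A minimal check (the filename is just an example):

```python
import struct

def gguf_version(path: str) -> int:
    """Return the GGUF format version, or raise if the magic bytes are wrong."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file (magic={magic!r})")
        # A little-endian uint32 version number follows the 4-byte magic.
        (version,) = struct.unpack("<I", f.read(4))
    return version
```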

### 4. Export the Multimodal Projector (mmproj)

To extract the vision and audio encoders into a separate GGUF:

```bash
python convert_hf_to_gguf.py \
  /path/to/phi-4-multimodal \
  --mmproj \
  --outtype f16 \
  --outfile phi4-mm-omni.gguf
```

**Technical note:** A custom `MmprojModel` path in the conversion script maps tensors from `model.embed_tokens_extend.*` to the CLIP-style and Conformer layouts expected by the llama.cpp runtime.
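Conceptually, this remapping is a prefix translation table. The sketch below is illustrative only: the `model.embed_tokens_extend.*` source prefixes come from the HF checkpoint, but the `v.`/`a.` target prefixes and the helper itself are hypothetical stand-ins, not the fork's actual code:

```python
# Hypothetical sketch of the tensor-name remapping described above.
# "v." (vision tower) and "a." (audio tower) are illustrative targets,
# not the exact prefixes the llama.cpp fork expects.
PREFIX_MAP = {
    "model.embed_tokens_extend.image_embed.": "v.",
    "model.embed_tokens_extend.audio_embed.": "a.",
}

def map_tensor_name(hf_name: str) -> str:
    """Translate an HF checkpoint tensor name to its mmproj-side name."""
    for src, dst in PREFIX_MAP.items():
        if hf_name.startswith(src):
            return dst + hf_name[len(src):]
    return hf_name  # text-model tensors pass through unchanged
```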

---

## How to Use (llama.cpp)

### Server Mode (Recommended)
This exposes an OpenAI-style HTTP API supporting multimodal prompts.

```bash
./build/bin/llama-server \
  -m /path/to/phi4-mm-Q4_K_M.gguf \
  --mmproj /path/to/phi4-mm-omni.gguf \
  --host 0.0.0.0 \
  --port 8080
```

- Vision: send `image_url` parts or MTMD media markers in the prompt.
- Audio: send audio content as described in the llama.cpp multimodal documentation.
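As a sketch, a vision request against the server's OpenAI-style `/v1/chat/completions` endpoint can be assembled like this (the model name and endpoint are assumptions; adjust them to your deployment):

```python
import base64

def build_vision_request(image_path: str, question: str, model: str = "phi4-mm") -> dict:
    """Assemble an OpenAI-style chat payload with an inline base64 image.
    The model name is illustrative; adjust it to match your server config."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# POST the payload to http://localhost:8080/v1/chat/completions, e.g.:
# requests.post(url, json=build_vision_request("photo.png", "What is shown here?"))
```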

### CLI Mode

```bash
./build/bin/llama-cli \
  -m /path/to/phi4-mm-Q4_K_M.gguf \
  --mmproj /path/to/phi4-mm-omni.gguf \
  --color \
  --prompt "Explain this image in detail:"
```

---

## Example Capabilities
- Text: Instruction following, reasoning, coding, multi‑turn chat.
- Vision: Visual question answering (VQA), captioning, document/chart understanding.
- Audio: Automatic speech recognition (ASR), translation (EN → FR), and summarization (where Conformer path is enabled).

## Limitations & Risks
- Hallucinations: May misinterpret content or hallucinate facts.
- Verification: Not suitable for medical, legal, or safety-critical decisions without human verification.
- Compliance: You must comply with the original Microsoft license.

## Acknowledgements
- Base model: microsoft/Phi-4-multimodal-instruct
- Serving stack: llama.cpp and its contributors.
- Special thanks to the Microsoft Phi-4 team for the underlying pretraining.