# phi-4-gguf
phi-4-gguf is a GGUF Q4_K_M quantized version of Microsoft Phi-4, providing fast, small-footprint inference optimized for AI PCs.
## Model Description
- Developed by: Microsoft
- Quantized by: bartowski
- Model type: phi4
- Parameters: 14.7 billion
- Model Parent: microsoft/phi-4
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: Chat, general-purpose LLM
- Quantization: int4
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="llmware/phi-4-gguf",
    filename="phi-4-Q4_1.gguf",
)
```
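The model card does not define an input example for this task, so the following is a minimal sketch of running chat inference once the model is loaded, assuming the standard llama-cpp-python `create_chat_completion` API; the prompt text is purely illustrative.

```python
# Illustrative chat call (example prompt is an assumption, not part of the model card)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=256,
)

# llama-cpp-python returns an OpenAI-style completion dict
print(response["choices"][0]["message"]["content"])
```

The response follows the OpenAI-style schema used by llama-cpp-python, so the generated text is available under `choices[0]["message"]["content"]`.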