Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Note: the filename below is an example glob pattern, not a confirmed file in
# this repo; replace it with the quantization you want to download.
llm = Llama.from_pretrained(
	repo_id="lm-kit/phi-4-mini-3.8b-instruct-gguf",
	filename="*Q4_K_M.gguf",
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Explain quantization in one sentence."}
	]
)
print(response["choices"][0]["message"]["content"])

Model Summary

This repository hosts quantized versions of the Phi-4-mini-instruct model.

Format: GGUF
Converter: llama.cpp 06c2b1561d8b882bc018554591f8c35eb04ad30e
Quantizer: LM-Kit.NET 2025.3.1

For more detailed information on the base model, please visit the base model's page.

Downloads last month: 389
Model size: 4B params
Architecture: phi3
Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
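Since the repository hosts several quantization levels, you need to pick one file when downloading. A minimal sketch of that selection step is below; the `pick_quant` helper and the example filenames are hypothetical (not part of any library), and in practice the real file list can be fetched with `huggingface_hub.list_repo_files(repo_id)`.

```python
# Sketch: select a GGUF file by quantization tag from a list of repo files.
# pick_quant is a hypothetical helper written for this example.

def pick_quant(files, tag):
    """Return the first .gguf filename containing the quant tag (e.g. 'Q4'), else None."""
    matches = [f for f in files if f.endswith(".gguf") and tag.lower() in f.lower()]
    return matches[0] if matches else None

# Hypothetical filenames for illustration only; query the repo for real ones.
files = [
    "model-Q2_K.gguf",
    "model-Q4_K_M.gguf",
    "model-Q8_0.gguf",
    "model-F16.gguf",
]
print(pick_quant(files, "Q4"))  # model-Q4_K_M.gguf
```

The chosen filename can then be passed as the `filename` argument of `Llama.from_pretrained` shown above. Smaller quantizations (2-bit, 3-bit) trade accuracy for memory; 8-bit and 16-bit are closer to the original weights but larger on disk.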
