How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Load a quantized model file from this repo.
llm = Llama.from_pretrained(
	repo_id="AI-Engine/MiniCPM-V-2_6-GGUF",
	filename="",  # set to one of the .gguf files in this repo (see the quant list below)
)

# Ask the model to describe an image (see the note on multimodal handlers below).
response = llm.create_chat_completion(
	messages=[
		{
			"role": "user",
			"content": [
				{
					"type": "text",
					"text": "Describe this image in one sentence."
				},
				{
					"type": "image_url",
					"image_url": {
						"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
					}
				}
			]
		}
	]
)
print(response["choices"][0]["message"]["content"])
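
Note: image inputs generally require loading the model with a multimodal chat handler plus the repo's mmproj (vision projector) file; a plain text-only load will not process the image_url content. A hedged sketch, assuming your llama-cpp-python version ships MiniCPMv26ChatHandler and that the mmproj filename below matches the one actually listed in this repo (both are assumptions to verify):

from huggingface_hub import hf_hub_download
from llama_cpp import Llama
from llama_cpp.llama_chat_format import MiniCPMv26ChatHandler

# Hypothetical mmproj filename; check the repo's file list for the real name.
mmproj_path = hf_hub_download(
	repo_id="AI-Engine/MiniCPM-V-2_6-GGUF",
	filename="mmproj-model-f16.gguf",
)

llm = Llama.from_pretrained(
	repo_id="AI-Engine/MiniCPM-V-2_6-GGUF",
	filename="",  # pick one of the quantized .gguf files
	chat_handler=MiniCPMv26ChatHandler(clip_model_path=mmproj_path),
	n_ctx=4096,  # extra context to leave room for image tokens
)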

GGUF llama.cpp quantized version of: MiniCPM-V 2.6 (openbmb/MiniCPM-V-2_6)

Recommended Prompt Format (ChatML)

<|im_start|>system 
Provide some context and/or instructions to the model.<|im_end|> 
<|im_start|>user 
The user’s message goes here<|im_end|> 
<|im_start|>assistant 
AI message goes here<|im_end|> 
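
The same template can be applied by hand with the raw completion API instead of create_chat_completion. A minimal sketch reusing the llm object from above (prompt text and sampling parameters are illustrative):

prompt = (
	"<|im_start|>system\n"
	"You are a helpful assistant.<|im_end|>\n"
	"<|im_start|>user\n"
	"What is the capital of France?<|im_end|>\n"
	"<|im_start|>assistant\n"
)

# Stop on the ChatML end-of-turn marker so generation ends cleanly.
output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])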

Quant version: llama.cpp release b3933, quantized with an importance matrix (imatrix)

GGUF
Model size: 8B params
Architecture: qwen2

Available quantizations: 2-bit, 5-bit, 8-bit, 32-bit
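
The exact .gguf filename for a given bit width (needed for the filename argument above) can be found by listing the repo's files with huggingface_hub; a small sketch:

from huggingface_hub import list_repo_files

# List all files in the repo and keep only the GGUF quants.
files = list_repo_files("AI-Engine/MiniCPM-V-2_6-GGUF")
print([f for f in files if f.endswith(".gguf")])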

