Instructions to use openbmb/MiniCPM-V-2_6 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openbmb/MiniCPM-V-2_6 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="openbmb/MiniCPM-V-2_6", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True, dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use openbmb/MiniCPM-V-2_6 with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "openbmb/MiniCPM-V-2_6"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "openbmb/MiniCPM-V-2_6",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
Use Docker
docker model run hf.co/openbmb/MiniCPM-V-2_6
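The OpenAI-compatible endpoint above can also be called from Python. A minimal sketch, assuming the vLLM server from the pip instructions is running on localhost:8000 and the openai client is installed (pip install openai); the prompt and image URL mirror the curl example:

from openai import OpenAI

# vLLM does not check the API key by default; "EMPTY" is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openbmb/MiniCPM-V-2_6",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)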
- SGLang
How to use openbmb/MiniCPM-V-2_6 with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "openbmb/MiniCPM-V-2_6" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "openbmb/MiniCPM-V-2_6",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "openbmb/MiniCPM-V-2_6" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "openbmb/MiniCPM-V-2_6",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
- Docker Model Runner
How to use openbmb/MiniCPM-V-2_6 with Docker Model Runner:
docker model run hf.co/openbmb/MiniCPM-V-2_6
Batch inference
Does MiniCPM support batch inference using the transformers library? Inference with a single image and question is very slow; I want to run batch inference, where each batch is N images with N corresponding questions.
Batch inference is supported. The msgs parameter can itself be a list of msgs lists, like:
res = model.chat(
image=None,
msgs=[msgs, msgs],
tokenizer=tokenizer,
sampling=True,
stream=False
)
where msgs is the same as in the example.
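For reference, a single msgs in that example looks something like the sketch below (the image path and question text are placeholders):

from PIL import Image

# One conversation: the image is embedded in the message content
# alongside the question text (placeholder file name).
image = Image.open("example.jpg")
question = "What is in the image?"
msgs = [{"role": "user", "content": [image, question]}]

# Batch inference then passes a list of such msgs lists:
# model.chat(image=None, msgs=[msgs, msgs], tokenizer=tokenizer)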
Inputting a list of None is too awkward, so we updated the code and disabled the image parameter in batch inference. Please check the update. @epishchik
Now I'm really confused. I read your documentation more carefully, and as I understand it, you always use image=None and include the images in the msgs parameter. But I had already experimented (with the version downloaded before this whole discussion) with the code below, and it works: I don't get errors, and my VRAM usage increases when I increase the number of images in the list.
from PIL import Image
images = [
Image.open("img1.jpg"),
Image.open("img2.jpg")
]
questions = [
[{"role": "user", "content": "Question for img1.jpg"}],
[{"role": "user", "content": "Question for img2.jpg"}]
]
res = model.chat(
image=images,
msgs=questions,
tokenizer=tokenizer
)
print(res)
Will I get incorrect output using this code, and is it really batch inference? It seems to work in batch mode, but I'm confused because your documentation and the code in modelling_minicpmv.py say that I should pass image=None and include the images in the msgs parameter to use batch inference.
Actually, we want you to construct the messages more carefully when using batch inference. When you pass image to the model, it prepends that image to the beginning of the messages.
So you can run the following instead.
questions = [
[{"role": "user", "content": [Image.open("img1.jpg"), "Question for img1.jpg"]}],
[{"role": "user", "content": [Image.open("img1.jpg"), "Question for img2.jpg"]}]
]
Since a user may send a list of images, we cannot tell whether it is batched input or multi-image input. So we disabled this option and recommend putting the images in msgs instead. Sorry for the inconvenience.
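Putting the thread together, a minimal end-to-end sketch of the recommended batched call; the loading code mirrors the Transformers section above, and the file names and questions are placeholders:

from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load model and tokenizer with remote code enabled (adjust dtype and
# device placement for your hardware; this assumes a CUDA GPU).
model = AutoModel.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True, dtype="auto").eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True)

# One msgs list per batch element; each image lives inside its message content.
questions = [
    [{"role": "user", "content": [Image.open("img1.jpg"), "Question for img1.jpg"]}],
    [{"role": "user", "content": [Image.open("img2.jpg"), "Question for img2.jpg"]}],
]

res = model.chat(image=None, msgs=questions, tokenizer=tokenizer)
print(res)  # one answer per batch element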
Got it, thanks for the explanation!