When will there be better support for vLLM?

#6
by Xiakj - opened

I encountered some compatibility issues while deploying with vLLM.

I have the same problem: the model loads, but it seems like I cannot run inference. When I send a text-only prompt I get a response, but when I add an image as base64, it just hangs indefinitely and vLLM doesn't do anything.

I tried the SDK as well, but the same thing happens for some reason. I have a feeling something is missing...
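For anyone comparing notes on the request format: this is a sketch of the payload shape vLLM's OpenAI-compatible endpoint expects for a base64 image (the model name and image bytes here are placeholders, not from this thread):

```python
import base64

# Sketch: build an OpenAI-style chat payload carrying a base64 image as a
# data URL. Model name and image bytes are placeholders.
def build_image_payload(image_bytes: bytes, prompt: str, model: str = "my-model") -> dict:
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    # vLLM's OpenAI-compatible server accepts images as data URLs
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
    }

# Placeholder bytes; in practice read the PNG from disk.
payload = build_image_payload(b"\x89PNG-placeholder", "Transcribe this page.")
```

If a text-only prompt works but this shape hangs, the problem is more likely on the server/model side than in the request.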

You should use the source code or a nightly build and update transformers according to the README.
Additionally, base64 should be supported, just like with regular VLMs.

With the vllm/vllm-openai:nightly Docker image from this morning (pushed a few hours ago) it works.

FROM vllm/vllm-openai:nightly
RUN apt update && apt install git -y
RUN pip install --upgrade git+https://github.com/huggingface/transformers.git

You can use this Docker image for it.
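Building and serving from that Dockerfile might look like this (a sketch; the image tag and model ID are placeholders, and the GPU/port flags depend on your setup):

```shell
# Sketch: build the patched image and serve a model with it.
# "my-vllm:patched" and <model-id> are placeholders.
docker build -t my-vllm:patched .
docker run --gpus all -p 8000:8000 my-vllm:patched \
    --model <model-id>
```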

Then I ran into what is probably a max image size limit: The decoder prompt contains a(n) image item with length 4897, which exceeds the pre-allocated encoder cache size 4800. Please reduce the input size or increase the encoder cache size by setting --limit-mm-per-prompt at startup.
I fixed this by making the image a little smaller.

Now I get proper transcriptions.

Posting this for anyone who hits the same problem.

EDIT:
Just tried with the SDK and I get the same encoder cache size error (without the layout component in the SDK).
If anyone knows how to make the encoder cache bigger per image, it would be appreciated.
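I haven't found a confirmed fix, but the error message itself points at `--limit-mm-per-prompt`, so a serve command might look like this (a sketch; the model ID is a placeholder and whether this actually grows the encoder cache is unverified):

```shell
# Sketch: start the server with the flag the error message suggests.
# <model-id> is a placeholder; flag value syntax may differ across vLLM versions.
vllm serve <model-id> \
    --limit-mm-per-prompt image=1
```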


Same issue here with the latest vLLM nightly: the encoder cache is automatically set to 4000 no matter what vLLM parameter I try to change, and that's evidently not enough for most high-DPI images.
