Any plan to support vllm-omni?
#3 opened by ziozzang
https://docs.vllm.ai/projects/vllm-omni/en/latest/
vLLM has just launched its omni serving platform. Are there any plans to support HyperCLOVA X on this de-facto standard?
Hi @ziozzang ,
We are planning to support our models using vllm-omni or vLLM.
However, we’re not yet sure whether all of our multimodal components (vision/audio encoders and decoders) can be supported in vllm-omni.
At a minimum, it seems that the vision modules will be supported.
We are currently working on supporting the omni model in the vllm-omni repo: https://github.com/vllm-project/vllm-omni/pull/585
@lssj14 By the way, can you look into this problem as well? https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Think-32B/discussions/3