Deployment and inference support
#1 opened by guiguzongheng
Does this model support deployment with vLLM for inference?
Can you provide the complete command for deploying it with vLLM?
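Not an authoritative answer, but if the model's architecture is supported by vLLM, a typical way to stand up an OpenAI-compatible server looks like the sketch below. The model ID `your-org/your-model` is a placeholder for this model's actual Hub ID, and the port/parallelism flags are illustrative defaults, not requirements:

```shell
# Install vLLM (GPU environment assumed)
pip install vllm

# Serve the model with vLLM's OpenAI-compatible API server.
# --port and --tensor-parallel-size are optional; adjust to your hardware.
vllm serve your-org/your-model \
    --port 8000 \
    --tensor-parallel-size 1
```

Once the server is up, any OpenAI-compatible client can send requests to `http://localhost:8000/v1`. Whether this model actually loads depends on its architecture being in vLLM's supported-model list.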
Can you also provide an example inference code snippet?
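For completeness, here is a minimal offline-inference sketch using vLLM's Python API, again assuming the model is vLLM-compatible; `your-org/your-model` is a hypothetical placeholder for the real Hub ID:

```python
from vllm import LLM, SamplingParams

# Load the model (placeholder ID -- replace with the actual Hub repo).
llm = LLM(model="your-org/your-model")

# Sampling settings are illustrative, not recommended values.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = ["Explain what vLLM is in one sentence."]
outputs = llm.generate(prompts, params)

for out in outputs:
    print(out.outputs[0].text)
```

If the model is a chat/instruct model, you would normally apply its chat template to the prompt first (e.g. via `tokenizer.apply_chat_template`) rather than passing raw text.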