Where is the FastDeploy recipe for PaddleOCR-VL-1.5 inference?
#15
by jirachixy
I’m trying to run PaddleOCR-VL-1.5 with the FastDeploy backend. The model card includes inference performance numbers for FastDeploy, vLLM, and SGLang, with FastDeploy performing best. Could you share the recommended FastDeploy inference recipe for PaddleOCR-VL-1.5, especially for offline deployment?
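
For context, here is roughly what I had in mind, adapted from FastDeploy's offline-inference examples for other models. The model identifier, multimodal message schema, sampling parameters, and output field layout are all my guesses rather than anything from the model card, so please correct whatever is wrong:

```python
# Hypothetical sketch, not an official recipe: offline inference through
# FastDeploy's LLM API. Names and arguments follow FastDeploy's published
# offline-inference examples for other models and may differ for PaddleOCR-VL-1.5.
from fastdeploy import LLM, SamplingParams

# Assumed model identifier; substitute the actual PaddleOCR-VL-1.5 checkpoint path.
llm = LLM(model="PaddlePaddle/PaddleOCR-VL-1.5", max_model_len=16384)

sampling_params = SamplingParams(temperature=0.1, max_tokens=4096)

# Chat-style request pairing a page image with an OCR instruction; the exact
# multimodal content schema expected by this model is an assumption on my part.
messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "path/or/url/to/page.png"}},
        {"type": "text", "text": "OCR:"},
    ],
}]

outputs = llm.chat(messages, sampling_params)
for output in outputs:
    # Output field layout copied from FastDeploy examples; may need adjusting.
    print(output.outputs.text)
```

Is this roughly the intended recipe, or is there a dedicated PaddleOCR-VL-1.5 pipeline (and recommended flags) that reproduces the numbers on the model card?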