jopillar committed on
Commit 6ed6d9e · verified · 1 parent: 650f07a

Update README.md


The PaddleOCR documentation is out of date: starting the server with the documented command loads the PaddleOCR-VL model rather than the 1.5 version. In addition, when PaddleOCR uses vLLM as the backend, the document-structure-parsing feature requests the model `PaddleOCR-VL-1.5`, but vLLM by default registers the model name with its organization prefix. That is, the model vLLM actually serves is named `PaddlePaddle/PaddleOCR-VL-1.5`, not `PaddleOCR-VL-1.5`, so vLLM returns the following error:

> Error with model error=ErrorInfo(message='The model `PaddleOCR-VL-1.5` does not exist.', type='NotFoundError', param='model', code=404)
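The mismatch can be sketched in a few lines (a toy illustration of the lookup, not vLLM's actual code; the names mirror those in the commit message):

```python
# By default vLLM registers the model under its full repo id,
# while PaddleOCR's doc parser requests the bare model name.
served_models = {"PaddlePaddle/PaddleOCR-VL-1.5"}  # default served name
requested = "PaddleOCR-VL-1.5"                     # name PaddleOCR requests

assert requested not in served_models  # lookup fails -> 404 NotFoundError

# Passing --served-model-name PaddleOCR-VL-1.5 registers the bare
# name instead, so the same request now resolves:
served_models = {"PaddleOCR-VL-1.5"}
assert requested in served_models
```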

Files changed (1): README.md (+18 −1)
````diff
@@ -132,7 +132,24 @@ for res in output:
 - Method 2: vLLM method
 
 [vLLM: PaddleOCR-VL Usage Guide](https://docs.vllm.ai/projects/recipes/en/latest/PaddlePaddle/PaddleOCR-VL.html)
-
+
+Please change the startup command from
+```bash
+vllm serve PaddlePaddle/PaddleOCR-VL \
+    --trust-remote-code \
+    --max-num-batched-tokens 16384 \
+    --no-enable-prefix-caching \
+    --mm-processor-cache-gb 0
+```
+to
+```
+vllm serve PaddlePaddle/PaddleOCR-VL-1.5 \
+    --trust-remote-code \
+    --max-num-batched-tokens 16384 \
+    --no-enable-prefix-caching \
+    --mm-processor-cache-gb 0 \
+    --served-model-name PaddleOCR-VL-1.5
+```
 2. Call the PaddleOCR CLI or Python API:
 ```bash
 paddleocr doc_parser \
````
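To verify which names a running server accepts, vLLM's OpenAI-compatible `GET /v1/models` endpoint lists them. A minimal sketch of reading such a response (the JSON body here is an inlined example of the OpenAI-style model list, not output captured from a real server):

```python
import json

# Example /v1/models response body; the id reflects the
# --served-model-name flag used at startup (assumed payload).
body = '{"object": "list", "data": [{"id": "PaddleOCR-VL-1.5", "object": "model"}]}'

# Collect the model ids the server would accept in requests.
served = [model["id"] for model in json.loads(body)["data"]]
print(served)  # -> ['PaddleOCR-VL-1.5']
```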