[Doc] Update vllm minimum version

README.md CHANGED

````diff
@@ -128,7 +128,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 For deployment, we recommend using vLLM.
 * **Install vLLM**: You can install vLLM by running the following command.
 ```bash
-pip install "vllm>=0.
+pip install "vllm>=0.5.5"
 ```
 * **Model Deployment**: Use vLLM to deploy your model. For example, you can use the command to set up a server similar to openAI:
 ```bash
````
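The pin added by this commit, `"vllm>=0.5.5"`, is a PEP 440 version specifier. As a minimal sketch of how pip evaluates it, the snippet below uses the `packaging` library (the library pip relies on for version matching); the `packaging` import and the sample version numbers are illustrative assumptions, not part of the diff.

```python
# Sketch: evaluate the ">=0.5.5" specifier from the diff against
# candidate versions, the way pip resolves `pip install "vllm>=0.5.5"`.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=0.5.5")

print(Version("0.5.5") in spec)  # True  - exact minimum satisfies ">="
print(Version("0.6.1") in spec)  # True  - any newer release satisfies it
print(Version("0.5.4") in spec)  # False - older releases are excluded
```

Quoting the requirement in the shell (`"vllm>=0.5.5"`) matters: without quotes, `>=` would be interpreted by the shell as a redirection.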