Update README.md
README.md CHANGED
````diff
@@ -121,7 +121,7 @@ print(tokenizer.decode(response, skip_special_tokens=True))
 
 This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
 
-```
+```python
 from vllm import LLM, SamplingParams
 from transformers import AutoTokenizer
 
````
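The hunk only reaches the first two import lines of the README's vLLM example before it is cut off. As a rough sketch of how such a deployment snippet typically continues (the model id, prompt, and sampling settings below are placeholders chosen for illustration, not taken from this README; running it requires a GPU and the model weights):

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# Placeholder repository id -- the diff does not show the real one.
model_id = "org/model-name"

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id)

# Illustrative sampling settings; the README's actual values are not shown.
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# Build a chat-formatted prompt string, then generate with vLLM.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

The `LLM`/`SamplingParams` offline-inference API shown here is vLLM's documented entry point; model cards commonly pair it with the tokenizer's `apply_chat_template` so the prompt matches the chat format used at training time.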