```

## How to use advanced vLLM options

For maximum performance, we highly recommend the following options:

- `--compilation_config '{"full_cuda_graph": true}'`: activates CUDA [full graph capture](https://docs.vllm.ai/en/stable/design/cuda_graphs/#cudagraphmodes)
- `--rope-scaling '{"rope_type":"yarn","factor":2.0,"original_max_position_embeddings":65536}'`: applies [YaRN](https://arxiv.org/abs/2309.00071) scaling to support a 128K context length
- `--enable-auto-tool-choice --tool-call-parser hermes`: enables [tool calling](https://docs.vllm.ai/en/latest/features/tool_calling/)
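Putting the flags together, a sketch of a full launch command might look like the following. The model name is a placeholder, not something this README specifies; substitute the checkpoint you are serving.

```shell
# Illustrative only: combine the advanced options above in one launch.
# "your-org/your-model" is a placeholder -- replace it with your checkpoint.
vllm serve your-org/your-model \
  --compilation_config '{"full_cuda_graph": true}' \
  --rope-scaling '{"rope_type":"yarn","factor":2.0,"original_max_position_embeddings":65536}' \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```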
|
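As a quick sanity check on the rope-scaling numbers above: YaRN extends the context window by multiplying the model's original maximum position embeddings by the scaling factor, which is where the 128K figure comes from.

```python
import json

# The exact JSON string passed to --rope-scaling above.
rope_scaling = json.loads(
    '{"rope_type":"yarn","factor":2.0,"original_max_position_embeddings":65536}'
)

# Extended context = factor * original window.
extended = int(rope_scaling["factor"] * rope_scaling["original_max_position_embeddings"])
print(extended)  # 131072 tokens, i.e. the 128K context length mentioned above
```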