To speed up your inference, you can use the vLLM engine from [our repository](ht

Make sure to switch to the `v0.9.2rc2_hyperclovax_vision_seed` branch.

**Launch API server**:
- https://oss.navercorp.com/HYPERSCALE-AI-VISION/vllm/blob/main/README.md
**Request Example**:
- https://github.com/vllm-project/vllm/pull/20931#issue-3229161410
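
For a concrete sense of the request format, here is a minimal sketch of an OpenAI-compatible chat completion request against a locally running vLLM API server. The model ID, server address, port, and image URL below are illustrative placeholders, not values confirmed by this README — check the linked request example for the exact payload.

```python
import json
import urllib.request

# Hypothetical request payload for a vision-language chat completion.
# Model ID and image URL are placeholders; adapt them to your deployment.
payload = {
    "model": "naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
    "max_tokens": 128,
}

# Build the HTTP request against the server's OpenAI-compatible endpoint.
request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send it once the API server from the previous step is running:
# with urllib.request.urlopen(request) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```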
**Offline Inference Examples**:
- https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/vision_language.py
- https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/vision_language_multi_image.py
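
The linked scripts can be condensed into a short sketch using vLLM's Python API. This is a hypothetical outline, not the scripts themselves: the model ID, prompt template, and image path are assumptions, so consult the linked examples for the model's actual chat template and options.

```python
from pathlib import Path


def describe_image(model_id: str, image_path: str) -> str:
    """Hypothetical offline-inference sketch modeled on the linked
    vision_language.py example. Requires vLLM from the branch above and a
    GPU; the prompt template here is a placeholder, not the model's real one.
    """
    # Deferred imports: both are heavyweight, optional dependencies.
    from vllm import LLM, SamplingParams
    from PIL import Image

    llm = LLM(model=model_id, trust_remote_code=True)
    sampling = SamplingParams(temperature=0.0, max_tokens=128)
    # vLLM accepts a dict prompt carrying the image as multi-modal data.
    outputs = llm.generate(
        {
            "prompt": "USER: <image>\nDescribe this image.\nASSISTANT:",
            "multi_modal_data": {"image": Image.open(image_path)},
        },
        sampling,
    )
    return outputs[0].outputs[0].text


if __name__ == "__main__" and Path("sample.jpg").exists():
    print(describe_image(
        "naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
        "sample.jpg",
    ))
```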