Update README.md
README.md (CHANGED)
````diff
@@ -222,7 +222,7 @@ Tell me the weather in Seoul<|im_end|>
 
 ```
 
-- Note that the prompt ends with `assistant/think\n`(think + `\n
+- Note that the prompt ends with `assistant/think\n` (think + `\n`).
 - Generation continues until either the <|stop|> or <|endofturn|> token appears immediately after `<|im_end|>`.
 
 To have the assistant respond in non-reasoning mode (i.e., answer directly), you can input the following prompt.
````
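Taken together, the two bullets in this hunk specify a small contract: the prompt must end with `assistant/think\n`, and decoding stops once `<|stop|>` or `<|endofturn|>` appears immediately after `<|im_end|>`. As a rough illustration (not from the README), a caller might wrap that contract as below; the `<|im_start|>` turn prefix and both helper names are assumptions.

```python
# A hedged sketch, not README code: the ChatML-style "<|im_start|>" prefix
# and these helper names are assumptions; only the token strings in the
# bullets above are taken from the diff.

def build_think_prompt(user_message: str) -> str:
    # Per the bullet above, the reasoning-mode prompt ends with
    # "assistant/think" plus a trailing newline.
    return (
        "<|im_start|>user\n"             # assumed turn prefix
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant/think\n"  # required terminal header
    )

def trim_at_stop(completion: str) -> str:
    # Generation ends when <|stop|> or <|endofturn|> appears immediately
    # after <|im_end|>; keep everything up to and including <|im_end|>.
    for marker in ("<|im_end|><|stop|>", "<|im_end|><|endofturn|>"):
        idx = completion.find(marker)
        if idx != -1:
            return completion[: idx + len("<|im_end|>")]
    return completion

if __name__ == "__main__":
    print(build_think_prompt("Tell me the weather in Seoul"))
```

The non-reasoning ("answer directly") prompt mentioned in the last context line sits outside this hunk, so the sketch deliberately covers only the reasoning path.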
````diff
@@ -555,10 +555,10 @@ The HyperCLOVA X SEED Think model is built on a custom LLM architecture based on
 git clone https://github.com/NAVER-Cloud-HyperCLOVA-X/hcx-vllm-plugin
 ```
 
-2. vLLM Plugin Build & Installation: While keeping the
+2. vLLM Plugin Build & Installation: Keeping the NAVER-Cloud-HyperCLOVA-X/hcx-vllm-plugin checkout from step 1 in place, run the command below from inside that directory.
 
 ```bash
-pip install
+pip install -e .
 ```
 
 After downloading the model checkpoint to a local path (`/path/to/hyperclova-x-seed-think-14b`), you can perform text inference by running the following commands on a GPU environment with A100 or higher.
````
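With the plugin installed, text inference can be exercised through vLLM's offline `LLM` API. The following is a minimal sketch, assuming the plugin registers the custom HyperCLOVA X architecture with vLLM once installed; the sampling values are placeholders, and the README's actual commands may differ.

```python
# A minimal offline-inference sketch, assuming hcx-vllm-plugin registers the
# custom architecture with vLLM once installed; sampling values are
# placeholders and not taken from the README.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/hyperclova-x-seed-think-14b",  # local checkpoint path from the step above
    trust_remote_code=True,  # custom model code shipped with the checkpoint (assumption)
)

params = SamplingParams(temperature=0.7, max_tokens=512)
for output in llm.generate(["Tell me the weather in Seoul"], params):
    print(output.outputs[0].text)
```

An editable install (`pip install -e .`) imports the plugin sources from the cloned directory rather than copying them, which is why step 2 insists on keeping the checkout from step 1 in place.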
````diff
@@ -601,4 +601,3 @@ The model is licensed under [HyperCLOVA X SEED Model License Agreement](./LICENS
 ## Questions
 
 For any other questions, please feel free to contact us at [dl_hcxtuneup@navercorp.com](mailto:dl_hcxtuneup@navercorp.com).
-
````