Update README.md

Since the Pull Request (PR) has not yet been submitted to the vLLM community, please prepare the environment by following the steps below.

First, create a Conda environment with Python 3.10 and CUDA 12.8:
```shell
conda create -n vllm python=3.10
conda activate vllm
```

Next, install our vLLM wheel package:
```shell
pip install https://media.githubusercontent.com/media/zheyishine/vllm_whl/refs/heads/main/vllm-0.8.5.post2.dev28%2Bgd327eed71.cu128-cp310-cp310-linux_x86_64.whl --force-reinstall
```
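
Once the wheel is installed, a quick import check can confirm that the patched build is active. The expected version strings below are read off the wheel filename above, so treat them as assumptions if the wheel is ever rebuilt:

```python
# Sanity check: versions should match the wheel filename above.
import torch
import vllm

print(vllm.__version__)    # expected to contain 0.8.5.post2.dev28+gd327eed71
print(torch.version.cuda)  # expected: 12.8 (the cu128 build)
```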

Finally, install a compatible version of Transformers after vLLM is installed:
```shell
pip install transformers==4.51.1
```
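
And a matching check for the pinned Transformers release:

```python
import transformers

print(transformers.__version__)  # expected: 4.51.1
```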
#### Offline Inference
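
As a minimal sketch of offline inference with vLLM's standard Python API: the model ID `inclusionAI/Ring-Linear-2.0`, the prompt, and the sampling settings below are illustrative assumptions, not values taken from this change; substitute the actual checkpoint path or Hugging Face ID.

```python
# A minimal offline-inference sketch with vLLM's Python API.
# NOTE: the model ID below is an assumption based on the repository link;
# replace it with the real checkpoint path or Hugging Face ID.
from vllm import LLM, SamplingParams

llm = LLM(model="inclusionAI/Ring-Linear-2.0", trust_remote_code=True)
sampling = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Give me a short introduction to linear attention."], sampling)
print(outputs[0].outputs[0].text)
```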