Commit c52ae5b
Parent(s): 298b99e
Update README.md

README.md CHANGED
@@ -31,206 +31,3 @@ ______________________________________________________________________
______________________________________________________________________

## Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams. It has the following core features:

- **Efficient Inference Engine (TurboMind)**: Based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), we have implemented an efficient inference engine, TurboMind, which supports inference of LLaMA and its variant models on NVIDIA GPUs.

- **Interactive Inference Mode**: By caching the attention k/v during multi-round dialogues, the engine remembers dialogue history and avoids re-processing historical sessions.

- **Multi-GPU Model Deployment and Quantization**: We provide comprehensive model deployment and quantization support, validated at different scales.

- **Persistent Batch Inference**: Further optimization of model execution efficiency.

![PersistentBatchInference](https://github.com/open-mmlab/lmdeploy/assets/67539920/e3876167-0671-44fc-ac52-5a0f9382493e)

## Performance

**Case I**: output token throughput with fixed input and output token numbers (1, 2048)

**Case II**: request throughput with real conversation data

Test setting: LLaMA-7B, NVIDIA A100 (80G)

The output token throughput of TurboMind exceeds 2000 tokens/s, which is about 5% - 15% higher than DeepSpeed overall and outperforms huggingface transformers by up to 2.3x. The request throughput of TurboMind is 30% higher than that of vLLM.

![benchmark](https://github.com/open-mmlab/lmdeploy/assets/12756472/0fcd2ebf-cdeb-47eb-9ef3-1292e3b2fa1e)

## Quick Start

### Installation

Install lmdeploy with pip (Python 3.8+) or [from source](./docs/en/build.md):

```shell
pip install lmdeploy
```

### Deploy InternLM

#### Get InternLM model

```shell
# 1. Download InternLM model

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b /path/to/internlm-chat-7b

# If you want to clone without large files – just their pointers –
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1

# 2. Convert InternLM model to turbomind's format, which will be in "./workspace" by default
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b
```

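For reference, the pointer-only clone mentioned in the comments above would look like this (a minimal sketch reusing the same model path):

```shell
# Clone only the LFS pointer files, skipping the large weight downloads
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/internlm/internlm-chat-7b /path/to/internlm-chat-7b
```
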
#### Inference by TurboMind

```shell
python -m lmdeploy.turbomind.chat ./workspace
```

> **Note**<br />
> When inferring with FP16 precision, the InternLM-7B model requires at least 15.7 GB of GPU memory on TurboMind. <br />
> It is recommended to use NVIDIA cards such as the 3090, V100, and A100. <br />
> Disabling GPU ECC can free up 10% of memory; try `sudo nvidia-smi --ecc-config=0` and reboot the system.

> **Note**<br />
> Tensor parallelism is available for inference on multiple GPUs. Add `--tp=<num_gpu>` to the `chat` command to enable runtime TP.

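For example, runtime TP on two GPUs would look like this (a sketch based on the note above; the GPU count is illustrative):

```shell
# Run the TurboMind chat demo across 2 GPUs
python -m lmdeploy.turbomind.chat ./workspace --tp=2
```
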
#### Serving with gradio

```shell
python3 -m lmdeploy.serve.gradio.app ./workspace
```

![](https://github.com/open-mmlab/lmdeploy/assets/67539920/08d1e6f2-3767-44d5-8654-c85767cec2ab)

#### Serving with Triton Inference Server

Launch the inference server with:

```shell
bash workspace/service_docker_up.sh
```

Then you can communicate with the inference server from the command line,

```shell
python3 -m lmdeploy.serve.client {server_ip_address}:33337
```

or through the web UI,

```shell
python3 -m lmdeploy.serve.gradio.app {server_ip_address}:33337
```

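For instance, if the server runs on the local machine, the placeholder resolves to something like the following (the hostname is illustrative; the port is the one used above):

```shell
# Talk to a locally running inference server
python3 -m lmdeploy.serve.client localhost:33337
```
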
For deploying other supported models, such as LLaMA, LLaMA-2, and Vicuna, you can find the guide [here](docs/en/serving.md).

### Inference with PyTorch

For detailed instructions on running inference with PyTorch models, see [here](docs/en/pytorch.md).

#### Single GPU

```shell
python3 -m lmdeploy.pytorch.chat $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

#### Tensor Parallel with DeepSpeed

```shell
deepspeed --module --num_gpus 2 lmdeploy.pytorch.chat \
    $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

You need to install deepspeed first to use this feature.

```shell
pip install deepspeed
```

## Quantization

### Step 1. Obtain Quantization Parameters

First, run the quantization script to obtain the quantization parameters.

> After execution, the various parameters needed for quantization will be stored in `$WORK_DIR`; they will be used in the following steps.

```shell
# --calib_dataset: calibration dataset; supports c4, ptb, wikitext2, pileval
# --calib_samples: number of samples in the calibration set; reduce it if memory is insufficient
# --calib_seqlen:  length of a single piece of text; reduce it if memory is insufficient
# --work_dir:      folder storing the PyTorch-format quantization statistics and post-quantization weights
python3 -m lmdeploy.lite.apis.calibrate \
  --model $HF_MODEL \
  --calib_dataset 'c4' \
  --calib_samples 128 \
  --calib_seqlen 2048 \
  --work_dir $WORK_DIR
```

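As a concrete illustration, the two variables above could be set like this before running the script (the work directory name is only an example):

```shell
# Reuse the InternLM checkpoint downloaded in the Quick Start
export HF_MODEL=/path/to/internlm-chat-7b
# Any writable folder works; this name is illustrative
export WORK_DIR=./quant_work_dir
```
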
### Step 2. Actual Model Quantization

`LMDeploy` supports INT4 quantization of weights and INT8 quantization of the KV Cache. Run the corresponding script according to your needs.

#### Weight INT4 Quantization

LMDeploy uses the AWQ algorithm for model weight quantization.

> This step requires the `$WORK_DIR` from Step 1 as input, and the quantized weights will also be stored in that folder.

```shell
# --w_bits:       number of bits for weight quantization
# --w_sym:        whether to use symmetric quantization for weights
# --w_group_size: group size for the weight quantization statistics
# --work_dir:     directory holding the quantization parameters from Step 1
python3 -m lmdeploy.lite.apis.auto_awq \
  --w_bits 4 \
  --w_sym False \
  --w_group_size 128 \
  --work_dir $WORK_DIR
```

#### KV Cache INT8 Quantization

In fp16 mode, kv_cache INT8 quantization can be enabled so that a single card can serve more users.
First execute the quantization script; the quantization parameters are stored in the `workspace/triton_models/weights` directory generated by `deploy.py`.

```shell
# --work_dir: directory holding the quantization parameters from Step 1
# --kv_sym:   whether to use symmetric or asymmetric quantization
# --num_tp:   number of GPUs used for tensor parallelism
python3 -m lmdeploy.lite.apis.kv_qparams \
  --work_dir $WORK_DIR \
  --turbomind_dir $TURBOMIND_DIR \
  --kv_sym False \
  --num_tp 1
```

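Given the description above, `$TURBOMIND_DIR` would typically point at the weights folder that `deploy.py` generated; for example (paths are illustrative):

```shell
# Quantization parameters from Step 1 and the turbomind weights folder produced by deploy.py
export WORK_DIR=./quant_work_dir
export TURBOMIND_DIR=./workspace/triton_models/weights
```
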
Then adjust `workspace/triton_models/weights/config.ini` (a sketch of these edits follows the list):

- set `use_context_fmha` to 0, which turns it off
- set `quant_policy` to 4; this parameter defaults to 0, which means KV Cache INT8 is not enabled

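A minimal sketch of those two edits, assuming the keys appear in `config.ini` as `key = value` pairs (the exact spacing in your generated file may differ):

```shell
# Turn off context FMHA and switch on the INT8 KV Cache quantization policy
sed -i 's/^use_context_fmha.*/use_context_fmha = 0/' workspace/triton_models/weights/config.ini
sed -i 's/^quant_policy.*/quant_policy = 4/' workspace/triton_models/weights/config.ini
```
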
Here are the [quantization test results](./docs/en/quantization.md).

> **Warning**<br />
> Runtime tensor parallelism is not available for quantized models. Please set `--tp` on `deploy` to enable static TP.

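For example, static TP across two GPUs would be set at conversion time (a sketch based on the `deploy` command from the Quick Start; the GPU count and exact flag spelling are illustrative):

```shell
# Convert the model with static tensor parallelism over 2 GPUs
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b --tp 2
```
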
## Contributing

We appreciate all contributions to LMDeploy. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.

## Acknowledgement

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)
- [llm-awq](https://github.com/mit-han-lab/llm-awq)

## License

This project is released under the [Apache 2.0 license](LICENSE).