---
license: mit
---
# INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. With its high-performance CUDA kernels, inference with the 4-bit quantized model runs up to 2.4x faster than FP16.

LMDeploy supports the following NVIDIA GPUs for W4A16 inference:

- Turing (sm75): 20 series, T4

- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100

- Ada Lovelace (sm89): 40 series

Before proceeding with quantization and inference, please ensure that lmdeploy is installed.

```shell
pip install lmdeploy[all]
```
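If you want to confirm the installation before moving on, a quick check such as the one below can help. It is a minimal sketch, not part of the original instructions, and assumes only that the package installed above is importable from Python.

```python
# Minimal installation check (illustrative sketch):
# confirm that lmdeploy can be imported and report its version.
import lmdeploy

print(lmdeploy.__version__)
```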
This article comprises the following sections:

<!-- toc -->

- [Inference](#inference)
- [Evaluation](#evaluation)
- [Service](#service)

<!-- tocstop -->

## Inference

With the following code, you can perform batched offline inference with the quantized model:
```python
from lmdeploy import pipeline
from lmdeploy.messages import TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL-Chat-V1-5-AWQ'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, backend_config=backend_config, log_level='INFO')
response = pipe(('describe this image', image))
print(response)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).
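As a rough illustration of those parameters, the sketch below passes a `GenerationConfig` for sampling and a couple of engine options. The specific values (`session_len`, `tp`, and the sampling settings) are assumptions chosen for illustration, so consult the linked document for the authoritative options and defaults.

```python
from lmdeploy import GenerationConfig, pipeline
from lmdeploy.messages import TurbomindEngineConfig
from lmdeploy.vl import load_image

# Engine options: model_format='awq' selects the 4-bit AWQ weights;
# session_len and tp below are illustrative assumptions, not required values.
backend_config = TurbomindEngineConfig(model_format='awq', session_len=8192, tp=1)

# Per-request sampling options.
gen_config = GenerationConfig(max_new_tokens=512, top_p=0.8, temperature=0.7)

pipe = pipeline('OpenGVLab/InternVL-Chat-V1-5-AWQ', backend_config=backend_config)
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image), gen_config=gen_config)
print(response)
```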
## Evaluation

Please refer to [this guide](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_turbomind.html) for model evaluation with LMDeploy.

## Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:

```shell
lmdeploy serve api_server OpenGVLab/InternVL-Chat-V1-5-AWQ --backend turbomind --model-format awq
```
The default port of `api_server` is `23333`. After the server is launched, you can communicate with it from the terminal through `api_client`:

```shell
lmdeploy serve api_client http://0.0.0.0:23333
```

You can view and try out the `api_server` APIs online via the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
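Because the APIs follow OpenAI's interfaces, you can also query the running server with the `openai` Python package. The snippet below is a minimal sketch under a few assumptions: the server started above is reachable at `http://0.0.0.0:23333`, the `openai` package (v1 client) is installed, and the prompt is only an example.

```python
# Query the OpenAI-compatible endpoint exposed by `lmdeploy serve api_server`.
# Assumes the server above is running at 0.0.0.0:23333 and the `openai`
# Python package (v1 interface) is installed; no real API key is needed.
from openai import OpenAI

client = OpenAI(api_key='none', base_url='http://0.0.0.0:23333/v1')

# Ask the server which model it is serving, then send a chat request to it.
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': 'Describe a tiger in one sentence.'}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```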