---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model: OpenGVLab/InternVL2-40B
new_version: OpenGVLab/InternVL2_5-40B-AWQ
base_model_relation: quantized
language:
  - multilingual
tags:
  - internvl
  - custom_code
---

# InternVL2-40B-AWQ

[\[GitHub\]](https://github.com/OpenGVLab/InternVL) [\[InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[InternVL 2.5\]](https://huggingface.co/papers/2412.05271)

[\[Blog\]](https://internvl.github.io/blog/) [\[Chat Demo\]](https://internvl.opengvlab.com/) [\[HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[Quick Start\]](#quick-start) [\[Documents\]](https://internvl.readthedocs.io/en/latest/)

## Introduction

<div align="center">
  <img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>

### INT4 Weight-only Quantization and Deployment (W4A16)

LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. Backed by a high-performance CUDA kernel, inference with the 4-bit quantized model runs up to 2.4x faster than FP16.

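As a rough illustration of what weight-only INT4 buys for a model of this size, the back-of-envelope estimate below compares the weight memory of a ~40B-parameter model at FP16 and at INT4. It is only a sketch: it ignores activations, the KV cache, and the small overhead of AWQ's per-group scales, and the parameter count is an approximation.

```python
# Back-of-envelope weight-memory estimate for a ~40B-parameter model.
# Ignores activations, KV cache, and per-group quantization scales/zeros.
params = 40e9
fp16_gib = params * 2 / 1024**3    # 2 bytes per weight at FP16
int4_gib = params * 0.5 / 1024**3  # 0.5 bytes per weight at INT4
print(f"FP16 weights: ~{fp16_gib:.0f} GiB, INT4 weights: ~{int4_gib:.0f} GiB")
```
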
LMDeploy supports the following NVIDIA GPUs for W4A16 inference (a quick capability check is sketched after the list):

- Turing (sm75): 20 series, T4
- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace (sm89): 40 series

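If you are unsure which architecture your card belongs to, you can query its compute capability at runtime. A minimal sketch, assuming PyTorch is available in the environment:

```python
import torch

# W4A16 kernels require an NVIDIA GPU with compute capability >= 7.5 (Turing or newer).
major, minor = torch.cuda.get_device_capability(0)
supported = (major, minor) >= (7, 5)
print(f"GPU 0 is sm{major}{minor}: {'supported' if supported else 'not supported'} for W4A16")
```
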
Before proceeding with quantization and inference, please ensure that lmdeploy is installed:

```shell
pip install "lmdeploy>=0.5.3"
```

This article comprises the following sections:

<!-- toc -->

- [Inference](#inference)
- [Service](#service)

<!-- tocstop -->

### Inference

Run the following code to perform batched offline inference with the quantized model:

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-40B-AWQ'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
# model_format='awq' tells the TurboMind backend to load the 4-bit AWQ weights
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline(model, backend_config=backend_config, log_level='INFO')
# a (prompt, image) tuple runs a single vision-language query
response = pipe(('describe this image', image))
print(response.text)
```

For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).

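Since the pipeline is intended for batched offline inference, passing a list of (prompt, image) pairs should return one response per pair. A minimal sketch under that assumption (the second prompt is purely illustrative):

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline('OpenGVLab/InternVL2-40B-AWQ',
                backend_config=TurbomindEngineConfig(model_format='awq'))

image_url = 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'
# each element is a (prompt, image) pair; the engine batches them internally
prompts = [
    ('describe this image', load_image(image_url)),
    ('what animal is shown and what is it doing?', load_image(image_url)),
]
for response in pipe(prompts):
    print(response.text)
```
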
### Service

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:

```shell
lmdeploy serve api_server OpenGVLab/InternVL2-40B-AWQ --backend turbomind --server-port 23333 --model-format awq
```

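Once the server is up, you can confirm it is reachable before wiring up a client. A minimal check, assuming the default host and the port used above, and relying on the OpenAI-style `/v1/models` route (the same route the OpenAI client below uses via `client.models.list()`):

```python
import json
from urllib.request import urlopen

# list the models served by the OpenAI-compatible endpoint started above
with urlopen('http://0.0.0.0:23333/v1/models') as resp:
    print(json.dumps(json.load(resp), indent=2))
```
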
To use the OpenAI-style interface, you need to install the OpenAI Python package:

```shell
pip install openai
```

Then, use the code below to make the API call:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
# the server registers the deployed model under its own name; fetch it from /v1/models
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [{
            'type': 'text',
            'text': 'describe this image',
        }, {
            'type': 'image_url',
            'image_url': {
                'url': 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
            },
        }],
    }],
    temperature=0.8,
    top_p=0.8)
print(response)
```

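The OpenAI-compatible endpoint should also support token streaming through the standard client. A minimal sketch, assuming the server started above and using a text-only prompt for brevity:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id

# stream=True yields incremental chunks instead of one final response object
stream = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': 'Describe a tiger in one sentence.'}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end='', flush=True)
print()
```
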
## License

This project is released under the MIT License. It uses the pre-trained Nous-Hermes-2-Yi-34B as a component, which is licensed under the Apache License 2.0.

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2024expanding,
  title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
  author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
  journal={arXiv preprint arXiv:2412.05271},
  year={2024}
}
@article{gao2024mini,
  title={Mini-InternVL: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
  author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
  journal={arXiv preprint arXiv:2410.16261},
  year={2024}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
@inproceedings{chen2024internvl,
  title={InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24185--24198},
  year={2024}
}
```