Tags: Image-Text-to-Text · Transformers · TensorBoard · Safetensors · multilingual · internvl_chat · feature-extraction · internvl · custom_code · conversational
Instructions for using OpenGVLab/InternVL-Chat-V1-5 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use OpenGVLab/InternVL-Chat-V1-5 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL-Chat-V1-5", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("OpenGVLab/InternVL-Chat-V1-5", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenGVLab/InternVL-Chat-V1-5 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/InternVL-Chat-V1-5"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL-Chat-V1-5",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
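The same OpenAI-compatible request can be issued from Python instead of curl. Below is a minimal sketch using only the standard library; it assumes the vLLM server started above is listening on localhost:8000, so the actual network call is left commented out and only the payload construction runs as-is:

```python
import json
from urllib import request

# OpenAI-compatible chat payload, mirroring the curl example above.
payload = {
    "model": "OpenGVLab/InternVL-Chat-V1-5",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
}

body = json.dumps(payload).encode("utf-8")
req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up; the answer text lives at
# choices[0].message.content in the JSON response.
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same sketch works against the SGLang server below by changing the port to 30000.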
- SGLang
How to use OpenGVLab/InternVL-Chat-V1-5 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenGVLab/InternVL-Chat-V1-5" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL-Chat-V1-5",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenGVLab/InternVL-Chat-V1-5" \
    --host 0.0.0.0 \
    --port 30000
```

The server can then be queried with the same curl request shown above.

- Docker Model Runner
How to use OpenGVLab/InternVL-Chat-V1-5 with Docker Model Runner:
docker model run hf.co/OpenGVLab/InternVL-Chat-V1-5
Upload folder using huggingface_hub
modeling_internvl_chat.py
```diff
@@ -77,6 +77,8 @@ class InternVLChatModel(PreTrainedModel):
         )

         self.img_context_token_id = None
+        self.conv_template = get_conv_template(self.template)
+        self.system_message = self.conv_template.system_message

     def forward(
         self,
@@ -256,6 +258,7 @@ class InternVLChatModel(PreTrainedModel):
         self.img_context_token_id = img_context_token_id

         template = get_conv_template(self.template)
+        template.system_message = self.system_message
         eos_token_id = tokenizer.convert_tokens_to_ids(template.sep)

         history = [] if history is None else history
```
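The change above caches the conversation template and its default system message at init time, then re-applies `self.system_message` each time chat rebuilds the template, so a user-supplied override of `model.system_message` actually reaches generation. A simplified, self-contained sketch of that pattern follows; the `Conversation` class and `get_conv_template` here are hypothetical stand-ins, not InternVL's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Conversation:
    # Hypothetical stand-in for InternVL's conversation template.
    name: str
    system_message: str
    sep: str = "<|im_end|>"


def get_conv_template(name: str) -> Conversation:
    # Always returns a fresh template carrying its *default* system
    # message, which is why chat() must re-apply any override.
    return Conversation(name=name, system_message="You are a helpful assistant.")


class ChatModel:
    def __init__(self, template: str = "internlm2-chat"):
        self.template = template
        # After the patch: cache the template and expose its system
        # message so callers can override model.system_message directly.
        self.conv_template = get_conv_template(self.template)
        self.system_message = self.conv_template.system_message

    def chat(self, question: str) -> str:
        template = get_conv_template(self.template)
        # The one-line fix: propagate the (possibly overridden) message
        # into the freshly built template before prompting.
        template.system_message = self.system_message
        return f"{template.system_message}\n{question}"


model = ChatModel()
model.system_message = "Answer in French."  # user override, now honored
prompt = model.chat("What is in the image?")
```

Without the `template.system_message = self.system_message` line, the freshly built template would silently revert to its default system message on every call.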