Pipeline: Image-Text-to-Text
Tags: Transformers · PyTorch · multilingual · internvl_chat · feature-extraction · internvl · custom_code
Instructions for using OpenGVLab/InternVL-Chat-V1-1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use OpenGVLab/InternVL-Chat-V1-1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL-Chat-V1-1", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("OpenGVLab/InternVL-Chat-V1-1", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
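The `image-text-to-text` pipeline shown above accepts chat-style messages that pair an image with a text prompt. A minimal sketch of that message structure follows; the image URL and question are illustrative placeholders, not part of the model card:

```python
# Chat-style input for an image-text-to-text pipeline: each message has a
# role and a list of content parts (here, one image and one text part).
image_url = "https://example.com/cat.jpg"  # illustrative placeholder

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# With a loaded pipeline, the messages would be passed along the lines of:
#   out = pipe(text=messages, max_new_tokens=64)
parts = [part["type"] for part in messages[0]["content"]]
print(parts)
```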
- Local Apps
- vLLM
How to use OpenGVLab/InternVL-Chat-V1-1 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/InternVL-Chat-V1-1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL-Chat-V1-1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/OpenGVLab/InternVL-Chat-V1-1
```
- SGLang
How to use OpenGVLab/InternVL-Chat-V1-1 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenGVLab/InternVL-Chat-V1-1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL-Chat-V1-1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenGVLab/InternVL-Chat-V1-1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL-Chat-V1-1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use OpenGVLab/InternVL-Chat-V1-1 with Docker Model Runner:
```shell
docker model run hf.co/OpenGVLab/InternVL-Chat-V1-1
```
Repository files (directory listing for /InternVL-Chat-V1-1/):

- .gitattributes
- added_tokens.json
- config.json
- configuration_intern_vit.py
- configuration_internvl_chat.py
- conversation.py
- modeling_intern_vit.py
- modeling_internvl_chat.py
- preprocessor_config.json
- pytorch_model-00001-of-00004.bin
- pytorch_model-00002-of-00004.bin
- pytorch_model-00003-of-00004.bin
- pytorch_model-00004-of-00004.bin
- pytorch_model.bin.index.json
- special_tokens_map.json
- tokenizer.model
- tokenizer_config.json
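The `pytorch_model.bin.index.json` in the listing above maps each tensor name to the shard file that stores it, which is how the four `pytorch_model-0000x-of-00004.bin` shards are stitched together at load time. A minimal sketch of reading such an index; the tensor names and total size below are made-up examples, not the model's real entries:

```python
import json

# A tiny stand-in for the structure of pytorch_model.bin.index.json.
index = {
    "metadata": {"total_size": 12345678},  # illustrative value
    "weight_map": {
        "vision_model.embeddings.weight": "pytorch_model-00001-of-00004.bin",
        "language_model.lm_head.weight": "pytorch_model-00004-of-00004.bin",
    },
}

# In practice the index would be read from disk:
#   with open("pytorch_model.bin.index.json") as f:
#       index = json.load(f)

# Which shard files are referenced, and which shard holds a given tensor:
shards = sorted(set(index["weight_map"].values()))
shard_for_head = index["weight_map"]["language_model.lm_head.weight"]
print(shards)
print(shard_for_head)
```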