Instructions to use OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview",
    trust_remote_code=True,
)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview
```
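For programmatic clients, the same request can be built in Python. The sketch below only constructs the OpenAI-compatible payload used in the curl example above; actually sending it assumes the vLLM server is running on `localhost:8000`, so the POST is left as a comment:

```python
import json

# Same OpenAI-compatible chat payload as the curl example above.
payload = {
    "model": "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}

body = json.dumps(payload)

# To send (requires the running server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```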
- SGLang
How to use OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview with Docker Model Runner:
```shell
docker model run hf.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview
```
Query about `model_max_length` configuration
Hello, and thank you for this fantastic model.
I have a quick question about the configuration. I noticed that `config.json` sets `max_position_embeddings` to 131072, while `tokenizer_config.json` has `model_max_length` set to 16384.
This causes a confusing situation when serving the model with vLLM. The startup log correctly shows that the engine is using the full context:

```
INFO ... Using max model len 131072
```
However, the API server still throws a validation warning, based on the tokenizer's setting, for any input over 16k tokens:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (38682 > 16384).
```
Is this discrepancy intentional?
Anyway, thanks again for all your hard work on this release!
Thank you for your interest in our work. max_position_embeddings denotes the maximum supported context length (without introducing additional algorithms for context length extension), whereas model_max_length refers to the max context length used during training—set to 32,768 in the Pretrain/CPT stage and 16,384 in the CascadeRL stage. In our practice, we found that the model can handle context lengths within 64K. Additionally, the warning from the tokenizer has no practical effect, since the current max_position_embeddings is larger than model_max_length.
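The interplay of the two limits can be illustrated with a small sketch. The numbers come from the config files quoted in the question; the dictionaries below simply stand in for `config.json` and `tokenizer_config.json` rather than being loaded from the Hub, and `check_input` is a hypothetical helper, not part of any library:

```python
# Values quoted from the model's config files in the discussion above.
model_config = {"max_position_embeddings": 131072}   # hard ceiling of the model
tokenizer_config = {"model_max_length": 16384}       # training context in the CascadeRL stage

def check_input(n_tokens: int) -> str:
    """Mimic the two checks: the tokenizer's warning threshold vs. the real limit."""
    if n_tokens > model_config["max_position_embeddings"]:
        return "error: exceeds max_position_embeddings"
    if n_tokens > tokenizer_config["model_max_length"]:
        # This is the harmless tokenizer warning: the engine still serves the request.
        return "warning only: exceeds model_max_length"
    return "ok"

print(check_input(38682))  # the 38,682-token input from the question -> warning only
```

So the 38,682-token request from the question trips only the tokenizer's warning, while the engine, configured with the full 131,072-token window, processes it normally.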