Instructions for using OpenGVLab/Mono-InternVL-2B-S1-1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use OpenGVLab/Mono-InternVL-2B-S1-1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OpenGVLab/Mono-InternVL-2B-S1-1", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("OpenGVLab/Mono-InternVL-2B-S1-1", trust_remote_code=True, dtype="auto")
```
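The pipeline call returns a list of result dicts. A minimal usage sketch, assuming the standard image-text-to-text pipeline output format with a `generated_text` key:

```python
# Print the generated answer from the pipeline output.
# Assumes the standard image-text-to-text output format ("generated_text" key).
result = pipe(text=messages)
print(result[0]["generated_text"])
```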
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenGVLab/Mono-InternVL-2B-S1-1 with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/Mono-InternVL-2B-S1-1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/Mono-InternVL-2B-S1-1",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
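For programmatic access, the same OpenAI-compatible endpoint can be queried from Python. A minimal sketch using the official `openai` client (`pip install openai`), assuming the default local server started above; the same pattern works for the SGLang server below by switching `base_url` to port 30000:

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local server needs no real key
response = client.chat.completions.create(
    model="OpenGVLab/Mono-InternVL-2B-S1-1",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```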
- SGLang
How to use OpenGVLab/Mono-InternVL-2B-S1-1 with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenGVLab/Mono-InternVL-2B-S1-1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/Mono-InternVL-2B-S1-1",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenGVLab/Mono-InternVL-2B-S1-1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/Mono-InternVL-2B-S1-1",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use OpenGVLab/Mono-InternVL-2B-S1-1 with Docker Model Runner:
```bash
docker model run hf.co/OpenGVLab/Mono-InternVL-2B-S1-1
```
Mono-InternVL-2B-S1-1
This repository contains the Mono-InternVL-2B model after S1.1 concept learning, as part of the work presented in Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models.
Please refer to our project page and GitHub repository for full introduction, code, and usage instructions.
Mono-InternVL is a family of monolithic multimodal large language models (MLLMs) that integrates visual encoding and language decoding into a single LLM, aiming for cheaper and faster inference. It addresses the challenges of unstable optimization and catastrophic forgetting by embedding a new visual parameter space into a pre-trained LLM, enabling stable learning of visual knowledge via delta tuning.
✨ Key Highlights
- Monolithic Architecture: Integrates visual encoding and language decoding into a single LLM, simplifying the model structure (see the sketch after this list).
- Endogenous Visual Pre-training (EViP++): A progressive pre-training strategy that maximizes the model's visual capabilities and adds extra visual attention experts.
- Efficiency: Significantly reduces training and inference costs, including a fused CUDA kernel for faster MoE operations, while maintaining competitive performance.
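To make these ideas concrete, below is a minimal, hypothetical PyTorch sketch of the modality-routed expert structure (all module and parameter names are illustrative, not the model's actual implementation): tokens are dispatched to a text or visual expert by a static modality mask, and the pretrained text expert is frozen so that only the new visual parameters are trained, mirroring the delta-tuning setup described above.

```python
import torch
import torch.nn as nn

class ModalityRoutedMLP(nn.Module):
    """Illustrative multimodal-MoE block: tokens are routed by modality
    (a static mask, not a learned gate) to a text or a visual expert."""

    def __init__(self, hidden: int, inter: int):
        super().__init__()
        mlp = lambda: nn.Sequential(nn.Linear(hidden, inter), nn.GELU(), nn.Linear(inter, hidden))
        self.text_expert = mlp()    # stands in for the pretrained LLM FFN, kept frozen
        self.visual_expert = mlp()  # stands in for the new visual parameter space, trained

    def forward(self, x: torch.Tensor, is_visual: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden); is_visual: (num_tokens,) boolean modality mask
        out = torch.empty_like(x)
        out[~is_visual] = self.text_expert(x[~is_visual])
        out[is_visual] = self.visual_expert(x[is_visual])
        return out

block = ModalityRoutedMLP(hidden=64, inter=256)
for p in block.text_expert.parameters():
    p.requires_grad = False  # delta tuning: freeze the pretrained text weights

x = torch.randn(10, 64)
is_visual = torch.tensor([True] * 6 + [False] * 4)  # e.g., 6 image tokens, 4 text tokens
print(block(x, is_visual).shape)  # torch.Size([10, 64])
```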
📊 Performance
Mono-InternVL achieves competitive performance across various multimodal benchmarks, often outperforming other monolithic MLLMs. Compared to its modular counterpart, InternVL-1.5, Mono-InternVL-1.5 achieves similar multimodal performance while reducing first-token latency by up to 69%.
Below is a summary of some key benchmarks:
| Metric | Mono-InternVL-2B | Mini-InternVL-2B-1-5 | Emu3 |
|---|---|---|---|
| Type | Monolithic | Modular | Monolithic |
| #Activated Params | 1.8B | 2.2B | 8B |
| MMVet | 40.1 | 39.3 | 37.2 |
| OCRBench | 767 | 654 | 687 |
| MathVista | 45.7 | 41.1 | — |
| TextVQA | 72.6 | 70.5 | 64.7 |
| DocVQA | 80.0 | 85.0 | 76.3 |
(For full performance details, please refer to the paper and project page.)
🚀 Quick Inference (using Transformers)
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load model and tokenizer (ensure transformers==4.37.2)
path = 'OpenGVLab/Mono-InternVL-2B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Load image (preprocess it if needed, as per the GitHub instructions).
# For simplicity, a dummy placeholder is used here.
# Refer to the GitHub repo for the `load_image` utility function.
# pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = None  # Replace with the actual image tensor

generation_config = dict(max_new_tokens=1024, do_sample=True)

# Example: single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
# response = model.chat(tokenizer, pixel_values, question, generation_config)
# print(f'User: {question}\nAssistant: {response}')

# Example: pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
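The returned `history` can be fed back into `chat` for multi-round conversations. A minimal continuation sketch, reusing the call from the snippet above (the follow-up question is illustrative; the `chat` signature is the one defined by the model's remote code):

```python
# Multi-round follow-up: pass the returned `history` back into `chat`.
question = 'Can you summarize your previous answer in one sentence?'
response, history = model.chat(tokenizer, None, question, generation_config,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```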
Citation
If you find this project useful in your research, please consider citing the related papers:
```bibtex
@article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
}

@article{mono_internvl_v1.5,
  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2507.12566},
  year={2025}
}
```