Tags: Image-Text-to-Text · Transformers · TensorBoard · Safetensors · feature-extraction · conversational · custom_code
Instructions to use lmms-lab/LLaVA-OneVision-1.5-4B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use lmms-lab/LLaVA-OneVision-1.5-4B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="lmms-lab/LLaVA-OneVision-1.5-4B-Instruct",
    trust_remote_code=True,
)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use lmms-lab/LLaVA-OneVision-1.5-4B-Instruct with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/lmms-lab/LLaVA-OneVision-1.5-4B-Instruct
```
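The curl call above can also be issued from Python. The sketch below is a minimal client, assuming the vLLM server started above is reachable at `http://localhost:8000`; the helper names (`build_chat_payload`, `post_chat`) are illustrative, not part of any library. It uses only the standard library, so the same code works against the SGLang server below by changing the port.

```python
import json
import urllib.request

# Assumption: the vLLM server from the snippet above is running locally.
SERVER = "http://localhost:8000"
MODEL = "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct"


def build_chat_payload(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-compatible chat payload with one text part and one image part."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


def post_chat(payload: dict) -> dict:
    """POST the payload to /v1/chat/completions and decode the JSON reply."""
    req = urllib.request.Request(
        f"{SERVER}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_chat_payload(
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
# With the server running, the reply text would be read as:
#   post_chat(payload)["choices"][0]["message"]["content"]
```

The payload mirrors the curl example exactly; only the transport differs.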
- SGLang
How to use lmms-lab/LLaVA-OneVision-1.5-4B-Instruct with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmms-lab/LLaVA-OneVision-1.5-4B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use lmms-lab/LLaVA-OneVision-1.5-4B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/lmms-lab/LLaVA-OneVision-1.5-4B-Instruct
```
Commit History
All commits below are verified; the listed author is Yiye (commit dates were not captured).

- Update modeling_llavaonevision1_5.py (e6bb4cd)
- Update README.md (21cbb17)
- Update README.md (d6fbfb7)
- Update README.md (d51517f)
- Update README.md (ef3fa61)
- Update README.md (6430d15)
- Update README.md (152c522)
- Update README.md (fb4fc57)
- Update README.md (381249b)
- Upload events.out.tfevents.1758101239.109436.0 (034d0f7)
- Create tensorboard/instruct/README.md (854a794)
- Create README.md (03e6f76)
- Upload vocab.json with huggingface_hub (4a137ed)
- Upload tokenizer.json with huggingface_hub (029d188)
- Upload special_tokens_map.json with huggingface_hub (aee2e22)
- Upload model.safetensors.index.json with huggingface_hub (36e0b04)
- Upload model-00002-of-00002.safetensors with huggingface_hub (1232b04)
- Upload merges.txt with huggingface_hub (269b086)
- Upload configuration_llavaonevision1_5.py with huggingface_hub (5d5fed2)
- Upload chat_template.jinja with huggingface_hub (4af0aab)
- Upload tokenizer_config.json with huggingface_hub (057e4a8)
- Upload preprocessor_config.json with huggingface_hub (a09199b)
- Upload modeling_llavaonevision1_5.py with huggingface_hub (3a186f0)
- Upload model-00001-of-00002.safetensors with huggingface_hub (344d069)
- Upload generation_config.json with huggingface_hub (e3b72e1)
- Upload added_tokens.json with huggingface_hub (2686dea)
- Upload config.json with huggingface_hub (7a59d7a)
- initial commit (ca0fe71)