Instructions to use ConvLLaVA/ConvLLaVA-sft-768 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ConvLLaVA/ConvLLaVA-sft-768 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ConvLLaVA/ConvLLaVA-sft-768")

# Load model directly
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("ConvLLaVA/ConvLLaVA-sft-768")
model = AutoModelForCausalLM.from_pretrained("ConvLLaVA/ConvLLaVA-sft-768")
```
A hedged generation sketch that builds on this processor and model appears after the notebook links below.
- Notebooks
- Google Colab
- Kaggle
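Building on the Transformers snippet above: if the processor and model load as shown, a generation call might look like the following minimal sketch. The prompt template, `<image>` token, and processor call signature are assumptions borrowed from LLaVA-style models rather than something confirmed by this repository, and `example.jpg` is a placeholder path.
```python
# Minimal sketch of image + text generation (see assumptions in the lead-in)
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("ConvLLaVA/ConvLLaVA-sft-768")
model = AutoModelForCausalLM.from_pretrained("ConvLLaVA/ConvLLaVA-sft-768")

image = Image.open("example.jpg")  # placeholder: any local image
prompt = "USER: <image>\nDescribe this picture. ASSISTANT:"  # Vicuna-style template (assumption)

inputs = processor(text=prompt, images=image, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```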
- Local Apps
- vLLM
How to use ConvLLaVA/ConvLLaVA-sft-768 with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ConvLLaVA/ConvLLaVA-sft-768"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ConvLLaVA/ConvLLaVA-sft-768",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
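The same request can be sent from Python with the `openai` client pointed at the local server. This is a hedged sketch: the base URL matches the serve command above, and the API key is an arbitrary placeholder since the local server does not check it by default. The SGLang server in the next section exposes the same OpenAI-compatible API, so the identical code works there after switching the base URL to port 30000.
```python
# Query the local vLLM server through its OpenAI-compatible API
# (assumes the `vllm serve` command above is running on localhost:8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.completions.create(
    model="ConvLLaVA/ConvLLaVA-sft-768",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```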
- SGLang
How to use ConvLLaVA/ConvLLaVA-sft-768 with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ConvLLaVA/ConvLLaVA-sft-768" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ConvLLaVA/ConvLLaVA-sft-768",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ConvLLaVA/ConvLLaVA-sft-768" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ConvLLaVA/ConvLLaVA-sft-768",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use ConvLLaVA/ConvLLaVA-sft-768 with Docker Model Runner:
```bash
docker model run hf.co/ConvLLaVA/ConvLLaVA-sft-768
```
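This starts an interactive chat, assuming the repository can be pulled in a format the Model Runner supports (an assumption here). A one-shot prompt can also be passed as an argument; the prompt text below is only illustrative.
```bash
# Run a single prompt instead of an interactive session (illustrative prompt)
docker model run hf.co/ConvLLaVA/ConvLLaVA-sft-768 "Once upon a time,"
```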
ConvLLaVA Model Card
Model details
Model type: ConvLLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.
Base LLM: lmsys/vicuna-7b-v1.5
Model date: ConvLLaVA-768 was trained in March 2024.
Paper or resources for more information: https://github.com/alibaba/conv-llava/
License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Where to send questions or comments about the model: https://github.com/alibaba/conv-llava/issues
Intended use
Primary intended uses: The primary use of ConvLLaVA is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
Training dataset
- 1.2M ShareGPT4V-PT caption data.
- 100K ShareGPT4V caption data.
- 1.4M ALLaVA caption and instruction data.
- 186K VFLAN multitask data.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
Paper
arxiv.org/abs/2405.15738