Tags: Image-Text-to-Text · Transformers · PyTorch · Safetensors · English · blip-2 · visual-question-answering · vision · image-to-text · image-captioning
Instructions to use Salesforce/blip2-opt-2.7b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Salesforce/blip2-opt-2.7b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Salesforce/blip2-opt-2.7b")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = AutoModelForVisualQuestionAnswering.from_pretrained("Salesforce/blip2-opt-2.7b")
```
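As a quick usage sketch continuing from the pipeline above (the image path and question are placeholders), the pipeline accepts an image together with a text prompt; BLIP-2's OPT variant expects the "Question: ... Answer:" format for visual question answering:

```python
# "house.jpg" is a placeholder image path.
result = pipe("house.jpg", text="Question: what is shown in the picture? Answer:")
print(result)
```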
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Salesforce/blip2-opt-2.7b with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Salesforce/blip2-opt-2.7b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Salesforce/blip2-opt-2.7b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
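Since the server exposes an OpenAI-compatible API, it can also be queried from Python. A minimal sketch, assuming the openai client is installed (`pip install openai`) and the server above is running:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not check the API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Salesforce/blip2-opt-2.7b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```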
Use Docker:

```shell
docker model run hf.co/Salesforce/blip2-opt-2.7b
```
- SGLang
How to use Salesforce/blip2-opt-2.7b with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Salesforce/blip2-opt-2.7b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Salesforce/blip2-opt-2.7b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
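The same endpoint can also be called from Python with requests; a minimal sketch mirroring the curl call above:

```python
import requests

# Query the SGLang server's OpenAI-compatible completions endpoint.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Salesforce/blip2-opt-2.7b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```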
Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Salesforce/blip2-opt-2.7b" \
        --host 0.0.0.0 \
        --port 30000

# Then call the server with the same curl command shown above.
```

- Docker Model Runner
How to use Salesforce/blip2-opt-2.7b with Docker Model Runner:
```shell
docker model run hf.co/Salesforce/blip2-opt-2.7b
```
Inference API usage #4
opened by robertwolf
I am trying to use the Inference API, but I can't seem to add the question to the inputs. Would anyone know how to do it?
This is my current script. It does return a caption, but it does not seem to take the text input into account.
```python
import base64
import json

import requests

API_URL = "https://api-inference.huggingface.co/models/Salesforce/blip2-opt-2.7b"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

json_request = {
    "inputs": "How could someone get out of the house?",
    "parameters": {"max_new_tokens": 100},
    "options": {"wait_for_model": True},
}

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    json_request["image"] = base64.b64encode(data).decode("utf-8")
    response = requests.request("POST", API_URL, headers=headers, json=json_request)
    return json.loads(response.content.decode("utf-8"))

data = query("house.jpg")
print(data)
```
Hi,
At the moment BLIP and BLIP-2 are only supported by the image-to-text pipeline, and that pipeline doesn't support a text prompt yet.
Will discuss this with the team, thanks for bringing it up!
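In the meantime, the prompt does work when running the model locally with Transformers. A minimal sketch (the image path and question are placeholders; BLIP-2's OPT variant uses the "Question: ... Answer:" prompt format, and a CUDA device is assumed):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2ForConditionalGeneration

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# "house.jpg" is a placeholder image path.
image = Image.open("house.jpg").convert("RGB")
prompt = "Question: how could someone get out of the house? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```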