Image-to-Text
Transformers
PyTorch
Safetensors
English
blip-2
visual-question-answering
vision
image-captioning
Instructions to use Salesforce/blip2-opt-2.7b-coco with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Salesforce/blip2-opt-2.7b-coco with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="Salesforce/blip2-opt-2.7b-coco")

# Load model directly
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b-coco")
model = AutoModelForVisualQuestionAnswering.from_pretrained("Salesforce/blip2-opt-2.7b-coco")
```

A minimal inference sketch follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
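For reference, here is a minimal captioning sketch that reuses the `processor` and `model` loaded above. The image URL is only an illustrative COCO sample, and settings such as `max_new_tokens` are assumptions, not recommended values:

```python
# Minimal captioning sketch (not an official example): reuses the processor and
# model loaded in the snippet above; the image URL is just an illustrative COCO sample.
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# No text prompt -> plain image captioning; pass a "Question: ... Answer:" prompt for VQA.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```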
Adding `safetensors` variant of this model
#2
by SFconvertbot - opened
model-00001-of-00002.safetensors ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e42fbb755c1e6f20163897c930b21195991f30dfd82d17cf65ccf992d4e8c69
+size 9998365880
```
model-00002-of-00002.safetensors ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec126c79363120a346db188eb3524a07547f4fdd6d63a049d855650444dedd4f
+size 5497664472
```
model.safetensors.index.json ADDED

The diff for this file is too large to render. See raw diff.
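Once these safetensors shards are available on the repo, they can be loaded explicitly. A minimal sketch, assuming the same `AutoModelForVisualQuestionAnswering` class used in the snippet earlier; `use_safetensors=True` simply prefers the `.safetensors` weights over the original PyTorch `.bin` files:

```python
# Minimal sketch: load the safetensors variant added by this PR.
# use_safetensors=True tells from_pretrained to prefer the .safetensors shards
# (model-00001-of-00002.safetensors, model-00002-of-00002.safetensors) over .bin files.
from transformers import AutoModelForVisualQuestionAnswering

model = AutoModelForVisualQuestionAnswering.from_pretrained(
    "Salesforce/blip2-opt-2.7b-coco",
    use_safetensors=True,
    # revision="refs/pr/2",  # uncomment to try the weights from this PR before it is merged
)
```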