Instructions to use Salesforce/blip-image-captioning-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Salesforce/blip-image-captioning-base with Transformers:

```python
# Use a pipeline as a high-level helper
# Warning: the "image-to-text" pipeline type is no longer supported in transformers v5.
# Either load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = AutoModelForImageTextToText.from_pretrained("Salesforce/blip-image-captioning-base")
```

- Notebooks
- Google Colab
- Kaggle
Commit History
Add tokenizer.json (#8) 60667f7
Update README.md (#5) 9f3084c
Update README.md b1a4162
Update README.md 3007fbe
Update README.md cab088c
Update README.md ac7d6bc
Update README.md 12d29a7
Update README.md 25d2acb
Update README.md af1d88b
Update README.md 039efaf
Create README.md abbaae4
Upload BlipForConditionalGeneration 8fccca1
Upload BlipForConditionalGeneration 1a92cdc
Update preprocessor_config.json 14216c0
Update preprocessor_config.json eadf1b8
Update preprocessor_config.json 156b8ed
Upload processor f656a37
initial commit 08a482e
Younes Belkada committed