Instructions to use Salesforce/blip-vqa-capfilt-large with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Salesforce/blip-vqa-capfilt-large with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("visual-question-answering", model="Salesforce/blip-vqa-capfilt-large")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-capfilt-large")
model = AutoModelForVisualQuestionAnswering.from_pretrained("Salesforce/blip-vqa-capfilt-large")
```
- Notebooks
- Google Colab
- Kaggle
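The snippets above only load the model; a minimal end-to-end sketch of actually answering a question about an image follows, using the BLIP-specific classes from Transformers. The blank generated image is a stand-in for a real photo (swap in `Image.open("photo.jpg")`), and the checkpoint is downloaded from the Hub on first use.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-capfilt-large")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-capfilt-large")

# Placeholder image; replace with Image.open("photo.jpg").convert("RGB")
image = Image.new("RGB", (384, 384), "white")
question = "What color is the background?"

# Preprocess the image/question pair and generate an answer autoregressively
inputs = processor(image, question, return_tensors="pt")
out = model.generate(**inputs)
answer = processor.decode(out[0], skip_special_tokens=True)
print(answer)
```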
Community discussions:
- #14 Adding `safetensors` variant of this model (opened over 1 year ago by SFconvertbot)
- #13 Adding `safetensors` variant of this model (opened over 1 year ago by SFconvertbot)
- #12 Adding `safetensors` variant of this model (opened about 2 years ago by SFconvertbot)
- #11 How to send inference request to deployed endpoint (opened over 2 years ago by AkiraKuniyoshi)
- #8 Adding `safetensors` variant of this model (opened almost 3 years ago by SFconvertbot)
- #7 Adding `safetensors` variant of this model (opened almost 3 years ago by SFconvertbot)
- #4 Update config.json (opened about 3 years ago by ybelkada)
- #1 Update tokenizer_config.json (opened over 3 years ago by ybelkada)
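Discussion #11 asks how to send an inference request to a deployed endpoint. As a hypothetical sketch: the endpoint URL and token below are placeholders, and the exact payload schema (here, a base64-encoded image plus a question) should be checked against the task documentation for your specific deployment.

```python
import base64
import requests

API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx", "Content-Type": "application/json"}  # placeholder token

def build_payload(image_path: str, question: str) -> dict:
    """Build an assumed VQA payload: base64-encoded image plus question text."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {"inputs": {"image": image_b64, "question": question}}

# Uncomment once API_URL and HEADERS point at a real endpoint:
# response = requests.post(API_URL, headers=HEADERS, json=build_payload("photo.jpg", "How many dogs?"))
# print(response.json())
```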