Instructions for using nlpconnect/vit-gpt2-image-captioning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nlpconnect/vit-gpt2-image-captioning with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
model = AutoModelForImageTextToText.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
```
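To go from these loaders to an actual caption, you also need the model's image processor. Below is a minimal sketch, not the official model-card recipe: it assumes Pillow and requests are installed, that AutoImageProcessor resolves this model's ViT preprocessor, and it reuses the sample image URL from the discussion further down.

```python
# Minimal captioning sketch (assumptions: Pillow and requests are installed,
# AutoImageProcessor resolves this model's ViT preprocessor).
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageTextToText, AutoTokenizer

model_id = "nlpconnect/vit-gpt2-image-captioning"
processor = AutoImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

# Sample image URL taken from the discussion below.
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/5/53/The_Kelpies_1-1_Stitch.jpg/1200px-The_Kelpies_1-1_Stitch.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Encode the image to pixel values, generate caption token ids, decode to text.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```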
- Notebooks
- Google Colab
- Kaggle
What should the input look like?
#43
by tararelan - opened
I've deployed the model on SageMaker, and I want to test it out. I'm not sure what the input is supposed to look like, but I tried the following:
```json
{
  "inputs": {
    "image": "https://upload.wikimedia.org/wikipedia/commons/thumb/5/53/The_Kelpies_1-1_Stitch.jpg/1200px-The_Kelpies_1-1_Stitch.jpg"
  }
}
```
I then get the following error:
```text
Received client error (400) from primary with message "{
  "code": 400,
  "type": "InternalServerException",
  "message": "Incorrect format used for image. Should be an url linking to an image, a base64 string, a local path, or a PIL image."
}
```
Any idea what it's supposed to be?
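For reference, the error text itself lists the accepted formats: a URL, a base64 string, a local path, or a PIL image. One reading of the 400 is that the handler wants the image passed directly as the "inputs" value rather than nested under an "image" key. A hedged sketch of what that request might look like via boto3 (the endpoint name is a placeholder, and the exact payload shape can vary with the inference toolkit version):

```python
import json
import boto3

# Hedged sketch: pass the image URL directly as "inputs" instead of nesting it
# under an "image" key. The endpoint name below is a placeholder.
runtime = boto3.client("sagemaker-runtime")
payload = {
    "inputs": "https://upload.wikimedia.org/wikipedia/commons/thumb/5/53/The_Kelpies_1-1_Stitch.jpg/1200px-The_Kelpies_1-1_Stitch.jpg"
}
response = runtime.invoke_endpoint(
    EndpointName="vit-gpt2-image-captioning",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))  # e.g. [{"generated_text": "..."}]
```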