Instructions to use nlpconnect/vit-gpt2-image-captioning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nlpconnect/vit-gpt2-image-captioning with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
model = AutoModelForImageTextToText.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
```

- Notebooks
- Google Colab
- Kaggle
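Loading the tokenizer and model alone does not produce a caption; the image must first be converted to pixel values by the model's image processor. The sketch below shows one plausible end-to-end path using the `VisionEncoderDecoderModel`/`ViTImageProcessor` classes; the blank test image and the generation parameters (`max_length`, `num_beams`) are illustrative assumptions, not values taken from the model card.

```python
# Hedged sketch of full caption generation for
# nlpconnect/vit-gpt2-image-captioning. The blank PIL image stands in
# for a real photo; generation settings are illustrative.
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

# Placeholder input: replace with Image.open("your_photo.jpg").convert("RGB")
image = Image.new("RGB", (224, 224), color="white")

# Preprocess the image into the tensor format the ViT encoder expects
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate token ids with the GPT-2 decoder, then decode them to text
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```

With a real photograph in place of the blank image, `caption` holds a short natural-language description, matching the `generated_text` strings shown in the pipeline output elsewhere on this page.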
Update README.md
#21
by sgrohmann2 - opened
README.md
CHANGED
```diff
@@ -77,7 +77,7 @@ image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-capti
 image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png")

-# [{'generated_text': '
+# [{'generated_text': 'an nba player at a soccer game drinking beer '}]
```