How to use google/pix2struct-ocrvqa-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("visual-question-answering", model="google/pix2struct-ocrvqa-base")
```
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("google/pix2struct-ocrvqa-base")
model = AutoModelForImageTextToText.from_pretrained("google/pix2struct-ocrvqa-base")
```
I need help with an API that can return embeddings of the input image. The plan is to use the Pix2Struct encodings to fine-tune models on other downstream tasks.
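One way to do this: Pix2Struct is a seq2seq model, so its vision encoder is reachable via `model.get_encoder()`. Feeding it the `flattened_patches` and `attention_mask` produced by the processor yields per-patch hidden states, which you can pool into a single image embedding. A minimal sketch, assuming a PIL image as input; the mean-pooling helper below is an illustrative choice, not part of the library:

```python
import torch


def mean_pool(hidden_states, attention_mask):
    # Average the per-patch embeddings, ignoring padded patches.
    # hidden_states: (batch, num_patches, hidden_size)
    # attention_mask: (batch, num_patches), 1 for real patches, 0 for padding
    mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)


def extract_image_embedding(model, processor, image):
    # The processor renders the image into flattened patches plus a padding mask.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        encoder_outputs = model.get_encoder()(
            flattened_patches=inputs["flattened_patches"],
            attention_mask=inputs["attention_mask"],
        )
    # Pool the patch-level hidden states into one vector per image.
    return mean_pool(encoder_outputs.last_hidden_state, inputs["attention_mask"])


# Example usage (downloads the checkpoint, so it is left commented out):
#   from transformers import AutoProcessor, AutoModelForImageTextToText
#   processor = AutoProcessor.from_pretrained("google/pix2struct-ocrvqa-base")
#   model = AutoModelForImageTextToText.from_pretrained("google/pix2struct-ocrvqa-base")
#   embedding = extract_image_embedding(model, processor, your_pil_image)
#   # embedding has shape (1, hidden_size)
```

The pooled vector can then serve as a frozen feature for downstream heads; alternatively, keep the full `last_hidden_state` if the downstream task benefits from patch-level detail.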