How to use google/pix2struct-docvqa-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("visual-question-answering", model="google/pix2struct-docvqa-base")
```
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("google/pix2struct-docvqa-base")
model = AutoModelForImageTextToText.from_pretrained("google/pix2struct-docvqa-base")
```
The file `convert_pix2struct_checkpoint_to_pytorch.py` is not present in the GitHub repository.