```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("vikp/texify2")
model = AutoModelForImageTextToText.from_pretrained("vikp/texify2")
```
To be used with texify: set `MODEL_CHECKPOINT=vikp/texify2`.

Note that this is a testing checkpoint that most people won't want to use; the correct checkpoint is vikp/texify. I'm leaving this up since I know it is used in a few places.
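As a minimal sketch, pointing texify at this checkpoint via the environment variable above might look like the following (the texify invocation itself is omitted, since its CLI is not described here):

```shell
# Assumption: texify reads the checkpoint name from the
# MODEL_CHECKPOINT environment variable, as noted above.
export MODEL_CHECKPOINT=vikp/texify2
echo "$MODEL_CHECKPOINT"
```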
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="vikp/texify2")
```