```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("raxtemur/RuTrOCR_Base")
model = AutoModelForImageTextToText.from_pretrained("raxtemur/RuTrOCR_Base")
```
At the current training stage, fine-tuning seems to break the LLM states :(
Model architecture:
- Encoder: `facebook/deit-base-distilled-patch16-384`
- Decoder: `DeepPavlov/rubert-base-cased`
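The card does not document how the checkpoint was assembled, but an encoder-decoder OCR model of this shape can be put together from the two checkpoints above with `transformers`' `VisionEncoderDecoderModel`. This is a sketch of that pairing, not the author's actual training setup:

```python
from transformers import VisionEncoderDecoderModel

# Sketch (assumption): pair the vision encoder and text decoder named on
# this card. Cross-attention layers are added to the decoder automatically;
# the resulting weights are randomly initialized where no pretrained
# counterpart exists, so this model still needs fine-tuning for OCR.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/deit-base-distilled-patch16-384",  # image encoder (DeiT)
    "DeepPavlov/rubert-base-cased",              # Russian BERT as decoder
)
```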
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="raxtemur/RuTrOCR_Base")
```