---
language: en
tags:
- handwriting-recognition
- vision2seq
- qwen
- image-to-text
- htr
- pytorch
license: mit
pipeline_tag: image-to-text
library_name: transformers
---

# 🖋️ Finetuned Full HTR Model (Qwen-based)

This is a **Qwen Vision2Seq** model fine-tuned for **Handwritten Text Recognition (HTR)**. It reads handwritten text from images and generates clean, editable plain text using transformer-based image-to-text techniques.

## 🔍 Model Summary

- **Model Architecture**: Qwen-Vision2Seq (image encoder + language decoder)
- **Framework**: PyTorch (via Hugging Face Transformers)
- **Input**: Handwritten text image
- **Output**: Recognized plain text

## 🧠 How to Use (with Hugging Face Transformers)

```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image
import torch

# Load processor and model
processor = AutoProcessor.from_pretrained("Emeritus-21/Finetuned-full-HTR-model", trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained("Emeritus-21/Finetuned-full-HTR-model", trust_remote_code=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.eval()

# Load and preprocess the handwriting image
image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(device)

# Generate prediction
with torch.no_grad():
    generated_ids = model.generate(**inputs)
recognized_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print("📝 Recognized Text:", recognized_text)
```
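For digitizing several pages at once, the sketch below extends the example above to batched inference. It reuses `processor`, `model`, and `device` from the previous snippet; the `pages/` folder is a hypothetical path, and it assumes the processor accepts a list of images (adjust to the processor's actual signature if it does not). `max_new_tokens` and `num_beams` are standard `transformers` generation arguments you can tune for your documents.

```python
from pathlib import Path

from PIL import Image
import torch

# Hypothetical folder of scanned pages; replace with your own paths.
image_paths = sorted(Path("pages").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in image_paths]

# Batched preprocessing; assumes the processor pads a list of images.
inputs = processor(images=images, return_tensors="pt").to(device)

with torch.no_grad():
    generated_ids = model.generate(
        **inputs,
        max_new_tokens=256,  # cap output length for long passages
        num_beams=4,         # beam search can improve recognition accuracy
    )

texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
for path, text in zip(image_paths, texts):
    print(f"{path.name}: {text}")
```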