---
language: en
tags:
  - handwriting-recognition
  - vision2seq
  - qwen
  - image-to-text
  - htr
  - pytorch
license: mit
pipeline_tag: image-to-text
library_name: transformers
---

# 🖋️ Finetuned Full HTR Model (Qwen-based)

This is a **Qwen Vision2Seq** model fine-tuned for **Handwritten Text Recognition (HTR)**. It reads handwritten text from images and produces clean, editable plain text through a transformer-based image-to-text pipeline.

## 🔍 Model Summary

- **Model Architecture**: Qwen-Vision2Seq (Image encoder + Language decoder)
- **Framework**: PyTorch (via Hugging Face Transformers)
- **Input**: Handwritten text image
- **Output**: Recognized plain text

## 🧠 How to Use (with Hugging Face Transformers)

```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image
import torch

# Load processor and model
processor = AutoProcessor.from_pretrained("Emeritus-21/Finetuned-full-HTR-model", trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained("Emeritus-21/Finetuned-full-HTR-model", trust_remote_code=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Load and process image
image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(device)

# Generate prediction
generated_ids = model.generate(**inputs)
recognized_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print("📝 Recognized Text:", recognized_text)
```
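
For full pages or long lines, you may want to control decoding explicitly rather than relying on the checkpoint's default generation settings, which can produce short outputs depending on its `generation_config`. The sketch below reuses `model`, `processor`, and `inputs` from the snippet above; the specific values (`max_new_tokens=256`, `num_beams=4`) are illustrative assumptions, not settings published with this model.

```python
# Hedged example: explicit decoding controls for longer transcriptions.
# The parameter values below are illustrative assumptions, not tuned settings.
generated_ids = model.generate(
    **inputs,
    max_new_tokens=256,   # bound the transcription length for a full page
    num_beams=4,          # beam search can improve transcription accuracy
    early_stopping=True,  # stop once all beams have finished
)
recognized_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print("📝 Recognized Text:", recognized_text)
```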