---
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- ocr
- vision-language
- qwen2-vl
- vila
- multimodal
license: apache-2.0
---

# Easy DeepOCR - VILA-Qwen2-VL-8B

A vision-language model fine-tuned for OCR tasks, based on the VILA architecture with Qwen2-VL-8B as the language backbone.

## Model Description

This model combines:
- **Language Model**: Qwen2-VL-8B
- **Vision Encoders**: SAM + CLIP
- **Architecture**: VILA (NVIDIA's visual language model framework)
- **Task**: Optical Character Recognition (OCR)

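
The `mm_projector/` component maps vision-encoder features into the language model's embedding space. As a generic, hedged sketch (the real module and its dimensions are defined by the checkpoint's `config.json`; the 1024/3584 sizes below are illustrative assumptions, not the repo's actual values), a typical two-layer MLP projector looks like:

```python
import torch
from torch import nn

# Hypothetical dimensions: 1024-d vision features projected to a 3584-d LLM
# hidden size. The real sizes are stored in the checkpoint's config.json.
VISION_DIM, LLM_DIM = 1024, 3584

# A common multimodal projector shape: Linear -> GELU -> Linear.
projector = nn.Sequential(
    nn.Linear(VISION_DIM, LLM_DIM),
    nn.GELU(),
    nn.Linear(LLM_DIM, LLM_DIM),
)

# One image -> 256 patch features -> 256 "visual tokens" in LLM embedding space.
patch_features = torch.randn(1, 256, VISION_DIM)
visual_tokens = projector(patch_features)
print(visual_tokens.shape)  # torch.Size([1, 256, 3584])
```

The projected visual tokens are then interleaved with text token embeddings before being fed to the language model.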
## Model Structure
```
easy_deepocr/
β”œβ”€β”€ config.json              # Model configuration
β”œβ”€β”€ llm/                     # Qwen2-VL-8B language model weights
β”œβ”€β”€ mm_projector/            # Multimodal projection layer
β”œβ”€β”€ sam_clip_ckpt/           # SAM and CLIP vision encoder weights
└── trainer_state.json       # Training state information
```

## Usage
```python
# The model ships custom code, so trust_remote_code is required.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("pkulium/easy_deepocr", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("pkulium/easy_deepocr", trust_remote_code=True)

# The exact inference call depends on the repository's custom code;
# see the repo files for the supported prompt and image format.
# image = ...
# text = ...
```
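
The checkpoint's custom code defines the actual preprocessing pipeline; as a generic sketch of typical input preparation for an OCR vision-language model (the `prepare_image` helper and the 1024-pixel bound below are illustrative assumptions, not part of this repo), one might do:

```python
from PIL import Image

def prepare_image(img: Image.Image, max_side: int = 1024) -> Image.Image:
    """Convert to RGB and downscale so the longest side is <= max_side."""
    img = img.convert("RGB")
    scale = max_side / max(img.size)
    if scale < 1.0:
        new_size = (round(img.size[0] * scale), round(img.size[1] * scale))
        img = img.resize(new_size, Image.LANCZOS)
    return img

# Demo with a synthetic image so the snippet is self-contained.
demo = Image.new("L", (2048, 512), color=255)  # blank grayscale "page"
out = prepare_image(demo)
print(out.mode, out.size)  # RGB (1024, 256)
```

For real use, replace the synthetic image with `Image.open(path)` and follow the repository's documented prompt format.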

## Training Details

- **Base Model**: Qwen2-VL-8B
- **Vision Encoders**: SAM + CLIP
- **Training Framework**: VILA
- **Training Type**: Pretraining for OCR tasks

## Intended Use

This model is designed for:
- Document OCR
- Scene text recognition
- Handwriting recognition
- Multi-language text extraction

## Limitations

- Model performance may vary with image quality and resolution
- Best suited for text-centric images such as documents, scene text, and handwriting

## Citation

If you use this model, please cite:
```bibtex
@misc{easy_deepocr,
  author = {Ming Liu},
  title = {Easy DeepOCR - VILA-Qwen2-VL-8B},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/pkulium/easy_deepocr}
}
```

## Acknowledgments

- [VILA](https://github.com/NVlabs/VILA) for the architecture
- [Qwen2-VL](https://github.com/QwenLM/Qwen2-VL) for the language model
- SAM and CLIP for vision encoding capabilities