---
language: en
license: mit
tags:
- vision
- document-ai
- donut
- ocr-free
- image-to-text
---

# document_parsing_donut_v1

## Overview
This model is an implementation of the **Donut** (Document Understanding Transformer) architecture. Unlike traditional OCR-based pipelines, it is OCR-free: it maps raw document images directly to structured JSON outputs. It is fine-tuned to parse complex layouts such as invoices, receipts, and technical forms without a separate text-recognition step.
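
For reference, below is a minimal inference sketch using the Hugging Face `transformers` Donut classes; the repository id (`your-org/document_parsing_donut_v1`) and the task prompt (`<s_parse>`) are placeholders, not values confirmed by this card:

```python
import re

import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Hypothetical hub id; substitute the actual path of this checkpoint.
MODEL_ID = "your-org/document_parsing_donut_v1"

processor = DonutProcessor.from_pretrained(MODEL_ID)
model = VisionEncoderDecoderModel.from_pretrained(MODEL_ID)
model.eval()

# Load a document image and convert it to the encoder's pixel format.
image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut checkpoints are steered by a task-specific start prompt;
# "<s_parse>" is an assumed placeholder for this model's prompt.
task_prompt = "<s_parse>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
    )

# Strip special tokens and the task prompt, then convert to JSON.
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "")
sequence = sequence.replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```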

## Model Architecture
The model uses an encoder-decoder framework that pairs a vision encoder with a text decoder:
- **Encoder**: A Swin Transformer that processes high-resolution images into visual features.
- **Decoder**: A BART-based multilingual transformer that generates text tokens in a sequence-to-sequence manner.
- **Objective**: The model is trained using a cross-entropy loss to predict the next token based on both the visual input and preceding tokens:
$$\mathcal{L} = -\sum_{t=1}^{T} \log P(y_t | y_{<t}, \mathbf{x})$$
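
As a minimal sketch (not this repository's training code), the objective above corresponds to standard next-token cross-entropy over the decoder's logits under teacher forcing:

```python
import torch
import torch.nn.functional as F

def donut_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Next-token cross-entropy; shapes are assumptions for illustration.

    logits: (batch, T, vocab) decoder outputs given the image and y_{<t}.
    labels: (batch, T) gold token ids, with prompt/padding set to -100 so
            those positions are ignored by the loss.
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * T, vocab)
        labels.reshape(-1),                   # (batch * T,)
        ignore_index=-100,
    )
```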

## Intended Use
- **Automated Data Entry**: Extracting key-value pairs from digitized business documents (an illustrative parse follows this list).
- **Layout Analysis**: Identifying structural components (headers, tables, footers) in multi-page PDFs.
- **Archival Digitization**: Converting historical scanned documents into searchable, structured data.
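
For illustration only, a parse targeted by the data-entry use case might look like the following; the schema and field names are invented for the example and depend entirely on the fine-tuning data:

```python
# Hypothetical output of processor.token2json() for a simple invoice.
example_output = {
    "header": {"invoice_no": "INV-0042", "date": "2023-08-14"},
    "items": [
        {"description": "Widget A", "qty": "2", "price": "9.99"},
        {"description": "Widget B", "qty": "1", "price": "24.50"},
    ],
    "total": "44.48",
}
```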

## Limitations
- **Resolution Sensitivity**: Performance drops significantly if images are scaled below 960x1280 pixels (a snippet for checking the processor's expected input size follows this list).
- **Language Bias**: While the decoder is multilingual, accuracy is highest for Latin-script documents; CJK and Arabic scripts require specialized fine-tuning.
- **Handwriting**: The model is optimized for printed text and may struggle with highly cursive or disorganized handwriting.
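
To see what resolution the processor actually feeds the encoder, you can inspect its image processor; a short sketch is below (the repository id is a placeholder as above, and the printed size is an assumption to verify, not a documented value):

```python
from transformers import DonutProcessor

# Hypothetical hub id; substitute the actual path of this checkpoint.
processor = DonutProcessor.from_pretrained("your-org/document_parsing_donut_v1")

# The image processor resizes every input to the resolution the encoder was
# trained at; images far below it lose the fine detail needed for small print.
print(processor.image_processor.size)  # expected on the order of 960x1280
```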