# Infinity-Doc-55K

<a><img src="assets/logo.png" height="16" width="16" style="display: inline"><b> Paper (coming soon) </b></a> |
<a href="https://github.com/infly-ai/INF-MLLM/tree/main/Infinity-Parser"><img src="https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" height="16" width="16" style="display: inline"><b> Github </b></a> |
<a>💬<b> Web Demo (coming soon) </b></a>

# Overview
Infinity-Doc-55K is a high-quality, diverse full-text parsing dataset comprising 55,066 real-world and synthetic scanned documents. It features rich layout variation and comprehensive structural annotations, enabling robust training of document parsing models, and covers a broad spectrum of document types, including financial reports, medical reports, academic reports, books, magazines, web pages, and synthetic documents.

# Data Construction Pipeline
To construct a comprehensive dataset for document parsing, we integrate real-world and synthetic data generation pipelines. The real-world pipeline collects diverse scanned documents from practical domains such as financial reports, medical records, and academic papers, and employs a multi-expert strategy with cross-validation to produce reliable pseudo-ground-truth annotations for structural elements such as text, tables, and formulas. The synthetic pipeline complements this by programmatically generating documents: it injects content from sources such as Wikipedia into predefined HTML layouts, renders the pages into scanned formats, and extracts precise ground-truth annotations directly from the original HTML. Together, the two pipelines yield a rich, diverse, and cost-effective dataset with accurate, well-aligned supervision, avoiding the imprecise or inconsistent labeling common in other datasets and enabling robust training of end-to-end document parsing models.
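The synthetic side of the pipeline can be sketched roughly as follows. The layout template, field names, and extraction logic below are illustrative assumptions, not the authors' actual implementation, and the rendering of each HTML page into a scanned-style image is omitted; the sketch only shows the core idea that the text annotation is recovered directly from the same HTML that produces the page, so content and supervision stay aligned by construction.

```python
from html.parser import HTMLParser
from string import Template

# Hypothetical single-page layout; the real pipeline uses a pool of
# predefined HTML layouts and injects content from sources like Wikipedia.
LAYOUT = Template(
    "<html><body>"
    "<h1>$title</h1>"
    "<div class='col'><p>$left</p></div>"
    "<div class='col'><p>$right</p></div>"
    "</body></html>"
)

class TextExtractor(HTMLParser):
    """Collects visible text chunks, mimicking ground-truth extraction
    directly from the source HTML."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def build_sample(title, left, right):
    """Returns (html, ground_truth) for one synthetic document.

    In the full pipeline the HTML would also be rendered into a
    scanned-style image; that step is omitted in this sketch."""
    html = LAYOUT.substitute(title=title, left=left, right=right)
    extractor = TextExtractor()
    extractor.feed(html)
    return html, extractor.chunks

html, ground_truth = build_sample(
    "Solar Energy", "Panels convert light.", "Storage smooths supply."
)
```

Because the annotation is extracted from the same source that generates the page, the supervision is exact by construction, which is what makes the synthetic pipeline cheap and label-noise-free.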

# Data Statistics
| Document Type | Number of Samples |
| --- | --- |
| Synthetic Documents | 6,546 |
| All | 55,066 |
# Data Format
```json
{
  "images": ["path/to/image"],
  ...
}
```
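A minimal sketch of parsing one record in this format: field names beyond `images` are elided in the schema above, so only that field is read here, and the record string itself is illustrative rather than an actual sample from the dataset.

```python
import json

# Illustrative record following the truncated schema shown above;
# real samples carry additional annotation fields elided here.
record_json = '{"images": ["path/to/image"]}'

record = json.loads(record_json)
image_paths = record["images"]  # relative paths to the scanned page image(s)
```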
# License

This dataset is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).