# Introduction
We are delighted to release Infinity-Parser2-2B, our latest state-of-the-art document understanding model. Building on our prior model, Infinity-Parser-7B, we have deeply optimized both our data engine and our multi-task reinforcement learning recipe, condensing robust multi-modal parsing capabilities into a highly efficient 2B-parameter model that offers massive speedups and brand-new zero-shot capabilities for real-world business scenarios.
## 🌟 Key Features
### 📚 Upgraded Data Engine
- **Massive & Diverse Data:** Added **1M+ full-text samples** across 9 document types (academic, financial, books, etc.).
- **Targeted Enrichment:** Injected 170K synthetic financial tables, 900K formulas, and 5K negative samples to mitigate hallucinations.
- **Adaptive Sampling:** Dynamically adjusts data distribution based on task importance and dataset size for balanced learning.
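
The adaptive-sampling idea above can be sketched as follows. This is a minimal illustration, not the released training configuration: the temperature exponent and the per-task importance weights are assumed values chosen only to show the mechanism.

```python
def adaptive_sampling_weights(dataset_sizes, task_importance, temperature=0.5):
    """Compute per-dataset sampling probabilities that balance raw dataset
    size against task importance. A temperature below 1 dampens the size
    term so that very large corpora do not dominate training."""
    # Dampen each dataset's size, then scale by its importance weight.
    raw = {
        name: (size ** temperature) * task_importance.get(name, 1.0)
        for name, size in dataset_sizes.items()
    }
    total = sum(raw.values())
    # Normalize into a probability distribution over datasets.
    return {name: w / total for name, w in raw.items()}

# Illustrative corpus sizes loosely echoing the numbers above; the
# importance weights are hypothetical.
sizes = {"full_text": 1_000_000, "tables": 170_000, "formulas": 900_000}
importance = {"full_text": 1.0, "tables": 1.5, "formulas": 1.2}
weights = adaptive_sampling_weights(sizes, importance)
```

With a sub-linear temperature, a ten-times-larger dataset receives far less than ten times the sampling mass, which is one simple way to realize the "balanced learning" described above.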
### 🧠 Multi-Task Reinforcement Learning
- **Verifiable Reward System:** A novel, automatically verifiable reward mechanism enables joint reinforcement learning (RL) across tasks.
- **Unified Optimization:** Simultaneously co-optimizes multiple tasks, ranging from full-text and table parsing to layout analysis and Document VQA.
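
One common way to make such a reward verifiable for parsing tasks is to score predictions against ground truth with a normalized edit distance; the sketch below assumes that formulation and is not necessarily the exact reward used in training.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via row-by-row dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n]

def parsing_reward(prediction: str, reference: str) -> float:
    """Verifiable reward in [0, 1]: 1 minus normalized edit distance.
    A perfect parse earns 1.0; unrelated output approaches 0."""
    if not prediction and not reference:
        return 1.0
    denom = max(len(prediction), len(reference))
    return 1.0 - edit_distance(prediction, reference) / denom
```

Because the reward is computed mechanically from the reference text, it needs no learned reward model, which is what makes it suitable for joint RL over heterogeneous tasks like table parsing and full-text extraction.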
### 📈 Breakthrough Parsing Performance
Despite its compact 2B size, Infinity-Parser2-2B significantly outperforms our previous 7B model:
- **Domain SOTA:** Achieves SOTA on financial benchmarks (`FinDocBench`, `FinTabBench`), surpassing frontier models like DeepSeek-OCR2, GLM-OCR, and PaddleOCR-VL-v1.5.
- **Public Benchmarks:** Achieves SOTA on `olmOCR-Bench` and `PubTabNet`, with highly competitive results on `OmniDocBench-v1.5` and `UniMERNet`.
- **General Multimodal:** Scores **66.06** on average across 7 benchmarks (e.g., MathVista, MMMU), beating the Qwen3-VL-2B base (+3.2pt).
### 🚀 Massive Inference Acceleration (3.68x Faster)
- **Optimized Architecture:** Transitioned to the highly efficient **Qwen3-VL-2B** base model.
- **Blazing Fast:** Inference throughput surged by **3.68x** (from 441 to **1,624 tokens/sec**), slashing latency and deployment costs with no loss in accuracy.
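
The quoted speedup follows directly from the two throughput figures:

```python
baseline_tps = 441   # Infinity-Parser-7B throughput (tokens/sec)
new_tps = 1_624      # Infinity-Parser2-2B throughput (tokens/sec)

# Speedup is simply the ratio of the two throughputs.
speedup = new_tps / baseline_tps
print(f"{speedup:.2f}x")  # prints "3.68x"
```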
### ✨ Expanded Capabilities (Zero-to-One Additions)
This release unlocks entirely new capabilities:
- **Chart Parsing:** Scores 79.91 on `Chart2Table`.
- **Chemical Structures:** Scores 68.05 on our new `ChemDraw-198` and 52.78 on `CoSyn-Chemical`.
- **Layout Analysis:** Achieves 64.92 on `DocLayNet` and 73.16 on `OmniDocBench-v1.5-layout`, matching dedicated layout models like DocLayout-YOLO.
# Architecture