Update README.md
README.md CHANGED
@@ -15,32 +15,13 @@

We are delighted to release Infinity-Parser2-2B, our latest state-of-the-art document understanding model. Compared to our prior model, Infinity-Parser-7B, we have deeply optimized our data engine and multi-task reinforcement learning. We have successfully condensed robust multi-modal parsing capabilities into a highly efficient 2B-parameter model, offering massive speedups and brand-new zero-shot capabilities for real-world business scenarios.

-##
-
-
-- **
-- **
-- **
-
-### 🧠 Multi-Task Reinforcement Learning
-- **Verifiable Reward System:** Designed a novel reward mechanism to support Joint Reinforcement Learning (RL).
-- **Unified Optimization:** Simultaneously co-optimizes multiple tasks, ranging from full-text and table parsing to layout analysis and Document VQA.
-
-### 📈 Breakthrough Parsing Performance
-Despite its compact 2B size, it significantly outperforms our previous 7B model:
-- **Domain SOTA:** Achieves SOTA on financial benchmarks (`FinDocBench`, `FinTabBench`), surpassing frontier models like DeepSeek-OCR2, GLM-OCR, and PaddleOCR-VL-v1.5.
-- **Public Benchmarks:** Achieves SOTA on `olmOCR-Bench` and `PubTabNet`, with highly competitive results on `OmniDocBench-v1.5` and `UniMERNet`.
-- **General Multimodal:** Scores **66.06** on average across 7 benchmarks (e.g., MathVista, MMMU), beating the Qwen3-VL-2B base (+3.2pt).
-
-### 🚀 Massive Inference Acceleration (3.68x Faster)
-- **Optimized Architecture:** Transitioned to the highly efficient **Qwen3-VL-2B** base model.
-- **Blazing Fast:** Inference throughput surged by **3.68x** (from 441 to **1,624 tokens/sec**), slashing latency and deployment costs without accuracy drop.
-
-### ✨ Expanded Capabilities (Zero-to-One Additions)
-Unlocked entirely new skills in this release:
-- **Chart Parsing:** Scores 79.91 on `Chart2Table`.
-- **Chemical Structures:** Scores 68.05 on our new `ChemDraw-198` and 52.78 on `CoSyn-Chemical`.
-- **Layout Analysis:** Achieves 64.92 on `DocLayNet` and 73.16 on `OmniDocBench-v1.5-layout`, matching dedicated layout models like DocLayout-YOLO.
+## Key Features
+
+- **Upgraded Data Engine**: We comprehensively upgraded our data engine by adding over 1 million diverse full-text samples, 170K synthetic financial tables, 900K formulas, and targeted negative samples to mitigate hallucinations. Combined with a dynamic adaptive sampling strategy, this ensures highly balanced and robust multi-task learning across various document types.
+- **Multi-Task Reinforcement Learning**: We designed a novel verifiable reward system to support Joint Reinforcement Learning (RL), enabling the model to seamlessly and simultaneously co-optimize multiple complex tasks, including full-text parsing, table and formula extraction, layout analysis, and document VQA.
+- **Breakthrough Parsing Performance**: Despite its compact 2B size, it significantly outperforms our previous 7B model. It achieves State-of-the-Art (SOTA) results on both in-house financial benchmarks (`FinDocBench`, `FinTabBench`)—surpassing frontier models like DeepSeek-OCR2 and GLM-OCR—and public sets like `olmOCR-Bench` and `PubTabNet`, while maintaining highly competitive general multimodal capabilities.
+- **Massive Inference Acceleration (3.68x Faster)**: By transitioning to the highly efficient Qwen3-VL-2B architecture, our inference throughput has surged by **3.68x** (jumping from 441 to 1,624 tokens/sec), dramatically slashing deployment latency and costs without compromising core parsing accuracy.
+- **Expanded Capabilities**: We have unlocked entirely new zero-shot skills in this release, achieving strong benchmark results in chart parsing (`Chart2Table`), chemical structure recognition (including our new `ChemDraw-198`), and layout analysis, where it successfully matches the performance of dedicated specialized models like DocLayout-YOLO.
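
The "Multi-Task Reinforcement Learning" bullet above refers to a verifiable reward system, but the release notes do not include the training code. The sketch below is therefore only an illustrative Python example of what a per-task verifiable reward for joint RL over parsing, layout, and VQA samples could look like; the task labels, function names, and metric choices are assumptions, not the model's published implementation.

```python
# Hypothetical sketch of a multi-task "verifiable" reward: each task type is
# scored by an automatic check against reference annotations, so no learned
# reward model is required. Names and metrics are illustrative assumptions.
from difflib import SequenceMatcher


def markup_similarity(pred: str, ref: str) -> float:
    """Similarity in [0, 1] between generated and reference markup (edit-based)."""
    return SequenceMatcher(None, pred, ref).ratio()


def verifiable_reward(task: str, pred: str, ref: str) -> float:
    """Scalar reward for a single rollout, chosen by the sample's task type."""
    if task in {"full_text", "table", "formula"}:
        # Parsing tasks: compare the generated Markdown/HTML/LaTeX to the reference.
        return markup_similarity(pred, ref)
    if task == "layout":
        # Layout analysis: a real system would match predicted boxes to reference
        # boxes (e.g. via IoU); string similarity is only a stand-in here.
        return markup_similarity(pred, ref)
    if task == "vqa":
        # Document VQA: exact match on the normalized answer string.
        return float(pred.strip().lower() == ref.strip().lower())
    raise ValueError(f"unknown task type: {task}")


# A joint-RL batch can mix tasks freely; a single policy is then optimized
# against all of these rewards at once (e.g. with a GRPO/PPO-style update).
batch = [
    ("table", "<table><tr><td>1</td></tr></table>", "<table><tr><td>1</td></tr></table>"),
    ("vqa", "Q3 2024", "q3 2024"),
    ("full_text", "# Report\nRevenue grew 12%.", "# Report\nRevenue grew 12%."),
]
print([verifiable_reward(t, p, r) for t, p, r in batch])  # -> [1.0, 1.0, 1.0]
```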
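Since the release describes the model as building on the Qwen3-VL-2B base, a quick-start call would presumably follow the standard Qwen3-VL image-text-to-text interface in recent versions of `transformers`. The snippet below is an unofficial sketch under that assumption; the repository id, prompt, and generation settings are placeholders rather than documented values.

```python
# Unofficial usage sketch: assumes the checkpoint exposes the standard
# Qwen3-VL-style chat interface via a recent transformers release.
# The repo id, prompt, and max_new_tokens below are placeholders.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Infinity-Parser2-2B"  # placeholder: substitute the actual repository id
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "page_001.png"},  # a local page image
            {"type": "text", "text": "Parse this document page into Markdown."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=2048)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```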

# Architecture