Update README.md

We are excited to release Infinity-Parser2-Pro, our latest flagship document understanding model.
## Key Features

- **Upgraded Data Engine**: We have comprehensively enhanced our synthetic data engine to support both fixed-layout and flexible-layout document formats. It generates over 1 million diverse full-text samples covering a wide range of document layouts and pairs them with a dynamic adaptive sampling strategy, ensuring balanced and robust multi-task learning across document types (a sampling sketch follows this list).
- **Multi-Task Reinforcement Learning**: We designed a novel verifiable reward system to support joint reinforcement learning (RL), enabling simultaneous co-optimization of multiple complex tasks, including doc2json and doc2markdown (see the reward sketch after this list).
- **Breakthrough Parsing Performance**: Infinity-Parser2-Pro substantially outperforms our previous 7B model, scoring 86.7% on olmOCR-Bench and surpassing frontier models such as DeepSeek-OCR-2, PaddleOCR-VL-1.5, and dots.mocr.
- **Inference Acceleration**: By adopting a highly efficient Mixture-of-Experts (MoE) architecture, we increased inference throughput by 21% (from 441 to 534 tokens/sec), reducing deployment latency and cost (see the routing note below).
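
The data-engine bullet attributes balance across document types to a dynamic adaptive sampling strategy. As a minimal sketch of how such a strategy can work, the snippet below re-weights layout categories by inverse frequency as batches are drawn; the pool names, the inverse-frequency rule, and the `temperature` knob are illustrative assumptions, not the released engine.

```python
import random

# Hypothetical per-layout sample pools; the real engine's categories are not public.
pools = {
    "fixed_layout":    ["scan_001", "scan_002", "scan_003"],
    "flexible_layout": ["web_001", "web_002"],
    "tables":          ["table_001"],
}

def adaptive_weights(seen_counts, temperature=0.5):
    """Inverse-frequency weights: under-sampled layouts get boosted.

    temperature=1.0 is plain inverse frequency; temperature=0.0 is uniform.
    """
    inv = {k: (1.0 / max(c, 1)) ** temperature for k, c in seen_counts.items()}
    total = sum(inv.values())
    return {k: v / total for k, v in inv.items()}

def draw_batch(pools, seen_counts, batch_size=4):
    weights = adaptive_weights(seen_counts)
    layouts = random.choices(list(pools), weights=[weights[k] for k in pools], k=batch_size)
    batch = []
    for layout in layouts:
        batch.append((layout, random.choice(pools[layout])))
        seen_counts[layout] += 1  # updating counts as we draw is what makes it "dynamic"
    return batch

seen = {k: 1 for k in pools}
print(draw_batch(pools, seen))
```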
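
A reward is "verifiable" when a program can compute it from the model output itself rather than relying on a learned judge. The sketch below shows one way a joint reward could be wired for the two tasks named above; the JSON field-overlap score and the `difflib` similarity are stand-ins chosen for illustration, not our actual reward system.

```python
import difflib
import json

def doc2json_reward(prediction: str, reference: dict) -> float:
    """Verifiable reward: parse the prediction, then score key/value overlap."""
    try:
        pred = json.loads(prediction)
    except json.JSONDecodeError:
        return 0.0                      # invalid JSON earns nothing
    if not isinstance(pred, dict):
        return 0.0
    hits = sum(1 for k, v in reference.items() if pred.get(k) == v)
    return hits / max(len(reference), 1)

def doc2markdown_reward(prediction: str, reference: str) -> float:
    """Verifiable reward: normalized similarity to the ground-truth markdown."""
    return difflib.SequenceMatcher(None, prediction, reference).ratio()

def joint_reward(task: str, prediction: str, reference) -> float:
    # One scalar interface so both tasks can be co-optimized in the same RL loop.
    if task == "doc2json":
        return doc2json_reward(prediction, reference)
    if task == "doc2markdown":
        return doc2markdown_reward(prediction, reference)
    raise ValueError(f"unknown task: {task}")

print(joint_reward("doc2json", '{"title": "Report"}', {"title": "Report"}))  # 1.0
print(joint_reward("doc2markdown", "# Report", "# Report\n"))                # ~0.94
```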
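
The throughput figure checks out: 534 / 441 ≈ 1.21, i.e. roughly 21% faster. The usual mechanism behind MoE inference savings is that each token activates only the top-k of E experts. The routing step below is a generic illustration of that idea (softmax over the top-k router logits, as in common open MoE designs), not Infinity-Parser2-Pro's actual layer.

```python
import math

# Sanity-check the bullet's arithmetic: 534 vs. 441 tokens/sec.
print(f"{534 / 441 - 1:.1%} faster")  # -> 21.1% faster

def route(router_logits, k=2):
    """Generic top-k MoE routing: pick k experts, softmax their logits.

    Only the chosen experts run for this token, so per-token compute scales
    with k/E of the expert parameters -- the source of MoE's speedups.
    """
    top = sorted(range(len(router_logits)), key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

print(route([0.1, 2.0, -1.0, 1.5]))  # experts 1 and 3 carry this token
```
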
# Performance