---
license: other
license_name: youtu-parsing
license_link: https://huggingface.co/tencent/Youtu-Parsing/blob/main/LICENSE.txt
pipeline_tag: image-text-to-text
base_model:
  - tencent/Youtu-LLM-2B
library_name: transformers
base_model_relation: finetune
---

## 🎯 Introduction

Youtu-Parsing is a specialized document parsing model built on the open-source Youtu-LLM-2B foundation model. It extends the base model with a prompt-guided framework and a NaViT-style dynamic visual encoder, offering enhanced parsing of diverse document elements including text, tables, formulas, and charts. The model incorporates an efficient parallel decoding mechanism that significantly accelerates inference, making it practical for real-world document analysis. We share Youtu-Parsing with the community to facilitate research and development in document understanding.

## ✨ Key Features

πŸ“„ Document Structure Preservation

- **Text Localization**: Accurately detects and localizes text regions with pixel-level precision, ensuring no content is missed or misplaced across diverse document layouts.
- **Reading Order Restoration**: Intelligently reconstructs the logical reading sequence of document content, maintaining proper flow across columns, sections, and pages for coherent understanding (a simplified sketch of the ordering problem follows this list).
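
As a rough intuition for what reading-order restoration entails, here is a minimal, hypothetical geometric heuristic; Youtu-Parsing learns this end-to-end rather than applying hand-written rules:

```python
# Illustrative only: order detected text boxes (x, y, w, h) by grouping
# them into columns, then reading each column top-to-bottom. The model
# restores reading order end-to-end; this just shows the problem shape.
def naive_reading_order(boxes, column_gap=50):
    boxes = sorted(boxes, key=lambda b: b[0])            # left-to-right by x
    columns, current = [], [boxes[0]]
    for box in boxes[1:]:
        if box[0] - current[-1][0] > column_gap:         # large x jump: new column
            columns.append(current)
            current = [box]
        else:
            current.append(box)
    columns.append(current)
    ordered = []
    for col in columns:
        ordered.extend(sorted(col, key=lambda b: b[1]))  # top-to-bottom by y
    return ordered

# Two-column page: left-column and right-column boxes arrive interleaved.
boxes = [(50, 100, 200, 20), (400, 80, 200, 20), (50, 140, 200, 20), (400, 120, 200, 20)]
print(naive_reading_order(boxes))  # left column first, then right column
```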

πŸ“Š Advanced Content Recognition

- **Text Recognition**: Provides precise text recognition across diverse scenarios.
- **Formula Recognition**: Automatically converts mathematical expressions to LaTeX format.
- **Table Recognition**: Automatically detects tables and converts them to HTML format.
- **Chart Recognition**: Converts charts to Markdown tables, and converts mind maps and flow charts to Mermaid format (hand-written examples of these target formats follow this list).
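
For concreteness, the snippet below shows hand-written examples of the target formats named above; these are illustrative only, not literal Youtu-Parsing outputs, and the exact serialization may differ:

```python
# Hand-written examples of the output formats described above
# (illustrative only, not actual model outputs):
sample_outputs = {
    # Formula recognition -> LaTeX
    "formula": r"\frac{1}{n} \sum_{i=1}^{n} x_i",
    # Table recognition -> HTML
    "table": "<table><tr><th>Item</th><th>Qty</th></tr>"
             "<tr><td>Widget</td><td>3</td></tr></table>",
    # Chart recognition -> Markdown table
    "chart": "| Year | Revenue |\n|------|---------|\n| 2024 | 1.2 |",
    # Mind map / flow chart recognition -> Mermaid
    "flowchart": "graph TD; Scan --> Parse --> Export;",
}
```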

### ⚡ High-Performance Inference

- **Token Parallelism**: Enables simultaneous inference of multiple tokens per decoding step, achieving a 5-11x speedup.
- **Query Parallelism**: Batches multiple queries together to maximize the benefit of token parallelism, providing an additional 2x speedup on top of it (a toy illustration follows this list).
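
The toy loop below illustrates why emitting several tokens per forward pass reduces wall-clock decoding time; it is a conceptual sketch, not Youtu-Parsing's actual decoder:

```python
# Toy illustration only: with token parallelism, each forward pass emits
# k tokens instead of 1, so the number of expensive forward passes drops
# by roughly a factor of k.
def forward_pass(context, k=1):
    # Stand-in for one model forward pass; returns the next k tokens.
    return [f"tok{len(context) + i}" for i in range(k)]

def decode(n_tokens, k=1):
    tokens, passes = [], 0
    while len(tokens) < n_tokens:
        tokens.extend(forward_pass(tokens, k))
        passes += 1
    return tokens[:n_tokens], passes

_, seq_passes = decode(64, k=1)  # one token per pass: 64 passes
_, par_passes = decode(64, k=8)  # eight tokens per pass: 8 passes
print(seq_passes, par_passes)    # 64 8
```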

πŸ“Š Performance

1. OminiDocBench v1.5

2. olmOCR

πŸš€ Quick Start

### Install packages

```bash
conda create -n youtu_parsing python=3.10
conda activate youtu_parsing
pip install git+https://github.com/TencentCloudADP/youtu-parsing.git#subdirectory=youtu_hf_parser

# Install flash-attn 2 from a prebuilt wheel.
# For CUDA 12.x + PyTorch 2.6 + Python 3.10 + Linux x86_64:
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# Alternative: build from PyPI (requires the CUDA toolkit; build isolation
# is disabled so the build can see the already-installed torch)
pip install flash-attn==2.7.0 --no-build-isolation
```
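
As an optional sanity check that flash-attn built against your CUDA/PyTorch stack:

```bash
# Should import cleanly and print the installed version
python -c "import flash_attn; print(flash_attn.__version__)"
```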

### Usage with transformers

```python
from youtu_hf_parser import YoutuOCRParserHF

# Paths to the parsing model and the optional angle-correction model
# (point these at your downloaded weights)
model_path = "tencent/Youtu-Parsing"
angle_correct_model_path = "path/to/angle_correct_model"

# Initialize the parser
parser = YoutuOCRParserHF(
    model_path=model_path,
    enable_angle_correct=True,  # Set to False to disable angle correction
    angle_correct_model_path=angle_correct_model_path,
)

# Parse an image; results are written to output_dir
parser.parse_file(input_path="path/to/page.png", output_dir="output/")
```
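
To process many pages, the same `parse_file` call can simply be reused across files; the loop below is an illustrative pattern, with only `parse_file` assumed from the API:

```python
from pathlib import Path

# Reuse one parser instance across a directory of page images
for image in sorted(Path("pages/").glob("*.png")):
    parser.parse_file(input_path=str(image), output_dir="output/")
```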

## 🎨 Visualization

### Text Recognition

### Formula Recognition

### Table Recognition

### Chart Recognition

## 🤝 Acknowledgements

We would like to thank Youtu-LLM, OmniDocBench, olmOCR, dots.ocr, MinerU, PaddleOCR, and PSENet for providing model weights, benchmarks, and valuable code. We also appreciate everyone's contributions to this open-source project!

πŸ“š Citation

If you find our work useful in your research, please consider citing the following paper:

```bibtex
@article{youtu-parsing,
  title={Youtu-Parsing: Perception, Structuring and Recognition via High-Parallelism Decoding},
  author={Tencent Youtu Lab},
  year={2026},
  eprint={2601.20430},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.20430},
}

@article{youtu-vl,
  title={Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision},
  author={Tencent Youtu Lab},
  year={2026},
  eprint={2601.19798},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.19798},
}

@article{youtu-llm,
  title={Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models},
  author={Tencent Youtu Lab},
  year={2025},
  eprint={2512.24618},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.24618},
}
```