---
license: cc-by-nc-4.0
language:
- en
- ar
- hi
tags:
- Document_Understanding
- Document_Packet_Splitting
- Document_Comprehension
- Document_Classification
- Document_Recognition
- Document_Segmentation
pretty_name: DocSplit Benchmark
size_categories:
- 1M<n<10M
---

# DocSplit Benchmark

Strategy difficulty at a glance:

- **Mono-Seq**: Easiest (>93% packet accuracy)
- **Poly-Rand**: Most challenging (20-30% degradation for weaker models)

## Project Structure

```
doc-split-benchmark/
├── README.md
├── requirements.txt              # All dependencies
├── src/
│   ├── assets/                   # Asset creation from PDFs
│   │   ├── run.py                # Main script
│   │   ├── models.py             # Document models
│   │   └── services/
│   │       ├── pdf_loader.py
│   │       ├── textract_ocr.py
│   │       └── asset_writer.py
│   │
│   └── benchmarks/               # Benchmark generation
│       ├── run.py                # Main script
│       ├── models.py             # Benchmark models
│       └── services/
│           ├── asset_loader.py
│           ├── split_manager.py
│           ├── benchmark_generator.py
│           ├── benchmark_writer.py
│           └── strategies/
│               ├── mono_seq.py   # DocSplit-Mono-Seq
│               ├── mono_rand.py  # DocSplit-Mono-Rand
│               ├── poly_seq.py   # DocSplit-Poly-Seq
│               ├── poly_int.py   # DocSplit-Poly-Int
│               └── poly_rand.py  # DocSplit-Poly-Rand
├── notebooks/                    # Interactive examples
│   ├── 01_create_assets.ipynb
│   ├── 02_create_benchmarks.ipynb
│   └── 03_analyze_benchmarks.ipynb
└── data/                         # Generated data (not in repo)
    ├── raw_data/                 # Downloaded PDFs
    ├── assets/                   # Extracted images + OCR
    └── benchmarks/               # Generated benchmarks
```

## Usage

### Step 1: Create Assets

Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).

#### Option A: AWS Textract OCR (Default)

Best for English documents. Processes all document categories with Textract.

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --s3-prefix textract-temp \
    --workers 10 \
    --save-mapping
```

**Requirements:**
- AWS credentials configured (`aws configure`)
- S3 bucket for temporary file uploads
- No GPU required

#### Option B: Hybrid OCR (Textract + DeepSeek)

Uses Textract for most categories and DeepSeek OCR only for the "language" category (multilingual documents).

**Note:** For this project, DeepSeek OCR was used only for the "language" category and executed in AWS SageMaker AI with GPU instances (e.g., `ml.g6.xlarge`).

**1. Install flash-attention (required for DeepSeek):**

```bash
# For CUDA 12.x with Python 3.12:
cd /mnt/sagemaker-nvme  # Use larger disk for downloads
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl

# For other CUDA/Python versions: https://github.com/Dao-AILab/flash-attention/releases
```

**2. Set cache directory (important for SageMaker):**

```bash
# SageMaker: Use the larger NVMe disk instead of the small home directory
export HF_HOME=/mnt/sagemaker-nvme/cache
export TRANSFORMERS_CACHE=/mnt/sagemaker-nvme/cache
```

**3. Run asset creation:**

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --use-deepseek-for-language \
    --workers 10 \
    --save-mapping
```

**Requirements:**
- NVIDIA GPU with CUDA support (tested on `ml.g6.xlarge`)
- ~10 GB+ disk space for model downloads
- flash-attention library installed
- AWS credentials (for Textract on non-language categories)
- S3 bucket (for Textract on non-language categories)

**How it works:**
- Documents in `raw_data/language/` → DeepSeek OCR (GPU)
- All other categories → AWS Textract (cloud)

#### Parameters

- `--raw-data-path`: Directory containing source PDFs organized by document type
- `--output-path`: Where to save extracted assets (images + OCR text)
- `--s3-bucket`: S3 bucket name (required for Textract)
- `--s3-prefix`: S3 prefix for temporary files (default: `textract-temp`)
- `--workers`: Number of parallel processes (default: 10)
- `--save-mapping`: Save a CSV mapping document IDs to file paths
- `--use-deepseek-for-language`: Use DeepSeek OCR for the "language" category only
- `--limit`: Process only N documents (useful for testing)

#### What Happens

1. Scans the `raw_data/` directory for PDFs organized by document type
2. Extracts each page as a 300 DPI PNG image
3. Runs OCR (Textract or DeepSeek) to extract text (steps 2-3 are sketched below)
4. Saves structured assets in `output-path/{doc_type}/{doc_name}/`
5. Optionally creates `document_mapping.csv` listing all processed documents
6. These assets become the input for Step 2 (benchmark generation)
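As a rough illustration of steps 2-3, the minimal sketch below renders a PDF's pages at 300 DPI with PyMuPDF and OCRs each image with Textract's synchronous `detect_document_text` call. This is a sketch, not the toolkit's code: the pipeline in `src/assets/` uses Textract's asynchronous, S3-based flow (which is why an S3 bucket is required), and the path here is a placeholder.

```python
import boto3
import fitz  # PyMuPDF

# Placeholder input; the real pipeline scans raw_data/{doc_type}/ directories.
pdf_path = "data/raw_data/invoice/doc1.pdf"
textract = boto3.client("textract")

doc = fitz.open(pdf_path)
for page_num, page in enumerate(doc, start=1):
    # Render at 300 DPI (PDF user space is 72 DPI, so scale by 300/72).
    pix = page.get_pixmap(matrix=fitz.Matrix(300 / 72, 300 / 72))
    png_bytes = pix.tobytes("png")

    # Synchronous OCR on the single-page image (subject to Textract's input
    # size limits); the toolkit itself uploads to S3 and uses the async API.
    response = textract.detect_document_text(Document={"Bytes": png_bytes})
    lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
    print(f"page {page_num}: {len(lines)} lines of text")
```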
#### Output Structure

```
data/assets/
└── {doc_type}/{filename}/
    ├── original/{filename}.pdf
    └── pages/{page_num}/
        ├── page-{num}.png           # 300 DPI image
        └── page-{num}-textract.md   # OCR text
```

### Step 2: Generate Benchmarks

Create DocSplit benchmarks with train/test/validation splits.

```bash
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks \
    --num-docs-train 800 \
    --num-docs-test 200 \
    --num-docs-val 500 \
    --size small \
    --random-seed 42
```

**Parameters:**
- `--strategy`: Benchmark strategy: `mono_seq`, `mono_rand`, `poly_seq`, `poly_int`, `poly_rand`, or `all` (default: `all`)
- `--assets-path`: Directory containing assets from Step 1 (default: `data/assets`)
- `--output-path`: Where to save benchmarks (default: `data/benchmarks`)
- `--num-docs-train`: Number of spliced documents for training (default: 8)
- `--num-docs-test`: Number of spliced documents for testing (default: 5)
- `--num-docs-val`: Number of spliced documents for validation (default: 2)
- `--size`: Benchmark size: `small` (5-20 pages) or `large` (20-500 pages) (default: `small`)
- `--split-mapping`: Path to the split mapping JSON (default: `data/metadata/split_mapping.json`)
- `--random-seed`: Seed for reproducibility (default: 42)

**What Happens:**
1. Loads all document assets from Step 1
2. Creates or loads a stratified train/test/val split (60/25/15 ratio)
3. Generates spliced documents by concatenating/shuffling pages per strategy (steps 2-3 are sketched below)
4. Saves benchmark CSV files with ground-truth labels
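As a rough sketch of steps 2-3, assuming a toy asset inventory and hypothetical helper names (`stratified_split`, `splice_poly_seq`), the code below assigns documents to splits per doc type and builds one poly_seq-style spliced document whose per-page labels mirror the ground-truth fields shown in the output format section further down. The actual logic lives in `src/benchmarks/services/` and its `strategies/` modules.

```python
import random
from collections import defaultdict

# Hypothetical inventory from Step 1: (doc_type, doc_name, num_pages).
ASSETS = [("invoice", f"invoice_{i}", 3) for i in range(10)] + \
         [("letter", f"letter_{i}", 2) for i in range(10)]

def stratified_split(assets, ratios=(0.60, 0.25, 0.15), seed=42):
    """Step 2 sketch: assign documents to train/test/validation per doc type."""
    rng = random.Random(seed)  # seeded, as with --random-seed
    by_type = defaultdict(list)
    for asset in assets:
        by_type[asset[0]].append(asset)
    splits = {"train": [], "test": [], "validation": []}
    for docs in by_type.values():
        rng.shuffle(docs)
        n_train = round(len(docs) * ratios[0])
        n_test = round(len(docs) * ratios[1])
        splits["train"] += docs[:n_train]
        splits["test"] += docs[n_train:n_train + n_test]
        splits["validation"] += docs[n_train + n_test:]
    return splits

def splice_poly_seq(docs, num_source_docs=2, seed=42):
    """Step 3 sketch (poly_seq-style): concatenate several documents' pages
    in their original order, recording one ground-truth row per page."""
    rng = random.Random(seed)
    sources = rng.sample(docs, num_source_docs)
    ground_truth, page_num = [], 1
    for doc_type, doc_name, num_pages in sources:
        for source_page in range(1, num_pages + 1):  # sequential page order
            ground_truth.append({
                "page_num": page_num,
                "doc_type": doc_type,
                "source_doc": doc_name,
                "source_page": source_page,
            })
            page_num += 1
    return ground_truth

splits = stratified_split(ASSETS)
for row in splice_poly_seq(splits["train"]):
    print(row)
```

A `*_rand` strategy would presumably shuffle the concatenated pages before renumbering `page_num`, which is what makes Poly-Rand the most challenging configuration.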
**Output Structure:**

```
data/
├── metadata/
│   └── split_mapping.json   # Document split assignments (shared across strategies)
└── benchmarks/
    └── {strategy}/          # e.g., poly_seq, mono_rand
        └── {size}/          # small or large
            ├── train.csv
            ├── test.csv
            └── validation.csv
```

## Interactive Notebooks

Explore the toolkit with Jupyter notebooks:

1. **`notebooks/01_create_assets.ipynb`** - Create assets from PDFs
2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics

## Benchmark Output Format

Each benchmark JSON contains:

```json
{
  "benchmark_name": "poly_seq",
  "strategy": "PolySeq",
  "split": "train",
  "created_at": "2026-01-30T12:00:00",
  "documents": [
    {
      "spliced_doc_id": "splice_0001",
      "source_documents": [
        {"doc_type": "invoice", "doc_name": "doc1", "pages": [1, 2, 3]},
        {"doc_type": "letter", "doc_name": "doc2", "pages": [1, 2]}
      ],
      "ground_truth": [
        {"page_num": 1, "doc_type": "invoice", "source_doc": "doc1", "source_page": 1},
        {"page_num": 2, "doc_type": "invoice", "source_doc": "doc1", "source_page": 2},
        ...
      ],
      "total_pages": 5
    }
  ],
  "statistics": {
    "total_spliced_documents": 1000,
    "total_pages": 7500,
    "unique_doc_types": 16
  }
}
```

## Requirements

- Python 3.8+
- AWS credentials (for Textract OCR)
- Dependencies: `boto3`, `loguru`, `pymupdf`, `pillow`

## Citation

If you use this toolkit, please cite the DocSplit paper:

```bibtex
@article{docsplit2025,
  title={DocSplit: A Comprehensive Benchmark Dataset and Evaluation Approach for Document Packet Recognition and Splitting},
  year={2025}
}
```

## License

Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: CC-BY-NC-4.0

## How to cite this dataset

```bibtex
@misc{docsplit,
  author = {Islam, Md Mofijul and Salekin, Md Sirajus and Balakrishnan, Nivedha and Bishop, Vincil C. and Jain, Niharika and Romo, Spencer and Strahan, Bob and Xie, Boyi and Socolinsky, Diego A.},
  title = {DocSplit: A Comprehensive Benchmark Dataset and Evaluation Approach for Document Packet Recognition and Splitting},
  howpublished = {\url{https://huggingface.co/datasets/amazon/doc_split/}},
  url = {https://huggingface.co/datasets/amazon/doc_split/},
  type = {dataset},
  year = {2026},
  month = {February},
  timestamp = {2026-02-04},
  note = {Accessed: 2026-02-04}
}
```