---
license: cc-by-nc-4.0
language:
- en
- ar
- hi
tags:
- Document_Understanding
- Document_Packet_Splitting
- Document_Comprehension
- Document_Classification
- Document_Recognition
- Document_Segmentation
pretty_name: DocSplit Benchmark
size_categories:
- 1M<n<10M
---

**In addition to the dataset, we release this repository containing the complete toolkit for generating the benchmark datasets, along with Jupyter notebooks for data analysis.**

## Quick Start: Load the Dataset

```python
from datasets import load_dataset

# Load all splits
ds = load_dataset("amazon/doc_split")

# Or load a single split
test = load_dataset("amazon/doc_split", split="test")
```

Each row represents a spliced document packet:

```python
doc = ds["train"][0]
print(doc["doc_id"])             # UUID for this packet
print(doc["total_pages"])        # Total pages in the packet
print(len(doc["subdocuments"]))  # Number of constituent documents

for sub in doc["subdocuments"]:
    print(f"  {sub['doc_type_id']}: {len(sub['page_ordinals'])} pages")
```

> **Note:** The `image_path` and `text_path` fields in each page reference assets that are not included in the dataset download. See [Data Formats](#data-formats) for details.

# DocSplit: Document Packet Splitting Benchmark Generator

A toolkit for creating benchmark datasets to test document packet splitting systems. Document packet splitting is the task of separating concatenated multi-page documents into individual documents with correct page ordering.

## Overview

This toolkit generates five benchmark datasets of varying complexity to test how well models can:

1. **Detect document boundaries** within concatenated packets
2. **Classify document types** accurately
3. **Reconstruct correct page ordering** within each document
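
Given the ground-truth schema described in the next section, a prediction can be checked against all three criteria at once. The following is a toy, purely illustrative comparison (it is not the evaluation approach from the DocSplit paper): it treats a prediction as a list of subdocuments with the same fields as the ground truth and scores a subdocument as correct only when type, boundaries, and ordering all match.

```python
def exact_match_fraction(gold_subdocuments, predicted_subdocuments):
    """Toy score: fraction of ground-truth subdocuments reproduced exactly.

    A predicted subdocument counts only if its document type and its full
    page-ordinal sequence (boundaries + ordering) match a gold subdocument.
    Illustrative sketch only; not the paper's evaluation protocol.
    """
    gold = {(s["doc_type_id"], tuple(s["page_ordinals"])) for s in gold_subdocuments}
    pred = {(s["doc_type_id"], tuple(s["page_ordinals"])) for s in predicted_subdocuments}
    return len(gold & pred) / len(gold) if gold else 1.0
```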

## Dataset Schema

When loaded via `load_dataset()`, each row contains:

| Field | Type | Description |
|-------|------|-------------|
| `doc_id` | string | UUID identifying the spliced packet |
| `total_pages` | int | Total number of pages in the packet |
| `subdocuments` | list | Array of constituent documents |

Each subdocument contains:

| Field | Type | Description |
|-------|------|-------------|
| `doc_type_id` | string | Document type category |
| `local_doc_id` | string | Identifier within the packet |
| `group_id` | string | Group identifier |
| `page_ordinals` | list[int] | Page positions within the packet |
| `pages` | list | Per-page metadata (`image_path`, `text_path`, `original_doc_name`) |
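
For example, these fields are enough to recover a per-page labelling of a packet. A minimal sketch (assuming, as in the ground truth, that each page ordinal belongs to exactly one subdocument):

```python
def page_labels(packet):
    """Map each page ordinal in a packet to (doc_type_id, local_doc_id)."""
    labels = {}
    for sub in packet["subdocuments"]:
        for ordinal in sub["page_ordinals"]:
            labels[ordinal] = (sub["doc_type_id"], sub["local_doc_id"])
    return labels

# Example usage with a row from load_dataset():
# labels = page_labels(ds["train"][0])
# assert len(labels) == ds["train"][0]["total_pages"]
```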

## Document Source

We use the documents from **RVL-CDIP-N-MP**:
[https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp](https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp)

## Quick Start

### Clone from Hugging Face

This repository is hosted on Hugging Face at: [https://huggingface.co/datasets/amazon/doc_split](https://huggingface.co/datasets/amazon/doc_split)

Choose one of the following methods to download the repository:

#### Option 1: Using Git with Git LFS (Recommended)

Git LFS (Large File Storage) is required for Hugging Face datasets as they often contain large files.

**Install Git LFS:**

```bash
# Linux (Ubuntu/Debian):
sudo apt-get install git-lfs
git lfs install

# macOS (Homebrew):
brew install git-lfs
git lfs install

# Windows: Download from https://git-lfs.github.com, then run:
# git lfs install
```

**Clone the repository:**

```bash
git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
pip install -r requirements.txt
```

#### Option 2: Using the Hugging Face CLI

```bash
# 1. Install the Hugging Face Hub CLI
pip install -U "huggingface_hub[cli]"

# 2. (Optional) Log in if authentication is required
huggingface-cli login

# 3. Download the dataset
huggingface-cli download amazon/doc_split --repo-type dataset --local-dir doc_split

# 4. Navigate and install dependencies
cd doc_split
pip install -r requirements.txt
```

#### Option 3: Using the Python SDK (huggingface_hub)

```python
from huggingface_hub import snapshot_download

# Download the entire dataset repository
local_dir = snapshot_download(
    repo_id="amazon/doc_split",
    repo_type="dataset",
    local_dir="doc_split"
)
print(f"Dataset downloaded to: {local_dir}")
```

Then install dependencies:

```bash
cd doc_split
pip install -r requirements.txt
```

#### Tips

- **Check Disk Space**: Hugging Face datasets can be large. Check the "Files and versions" tab on the Hugging Face page to see the total size before downloading.
- **Partial Clone**: If you only need specific files (e.g., code without large data files), use:

```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
# Then selectively pull specific files:
git lfs pull --include="*.py"
```

---

## Usage

### Step 1: Create Assets

Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).

> **Note:** The code defaults for `--raw-data-path` (`../raw_data`) and `--output-path` (`../processed_assets`) assume running from within `src/assets/`. When running from the repo root, pass explicit paths as shown below.

#### Option A: AWS Textract OCR (Default)

> **⚠️ Requires Python 3.12:** This command uses `amazon-textract-textractor`, which has C extension dependencies that may not build on Python 3.13+. See [Requirements](#requirements).

Best for English documents. Processes all document categories with Textract.

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --s3-prefix textract-temp \
    --workers 10 \
    --save-mapping
```

**Requirements:**

- AWS credentials configured (`aws configure`)
- S3 bucket for temporary file uploads
- No GPU required

#### Option B: Hybrid OCR (Textract + DeepSeek)

Uses Textract for most categories and DeepSeek OCR only for the "language" category (multilingual documents).

**Note:** For this project, DeepSeek OCR was used only for the "language" category and was run on AWS SageMaker AI with GPU instances (e.g., `ml.g6.xlarge`).

**1. Install flash-attention (Required for DeepSeek):**

```bash
# For CUDA 12.x with Python 3.12:
cd /mnt/sagemaker-nvme  # Use larger disk for downloads
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl

# For other CUDA/Python versions: https://github.com/Dao-AILab/flash-attention/releases
```

**2. Set cache directory (Important for SageMaker):**

```bash
# SageMaker: Use larger NVMe disk instead of small home directory
export HF_HOME=/mnt/sagemaker-nvme/cache
export TRANSFORMERS_CACHE=/mnt/sagemaker-nvme/cache
```

**3. Run asset creation:**

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --use-deepseek-for-language \
    --workers 10 \
    --save-mapping
```

**Requirements:**

- NVIDIA GPU with CUDA support (tested on ml.g6.xlarge)
- ~10GB+ disk space for model downloads
- flash-attention library installed
- AWS credentials (for Textract on non-language categories)
- S3 bucket (for Textract on non-language categories)

**How it works:**

- Documents in `raw_data/language/` → DeepSeek OCR (GPU)
- All other categories → AWS Textract (cloud)

#### Parameters

- `--raw-data-path`: Directory containing source PDFs organized by document type
- `--output-path`: Where to save extracted assets (images + OCR text)
- `--s3-bucket`: S3 bucket name (required for Textract)
- `--s3-prefix`: S3 prefix for temporary files (default: textract-temp)
- `--workers`: Number of parallel processes (default: 10)
- `--save-mapping`: Save CSV mapping document IDs to file paths
- `--use-deepseek-for-language`: Use DeepSeek OCR for "language" category only
- `--limit`: Process only N documents (useful for testing)

#### What Happens

1. Scans `raw_data/` directory for PDFs organized by document type
2. Extracts each page as 300 DPI PNG image
3. Runs OCR (Textract or DeepSeek) to extract text
4. Saves structured assets in `output-path/{doc_type}/{doc_name}/`
5. Optionally creates `document_mapping.csv` listing all processed documents
6. These assets become the input for Step 2 (benchmark generation)

#### Output Structure

```
data/assets/
└── {doc_type}/{filename}/
    ├── original/{filename}.pdf
    └── pages/{page_num}/
        ├── page-{num}.png           # 300 DPI image
        └── page-{num}-textract.md   # OCR text
```
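
As a quick sanity check after asset creation, you can walk this tree and count the extracted page images per document type. A small sketch (not part of the toolkit; it only assumes the directory layout shown above):

```python
from collections import Counter
from pathlib import Path

def count_pages(assets_root="data/assets"):
    """Count page images per document type under the asset tree shown above."""
    root = Path(assets_root)
    counts = Counter(
        png.relative_to(root).parts[0]  # {doc_type} is the first path component
        for png in root.glob("*/*/pages/*/page-*.png")
    )
    return dict(counts)

print(count_pages())
```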

## Interactive Notebooks

Explore the toolkit with Jupyter notebooks:

1. **`notebooks/01_create_assets.ipynb`** - Create assets from PDFs
2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics

## Data Formats

The dataset provides two complementary formats for each benchmark:

### Ground Truth JSON (used by `load_dataset`)

One JSON file per document packet in `datasets/{strategy}/{size}/ground_truth_json/{split}/`:

```json
{
  "doc_id": "...",
  "total_pages": ...,
  "subdocuments": [
    {
      "doc_type_id": "...",
      "local_doc_id": "...",
      "group_id": "...",
      "page_ordinals": [...],
      "pages": [
        {
          "page": 1,
          "original_doc_name": "...",
          "image_path": "rvl-cdip-nmp-assets/...",
          "text_path": "rvl-cdip-nmp-assets/...",
          "local_doc_id_page_ordinal": ...
        }
      ]
    }
  ]
}
```

### CSV (flat row-per-page format)

One CSV per split in `datasets/{strategy}/{size}/`:

| Column | Description |
|--------|-------------|
| `doc_type` | Document type category |
| `original_doc_name` | Source document filename |
| `parent_doc_name` | UUID of the spliced packet (matches `doc_id` in JSON) |
| `local_doc_id` | Local identifier within the packet |
| `page` | Page number within the packet |
| `image_path` | Path to page image (prefix: `data/assets/`) |
| `text_path` | Path to OCR text (prefix: `data/assets/`) |
| `group_id` | Group identifier |
| `local_doc_id_page_ordinal` | Page ordinal within the original source document |
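
Because the CSV is one row per page, packets can be reconstructed by grouping on `parent_doc_name`. A minimal pandas sketch (pandas is not listed among the repository's core dependencies, so install it separately; the file path is just an example):

```python
import pandas as pd

# Example: one split of one pre-generated benchmark
df = pd.read_csv("datasets/poly_seq/small/test.csv")

# Group page rows back into packets and inspect the first one
for packet_id, pages in df.groupby("parent_doc_name"):
    print(packet_id, "-", len(pages), "pages")
    print(pages[["page", "doc_type", "original_doc_name"]].head())
    break
```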

### Asset Paths

The image and text paths in both formats reference assets that are **not included** in this repository:

- JSON paths use the prefix `rvl-cdip-nmp-assets/`
- CSV paths use the prefix `data/assets/`

To resolve these paths, run the asset creation pipeline (see [Create Assets](#step-1-create-assets)). The data can still be used for metadata and label analysis without the actual images.
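
If you have generated the assets locally, one simple way to resolve these references is to swap the recorded prefix for your local asset root. A hedged sketch (the `ASSET_ROOT` location is whatever you passed as `--output-path`, not something fixed by the dataset):

```python
from pathlib import Path

ASSET_ROOT = Path("data/assets")  # wherever Step 1 wrote the assets

def resolve_asset_path(recorded_path: str) -> Path:
    """Map a recorded image_path/text_path onto the local asset tree."""
    for prefix in ("rvl-cdip-nmp-assets/", "data/assets/"):
        if recorded_path.startswith(prefix):
            return ASSET_ROOT / recorded_path[len(prefix):]
    return Path(recorded_path)  # fall back to the path as recorded
```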

## Requirements

- Python 3.12 recommended (see note below)
- AWS credentials (for Textract OCR)
- Dependencies: `pip install -r requirements.txt`

> **⚠️ Python Version:** The `amazon-textract-textractor` package (required by `src/assets/run.py`) depends on C extensions (`editdistance`) that may fail to build on Python 3.13+. **Python 3.12 is recommended.** Using [uv](https://docs.astral.sh/uv/) as your package installer can also help resolve build issues.

> **Note:** `requirements.txt` currently includes GPU dependencies (PyTorch, Transformers) that are only needed for DeepSeek OCR on multilingual documents. If you only need Textract OCR or want to explore the pre-generated data, the core dependencies are: `boto3`, `loguru`, `pymupdf`, `pillow`, `pydantic`, `amazon-textract-textractor`, `tenacity`.

---

### Download Source Data and Generate Benchmarks

```bash
# 1. Download and extract RVL-CDIP-N-MP source data from HuggingFace (1.25 GB)
#    This dataset contains multi-page PDFs organized by document type
#    (invoices, letters, forms, reports, etc.)
mkdir -p data/raw_data
cd data/raw_data
wget https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp/resolve/main/data.tar.gz
tar -xzf data.tar.gz
rm data.tar.gz
cd ../..

# 2. Create assets from raw PDFs
#    Extracts each page as PNG image and runs OCR to get text
#    These assets are then used in step 3 to create benchmark datasets
#    Output: Structured assets in data/assets/ with images and text per page
python src/assets/run.py --raw-data-path data/raw_data --output-path data/assets

# 3. Generate benchmark datasets
#    This concatenates documents using different strategies and creates
#    train/test/validation splits with ground truth labels
#    Output: Benchmark files in data/benchmarks/ ready for model evaluation
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks
```

## Pipeline Overview

```
Raw PDFs → [Create Assets] → Page Images + OCR Text → [Generate Benchmarks] → DocSplit Benchmarks
```

## Five Benchmark Datasets

The toolkit generates five benchmarks of increasing complexity, based on the DocSplit paper:

### 1. **DocSplit-Mono-Seq** (`mono_seq`)

**Single Category Document Concatenation Sequentially**

- Concatenates documents from the same category
- Preserves original page order
- **Challenge**: Boundary detection without category transitions as discriminative signals
- **Use Case**: Legal document processing where multiple contracts of the same type are bundled

### 2. **DocSplit-Mono-Rand** (`mono_rand`)

**Single Category Document Pages Randomization**

- Same as Mono-Seq but shuffles pages within documents
- **Challenge**: Boundary detection + page sequence reconstruction
- **Use Case**: Manual document assembly with page-level disruptions

### 3. **DocSplit-Poly-Seq** (`poly_seq`)

**Multi Category Documents Concatenation Sequentially**

- Concatenates documents from different categories
- Preserves page ordering
- **Challenge**: Inter-document boundary detection with category diversity
- **Use Case**: Medical claims processing with heterogeneous documents

### 4. **DocSplit-Poly-Int** (`poly_int`)

**Multi Category Document Pages Interleaving**

- Interleaves pages from different categories in round-robin fashion (see the sketch after this list)
- **Challenge**: Identifying which non-contiguous pages belong together
- **Use Case**: Mortgage processing where deeds, tax records, and notices are interspersed

### 5. **DocSplit-Poly-Rand** (`poly_rand`)

**Multi Category Document Pages Randomization**

- Complete randomization across all pages (maximum entropy)
- **Challenge**: Worst-case scenario with no structural assumptions
- **Use Case**: Document management system failures or emergency recovery
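
To make the Poly-Int construction concrete, round-robin interleaving of per-document page lists can be sketched as follows (an illustration of the idea only; the toolkit's own implementation lives in `src/benchmarks/services/shuffle_strategies/poly_int.py`):

```python
def round_robin_interleave(page_lists):
    """Interleave pages from several documents in round-robin order.

    Example: [["a1", "a2", "a3"], ["b1", "b2"]] -> ["a1", "b1", "a2", "b2", "a3"]
    Illustrative sketch only, not the toolkit's implementation.
    """
    packet, cursor = [], 0
    while any(cursor < len(pages) for pages in page_lists):
        for pages in page_lists:
            if cursor < len(pages):
                packet.append(pages[cursor])
        cursor += 1
    return packet
```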

### Dataset Statistics

The pre-generated benchmarks include train, test, and validation splits in both `small` (5–20 pages per packet) and `large` (20–500 pages per packet) sizes. For `mono_rand/large`:

| Split | Document Count |
|-------|----------------|
| Train | 417 |
| Test | 96 |
| Validation | 51 |
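
Equivalent counts for whichever configuration you load can be recomputed directly from the dataset. A minimal sketch (how a benchmark configuration maps onto `load_dataset` arguments is not covered here, so treat the call as illustrative):

```python
from datasets import load_dataset

ds = load_dataset("amazon/doc_split")
for split_name, split in ds.items():
    pages = [row["total_pages"] for row in split]
    print(f"{split_name}: {len(split)} packets, {sum(pages) / len(pages):.1f} avg pages")
```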

## Project Structure

```
doc-split-benchmark/
├── README.md
├── requirements.txt
├── src/
│   ├── assets/                    # Asset creation from PDFs
│   │   ├── __init__.py
│   │   ├── models.py
│   │   ├── run.py                 # Main entry point
│   │   └── services/
│   │       ├── __init__.py
│   │       ├── asset_creator.py
│   │       ├── asset_writer.py
│   │       ├── deepseek_ocr.py
│   │       ├── pdf_loader.py
│   │       └── textract_ocr.py
│   │
│   └── benchmarks/                # Benchmark generation
│       ├── __init__.py
│       ├── models.py
│       ├── run.py                 # Main entry point
│       └── services/
│           ├── __init__.py
│           ├── asset_loader.py
│           ├── split_manager.py
│           ├── benchmark_generator.py
│           ├── benchmark_writer.py
│           └── shuffle_strategies/
│               ├── __init__.py
│               ├── base_strategy.py
│               ├── mono_seq.py
│               ├── mono_rand.py
│               ├── poly_seq.py
│               ├── poly_int.py
│               └── poly_rand.py
│
├── notebooks/
│   ├── 01_create_assets.ipynb
│   ├── 02_create_benchmarks.ipynb
│   └── 03_analyze_benchmarks.ipynb
│
├── datasets/                      # Pre-generated benchmark data
│   └── {strategy}/{size}/
│       ├── train.csv
│       ├── test.csv
│       ├── validation.csv
│       └── ground_truth_json/
│           ├── train/*.json
│           ├── test/*.json
│           └── validation/*.json
│
└── data/                          # Generated by toolkit (not in repo)
    ├── raw_data/
    ├── assets/
    └── benchmarks/
```

### Generate Benchmarks [Detailed]

Create DocSplit benchmarks with train/test/validation splits.

```bash
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks \
    --num-docs-train 800 \
    --num-docs-test 500 \
    --num-docs-val 200 \
    --size small \
    --random-seed 42
```

**Parameters:**

- `--strategy`: Benchmark strategy - `mono_seq`, `mono_rand`, `poly_seq`, `poly_int`, `poly_rand`, or `all` (default: all)
- `--assets-path`: Directory containing assets from Step 1 (default: data/assets)
- `--output-path`: Where to save benchmarks (default: data/benchmarks)
- `--num-docs-train`: Number of spliced documents for training (default: 800)
- `--num-docs-test`: Number of spliced documents for testing (default: 500)
- `--num-docs-val`: Number of spliced documents for validation (default: 200)
- `--size`: Benchmark size - `small` (5-20 pages) or `large` (20-500 pages) (default: small)
- `--split-mapping`: Path to split mapping JSON (default: data/metadata/split_mapping.json)
- `--random-seed`: Seed for reproducibility (default: 42)

**What Happens:**

1. Loads all document assets from Step 1
2. Creates or loads a stratified train/test/val split (60/25/15 ratio); a simplified sketch follows below
3. Generates spliced documents by concatenating/shuffling pages per strategy
4. Saves benchmark CSV files with ground truth labels
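
The split mapping created in step 2 is what keeps a given source document in the same split across strategies. A simplified, seeded sketch of a stratified split by document type (illustrative only; the toolkit's own logic lives under `src/benchmarks/services/`, and its exact procedure may differ):

```python
import random
from collections import defaultdict

def stratified_split(doc_ids_by_type, seed=42, ratios=(0.60, 0.25, 0.15)):
    """Assign source documents to train/test/validation per document type.

    doc_ids_by_type: {doc_type: [doc_id, ...]}. Ratios follow the 60/25/15
    scheme described above. Simplified sketch, not the toolkit's implementation.
    """
    rng = random.Random(seed)
    mapping = defaultdict(list)
    for doc_type, doc_ids in sorted(doc_ids_by_type.items()):
        ids = sorted(doc_ids)
        rng.shuffle(ids)
        n_train = int(len(ids) * ratios[0])
        n_test = int(len(ids) * ratios[1])
        mapping["train"] += ids[:n_train]
        mapping["test"] += ids[n_train:n_train + n_test]
        mapping["validation"] += ids[n_train + n_test:]
    return dict(mapping)
```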

**Output Structure:**

```
data/
├── metadata/
│   └── split_mapping.json   # Document split assignments (shared across strategies)
└── benchmarks/
    └── {strategy}/          # e.g., poly_seq, mono_rand
        └── {size}/          # small or large
            ├── train.csv
            ├── test.csv
            └── validation.csv
```

# How to cite this dataset

```bibtex
@misc{islam2026docsplitcomprehensivebenchmarkdataset,
      title={DocSplit: A Comprehensive Benchmark Dataset and Evaluation Approach for Document Packet Recognition and Splitting},
      author={Md Mofijul Islam and Md Sirajus Salekin and Nivedha Balakrishnan and Vincil C. Bishop III and Niharika Jain and Spencer Romo and Bob Strahan and Boyi Xie and Diego A. Socolinsky},
      year={2026},
      eprint={2602.15958},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.15958},
}
```

# License

Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: CC-BY-NC-4.0