---
title: Invoice Processor Ml
emoji: ⚡
colorFrom: indigo
colorTo: pink
sdk: docker
pinned: false
license: mit
short_description: Hybrid invoice extraction using LayoutLMv3 and Regex
---

# 📄 Smart Invoice Processor

A production-grade Hybrid Invoice Extraction System that combines the semantic understanding of LayoutLMv3 with the precision of Regex Heuristics. Designed for robustness, it features a Dual-Engine Architecture with automatic fallback logic to ensure 100% extraction coverage for business-critical fields (Invoice #, Date, Total) even when the AI model is uncertain.

![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)
![Streamlit](https://img.shields.io/badge/Streamlit-1.51+-red.svg)
![DocTR](https://img.shields.io/badge/DocTR-0.9+-green.svg)
![Transformers](https://img.shields.io/badge/Transformers-4.x-purple.svg)
![PyTorch](https://img.shields.io/badge/PyTorch-2.x-orange.svg)

[![🤗 Live Demo](https://img.shields.io/badge/🤗%20Live%20Demo-Hugging%20Face%20Spaces-yellow.svg)](https://huggingface.co/spaces/GSoumyajit2005/invoice-processor-ml)

---

## 🚀 Try it Live!

> **No installation required!** Try the full application instantly on Hugging Face Spaces:
>
> ### 👉 [**Launch Live Demo**](https://huggingface.co/spaces/GSoumyajit2005/invoice-processor-ml) 👈
>
> Upload any invoice image and watch the hybrid ML+Regex engine extract structured data in real-time.

---

## 🎯 Features

### 🧠 Core Intelligence

- **Hybrid Inference Engine:** Automatically triggers a Regex Fallback Engine if the ML model (LayoutLMv3) returns low confidence or is missing critical fields (Invoice #, Date).
- **ML-Based Extraction:** Fine-tuned `LayoutLMv3` Transformer for semantic understanding of complex layouts (SROIE dataset).
- **Rule-Based Fallback:** Deterministic regex patterns ensure 100% coverage for standard fields when the ML model is uncertain.
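The hybrid merge behind these bullets can be sketched in a few lines. This is a minimal illustration only: the field names (`receipt_number`, `total_amount`) mirror the JSON schema shown later in this README, but the regex patterns and function names are hypothetical, not the project's actual implementation.

```python
import re

# Illustrative patterns only; the real extractor uses its own heuristics.
INVOICE_NO_RE = re.compile(
    r"(?:invoice|inv|receipt)\s*(?:no\.?|#|number)?\s*[:\-]?\s*([A-Z0-9\-]{4,})",
    re.IGNORECASE,
)
TOTAL_RE = re.compile(r"total\s*[:\-]?\s*\$?\s*([\d,]+\.\d{2})", re.IGNORECASE)

def regex_fallback(raw_text: str) -> dict:
    """Deterministic extraction used when the ML model returns nulls."""
    inv = INVOICE_NO_RE.search(raw_text)
    total = TOTAL_RE.search(raw_text)
    return {
        "receipt_number": inv.group(1) if inv else None,
        "total_amount": float(total.group(1).replace(",", "")) if total else None,
    }

def hybrid_extract(ml_result: dict, raw_text: str) -> dict:
    """Fill critical fields from regex only where the ML output is null."""
    merged = dict(ml_result)
    fallback = regex_fallback(raw_text)
    for field in ("receipt_number", "total_amount"):
        if merged.get(field) is None:
            merged[field] = fallback[field]
    return merged
```

The key design point is that regex never overrides a non-null ML prediction; it only back-fills gaps, so the context-aware model stays authoritative when it is confident.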
### ๐Ÿ›ก๏ธ Robustness & Engineering - **Defensive Data Handling:** Implemented coordinate clamping to prevent model crashes from negative OCR bounding boxes. - **GPU-Accelerated OCR:** DocTR (Mindee) with automatic CUDA acceleration for faster inference in production. - **Clean JSON Output:** Normalized schema handling nested entities, line items, and validation flags. - **Defensive Persistence:** Optional PostgreSQL integration (local Docker or cloud Supabase) that automatically saves extracted data when credentials are present, but gracefully degrades (skips saving) in serverless/demo environments. - **Async Database Saves:** Background thread processing ensures fast UI response (~5-7s) while database operations happen asynchronously. - **Duplicate Prevention:** Implemented _Semantic Hashing_ (Vendor + Date + Total + ID) to automatically detect and prevent duplicate invoice entries. ### ๐Ÿ’ป Usability - **Streamlit Web UI:** Interactive dashboard for real-time inference, visualization, and side-by-side comparison (ML vs. Regex). - **PDF Preview & Overlay:** Visual preview of uploaded PDFs with ML-detected bounding boxes overlay for transparency. - **CLI & Batch Processing:** Process single files or entire directories via command line with JSON export. - **Auto-Validation:** Heuristic checks to validate that the extracted "Total Amount" matches the sum of line items. > Note on Invoice Numbers: The SROIE dataset used for training does not include "Invoice Number" labels. To solve this, the system uses a Hybrid Fallback Mechanism: if the ML model (LayoutLMv3) returns null for the Invoice Number, the system automatically triggers a targeted Regex extraction to ensure this critical field is captured. --- ## ๐Ÿ› ๏ธ Technical Deep Dive (Why this architecture?) ### 1. The "Safety Net" Fallback Logic Standard ML models often fail on specific fields like "Invoice Number" if the layout is unseen. This system implements a **priority-based extraction**: 1. 
**Primary:** LayoutLMv3 predicts entity labels (context-aware). 2. **Fallback:** If `Invoice_No` or `Total` is null, the system executes a targeted Regex scan on the raw text. _Result:_ Combines the generalization of AI with the determinism of Rules. ### 2. Robustness & Error Handling - **OCR Noise:** Uses DocTR's deep learning-based text recognition for improved accuracy over traditional OCR. - **Coordinate Normalization:** A custom `clamp()` function ensures all bounding boxes stay strictly within [0, 1000] to prevent Transformer index errors. ### 3. Dual-Engine Architecture The system implements a **Dual-Engine Architecture** with automatic fallback logic: 1. **Primary Engine:** LayoutLMv3 predicts entity labels (context-aware). 2. **Fallback Engine:** If `Invoice_No` or `Total` is null, the system executes a targeted Regex scan on the raw text. ### 4. Clean JSON Output The system outputs a clean JSON with the following fields: - `receipt_number`: The invoice number (extracted by LayoutLMv3 or Regex). - `date`: The invoice date (extracted by LayoutLMv3 or Regex). - `bill_to`: The bill-to information (extracted by LayoutLMv3 or Regex). - `items`: The list of items (extracted by LayoutLMv3 or Regex). - `total_amount`: The total amount (extracted by LayoutLMv3 or Regex). - `extraction_confidence`: The confidence of the extraction (0-100). - `validation_passed`: Whether the validation passed (true/false). ### 5. Defensive Database Architecture To support both local development (with full persistence) and lightweight cloud demos (without databases), the system uses a **"Soft Fail" Persistence Layer**: 1. **Connection Check:** On startup, the system checks for PostgreSQL credentials. If missing, the database engine is disabled. 2. **Repository Guard:** All CRUD operations check for an active session. If the database is disabled, save operations are skipped silently without crashing the pipeline. 3. 
**Semantic Hashing:** Before saving, a content-based hash is generated to ensure idempotency. --- ## ๐Ÿ“Š Demo ### Web Interface ![Homepage](docs/screenshots/homepage.png) _Clean upload โ†’ extract flow with method selector (ML vs Regex)._ ### Successful Extraction (ML-based) ![Success Result](docs/screenshots/success_result.png) _Fields extracted with LayoutLMv3._ ### Format Detection (simulated) ![Format Detection](docs/screenshots/format_detection.png) _UI shows simple format hints and confidence._ ### Example JSON (Rule-based) ```json { "receipt_number": "PEGIV-1030765", "date": "15/01/2019", "bill_to": { "name": "THE PEAK QUARRY WORKS", "email": null }, "items": [], "total_amount": 193.0, "extraction_confidence": 100, "validation_passed": true, "vendor": "OJC MARKETING SDN BHD", "address": "NO JALAN BAYU 4, BANDAR SERI ALAM, 81750 MASAI, JOHOR" } ``` ### Example JSON (ML-based) ```json { "receipt_number": null, "date": "15/01/2019", "bill_to": null, "items": [], "total_amount": 193.0, "vendor": "OJC MARKETING SDN BHD", "address": "NO JALAN BAYU 4, BANDAR SERI ALAM, 81750 MASAI, JOHOR", "raw_text": "โ€ฆ", "raw_ocr_words": ["โ€ฆ"], "raw_predictions": { "DATE": {"text": "15/01/2019", "bbox": [[โ€ฆ]]}, "TOTAL": {"text": "193.00", "bbox": [[โ€ฆ]]}, "COMPANY": {"text": "OJC MARKETING SDN BHD", "bbox": [[โ€ฆ]]}, "ADDRESS": {"text": "โ€ฆ", "bbox": [[โ€ฆ]]} } } ``` ## ๐Ÿš€ Quick Start ### Prerequisites - Python 3.10+ - Conda / Miniforge (recommended) - NVIDIA GPU with CUDA (strongly recommended for usable performance) โš ๏ธ CPU-only execution is supported but significantly slower (5โ€“10s per invoice) and intended only for testing. ### Installation (Conda โ€“ Recommended) 1. Clone the repository: ```bash git clone https://github.com/GSoumyajit2005/invoice-processor-ml cd invoice-processor-ml ``` 2. Create and activate the Conda environment: ```bash conda env create -f environment.yml conda activate invoice-ml ``` 3. 
Verify CUDA availability (recommended): ```bash python - < Note: `requirements.txt` is consumed internally by `environment.yml`. > Do not install it manually with pip. ### Training the Model (Optional) To retrain the model from scratch using the provided scripts: ```bash python scripts/train_combined.py ``` (Note: Requires SROIE dataset in data/sroie) ### API Usage (Optional) To run the API server: ```bash python src/api.py ``` The API provides endpoints for processing invoices and extracting information. ### Running with Database (Optional) To enable data persistence, run the included Docker Compose file to spin up PostgreSQL: ```bash docker-compose up -d ``` The application will automatically detect the database and start saving invoices. ## ๐Ÿ’ป Usage ### Web Interface (Recommended) The easiest way to use the processor is via the web interface. ```bash streamlit run app.py ``` - Upload an invoice image (PNG/JPG). - Choose extraction method in sidebar: - ML-Based (LayoutLMv3) - Rule-Based (Regex) - View JSON, download results. ### Command-Line Interface (CLI) You can also process invoices directly from the command line. #### 1. Processing a Single Invoice This command processes the provided sample invoice and prints the results to the console. ```bash python src/pipeline.py data/samples/sample_invoice.jpg --save --method ml # or python src/pipeline.py data/samples/sample_invoice.jpg --save --method rules ``` #### 2. Batch Processing a Folder The CLI can process an entire folder of images at once. First, place your own invoice images (e.g., `my_invoice1.jpg`, `my_invoice2.png`) into the `data/raw/` folder. Then, run the following command. It will process all images in `data/raw/`. Saved files are written to `outputs/{stem}_{method}.json`. ```bash python src/pipeline.py data/raw --save --method ml ``` ### Python API You can integrate the pipeline directly into your own Python scripts. 
```python
from src.pipeline import process_invoice
import json

result = process_invoice('data/samples/sample_invoice.jpg', method='ml')
print(json.dumps(result, indent=2))
```

## 🏗️ Architecture

```
┌──────────────┐
│ Upload Image │
└──────┬───────┘
       ▼
┌───────────────┐
│ Preprocessing │  (OpenCV grayscale/denoise)
└──────┬────────┘
       ▼
┌───────────────┐
│      OCR      │  (DocTR)
└──────┬────────┘
       │
  ┌────┴───────────────────┐
  ▼                        ▼
┌──────────────────┐   ┌────────────────────────┐
│ Rule-based IE    │   │ ML-based IE (NER)      │
│ (regex, heur.)   │   │ LayoutLMv3 token-class │
└────────┬─────────┘   └───────────┬────────────┘
         └────────────┬────────────┘
                      ▼
          ┌────────────────────┐
          │ Post-process       │
          │ validate, scores   │
          └─────────┬──────────┘
            ┌───────┴──────────┐
            ▼                  ▼
   ┌──────────────────┐   ┌──────────────────┐
   │ JSON Output      │   │ DB (PostgreSQL)  │
   └──────────────────┘   │ (Optional Save)  │
                          └──────────────────┘
```

## 📁 Project Structure

```
invoice-processor-ml/
│
├── data/
│   ├── raw/                      # Input invoice images for processing
│   └── processed/                # (Reserved for future use)
│
├── data/samples/
│   └── sample_invoice.jpg        # Public sample for quick testing
│
├── docs/
│   └── screenshots/              # UI screenshots for the README demo
│
├── models/
│   └── layoutlmv3-doctr-trained/ # Fine-tuned model (trained with DocTR OCR)
│
├── outputs/                      # Default folder for saved JSON results
│
├── scripts/                      # Training and analysis scripts
│   ├── eval_new_dataset.py       # Evaluation scripts
│   ├── explore_new_dataset.py    # Dataset exploration tools
│   ├── prepare_doctr_data.py     # DocTR data alignment for training
│   ├── train_combined.py         # Main training loop (SROIE + custom data)
│   └── train_layoutlm.py         # LayoutLMv3 fine-tuning script
│
├── src/
│   ├── api.py                    # FastAPI REST endpoint for API access
│   ├── data_loader.py            # Unified data loader for training
│   ├── database.py               # Database connection with environment-aware 'soft fail' check
│   ├── extraction.py             # Regex-based information extraction logic
│   ├── ml_extraction.py          # ML-based extraction (LayoutLMv3 + DocTR)
│   ├── models.py                 # SQLModel tables (Invoice, LineItem) with schema validation
│   ├── pdf_utils.py              # PDF text extraction and image conversion
│   ├── pipeline.py               # Main orchestrator for the pipeline and CLI
│   ├── preprocessing.py          # Image preprocessing functions (grayscale, denoise)
│   ├── repository.py             # CRUD operations with session safety handling
│   ├── schema.py                 # Pydantic models for API response validation
│   ├── sroie_loader.py           # SROIE dataset loading logic
│   └── utils.py                  # Utility functions (semantic hashing, etc.)
│
├── tests/
│   ├── test_extraction.py        # Tests for regex extraction module
│   ├── test_full_pipeline.py     # Full end-to-end integration tests
│   ├── test_pipeline.py          # Pipeline process tests
│   └── test_preprocessing.py     # Tests for the preprocessing module
│
├── app.py                        # Streamlit web interface
├── requirements.txt              # Python dependencies
├── environment.yml               # Conda environment configuration
├── docker-compose.yml            # Docker Compose configuration for PostgreSQL
├── Dockerfile                    # Dockerfile for building the application container
├── .gitignore                    # Git ignore file
└── README.md                     # You are here!
```

## 🧠 Model & Training

- **Model**: `microsoft/layoutlmv3-base` (125M params)
- **Task**: Token classification (NER) with 9 labels: `O, B/I-COMPANY, B/I-ADDRESS, B/I-DATE, B/I-TOTAL`
- **Datasets**: SROIE (ICDAR 2019, English retail receipts), mychen76/invoices-and-receipts_ocr_v1 (English)
- **Training**: RTX 3050 6GB, PyTorch 2.x, Transformers 4.x
- **Result**: F1 score ≈ 0.83 (real-world performance on a DocTR-aligned validation set)
- Training scripts (local):
  - `scripts/train_combined.py` (data prep, training loop with validation + model save)
  - Model saved to: `models/layoutlmv3-doctr-trained/`

## 📈 Performance

- **OCR Precision**: State-of-the-art hierarchical detection using **DocTR (ResNet-50)**. Outperforms Tesseract on complex/noisy layouts.
- **ML-based Extraction**:
  - **Accuracy**: ~83% F1 score on SROIE + custom invoices
  - **Speed**:
    - **GPU (recommended)**: <1 s per invoice
    - **CPU (fallback)**: ~5–7 s per invoice

> ⚠️ CPU-only execution is supported for testing and experimentation but results in significantly higher latency due to the heavy OCR and layout-aware models.

## ⚠️ Known Limitations

1. **Layout Sensitivity**: The ML model was fine‑tuned on SROIE (retail receipts) and mychen76/invoices-and-receipts_ocr_v1 (English).
   Professional multi-column invoices may underperform until you fine‑tune on more diverse datasets.
2. **Invoice Number**: The SROIE dataset lacks invoice-number labels. The system solves this with the Hybrid Fallback Engine, which extracts invoice numbers via regex whenever the ML output is empty.
3. **Line Items/Tables**: Not trained for table extraction yet. The rule-based engine supports simple totals; table extraction comes later.
4. **Inference Latency**: CPU execution is significantly slower due to the heavy OCR and layout-aware models.

## 🔮 Future Enhancements

- [x] Add and fine‑tune on mychen76/invoices-and-receipts_ocr_v1 (English) for broader invoice formats
- [ ] (Optional) Add FATURA (table-focused) for line-item extraction
- [ ] Sliding-window chunking for >512-token documents (to avoid truncation)
- [ ] Table detection (Camelot/Tabula/DeepDeSRT) for line items
- [x] PDF support (pdf2image) for multipage invoices
- [x] FastAPI backend + Docker
- [x] CI/CD pipeline (GitHub Actions → Hugging Face Spaces auto-deploy)
- [ ] Multilingual OCR (PaddleOCR) and multilingual fine‑tuning
- [ ] Confidence calibration and better validation rules
- [x] Database persistence layer (PostgreSQL with SQLModel & redundancy checks)

## 🛠️ Tech Stack

| Component        | Technology                           |
| ---------------- | ------------------------------------ |
| OCR              | DocTR (Mindee)                       |
| Image Processing | OpenCV, Pillow                       |
| ML/NLP           | PyTorch 2.x, Transformers            |
| Model            | LayoutLMv3 (token classification)    |
| Web Interface    | Streamlit                            |
| Data Format      | JSON                                 |
| CI/CD            | GitHub Actions → Hugging Face Spaces |
| Database         | PostgreSQL, SQLModel                 |
| Containerization | Docker & Docker Compose              |

## 📚 What I Learned

- OCR challenges (confusable characters, confidence-based filtering)
- Layout-aware NER with LayoutLMv3 (text + bbox + pixels)
- Data normalization (bbox to 0–1000 scale)
- End-to-end pipelines (UI + CLI + JSON output)
- When regex is enough vs. when ML is needed
- Evaluation (seqeval F1 for NER)

## 🤝 Contributing

Contributions welcome! Areas needing improvement:

- New patterns for the regex extractor
- Better preprocessing for OCR
- New datasets and training configs
- Tests and CI

## 📝 License

MIT License – see the LICENSE file for details.

## 👨‍💻 Author

**Soumyajit Ghosh**

- 3rd-year BTech student
- Exploring AI/ML and practical applications
- [LinkedIn](https://www.linkedin.com/in/soumyajit-ghosh-tech) | [GitHub](https://github.com/GSoumyajit2005) | [Portfolio](https://soumyajitghosh.vercel.app)

---

**Note**: This is a learning project demonstrating an end-to-end ML pipeline. It is not recommended for production use without further validation, retraining on diverse datasets, and security hardening.