---
tags:
- ocr
- document-processing
- deepseek
- deepseek-ocr
- markdown
- uv-script
- generated
---
# Document OCR using DeepSeek-OCR
This dataset contains markdown-formatted OCR results from images in [Alysonhower/test](https://huggingface.co/datasets/Alysonhower/test) using DeepSeek-OCR.
## Processing Details
- **Source Dataset**: [Alysonhower/test](https://huggingface.co/datasets/Alysonhower/test)
- **Model**: [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR)
- **Number of Samples**: 1
- **Processing Time**: 1.5 minutes
- **Processing Date**: 2025-10-23 13:06 UTC
### Configuration
- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Resolution Mode**: gundam
- **Base Size**: 1024
- **Image Size**: 640
- **Crop Mode**: True
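These settings correspond to DeepSeek-OCR's preprocessing parameters. As a minimal sketch (assuming the custom `infer` API described on the DeepSeek-OCR model card; the prompt string and image path here are illustrative), the equivalent direct call looks like:

```python
from transformers import AutoModel, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-OCR"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().cuda()

# Gundam mode: 1024px base tiles, 640px local views, dynamic cropping enabled
result = model.infer(
    tokenizer,
    prompt="<image>\n<|grounding|>Convert the document to markdown.",
    image_file="page.png",  # illustrative input path
    base_size=1024,         # Base Size from the configuration above
    image_size=640,         # Image Size
    crop_mode=True,         # Crop Mode
)
```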
## Model Information
DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
- πŸ“ **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- πŸ“Š **Tables** - Extracted and formatted as HTML/markdown
- πŸ“ **Document structure** - Headers, lists, and formatting maintained
- πŸ–ΌοΈ **Image grounding** - Spatial layout and bounding box information
- πŸ” **Complex layouts** - Multi-column and hierarchical structures
- 🌍 **Multilingual** - Supports multiple languages
### Resolution Modes
- **Tiny** (512Γ—512): Fast processing, 64 vision tokens
- **Small** (640Γ—640): Balanced speed/quality, 100 vision tokens
- **Base** (1024Γ—1024): High quality, 256 vision tokens
- **Large** (1280Γ—1280): Maximum quality, 400 vision tokens
- **Gundam** (dynamic): Adaptive multi-tile processing for large documents
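These modes map to the model's preprocessing parameters. A sketch of the mapping is below; the `gundam` row matches this dataset's configuration, the fixed-size rows follow the resolutions listed above, and disabling dynamic cropping for the fixed-size modes is an assumption:

```python
# Assumed mapping from resolution mode to DeepSeek-OCR parameters.
# Only the "gundam" row is confirmed by this dataset's configuration.
RESOLUTION_MODES = {
    "tiny":   {"base_size": 512,  "image_size": 512,  "crop_mode": False},
    "small":  {"base_size": 640,  "image_size": 640,  "crop_mode": False},
    "base":   {"base_size": 1024, "image_size": 1024, "crop_mode": False},
    "large":  {"base_size": 1280, "image_size": 1280, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 640,  "crop_mode": True},
}
```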
## Dataset Structure
The dataset contains all original columns plus:
- `markdown`: The extracted text in markdown format with preserved structure
- `inference_info`: JSON list tracking all OCR models applied to this dataset
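For reference, the `inference_info` column for this run would contain a JSON list along these lines (a sketch: `column_name` and `model_id` are the fields read in the Usage example below; any additional metadata fields are omitted):

```json
[
  {
    "column_name": "markdown",
    "model_id": "deepseek-ai/DeepSeek-OCR"
  }
]
```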
## Usage
```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("Alysonhower/test_out_two", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```
## Reproduction
This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek OCR script:
```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py \
Alysonhower/test \
<output-dataset> \
--resolution-mode gundam \
--image-column image
```
## Performance
- **Processing Speed**: ~0.01 images/second (1 image in ~1.5 minutes)
- **Processing Method**: Sequential (Transformers API, no batching)
Note: This uses the official Transformers implementation. For faster batch processing,
consider using the vLLM version once DeepSeek-OCR is officially supported by vLLM.
Generated with πŸ€– [UV Scripts](https://huggingface.co/uv-scripts)