davanstrien (HF Staff) committed · verified · Commit 85e5eb2 · 1 Parent(s): 2f3c550

Upload README.md with huggingface_hub

Files changed (1): README.md (+97 −31)
README.md CHANGED
@@ -1,33 +1,99 @@
 ---
-dataset_info:
-  features:
-  - name: document_id
-    dtype: string
-  - name: page_number
-    dtype: string
-  - name: image
-    dtype: image
-  - name: text
-    dtype: string
-  - name: alto_xml
-    dtype: string
-  - name: has_image
-    dtype: bool
-  - name: has_alto
-    dtype: bool
-  - name: markdown
-    dtype: string
-  - name: inference_info
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 736245
-    num_examples: 10
-  download_size: 715248
-  dataset_size: 736245
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
 ---
---
tags:
- ocr
- document-processing
- deepseek
- deepseek-ocr
- markdown
- uv-script
- generated
---

# Document OCR using DeepSeek-OCR

This dataset contains markdown-formatted OCR results generated from the images in [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset) using DeepSeek-OCR.

## Processing Details

- **Source Dataset**: [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset)
- **Model**: [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR)
- **Number of Samples**: 10
- **Processing Time**: 1.7 min
- **Processing Date**: 2025-10-22 16:52 UTC

### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 8
- **Resolution Mode**: large
- **Base Size**: 1280
- **Image Size**: 1280
- **Crop Mode**: False
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 8,192
- **GPU Memory Utilization**: 80%

## Model Information

DeepSeek-OCR is a state-of-the-art document OCR model that excels at:

- 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- 📊 **Tables** - Extracted and formatted as HTML/markdown
- 📝 **Document structure** - Headers, lists, and formatting maintained
- 🖼️ **Image grounding** - Spatial layout and bounding-box information
- 🔍 **Complex layouts** - Multi-column and hierarchical structures
- 🌍 **Multilingual** - Supports multiple languages

### Resolution Modes

- **Tiny** (512×512): Fast processing, 64 vision tokens
- **Small** (640×640): Balanced speed/quality, 100 vision tokens
- **Base** (1024×1024): High quality, 256 vision tokens
- **Large** (1280×1280): Maximum quality, 400 vision tokens
- **Gundam** (dynamic): Adaptive multi-tile processing for large documents
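
For reference, the fixed modes above can be captured as a small lookup table. This is just a sketch built from the numbers in the list; `Gundam` is omitted because its tile count, and therefore its token budget, is dynamic:

```python
# Fixed resolution modes from the list above: name -> (image size, vision tokens).
# "Gundam" is omitted because its multi-tile layout is chosen per document.
RESOLUTION_MODES = {
    "tiny": (512, 64),
    "small": (640, 100),
    "base": (1024, 256),
    "large": (1280, 400),
}

size, tokens = RESOLUTION_MODES["large"]
print(f"large mode: {size}x{size}, {tokens} vision tokens")
```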

## Dataset Structure

The dataset contains all original columns plus:
- `markdown`: The extracted text in markdown format with preserved structure
- `inference_info`: A JSON list tracking all OCR models applied to this dataset
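
To make the `inference_info` format concrete, here is a minimal sketch of parsing one such value. The record below is synthetic; the `column_name` and `model_id` keys are assumed from the usage example in this card:

```python
import json

# Synthetic example of an `inference_info` value: a JSON-encoded list with
# one entry per OCR pass that has been applied to the dataset.
raw = json.dumps([
    {"column_name": "markdown", "model_id": "deepseek-ai/DeepSeek-OCR"},
])

# Decode and report which model produced which column.
for info in json.loads(raw):
    print(f"{info['column_name']} was produced by {info['model_id']}")
```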

## Usage

```python
from datasets import load_dataset
import json

# Load the dataset (replace the placeholder with this dataset's id)
dataset = load_dataset("<output-dataset-id>", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```
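
Building on the usage example, one way to export each page's OCR result to disk is to key the output files by the `document_id` and `page_number` columns from the schema. This is a sketch; it assumes rows behave like dictionaries, as they do when iterating a `datasets.Dataset`:

```python
from pathlib import Path

def export_markdown(rows, out_dir="ocr_markdown"):
    """Write each row's `markdown` field to <document_id>_<page_number>.md."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for row in rows:
        path = out / f"{row['document_id']}_{row['page_number']}.md"
        # Guard against pages where OCR produced no text.
        path.write_text(row["markdown"] or "", encoding="utf-8")
        written.append(path)
    return written
```

It works on any iterable of dict-like rows, so `export_markdown(dataset)` would write one markdown file per page.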

## Reproduction

This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek-OCR vLLM script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
    NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
    <output-dataset> \
    --resolution-mode large \
    --image-column image
```

## Performance

- **Processing Speed**: ~0.1 images/second
- **Processing Method**: Batch processing with vLLM (2-3x speedup over sequential)
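
The speed figure follows directly from the processing details above (10 samples in 1.7 minutes):

```python
# Throughput implied by the run statistics in "Processing Details".
samples = 10
minutes = 1.7
rate = samples / (minutes * 60)  # images per second
print(f"{rate:.2f} images/second")  # prints "0.10 images/second"
```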

Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)