Alysonhower committed · Commit 00d41f6 (verified) · Parent: 82b7d7a

Upload README.md with huggingface_hub

Files changed (1): README.md (+31 −46)

README.md CHANGED
@@ -2,67 +2,53 @@
  tags:
  - ocr
  - document-processing
- - nanonets
- - nanonets-ocr2
  - markdown
  - uv-script
  - generated
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: markdown
-     dtype: string
-   - name: inference_info
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 146150
-     num_examples: 1
-   download_size: 149864
-   dataset_size: 146150
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
  ---

- # Document OCR using Nanonets-OCR2-3B

- This dataset contains markdown-formatted OCR results from images in [Alysonhower/test](https://huggingface.co/datasets/Alysonhower/test) using Nanonets-OCR2-3B.

  ## Processing Details

  - **Source Dataset**: [Alysonhower/test](https://huggingface.co/datasets/Alysonhower/test)
- - **Model**: [nanonets/Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-3B)
- - **Model Size**: 3.75B parameters
  - **Number of Samples**: 1
- - **Processing Time**: 9.1 minutes
- - **Processing Date**: 2025-10-14 22:11 UTC

  ### Configuration

  - **Image Column**: `image`
  - **Output Column**: `markdown`
  - **Dataset Split**: `train`
- - **Batch Size**: 16
- - **Max Model Length**: 15,000 tokens
- - **Max Output Tokens**: 15,000
- - **GPU Memory Utilization**: 80.0%

  ## Model Information

- Nanonets-OCR2-3B is a state-of-the-art document OCR model that excels at:
  - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- - 📊 **Tables** - Extracted and formatted as HTML
  - 📝 **Document structure** - Headers, lists, and formatting maintained
- - 🖼️ **Images** - Captions and descriptions included in `<img>` tags
- - ☑️ **Forms** - Checkboxes rendered as ☐/☑
- - 🔖 **Watermarks** - Wrapped in `<watermark>` tags
- - 📄 **Page numbers** - Wrapped in `<page_number>` tags
  - 🌍 **Multilingual** - Supports multiple languages

  ## Dataset Structure

  The dataset contains all original columns plus:
@@ -91,23 +77,22 @@ for info in inference_info:

  ## Reproduction

- This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) Nanonets OCR2 script:

  ```bash
- uv run https://huggingface.co/datasets/Alysonhower/scripts/resolve/main/nanonets-ocr2.py \
  Alysonhower/test \
  <output-dataset> \
- --model nanonets/Nanonets-OCR2-3B \
- --image-column image \
- --batch-size 16 \
- --max-model-len 15000 \
- --max-tokens 15000 \
- --gpu-memory-utilization 0.8
  ```

  ## Performance

  - **Processing Speed**: ~0.0 images/second
- - **GPU Configuration**: vLLM with 80% GPU memory utilization

  Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
 
  tags:
  - ocr
  - document-processing
+ - deepseek
+ - deepseek-ocr
  - markdown
  - uv-script
  - generated
  ---

+ # Document OCR using DeepSeek-OCR

+ This dataset contains markdown-formatted OCR results from images in [Alysonhower/test](https://huggingface.co/datasets/Alysonhower/test) using DeepSeek-OCR.

  ## Processing Details

  - **Source Dataset**: [Alysonhower/test](https://huggingface.co/datasets/Alysonhower/test)
+ - **Model**: [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR)
  - **Number of Samples**: 1
+ - **Processing Time**: 1.5 minutes
+ - **Processing Date**: 2025-10-22 01:07 UTC

  ### Configuration

  - **Image Column**: `image`
  - **Output Column**: `markdown`
  - **Dataset Split**: `train`
+ - **Resolution Mode**: gundam
+ - **Base Size**: 1024
+ - **Image Size**: 640
+ - **Crop Mode**: True

  ## Model Information

+ DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
  - 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
+ - 📊 **Tables** - Extracted and formatted as HTML/markdown
  - 📝 **Document structure** - Headers, lists, and formatting maintained
+ - 🖼️ **Image grounding** - Spatial layout and bounding box information
+ - 🔍 **Complex layouts** - Multi-column and hierarchical structures
  - 🌍 **Multilingual** - Supports multiple languages

+ ### Resolution Modes
+
+ - **Tiny** (512×512): Fast processing, 64 vision tokens
+ - **Small** (640×640): Balanced speed/quality, 100 vision tokens
+ - **Base** (1024×1024): High quality, 256 vision tokens
+ - **Large** (1280×1280): Maximum quality, 400 vision tokens
+ - **Gundam** (dynamic): Adaptive multi-tile processing for large documents
+
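The resolution-mode list above is effectively a lookup table from mode name to input size and vision-token budget. A minimal sketch (hypothetical helper, not part of the script; the pixel sizes and token counts are copied from the list, and `gundam` is handled separately because its tiling is dynamic):

```python
# Hypothetical lookup mirroring the Resolution Modes list above:
# mode name -> (square edge in pixels, vision-token budget).
RESOLUTION_MODES = {
    "tiny": (512, 64),
    "small": (640, 100),
    "base": (1024, 256),
    "large": (1280, 400),
}

def vision_tokens(mode: str) -> int:
    """Return the vision-token budget for a fixed-size mode.

    `gundam` tiles the page adaptively, so its token count depends on
    the document and has no single constant.
    """
    if mode == "gundam":
        raise ValueError("gundam is adaptive; token count varies per image")
    _edge, tokens = RESOLUTION_MODES[mode]
    return tokens
```

Larger fixed modes trade throughput for quality roughly in proportion to this token budget.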
  ## Dataset Structure

  The dataset contains all original columns plus:

  ## Reproduction

+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek OCR script:

  ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr.py \
  Alysonhower/test \
  <output-dataset> \
+ --resolution-mode gundam \
+ --image-column image
  ```

  ## Performance

  - **Processing Speed**: ~0.0 images/second
+ - **Processing Method**: Sequential (Transformers API, no batching)
+
+ Note: This uses the official Transformers implementation. For faster batch processing,
+ consider using the vLLM version once DeepSeek-OCR is officially supported by vLLM.

  Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
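The second hunk header references a `for info in inference_info:` loop from the card's elided Dataset Structure section. A minimal sketch of consuming the two added columns, assuming (the card does not show the encoding) that `inference_info` is a JSON-encoded string; the sample row here is hypothetical, not taken from the dataset:

```python
import json

# Hypothetical sample row shaped like this dataset's schema: the original
# `image` column plus the added `markdown` and `inference_info` columns.
row = {
    "markdown": "# Invoice\n\n| Item | Qty |\n|------|-----|\n| Pen | 2 |",
    "inference_info": json.dumps(
        {"model": "deepseek-ai/DeepSeek-OCR", "resolution_mode": "gundam"}
    ),
}

# `inference_info` is stored as a string column; decode it to see which
# model and settings produced the markdown.
info = json.loads(row["inference_info"])
print(info["model"], "->", len(row["markdown"]), "chars of markdown")
```

In practice the rows would come from `datasets.load_dataset("<output-dataset>")` rather than a literal dict.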