davanstrien HF Staff committed on
Commit 4b5f951 · verified · 1 Parent(s): 17b958c

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +47 -24
README.md CHANGED
@@ -2,62 +2,85 @@
  tags:
  - ocr
  - document-processing
- - glm-ocr
  - markdown
  - uv-script
  - generated
  ---

- # Document OCR using GLM-OCR

- This dataset contains OCR results from images in [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india) using GLM-OCR, a compact 0.9B OCR model achieving SOTA performance.

  ## Processing Details

  - **Source Dataset**: [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india)
- - **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
- - **Task**: text recognition
  - **Number of Samples**: 10
- - **Processing Time**: 6.2 min
- - **Processing Date**: 2026-02-14 18:31 UTC

  ### Configuration

  - **Image Column**: `image`
  - **Output Column**: `markdown`
  - **Dataset Split**: `train`
- - **Batch Size**: 16
  - **Max Model Length**: 8,192 tokens
  - **Max Output Tokens**: 8,192
- - **Temperature**: 0.01
- - **Top P**: 1e-05
  - **GPU Memory Utilization**: 80.0%

  ## Model Information

- GLM-OCR is a compact, high-performance OCR model:
- - 0.9B parameters
- - 94.62% on OmniDocBench V1.5
- - CogViT visual encoder + GLM-0.5B language decoder
- - Multi-Token Prediction (MTP) loss for efficiency
- - Multilingual: zh, en, fr, es, ru, de, ja, ko
- - MIT licensed

  ## Dataset Structure

  The dataset contains all original columns plus:
- - `markdown`: The extracted text in markdown format
  - `inference_info`: JSON list tracking all OCR models applied to this dataset

  ## Reproduction

  ```bash
- uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
-   NationalLibraryOfScotland/medical-history-of-british-india \
-   <output-dataset> \
-   --image-column image \
-   --batch-size 16 \
-   --task ocr
  ```

  Generated with [UV Scripts](https://huggingface.co/uv-scripts)
 
  tags:
  - ocr
  - document-processing
+ - deepseek
+ - deepseek-ocr
  - markdown
  - uv-script
  - generated
  ---

+ # Document OCR using DeepSeek-OCR

+ This dataset contains markdown-formatted OCR results from images in [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india) using DeepSeek-OCR.

  ## Processing Details

  - **Source Dataset**: [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india)
+ - **Model**: [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR)
  - **Number of Samples**: 10
+ - **Processing Time**: 7.0 min
+ - **Processing Date**: 2026-02-14 18:32 UTC

  ### Configuration

  - **Image Column**: `image`
  - **Output Column**: `markdown`
  - **Dataset Split**: `train`
+ - **Batch Size**: 8
  - **Max Model Length**: 8,192 tokens
  - **Max Output Tokens**: 8,192
  - **GPU Memory Utilization**: 80.0%

  ## Model Information

+ DeepSeek-OCR is a state-of-the-art document OCR model that excels at:
+ - LaTeX equations - Mathematical formulas preserved in LaTeX format
+ - Tables - Extracted and formatted as HTML/markdown
+ - Document structure - Headers, lists, and formatting maintained
+ - Image grounding - Spatial layout and bounding box information
+ - Complex layouts - Multi-column and hierarchical structures
+ - Multilingual - Supports multiple languages

  ## Dataset Structure

  The dataset contains all original columns plus:
+ - `markdown`: The extracted text in markdown format with preserved structure
  - `inference_info`: JSON list tracking all OCR models applied to this dataset

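The `inference_info` schema isn't spelled out here; judging from the two fields the usage snippet references (`column_name`, `model_id`), one entry per applied model might look like this (hypothetical values, field set assumed):

```python
import json

# Hypothetical inference_info payload: field names are taken from this README,
# the values are illustrative assumptions
inference_info_json = json.dumps([
    {"column_name": "markdown", "model_id": "deepseek-ai/DeepSeek-OCR"},
])

# Each dataset row stores the list as a JSON string, so parse it before use
for info in json.loads(inference_info_json):
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```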
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+ import json
+
+ # Load the dataset
+ dataset = load_dataset("<output-dataset-id>", split="train")
+
+ # Access the markdown text
+ for example in dataset:
+     print(example["markdown"])
+     break
+
+ # View all OCR models applied to this dataset
+ inference_info = json.loads(dataset[0]["inference_info"])
+ for info in inference_info:
+     print(f"Column: {info['column_name']} - Model: {info['model_id']}")
+ ```
+
  ## Reproduction

+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek OCR vLLM script:
+
  ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
+   NationalLibraryOfScotland/medical-history-of-british-india \
+   <output-dataset> \
+   --image-column image
  ```

+ ## Performance
+
+ - **Processing Speed**: ~0.02 images/second (10 images in 7.0 min)
+ - **Processing Method**: Batch processing with vLLM (2-3x speedup over sequential)
+
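With 10 samples and the batch size of 8 listed above, the images are processed in two batches rather than ten sequential calls; the chunking itself is straightforward (illustrative sketch, not the script's actual code):

```python
def batched(items, batch_size):
    """Yield successive fixed-size batches; the final batch may be smaller."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# 10 images with batch size 8 -> 2 forward passes instead of 10
batches = list(batched(list(range(10)), 8))
print(len(batches))               # -> 2
print([len(b) for b in batches])  # -> [8, 2]
```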
  Generated with [UV Scripts](https://huggingface.co/uv-scripts)