davanstrien (HF Staff) committed

Commit ed451ad · verified · 1 Parent(s): 8851443

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+42 -19)
README.md CHANGED

````diff
@@ -2,24 +2,24 @@
 tags:
 - ocr
 - document-processing
-- glm-ocr
+- dots-ocr
+- multilingual
 - markdown
 - uv-script
 - generated
 ---
 
-# Document OCR using GLM-OCR
+# Document OCR using dots.ocr
 
-This dataset contains OCR results from images in [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india) using GLM-OCR, a compact 0.9B OCR model achieving SOTA performance.
+This dataset contains OCR results from images in [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india) using dots.ocr, a compact 1.7B multilingual model.
 
 ## Processing Details
 
 - **Source Dataset**: [NationalLibraryOfScotland/medical-history-of-british-india](https://huggingface.co/datasets/NationalLibraryOfScotland/medical-history-of-british-india)
-- **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
-- **Task**: text recognition
+- **Model**: [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
 - **Number of Samples**: 50
-- **Processing Time**: 8.9 min
-- **Processing Date**: 2026-02-14 19:13 UTC
+- **Processing Time**: 10.2 min
+- **Processing Date**: 2026-02-14 19:16 UTC
 
 ### Configuration
 
@@ -27,21 +27,19 @@ This dataset contains OCR results from images in [NationalLibraryOfScotland/medi
 - **Output Column**: `markdown`
 - **Dataset Split**: `train`
 - **Batch Size**: 16
+- **Prompt Mode**: ocr
 - **Max Model Length**: 8,192 tokens
 - **Max Output Tokens**: 8,192
-- **Temperature**: 0.01
-- **Top P**: 1e-05
 - **GPU Memory Utilization**: 80.0%
 
 ## Model Information
 
-GLM-OCR is a compact, high-performance OCR model:
-- 0.9B parameters
-- 94.62% on OmniDocBench V1.5
-- CogViT visual encoder + GLM-0.5B language decoder
-- Multi-Token Prediction (MTP) loss for efficiency
-- Multilingual: zh, en, fr, es, ru, de, ja, ko
-- MIT licensed
+dots.ocr is a compact multilingual document parsing model that excels at:
+- 🌍 **100+ Languages** - Multilingual document support
+- 📊 **Table extraction** - Structured data recognition
+- 📐 **Formulas** - Mathematical notation preservation
+- 📝 **Layout-aware** - Reading order and structure preservation
+- 🎯 **Compact** - Only 1.7B parameters
 
 ## Dataset Structure
 
@@ -49,15 +47,40 @@ The dataset contains all original columns plus:
 - `markdown`: The extracted text in markdown format
 - `inference_info`: JSON list tracking all OCR models applied to this dataset
 
+## Usage
+
+```python
+from datasets import load_dataset
+import json
+
+# Load the dataset
+dataset = load_dataset("<output-dataset>", split="train")
+
+# Access the markdown text
+for example in dataset:
+    print(example["markdown"])
+    break
+
+# View all OCR models applied to this dataset
+inference_info = json.loads(dataset[0]["inference_info"])
+for info in inference_info:
+    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
+```
+
 ## Reproduction
 
+This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) dots.ocr script:
+
 ```bash
-uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
+uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
   NationalLibraryOfScotland/medical-history-of-british-india \
   <output-dataset> \
   --image-column image \
   --batch-size 16 \
-  --task ocr
+  --prompt-mode ocr \
+  --max-model-len 8192 \
+  --max-tokens 8192 \
+  --gpu-memory-utilization 0.8
 ```
 
-Generated with [UV Scripts](https://huggingface.co/uv-scripts)
+Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
````
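The `inference_info` column introduced by these README changes can be parsed with the standard library alone. Below is a minimal sketch: the sample record is made up for illustration (the real column is read from the dataset, as in the README's Usage section), while the field names `column_name` and `model_id` follow the Usage snippet.

```python
import json

# Made-up example of an `inference_info` value: a JSON-encoded list with
# one entry per OCR model that has been applied to the dataset.
sample = json.dumps([
    {"column_name": "markdown", "model_id": "rednote-hilab/dots.ocr"},
])

# Parse it the same way the README's Usage section does.
inference_info = json.loads(sample)
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```

Because the column is a plain JSON string rather than a nested feature, a second OCR pass can append its own record without changing the dataset schema.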