Files changed (1)
  1. README.md +44 -29
README.md CHANGED
@@ -38,46 +38,61 @@ configs:
  path: data/test-*
  ---

- ## Dataset Structure
-
- - **Features:** `filename`, `label`, `url`, `BDORC_work_id`, `char_len`, `script`, `print_method`
- - **Splits:** `train`, `eval`, `test`
-
- ---
-
- ## 📊 Split-wise Metadata
-
- | Split | # Samples | Total Chars (`char_len` sum) |
- |-------|----------:|-----------------------------:|
- | Train | 601,152 | 37,334,253 |
- | Eval | 75,136 | 4,657,320 |
- | Test | 75,168 | 4,666,128 |
- | Total | 751,456 | 46,657,701 |
 
- ---
-
- ## 🏷️ Column Value Counts
-
- ### print_method
-
- | Split | PrintMethod_Relief_WoodBlock | PrintMethod_Modern |
- |-------|-----------------------------:|-------------------:|
- | Train | 21,314 | 579,838 |
- | Eval | 2,624 | 72,512 |
- | Test | 2,565 | 72,603 |
- | Total | 26,503 | 724,953 |
-
- ### script
-
- | Split | ScriptTibt | ScriptDbuCan | ScriptHani |
- |-------|-----------:|-------------:|-----------:|
- | Train | 555,594 | 39,733 | 4,188 |
- | Eval | 69,420 | 4,981 | 536 |
- | Test | 69,343 | 5,093 | 546 |
- | Total | 694,357 | 49,807 | 5,270 |
-
- ## 🚀 Usage
-
  ```python
  from datasets import load_dataset
- ds = load_dataset("openpecha/OCR-Google_Books", split="train")
  path: data/test-*
  ---

+ # Dataset Card for OCR-Google_Books
+
+ A line-to-text dataset for Tibetan OCR.
+
+ ## Dataset Details
+
+ ### Dataset Description
+ - **Curated by:** Buddhist Digital Resource Center
+ - **Language:** Tibetan
+ - **Total Samples:** 751,456 line images with text transcriptions
+
+ ### Dataset Structure
+ - **Features:**
+   - `id`: Image file identifier
+   - `label`: Text transcription
+   - `url`: Source URL of the original document
+ - **Splits:**
+   - **Train:** 601,152 samples (37.3M characters)
+   - **Eval:** 75,136 samples (4.7M characters)
+   - **Test:** 75,168 samples (4.7M characters)
+
+ ## Uses
+
+ ### Direct Use
+ - Training and evaluation of Tibetan OCR models
+ - Multi-script OCR development
+ - Comparative analysis of modern vs. traditional printing methods
+ - Large-scale OCR model pretraining
+
+ ### Out-of-Scope Use
+ - Not suitable for handwritten Tibetan texts
+ - May not adequately represent contemporary digital Tibetan fonts
+
+ ## Dataset Creation
+
+ ### Curation Rationale and Process
+ This dataset was created to support the development of robust OCR systems for Tibetan literature, encompassing both modern typography and traditional woodblock printing methods. The inclusion of multiple scripts and printing techniques makes it valuable for training models that can handle diverse Tibetan textual sources.
+
+ The dataset is constructed from Google Books scans of Tibetan texts, with line-level image-text pairs extracted from the scanned pages.
+
+ ## Usage
+
  ```python
  from datasets import load_dataset
+
+ # Load training split
+ dataset = load_dataset("openpecha/OCR-Google_Books", split="train")
+
+ # Example features
+ print(dataset[0])
+ # {'id': 'I1KG1163750042_0025',
+ #  'label': 'ཡིན་པས་ཆབ་སྲིད་དང་འབྲེལ་བ་བྱུང་བ་ཙམ་ལ་ངོ་མཚར་དགོས་དོན་གང་',
+ #  'url': 'https://s3.amazonaws.com/monlam.ai.ocr/OCR/training_images/I1KG1163750042_0025.jpg'}
+ ```
+
+ ## Dataset Contact
+ BDRC - help@bdrc.org
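As a quick sanity check on the split statistics in this change, the per-split sample and character counts stated in the card can be summed to confirm the Total row. This is an illustrative snippet using only numbers quoted above, not part of the dataset card itself:

```python
# Per-split (sample count, total characters), as stated in the dataset card.
splits = {
    "train": (601_152, 37_334_253),
    "eval": (75_136, 4_657_320),
    "test": (75_168, 4_666_128),
}

# Sum each column across splits to recover the Total row.
total_samples = sum(n for n, _ in splits.values())
total_chars = sum(c for _, c in splits.values())

print(total_samples)  # 751456
print(total_chars)    # 46657701
```

The same column-wise summation exposes the errors in the original `print_method` and `script` Total rows, whose values were swapped or mistyped relative to the per-split counts.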