---
dataset_info:
- config_name: default
features:
- name: filename
dtype: string
- name: label
dtype: string
- name: url
dtype: string
- name: BDRC_work_id
dtype: string
- name: char_len
dtype: int64
- name: script
dtype: string
- name: print_method
dtype: string
splits:
- name: train
num_bytes: 210467728
num_examples: 601152
- name: eval
num_bytes: 26280512
num_examples: 75136
- name: test
num_bytes: 26308535
num_examples: 75168
download_size: 76386563
dataset_size: 263056775
- config_name: updated_schema
features:
- name: id
dtype: string
- name: label
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 176527069
num_examples: 601152
download_size: 55583886
dataset_size: 176527069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
- split: test
path: data/test-*
- config_name: updated_schema
data_files:
- split: train
path: updated_schema/train-*
---
## Dataset Structure
- **Features:** `filename`, `label`, `url`, `BDRC_work_id`, `char_len`, `script`, `print_method`
- **Splits:** `train`, `eval`, `test`
---
## 📊 Split-wise Metadata
| Split | # Samples | Total Chars (`char_len` sum) |
|-------|----------:|-----------------------------:|
| Train | 601,152 | 37,334,253 |
| Eval | 75,136 | 4,657,320 |
| Test | 75,168 | 4,666,128 |
| Total | 751,456 | 46,657,701 |
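The character totals above are sums over the `char_len` column. A minimal sketch of that aggregation, using toy stand-in rows with the same column (illustrative values only, not real dataset records; on the actual dataset, the same `sum` over a split reproduces the figures in the table):

```python
# Toy rows mirroring the default config's `char_len` column
# (illustrative values, not real dataset records).
rows = [
    {"filename": "page_001.png", "label": "བཀྲ་ཤིས།", "char_len": 8},
    {"filename": "page_002.png", "label": "བདེ་ལེགས།", "char_len": 9},
]

# Summing the column gives a split's total character count.
total_chars = sum(row["char_len"] for row in rows)
print(total_chars)  # 17
```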
---
## 🏷️ Column Value Counts
### print_method
| Split | PrintMethod_Relief_WoodBlock | PrintMethod_Modern |
|-------|-----------------------------:|-------------------:|
| Train | 21,314 | 579,838 |
| Eval | 2,624 | 72,512 |
| Test | 2,565 | 72,603 |
| Total | 26,503 | 724,953 |
### script
| Split | ScriptTibt | ScriptDbuCan | ScriptHani |
|-------|-----------:|-------------:|-----------:|
| Train | 555,594 | 39,733 | 4,188 |
| Eval | 69,420 | 4,981 | 536 |
| Test | 69,343 | 5,093 | 546 |
| Total | 694,357 | 49,807 | 5,270 |
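The per-split counts above can be regenerated by counting column values, e.g. with `collections.Counter`. A minimal sketch on toy rows (the column names match the default config; the row values are illustrative, not real records):

```python
from collections import Counter

# Toy rows mirroring the default config's `script` and `print_method`
# columns (illustrative values, not real dataset records).
rows = [
    {"script": "ScriptTibt", "print_method": "PrintMethod_Modern"},
    {"script": "ScriptTibt", "print_method": "PrintMethod_Relief_WoodBlock"},
    {"script": "ScriptDbuCan", "print_method": "PrintMethod_Modern"},
]

# Count occurrences of each value in the `script` column.
script_counts = Counter(row["script"] for row in rows)
print(script_counts["ScriptTibt"])    # 2
print(script_counts["ScriptDbuCan"])  # 1
```

On the real dataset, the same `Counter` over a split's `script` or `print_method` column yields the table entries above.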
## 🚀 Usage
```python
from datasets import load_dataset

ds = load_dataset("openpecha/OCR-Google_Books", split="train")

# The reduced-schema variant (`id`, `label`, `url`) described above:
ds_updated = load_dataset("openpecha/OCR-Google_Books", "updated_schema", split="train")
```