---
dataset_info:
  config_name: default
  features:
  - name: image
    dtype:
      image:
        decode: false
  - name: text
    dtype: string
  - name: line_id
    dtype: string
  - name: line_reading_order
    dtype: int64
  - name: region_id
    dtype: string
  - name: region_reading_order
    dtype: int64
  - name: region_type
    dtype: string
  - name: filename
    dtype: string
  - name: project_name
    dtype: string
  splits:
  - name: train
    num_examples: 1486
    num_bytes: 416796252
  download_size: 416796252
  dataset_size: 416796252
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/**/*.parquet
tags:
- image-to-text
- htr
- trocr
- transcription
- pagexml
license: mit
---
# Dataset Card for rawxml-to-line-test

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.
## Dataset Summary

This dataset contains 1486 samples in a single split.
### Projects Included
- 1611-02-25_Rezess_(HAStK-RBA_Best__82_A_51)
- B_IX_490_duplicated
## Dataset Structure

### Data Splits
- train: 1486 samples
### Dataset Size
- Approximate total size: 397.49 MB
- Total samples: 1486
### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **line_id**: `Value('string')`
- **line_reading_order**: `Value('int64')`
- **region_id**: `Value('string')`
- **region_reading_order**: `Value('int64')`
- **region_type**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`
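The `line_reading_order` and `region_reading_order` fields can be used to reconstruct the full text of a page from its individual line records. A minimal sketch using only the standard library is shown below; the record values are hypothetical stand-ins for rows of this dataset, not actual samples.

```python
from itertools import groupby

# Hypothetical per-line records mimicking this dataset's schema
records = [
    {"filename": "page1.xml", "region_reading_order": 1, "line_reading_order": 0, "text": "second region, first line"},
    {"filename": "page1.xml", "region_reading_order": 0, "line_reading_order": 1, "text": "world"},
    {"filename": "page1.xml", "region_reading_order": 0, "line_reading_order": 0, "text": "hello"},
]

def page_text(rows):
    """Join line transcriptions in reading order (region first, then line)."""
    ordered = sorted(rows, key=lambda r: (r["region_reading_order"], r["line_reading_order"]))
    return "\n".join(r["text"] for r in ordered)

# Group records by source page and reassemble each page's text
pages = {
    fname: page_text(list(rows))
    for fname, rows in groupby(
        sorted(records, key=lambda r: r["filename"]),
        key=lambda r: r["filename"],
    )
}
print(pages["page1.xml"])  # hello / world / second region, first line
```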
### Data Organization
Data is organized as parquet shards by split and project:
```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```

The Hugging Face Hub automatically merges all parquet files when the dataset is loaded.
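For a local checkout of this layout, shards can be enumerated with the same `data/train/**/*.parquet` glob pattern used in the config. The sketch below builds a throwaway directory purely for illustration; the shard name is hypothetical.

```python
from pathlib import Path
import tempfile

# Build an example of the layout described above in a temp dir
root = Path(tempfile.mkdtemp())
shard_dir = root / "data" / "train" / "B_IX_490_duplicated"
shard_dir.mkdir(parents=True)
(shard_dir / "20240101-000.parquet").touch()  # hypothetical shard name

# Enumerate parquet shards with the config's glob pattern
shards = sorted((root / "data").glob("train/**/*.parquet"))
print([p.name for p in shards])  # ['20240101-000.parquet']
```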
## Usage

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("jwidmer/rawxml-to-line-test")

# Load a specific split
train_dataset = load_dataset("jwidmer/rawxml-to-line-test", split="train")
```