---
dataset_info:
config_name: default
features:
- name: image
dtype:
image:
decode: false
- name: xml_content
dtype: string
- name: filename
dtype: string
- name: project_name
dtype: string
splits:
- name: train
num_examples: 27
num_bytes: 412007872
- name: test
num_examples: 2
num_bytes: 412007872
download_size: 824015744
dataset_size: 824015744
configs:
- config_name: default
data_files:
- split: train
path: data/train/**/*.parquet
- split: test
path: data/test/**/*.parquet
tags:
- image-to-text
- htr
- trocr
- transcription
- pagexml
license: mit
---
# Dataset Card for rawxl-test-overwrite
This dataset was created with the pagexml-hf converter from Transkribus PageXML data.
## Dataset Summary
This dataset contains 29 samples across 2 splits.
### Projects Included
- 1611-02-25_Rezess_(HAStK-RBA_Best__82_A_51)
- B_IX_490_duplicated
## Dataset Structure
### Data Splits
- **train**: 27 samples
- **test**: 2 samples
### Dataset Size
- Approximate total size: 785.84 MB
- Total samples: 29
### Features
- **image**: `Image(mode=None, decode=False)`
- **xml_content**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`
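Because the `image` feature is stored with `decode=False`, each record's `image` field arrives as a dict of raw bytes and a path rather than a decoded PIL image. The sketch below shows how to decode it on demand; the record contents are hypothetical stand-ins built in memory (the real rows come from `load_dataset`), assuming Pillow is installed.

```python
import io
from PIL import Image

# Build a hypothetical record mimicking one row of this dataset.
# With decode=False, "image" is {"bytes": ..., "path": ...}, not a PIL image.
buf = io.BytesIO()
Image.new("RGB", (64, 48), color="white").save(buf, format="PNG")
sample = {
    "image": {"bytes": buf.getvalue(), "path": "page_0001.png"},
    "xml_content": "<PcGts>...</PcGts>",
    "filename": "page_0001.xml",
    "project_name": "B_IX_490_duplicated",
}

# Decode the raw bytes only when needed:
img = Image.open(io.BytesIO(sample["image"]["bytes"]))
print(img.size)  # (64, 48)
```

Keeping `decode=False` avoids eagerly decoding every page image, which matters for large scans like these.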
## Data Organization
Data is organized as parquet shards by split and project:
```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```
The Hugging Face Hub automatically merges all parquet files within a split when loading the dataset.
## Usage
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("jwidmer/rawxl-test-overwrite")

# Load a specific split
train_dataset = load_dataset("jwidmer/rawxl-test-overwrite", split="train")
```
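Each row's `xml_content` holds a PageXML document, so line transcriptions can be pulled out with the standard library's `ElementTree`. The snippet below uses a minimal, hypothetical PageXML payload standing in for a real `xml_content` value; actual Transkribus exports use the PAGE namespace shown and carry many more regions and attributes.

```python
import xml.etree.ElementTree as ET

# PAGE content namespace used by Transkribus PageXML exports.
NS = "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"

# Hypothetical minimal stand-in for one row's xml_content field:
xml_content = """<?xml version="1.0"?>
<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15">
  <Page imageFilename="page_0001.png">
    <TextRegion id="r1">
      <TextLine id="l1">
        <TextEquiv><Unicode>Erster Zeilentext</Unicode></TextEquiv>
      </TextLine>
    </TextRegion>
  </Page>
</PcGts>"""

# Collect every <Unicode> transcription in document order:
root = ET.fromstring(xml_content)
lines = [u.text for u in root.iter(f"{{{NS}}}Unicode")]
print(lines)  # ['Erster Zeilentext']
```

Pairing these extracted line texts with the corresponding page images is the usual starting point for HTR training pipelines such as TrOCR fine-tuning.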