---
dataset_info:
  config_name: default
  features:
  - name: image
    dtype:
      image:
        decode: false
  - name: text
    dtype: string
  - name: line_id
    dtype: string
  - name: line_reading_order
    dtype: int64
  - name: region_id
    dtype: string
  - name: region_reading_order
    dtype: int64
  - name: region_type
    dtype: string
  - name: filename
    dtype: string
  - name: project_name
    dtype: string
  splits:
  - name: train
    num_examples: 447
    num_bytes: 104075904
  download_size: 104075904
  dataset_size: 104075904
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/**/*.parquet
tags:
- image-to-text
- htr
- trocr
- transcription
- pagexml
license: mit
---

# Dataset Card for line-test-cache

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.

## Dataset Summary

This dataset contains 447 samples in a single split.

### Projects Included

- B_IX_490_duplicated
- export_doc2_modell_training_casanatense_pagexml_202507041437

## Dataset Structure

### Data Splits

- **train**: 447 samples

### Dataset Size

- Approximate total size: 99.25 MB
- Total samples: 447

### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **line_id**: `Value('string')`
- **line_reading_order**: `Value('int64')`
- **region_id**: `Value('string')`
- **region_reading_order**: `Value('int64')`
- **region_type**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`

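Note that `image` is stored with `decode: false`, so each sample's `image` field arrives as a dict of raw bytes (plus an optional path) rather than an already-decoded `PIL.Image`. A minimal decoding helper could look like this (a sketch; `{'bytes': ..., 'path': ...}` is the standard undecoded-image layout of the `datasets` library):

```python
import io

from PIL import Image


def decode_sample_image(sample: dict) -> Image.Image:
    """Decode the raw bytes in a sample's undecoded 'image' field.

    With decode=False, the datasets library yields
    {'bytes': b'...', 'path': ...} instead of a PIL image.
    """
    return Image.open(io.BytesIO(sample["image"]["bytes"]))
```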
## Data Organization

Data is organized as Parquet shards by split and project:

```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```

The Hugging Face Hub automatically merges all Parquet files when the dataset is loaded.

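Because shards are grouped per project, a narrower `data_files` glob passed to `load_dataset` can select a single project's shards. The patterns resolve roughly like this (the shard file names below are hypothetical, following the layout above):

```python
from fnmatch import fnmatch

# Hypothetical shard paths following the data/<split>/<project_name>/ layout
shards = [
    "data/train/B_IX_490_duplicated/20250704-0000.parquet",
    "data/train/export_doc2_modell_training_casanatense_pagexml_202507041437/20250704-0000.parquet",
]

# The default config's pattern covers every shard under data/train/
pattern_all = "data/train/*/*.parquet"
# A narrower pattern keeps only one project's shards
pattern_one = "data/train/B_IX_490_duplicated/*.parquet"

matched_all = [p for p in shards if fnmatch(p, pattern_all)]
matched_one = [p for p in shards if fnmatch(p, pattern_one)]
```

Passing such a pattern via `data_files={"train": pattern_one}` to `load_dataset` would then load only that project's shards.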
## Usage

```python
from datasets import load_dataset

# Load entire dataset
dataset = load_dataset("jwidmer/line-test-cache")

# Load specific split
train_dataset = load_dataset("jwidmer/line-test-cache", split="train")
```