---
dataset_info:
  config_name: default
  features:
    - name: image
      dtype:
        image:
          decode: false
    - name: text
      dtype: string
    - name: line_id
      dtype: string
    - name: line_reading_order
      dtype: int64
    - name: region_id
      dtype: string
    - name: region_reading_order
      dtype: int64
    - name: region_type
      dtype: string
    - name: filename
      dtype: string
    - name: project_name
      dtype: string
  splits:
    - name: train
      num_examples: 1148
      num_bytes: 185010852
    - name: test
      num_examples: 61
      num_bytes: 185010852
  download_size: 370021704
  dataset_size: 370021704
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train/**/*.parquet
      - split: test
        path: data/test/**/*.parquet
tags:
  - image-to-text
  - htr
  - trocr
  - transcription
  - pagexml
license: mit
---

# Dataset Card for lines-test-service

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.

## Dataset Summary

This dataset contains 1,209 line-level samples across 2 splits.

## Dataset Structure

### Data Splits

- **train**: 1,148 samples
- **test**: 61 samples

### Dataset Size

- Approximate total size: 352.88 MB
- Total samples: 1,209

### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **line_id**: `Value('string')`
- **line_reading_order**: `Value('int64')`
- **region_id**: `Value('string')`
- **region_reading_order**: `Value('int64')`
- **region_type**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`

## Data Organization

Data is organized as Parquet shards by split and project:

```
data/
├── <split>/
│   └── <project_name>/
│       └── <project_name>-<shard>.parquet
```

The Hugging Face Hub automatically merges all Parquet files when loading the dataset.

## Usage

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("jwidmer/lines-test-service")

# Load a specific split
train_dataset = load_dataset("jwidmer/lines-test-service", split="train")
```

### Projects Included

- B_IX_490_duplicated
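Because the `image` feature is stored with `decode: false`, iterating over the dataset yields each image as a dict of raw bytes (`{"bytes": ..., "path": ...}`) rather than a decoded `PIL.Image`. A minimal sketch of decoding one such field with Pillow, using a hypothetical stand-in sample rather than the real dataset:

```python
import io
from PIL import Image

# Hypothetical stand-in for one dataset row: with decode=False,
# `datasets` returns the image column as {"bytes": ..., "path": ...}.
buf = io.BytesIO()
Image.new("RGB", (64, 32), color="white").save(buf, format="PNG")
sample = {"image": {"bytes": buf.getvalue(), "path": None}, "text": "example line"}

# Decode the raw bytes into a PIL image for an HTR model such as TrOCR.
line_image = Image.open(io.BytesIO(sample["image"]["bytes"]))
print(line_image.size)  # (64, 32)
```

Keeping decoding lazy this way avoids materializing every line image in memory when you only need the metadata columns.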
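The `region_reading_order` and `line_reading_order` columns let you reconstruct full-page text from the line-level samples. A sketch with hypothetical stand-in rows (the real rows carry the same column names):

```python
# Hypothetical rows mimicking the dataset's reading-order columns.
rows = [
    {"text": "third",  "region_reading_order": 1, "line_reading_order": 0},
    {"text": "second", "region_reading_order": 0, "line_reading_order": 1},
    {"text": "first",  "region_reading_order": 0, "line_reading_order": 0},
]

# Sort by region first, then by line within each region, to recover page order.
ordered = sorted(rows, key=lambda r: (r["region_reading_order"], r["line_reading_order"]))
page_text = "\n".join(r["text"] for r in ordered)
print(page_text)  # first / second / third, one per line
```

On the real dataset you would additionally group rows by `filename` so that each page is reassembled separately.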