---
dataset_info:
  config_name: default
  features:
    - name: image
      dtype:
        image:
          decode: false
    - name: text
      dtype: string
    - name: line_id
      dtype: string
    - name: line_reading_order
      dtype: int64
    - name: region_id
      dtype: string
    - name: region_reading_order
      dtype: int64
    - name: region_type
      dtype: string
    - name: filename
      dtype: string
    - name: project_name
      dtype: string
  splits:
  - name: train
    num_examples: 447
    num_bytes: 104075904
  download_size: 104075904
  dataset_size: 104075904
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/**/*.parquet
tags:
  - image-to-text
  - htr
  - trocr
  - transcription
  - pagexml
license: mit
---

# Dataset Card for line-test-cache

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.

## Dataset Summary

This dataset contains 447 samples in a single split.

### Projects Included

- B_IX_490_duplicated
- export_doc2_modell_training_casanatense_pagexml_202507041437

## Dataset Structure

### Data Splits

- **train**: 447 samples

### Dataset Size

- Approximate total size: 99.25 MB
- Total samples: 447

### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **line_id**: `Value('string')`
- **line_reading_order**: `Value('int64')`
- **region_id**: `Value('string')`
- **region_reading_order**: `Value('int64')`
- **region_type**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`

## Data Organization

Data is organized as parquet shards by split and project:
```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```

The Hugging Face Hub automatically merges all Parquet files matching the configured glob when the dataset is loaded.

## Usage

```python
from datasets import load_dataset

# Load entire dataset
dataset = load_dataset("jwidmer/line-test-cache") 

# Load specific split
train_dataset = load_dataset("jwidmer/line-test-cache", split="train")
```