---
dataset_info:
  config_name: default
  features:
  - name: image
    dtype:
      image:
        decode: false
  - name: text
    dtype: string
  - name: line_id
    dtype: string
  - name: line_reading_order
    dtype: int64
  - name: region_id
    dtype: string
  - name: region_reading_order
    dtype: int64
  - name: region_type
    dtype: string
  - name: filename
    dtype: string
  - name: project_name
    dtype: string
  splits:
  - name: train
    num_examples: 1148
    num_bytes: 185010852
  - name: test
    num_examples: 61
    num_bytes: 185010852
  download_size: 370021704
  dataset_size: 370021704
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/**/*.parquet
  - split: test
    path: data/test/**/*.parquet
tags:
  - image-to-text
  - htr
  - trocr
  - transcription
  - pagexml
license: mit
---

# Dataset Card for lines-test-service

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.

## Dataset Summary

This dataset contains 1,209 samples across 2 splits.

## Dataset Structure

### Data Splits

- **train**: 1,148 samples
- **test**: 61 samples

### Dataset Size

- Approximate total size: 352.88 MB
- Total samples: 1,209

### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **line_id**: `Value('string')`
- **line_reading_order**: `Value('int64')`
- **region_id**: `Value('string')`
- **region_reading_order**: `Value('int64')`
- **region_type**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`
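The `region_reading_order` and `line_reading_order` fields make it possible to reconstruct a page's full transcription from its line records. A minimal sketch, using only field names from the schema above (the grouping by `filename` and the helper name `page_text` are illustrative, not part of the dataset tooling):

```python
def page_text(records):
    """Rebuild one page's transcription from its line records,
    ordering by region reading order, then line reading order."""
    ordered = sorted(
        records,
        key=lambda r: (r["region_reading_order"], r["line_reading_order"]),
    )
    return "\n".join(r["text"] for r in ordered)
```

To apply this to the dataset, first filter the split down to the records sharing one `filename`, then pass those records to the helper.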

## Data Organization

Data is organized as parquet shards by split and project:
```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```

The Hugging Face Hub merges all Parquet shards matching a split's glob pattern automatically when the dataset is loaded.

## Usage

```python
from datasets import load_dataset

# Load entire dataset
dataset = load_dataset("jwidmer/lines-test-service")

# Load specific split
train_dataset = load_dataset("jwidmer/lines-test-service", split="train")
```
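Because the `image` feature is declared with `decode: false`, each sample's `image` field arrives as a dict of raw bytes rather than a decoded image. A sketch of decoding one sample manually (assumes Pillow is installed; `decode_sample` is an illustrative helper, not part of the dataset tooling):

```python
import io

from PIL import Image  # Pillow, assumed available


def decode_sample(sample):
    """Decode one record's raw image bytes into a PIL image,
    returning it together with the line transcription."""
    img = Image.open(io.BytesIO(sample["image"]["bytes"]))
    return img, sample["text"]
```

Deferring decoding this way keeps iteration cheap when you only need the metadata fields and lets you open the image bytes only for the records you actually train on.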

### Projects Included

- B_IX_490_duplicated