jwidmer committed · verified · Commit fd45481 · 1 Parent(s): aa367ed

Update dataset card
---
dataset_info:
  config_name: default
  features:
  - name: image
    dtype:
      image:
        decode: false
  - name: text
    dtype: string
  - name: line_id
    dtype: string
  - name: line_reading_order
    dtype: int64
  - name: region_id
    dtype: string
  - name: region_reading_order
    dtype: int64
  - name: region_type
    dtype: string
  - name: filename
    dtype: string
  - name: project_name
    dtype: string
  splits:
  - name: train
    num_examples: 447
    num_bytes: 104075904
  download_size: 104075904
  dataset_size: 104075904
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/**/*.parquet
tags:
- image-to-text
- htr
- trocr
- transcription
- pagexml
license: mit
---

# Dataset Card for line-test-cache

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.

## Dataset Summary

This dataset contains 447 samples in a single split.

### Projects Included

- B_IX_490_duplicated
- export_doc2_modell_training_casanatense_pagexml_202507041437

## Dataset Structure

### Data Splits

- **train**: 447 samples

### Dataset Size

- Approximate total size: 99.25 MB
- Total samples: 447

### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **line_id**: `Value('string')`
- **line_reading_order**: `Value('int64')`
- **region_id**: `Value('string')`
- **region_reading_order**: `Value('int64')`
- **region_type**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`

## Data Organization

Data is organized as Parquet shards by split and project:
```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```

The Hugging Face Hub automatically merges all Parquet files when loading the dataset.

## Usage

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("jwidmer/line-test-cache")

# Load a specific split
train_dataset = load_dataset("jwidmer/line-test-cache", split="train")
```