# Dataset Card for transkribus-exports-30840-raw-xml
This dataset was created with the pagexml-hf converter from Transkribus PageXML exports.
## Dataset Summary
This dataset contains 137,487 samples in a single split: page images and their transcriptions, in German, from the 19th century.
The dataset was used to train the linked TrOCR model: https://huggingface.co/dh-unibe/trocr-kurrent.
For an evaluation see: Hodel, Tobias, David Schoch, Christa Schneider, and Jake Purcell. “General Models for Handwritten Text Recognition: Feasibility and State-of-the Art. German Kurrent as an Example.” Journal of Open Humanities Data 7 (July 2021): 13. (https://doi.org/10.5334/johd.46).
## Dataset Includes

Text from:
- University Archives of Greifswald: Konzilsprotokolle (see also: https://ariadne-portal.uni-greifswald.de/?arc=1&type=obj&id=5164866)
  - A_Haeckermann
- GTA ETHZ
  - Parts of the ground truth (GT) produced within the scholarly edition "Semper" (https://www.semper-edition.ch/)
## Dataset Structure

### Data Splits
- train: 137,487 samples

### Dataset Size
- Approximate total size: 994,516.35 MB (~994.5 GB)
- Total samples: 137,487

### Features
- `image`: Image(mode=None, decode=False)
- `xml_content`: Value('string')
- `filename`: Value('string')
- `project_name`: Value('string')
## Data Organization

Data is organized as parquet shards by split and project:

```
data/
├── <split>/
│   └── <project_name>/
│       └── <timestamp>-<shard>.parquet
```
The Hugging Face Hub automatically merges all parquet shards when the dataset is loaded.
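Because the shards are plain parquet files laid out by project, a single project can also be loaded on its own by passing a glob pattern to `load_dataset`'s `data_files` argument. A minimal sketch of how such a pattern selects shards under this layout (the shard file names below are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical shard paths following the layout above.
paths = [
    "data/train/MM_1_004/20240101120000-00000.parquet",
    "data/train/MM_1_004/20240101120000-00001.parquet",
    "data/train/OtherProject/20240101120000-00000.parquet",
]

# A glob pattern restricted to one project's shards; the same pattern
# can be passed as data_files to load_dataset to load only that subset.
pattern = "data/train/MM_1_004/*.parquet"
subset = [p for p in paths if fnmatch(p, pattern)]
print(subset)  # only the two MM_1_004 shards
```

In practice that would look like `load_dataset("dh-unibe/transkribus-exports-30840-raw-xml", data_files="data/train/MM_1_004/*.parquet")`; glob patterns for `data_files` are supported by the `datasets` library, though this exact call is untested here.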
## Usage

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("dh-unibe/transkribus-exports-30840-raw-xml")

# Load a specific split
train_dataset = load_dataset("dh-unibe/transkribus-exports-30840-raw-xml", split="train")
```
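Each `xml_content` value is a raw PAGE XML export, with line transcriptions nested under `TextLine/TextEquiv/Unicode` elements. A minimal sketch of extracting them with the standard library (the PAGE namespace URI varies by schema version, so tags are matched by suffix rather than by a fixed namespace):

```python
import xml.etree.ElementTree as ET

def extract_lines(xml_content: str) -> list[str]:
    """Collect line transcriptions from a PAGE XML string."""
    root = ET.fromstring(xml_content)
    lines = []
    for elem in root.iter():
        # Namespaced tags look like "{<ns-uri>}TextLine".
        if elem.tag.endswith("}TextLine"):
            for child in elem.iter():
                if child.tag.endswith("}Unicode") and child.text:
                    lines.append(child.text)
                    break  # keep the first TextEquiv/Unicode per line
    return lines

# e.g.: texts = extract_lines(dataset[0]["xml_content"])
```

This reads the whole XML into memory per sample, which is fine at page scale; for stricter parsing one could bind the exact PAGE namespace of a given export instead of suffix matching.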