---
license: cc-by-4.0
task_categories:
- text-classification
language:
- de
tags:
- Natural_language_processing
- Knowledge_Representation_and_Reasoning
- Named_entity_recognition
pretty_name: ZEFYS2025
size_categories:
- 10K<n<100K
---

## Motivation

The lack of a freely available, sufficiently large (> 100k tokens) German-language dataset from historical sources that includes named entity tags as well as links to corresponding knowledge base entries (where applicable) for the purpose of historical NER/EL motivated the creation of this dataset. Alternative datasets with similar characteristics are the CoNLL 2003, the GermEval 2014 and the NewsEye 2021 datasets:

Tjong Kim Sang, E., & Meulder, F. D. (2003). Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. *Conference on Computational Natural Language Learning (CoNLL 2003)*. [https://doi.org/10.3115/1119176.1119195](https://doi.org/10.3115/1119176.1119195)

Benikova, D., Biemann, C., & Reznicek, M. (2014). NoSta-D Named Entity Annotation for German: Guidelines and Dataset. *Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)*. [http://www.lrec-conf.org/proceedings/lrec2014/pdf/276_Paper.pdf](http://www.lrec-conf.org/proceedings/lrec2014/pdf/276_Paper.pdf)

Hamdi, A., Linhares Pontes, E., Boros, E., Nguyen, T. T. H., Hackl, G., Moreno, J. G., & Doucet, A. (2021). A Multilingual Dataset for Named Entity Recognition, Entity Linking and Stance Detection in Historical Newspapers. *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21)*. [https://doi.org/10.1145/3404835.3463255](https://doi.org/10.1145/3404835.3463255)
## Source Data

### Initial Data Collection

The 100 historical newspaper pages were selected from the newspaper information system of Berlin State Library, [ZEFYS](https://zefys.staatsbibliothek-berlin.de/?lang=en) (an abbreviation of **ZE**itungsin**F**ormationss**YS**tem), with the aim of providing a sufficiently large and homogeneously annotated dataset. For ZEFYS2025, pre-existing named entity datasets from two distinct projects – Europeana Newspapers (cf. Neudecker, Clemens (2016). An Open Corpus for Named Entity Recognition in Historic Newspapers. *Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)*, pages 4348–4352, Portorož, Slovenia. European Language Resources Association (ELRA). [https://aclanthology.org/L16-1689](https://aclanthology.org/L16-1689)) and SoNAR (cf. Menzel, Sina, Schnaitter, Hannes, Zinck, Josefine, Petras, Vivien, Neudecker, Clemens, Labusch, Kai, Leitner, Elena and Rehm, Georg (2021). Named Entity Linking mit Wikidata und GND – Das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten. *Qualität in der Inhaltserschließung*, edited by Michael Franke-Maier, Anna Kasprzik, Andreas Ledl and Hans Schürmann, Berlin, Boston: De Gruyter Saur, pp. 229-258. DOI: [https://doi.org/10.1515/9783110691597-012](https://doi.org/10.1515/9783110691597-012)) – were combined with additional newly annotated newspaper pages into a single coherent resource.

The full text of 84 of the 100 pages was produced automatically by OCR. No normalisation or modernisation of the OCR output was performed; however, post-OCR correction was applied to the named entities in the dataset in order to present them in a normalised, consistent form. The remaining 16 pages were annotated on the basis of manually transcribed ground truth.
### Source Data Producers

The source data were produced by Staatsbibliothek zu Berlin – Berlin State Library within the continuous process of digitising historical newspapers and providing full texts of selected newspapers.

### Digitisation Pipeline

All of the newspapers were digitised for presentation in the newspaper information system of Berlin State Library, [ZEFYS](https://zefys.staatsbibliothek-berlin.de/?lang=en). Most often, the digitisation was motivated by conservation / preservation concerns. For a part of the newspapers presented in ZEFYS, full texts are provided. The 100 newspaper pages which form the dataset therefore represent a further selection from the newspapers available in ZEFYS.

## Preprocessing and Cleaning

The output of the OCR process performed for the provision of the newspapers in the ZEFYS system was transformed into .tsv format using the page2tsv tool ([https://github.com/qurator-spk/page2tsv](https://github.com/qurator-spk/page2tsv)); tokenization and sentence splitting were performed using SoMaJo ([https://github.com/tsproisl/SoMaJo](https://github.com/tsproisl/SoMaJo)). For initial entity recognition and linking, the tool described in Labusch, K., & Neudecker, C. (2022). Entity Linking in Multilingual Newspapers and Classical Commentaries with BERT. *Conference and Labs of the Evaluation Forum*. 1079-1089. [https://ceur-ws.org/Vol-3180/paper-85.pdf](https://ceur-ws.org/Vol-3180/paper-85.pdf) was used.

## Annotations

### Annotation Process

After the preprocessing described above, human annotators added annotations and entity links or corrected the suggestions provided by the entity recognition and linking systems. For the annotation process, the neat tool (named entity annotation tool) developed by one of the project collaborators was used. Three entity types (persons, locations, organisations) were annotated.
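To illustrate how token-level annotations of this kind are typically consumed, the following sketch parses a small BIO-tagged TSV fragment and collects entity spans. The column names and layout shown here are a simplified illustration, not the exact page2tsv/neat schema, and the sample rows are invented for demonstration.

```python
import csv
import io

# Simplified, assumed excerpt of a token-per-row TSV with a BIO-style
# NE-TAG column; illustrative only, not the exact dataset schema.
sample = """\
TOKEN\tNE-TAG
Die\tO
Staatsbibliothek\tB-ORG
zu\tI-ORG
Berlin\tI-ORG
liegt\tO
in\tO
Berlin\tB-LOC
"""

def extract_entities(tsv_text):
    """Collect (entity_text, entity_type) spans from BIO-tagged rows."""
    entities = []
    current_tokens, current_type = [], None
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        tag = row["NE-TAG"]
        if tag.startswith("B-"):
            # A new entity starts; flush any span collected so far.
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [row["TOKEN"]], tag[2:]
        elif tag.startswith("I-") and current_tokens:
            current_tokens.append(row["TOKEN"])
        else:
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        entities.append((" ".join(current_tokens), current_type))
    return entities

print(extract_entities(sample))
# → [('Staatsbibliothek zu Berlin', 'ORG'), ('Berlin', 'LOC')]
```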
Annotating nested named entities was allowed up to a depth of one, but nested named entities are not considered in the entity linking annotations. Annotations mainly followed the annotation guidelines developed for the Impresso project, see Ehrmann, M., Watter, C., Romanello, M., Clematide, S., & Flückiger, A. (2020). Impresso Named Entity Annotation Guidelines. [https://doi.org/10.5281/zenodo.3585749](https://doi.org/10.5281/zenodo.3585749).

Using the annotation tool, all automatically generated annotations were intellectually checked and revised by a group of German native speakers. This included the verification of the NE tags assigned previously as well as the addition or deletion of tags where needed. For NEs confirmed in this step, existing links were checked for the most precise and correct linking option, and missing links to entities were inserted where available. To flag ambiguous or uncertain cases arising during annotation for later discussion, an extra NE-TAG class *TODO* was used in this process. Consensus about challenging cases collected throughout the annotation was reached in regular discussion meetings, and the set of instructions was expanded iteratively, adding further rules or examples when deemed necessary.

Since the revision was carried out by only one expert per page, it was not possible to calculate inter-annotator agreement or similar measures to assess consistency between multiple annotators. Instead, computational methods were employed to assist in localising and reducing inconsistencies within the extensively annotated dataset. The automated analyses identified tokens recurring throughout the dataset with a high divergence in how they were transcribed, tagged or linked. In additional correction loops, these inconsistencies were systematically reviewed once again by the annotators.

### Annotators

At the time the annotations took place, all annotators were employed at Staatsbibliothek zu Berlin – Berlin State Library.
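The automated consistency analysis mentioned above is not distributed with the dataset; as a purely hypothetical sketch of such a check, recurring tokens whose tags diverge across the corpus can be flagged like this (function name, threshold, and sample data are invented for illustration):

```python
from collections import Counter, defaultdict

def divergent_tokens(tagged_tokens, min_count=3):
    """Flag tokens that recur at least min_count times in the corpus
    but carry more than one distinct NE tag (potential inconsistency)."""
    tags_per_token = defaultdict(Counter)
    for token, tag in tagged_tokens:
        tags_per_token[token][tag] += 1
    flagged = {}
    for token, tag_counts in tags_per_token.items():
        if sum(tag_counts.values()) >= min_count and len(tag_counts) > 1:
            flagged[token] = dict(tag_counts)
    return flagged

# Invented sample: "Berlin" is tagged inconsistently, "und" consistently.
corpus = [("Berlin", "B-LOC"), ("Berlin", "B-LOC"), ("Berlin", "B-ORG"),
          ("Preußen", "B-LOC"), ("und", "O"), ("und", "O"), ("und", "O")]
print(divergent_tokens(corpus))
# → {'Berlin': {'B-LOC': 2, 'B-ORG': 1}}
```

Flagged tokens would then be queued for another manual correction loop, as described above.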
Socio-demographic information on the annotators is not available, but see the information provided above in the section "Dataset curators".

### Crowd Labour

Not applicable.

## Data Provenance

All 100 newspaper pages have been selected from ZEFYS, the newspaper information system of Berlin State Library. This portal provides free access to historical newspapers held and digitised by the Berlin State Library up to 1945 under the Public Domain Mark (PDM). The "DDR-Presse" newspapers are an exception, as specific rights and access restrictions apply to them; therefore, no newspapers from the DDR-Presse project were included in this dataset.

## Use of Linked Open Data, Controlled Vocabulary, Multilingual Ontologies/Taxonomies

Where possible, named entities were linked to Wikidata entries. Beyond this knowledge graph, no other controlled vocabularies or multilingual ontologies were used during the establishment of the dataset.

## Version Information

There is no previous version of this dataset.

### Release Date

2025-09-10

### Date of Modification

Not applicable.

### Checksums

**MD5 checksum of the ZEFYS2025.zip:** 76f0948086b3923ef751e1fd3a1805a1

**SHA256 checksum of the ZEFYS2025.zip:** f1a0d931f4ba4fc3c3330df26bcde33354df3c223d2e0af8d633c912c7a996ea

## Maintenance Plan

### Maintenance Level

*Actively Maintained* – This dataset will be actively maintained, including but not limited to updates to the data.

### Update Periodicity

Only a part of the 100 .tsv files (about one third) contains coordinates locating the tokens on the page facsimiles. It is foreseen to update these data for all 100 newspaper pages in future work.

# Examples and Considerations for Using the Data

This dataset was established for the training of machine learning models capable of correctly identifying named entities and linking them to Wikidata entries.
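Before working with the data, the integrity of the downloaded archive can be verified against the checksums published in this datasheet; a minimal sketch using Python's standard library:

```python
import hashlib

def file_digests(path, chunk_size=1 << 20):
    """Compute MD5 and SHA256 of a file by streaming it in chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

# Compare against the checksums given in this datasheet:
# md5_hex, sha256_hex = file_digests("ZEFYS2025.zip")
# assert md5_hex == "76f0948086b3923ef751e1fd3a1805a1"
# assert sha256_hex == "f1a0d931f4ba4fc3c3330df26bcde33354df3c223d2e0af8d633c912c7a996ea"
```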
The tasks of named entity recognition and entity linking are conceived to enable these information extraction techniques especially for historical newspapers. However, they might also work well on other digital assets if they come from a comparable time span.

## Ethical Considerations

### Personal and Other Sensitive Information

The dataset does not contain personal or sensitive information beyond what is available on Wikidata anyway. It does not contain any sensitive data in the sense of contemporary privacy laws.

### Discussion of Biases

As the 100 newspaper pages were more or less randomly selected from the digitised newspaper collections of Berlin State Library, the emphasis of this digital collection can be regarded as a bias: the focus is on newspapers published in Prussia. The historical worldviews and preferences reflected in such newspapers within the time span between 1837 and 1940 can clearly be understood as biases that reflect the role of Prussia as a great European power and core of the German Empire after 1870/71. However, given the long time frame, this dataset also reflects linguistic change as well as changing preferences as to what is newsworthy and should therefore be reported in newspapers. This linguistic change and shift of focus is especially evident in the newspapers printed during the Weimar Republic.

### Potential Societal Impact of Using the Dataset

In this dataset, persons, locations and organisations are identified and linked that were mentioned in German-language newspapers published before 1940. Most probably, the societal impact of the dataset is therefore very low. However, advancing information extraction techniques facilitates the creation of knowledge and furthers research as well as the discovery of new sources.
## Examples of Datasets, Publications and Models that (re-)use the Dataset

The dataset has been used to fine-tune and evaluate various models on the NER downstream task (see [https://github.com/qurator-spk/sbb_ner_hf](https://github.com/qurator-spk/sbb_ner_hf)). The data selection and annotation process as well as the results of model training and evaluation have been published as a contribution to the [KONVENS 2025 Conference on Natural Language Processing](https://konvens-2025.hs-hannover.de/).

## Known Non-Ethical Limitations

In general, models pretrained on historical German datasets perform better on historical datasets, whereas models pretrained on contemporary data perform better on contemporary datasets.

## Unanticipated Uses made of this Dataset

Not applicable.

Datasheet as of September 10th, 2025