---
license: mit
task_categories:
  - token-classification
language:
  - nl
tags:
  - digital_humanities
size_categories:
  - 1K<n<10K
---

Dataset Card for globalise_NER_token_classification

The globalise_NER_token_classification dataset provides fine-grained annotations for training token-classification NER models on Dutch East India Company (VOC) texts from the 17th and 18th centuries.

Dataset Details

Dataset Description

The dataset provides 15 fine-grained labels detailing activities and people of the Dutch East India Company (VOC), and can be used to train token-classification NER models for 17th- and 18th-century texts in the VOC domain. The texts are taken from the Overgebleven Brieven & Papieren corpus and were preprocessed for annotation as described in (Arnoult et al., 2025).

  • Curated by: Brecht Nijman
  • Funded by: Dutch Research Council (NWO)
  • Shared by: Globalise team
  • Language(s) (NLP): nl (Early Modern Dutch)
  • License: MIT

Uses

The dataset is intended for training NER token-classification models for Early Modern Dutch in the VOC domain.

Direct Use

Training or data augmentation for historical Dutch NER models.
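For illustration, token-classification NER data pairs each token with a tag, commonly in a BIO scheme. The sketch below decodes such a tag sequence into entity spans; the tokens and label names (SHIP, LOC) are invented for illustration and are not necessarily the dataset's actual tagset or encoding.

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (label, start, end) spans, end exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        # A B- tag, or an I- tag whose label does not continue the open span,
        # starts a new entity; an O tag closes any open span.
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if label is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O":
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

# Hypothetical example: "schip de Batavia naar Bantam"
tags = ["O", "B-SHIP", "I-SHIP", "O", "B-LOC"]
print(bio_to_spans(tags))  # → [('SHIP', 1, 3), ('LOC', 4, 5)]
```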

Out-of-Scope Use

The training data represents a historical variant of Dutch and a restricted domain (VOC documents), and is not expected to be useful for other variants of Dutch or domains.

Dataset Structure

Annotations were collected in several rounds: a first part of the training data was collected together with the validation data, with a random split based on token sequences; a later round of annotations was split by document between additional training data and test data. See (Arnoult et al., 2025) for more details.

|           | train | validation | test  |
|-----------|------:|-----------:|------:|
| sequences | 576   | 78         | 98    |
| tokens    | 44695 | 6001       | 10133 |
| entities  | 5932  | 893        | 887   |
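The split sizes above can be compared directly; a quick arithmetic check of entity density and sequence length per split (figures taken from the table):

```python
# Split statistics copied from the table above.
splits = {
    "train":      {"sequences": 576, "tokens": 44695, "entities": 5932},
    "validation": {"sequences": 78,  "tokens": 6001,  "entities": 893},
    "test":       {"sequences": 98,  "tokens": 10133, "entities": 887},
}

for name, s in splits.items():
    density = s["entities"] / s["tokens"]   # entities per token
    avg_len = s["tokens"] / s["sequences"]  # mean tokens per sequence
    print(f"{name}: {density:.3f} entities/token, {avg_len:.1f} tokens/sequence")
```

Note that the test split, drawn from different documents than the random train/validation split, has a noticeably lower entity density (about 0.088 entities per token versus about 0.133 for train).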

Dataset Creation

Curation Rationale

The dataset was created to provide domain-specific NER labels for the processing of the Overgebleven Brieven & Papieren corpus.

Source Data

The Overgebleven Brieven & Papieren corpus, a collection of various documents written under the VOC administration: reports, ship inventories, letters, etc.

Data Collection and Processing

The source corpus was preprocessed as follows: documents were manually reconstructed from page scans to provide coherent contexts for annotation. Twenty-six documents were selected for annotation, spanning the years 1618 to 1782.

Who are the source data producers?

The corpus texts were written by the VOC administration, and later collected by the Huygens Instituut and its predecessors.

Annotations

The annotations enrich the texts with 15 NER tags, covering common entity types (persons, locations, organisations), VOC-domain types (commodities, ships), and fine-grained types for people (profession, status). The tagset is described further in (Arnoult et al., 2025).

Annotation process

See (Arnoult et al., 2025).

Who are the annotators?

See (Arnoult et al., 2025).

Personal and Sensitive Information

The dataset contains personal information about historical individuals.

Bias, Risks, and Limitations

The source corpus is biased, representing the standpoint of an early colonial organisation that notably used military force and engaged in the slave trade.

Recommendations

Users should be aware of the often violent character of the texts underlying the annotations.

Citation

BibTeX:

[tbd]

APA:

[tbd]

Dataset Card Contact

Sophie Arnoult