---
license: cc-by-4.0
language:
  - de
size_categories:
  - 10K<n<100K
---

# UD German-HDT: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the Universal Dependencies German-HDT dataset.

## Creation

The following script can be used to reproduce the creation of the dataset:

```python
import json

from flair.datasets import UD_GERMAN_HDT

# Download and load the UD German-HDT corpus via Flair
corpus = UD_GERMAN_HDT()

# Export the training split as JSONL: one JSON object per line,
# each holding the whitespace-tokenized sentence text
with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 153,035 sentences.
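Each line of `train.jsonl` is a standalone JSON object with a single `text` field containing the tokenized sentence. A minimal sketch of parsing that format back with the standard library (the two example sentences below are made up for illustration):

```python
import json

# Hypothetical two-line sample in the same JSONL format as train.jsonl
sample = '{"text": "Das ist ein Satz ."}\n{"text": "Noch ein Beispiel ."}\n'

sentences = [json.loads(line)["text"] for line in sample.splitlines()]
print(sentences)  # ['Das ist ein Satz .', 'Noch ein Beispiel .']
```

The file can also be loaded directly with the 🤗 Datasets library, e.g. `load_dataset("json", data_files="train.jsonl")`.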