---
license: cc-by-4.0
language:
- de
size_categories:
- 1K<n<10K
---
This dataset hosts a sentence-tokenized version of the DFKI MobIE dataset.
The following script can be used to reproduce the creation of the dataset:
```python
import json

from flair.datasets import NER_GERMAN_MOBIE

corpus = NER_GERMAN_MOBIE()

with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {"text": sentence.to_tokenized_string()}
        f_out.write(json.dumps(current_example) + "\n")
```
The extracted dataset has 6,900 sentences.
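Each line of the resulting `train.jsonl` is a standalone JSON object with a single `text` field. A minimal sketch of parsing that format back, using synthetic sample lines (the sentences below are made up for illustration; the real file is produced by the export script above):

```python
import json

# Hypothetical sample lines mimicking the one-object-per-line format
# written by the export script; the real train.jsonl is much larger.
sample_lines = [
    json.dumps({"text": "Die S-Bahn fährt heute nicht ."}),
    json.dumps({"text": "Stau auf der A100 ."}),
]

# Parse each line independently and collect the tokenized sentences.
sentences = [json.loads(line)["text"] for line in sample_lines]
print(len(sentences))
```

Because every line is a self-contained JSON object, the file can also be streamed line by line without loading all 6,900 sentences into memory at once.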