---
license: cc-by-4.0
language:
- de
size_categories:
- 10K<n<100K
---
# GermEval 2014: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the [GermEval 2014 NER](https://sites.google.com/site/germeval2014ner/data) dataset.

## Creation

The following script can be used to reproduce the creation of the dataset:
```python
import json

from flair.datasets import NER_GERMAN_GERMEVAL

# Load the GermEval 2014 NER corpus via Flair (downloads it on first use)
germeval_corpus = NER_GERMAN_GERMEVAL()

# Write each training sentence as one JSON object per line
with open("./germeval14/train.jsonl", "wt") as f_out:
    for sentence in germeval_corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```
The extracted training split contains 24,000 sentences.
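Each line of the resulting file is a standalone JSON object with a single `text` key. A minimal sketch for reading the data back (the helper name `read_jsonl` is illustrative; the path matches the creation script above):

```python
import json

def read_jsonl(path):
    """Yield one dict per line; each dict has a "text" key holding a tokenized sentence."""
    with open(path, "rt") as f_in:
        for line in f_in:
            yield json.loads(line)

# Example usage:
# sentences = list(read_jsonl("./germeval14/train.jsonl"))
# print(len(sentences), sentences[0]["text"])
```

Because the format is plain JSON Lines, the file can also be loaded directly with any JSONL-aware tool, such as `datasets.load_dataset("json", data_files=...)`.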