---
license: cc-by-4.0
language:
  - de
size_categories:
  - 10K<n<100K
---

# German LER: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the German LER dataset.

## Creation

The following script reproduces the creation of the dataset:

```python
import json

from flair.datasets import NER_GERMAN_LEGAL

# Download and load the German LER corpus via Flair
corpus = NER_GERMAN_LEGAL()

# Write the training split as JSONL: one tokenized sentence per line
with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 53,384 sentences.
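Each line of the resulting file is a standalone JSON object with a single `text` field, so the file can be read back with the standard library alone. A minimal sketch, using a hypothetical one-sentence sample in place of the real `train.jsonl`:

```python
import json
import tempfile

# Hypothetical sample record in the same format the creation script emits
sample = [{"text": "Das ist ein Beispiel ."}]

# Write the sample as JSONL (one JSON object per line)
with tempfile.NamedTemporaryFile("w+", suffix=".jsonl", delete=False) as f_out:
    for example in sample:
        f_out.write(json.dumps(example) + "\n")
    path = f_out.name

# Read the file back, collecting the tokenized sentence strings
with open(path) as f_in:
    sentences = [json.loads(line)["text"] for line in f_in]

print(sentences)
```

The same pattern applies to the full `train.jsonl`; libraries such as `datasets` or `pandas` can also parse the JSON-Lines format directly.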