---
license: cc-by-4.0
language:
  - de
size_categories:
  - n<1K
---

# CO-Fun: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the *CO-Fun: A German Dataset on Company Outsourcing in Fund Prospectuses for Named Entity Recognition and Relation Extraction* dataset.
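Each line of the resulting `train.jsonl` file stores one tokenized sentence as a JSON object with a single `text` key. The record below is an illustrative, made-up example of the layout (not an actual sentence from the dataset):

```json
{"text": "Die Verwaltungsgesellschaft hat Aufgaben an einen Dienstleister ausgelagert ."}
```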

## Creation

The following script can be used to reproduce the creation of the dataset:

```python
import flair
import json

from flair.datasets.sequence_labeling import ColumnCorpus
from flair.file_utils import cached_path

from pathlib import Path
from typing import Optional, Union


class NER_CO_FUNER(ColumnCorpus):
    def __init__(
        self,
        base_path: Optional[Union[str, Path]] = None,
        in_memory: bool = True,
        **corpusargs,
    ) -> None:
        base_path = flair.cache_root / "datasets" if not base_path else Path(base_path)
        dataset_name = self.__class__.__name__.lower()
        data_folder = base_path / dataset_name
        data_path = flair.cache_root / "datasets" / dataset_name

        # Column 0 holds the token, column 2 the NER tag.
        columns = {0: "text", 2: "ner"}

        hf_download_path = "https://huggingface.co/datasets/stefan-it/co-funer/resolve/main"

        # Download the original CoNLL-style splits from the Hugging Face Hub.
        for split in ["train", "dev", "test"]:
            cached_path(f"{hf_download_path}/{split}.tsv", data_path)

        super().__init__(
            data_folder,
            columns,
            in_memory=in_memory,
            comment_symbol=None,
            **corpusargs,
        )


corpus = NER_CO_FUNER()

# Export the training split: one JSON object with a "text" key per line.
with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 758 sentences.
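To consume the file, read it back one JSON object per line. The snippet below is a minimal sketch using only the standard library: it first writes a two-record sample in the same layout as the export script above (the real `train.jsonl` holds 758 records) and then parses it back:

```python
import json
from pathlib import Path

# Illustrative records in the same layout the export script produces;
# the actual train.jsonl contains 758 such lines.
sample = [
    {"text": "Die Verwaltungsgesellschaft hat Aufgaben ausgelagert ."},
    {"text": "Die Depotbank ist die Bank AG ."},
]

path = Path("sample.jsonl")
with path.open("wt") as f_out:
    for record in sample:
        f_out.write(json.dumps(record) + "\n")

# Each line is one JSON object; "text" holds the tokenized sentence.
sentences = [json.loads(line)["text"] for line in path.open()]
print(len(sentences))  # 2
```

The same file can also be loaded with `pandas.read_json(..., lines=True)` if a DataFrame is preferred.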