---
license: cc-by-4.0
language:
- de
size_categories:
- n<1K
---

# CO-Fun: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the [CO-Fun: A German Dataset on Company Outsourcing in Fund Prospectuses for Named Entity Recognition and Relation Extraction](https://arxiv.org/abs/2403.15322) dataset.

## Creation

The following script can be used to reproduce the creation of the dataset:

```python
import flair
import json

from flair.datasets.sequence_labeling import ColumnCorpus
from flair.file_utils import cached_path

from pathlib import Path
from typing import Optional, Union


class NER_CO_FUNER(ColumnCorpus):
    def __init__(
        self,
        base_path: Optional[Union[str, Path]] = None,
        in_memory: bool = True,
        **corpusargs,
    ) -> None:
        base_path = flair.cache_root / "datasets" if not base_path else Path(base_path)
        dataset_name = self.__class__.__name__.lower()
        data_folder = base_path / dataset_name
        columns = {0: "text", 2: "ner"}

        hf_download_path = "https://huggingface.co/datasets/stefan-it/co-funer/resolve/main"

        # download the train/dev/test splits into the (possibly user-provided) dataset folder
        for split in ["train", "dev", "test"]:
            cached_path(f"{hf_download_path}/{split}.tsv", data_folder)

        super().__init__(
            data_folder,
            columns,
            in_memory=in_memory,
            comment_symbol=None,
            **corpusargs,
        )

corpus = NER_CO_FUNER()

with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 758 sentences.
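Since each line of `train.jsonl` is a standalone JSON object with a single `"text"` field, the file can be loaded directly with pandas. The sketch below writes a tiny sample file in the same schema (the German sentence is an illustrative placeholder, not taken from the dataset) and reads it back:

```python
import json
import pandas as pd

# Write a tiny JSON Lines sample in the same schema as train.jsonl:
# one JSON object per line, with the tokenized sentence under "text".
sample = [{"text": "Die Verwaltungsgesellschaft hat Aufgaben ausgelagert ."}]
with open("sample.jsonl", "wt") as f_out:
    for example in sample:
        f_out.write(json.dumps(example) + "\n")

# pandas parses JSON Lines natively with lines=True
df = pd.read_json("sample.jsonl", lines=True)
print(len(df), df.loc[0, "text"])
```

The same file also loads with the `datasets` library via `load_dataset("json", data_files="train.jsonl")`.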