---
license: cc-by-4.0
language:
- de
size_categories:
- n<1K
---

# CO-Fun: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the [CO-Fun: A German Dataset on Company Outsourcing in Fund Prospectuses for Named Entity Recognition and Relation Extraction](https://arxiv.org/abs/2403.15322) dataset.

## Creation

The following script can be used to reproduce the creation of the dataset:

```python
import json
from pathlib import Path
from typing import Optional, Union

import flair
from flair.datasets.sequence_labeling import ColumnCorpus
from flair.file_utils import cached_path


class NER_CO_FUNER(ColumnCorpus):
    def __init__(
        self,
        base_path: Optional[Union[str, Path]] = None,
        in_memory: bool = True,
        **corpusargs,
    ) -> None:
        base_path = flair.cache_root / "datasets" if not base_path else Path(base_path)
        dataset_name = self.__class__.__name__.lower()
        data_folder = base_path / dataset_name
        data_path = flair.cache_root / "datasets" / dataset_name

        # Column 0 holds the token, column 2 the NER tag
        columns = {0: "text", 2: "ner"}

        hf_download_path = "https://huggingface.co/datasets/stefan-it/co-funer/resolve/main"

        # Fetch the train/dev/test splits from the Hugging Face Hub
        for split in ["train", "dev", "test"]:
            cached_path(f"{hf_download_path}/{split}.tsv", data_path)

        super().__init__(
            data_folder,
            columns,
            in_memory=in_memory,
            comment_symbol=None,
            **corpusargs,
        )


corpus = NER_CO_FUNER()

# Write one JSON object per line, each holding the tokenized sentence text
with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 758 sentences.
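
## Usage

Each line of `train.jsonl` is a JSON object with a single `text` field containing a whitespace-tokenized sentence, so the file can be consumed with the standard library alone. A minimal sketch (the inline sample sentences are made up for illustration and are not taken from the dataset):

```python
import json

# Hypothetical two-line sample in the same JSONL format as train.jsonl
sample = (
    '{"text": "Die Verwaltungsgesellschaft hat Aufgaben ausgelagert ."}\n'
    '{"text": "Dies ist ein Beispielsatz ."}\n'
)

# One sentence per line; tokens are recovered by splitting on whitespace
sentences = [json.loads(line)["text"] for line in sample.splitlines()]
tokens = [sentence.split(" ") for sentence in sentences]

print(len(sentences))   # number of parsed sentences
print(tokens[1])        # token list of the second sentence
```

In practice, replace `sample.splitlines()` with iteration over the opened `train.jsonl` file.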