---
license: cc-by-4.0
language:
- de
size_categories:
- 10K<n<100K
---
# BIOfid: Tokenized Sentences
This dataset hosts a sentence-tokenized version of the [BIOfid](https://github.com/texttechnologylab/BIOfid/tree/master/BIOfid-Dataset-NER) dataset.
## Creation
The following script can be used to reproduce the creation of the dataset:
```python
import json

from flair.datasets import NER_GERMAN_BIOFID

# Load the BIOfid NER corpus via Flair (downloads it on first use)
corpus = NER_GERMAN_BIOFID()

# Write each training sentence as one JSON object per line
with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {
            "text": sentence.to_tokenized_string()
        }
        f_out.write(json.dumps(current_example) + "\n")
```
The extracted dataset has 12,668 sentences.
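
Each line of the resulting `train.jsonl` is a standalone JSON object with a single `text` field holding one whitespace-tokenized sentence, so tokens can be recovered with a plain `split`. A minimal sketch of parsing that format (the sample lines below are hypothetical, not taken from the dataset):

```python
import json

# Hypothetical JSONL lines in the same shape as train.jsonl
sample_lines = [
    '{"text": "Die Vögel singen ."}',
    '{"text": "Der Wald ist grün ."}',
]

# One sentence per line; tokens are separated by single spaces
sentences = [json.loads(line)["text"] for line in sample_lines]
tokens = [sentence.split(" ") for sentence in sentences]
```

The same per-line parsing applies when iterating over the real file with `open("./train.jsonl")`.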