Salommee and asahi417 committed
Commit f400fee · verified · 0 Parent(s)

Duplicate from tner/conll2003

Co-authored-by: Asahi Ushio <asahi417@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition
+ pretty_name: CoNLL-2003
+ ---
+
+ # Dataset Card for "tner/conll2003"
+
+ ## Dataset Description
+
+ - **Repository:** [T-NER](https://github.com/asahi417/tner)
+ - **Paper:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
+ - **Dataset:** CoNLL 2003
+ - **Domain:** News
+ - **Number of Entity Types:** 4
+
+ ### Dataset Summary
+ The CoNLL-2003 NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
+ - Entity Types: `ORG`, `PER`, `LOC`, `MISC`
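For quick experimentation the data can be loaded straight from the Hub with the `datasets` library. A minimal sketch (assuming only that `datasets` is installed; the repository id `tner/conll2003` is the one named in this card):

```python
from datasets import load_dataset

# Downloads and caches the train/validation/test splits defined by the loading script.
dataset = load_dataset("tner/conll2003")

# Each record carries parallel "tokens" and "tags" sequences.
print(dataset["train"][0])
```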
+
+ ## Dataset Structure
+
+ ### Data Instances
+ An example of `train` looks as follows.
+
+ ```
+ {
+     'tokens': ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',', 'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.'],
+     'tags': [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]
+ }
+ ```
+
+ ### Label ID
+ The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/conll2003/raw/main/dataset/label.json).
+ ```python
+ {
+     "O": 0,
+     "B-ORG": 1,
+     "B-MISC": 2,
+     "B-PER": 3,
+     "I-PER": 4,
+     "B-LOC": 5,
+     "I-ORG": 6,
+     "I-MISC": 7,
+     "I-LOC": 8
+ }
+ ```
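To turn the integer tags back into label strings, the dictionary above can simply be inverted. A small sketch (the `label2id` literal is copied from `dataset/label.json`; the sample sentence is the `train` instance shown earlier):

```python
label2id = {
    "O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
    "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8,
}
# Invert the mapping so tag IDs can be decoded to label strings.
id2label = {i: label for label, i in label2id.items()}

tokens = ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',', 'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.']
tags = [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]

# Pair every token with its decoded label, e.g. ('JAPAN', 'B-LOC').
print([(tok, id2label[tag]) for tok, tag in zip(tokens, tags)])
```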
+
+ ### Data Splits
+
+ | name      | train | validation | test |
+ |-----------|------:|-----------:|-----:|
+ | conll2003 | 14041 |       3250 | 3453 |
+
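The counts in the table can be checked directly against the loaded `DatasetDict`; a quick sketch, reusing the `dataset` object from the loading example above:

```python
# Expected output: train 14041, validation 3250, test 3453.
for split, ds in dataset.items():
    print(split, len(ds))
```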
+ ### Licensing Information
+
+ From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
+
+ > The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
+
+ The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
+
+ > The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
+ >
+ > [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
+ >
+ > This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
+ >
+ > [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
+ >
+ > This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
+     title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
+     author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
+     booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
+     year = "2003",
+     url = "https://www.aclweb.org/anthology/W03-0419",
+     pages = "142--147",
+ }
+ ```
conll2003.py ADDED
@@ -0,0 +1,77 @@
+ """ NER dataset compiled by T-NER library https://github.com/asahi417/tner/tree/master/tner """
+ import json
+ from itertools import chain
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+ _DESCRIPTION = """[CoNLL 2003 NER dataset](https://aclanthology.org/W03-0419/)"""
+ _NAME = "conll2003"
+ _VERSION = "1.0.0"
+ _CITATION = """
+ @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
+     title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
+     author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
+     booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
+     year = "2003",
+     url = "https://www.aclweb.org/anthology/W03-0419",
+     pages = "142--147",
+ }
+ """
+
+ _HOME_PAGE = "https://github.com/asahi417/tner"
+ _URL = f'https://huggingface.co/datasets/tner/{_NAME}/raw/main/dataset'
+ _URLS = {
+     str(datasets.Split.TEST): [f'{_URL}/test.json'],
+     str(datasets.Split.TRAIN): [f'{_URL}/train.json'],
+     str(datasets.Split.VALIDATION): [f'{_URL}/valid.json'],
+ }
+
+
+ class Conll2003Config(datasets.BuilderConfig):
+     """BuilderConfig"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(Conll2003Config, self).__init__(**kwargs)
+
+
+ class Conll2003(datasets.GeneratorBasedBuilder):
+     """Dataset."""
+
+     BUILDER_CONFIGS = [
+         Conll2003Config(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
+     ]
+
+     def _split_generators(self, dl_manager):
+         downloaded_file = dl_manager.download_and_extract(_URLS)
+         return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
+                 for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]
+
+     def _generate_examples(self, filepaths):
+         _key = 0
+         for filepath in filepaths:
+             logger.info(f"generating examples from = {filepath}")
+             with open(filepath, encoding="utf-8") as f:
+                 _list = [i for i in f.read().split('\n') if len(i) > 0]
+                 for i in _list:
+                     data = json.loads(i)
+                     yield _key, data
+                     _key += 1
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "tags": datasets.Sequence(datasets.Value("int32")),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOME_PAGE,
+             citation=_CITATION,
+         )
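Each split file referenced in `_URLS` is stored as JSON Lines, i.e. one `{"tokens": [...], "tags": [...]}` object per line, which is exactly the format `_generate_examples` parses. A small sketch for inspecting a split without the `datasets` machinery, using the raw-file URL pattern from `_URL` (only the Python standard library is assumed):

```python
import json
from urllib.request import urlopen

# Raw-file URL built the same way as in the loading script above.
url = "https://huggingface.co/datasets/tner/conll2003/raw/main/dataset/valid.json"

with urlopen(url) as response:
    # Take the first non-empty line; each line is one JSON-encoded example.
    first_line = next(line for line in response if line.strip())

example = json.loads(first_line)
print(example["tokens"])
print(example["tags"])
```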
dataset/label.json ADDED
@@ -0,0 +1 @@
+ {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
dataset/test.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/train.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset/valid.json ADDED
The diff for this file is too large to render. See raw diff