parquet-converter committed
Commit 22bb794 · 1 Parent(s): 06bc381

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,81 +0,0 @@
- ---
- language:
- - en
- license:
- - other
- multilinguality:
- - monolingual
- size_categories:
- - 1k<10K
- task_categories:
- - token-classification
- task_ids:
- - named-entity-recognition
- pretty_name: BTC
- ---
-
- # Dataset Card for "tner/btc"
-
- ## Dataset Description
-
- - **Repository:** [T-NER](https://github.com/asahi417/tner)
- - **Paper:** [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
- - **Dataset:** Broad Twitter Corpus
- - **Domain:** Twitter
- - **Number of Entity:** 3
-
-
- ### Dataset Summary
- Broad Twitter Corpus NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
- - Entity Types: `LOC`, `ORG`, `PER`
-
- ## Dataset Structure
-
- ### Data Instances
- An example of `train` looks as follows.
-
- ```
- {
-     'tokens': ['I', 'hate', 'the', 'words', 'chunder', ',', 'vomit', 'and', 'puke', '.', 'BUUH', '.'],
-     'tags': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
- }
- ```
-
- ### Label ID
- The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/btc/raw/main/dataset/label.json).
- ```python
- {
-     "B-LOC": 0,
-     "B-ORG": 1,
-     "B-PER": 2,
-     "I-LOC": 3,
-     "I-ORG": 4,
-     "I-PER": 5,
-     "O": 6
- }
- ```
-
- ### Data Splits
-
- | name    |train|validation|test|
- |---------|----:|---------:|---:|
- |btc      | 6338|      1001|2000|
-
- ### Citation Information
-
- ```
- @inproceedings{derczynski-etal-2016-broad,
-     title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
-     author = "Derczynski, Leon and
-       Bontcheva, Kalina and
-       Roberts, Ian",
-     booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
-     month = dec,
-     year = "2016",
-     address = "Osaka, Japan",
-     publisher = "The COLING 2016 Organizing Committee",
-     url = "https://aclanthology.org/C16-1111",
-     pages = "1169--1179",
-     abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
- }
- ```
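The label2id dictionary from the card above can be exercised directly: inverting it yields an id2label mapping that decodes the integer `tags` of the example instance back into IOB2 label strings. This is a minimal sketch; the mapping and the sample sentence both come from the card, while the variable names are illustrative.

```python
# label2id as published in the dataset card / label.json.
label2id = {"B-LOC": 0, "B-ORG": 1, "B-PER": 2,
            "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}

# Invert it to decode tag IDs back into label strings.
id2label = {v: k for k, v in label2id.items()}

# The train-split example from the card: every token is outside an entity.
tokens = ['I', 'hate', 'the', 'words', 'chunder', ',',
          'vomit', 'and', 'puke', '.', 'BUUH', '.']
tags = [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]

labels = [id2label[t] for t in tags]
print(labels[0])  # every tag ID 6 decodes to "O"
```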
 
btc.py DELETED
@@ -1,84 +0,0 @@
- """ NER dataset compiled by T-NER library https://github.com/asahi417/tner/tree/master/tner """
- import json
- from itertools import chain
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
- _DESCRIPTION = """[BTC](https://aclanthology.org/C16-1111/)"""
- _NAME = "btc"
- _VERSION = "1.0.1"
- _CITATION = """
- @inproceedings{derczynski-etal-2016-broad,
-     title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
-     author = "Derczynski, Leon and
-       Bontcheva, Kalina and
-       Roberts, Ian",
-     booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
-     month = dec,
-     year = "2016",
-     address = "Osaka, Japan",
-     publisher = "The COLING 2016 Organizing Committee",
-     url = "https://aclanthology.org/C16-1111",
-     pages = "1169--1179",
-     abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
- }
- """
-
- _HOME_PAGE = "https://github.com/asahi417/tner"
- _URL = f'https://huggingface.co/datasets/tner/{_NAME}/raw/main/dataset'
- _URLS = {
-     str(datasets.Split.TEST): [f'{_URL}/test.json'],
-     str(datasets.Split.TRAIN): [f'{_URL}/train.json'],
-     str(datasets.Split.VALIDATION): [f'{_URL}/valid.json'],
- }
-
-
- class BTCConfig(datasets.BuilderConfig):
-     """BuilderConfig"""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(BTCConfig, self).__init__(**kwargs)
-
-
- class BTC(datasets.GeneratorBasedBuilder):
-     """Dataset."""
-
-     BUILDER_CONFIGS = [
-         BTCConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
-     ]
-
-     def _split_generators(self, dl_manager):
-         downloaded_file = dl_manager.download_and_extract(_URLS)
-         return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
-                 for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]
-
-     def _generate_examples(self, filepaths):
-         _key = 0
-         for filepath in filepaths:
-             logger.info(f"generating examples from = {filepath}")
-             with open(filepath, encoding="utf-8") as f:
-                 _list = [i for i in f.read().split('\n') if len(i) > 0]
-                 for i in _list:
-                     data = json.loads(i)
-                     yield _key, data
-                     _key += 1
-
-     def _info(self):
-         names = ["B-LOC", "B-ORG", "B-PER", "I-LOC", "I-ORG", "I-PER", "O"]
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "tokens": datasets.Sequence(datasets.Value("string")),
-                     "tags": datasets.Sequence(datasets.features.ClassLabel(names=names))
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOME_PAGE,
-             citation=_CITATION,
-         )
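The deleted script's `_generate_examples` treats each split file as newline-delimited JSON: every non-empty line is one record with `tokens` and `tags` keys. That parsing step can be sketched on its own, with the file read replaced by an inline string; the sample records here are illustrative, not drawn from the actual split files.

```python
import json

# Stand-in for f.read() on a split file: one JSON object per line,
# with blank lines allowed (the script filters them out).
raw = '\n'.join([
    '{"tokens": ["Hello", "London"], "tags": [6, 0]}',
    '',
    '{"tokens": ["Bye"], "tags": [6]}',
])

# Same filter-then-parse logic as _generate_examples.
records = [json.loads(line) for line in raw.split('\n') if len(line) > 0]
print(len(records))  # 2: the blank line is skipped
```

The `datasets` library handles keys and split routing around this core loop; the sketch only covers the line-by-line decoding.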
 
btc/btc-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1666911b8082460f1dba43bf776ab9ac6113ae07aa1c3c30a8dafdd525d2d4d0
+ size 179697
btc/btc-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f98dac6f5a5ce2f024ffe936ec9274201b44f0cc9fb3e3401ed1c2c07169c63
+ size 493817
btc/btc-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4282958e018cde90aff7e869102d0294487b944984c123aeecf9df9a32037c9c
+ size 77721
dataset/label.json DELETED
@@ -1 +0,0 @@
- {"B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}
 
dataset/test.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/train.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/valid.json DELETED
The diff for this file is too large to render. See raw diff