Janosch Hoefer committed on
Commit f846452 · 1 Parent(s): ea95b31

added setup.py

Files changed (2):
  1. README.md +167 -0
  2. tweetyface_debug.py +132 -0
README.md ADDED
@@ -0,0 +1,167 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language:
+ - en
+ - de
+ language_creators:
+ - crowdsourced
+ license:
+ - apache-2.0
+ multilinguality:
+ - multilingual
+ pretty_name: tweetyface_debug
+ size_categories:
+ - 10K<n<100K
+ source_datasets: []
+ tags: []
+ task_categories:
+ - text-generation
+ task_ids: []
+ ---
+
+ # DEBUG Dataset Card for "tweetyface"
+
+ ## Table of Contents
+
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** [GitHub](https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers)
+
+ ### Dataset Summary
+
+ DEBUG
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English, German
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### english
+
+ - **Size of downloaded dataset files:** 4.77 MB
+ - **Size of the generated dataset:** 5.92 MB
+ - **Total amount of disk used:** 4.77 MB
+
+ #### german
+
+ - **Size of downloaded dataset files:** 2.58 MB
+ - **Size of the generated dataset:** 3.10 MB
+ - **Total amount of disk used:** 2.59 MB
+
+ An example from the 'validation' split looks as follows:
+
+ ```
+ {
+   "text": "@SpaceX @Space_Station About twice as much useful mass to orbit as rest of Earth combined",
+   "label": "elonmusk",
+   "idx": "1001283"
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same among all splits and languages.
+
+ - `text`: a `string` feature.
+ - `label`: a classification label.
+ - `idx`: a `string` feature.
+ - `ref_tweet`: a `bool` feature.
+ - `reply_tweet`: a `bool` feature.
+
+ ### Data Splits
+
+ | name    | train | validation |
+ | ------- | ----: | ---------: |
+ | english | 27857 |       6965 |
+ | german  | 10254 |       2564 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
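The field schema documented in the card above can be sanity-checked with a minimal, self-contained sketch. This is pure Python with no dependency on the `datasets` library; the `sample` record and the `validate` helper are hypothetical illustrations, not part of the dataset code:

```python
# Hypothetical sample record in the shape documented under "Data Fields".
sample = {
    "text": "@SpaceX @Space_Station About twice as much useful mass to orbit as rest of Earth combined",
    "label": "elonmusk",  # classification label (a Twitter handle)
    "idx": "1001283",     # string identifier
    "ref_tweet": False,   # bool flag
    "reply_tweet": False, # bool flag
}

# Documented field types: text/label/idx are strings, the *_tweet flags are bools.
EXPECTED_TYPES = {"text": str, "label": str, "idx": str, "ref_tweet": bool, "reply_tweet": bool}


def validate(record):
    """Return True if a record has exactly the documented fields with the documented types."""
    return set(record) == set(EXPECTED_TYPES) and all(
        isinstance(record[key], typ) for key, typ in EXPECTED_TYPES.items()
    )


print(validate(sample))  # True
```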
tweetyface_debug.py ADDED
@@ -0,0 +1,132 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace NLP Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """tweetyface dataset."""
+
+
+ import json
+
+ import datasets
+
+ _DESCRIPTION = """\
+ DEBUG DATASET
+ """
+
+ _HOMEPAGE = "https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers"
+
+ URL = "https://raw.githubusercontent.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers/qa/develop/"
+
+ _URLs = {
+     "english": {
+         "train": URL + "tweetyface_en/train.json",
+         "validation": URL + "tweetyface_en/validation.json",
+     },
+     "german": {
+         "train": URL + "tweetyface_de/train.json",
+         "validation": URL + "tweetyface_de/validation.json",
+     },
+ }
+
+ _VERSION = "0.3.0"
+
+ _LICENSE = """
+ Apache License Version 2.0
+ """
+
+
+ class TweetyFaceConfig(datasets.BuilderConfig):
+     """BuilderConfig for TweetyFace."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for TweetyFace.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(TweetyFaceConfig, self).__init__(**kwargs)
+
+
+ class TweetyFace(datasets.GeneratorBasedBuilder):
+     """tweetyface"""
+
+     BUILDER_CONFIGS = [
+         TweetyFaceConfig(
+             name=lang,
+             description=f"{lang.capitalize()} Twitter Users",
+             version=datasets.Version(_VERSION),
+         )
+         for lang in _URLs.keys()
+     ]
+
+     def _info(self):
+         if self.config.name == "english":
+             names = [
+                 "MKBHD",
+                 "elonmusk",
+                 "alyankovic",
+                 "Cristiano",
+                 "katyperry",
+                 "neiltyson",
+                 "BillGates",
+                 "BillNye",
+                 "GretaThunberg",
+                 "BarackObama",
+                 "Trevornoah",
+             ]
+         else:
+             names = [
+                 "OlafScholz",
+                 "Karl_Lauterbach",
+                 "janboehm",
+                 "Markus_Soeder",
+             ]
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION + self.config.description,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=names),
+                     "idx": datasets.Value("string"),
+                     "ref_tweet": datasets.Value("bool"),
+                     "reply_tweet": datasets.Value("bool"),
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": data_dir["train"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": data_dir["validation"]},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """This function returns the examples in the raw (text) form by iterating on all the files."""
+         with open(filepath, encoding="utf-8") as f:
+             for row in f:
+                 data = json.loads(row)
+                 idx = data["idx"]
+                 yield idx, data
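The `_generate_examples` logic in the script above reads one JSON object per line and yields `(idx, record)` pairs. That parsing step can be exercised standalone on hypothetical in-memory data (the two sample records below are illustrations, not real dataset rows):

```python
import io
import json


def generate_examples(fileobj):
    """Yield (idx, record) pairs from a JSON-lines stream, mirroring _generate_examples."""
    for row in fileobj:
        data = json.loads(row)
        yield data["idx"], data


# Hypothetical two-line JSON-lines payload in the shape the loader expects.
raw = (
    '{"idx": "1", "text": "hello", "label": "elonmusk", "ref_tweet": false, "reply_tweet": false}\n'
    '{"idx": "2", "text": "hallo", "label": "OlafScholz", "ref_tweet": true, "reply_tweet": false}\n'
)

examples = list(generate_examples(io.StringIO(raw)))
print([idx for idx, _ in examples])  # ['1', '2']
```

Keying each example by `idx` is what the `datasets` builder API expects: the first element of each yielded pair must be a unique example id within the split.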