shootstuff and parquet-converter committed commit 14f4cb9 · 0 parent(s)

Duplicate from dair-ai/emotion

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,292 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - machine-generated
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-class-classification
+ paperswithcode_id: emotion
+ pretty_name: Emotion
+ tags:
+ - emotion-classification
+ dataset_info:
+ - config_name: split
+   features:
+   - name: text
+     dtype: string
+   - name: label
+     dtype:
+       class_label:
+         names:
+           '0': sadness
+           '1': joy
+           '2': love
+           '3': anger
+           '4': fear
+           '5': surprise
+   splits:
+   - name: train
+     num_bytes: 1741533
+     num_examples: 16000
+   - name: validation
+     num_bytes: 214695
+     num_examples: 2000
+   - name: test
+     num_bytes: 217173
+     num_examples: 2000
+   download_size: 1287193
+   dataset_size: 2173401
+ - config_name: unsplit
+   features:
+   - name: text
+     dtype: string
+   - name: label
+     dtype:
+       class_label:
+         names:
+           '0': sadness
+           '1': joy
+           '2': love
+           '3': anger
+           '4': fear
+           '5': surprise
+   splits:
+   - name: train
+     num_bytes: 45444017
+     num_examples: 416809
+   download_size: 26888538
+   dataset_size: 45444017
+ configs:
+ - config_name: split
+   data_files:
+   - split: train
+     path: split/train-*
+   - split: validation
+     path: split/validation-*
+   - split: test
+     path: split/test-*
+   default: true
+ - config_name: unsplit
+   data_files:
+   - split: train
+     path: unsplit/train-*
+ train-eval-index:
+ - config: default
+   task: text-classification
+   task_id: multi_class_classification
+   splits:
+     train_split: train
+     eval_split: test
+   col_mapping:
+     text: text
+     label: target
+   metrics:
+   - type: accuracy
+     name: Accuracy
+   - type: f1
+     name: F1 macro
+     args:
+       average: macro
+   - type: f1
+     name: F1 micro
+     args:
+       average: micro
+   - type: f1
+     name: F1 weighted
+     args:
+       average: weighted
+   - type: precision
+     name: Precision macro
+     args:
+       average: macro
+   - type: precision
+     name: Precision micro
+     args:
+       average: micro
+   - type: precision
+     name: Precision weighted
+     args:
+       average: weighted
+   - type: recall
+     name: Recall macro
+     args:
+       average: macro
+   - type: recall
+     name: Recall micro
+     args:
+       average: micro
+   - type: recall
+     name: Recall weighted
+     args:
+       average: weighted
+ ---
+
+ # Dataset Card for "emotion"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 16.13 MB
+ - **Size of the generated dataset:** 47.62 MB
+ - **Total amount of disk used:** 63.75 MB
+
+ ### Dataset Summary
+
+ Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example looks as follows.
+ ```
+ {
+     "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
+     "label": 0
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are:
+ - `text`: a `string` feature.
+ - `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
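The integer labels can be mapped back to emotion names in the order declared by the card's `class_label` metadata. A minimal sketch of that mapping (the `LABEL_NAMES` constant and `id2label` helper are hypothetical names for illustration, not part of the dataset):

```python
# Label ids follow the class_label order declared in this card's YAML header.
LABEL_NAMES = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def id2label(label_id: int) -> str:
    """Return the emotion name for a dataset label id."""
    return LABEL_NAMES[label_id]

# The example instance shown above has label 0.
example = {
    "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
    "label": 0,
}
print(id2label(example["label"]))  # sadness
```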
+
+ ### Data Splits
+
+ The dataset has 2 configurations:
+ - split: with a total of 20_000 examples split across train, validation and test
+ - unsplit: with a total of 416_809 examples in a single train split
+
+ | name    |  train | validation | test |
+ |---------|-------:|-----------:|-----:|
+ | split   |  16000 |       2000 | 2000 |
+ | unsplit | 416809 |        n/a |  n/a |
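The split sizes above can be sanity-checked against the totals stated for each configuration. A small sketch (the `CONFIG_SIZES` dict is a hypothetical name; the numbers are copied from the table and metadata in this card):

```python
# Per-configuration split sizes, taken from the table above.
CONFIG_SIZES = {
    "split": {"train": 16000, "validation": 2000, "test": 2000},
    "unsplit": {"train": 416809},
}

# Each configuration's splits should sum to the total the card states.
for name, splits in CONFIG_SIZES.items():
    print(name, sum(splits.values()))  # split 20000, unsplit 416809
```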
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ The dataset should be used for educational and research purposes only.
+
+ ### Citation Information
+
+ If you use this dataset, please cite:
+ ```
+ @inproceedings{saravia-etal-2018-carer,
+     title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
+     author = "Saravia, Elvis  and
+       Liu, Hsien-Chi Toby  and
+       Huang, Yen-Hao  and
+       Wu, Junlin  and
+       Chen, Yi-Shin",
+     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
+     month = oct # "-" # nov,
+     year = "2018",
+     address = "Brussels, Belgium",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/D18-1404",
+     doi = "10.18653/v1/D18-1404",
+     pages = "3687--3697",
+     abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
split/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f8407fa1ca9c310f55781f082ed73812f6551e8dda2c61973123a121869245b
+ size 128987
split/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10817f0f2ea42358bc62f69a09dfb8bd71701727df6d5a387bea742f3ea06417
+ size 1030740
split/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c70f0e660b5ebd1ea9a37d2a851f516f08a6d6477cdfc11be204e22a2f1102fd
+ size 127466
unsplit/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba60fe890562b2770967d63f9d7eb104691e028ca68716cd4e926996ecb31441
+ size 26888538
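The parquet entries above are Git LFS pointer files: three key-value lines (`version`, `oid`, `size`) in place of the actual binary content. A minimal sketch of reading one (the `parse_lfs_pointer` helper is a hypothetical name; the pointer text is copied from the `split/test` entry above):

```python
# A Git LFS pointer file, verbatim from the split/test parquet entry above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6f8407fa1ca9c310f55781f082ed73812f6551e8dda2c61973123a121869245b
size 128987
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is the byte count of the real file
    return fields

print(parse_lfs_pointer(POINTER)["size"])  # 128987
```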