Commit fe268ad by jiayucunyan and victoriadreis (0 parents): Duplicate from Silly-Machine/TuPyE-Dataset

Co-authored-by: Victoria Reis <victoriadreis@users.noreply.huggingface.co>

.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
*.ipynb
README.md ADDED
---
license: cc-by-4.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- crowdsourced
task_categories:
- text-classification
task_ids: []
pretty_name: TuPy-Dataset
language_bcp47:
- pt-BR
tags:
- hate-speech-detection
configs:
- config_name: multilabel
  data_files:
  - split: train
    path: multilabel/multilabel_train.csv
  - split: test
    path: multilabel/multilabel_test.csv
- config_name: binary
  data_files:
  - split: train
    path: binary/binary_train.csv
  - split: test
    path: binary/binary_test.csv
---

# Portuguese Hate Speech Expanded Dataset (TuPyE)

TuPyE, an expanded iteration of TuPy, comprises 43,668 annotated documents selected for hate speech detection across diverse social network contexts. It adds new annotations and merges the datasets of [Fortuna et al. (2019)](https://aclanthology.org/W19-3510/), [Leite et al. (2020)](https://arxiv.org/abs/2010.04543), and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972), complemented by 10,000 original documents from the [TuPy-Dataset](https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset).

Given the limited availability of annotated data in Portuguese relative to English, TuPyE is committed to expanding and enhancing existing datasets, facilitating the development of hate speech detection models through machine learning (ML) and natural language processing (NLP) techniques.

This repository is organized as follows:

```sh
root.
├── binary      : binary dataset (including training and testing split)
├── multilabel  : multilabel dataset (including training and testing split)
└── README.md   : documentation and card metadata
```
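Given the `configs` declared in the card metadata, both subsets should be loadable with the 🤗 `datasets` library; the repo id below is assumed from the "Duplicate from" note at the top of this commit:

```python
from datasets import load_dataset  # pip install datasets

# Repo id assumed from the duplication note; adjust if this copy lives elsewhere.
binary = load_dataset("Silly-Machine/TuPyE-Dataset", "binary")
multilabel = load_dataset("Silly-Machine/TuPyE-Dataset", "multilabel")

# Each config exposes "train" and "test" splits, per the YAML front matter.
print(binary)
print(multilabel["train"][0])
```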
We highly recommend reading the associated research paper [TuPy-E: detecting hate speech in Brazilian Portuguese social media with a novel dataset and comprehensive analysis of models](https://arxiv.org/abs/2312.17704) for comprehensive insight into the advancements integrated into this extended dataset.

## Security measures

To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user," and any references to external websites have been omitted.

## Annotation and voting process

To advance automatic hate speech detection in Portuguese, our team built a comprehensive database by integrating the labeled document sets of seminal studies in the domain: Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022). To ensure consistency and compatibility within our dataset, we adhered to the following guidelines for text integration:

1. **Fortuna et al. (2019)**: This study presented a dataset of 5,670 tweets, each annotated by three independent evaluators to ascertain the presence or absence of hate speech. In our integration process, we adopted a simple majority-voting mechanism to classify each document, ensuring a consistent approach to hate speech identification across the dataset.

2. **Leite et al. (2020)**: The dataset from this research encompassed 21,000 tweets annotated by 129 volunteers, with each tweet reviewed by three different assessors. The study identified six categories of toxic speech: (i) homophobia, (ii) racism, (iii) xenophobia, (iv) offensive language, (v) obscene language, and (vi) misogyny. In line with our operational definition of hate speech, we excluded texts that fell solely under the categories of offensive and/or obscene language. Consistent with our methodology, a simple majority-voting process was used to classify these texts.

3. **Vargas et al. (2022)**: This research compiled 7,000 comments sourced from Instagram, each labeled by a trio of annotators. These data had already been classified by simple majority voting, obviating the need for additional classification protocols.

Through these integration guidelines, we established a robust, unified database that serves as a valuable resource for developing and refining automatic hate speech detection systems in Portuguese.

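The simple majority-voting rule used above can be sketched as follows (a generic illustration for an odd number of annotators, not the authors' code):

```python
from collections import Counter

def majority_vote(labels):
    """Return the label assigned by the majority of annotators."""
    return Counter(labels).most_common(1)[0][0]

# Three annotators, as in the integrated datasets:
# two mark the document as hate speech (1), one does not (0).
print(majority_vote([1, 0, 1]))  # 1
```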
## Data structure

A data point comprises the tweet text (a string) along with thirteen binary labels. Each label takes the value 0 when aggressive or hateful content is absent and 1 when it is present. These values represent the annotators' consensus on the following categories: aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and other. An illustrative entry from the multilabel TuPyE dataset is shown below:

```python
{
    "source": "twitter",
    "text": "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
    "researcher": "leite et al",
    "year": 2020,
    "aggressive": 1, "hate": 1, "ageism": 0, "aporophobia": 1, "body shame": 0, "capacitism": 0,
    "lgbtphobia": 0, "political": 1, "racism": 0, "religious intolerance": 0, "misogyny": 0,
    "xenophobia": 0, "other": 0
}
```
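For illustration, the fine-grained categories flagged in a record like the one above can be extracted with a small helper (a hypothetical snippet, assuming the column names shown; it is not part of the dataset tooling):

```python
# Fine-grained hate categories as named in the multilabel schema above.
CATEGORIES = [
    "ageism", "aporophobia", "body shame", "capacitism", "lgbtphobia", "political",
    "racism", "religious intolerance", "misogyny", "xenophobia", "other",
]

def active_categories(row: dict) -> list:
    """Return the hate categories flagged with 1 in a multilabel row."""
    return [c for c in CATEGORIES if row.get(c, 0) == 1]

row = {"aggressive": 1, "hate": 1, "aporophobia": 1, "political": 1, "racism": 0}
print(active_categories(row))  # ['aporophobia', 'political']
```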

# Dataset content

Table 1 shows the number of documents annotated in TuPyE, categorized by source.

#### Table 1 - TuPyE composition

| Label          | Count  | Source    |
|----------------|--------|-----------|
| Leite et al.   | 21,000 | Twitter   |
| TuPy           | 10,000 | Twitter   |
| Vargas et al.  | 7,000  | Instagram |
| Fortuna et al. | 5,668  | Twitter   |

Table 2 breaks the dataset down by the occurrence of aggressive speech and of hate speech within the documents.

#### Table 2 - Count of non-aggressive and aggressive documents

| Label                 | Count  |
|-----------------------|--------|
| Non-aggressive        | 31,121 |
| Aggressive - Not hate | 3,180  |
| Aggressive - Hate     | 9,367  |
| Total                 | 43,668 |

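The three classes of Table 2 can be recomputed from the multilabel fields; a minimal sketch, assuming 0/1 `aggressive` and `hate` columns as in the example record earlier:

```python
from collections import Counter

def tally(rows):
    """Bucket multilabel rows into the three classes used in Table 2."""
    def bucket(row):
        if row["aggressive"] == 0:
            return "Non-aggressive"
        return "Aggressive - Hate" if row["hate"] == 1 else "Aggressive - Not hate"
    return Counter(bucket(r) for r in rows)

sample = [
    {"aggressive": 0, "hate": 0},
    {"aggressive": 1, "hate": 0},
    {"aggressive": 1, "hate": 1},
    {"aggressive": 1, "hate": 1},
]
print(tally(sample))  # Aggressive - Hate: 2, Non-aggressive: 1, Aggressive - Not hate: 1
```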
Table 3 breaks down the data volume by distinct hate speech category.

#### Table 3 - Hate categories count

| Label                 | Count |
|-----------------------|-------|
| Ageism                | 57    |
| Aporophobia           | 66    |
| Body shame            | 285   |
| Capacitism            | 99    |
| LGBTphobia            | 805   |
| Political             | 1,149 |
| Racism                | 290   |
| Religious intolerance | 108   |
| Misogyny              | 1,675 |
| Xenophobia            | 357   |
| Other                 | 4,476 |
| Total                 | 9,367 |

# Acknowledgments

The TuPy-E project is the result of Felipe Oliveira's thesis work and the contributions of several collaborators. The project is financed by the Federal University of Rio de Janeiro ([UFRJ](https://ufrj.br/)) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering ([COPPE](https://coppe.ufrj.br/)).

# References

[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, "A Hierarchically-Labeled Portuguese Hate Speech Dataset," 2019. [Online]. Available: https://github.com/t-davidson/hate-s

[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, "Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis," Oct. 2020. [Online]. Available: http://arxiv.org/abs/2010.04543

[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, "HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection," 2022. [Online]. Available: https://aclanthology.org/2022.lrec-1.777/
binary/binary_test.csv ADDED
binary/binary_train.csv ADDED
binary/tupye_binary_vote.csv ADDED
multilabel/multilabel_test.csv ADDED
multilabel/multilabel_train.csv ADDED
multilabel/tupye_dummy_vote.csv ADDED

The diffs for these files are too large to render.