---
license: cc-by-4.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: TuPy-Dataset
language_bcp47:
- pt-BR
tags:
- hate-speech-detection
configs:
- config_name: multilabel
data_files:
- split: train
path: multilabel/multilabel_train.csv
- split: test
path: multilabel/multilabel_test.csv
- config_name: binary
data_files:
- split: train
path: binary/binary_train.csv
- split: test
path: binary/binary_test.csv
---
# Portuguese Hate Speech Expanded Dataset (TuPyE)

TuPyE, an expanded iteration of TuPy, comprises 43,668 annotated documents selected for hate speech detection across diverse social network contexts. It adds new annotations to the datasets of Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022), and combines them with 10,000 original documents from the TuPy-Dataset.

Given the limited availability of annotated data in Portuguese compared with English, TuPyE aims to expand and improve existing datasets, supporting the development of hate speech detection models based on machine learning (ML) and natural language processing (NLP) techniques. This repository is organized as follows:
```
root
├── binary      : binary dataset (train and test splits)
├── multilabel  : multilabel dataset (train and test splits)
└── README.md   : documentation and card metadata
```
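Both configurations can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the `datasets` package is installed and using the hub id from the citation below (`Silly-Machine/TuPy-Dataset`):

```python
# Minimal sketch: loading TuPy-Dataset from the Hugging Face Hub.
# The hub id is taken from the citation URL in this card; network access
# and `pip install datasets` are assumed.
REPO_ID = "Silly-Machine/TuPy-Dataset"
CONFIGS = ("binary", "multilabel")

def load_tupy(config="binary"):
    """Return a DatasetDict with 'train' and 'test' splits for one config."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config {config!r}; expected one of {CONFIGS}")
    from datasets import load_dataset  # lazy import keeps the sketch importable
    return load_dataset(REPO_ID, config)
```

For example, `load_tupy("multilabel")["train"]` would yield rows shaped like the example in the "Data structure" section.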
## Security measures

To safeguard user identity and preserve the integrity of this dataset, all user mentions have been anonymized as "@user", and all references to external websites have been removed.
## Annotation and voting process

For the previously unpublished portion of the TuPyE dataset, we used a simple voting process to generate the binary matrices. Each document was evaluated independently by three annotators. If a document received two or more identical classifications, the assigned value was set to 1; otherwise, it was set to 0. The annotated raw data are available in the project repository. The following table summarizes the annotators' profiles and qualifications:
Table 1 – Annotators
| Annotator | Gender | Education | Political orientation | Color/race |
|---|---|---|---|---|
| Annotator 1 | Female | Ph.D. candidate in civil engineering | Far-left | White |
| Annotator 2 | Male | Master's candidate in human rights | Far-left | Black |
| Annotator 3 | Female | Master's degree in behavioral psychology | Liberal | White |
| Annotator 4 | Male | Master's degree in behavioral psychology | Right-wing | Black |
| Annotator 5 | Female | Ph.D. candidate in behavioral psychology | Liberal | Black |
| Annotator 6 | Male | Ph.D. candidate in linguistics | Far-left | White |
| Annotator 7 | Female | Ph.D. candidate in civil engineering | Liberal | White |
| Annotator 8 | Male | Ph.D. candidate in civil engineering | Liberal | Black |
| Annotator 9 | Male | Master's degree in behavioral psychology | Far-left | White |
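The majority-vote rule described above can be sketched as follows: each document receives three independent 0/1 annotations per category, and the final label is 1 when at least two annotators agree.

```python
# Sketch of the majority-vote rule used for the binary matrices: three
# independent 0/1 annotations per document collapse to a single label.
def majority_vote(annotations):
    """Collapse three binary annotations into one consensus label."""
    if len(annotations) != 3 or any(a not in (0, 1) for a in annotations):
        raise ValueError("expected exactly three 0/1 annotations")
    return 1 if sum(annotations) >= 2 else 0
```

For instance, `majority_vote([1, 1, 0])` yields 1, while `majority_vote([1, 0, 0])` yields 0.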
## Data structure

A data point comprises the tweet text (a string) and thirteen binary fields, each set to 0 when aggressive or hateful content is absent and 1 when it is present. The values represent the annotators' consensus on the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and other content. An example from the multilabel TuPy dataset is shown below:
```json
{
  "text": "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
  "aggressive": 1, "hate": 1, "ageism": 0, "aporophobia": 1, "body shame": 0,
  "capacitism": 0, "lgbtphobia": 0, "political": 1, "racism": 0,
  "religious intolerance": 0, "misogyny": 0, "xenophobia": 0, "other": 0
}
```
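For illustration, the same data point as a Python dict, together with a hypothetical helper (not part of the dataset tooling) that lists the fine-grained hate categories marked present:

```python
# The multilabel example above as a plain dict; `active_categories` is a
# hypothetical helper that lists the fine-grained categories set to 1.
example = {
    "text": ("e tem pobre de direita imbecil que ainda defendia a "
             "manutenção da política de preços atrelada ao dólar link"),
    "aggressive": 1, "hate": 1, "ageism": 0, "aporophobia": 1,
    "body shame": 0, "capacitism": 0, "lgbtphobia": 0, "political": 1,
    "racism": 0, "religious intolerance": 0, "misogyny": 0,
    "xenophobia": 0, "other": 0,
}

def active_categories(point):
    """Return the fine-grained categories marked 1 (excluding the coarse flags)."""
    coarse = {"text", "aggressive", "hate"}
    return [k for k, v in point.items() if k not in coarse and v == 1]

print(active_categories(example))  # → ['aporophobia', 'political']
```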
## Dataset content

Table 2 provides a detailed breakdown of the dataset, showing the volume of data by the occurrence of aggressive speech and the manifestation of hate speech within the documents.
Table 2 - Count of non-aggressive and aggressive documents
| Label | Count |
|---|---|
| Non-aggressive | 31121 |
| Aggressive - Not hate | 3180 |
| Aggressive - Hate | 9367 |
| Total | 43668 |
Table 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.
Table 3 - Hate categories count
| Label | Count |
|---|---|
| Ageism | 57 |
| Aporophobia | 66 |
| Body shame | 285 |
| Capacitism | 99 |
| LGBTphobia | 805 |
| Political | 1149 |
| Racism | 290 |
| Religious intolerance | 108 |
| Misogyny | 1675 |
| Xenophobia | 357 |
| Other | 4476 |
| Total | 9367 |
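The counts in Tables 2 and 3 are internally consistent (the per-category totals sum to the 9,367 hateful documents). A small sketch deriving the overall class balance from them:

```python
# Counts copied from Tables 2 and 3 above.
table2 = {"non_aggressive": 31121, "aggressive_not_hate": 3180,
          "aggressive_hate": 9367}
table3 = {"ageism": 57, "aporophobia": 66, "body_shame": 285,
          "capacitism": 99, "lgbtphobia": 805, "political": 1149,
          "racism": 290, "religious_intolerance": 108,
          "misogyny": 1675, "xenophobia": 357, "other": 4476}

total = sum(table2.values())                 # 43,668 documents in total
assert sum(table3.values()) == table2["aggressive_hate"]  # 9,367

aggressive_share = (table2["aggressive_not_hate"]
                    + table2["aggressive_hate"]) / total
hate_share = table2["aggressive_hate"] / total
print(f"aggressive: {aggressive_share:.1%}, hate: {hate_share:.1%}")
```

The dataset is therefore imbalanced: roughly seven in ten documents are non-aggressive.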
## BibTeX citation
This dataset can be cited as follows:
```bibtex
@misc{silly-machine_2023,
  author    = {{Silly-Machine}},
  title     = {TuPy-Dataset (Revision de6b18c)},
  year      = 2023,
  url       = {https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset},
  doi       = {10.57967/hf/1529},
  publisher = {Hugging Face}
}
```
## Acknowledgements

The TuPy project is the result of Felipe Oliveira's thesis work and the contributions of several collaborators. The project is funded by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE).