{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:35.079967Z"
},
"title": "Profiling Italian Misogynist: An Empirical Study",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Milano-Bicocca",
"location": {}
},
"email": "elisabetta.fersini@unimib.it"
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Milano-Bicocca",
"location": {}
},
"email": "debora.nozza@unibocconi.it"
},
{
"first": "Giulia",
"middle": [],
"last": "Boifava",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Milano-Bicocca",
"location": {}
},
"email": "g.boifava1@campus.unimib.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Hate speech may take different forms in online social environments. In this paper, we address the problem of automatic detection of misogynous language on Italian tweets by focusing both on raw text and stylometric profiles. The proposed exploratory investigation about the adoption of stylometry for enhancing the recognition capabilities of machine learning models has demonstrated that profiling users can lead to good discrimination of misogynous and not misogynous contents.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Hate speech may take different forms in online social environments. In this paper, we address the problem of automatic detection of misogynous language on Italian tweets by focusing both on raw text and stylometric profiles. The proposed exploratory investigation about the adoption of stylometry for enhancing the recognition capabilities of machine learning models has demonstrated that profiling users can lead to good discrimination of misogynous and not misogynous contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The problem of identifying misogynist language in online social contexts has recently attracted significant attention. Social networks need to update their policy to address this issue and due to the high volume of texts shared daily, the automatic detection of misogynist and sexist text content is required. However, the problem of automatic misogyny identification from a linguistic point of view is still in its early stage. In particular, trivial statistics about the usage of misogynous language in Twitter have been provided in (Hewitt et al., 2016) , while in (Anzovino et al., 2018) a first tentative of defining linguistic features and machine learning models for automatically recognizing this phenomenon has been presented. Given this relevant social problem, several shared tasks have been recently proposed for different languages (i.e. Italian, Spanish and English) to discriminate misogynous and not misogynous contents, demonstrating the interest of the Natural Language Processing community on investigating the linguistic and communication behaviour of this phenomenon. The Automatic Misogyny Identification (AMI) challenge (Fersini et al., 2018a; Fersini et al., 2018b) has been proposed at Ibereval 2018 1 for Spanish and English, and in Evalita 2018 (Caselli et al., 2018) for Italian and English. The main goal of AMI is to distinguish misogynous contents from non-misogynous ones, to categorize misogynistic behaviors and finally to classify the target of a tweet. Afterwards, (Basile et al., 2019) proposed HatEval, the shared task at SemEval 2019 on multilingual detection of hate speech against immigrants and women in Twitter for Spanish and English. The aim of HatEval is to detect the presence of hate speech against immigrants and women, and to identify further features in hateful contents such as the aggressive attitude and the target harassed, to distinguish if the incitement is against an individual rather than a group. 
These challenges offered a unique opportunity to address, for the first time, the problem of hate speech against women in online social networks.",
"cite_spans": [
{
"start": 535,
"end": 556,
"text": "(Hewitt et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 568,
"end": 591,
"text": "(Anzovino et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 1143,
"end": 1166,
"text": "(Fersini et al., 2018a;",
"ref_id": "BIBREF10"
},
{
"start": 1167,
"end": 1189,
"text": "Fersini et al., 2018b)",
"ref_id": "BIBREF11"
},
{
"start": 1272,
"end": 1294,
"text": "(Caselli et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 1501,
"end": 1522,
"text": "(Basile et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "During the above mentioned challenges, several systems have been presented to obtain the best performing solution in terms of recognition performance. Most of the participants to the AMI challenge considered a single type 1 https://sites.google.com/view/ibereval-2018 of text representation, i.e. traditional TF-IDF representation, while (Bakarov, 2018) and (Buscaldi, 2018) considered only weighted n-grams at character level for better dealing with misspellings and capturing few stylistic aspects. Additionally to the traditional textual feature representation techniques, i.e. bag of words/characters, n-grams of words/characters eventually weighted with TF-IDF, several approaches used specific lexical features for improving the input space and consequently the classification performances. In (Basile and Rubagotti, 2018) the authors experimented feature abstraction following the bleaching approach proposed by Goot et al. (Goot et al., 2018) for modelling gender through the language. Finally, specific lexicons for dealing with hate speech language have been included as features in several approaches (Frenda et al., 2018) , (Ahluwalia et al., 2018) and (Pamungkas et al., 2018) . Few participants to the AMI challenge, (Fortuna et al., 2018) and (Saha et al., 2018) considered the popular Embeddings techniques both at word and sentence level. More recently, (Nozza et al., 2019) investigated the use of a novel Deep Learning Representation model, the Universal Sentence Encoder introduced in (Cer et al., 2018) built using a transformer architecture (Vaswani et al., 2017) for tweet representation. The use of this more sophisticated model for textual representation coupled with a simple single-layer neural network architecture allowed the authors to outperform the first-ranked approach (Saha et al., 2018) at Evalita 2018. 
Later, in the HatEval challenge, more than half of the participants exploited word embeddings or deep learning models (Sabour et al., 2017; Cer et al., 2018) for textual representation. Concerning the machine learning models, the majority of the investigations available in the state of the art are based on traditional Support Vector Machines and deep learning methods, mainly Recurrent Neural Networks. Several works have adopted, or even enlarged, lexical resources for misogyny detection purposes. The lexicons for addressing misogyny detection in the Italian language have mostly been obtained from lists available online, i.e. \"Le parole per ferire\" provided by Tullio De Mauro 2 , and the HurtLex multilingual lexicon (Bassignana et al., 2018).",
"cite_spans": [
{
"start": 338,
"end": 353,
"text": "(Bakarov, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 358,
"end": 374,
"text": "(Buscaldi, 2018)",
"ref_id": "BIBREF7"
},
{
"start": 800,
"end": 828,
"text": "(Basile and Rubagotti, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 919,
"end": 950,
"text": "Goot et al. (Goot et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 1112,
"end": 1133,
"text": "(Frenda et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 1136,
"end": 1160,
"text": "(Ahluwalia et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1165,
"end": 1189,
"text": "(Pamungkas et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 1231,
"end": 1253,
"text": "(Fortuna et al., 2018)",
"ref_id": null
},
{
"start": 1258,
"end": 1277,
"text": "(Saha et al., 2018)",
"ref_id": null
},
{
"start": 1371,
"end": 1391,
"text": "(Nozza et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 1505,
"end": 1523,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1563,
"end": 1585,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 1803,
"end": 1822,
"text": "(Saha et al., 2018)",
"ref_id": null
},
{
"start": 1957,
"end": 1978,
"text": "(Sabour et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 1979,
"end": 1996,
"text": "Cer et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2."
},
{
"text": "Although the above mentioned approaches represent a fundamental step towards the definition of mechanisms able to distinguish between misogynous and not misogynous contents, it is still pending the verification of the hypothesis that the writing style of authors could be a strong indication of misogynous profiles that therefore are likely inclined to produce misogynous contents. To this purpose, in this paper, we propose to investigate the ability of some stylometric features to characterize misogynous and not misogynous profiles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2."
},
{
"text": "The traditional feature vector representing a message m (used to train a given classifier) usually includes only terms that belong to a common vocabulary V of terms derived from a message collection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = (w 1 , w 2 , ..., w |V | , l)",
"eq_num": "(1)"
}
],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "where w t denotes the weight of term t belonging to m with label l. However, some stylometric signals can be used to enhance the traditional feature vector and therefore learning models to distinguish between misogynous and not misogynous contents. The expanded feature vector of a message is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m s = (w 1 , w 2 , ..., w |V | , s 1 , s 2 , . . . , s n , l)",
"eq_num": "(2)"
}
],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "where s 1 , s 2 , . . . , s n represent the n additional stylometric features. The stylometric features investigate in this paper can be broadly distinguished as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 Pragmatic particles: to better capture non-literal signals that could convey misogynous expressions, several valuable pragmatic forms could be taken into account. Pragmatic particles, such as emoticons, mentions and hashtags expressions, represent those linguistic elements typically used on social ratio to elicit, remark and make direct a given message.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 Punctuation: as stated in (Watanabe et al., 2018) , how an internet user uses exclamation, interjections, and other punctuation marks is not necessarily an explicit cue indicating misogyny, they can be used to implicitly elicit a misogynous message (e.g. \"Women rights? come on...go back to the kitchen!!!\").",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Watanabe et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 Part-Of-Speech (POS) lexical components: the way of using some specific part of speech could be a relevant indicator of misogyny. For this reason, a POS tagger could be applied in order to assign lexical functions and derive some stylometric features related to them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "The above mentioned stylometric categories have led us to investigate the following features as candidates to capture misogynous profile and therefore to be included as additional features s i reported in Eq. (2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 average number of sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 average number of words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the number of unique words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of complex words (more than 5 characters)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 average of the number of characters in a word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the number of verbs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the number of auxiliary verbs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the number of adjectives \u2022 frequency of the third singular person pronouns related to female",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the third plural person pronouns related to male",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the third plural person pronouns related to female",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the # symbol",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "\u2022 frequency of the @ symbol",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3."
},
{
"text": "To validate the hypothesis that a stylistic profile can help to detect misogynous contents from the not misogynous ones, we trained several machine learning models both on the traditional feature vector (Eq. 1) and on the expanded feature vector (Eq. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 frequency of proper nouns",
"sec_num": null
},
{
"text": "In order to validate our hypothesis that a stylistic profile of Italian misogynist can improve the generalization capabilities of machine learning models trained for misogyny detection purposes, we adopted the Italian benchmark dataset provided for the AMI@Evalita Challenge. The dataset has been collected by following the subsequent policies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1."
},
{
"text": "\u2022 Streaming download using a set of representative keywords, e.g. pu****a, tr**a, f**a di legno",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1."
},
{
"text": "\u2022 Monitoring of potential victims' accounts, e.g. gamergate victims and public feminist women",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1."
},
{
"text": "\u2022 Downloading the history of identified misogynist, i.e. explicitly declared hate against women on their Twitter profiles",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1."
},
{
"text": "The annotated Italian corpus is finally composed of 5000 tweets, almost balanced between misogynous and not misogynous labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1."
},
{
"text": "Concerning the machine learning models trained to distinguish between misogynous and not misogynous tweets, Na\u00efve Bayes (NB), Support Vector Machines (SVM) and Multi-Layer Perceptron (MLP) have been adopted 3 . Regarding the traditional feature vector, the text of each tweet has been stemmed and its TF-IDF representation has been obtained by exploiting the sklearn library (Pedregosa et al., 2011) . For the stylometric features, we employed the Italian models of the spaCy library to obtain the partof-speech tags to collect nouns, adjectives, adverbs. We also created a manual list of prepositions and articles. The list of offensive words has been extracted from an online resource 4 . Concerning the experimental evaluation, a 10-folds cross validation has been performed. To compare the two feature spaces, traditional textual feature vector and the ones with additional stylometric features, P recision, Recall and F1-measure have been estimated focusing on both labels (i.e. 0=notMisogynous, 1=misogynous).",
"cite_spans": [
{
"start": 375,
"end": 399,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Performance Measures",
"sec_num": "4.2."
},
{
"text": "We report in Table 1 the experimental results obtained by training all the considered machine learning models on the two feature space, i.e. the first based on Tf-IDF only and the second one based on TF-IDF and stylometric features. We can easily note that the stylometric features provide a strong contribution for discriminating between misogynous and not misogynous messages. It is interesting to note that the stylometric features are not only able to improve the performance with respect to the traditional features, but they lead to have good performance for both classes guarantying a good compromise of Precision and Recall for misogynous and not misogynous instances. In this way, we are able to provide a feature representation and a machine learning approach that is able to recognize \"the easy class\" related to not misogynous contents and \"the difficult class\" related to the misogynous text. In order to better understand the role of stylometric cues, we performed an error analysis on those messages that were wrongly classified by the best performing model, i.e. Support Vector Machines. First of all, the proposed analysis involving stylometry has led to 20% of classification error, where 43.85% of misclassified instances are not misogynous tweets that are classified as misogynous and 56.15% of misclassified instances are misogynous tweets that are classified as not misogynous. For those instances for which the actual label was not misogynous but the classifier predicted them as misogynous, we can highlight the main types of errors:",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3."
},
{
"text": "\u2022 Unsolved Mentions: the model, do not solving the user mentions, is biased by adjectives. In particular, when referring to a target by using a mention (denote by the @ symbol), the stylometric features are not able to capture the gender-related to a given noun and therefore is biased by the bad words typically related to women. An example of this type of errors are represented by the following sentence: @laltrodiego Mer*a schifosa lurida that can be translated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3."
},
{
"text": "@laltrodiego Bad Sh*tty Sh*t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3."
},
{
"text": "The target of the tweet is an account of a male user, but the model do not have the chance to solve the uncertainty related to the mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3."
},
{
"text": "\u2022 Wrong Target: in this case, the model is again biased by adjectives typically denoting bad words because it is not able to recognize female proper nouns. In particular, when mentioning a given entity (i.e. football In this case, the implicit target is an event and the model, observing offensive words such as putt*na/bit*h wrongly predict the message as misogynous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3."
},
{
"text": "An analogous behaviour has been observed when the actual labels of tweets are misogynous but the classifier predicted them as not misogynous. In particular, the errors are mainly related to one main lack of information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "\u2022 Absence of Syntactic Features: the model, which does not consider the syntactical structure of the sentence, is not able to determine the target of an offensive adjective. An example of these types of errors are represented by the following sentence: The target of the offensive language is clearly a woman, but the model since it does not consider the structure of the sentence it is biased by those adjectives related to men.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "Se",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "The error analysis has highlighted on one side the necessity of properly dealing with the target of the message, and on the other hand, it has pointed out the needs to more additional stylometric features to obtain a better understanding on the structuring of sentences of both misogynous and not misogynous contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "In this paper, a preliminary empirical investigation about the profiling of Italian misogynous contents has been performed. A set of stylometric features have been studied for validating the hypothesis that cues about the writing style of authors can contribute to better distinguish misogynous contents from the not misogynous ones. The experimental evaluation has corroborated the hypothesis that the use of stylometric features improves the recognition capabilities of several machine learning models for misogyny detection purposes. Concerning future work, several additional syntactic features will be considered for a better understanding of the structure of the sentences. Additionally, the capabilities of the investigated features will be evaluated focusing on additional languages, i.e. Spanish and English, also investigating which set of features contributes most on the results of the classifiers. As final future work, a different paradigm for profiling misogynist will be investigated. In particular, a benchmark profile of misogynistic and not misogynistic language will be created to then enable a learning-by-difference approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4.4."
},
{
"text": "https://www.internazionale.it/opinione/tullio-demauro/2016/09/27/razzismo-parole-ferire",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The experiments have been conducted using default parameters of models implemented in sklearn: https://scikit-learn.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://bit.ly/2HK3fYE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detecting Hate Speech Against Women in English Tweets",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ahluwalia",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Callow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nascimento",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Cock",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahluwalia, R., Soni, H., Callow, E., Nascimento, A., and Cock, M. D. (2018). Detecting Hate Speech Against Women in English Tweets. In Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic Identification and Classification of Misogynistic Language on Twitter",
"authors": [
{
"first": "M",
"middle": [],
"last": "Anzovino",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Applications of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anzovino, M., Fersini, E., and Rosso, P. (2018). Auto- matic Identification and Classification of Misogynistic Language on Twitter. In International Conference on Applications of Natural Language to Information Sys- tems, pages 57-64. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Vector Space Models for Automatic Misogyny Identification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bakarov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bakarov, A. (2018). Vector Space Models for Auto- matic Misogyny Identification. In Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic Identification of Misogyny in English and Italian Tweets at EVALITA 2018 with a Multilingual Hate Lexicon",
"authors": [
{
"first": "A",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rubagotti",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basile, A. and Rubagotti, C. (2018). Automatic Identi- fication of Misogyny in English and Italian Tweets at EVALITA 2018 with a Multilingual Hate Lexicon. In Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Fi- nal Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multilingual detection of hate speech against immigrants and women in twitter",
"authors": [],
"year": null,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval-2019). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In Pro- ceedings of the 13th International Workshop on Seman- tic Evaluation (SemEval-2019). Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hurtlex: A multilingual lexicon of words to hurt",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bassignana",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Patti",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "5th Italian Conference on Computational Linguistics",
"volume": "2253",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bassignana, E., Basile, V., and Patti, V. (2018). Hurtlex: A multilingual lexicon of words to hurt. In 5th Italian Conference on Computational Linguistics, CLiC-it 2018, volume 2253, pages 1-6. CEUR-WS.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Tweetaneuse AMI EVALITA2018: Character-based Models for the Automatic Misogyny Identification Task",
"authors": [
{
"first": "D",
"middle": [],
"last": "Buscaldi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buscaldi, D. (2018). Tweetaneuse AMI EVALITA2018: Character-based Models for the Automatic Misogyny Identification Task. In Proceedings of Sixth Evalua- tion Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "EVALITA 2018: Overview of the 6th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian",
"authors": [
{
"first": "T",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Novielli",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caselli, T., Novielli, N., Patti, V., and Rosso, P. (2018). EVALITA 2018: Overview of the 6th Evaluation Cam- paign of Natural Language Processing and Speech Tools for Italian. In Tommaso Caselli, et al., editors, Proceed- ings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Universal sentence encoder for English",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "S.-Y",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP 2018)",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cer, D., Yang, Y., Kong, S.-y., Hua, N., Limtiaco, N., St. John, R., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., Strope, B., and Kurzweil, R. (2018). Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP 2018), pages 169-174. Association for Computational Linguistics, November.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the Evalita 2018 Task on Automatic Misogyny Identification (AMI)",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 6th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA'18)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fersini, E., Nozza, D., and Rosso, P. (2018a). Overview of the Evalita 2018 Task on Automatic Misogyny Iden- tification (AMI). In Tommaso Caselli, et al., editors, Proceedings of the 6th evaluation campaign of Natu- ral Language Processing and Speech tools for Italian (EVALITA'18), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the Task on Automatic Misogyny Identification at IberEval",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Anzovino",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fersini, E., Rosso, P., and Anzovino, M. (2018b). Overview of the Task on Automatic Misogyny Identifica- tion at IberEval 2018. In Proceedings of the Third Work- shop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018). CEUR-WS.org.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic Lexicons Expansion for Multilingual Misogyny Detection",
"authors": [
{
"first": "S",
"middle": [],
"last": "Frenda",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ghanem",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Guzm\u00e1n-Falc\u00f3n",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Montes-Y-G\u00f3mez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Villase\u00f1or-Pineda",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frenda, S., Ghanem, B., Guzm\u00e1n-Falc\u00f3n, E., Montes-y- G\u00f3mez, M., and Villase\u00f1or-Pineda, L. (2018). Auto- matic Lexicons Expansion for Multilingual Misogyny Detection. In Proceedings of Sixth Evaluation Cam- paign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleaching Text: Abstract Features for Crosslingual Gender Prediction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Goot",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Matroos",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "383--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goot, R., Ljube\u0161i\u0107, N., Matroos, I., Nissim, M., and Plank, B. (2018). Bleaching Text: Abstract Features for Cross- lingual Gender Prediction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 383-389.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Problem of identifying Misogynist Language on Twitter (and other online social spaces)",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Tiropanis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bokhove",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 8th ACM Conference on Web Science",
"volume": "",
"issue": "",
"pages": "333--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hewitt, S., Tiropanis, T., and Bokhove, C. (2016). The Problem of identifying Misogynist Language on Twitter (and other online social spaces). In Proceedings of the 8th ACM Conference on Web Science, pages 333-335. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unintended bias in misogyny detection",
"authors": [
{
"first": "D",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Volpetti",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/WIC/ACM International Conference on Web Intelligence",
"volume": "",
"issue": "",
"pages": "149--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nozza, D., Volpetti, C., and Fersini, E. (2019). Unintended bias in misogyny detection. In IEEE/WIC/ACM Interna- tional Conference on Web Intelligence, pages 149-155.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic Identification of Misogyny in English and Italian Tweets at EVALITA 2018 with a Multilingual Hate Lexicon",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Pamungkas",
"suffix": ""
},
{
"first": "A",
"middle": [
"T"
],
"last": "Cignarella",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pamungkas, E. W., Cignarella, A. T., Basile, V., and Patti, V. (2018). Automatic Identification of Misogyny in English and Italian Tweets at EVALITA 2018 with a Multilingual Hate Lexicon. In Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cour- napeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Jour- nal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic routing between capsules",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sabour",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Frosst",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3856--3866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabour, S., Frosst, N., and Hinton, G. E. (2017). Dynamic routing between capsules. In Advances in neural infor- mation processing systems, pages 3856-3866.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is All you Need",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems (NIPS 2017)",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is All you Need. In Advances in Neural In- formation Processing Systems 30: Annual Conference on Neural Information Processing Systems (NIPS 2017), pages 6000-6010.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hate speech on twitter: A pragmatic approach to collect hateful and offensive expressions and perform hate speech detection",
"authors": [
{
"first": "H",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bouazizi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ohtsuki",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "13825--13835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Watanabe, H., Bouazizi, M., and Ohtsuki, T. (2018). Hate speech on twitter: A pragmatic approach to collect hate- ful and offensive expressions and perform hate speech detection. IEEE Access, 6:13825-13835.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "frequency of the number of superlative adjectives \u2022 frequency of the number of superlative relative adjectives \u2022 frequency the number of comparative adjectives \u2022 frequency of the number of nouns \u2022 frequency of the number of conjunctions \u2022 frequency of the number of adverbs \u2022 frequency of articles \u2022 frequency of indefinite articles \u2022 frequency of definite articles \u2022 frequency of indefinite articles prepositions \u2022 frequency of pronouns \u2022 frequency of numbers \u2022 frequency of special characters \u2022 frequency of emoji \u2022 frequency of unigrams \u2022 frequency of bigrams \u2022 frequency of trigrams \u2022 frequency of offensive words \u2022 frequency of punctuation \u2022 frequency of commas \u2022 frequency of colon \u2022 frequency of semi-comma \u2022 frequency of exclamation mark \u2022 frequency of question mark \u2022 frequency of quotes \u2022 frequency of upper-case words \u2022 frequency of words starting with upper case \u2022 frequency of stretched words \u2022 frequency of the first singular person pronouns \u2022 frequency of the first plural person pronouns \u2022 frequency of the second singular person pronouns \u2022 frequency of the second plural person pronouns \u2022 frequency of the third singular person pronouns related to male",
"uris": null
}
}
}
}