ACL-OCL / Base_JSON /prefixP /json /peoples /2020.peoples-1.14.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:45:07.461824Z"
},
"title": "Cross-lingual Emotion Intensity Prediction",
"authors": [
{
"first": "Irean",
"middle": [],
"last": "Navas",
"suffix": "",
"affiliation": {},
"email": "irean.navas@gmail.com"
},
{
"first": "Toni",
"middle": [],
"last": "Badia",
"suffix": "",
"affiliation": {},
"email": "tbadia@upf.edu"
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": "",
"affiliation": {},
"email": "jeremycb@ifi.uio.no"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Emotion intensity prediction determines the degree or intensity of an emotion that the author expresses in a text, extending previous categorical approaches to emotion detection. While most previous work on this topic has concentrated on English texts, other languages would also benefit from fine-grained emotion classification, preferably without having to recreate the amount of annotated data available in English in each new language. Consequently, we explore cross-lingual transfer approaches for fine-grained emotion detection in Spanish and Catalan tweets. To this end we annotate a test set of Spanish and Catalan tweets using Best-Worst scaling. We compare six cross-lingual approaches, e.g., machine translation and cross-lingual embeddings, which have varying requirements for parallel data, from millions of parallel sentences to completely unsupervised. The results show that on this data, methods with low parallel-data requirements surprisingly outperform methods that use more parallel data, which we explain through an in-depth error analysis. We make the dataset and the code available at https://github.com/jerbarnes/fine-grained_cross-lingual_emotion.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Emotion intensity prediction determines the degree or intensity of an emotion that the author expresses in a text, extending previous categorical approaches to emotion detection. While most previous work on this topic has concentrated on English texts, other languages would also benefit from fine-grained emotion classification, preferably without having to recreate the amount of annotated data available in English in each new language. Consequently, we explore cross-lingual transfer approaches for fine-grained emotion detection in Spanish and Catalan tweets. To this end we annotate a test set of Spanish and Catalan tweets using Best-Worst scaling. We compare six cross-lingual approaches, e.g., machine translation and cross-lingual embeddings, which have varying requirements for parallel data, from millions of parallel sentences to completely unsupervised. The results show that on this data, methods with low parallel-data requirements surprisingly outperform methods that use more parallel data, which we explain through an in-depth error analysis. We make the dataset and the code available at https://github.com/jerbarnes/fine-grained_cross-lingual_emotion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Emotion analysis within natural language processing attempts to identify the private states (Wiebe et al., 2005) expressed in written text, which in many cases are only implicitly available. Research often classifies these emotions into discrete categories (Ekman, 1999; Plutchik, 2001), such as anger, fear, joy, or sadness. This discrete approach to emotion has been applied to fairy tales (Alm et al., 2005), headlines (Strapparava and Mihalcea, 2007), and more recently micro-blogging services, such as Twitter (Schuff et al., 2017). However, people can express emotion in ways that require a more fine-grained approach than the basic discrete version. Take the following two sentences:",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Wiebe et al., 2005)",
"ref_id": "BIBREF42"
},
{
"start": 257,
"end": 270,
"text": "(Ekman, 1999;",
"ref_id": "BIBREF17"
},
{
"start": 271,
"end": 286,
"text": "Plutchik, 2001)",
"ref_id": "BIBREF36"
},
{
"start": 393,
"end": 411,
"text": "(Alm et al., 2005)",
"ref_id": "BIBREF0"
},
{
"start": 424,
"end": 456,
"text": "(Strapparava and Mihalcea, 2007)",
"ref_id": "BIBREF39"
},
{
"start": 518,
"end": 538,
"text": "Schuff et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) I am not feeling particularly happy today",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) I feel like I am the most miserable person on earth. Both of these examples would be labelled with the emotion sadness. However, it is clear that the second sentence expresses a larger degree of sadness than the first, which categorical approaches to emotion analysis would not be able to identify. This motivates the need to move to a more fine-grained approach to emotion analysis. Emotion intensity prediction (Mohammad and Bravo-Marquez, 2017) does just this by extending emotion prediction from a classification task to a regression task. Given a text, the goal is to determine a real-valued number between 0 and 1 representing the intensity of the emotion present. This approach makes it possible to capture more subtle differences between expressions of emotion.",
"cite_spans": [
{
"start": 416,
"end": 450,
"text": "(Mohammad and Bravo-Marquez, 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current state-of-the-art approaches to emotion intensity are based on supervised machine learning approaches, which combine several sources of annotated corpora, emotion and sentiment lexicons in order to achieve the best performance. However, the combination of all of these necessary resources is only available in a few high-resource languages, with English easily having the largest number. Collecting a similar set of resources for all other languages is prohibitively expensive and would require years of work. Therefore, it would be preferable to find a way to use the available resources in English to perform emotion intensity prediction in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cross-lingual methods, either translation or cross-lingual embedding approaches, offer a possible solution to the lack of labeled data and have shown promise for sentiment analysis at document-level (Chen et al., 2018; Chen et al., 2019), sentence-level (Barnes et al., 2018; Feng and Wan, 2019), and fine-grained (Hangya et al., 2018; Barnes and Klinger, 2019) granularities. However, the greater number of classes in emotion classification and the difficulty of the regression task mean that it is not obvious that the cross-lingual approaches that work well for sentiment analysis will necessarily work for cross-lingual emotion intensity prediction.",
"cite_spans": [
{
"start": 199,
"end": 218,
"text": "(Chen et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 219,
"end": 237,
"text": "Chen et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 255,
"end": 276,
"text": "(Barnes et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 277,
"end": 296,
"text": "Feng and Wan, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 316,
"end": 336,
"text": "(Hangya et al., 2018",
"ref_id": "BIBREF22"
},
{
"start": 339,
"end": 364,
"text": "Barnes and Klinger, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we provide the first attempt at cross-lingual emotion intensity prediction, by comparing methods which rely on cross-lingual embeddings, machine translation, and unsupervised machine translation to transfer resources from English to predict the emotion in languages that do not have large available datasets or lexicon resources. For testing, we additionally annotate a dataset of tweets in Spanish and Catalan. Our results show that surprisingly unsupervised machine translation is able to outperform both supervised machine translation and cross-lingual embedding methods on these datasets. We additionally perform detailed quantitative and qualitative error analyses of the supervised and unsupervised translation approaches, concluding that while the overall translation quality of the supervised system is better than the unsupervised system, it often does not translate hashtags, which are an important source of emotion information for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the rest of the paper we discuss related work (Section 2) and provide a description of the datasets (Section 3) and models (Section 4) used for the experiments. We then discuss the results (Section 5) and provide an in-depth analysis of why certain models perform better (Section 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Emotion detection attempts to identify explicitly or implicitly mentioned emotions in a text, either by following a proposed set of basic emotion categories (Plutchik, 1980; Ekman, 1992; Ekman, 1999) or through valence-arousal approaches (Russell, 2003) . However, in contrast to other tasks which also attempt to detect evaluative language, such as subjectivity or sentiment analysis, there are relatively few annotated resources, and most of these resources are found only in English (Alm et al., 2005; Aman and Szpakowicz, 2007; Strapparava and Mihalcea, 2007; Schuff et al., 2017) . A notable exception is the deISEAR dataset (Troiano et al., 2019) , which crowdsources descriptions of emotional events in German.",
"cite_spans": [
{
"start": 157,
"end": 173,
"text": "(Plutchik, 1980;",
"ref_id": "BIBREF35"
},
{
"start": 174,
"end": 186,
"text": "Ekman, 1992;",
"ref_id": "BIBREF16"
},
{
"start": 187,
"end": 199,
"text": "Ekman, 1999)",
"ref_id": "BIBREF17"
},
{
"start": 238,
"end": 253,
"text": "(Russell, 2003)",
"ref_id": "BIBREF37"
},
{
"start": 478,
"end": 504,
"text": "English (Alm et al., 2005;",
"ref_id": null
},
{
"start": 505,
"end": 531,
"text": "Aman and Szpakowicz, 2007;",
"ref_id": "BIBREF1"
},
{
"start": 532,
"end": 563,
"text": "Strapparava and Mihalcea, 2007;",
"ref_id": "BIBREF39"
},
{
"start": 564,
"end": 584,
"text": "Schuff et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 630,
"end": 652,
"text": "(Troiano et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Annotating categorical emotion is a subjective and complicated task, which often leads to low inter-annotator agreement (Schuff et al., 2017). However, Best-Worst scaling has been shown to improve overall agreement scores when annotating tweets (Mohammad and Bravo-Marquez, 2017; Mohammad et al., 2018). In this approach, annotators are shown n items (n > 1, normally 4) and must choose only the best and the worst items, i.e., those that most and least represent the phenomenon in question. Despite these advances in annotation, for most languages in the world there exists no annotated emotion dataset that could enable supervised emotion classification.",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "(Schuff et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 240,
"end": 274,
"text": "(Mohammad and Bravo-Marquez, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 275,
"end": 297,
"text": "Mohammad et al., 2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On English data, previous approaches to classifying emotion have used word and character n-gram features (Mohammad, 2012), sentiment and emotion lexicon features, as well as a variety of neural networks (K\u00f6per et al., 2017; Felbo et al., 2017; Bostan and Klinger, 2019). Strong emotion classification systems typically combine these features to achieve the best performance.",
"cite_spans": [
{
"start": 105,
"end": 121,
"text": "(Mohammad, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 205,
"end": 225,
"text": "(K\u00f6per et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 226,
"end": 245,
"text": "Felbo et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 246,
"end": 271,
"text": "Bostan and Klinger, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Emotion intensity proposes a more fine-grained view of emotion classification. Specifically, given a tweet and an emotion X, the goal is to determine the intensity or degree of emotion X expressed in the text, as a real-valued score between 0 and 1. This task has already been the topic of two shared tasks (Mohammad and Bravo-Marquez, 2017; Mohammad et al., 2018), which attracted many participants.",
"cite_spans": [
{
"start": 304,
"end": 338,
"text": "(Mohammad and Bravo-Marquez, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 339,
"end": 361,
"text": "Mohammad et al., 2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion intensity prediction",
"sec_num": "2.1"
},
{
"text": "Given the complexity of the task and the relatively small amount of annotated training data available, it is perhaps unsurprising that state-of-the-art methods incorporate information from external sources, either in the form of specialized word embeddings (Goel et al., 2017) , lexicon features (K\u00f6per et al., 2017; Duppada and Hiray, 2017) , or transfer learning methods (Felbo et al., 2017) .",
"cite_spans": [
{
"start": 257,
"end": 276,
"text": "(Goel et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 296,
"end": 316,
"text": "(K\u00f6per et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 317,
"end": 341,
"text": "Duppada and Hiray, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 373,
"end": 393,
"text": "(Felbo et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion intensity prediction",
"sec_num": "2.1"
},
{
"text": "Additionally, in contrast to related tasks such as sentiment analysis, where end-to-end neural methods often give state-of-the-art results (Ambartsoumian and Popowich, 2018), for emotion intensity prediction n-grams, character n-grams, word embedding features, and lexicon features play a more important role (Mohammad and Bravo-Marquez, 2017; K\u00f6per et al., 2017; Duppada and Hiray, 2017). However, it is not clear whether the same features are equally important when performing this task cross-lingually.",
"cite_spans": [
{
"start": 139,
"end": 172,
"text": "Ambartsoumian and Popowich, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 309,
"end": 343,
"text": "(Mohammad and Bravo-Marquez, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 344,
"end": 363,
"text": "K\u00f6per et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 364,
"end": 388,
"text": "Duppada and Hiray, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion intensity prediction",
"sec_num": "2.1"
},
{
"text": "For other tasks that classify affective text, such as sentiment analysis, cross-lingual approaches have shown promise for classifying a low-resource target language by leveraging labeled data from high-resource source languages, such as English (Barnes et al., 2018; Chen et al., 2019).",
"cite_spans": [
{
"start": 245,
"end": 266,
"text": "(Barnes et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 267,
"end": 285,
"text": "Chen et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual approaches",
"sec_num": "2.2"
},
{
"text": "We divide cross-lingual approaches into machine translation (MT) techniques and word embedding techniques, as they generally have different data requirements and different models. MT approaches can either be supervised, which requires parallel corpora, or unsupervised, relying only on monolingual corpora. While most MT research has focused on resource-rich languages, where Neural MT (NMT) has indeed displaced Statistical MT, a recent line of work has managed to train an NMT system without any supervision, relying on monolingual corpora alone (Artetxe et al., 2018). This would be particularly useful for low-resource languages if the translation quality proved good enough to enable a classifier to reliably predict the emotion.",
"cite_spans": [
{
"start": 558,
"end": 580,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual approaches",
"sec_num": "2.2"
},
{
"text": "Cross-lingual embedding methods instead require large monolingual corpora and only small amounts of bilingual signal, often just small bilingual lexica. Barnes et al. (2018) use monolingual embeddings and bilingual lexicons to jointly learn cross-lingual embeddings while training a sentiment classifier. Their bilingual sentiment embeddings (BLSE) method predicts the sentiment of source-language sentences by projecting the source embeddings into the joint space, and repeats the process for the target language using the target embeddings, i.e., the original texts rather than translations, projecting them into the joint space to obtain the prediction.",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "Barnes et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual approaches",
"sec_num": "2.2"
},
{
"text": "More recently, cross-lingual methods have resorted to multilingual language modelling (Devlin et al., 2019; Conneau et al., 2020), based on pretraining large transformer models (Vaswani et al., 2017) on unlabeled text. These models do not explicitly model inter-language representations, but they nevertheless achieve surprisingly good cross-lingual performance on many tasks (Wu and Dredze, 2019).",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 108,
"end": 129,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 176,
"end": 198,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 352,
"end": 373,
"text": "(Wu and Dredze, 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual approaches",
"sec_num": "2.2"
},
{
"text": "For training the emotion intensity classifiers, we use the English data from the WASSA 2017 shared task on emotion intensity prediction (Mohammad and Bravo-Marquez, 2017). The authors collected tweets and used crowd-annotation to obtain real-valued labels for four emotions (anger, fear, joy, and sadness). We use their predefined splits (statistics are shown in Table 3).",
"cite_spans": [
{
"start": 136,
"end": 170,
"text": "(Mohammad and Bravo-Marquez, 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 365,
"end": 372,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "For testing in the two target languages (Spanish and Catalan) we create two annotated datasets following the methodology of Mohammad and Bravo-Marquez (2017). We gather tweets that contain one of six emotion terms 1 (anger, disgust, fear, joy, sadness, and surprise). This original download yields between 385 and 600 tweets per emotion, totalling 3279 Spanish and 2941 Catalan tweets. These tweets are then filtered and normalized in order to remove retweets, mentions, and links, as well as adverts and tweets that only contain images, resulting in 342 tweets in Spanish and 280 in Catalan.",
"cite_spans": [
{
"start": 124,
"end": 157,
"text": "Mohammad and Bravo-Marquez (2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation of test data",
"sec_num": "3.1"
},
{
"text": "1. Que lindo fue volver a meterse a nadar hoy 2. Hoy estoy triste.... 3. le acabo de romper una pata a la ara\u00f1a y se me subio a la mano 4. estoy tan enfadado.... gracias a dios que nadie me entiende. Following the methodology set out in Best-Worst Scaling (BWS), each annotator is given four items (a 4-tuple) and is asked which item is the best (highest in terms of the property of interest) and which is the worst (least in terms of the property of interest) (Kiritchenko and Mohammad, 2016). The annotator must then choose which of the four tweets represents each emotion the most and which represents it the least. Given the small number of tweets, we include all of them in the annotation of each emotion. An example of the annotation process in Spanish is shown in Table 1.",
"cite_spans": [
{
"start": 459,
"end": 491,
"text": "(Kiritchenko and Mohammad, 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 765,
"end": 772,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Annotation of test data",
"sec_num": "3.1"
},
{
"text": "This annotation task presents a number of challenges. For example, annotators cannot simply rely on keywords in the tweet to identify the intensity of an emotion, given that many times the authors of tweets use emotional hashtags ironically. Additionally, differentiating between four tweets that all have relatively low intensities of an emotion can be difficult. Finally, inferring the emotion conveyed in a short text is known to be subjective and challenging on its own (Schuff et al., 2017) .",
"cite_spans": [
{
"start": 474,
"end": 495,
"text": "(Schuff et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation of test data",
"sec_num": "3.1"
},
{
"text": "After annotating the tuples, we use split-half correlation to determine inter-annotator agreement (shown in Table 2). We report Pearson correlation, which is a number between -1 and 1 that indicates the extent to which two variables are linearly related, and Spearman correlation, which measures the strength and direction of association between two ranked variables. As shown in Table 2, we obtain strong correlations (> 0.6) for all emotions except surprise. We find higher correlation scores in Spanish than in Catalan for all emotions. The lower correlation for the Catalan tweets could be due to the fact that there are fewer tweets referring to joy and surprise, which made the annotation task harder. It is important to point out that surprise, which is known to be a difficult emotion to classify (Schuff et al., 2017) and has even been split into positive and negative surprise (Alm et al., 2005), has the lowest scores. In fact, we disregard surprise and disgust for the rest of the experiments, as we have no English annotated data for these emotions. However, the data will be made available with all annotations.",
"cite_spans": [
{
"start": 834,
"end": 855,
"text": "(Schuff et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 918,
"end": 936,
"text": "(Alm et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 381,
"end": 388,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Inter-annotator agreement",
"sec_num": "3.2"
},
{
"text": "In order to determine how much bilingual signal is required to predict cross-lingual emotion intensity, we compare four methods with differing data needs. In the following, we describe these methods from those that require the largest amount of bilingual signal (MT) to those that require the least (UNSUP). Supervised Machine Translation (MT): We use Google Translate 2 , which makes use of large amounts of parallel data, to translate the test samples to English. We then train a Support Vector Regression model 3 on bag-of-words representations (MT-BOW) and a second model with a number of additional features (MT-FULL). For the MT-FULL model, we include n-gram features (1-4), character n-grams (3-5), embedding features created by averaging the embeddings of all the tokens in the tweet, and finally features from the following lexicons: the NRC Hashtag Sentiment Lexicon, the NRC hashtag emotion association lexicon (Mohammad, 2012), and the NRC Word-Emotion Association Lexicon (Mohammad and Turney, 2013), where each feature is a real-valued number representing how strongly each word is associated with a polarity or emotion. The final representation of each tweet using the MT-FULL method is therefore a 65,860-dimensional vector. Finally, we train the SVR model using the following settings (linear kernel, C = 100) on the original English training data and test on the translated test set.",
"cite_spans": [
{
"start": 913,
"end": 929,
"text": "(Mohammad, 2012;",
"ref_id": "BIBREF33"
},
{
"start": 975,
"end": 1002,
"text": "(Mohammad and Turney, 2013)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Cross-lingual Word Embeddings (CWE): We create 300-dimensional monolingual word2vec embeddings for the source and target languages by training on Wikipedia corpora (see UNSUP for more information on the corpora) and then use VecMap (Artetxe et al., 2017) to learn an orthogonal projection of the word embeddings into a joint shared embedding space, using a small bilingual lexicon 4 as supervision (5749 and 5310 translation pairs for EN-ES and EN-CA, respectively). Finally, we train a Support Vector Regression model on the source language (EN) with only the cross-lingual embeddings as features, using the following settings (linear kernel, C = 100), and test it on the target languages (ES, CA). It is important to highlight that this method does not use the translations but the original texts.",
"cite_spans": [
{
"start": 228,
"end": 250,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Bilingual Sentiment Embeddings (BLSE): Like CWE, this model uses a bilingual lexicon to learn a mapping from both original vector spaces to a shared bilingual space, but instead jointly learns to predict the sentiment and employs two linear projection matrices. This allows the model to infuse the target embedding space with sentiment information by updating the source space for sentiment and requiring that the target space resemble it as much as possible, using the bilingual dictionary to anchor terms. In this work, we adapt the model to predict emotion intensity by replacing the cross-entropy loss with mean-squared error. We train the model on the English training data and the same bilingual lexicons as for CWE, optimizing with Adam (Kingma and Ba, 2014) for 100 epochs with an \u03b1 of 0.001. We keep the model with the best performance on the source language development set, and finally test on the target test set. As with CWE, we highlight that this method does not use the translations but the original texts.",
"cite_spans": [
{
"start": 744,
"end": 765,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We train an unsupervised statistical machine translation model (Artetxe et al., 2018) for English-Spanish and English-Catalan. The model first creates monolingual embeddings, then learns to project them into a bilingual space by selecting identical strings as pivots, which serve as a noisy bilingual lexicon that is improved iteratively. Next, the model induces a noisy phrase table for the SMT model, which is likewise improved iteratively. We extract cleaned corpora 5 from Wikipedia dumps and sentence- and word-tokenize them, resulting in approximately 89, 29, and 10 million sentences for English, Spanish, and Catalan, respectively. We train the UNSUP model using the default settings (removing sentences with fewer than 3 and more than 80 tokens, a 5-gram language model, 300-dimensional embeddings, 10 rounds of unsupervised tuning for the SMT, and 3 rounds of backtranslation). We then translate the test data to English using the UNSUP system. Finally, we use bag-of-words representations (UNSUP-BOW) and additional n-gram, character n-gram, embedding, and lexicon information (UNSUP-FULL) and train a Support Vector Regression model using the following settings (linear kernel, C = 100), as we do with the MT models.",
"cite_spans": [
{
"start": 63,
"end": 85,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Statistical Machine Translation (UNSUP)",
"sec_num": null
},
{
"text": "We use pretrained MBERT and XLM-ROBERTA models to extract features for each example by taking the final [CLS] embedding as the representation for the example. These features are then used to train an SVR model, as with the other experiments. We additionally experimented with adding a linear layer after the final LM layer and fine-tuning the full model, only fine-tuning the linear layer, and using a max pooled representation instead of the [CLS] embedding, but found that these approaches did not perform as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual Language Models",
"sec_num": null
},
{
"text": "The Pearson correlation results are summarized for all models in Table 4 for Spanish and Catalan. We report the individual scores for anger, fear, joy, and sadness, as well as the averaged Pearson score of all emotions.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The approach that obtains the highest overall Pearson correlation across all emotions on both languages is UNSUP-FULL, averaging 0.37 on Catalan and 0.32 on Spanish. In addition, it is the best performing model on 4 of the 8 tasks; the exceptions are Catalan joy, where MT-FULL is 0.01 percentage points (pp.) better, reaching 0.46; Catalan sadness, where XLM-ROBERTA is 0.04 pp. better, at 0.25; Catalan fear, where XLM-ROBERTA is 0.02 pp. better, at 0.42; and Spanish sadness, where BLSE is 0.02 pp. better, reaching 0.12. XLM-ROBERTA is the second best model, averaging 0.35 and 0.30 on Catalan and Spanish, respectively, while MT-FULL is slightly worse (0.34 and 0.26). MT-BOW and UNSUP-BOW perform much worse (0.15/0.07 and 0.10/0.05), and the cross-lingual embedding methods are the worst by far (0.04/0.03 for CWE and 0.07/0.15 for BLSE). Table 5: Error analysis of the machine translation used in the MT and UNSUP approaches. The error categories are incorrectly translated hashtags, lexical errors, insertions, deletions, untranslated segments, translation errors of slang and non-standard language, mistranslated names, and mistranslated numbers. Numbers refer to the number of tweets in which these errors are found, rather than the number of errors. CA MT: hashtags 90, lexical 53, insertions 2, deletions 18, untranslated 17, slang 26, names 5, numbers 2, total 213. CA UNSUP: hashtags 60, lexical 67, insertions 7, deletions 14, untranslated 168, slang 29, names 81, numbers 9, total 435. ES MT: hashtags 62, lexical 37, insertions 0, deletions 4, untranslated 12, slang 68, names 0, numbers 0, total 183. ES UNSUP: hashtags 35, lexical 142, insertions 13, deletions 43, untranslated 84, slang 101, names 49, numbers 16, total 467.",
"cite_spans": [],
"ref_spans": [
{
"start": 886,
"end": 1058,
"text": ". Total CA MT 90 53 2 18 17 26 5 2 213 UNSUP 60 67 7 14 168 29 81 9 435 ES MT 62 37 0 4 12 68 0 0 183 UNSUP 35 142 13 43 84 101 49",
"ref_id": "TABREF0"
},
{
"start": 1066,
"end": 1073,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "It is clear that the additional features (character n-grams, embedding features, and lexicon features) are essential. MT-FULL performs an average of 0.19 pp. Pearson better than MT-BOW, while UNSUP-FULL leads to 0.28 pp. improvement over UNSUP-BOW. We further confirm this in Section 6.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Regarding the cross-lingual embedding models, it seems evident that these do not contain enough information to accurately predict emotion intensity in the target language. BLSE does outperform CWE on both Catalan and Spanish (an average of 0.03 pp. and 0.12 pp., respectively) and both MT-BOW and UNSUP-BOW on Catalan (0.08 pp. and 0.10 pp.), but the overall performance is still poor. These models are also the poorest performers monolingually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "There is large performance gap between XLM-ROBERTA and MBERT, on both the monolingual (0.27 pp.) and cross-lingual tasks (0.38/0.22 pp.), as MBERT is the weakest cross-lingual model and XLM-ROBERTA the second best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Additionally, there is a divergence between sadness and the rest of the emotions analyzed, with no model achieving more than 0.21 or 0.12 in Catalan and Spanish. This seems to indicate that sadness may be harder to classify cross-lingually, as monolingually this class is has the best classification results (Mohammad and Bravo-Marquez, 2017; K\u00f6per et al., 2017) . This class also has the fewest training and development examples in English, which may indicate that the good previous results monolingually may have been due to overfitting to the data. It is also possible that the particulars of the target language test data are the reason for this difference, although the inter-annotator agreement scores suggest that sadness is not more difficult than the other classes.",
"cite_spans": [
{
"start": 308,
"end": 342,
"text": "(Mohammad and Bravo-Marquez, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 343,
"end": 362,
"text": "K\u00f6per et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this section we compare both quantitatively and qualitatively the differences in translation quality between MT and UNSUP. Furthermore, we perform an ablation study to determine which features are the most important for MONO, MT-FULL, and UNSUP-FULL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "Given that twitter is a social network where people express their emotions and opinions on a large variety of topics -social or personal events, news, and politics -the translation task is made more difficult. Additionally, relevant information to emotion classification in tweets is often contained in hashtags , which are known to be difficult to translate (Gotti et al., 2014) . Therefore, cross-lingual approaches to fine-grained emotion detection in twitter are particularly challenging since the language used in twitter usually contains abbreviations, acronyms, emoticons, unusual orthographic elements, slang, and misspellings (Liew and Turtle, 2016). All of these phenomena are difficult for both translation-and projection-based cross-lingual approaches.",
"cite_spans": [
{
"start": 359,
"end": 379,
"text": "(Gotti et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differences in translation quality",
"sec_num": "6.1"
},
{
"text": "We manually examine the MT and UNSUP translations of the Catalan and Spanish tweets for translation errors. For each tweet, we determine if there has been an error regarding the hashtags, any lexical errors, insertions, deletions, untranslated segments, errors with non-standard language, errors translating original #DiosLosCr\u00edaYEllosSeJuntan L'advocat de Camacho en el cas M\u00e9todo 3 va redactar la sent\u00e8ncia de Puig Antich link MT # DiosLosCr\u00edaYesLocated The Camacho lawyer in the case Method 3 wrote the sentence of Puig Antich link UNSUP # DiosLosCr\u00edaYEllosSeJuntan l 'advocat of camacho in the case hist\u00f3ria 3 drafted the verdict of abu-jamal link manual trans. #BirdsOfAFeatherFlockTogether Camacho's lawyer in the M\u00e9todo 3 case is the one who sentenced Puig Antich link Table 6 : An example of a tweet in Catalan (original), its translations using the two machine translation systems (MT, UNSUP), as well as a manual translation. Untranslated tokens are highlighted in red, while entity errors are highlighted in blue. Table 7 : An example of tweet in Spanish (original) and its translation using the two machine translations systems (MT, UNSUP). Hashtag translation errors are highlighted in grey and lexical errors are highlighted in green.",
"cite_spans": [],
"ref_spans": [
{
"start": 776,
"end": 783,
"text": "Table 6",
"ref_id": null
},
{
"start": 1025,
"end": 1032,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Differences in translation quality",
"sec_num": "6.1"
},
{
"text": "names and errors translating number and show the results in Table 5 . MT has fewer errors overall compared to UNSUP (213/183 compared to 435/467, respectively) and has fewer of all error types, except for hashtags. For the task of predicting emotion intensity in tweets, the hashtags are often the most informative source, which explains why UNSUP-FULL performs better than MT-FULL in our experiments. The Spanish translation models generally perform better than the Catalan ones. This is likely due to the larger amount of training data available. However, the Spanish data also contains more use of nonstandard language, which is reflected in the slang errors. In these cases, MT generally performs much better than UNSUP. Interestingly, UNSUP tends to mistranslate named entities. Specifically, the model often replaces a named entity with a similar entity in the target language. For example, mentions of the Catalan freedom fighter Salvador Puig i Antich are consistently translated to Mumia Abu-Jamal, an American journalist (see Table 6 ). Both were accused of killing a police officer and sentenced to death, which lead to large protests. This is likely due to the nearest neighbor search used to create the original phrase tables.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 5",
"ref_id": null
},
{
"start": 1036,
"end": 1043,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Differences in translation quality",
"sec_num": "6.1"
},
{
"text": "Besides the mistranlation of named entities, in Table 6 we can also see that the multiword hashtag, which contains information necessary to properly interpret the emotional content of the tweet, has not been translated by MT or UNSUP. Note that although this problem could be improved by properly segmenting the hashtags in a previous step (Declerck and Lendvai, 2015; \u00c7elebi and \u00d6zg\u00fcr, 2016) , translation would still likely lead to a loss of information (Gotti et al., 2014) important for emotion classification. Table 7 , instead, shows an example from the Spanish dataset where, even though the MT version better preserves the semantics of the original tweet, it did not correctly translate the emotional hashtag, while UNSUP did. On the other hand, for cases where MT-FULL has better performance, translation quality tends to be the main factor. Specifically, UNSUP-FULL tends to leave many words untranslated. Table 8 : Ablation study of MONO (used to show an informative monolingual baseline), MT-FULL and UNSUP-FULL on the Catalan dataset, where we show the drop in performance (Pearson correlation) when we remove only a single feature at a time (except for -all lex, where all lexicon features are removed). We show the largest drop in bold and the second largest underlined. On most emotions (anger, fear joy), removing the n-gram feature leads to the largest drop both mono-and cross-lingually. For sadness, however, the NRC sentiment lexicon features (sent) are most decisive.",
"cite_spans": [
{
"start": 340,
"end": 368,
"text": "(Declerck and Lendvai, 2015;",
"ref_id": "BIBREF13"
},
{
"start": 369,
"end": 392,
"text": "\u00c7elebi and \u00d6zg\u00fcr, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 456,
"end": 476,
"text": "(Gotti et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 6",
"ref_id": null
},
{
"start": 515,
"end": 522,
"text": "Table 7",
"ref_id": null
},
{
"start": 916,
"end": 923,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Differences in translation quality",
"sec_num": "6.1"
},
{
"text": "In order to determine which features are most predictive for emotion intensity, we perform an ablation study of MONO, MT-FULL, and UNSUP-FULL on the Catalan test data 6 . Specifically, we remove a single feature at at time, except for -all lex, where all lexicon features are removed. We include MONO as an upper-bound to determine what features are most important for the task, given enough monolingual data, but note that the test data is different for MT-FULL and UNSUP-FULL, so the exact results are therefore not strictly comparable. The results are shown in Table 8 . In general, the cross-lingual models exhibit the same relationship to the features as the monolingual model, although with generally lower performance. Token n-grams are the most important feature for anger and fear, although less important for joy and sadness. Word embedding features seem to contribute nothing to the performance. Character n-gram features, on the other hand, contribute little, or even hurt the performance (removing them actually improves the results for MT-FULL and UNSUP-FULL on fear, and for MONO and UNSUP-FULL on sadness). Finally, the lexicon features are important for all emotions.",
"cite_spans": [],
"ref_spans": [
{
"start": 564,
"end": 571,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation study",
"sec_num": "6.2"
},
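[Editor's note] The ablation protocol above can be sketched as retraining with one feature group held out at a time and reporting the resulting drop in Pearson correlation. `train_and_score` below is a hypothetical stand-in for the full training pipeline, with made-up weights; only the loop structure mirrors the study.

```python
FEATURE_GROUPS = ["token_ngrams", "char_ngrams", "embeddings",
                  "emo_lex", "sent_lex"]

def train_and_score(active_groups):
    """Placeholder for 'train on these feature groups, return test Pearson r'.
    The weights are invented for illustration, not taken from Table 8."""
    weights = {"token_ngrams": 0.20, "char_ngrams": 0.02,
               "embeddings": 0.00, "emo_lex": 0.08, "sent_lex": 0.10}
    return sum(weights[g] for g in active_groups)

full = train_and_score(FEATURE_GROUPS)
for held_out in FEATURE_GROUPS:
    kept = [g for g in FEATURE_GROUPS if g != held_out]
    drop = full - train_and_score(kept)
    print(f"-{held_out:12s} drop: {drop:.2f}")
```

Each printed line corresponds to one "-feature" column of an ablation table: the larger the drop, the more the held-out group contributed.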
{
"text": "For joy, removing any one set of features does not lead to large drops in performance. Given the good performance of all models on this emotion, it seems to indicate that this class is the easiest to predict, and that the features are relatively redundant. However for UNSUP-FULL, removing all lexicon features still leads to a drop of 0.16 pp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation study",
"sec_num": "6.2"
},
{
"text": "Sadness is the most difficult emotion, with the cross-lingual models performing on par with MONO. The lexicon features are the most informative for all models, specifically the NRC sentiment lexicon features (-sent). This effect is even stronger cross-lingually, where without them, the models perform at chance level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation study",
"sec_num": "6.2"
},
{
"text": "In this paper, we provided the first attempt at cross-lingual emotion intensity prediction, by comparing methods which rely on differing amounts of cross-lingual signal, ranging from millions of parallel sentences (MT), small bilingual dictionaries (cross-lingual embeddings), to no explicit cross-lingual signal at all (UNSUP). We compare these methods on two target languages, Spanish and Catalan, which do not have large available emotion datasets or lexicon resources. In order to test the models, we additionally annotated a small dataset of tweets in Spanish and Catalan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our results show that translation methods outperform embedding-based methods for almost all emotions and achieve reasonable average results, although there is still a noticeable gap to reach monolingual levels. Surprisingly, unsupervised translation is the best performing cross-lingual method, largely due to the fact that it more accurately translates hashtags. XLM-ROBERTA performs nearly as well, but unfortunately cannot be combined with sentiment and emotion lexicons available in English, as it processes the original target-language data. These results may not hold for other domains, such as literature or opinion pieces, where emotional information is not concentrated in a similar way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In the future, it would be interesting to perform experiments on various domains, in order to determine whether unsupervised machine translation for cross-lingual emotion is robust to domain shift. Theoretically, this is simpler for unsupervised MT rather than supervised MT, which could motivate further research in this direction. As lexicon information has proven so useful for this task, it could be interesting to look into approaches that use this information to improve pretrained multilingual language models. Additionally, given the importance of hashtags for emotion detection in tweets, it would be important for future work in cross-lingual emotion detection to concentrate on achieving better translations of hashtags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Finally, we contemplate promising research on emotion detection and classification using the newly annotated data in Catalan and Spanish we introduce here. We expect that this will contribute to furthering research on these two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We used the following translations of the emotion terms to gather tweets: felicidad, enfadado, tristeza, asco, miedo, sorpresa for Spanish and felicitat, enfadat, tristesa, f\u00e0stic, por, sorpresa in Catalan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at https://translate.google.com.3 We use the version available in the Sklearn toolkit(Pedregosa et al., 2011).4 We use the lexicons made available fromBarnes et al. (2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the wikiextractor tool available at https://github.com/attardi/wikiextractor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ablation study results for the Spanish data are similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Emotions from text: Machine learning for textbased emotion prediction",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "579--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text- based emotion prediction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 579-586, Vancouver, BC, Canada, October.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Identifying expressions of emotion in text",
"authors": [
{
"first": "Saima",
"middle": [],
"last": "Aman",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2007,
"venue": "Text, Speech and Dialogue: 10th International Conference",
"volume": "",
"issue": "",
"pages": "196--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saima Aman and Stan Szpakowicz. 2007. Identifying expressions of emotion in text. In Text, Speech and Dia- logue: 10th International Conference, TSD 2007, Pilsen, Czech Republic, September 3-7, 2007. Proceedings, pages 196-205. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Self-attention: A better building block for sentiment analysis neural network classifiers",
"authors": [
{
"first": "Artaches",
"middle": [],
"last": "Ambartsoumian",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Popowich",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "130--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artaches Ambartsoumian and Fred Popowich. 2018. Self-attention: A better building block for sentiment analysis neural network classifiers. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 130-139, Brussels, Belgium, October.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised statistical machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3632--3642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Pro- ceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Embedding projection for targeted cross-lingual sentiment: Model comparisons and a real-world study",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research",
"volume": "66",
"issue": "",
"pages": "691--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Barnes and Roman Klinger. 2019. Embedding projection for targeted cross-lingual sentiment: Model comparisons and a real-world study. Journal of Artificial Intelligence Research, 66:691-742.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Assessing State-of-the-Art Sentiment Models on State-of-the-Art Sentiment Datasets",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "2--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2017. Assessing State-of-the-Art Sentiment Mod- els on State-of-the-Art Sentiment Datasets. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 2-12, Copenhagen, Denmark, September.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bilingual sentiment embeddings: Joint projection of sentiment across languages",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2483--2493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2018. Bilingual sentiment embeddings: Joint projection of sentiment across languages. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2483-2493, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Exploring fine-tuned embeddings that model intensifiers for emotion analysis",
"authors": [
{
"first": "Laura",
"middle": [
"Ana"
],
"last": "",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Bostan",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "25--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Ana Maria Bostan and Roman Klinger. 2019. Exploring fine-tuned embeddings that model intensifiers for emotion analysis. In Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 25-34, Minneapolis, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Segmenting hashtags using automatically created training data",
"authors": [
{
"first": "Arda",
"middle": [],
"last": "\u00c7elebi",
"suffix": ""
},
{
"first": "Arzucan",
"middle": [],
"last": "\u00d6zg\u00fcr",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "2981--2985",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arda \u00c7elebi and Arzucan \u00d6zg\u00fcr. 2016. Segmenting hashtags using automatically created training data. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2981-2985, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "557--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep aver- aging networks for cross-lingual sentiment classification. Transactions of the Association for Computational Linguistics, 6:557-570.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-source crosslingual model transfer: Learning what to share",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Hassan Awadallah",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3098--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source cross- lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098-3112, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised crosslingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross- lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Processing and normalizing hashtags",
"authors": [
{
"first": "Thierry",
"middle": [],
"last": "Declerck",
"suffix": ""
},
{
"first": "Piroska",
"middle": [],
"last": "Lendvai",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "104--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thierry Declerck and Piroska Lendvai. 2015. Processing and normalizing hashtags. In Proceedings of the In- ternational Conference Recent Advances in Natural Language Processing, pages 104-109, Hissar, Bulgaria, September. INCOMA Ltd. Shoumen, BULGARIA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Seernet at EmoInt-2017: Tweet emotion intensity estimator",
"authors": [
{
"first": "Venkatesh",
"middle": [],
"last": "Duppada",
"suffix": ""
},
{
"first": "Sushant",
"middle": [],
"last": "Hiray",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "205--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Venkatesh Duppada and Sushant Hiray. 2017. Seernet at EmoInt-2017: Tweet emotion intensity estimator. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 205-211, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An argument for basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition and Emotion",
"volume": "",
"issue": "",
"pages": "169--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, pages 169-200.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1999,
"venue": "Handbook of Cognition and Emotion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1999. Basic emotions. In Tim Dalgleish and M. J. Powers, editors, Handbook of Cognition and Emotion. Wiley.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyad",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1615--1625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615-1625, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning bilingual sentiment-specific word embeddings without crosslingual supervision",
"authors": [
{
"first": "Yanlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "420--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanlin Feng and Xiaojun Wan. 2019. Learning bilingual sentiment-specific word embeddings without cross- lingual supervision. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 420- 429, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Prayas at EmoInt 2017: An ensemble of deep neural architectures for emotion intensity prediction in tweets",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Devang",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Prayas",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Kaushal Kumar",
"middle": [],
"last": "Shukla",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Goel, Devang Kulshreshtha, Prayas Jain, and Kaushal Kumar Shukla. 2017. Prayas at EmoInt 2017: An ensemble of deep neural architectures for emotion intensity prediction in tweets. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 58-65, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Hashtag occurrences, layout and translation: A corpus-driven analysis of tweets published by the Canadian government",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Gotti",
"suffix": ""
},
{
"first": "Phillippe",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "Atefeh",
"middle": [],
"last": "Farzindar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "2254--2261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Gotti, Phillippe Langlais, and Atefeh Farzindar. 2014. Hashtag occurrences, layout and translation: A corpus-driven analysis of tweets published by the Canadian government. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2254-2261, Reykjavik, Iceland, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Two methods for domain adaptation of bilingual tasks: Delightfully simple and broadly applicable",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Hangya",
"suffix": ""
},
{
"first": "Fabienne",
"middle": [],
"last": "Braune",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "810--820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktor Hangya, Fabienne Braune, Alexander Fraser, and Hinrich Sch\u00fctze. 2018. Two methods for domain adaptation of bilingual tasks: Delightfully simple and broadly applicable. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 810-820, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations (ICLR), dec.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Capturing reliable fine-grained sentiment associations by crowdsourcing and best-worst scaling",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "811--817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko and Saif M. Mohammad. 2016. Capturing reliable fine-grained sentiment associations by crowdsourcing and best-worst scaling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 811-817, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "IMS at EmoInt-2017: Emotion intensity prediction with affective norms, automatically extended resources and deep learning",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per, Evgeny Kim, and Roman Klinger. 2017. IMS at EmoInt-2017: Emotion intensity prediction with affective norms, automatically extended resources and deep learning. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 50-57, Copenhagen, Denmark, September.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Exploring fine-grained emotion detection in tweets",
"authors": [
{
"first": "Jasy Suet Yan",
"middle": [],
"last": "Liew",
"suffix": ""
},
{
"first": "Howard",
"middle": [
"R"
],
"last": "Turtle",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jasy Suet Yan Liew and Howard R. Turtle. 2016. Exploring fine-grained emotion detection in tweets. In Proceedings of the NAACL Student Research Workshop, pages 73-80, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "WASSA-2017 shared task on emotion intensity",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "34--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Felipe Bravo-Marquez. 2017. WASSA-2017 shared task on emotion intensity. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 34-49, Copenhagen, Denmark, September.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Using hashtags to capture fine emotion categories from tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Intelligence",
"volume": "31",
"issue": "2",
"pages": "301--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2015. Using hashtags to capture fine emotion categories from tweets. Computational Intelligence, 31(2):301-326.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Crowdsourcing a word-emotion association lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D."
],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436-465.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "2",
"issue": "",
"pages": "321--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 321-327, Atlanta, Georgia, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sentiment, emotion, purpose, and style in electoral tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2015,
"venue": "Information Processing & Management",
"volume": "51",
"issue": "4",
"pages": "480--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Xiaodan Zhu, Svetlana Kiritchenko, and Joel Martin. 2015. Sentiment, emotion, purpose, and style in electoral tweets. Information Processing & Management, 51(4):480-499.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "SemEval-2018 task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "#emotional tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2012,
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "246--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad. 2012. #emotional tweets. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 246-255, Montr\u00e9al, Canada, 7-8 June. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and \u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res., 12:2825-2830, November.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A general psychoevolutionary theory of emotion",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 1980,
"venue": "Emotion: Theory, research, and experience",
"volume": "1",
"issue": "",
"pages": "3--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. Emotion: Theory, research, and experience, 1(3):3-33.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The nature of emotions",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 2001,
"venue": "American Scientist",
"volume": "89",
"issue": "",
"pages": "344--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 2001. The nature of emotions. American Scientist, 89(July-August):344-350.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Core affect and the psychological construction of emotion",
"authors": [
{
"first": "James",
"middle": [
"A"
],
"last": "Russell",
"suffix": ""
}
],
"year": 2003,
"venue": "Psychological review",
"volume": "110",
"issue": "1",
"pages": "1--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James A. Russell. 2003. Core affect and the psychological construction of emotion. Psychological review, 110(1):1-145.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "Schuff",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mohme",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendrik Schuff, Jeremy Barnes, Julian Mohme, Sebastian Pad\u00f3, and Roman Klinger. 2017. Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 13-23, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "SemEval-2007 Task 14: Affective Text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. SemEval-2007 Task 14: Affective Text. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70-74, Prague, Czech Republic, June.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Crowdsourcing and validating event-focused emotion corpora for German and English",
"authors": [
{
"first": "Enrica",
"middle": [],
"last": "Troiano",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4005--4011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrica Troiano, Sebastian Pad\u00f3, and Roman Klinger. 2019. Crowdsourcing and validating event-focused emotion corpora for German and English. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4005-4011, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Annotating expressions of opinions and emotions in language",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "Language Resources and Evaluation",
"volume": "39",
"issue": "2-3",
"pages": "165--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165-210.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "833--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td>Catalan</td><td>Spanish</td></tr><tr><td/><td colspan=\"2\">Pearson Spearman Pearson Spearman</td></tr><tr><td>anger</td><td>0.68 (0.02) 0.66 (0.03)</td><td>0.77 (0.02) 0.78 (0.02)</td></tr><tr><td>disgust</td><td>0.71 (0.02) 0.70 (0.02)</td><td>0.76 (0.02) 0.77 (0.02)</td></tr><tr><td>fear</td><td>0.69 (0.02) 0.67 (0.02)</td><td>0.74 (0.02) 0.74 (0.02)</td></tr><tr><td>joy</td><td>0.66 (0.02) 0.64 (0.02)</td><td>0.76 (0.02) 0.76 (0.02)</td></tr><tr><td colspan=\"2\">sadness 0.65 (0.02) 0.64 (0.02)</td><td>0.74 (0.02) 0.75 (0.02)</td></tr><tr><td colspan=\"2\">surprise 0.44 (0.03) 0.44 (0.04)</td><td>0.65 (0.02) 0.62 (0.02)</td></tr></table>",
"html": null,
"type_str": "table",
"text": "An example 4-tuple for the Spanish data for sadness.",
"num": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>anger</td><td>857</td><td>84</td><td>760</td><td>280</td><td>342</td></tr><tr><td>fear</td><td>1147</td><td>110</td><td>995</td><td>280</td><td>342</td></tr><tr><td>joy</td><td>823</td><td>79</td><td>714</td><td>280</td><td>342</td></tr><tr><td colspan=\"2\">sadness 786</td><td>74</td><td>673</td><td>280</td><td>342</td></tr></table>",
"html": null,
"type_str": "table",
"text": "EN train EN dev EN test CA test ES test",
"num": null
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Statistics of the English train (EN train ), development (EN dev ), and Catalan (CA test ) and Spanish (ES test ) test sets used in the experiments. Each model is trained on EN train and then tested on EN test , CA test , and ES test . EN dev is only used for hyperparameter optimization.",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Pearson results of monolingual English-English experiments, as well as cross-lingual English-Catalan and English-Spanish for each emotion and each model. Average column added and best results are shown in bold.",
"num": null
}
}
}
}