{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:41:09.654709Z"
},
"title": "Token Sequence Labeling vs. Clause Classification for English Emotion Stimulus Detection",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Oberl\u00e4nder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"addrLine": "Pfaffenwaldring 5b",
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "laura.oberlaender@ims.uni-stuttgart.de"
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"addrLine": "Pfaffenwaldring 5b",
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "roman.klinger@ims.uni-stuttgart.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Emotion stimulus detection is the task of finding the cause of an emotion in a textual description, similar to target or aspect detection for sentiment analysis. Previous work approached this in three ways, namely (1) as text classification into an inventory of predefined possible stimuli (\"Is the stimulus category A or B?\"), (2) as sequence labeling of tokens (\"Which tokens describe the stimulus?\"), and (3) as clause classification (\"Does this clause contain the emotion stimulus?\"). So far, setting (3) has been evaluated broadly on Mandarin and (2) on English, but no comparison has been performed. Therefore, we analyze whether clause classification or token sequence labeling is better suited for emotion stimulus detection in English. We propose an integrated framework which enables us to evaluate the two different approaches comparably, implement models inspired by state-of-the-art approaches in Mandarin, and test them on four English data sets from different domains. Our results show that token sequence labeling is superior on three out of four datasets, in both clause-based and token sequence-based evaluation. The only case in which clause classification performs better is one data set with a high density of clause annotations. Our error analysis further confirms quantitatively and qualitatively that clauses are not the appropriate stimulus unit in English. 1 Introduction Research in emotion analysis from text focuses on classification, i.e., mapping sentences or documents to emotion categories based on psychological theories (e.g., Ekman (1992), Plutchik (2001)). While this task answers the question which emotion This work is licensed under a Creative Commons Attribution 4.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Emotion stimulus detection is the task of finding the cause of an emotion in a textual description, similar to target or aspect detection for sentiment analysis. Previous work approached this in three ways, namely (1) as text classification into an inventory of predefined possible stimuli (\"Is the stimulus category A or B?\"), (2) as sequence labeling of tokens (\"Which tokens describe the stimulus?\"), and (3) as clause classification (\"Does this clause contain the emotion stimulus?\"). So far, setting (3) has been evaluated broadly on Mandarin and (2) on English, but no comparison has been performed. Therefore, we analyze whether clause classification or token sequence labeling is better suited for emotion stimulus detection in English. We propose an integrated framework which enables us to evaluate the two different approaches comparably, implement models inspired by state-of-the-art approaches in Mandarin, and test them on four English data sets from different domains. Our results show that token sequence labeling is superior on three out of four datasets, in both clause-based and token sequence-based evaluation. The only case in which clause classification performs better is one data set with a high density of clause annotations. Our error analysis further confirms quantitatively and qualitatively that clauses are not the appropriate stimulus unit in English. 1 Introduction Research in emotion analysis from text focuses on classification, i.e., mapping sentences or documents to emotion categories based on psychological theories (e.g., Ekman (1992), Plutchik (2001)). While this task answers the question which emotion This work is licensed under a Creative Commons Attribution 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "is expressed in a text, it does not detect the textual unit, which reveals why the emotion has been developed. For instance, in the example \"Paul is angry because he lost his wallet.\" it remains hidden that lost his wallet is the reason for experiencing the emotion of anger. This stimulus, e.g., an event description, a person, a state of affairs, or an object enables deeper insight, similar to targeted or aspectbased sentiment analysis (Jakob and Gurevych, 2010; Yang and Cardie, 2013; Klinger and Cimiano, 2013; Pontiki et al., 2015 Pontiki et al., , 2016 . This situation is dissatisfying for (at least) two reasons. First, detecting the emotions expressed in social media and their stimuli might play a role in understanding why different social groups change their attitude towards specific events and could help recognize specific issues in society. Second, understanding the relationship between stimuli and emotions is also compelling from a psychological point of view, given that emotions are commonly considered responses to relevant situations (Scherer, 2005) . Models which tackle the task of detecting the stimulus in a text have seen three different problem formulations in the past: (1) Classification into a predefined inventory of possible stimuli (Mohammad et al., 2014) , similarly to previous work in sentiment analysis (Ganu et al., 2009) , (2) classification of precalculated or annotated clauses as containing a stimulus or not (Gui et al., 2016, i.a.) , and (3) detecting the tokens that describe the stimulus, e.g., with IOB labels (Ghazi et al., 2015, i.a.) . We follow the two settings in which the stimuli are not predefined categories (2+3, cf. Figure 1) .",
"cite_spans": [
{
"start": 440,
"end": 466,
"text": "(Jakob and Gurevych, 2010;",
"ref_id": "BIBREF17"
},
{
"start": 467,
"end": 489,
"text": "Yang and Cardie, 2013;",
"ref_id": "BIBREF45"
},
{
"start": 490,
"end": 516,
"text": "Klinger and Cimiano, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 517,
"end": 537,
"text": "Pontiki et al., 2015",
"ref_id": "BIBREF32"
},
{
"start": 538,
"end": 560,
"text": "Pontiki et al., , 2016",
"ref_id": "BIBREF31"
},
{
"start": 1059,
"end": 1074,
"text": "(Scherer, 2005)",
"ref_id": "BIBREF36"
},
{
"start": 1269,
"end": 1292,
"text": "(Mohammad et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 1344,
"end": 1363,
"text": "(Ganu et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 1455,
"end": 1479,
"text": "(Gui et al., 2016, i.a.)",
"ref_id": null
},
{
"start": 1561,
"end": 1587,
"text": "(Ghazi et al., 2015, i.a.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1678,
"end": 1687,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These two settings have their advantages and disadvantages. The clause classification setting is more coarse-grained and, therefore, more likely to perform well than the token sequence labeling setting, but it might miss the exact starting and endpoints of a stimulus span and needs clause annotations or a syntactic parse with the risk of error propagation. The token sequence labeling setting might be more challenging, but has the potential to output more exactly which tokens belong to the stimulus. Further, sequence labeling is a more standard machine learning setting than a pipeline of clause detection and classification. These two different formulations are naturally evaluated in two different ways and have not been compared before, to the best of our knowledge. Therefore, it remains unclear which task formulation is more appropriate for English. Further, the most recent approaches have been evaluated only on Mandarin Chinese, with the only exception being the EmotionCauseAnalysis dataset being considered by Fan et al. (2019) , but not in comparison to token sequence labeling. No other English emotion stimulus data sets have been tackled with clause classification methods. We hypothesize that clauses are not appropriate units for English, as Ghazi et al. (2015) already noted that: \"such granularity [is] too large to be considered an emotion stimulus in English\". A similar argument has been brought up during the development of semantic role labeling methods: Punyakanok et al. (2008) stated that \"argument[s] may span over different parts of a sentence\".",
"cite_spans": [
{
"start": 1026,
"end": 1043,
"text": "Fan et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 1264,
"end": 1283,
"text": "Ghazi et al. (2015)",
"ref_id": "BIBREF14"
},
{
"start": 1484,
"end": 1508,
"text": "Punyakanok et al. (2008)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our contributions are as follows: (1) we develop an integrated framework that represents different formulations for the emotion stimulus detection task and evaluate these on four available English datasets; (2) as part of this framework, we propose a clause detector for English which is required to perform stimulus detection via clause classification in a real-world setting; (3) show that token sequence labeling is indeed the preferred approach for stimulus detection in most available English datasets; (4) show in an error analysis that this is mostly because clauses are not the appropriate unit for stimuli in English. Finally, (5), we make our implementation and annotations for both clauses and tokens available at http://www.ims.uni-stuttgart.de/ data/emotion-stimulus-detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The remainder of the paper is organized as follows. We first introduce our integrated framework of stimulus detection which enables us to evaluate clause classification and token sequence labeling in a comparable manner (Section 2). We then turn to the experiments (Section 3) in which we analyze results on four different English data sets. Section 4 discusses typical errors in detail, which leads to a better understanding of how stimuli are formulated in English. We conclude in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The two approaches for open-domain stimulus detection, namely, clause classification and token sequence labeling, have not been compared on English. We propose an integrated framework ( Figure 2 ) which takes tokens t as input, splits this sequence into clauses and classifies them (clause detection can be bypassed if manual annotations of clauses are available). The token sequence labeling does not rely on clause annotations. The output, either clauses c with classifications y (y \u2208 {yes, no} n ) or tokens t with labels l are then mapped to each other to enable a comparative evaluation. We explain these steps in the following subsections.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "An Integrated Framework for Stimulus Detection",
"sec_num": "2"
},
{
"text": "The clause classification methods rely on representing an instance as a sequence of clauses. Clauses in English grammar are defined as the smallest grammatical structures that contain a subject and a predicate, and can express a complete proposition (Kroeger, 2005) . We show our algorithm to detect clauses in Algorithm 1.",
"cite_spans": [
{
"start": 250,
"end": 265,
"text": "(Kroeger, 2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clause Extraction",
"sec_num": "2.1"
},
{
"text": "To mark the segments that would potentially approximate clauses, we rely on the constituency parse tree of the token sequence (Line 2). For that reason, we use the Berkeley Neural Parser (Kitaev and Klein, 2018). As illustrated by Feng et al. (2012) and Tafreshi and Diab (2018) we also do that by segmenting the constituency parse tree of the instance (Line 9) at the borders of constituents (Bies et al., 1995) . We then join the segments until convergence heuristically based on punctuation (Line 12). We illustrate the algorithm in the example in Figure 3 .",
"cite_spans": [
{
"start": 231,
"end": 249,
"text": "Feng et al. (2012)",
"ref_id": "BIBREF9"
},
{
"start": 254,
"end": 278,
"text": "Tafreshi and Diab (2018)",
"ref_id": "BIBREF37"
},
{
"start": 393,
"end": 412,
"text": "(Bies et al., 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 551,
"end": 559,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Clause Extraction",
"sec_num": "2.1"
},
{
"text": "Our goal is to compare sequence labeling and clause classification. To attribute the performance of the model to the formulation of the task, we keep the differences between the models at a minimum. We therefore first discuss the model components and then how we put them together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "Our models are composed of four layers. As Embedding Layer, we use pretrained embeddings to embed each token in the instance s = t 1 . . . t n to obtain e 1 , . . . , e n . For the Encoding Layer, we use a bidirectional LSTM which outputs a sequence of hidden states h 1 , ..., h n . In an additional Attention Layer, each word or clause is represented as the concatenation of its embedding and a weighted average over other words or clauses in the instance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "u i = [ h i ; n j=1 a i,j \u2022 h j ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "The weights a i,j are calculated as the dot-product between h i and every other word, and by normalizing the scores using softmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "a i = softmax( h T i \u2022 h j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": ". We concatenate all representations to obtain the final representation vector s. The Output Layer is different for the two different task formulations (sequence labeling vs. single softmax). For the case of the single softmax, the input to the classifier is the representation of the clause obtained on the previous layer and the classifier output is defined as o i = softmax(W \u2022 ReLU(Dropout(h( s)))). When labels are not predicted independently from each other but rather in a sequential manner, we use a linear-chain conditional random field (Lafferty et al., 2001) . It takes the sequence of probability vectors from the previous layer u 1 , u 2 , . . . and outputs a sequence of labels y 1 , y 2 , . . .. The score of the labeled sequence is defined as the sum of the probabilities of individual labels and the transition probabilities:",
"cite_spans": [
{
"start": 546,
"end": 569,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "s(y 1:n ) = n i=1 u i (y i ) + n i=2 T [y i\u22121 , y i ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "where the matrix T that contains the transition probabilities between one label and another (i.e., T [i, j] represents the probability that a token labeled i is followed by a token labeled j). At prediction time, the most likely sequence is chosen with the Viterbi algorithm (Viterbi, 1967) .",
"cite_spans": [
{
"start": 101,
"end": 107,
"text": "[i, j]",
"ref_id": null
},
{
"start": 275,
"end": 290,
"text": "(Viterbi, 1967)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "With these components, we can now put together the actual models which we use for stimulus detection. We compare three different models, one for token sequence labeling (SL) and two for clause classification (CC). The model architectures are illustrated in Figure 4 . Token Sequence Labeling (SL). In this model, we formulate emotion stimulus detection as token sequence labeling with the IOB alphabet (Ramshaw and Marcus, 1995) . As embeddings, we use wordlevel GloVe embeddings (Pennington et al., 2014) . The sequence-to-sequence architecture comprises a bidirectional LSTM, an attention layer and the CRF output layer. Independent Clause Classification (ICC). This model, similarly proposed by Cheng et al. (2017) , takes the clauses from the clause detector (or from annotated data) and classifies them as containing the stimulus or not. The model has a similar architecture to the one before, with the exception of the final classifier, which is a single softmax to output a single label. The training objective is to minimize the cross-entropy loss. This model does not have access to clauses other than the one it predicts for. Joint Clause Classification (JCC). In this model, the neural architecture we employ is slightly different from before to enable it to make a prediction for clauses in the context of all clauses. It comprises multiple LSTM modules as word-level encoders, one for each clause. The LSTM at the wordlevel encodes the tokens of one clause into one representation. The next layer is a clause-level encoder based on two bidirectional LSTMs, where the clause representations are learned and updated by integrating the relations between multiple clauses. After we obtain the final clause representation for each clause, we perform sequence labeling with a CRF on the clause level. The training objective is to minimize the negative log-likelihood loss across all clauses. 
This implementation follows the architecture by , with the change of the upper layer, which is, in our case, an LSTM clause encoder and not a transformer, to keep the architecture comparable across our different formulations. Therefore, this is comparable to all other hierarchical models proposed for the task .",
"cite_spans": [
{
"start": 402,
"end": 428,
"text": "(Ramshaw and Marcus, 1995)",
"ref_id": "BIBREF34"
},
{
"start": 480,
"end": 505,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 698,
"end": 717,
"text": "Cheng et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 257,
"end": 265,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "crf attention LSTM embedding l 1 , . . . , l n t 1 , . . . , t n l i \u2208 {I, O, B} softmax attention LSTM embedding SL ICC y i \u2208 {yes, no} c i crf attention LSTM embedding c 1 , . . . , c m word encoder LSTM clause encoder JCC y 1 , . . . , y m y i \u2208 {yes, no}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Detection",
"sec_num": "2.2"
},
{
"text": "The last component of our integrated framework maps the different representations of each formulation of emotion stimulus detection between each other, namely clause classifications to token sequence labeling and vice versa. We obtain clause classifications from token label sequences (T \u2192 C in Figure 2 ) by accepting any clause that has at least one token being labeled as B or I as a stimulus clause. The other way around, clause classes are mapped to tokens (C \u2192 T ) in such a way that the first token of a stimulus clause is a B and all the remaining tokens in the respective clause are I. Tokens from clauses that do not correspond to a stimulus all receive O labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mapping between Task Formulations",
"sec_num": "2.3"
},
{
"text": "We now put the models to use to understand the differences between sequence labeling and clause classification for English emotion stimulus detection and the suitability of clauses as the unit of analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "We base our experiments on four data sets. 1 For each data set, we report the size, the number of stimulus annotations and statistics for tokens and clauses in Table 1 . EmotionStimulus. This data set proposed by Ghazi et al. (2015) is constructed based on FrameNet's emotion-directed frame. 2 The authors used FrameNet's annotated data for 173 emotion lexical units, grouped the lexical units into seven basic emotions using their synonyms and built a dataset manually annotated with both the emotion stimulus and the emotion. The corpus consists of 820 sentences with annotations of emotion categories and stimuli. The rest of 1,594 sentences only contain an emotion label. For this dataset, we see the lowest average number of clauses for which all tokens correspond to a stimulus (\u00b5 w. all S/I in Table 1 ). This result shows that the stimuli annotations rarely align with the clause boundaries.",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "Ghazi et al. (2015)",
"ref_id": "BIBREF14"
},
{
"start": 292,
"end": 293,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 801,
"end": 808,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "3.1"
},
{
"text": "ElectoralTweets. Frame Semantics also inspires a dataset of social media posts (Mohammad et al., 2014) . The corpus consists of 4,056 tweets of which 2,427 contain emotion stimulus annotations on the token level. The annotation was performed via crowdsourcing. The tweets are the shortest instance type in length and have a higher average of clauses per instance than the GoodNewsEveryone or the EmotionStimulus datasets. They also show the same mean of stimulus tokens per instance as EmotionCauseAnalysis with a slightly higher mean for the number of clauses in which all tokens correspond to stimulus annotations. GoodNewsEveryone. The data set by Bostan et al.",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "(Mohammad et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "3.1"
},
{
"text": "(2020) consists of news headlines. From a total of 5000 instances, 4,798 contain a stimulus. The headlines have the shortest stimuli in token count. Similar to the ElectoralTweets, they also have a high average stimulus token density in clauses. This set has the lowest mean number of clauses per instance (\u00b5 I in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "3.1"
},
{
"text": "EmotionCauseAnalysis (Gao et al., 2017) comparably annotate English and Mandarin texts on the clause level and the token level. In our work, we use the English subset, which is the only English corpus annotated for stimuli both at the clause level and at the token level. This dataset has the fewest instances without stimuli among all the others. It also has the longest instances and stimuli. The mean of stimuli tokens annotated per clause is comparable to EmotionStimulus despite having a higher mean of stimuli tokens per instance. In the upcoming experiments, we use the clause annotations and not automatically recognized clauses with Algorithm 1 as input to our framework.",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "(Gao et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "3.1"
},
{
"text": "Before turning to the actual evaluation of the emotion stimulus detection methods, we evaluate the quality of the automatic clause detection. For an intrinsic evaluation, we annotate 50 instances from each test corpus in each data set with two annotators trained on the clause extraction task in two iterations. The two annotators are graduate students and have different scientific backgrounds: computational linguistics (A1) and computer science with a specialization in computer vision (A2). Each student annotated 50 instances of each dataset from the datasets we use in the same order. As an environment for the annotation process, we used a simple spreadsheet application. We did this small annotation experiment as an inner check for our understanding of the clause extraction task. None of the annotators is a native English speaker; A1 is a native speaker of a Romance language, and A2 a German speaker. The inter-annotator agreement is shown in Table 2 . We achieve an acceptable average agreement of \u03ba=.65.",
"cite_spans": [],
"ref_spans": [
{
"start": 955,
"end": 962,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Clause Identification Evaluation",
"sec_num": "3.2"
},
{
"text": "We now turn to the question if annotated clauses (as an upper bound to an automatic system) align well with annotated stimuli (Stimuli vs. Anno. Clauses in Table 2 ). The evaluation is based on recall (i.e., measuring for how many stimuli a clause exists), either for the whole stimulus (exact), or for the left or the right boundary. We see that except for the corpus EmotionStimulus, the right boundaries match better than the left.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Clause Identification Evaluation",
"sec_num": "3.2"
},
{
"text": "Turning to extracted clauses instead of annotated ones (Extra. vs. Anno. Clauses) we first evaluate the automatic extraction algorithm. We obtain F 1 values between 0.76% and 0.80%, which we consider acceptable though they also show that error propagation could occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clause Identification Evaluation",
"sec_num": "3.2"
},
{
"text": "For the actual extrinsic evaluation, if clause boundaries are correctly found for annotated stimuli (Stimuli vs. Extra. Clauses), we see that the results are only slightly lower than for the gold annotations, except for EmotionStimulus. Therefore, we do not expect to see error propagation due to an imperfect extraction algorithm for most data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clause Identification Evaluation",
"sec_num": "3.2"
},
{
"text": "These results suggest that clauses are not an appropriate unit for stimuli in English. Still, we do not know yet if the clause detection task's simplicity outweighs these disadvantages in contrast to token sequence labeling. We turn to answer this in the following. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clause Identification Evaluation",
"sec_num": "3.2"
},
{
"text": "We evaluate the quality of all models with five different measures. Motivated by the formulation of clause classification, we (1) evaluate the prediction on the clause level with precision, recall, and F 1 . For the sequence labeling evaluation, we use four variations. (2) Exact, where we consider a consecutive token sequence to be correct if a gold annotation exists that exactly matches, (3) Relaxed, where an overlap of one token with a gold annotation is sufficient, (4) Left-Exact and (5) Right-Exact, where at least the most left/right token in the prediction needs to have a gold-annotated counterpart. One might argue that sequence labeling evaluation is unfair for the clause classification, as it is more fine-grained than the actual prediction method. However, for transparency across methods and analysis of advantages and disadvantages of the different methods, we use this approach in addition to clause classification evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Procedure",
"sec_num": "3.3.1"
},
{
"text": "We split the data for each set randomly into three sets: 80% train, 10% dev, and 10% test. We use dropout with a probability of 0.5, train with Adam (Kingma and Ba, 2015) with a base learning rate of 0.003, and a batch size of 10. At test time, we select the model with the best validation accuracy after 50 epochs with a patience of 10 epochs. All models use embedding sizes of 300 and hidden state sizes of 100 (Pennington et al., 2014) . We do not tune hyperparameters for any of the architectures and implement all models with the AllenNLP library (Gardner et al., 2018) .",
"cite_spans": [
{
"start": 413,
"end": 438,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 552,
"end": 574,
"text": "(Gardner et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Procedure",
"sec_num": "3.3.1"
},
{
"text": "We now study the performance of the different models on the English data sets. Figure 5 summarizes the results. (Precision and recall values are available in Table 7 in Appendices.) Which of the modeling approaches performs best on English data? If we only compare the absolute numbers in F 1 , we see that the clause classification evaluation (Class) shows the highest result across all models and data set. The only exception is the EmotionStimulus data, in which the Left-Exact evaluation is slightly higher. When we rely on this evaluation score, we see that the token sequence labeling method shows a superior result to the classification methods in two data sets, namely GoodNewsEveryone and EmotionCauseAnalysis. On ElectoralTweets and EmotionStimulus, the re- Early stop 0 4 1 3 0 6 2 7 0 5 1 4 33 Late stop 11 9 10 8 19 30 7 25 17 31 22 202 Early start & stop 0 3 0 1 9 11 5 1 6 10 3 2 51 Early start 152 16 0 6 192 73 9 164 220 58 3 159 1052 Late start 28 3 0 1 3 8 1 0 2 7 1 0 54 Late start & stop 2 1 0 0 0 2 0 1 0 1 0 1 8 Contained 0 0 0 0 0 0 0 1 0 0 0 2 3 Multiple 143 189 11 260 sults are en par across all methods with this evaluation measure. We find this surprising to some degree, as this evaluation is more natural for the classification tasks (ICC and JCC) than for sequence labeling (SL), which requires the mapping step.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 158,
"end": 165,
"text": "Table 7",
"ref_id": null
},
{
"start": 768,
"end": 1186,
"text": "Early stop 0 4 1 3 0 6 2 7 0 5 1 4 33 Late stop 11 9 10 8 19 30 7 25 17 31 22 202 Early start & stop 0 3 0 1 9 11 5 1 6 10 3 2 51 Early start 152 16 0 6 192 73 9 164 220 58 3 159 1052 Late start 28 3 0 1 3 8 1 0 2 7 1 0 54 Late start & stop 2 1 0 0 0 2 0 1 0 1 0 1 8 Contained 0 0 0 0 0 0 0 1 0 0 0 2 3 Multiple 143 189 11 260",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.2"
},
{
"text": "As this suggests that clauses are not the appropriate unit, it is worth comparing these results with the Exact evaluation measure, which evaluates on the token-sequence level. We observe that token sequence labeling outperforms both clause classification methods on three of the four data sets, with ElectoralTweets being the only exception with the shortest textual instances and the highest number of clauses in which all tokens correspond to stimulus annotation (see Table 1 ). Therefore, we conclude that token sequence labeling is superior to clause classification on (most of our) English data sets. Do clause classification models perform better on the left or the right side of the stimulus clause? Given the evaluation of the clause detection, we expect the right boundary to be better found for GoodNewsEveryone and Emotion-CauseAnalysis and the left boundary for Emotion-Stimulus. Surprisingly, this is not entirely truethe right boundary is found with higher F 1 on all data sets, not only on those where the clauses are better aligned with the stimulus' right boundary. Nevertheless, the effect is more reliable for Good-NewsEveryone, as expected. Does token sequence labeling perform better on the left or the right side of the stimulus clause?",
"cite_spans": [],
"ref_spans": [
{
"start": 470,
"end": 477,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.2"
},
{
"text": "We can ask this similar question for token sequence labeling, though it might be harder to motivate than in the classification setting. Non-surprisingly, such a clear pattern cannot be observed. For Elec-toralTweets and EmotionCauseAnalysis, the difference between the left and right match is minimal. For GoodNewsEveryone, it can be observed to a lesser extent than for the classification approaches, and for EmotionStimulus, the left boundary is better found than the right boundary. It seems that for the longer sequences in EmotionStimulus and Emotion-CauseAnalysis, the beginning of the stimulus span is easier to find than for shorter sequences. Is joint prediction of clause labels beneficial? This hypothesis can be confirmed; however, the differences are of a different magnitude depending on the data set. For GoodNewsEveryone, the effect is more substantial than for the other corpora. ElectoralTweets shows the smallest difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.2"
},
{
"text": "In the following, we analyze the error types made by the different models on all data sets and investigate in which ways SL improves over the ICC and JCC models. We hypothesize that the higher flexibility of token-based sequence labeling leads to different types of errors than the clause-based classification models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4"
},
{
"text": "For quantitative analysis, we define different error types, illustrated in Table 3 with different symbols as abbreviations. The top bar illustrates the gold span, while the bottom corresponds to the predicted span. The error types illustrated with symbols and correspond to false positives; are false negatives. All other error types correspond to either both false positive and false negative in a strict evaluation setting or true positives in one of the relaxed evaluation settings. Do ICC and JCC particularly miss starting or end points of the stimulus annotation? We see in Table 3 that for Late stop , CC models make considerably more mistakes across all datasets. ICC does so on ET and ECA, while JCC makes more mistakes on GNE and ES. For data sets in which stimulus annotations end with a clause, errors of this type are less likely. These results are more prominent for Early start & stop . Do all methods have similar issues with finding the whole consecutive stimulus? We see this in the error type Multiple . When the CC models make this mistake, it can be attributed to the automatic fine-grained clause extraction, which can cause a small clause within a gold span to become a false negative. However, we see that SL shows higher numbers of this issue than CC. This result is also reflected in the surprisingly low number of Contained ( ) -if the prediction is completely inside a gold annotation, the gold annotation tends to be long, and this increases the chance that it is (wrongly) split into multiple predictions. How do the error types differ across models? The Early Start (& Stop) and Surrounded ( , , ) counts show differences across the different types of models. Presumably, the clause classification models do have difficulties in finding the left boundary, and they are more prone to \"start early\" than the token sequence labeling models. This might be due to gold spans starting in the middle of a clause which is predicted to contain the stimulus. 
How do the error types differ across data sets? The results and error types differ across data sets (see particularly , , ). This points out what we have seen in the evaluation already: The structure of a stimulus depends on the domain and annotation. The least challenging data set is EmotionStimulus with the lowest numbers of errors across all models. This result is caused by most sentences having similar syntactic trees, all stimuli are explicit and mostly introduced in a similar way. For qualitative analyses, Figure 6 shows one example of each type of error described above. In the first example, the JCC model does not learn to include the second part of the coordination -\"and the pain\". In the second example, similarly, the SL model misses the right part of the coordination. For most cases of independent clauses that we inspect, we see a common pattern for both types of models, which is that the prediction stops while encountering coordinating conjunctions. In the sixth example, the prediction span includes the emotion cue. This issue could be solved by doing sequence labeling instead or by informing the model of the presence of other semantic roles. These examples raise the following question: would improved clause segmentation lead to improvements for the clause-classification models across all data sets?",
"cite_spans": [
{
"start": 1597,
"end": 1605,
"text": "(& Stop)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 580,
"end": 587,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 2498,
"end": 2506,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4"
},
{
"text": "The task of detecting the stimulus of an expressed emotion in text received relatively little attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Next to the corpora we mentioned so far, the REMAN corpus (Kim and Klinger, 2018) consists of English excerpts from literature, sampled from Project Gutenberg. The authors consider triples of sentences as a trade-of between longer passages and sentences. Further, Neviarouskaya and Aono (2013) annotated English sentences on the token level.",
"cite_spans": [
{
"start": 58,
"end": 81,
"text": "(Kim and Klinger, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 264,
"end": 293,
"text": "Neviarouskaya and Aono (2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Besides English and Mandarin, Russo et al. (2011) developed a method for the identification of Italian sentences that contain an emotion cause phrase. Yada et al. (2017) annotate Japanese sentences on newspaper articles, web news articles, and Q&A sites. Table 8 in Appendices shows which corpora and methods have been used and compared in previous work for the available English and Chinese sets. We see that the methods applied on the Chinese sets are not evaluated on the English sets. firstly investigated the interactions between emotions and the corresponding stimuli from a linguistic perspective. They publish a list of linguistic cues that help in identifying emotion stimuli and develop a rule-based approach. Chen et al. (2010) build on top of their work to develop a machine learning method. Li and Xu (2014) implement a rule-based system to detect the stimuli in Weibo posts and further inform an emotion classifier with the output of this system. Other approaches to develop rules include manual strategies (Gao et al., 2015) , bootstrapping (Yada et al., 2017) and the use of constituency and dependency parsing (Neviarouskaya and Aono, 2013) .",
"cite_spans": [
{
"start": 30,
"end": 49,
"text": "Russo et al. (2011)",
"ref_id": "BIBREF35"
},
{
"start": 151,
"end": 169,
"text": "Yada et al. (2017)",
"ref_id": "BIBREF44"
},
{
"start": 720,
"end": 738,
"text": "Chen et al. (2010)",
"ref_id": "BIBREF3"
},
{
"start": 804,
"end": 820,
"text": "Li and Xu (2014)",
"ref_id": "BIBREF25"
},
{
"start": 1021,
"end": 1039,
"text": "(Gao et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 1056,
"end": 1075,
"text": "(Yada et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 1127,
"end": 1157,
"text": "(Neviarouskaya and Aono, 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "All recently published state-of-the-art methods for the task of emotion stimulus detection via clause classification are evaluated on the Mandarin data by Gui et al. (2016) . They include multi-kernel learning (Gui et al., 2016) and long short-term memory networks (LSTM) (Cheng et al., 2017) . propose a convolutional multipleslot deep memory network (ConvMS-Memnet), and Li et al. (2018) a co-attention neural network model, which encodes the clauses with a coattention based bi-directional long short-term memory into high-level input representations, which are further passed into a convolutional layer. proposed an architecture with components for \"position augmented embedding\" and \"dynamic global label\" which takes the relative position of the stimuli to the emotion keywords and use the predictions of previous clauses as features for predicting subsequent clauses. integrate the relative position of stimuli and evaluate a transformer-based model that classifies all clauses jointly within a text. Similarly, Yu et al. (2019) proposes a word-phrase-clause hierarchical network. The transformer-based model achieves state of the art, however, it is shown that the RNN based encoders are very close in performance . Therefore, we use a comparable model that is grounded on the same concept of a hierarchical setup with LSTMs as encoders. Further, there is a strand of research which jointly predicts the clause that contains the emotion stimulus together with its emotion cue (Wei et al., 2020; Fan et al., 2020) . However, the comparability of methods across data sets has been limited in previous work, as Table 8 in the appendices shows.",
"cite_spans": [
{
"start": 155,
"end": 172,
"text": "Gui et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 210,
"end": 228,
"text": "(Gui et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 272,
"end": 292,
"text": "(Cheng et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 1019,
"end": 1035,
"text": "Yu et al. (2019)",
"ref_id": "BIBREF46"
},
{
"start": 1484,
"end": 1502,
"text": "(Wei et al., 2020;",
"ref_id": "BIBREF39"
},
{
"start": 1503,
"end": 1520,
"text": "Fan et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1616,
"end": 1623,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We contributed to emotion stimulus detection in two ways. Firstly, we evaluated emotion stimulus detection across several English annotated data sets. Secondly, we analyzed if the current standard formulation for stimulus detection on Mandarin Chinese is also a good choice for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We find that the domain and annotation of the data sets have a large impact on the performance. The worst performance of the token sequence labeling approach is obtained on the crowdsourced data set ElectoralTweets. The well-formed sentences of EmotionStimulus pose fewer difficulties to our models than tweets and headlines. We see that the sequence labeling approaches are more appropriate for the phenomenon of stimulus mentions in English. This shows in the evaluation of the comparably coarse-grained clause level and is also backed by our error analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For future work, we propose closer investigation of whether other smaller constituents might represent the stimulus better for English and a check of whether the strong results for the sequence labeling hold for other languages. Notably, the clause classification setup has its benefits, and this might lead to a promising setting as joint modeling or as a filtering step to finding parts of the text which might contain a stimulus mention. Another step is to investigate if the emotion stimulus and the emotion category classification benefit from joint modeling in English as it has been shown for Mandarin (Chen et al., 2018) .",
"cite_spans": [
{
"start": 609,
"end": 628,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Corpora which we do not consider for our experiments are discussed in the related work section.2 https://framenet2.icsi.berkeley.edu/fnReports/data/ frameIndex.xml?frame=Emotion directed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been conducted within the project SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1), funded by the German Research Council (DFG). We thank Enrica Troiano, Evgeny Kim, Gabriella Lapesa, and Sean Papay for fruitful discussions and feedback on earlier versions of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "SL Figure 8 : Mapping of previous state-of-the-art methods to data sets. + indicates that we are aware of a publication which reports on the method being evaluated on the respective data set and a \u2212 indicates our assumption that no reported results exist with the respective method being evaluated on the respective data set. ET corresponds to ElectoralTweets, ES to EmotionStimulus, GNE to GoodNewsEveryone, whereas the other data set are as being introduced above.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bracketing guidelines for Treebank II style Penn Treebank project",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mac-Intyre",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Tredinnick",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Britta",
"middle": [],
"last": "Schasberger",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Mark Ferguson, Karen Katz, Robert Mac- Intyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing guidelines for Treebank II style Penn Treebank project. Online: http://languagelog.ldc. upenn.edu/myl/PennTreebank1995.pdf.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "GoodNewsEveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception",
"authors": [
{
"first": "Laura",
"middle": [
"Ana"
],
"last": "",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Bostan",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1554--1566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Ana Maria Bostan, Evgeny Kim, and Roman Klinger. 2020. GoodNewsEveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception. In Proceedings of The 12th Language Resources and Evaluation Con- ference, pages 1554-1566, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Joint learning for emotion classification and emotion cause detection",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wenjun",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Xiyao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "646--651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Wenjun Hou, Xiyao Cheng, and Shoushan Li. 2018. Joint learning for emotion classification and emotion cause detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 646-651, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emotion cause detection with linguistic constructions",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sophia Yat Mei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "179--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Sophia Yat Mei Lee, Shoushan Li, and Chu- Ren Huang. 2010. Emotion cause detection with linguistic constructions. In Proceedings of the 23rd International Conference on Computational Linguis- tics (Coling 2010), pages 179-187, Beijing, China. Coling 2010 Organizing Committee.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An emotion cause corpus for chinese microblogs with multiple-user structures",
"authors": [
{
"first": "Xiyao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Bixiao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing",
"volume": "17",
"issue": "1",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiyao Cheng, Ying Chen, Bixiao Cheng, Shoushan Li, and Guodong Zhou. 2017. An emotion cause corpus for chinese microblogs with multiple-user structures. ACM Transactions on Asian and Low-Resource Lan- guage Information Processing, 17(1):6:1-6:19.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "From independent prediction to reordered prediction: Integrating relative position and global label information to emotion cause identification",
"authors": [
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Huihui",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mengran",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "6343--6350",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016343"
]
},
"num": null,
"urls": [],
"raw_text": "Zixiang Ding, Huihui He, Mengran Zhang, and Rui Xia. 2019. From independent prediction to re- ordered prediction: Integrating relative position and global label information to emotion cause identifi- cation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6343-6350.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An argument for basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition & emotion",
"volume": "6",
"issue": "3-4",
"pages": "169--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A knowledge regularized hierarchical approach for emotion cause analysis",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ruibin",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5614--5624",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1563"
]
},
"num": null,
"urls": [],
"raw_text": "Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Li- dong Bing, Min Yang, Ruifeng Xu, and Ruibin Mao. 2019. A knowledge regularized hierarchical approach for emotion cause analysis. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5614-5624, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Transition-based directed graph construction for emotion-cause pair extraction",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Chaofa",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3707--3717",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuang Fan, Chaofa Yuan, Jiachen Du, Lin Gui, Min Yang, and Ruifeng Xu. 2020. Transition-based di- rected graph construction for emotion-cause pair ex- traction. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 3707-3717, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Characterizing stylistic elements in syntactic structure",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ritwik",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1522--1533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song Feng, Ritwik Banerjee, and Yejin Choi. 2012. Characterizing stylistic elements in syntactic struc- ture. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 1522-1533, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Beyond the stars: Improving rating predictions using review text content",
"authors": [
{
"first": "Gayatree",
"middle": [],
"last": "Ganu",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Am\u00e9lie",
"middle": [],
"last": "Marian",
"suffix": ""
}
],
"year": 2009,
"venue": "Twelfth International Workshop on the Web and Databases",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gayatree Ganu, Noemie Elhadad, and Am\u00e9lie Marian. 2009. Beyond the stars: Improving rating predic- tions using review text content. In Twelfth Interna- tional Workshop on the Web and Databases (WebDB 2009).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A rulebased approach to emotion cause detection for chinese micro-blogs",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiushuo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Expert Systems with Applications",
"volume": "42",
"issue": "9",
"pages": "4517--4528",
"other_ids": {
"DOI": [
"10.1016/j.eswa.2015.01.064"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Gao, Hua Xu, and Jiushuo Wang. 2015. A rule- based approach to emotion cause detection for chi- nese micro-blogs. Expert Systems with Applications, 42(9):4517-4528.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of NTCIR-13 ECA task",
"authors": [
{
"first": "Qinghong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jiannan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Gui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies",
"volume": "",
"issue": "",
"pages": "361--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qinghong Gao, Jiannan Hu, Ruifeng Xu, Gui Lin, Yulan He, Qin Lu, and Kam-Fai Wong. 2017. Overview of NTCIR-13 ECA task. In Proceed- ings of the 13th NTCIR Conference on Evaluation of Information Access Technologies, pages 361-366, Tokyo, Japan.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "AllenNLP: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2501"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- cessing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1- 6, Melbourne, Australia. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting emotion stimuli in emotion-bearing sentences",
"authors": [
{
"first": "Diman",
"middle": [],
"last": "Ghazi",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "152--165",
"other_ids": {
"DOI": [
"10.1007/978-3-319-18117-2_12"
]
},
"num": null,
"urls": [],
"raw_text": "Diman Ghazi, Diana Inkpen, and Stan Szpakowicz. 2015. Detecting emotion stimuli in emotion-bearing sentences. In International Conference on Intelli- gent Text Processing and Computational Linguistics, pages 152-165. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A question answering approach for emotion cause extraction",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Jiannan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1593--1602",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1167"
]
},
"num": null,
"urls": [],
"raw_text": "Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering ap- proach for emotion cause extraction. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1593-1602, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Event-driven emotion cause extraction with corpus construction",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Dongyin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1639--1649",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1170"
]
},
"num": null,
"urls": [],
"raw_text": "Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-driven emotion cause extrac- tion with corpus construction. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1639-1649, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extracting opinion targets in a single and cross-domain setting with conditional random fields",
"authors": [
{
"first": "Niklas",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1035--1045",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single and cross-domain setting with conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Nat- ural Language Processing, pages 1035-1045. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Who feels what and why? annotation of a literature corpus with semantic roles of emotions",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Kim and Roman Klinger. 2018. Who feels what and why? annotation of a literature corpus with semantic roles of emotions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1345-1359. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2676--2686",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1249"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bidirectional inter-dependencies of subjective expressions and targets and their value for a joint model",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "848--854",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Klinger and Philipp Cimiano. 2013. Bi- directional inter-dependencies of subjective expres- sions and targets and their value for a joint model. In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 848-854, Sofia, Bulgaria. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Analyzing grammar: An introduction",
"authors": [
{
"first": "Paul",
"middle": [
"R"
],
"last": "Kroeger",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul R. Kroeger. 2005. Analyzing grammar: An intro- duction. Cambridge University Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In International Conference on Ma- chine Learning, pages 282-289.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Emotion cause events: Corpus construction and analysis",
"authors": [
{
"first": "Sophia Yat Mei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "1121--1128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sophia Yat Mei Lee, Ying Chen, Shoushan Li, and Chu-Ren Huang. 2010. Emotion cause events: Corpus construction and analysis. In Proceedings of the Seventh International Conference on Lan- guage Resources and Evaluation (LREC'10), pages 1121-1128, Valletta, Malta. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Text-based emotion classification using emotion cause extraction",
"authors": [
{
"first": "Weiyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2014,
"venue": "Expert Systems with Applications",
"volume": "41",
"issue": "",
"pages": "1742--1749",
"other_ids": {
"DOI": [
"10.1016/j.eswa.2013.08.073"
]
},
"num": null,
"urls": [],
"raw_text": "Weiyuan Li and Hua Xu. 2014. Text-based emotion classification using emotion cause extraction. Ex- pert Systems with Applications, 41(4):1742-1749.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A co-attention neural network model for emotion cause analysis with emotional context awareness",
"authors": [
{
"first": "Xiangju",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kaisong",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yifei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4752--4757",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1506"
]
},
"num": null,
"urls": [],
"raw_text": "Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, and Yifei Zhang. 2018. A co-attention neural network model for emotion cause analysis with emotional context awareness. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 4752-4757, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Semantic role labeling of emotions in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "32--41",
"other_ids": {
"DOI": [
"10.3115/v1/W14-2607"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Xiaodan Zhu, and Joel Martin. 2014. Semantic role labeling of emotions in tweets. In Pro- ceedings of the 5th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 32-41, Baltimore, Maryland. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Extracting causes of emotions from text",
"authors": [
{
"first": "Alena",
"middle": [],
"last": "Neviarouskaya",
"suffix": ""
},
{
"first": "Masaki",
"middle": [],
"last": "Aono",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "932--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alena Neviarouskaya and Masaki Aono. 2013. Extract- ing causes of emotions from text. In Proceedings of the Sixth International Joint Conference on Nat- ural Language Processing, pages 932-936, Nagoya, Japan. Asian Federation of Natural Language Pro- cessing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 2001,
"venue": "American Scientist",
"volume": "89",
"issue": "4",
"pages": "344--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 2001. The nature of emotions hu- man emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist, 89(4):344- 350.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SemEval-2016 task 5: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "AL-Smadi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Orph\u00e9e",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Tannier",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Loukachevitch",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
},
{
"first": "Nuria",
"middle": [],
"last": "Bel",
"suffix": ""
},
{
"first": "Salud",
"middle": [
"Mar\u00eda"
],
"last": "Jim\u00e9nez-Zafra",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryigit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "19--30",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. SemEval-2016 task 5: Aspect based senti- ment analysis. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 19-30, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "SemEval-2015 task 12: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "486--495",
"other_ids": {
"DOI": [
"10.18653/v1/S15-2082"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The importance of syntactic parsing and inference in semantic role labeling",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "2",
"pages": "257--287",
"other_ids": {
"DOI": [
"10.1162/coli.2008.34.2.257"
]
},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitch",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Third Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "82--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lance Ramshaw and Mitch Marcus. 1995. Text chunk- ing using transformation-based learning. In Third Workshop on Very Large Corpora, pages 82-94.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "EMOCause: An easy-adaptable approach to extract emotion cause contexts",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Russo",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Ester",
"middle": [],
"last": "Boldrini",
"suffix": ""
},
{
"first": "Patricio",
"middle": [],
"last": "Mart\u00ednez-Barco",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2011)",
"volume": "",
"issue": "",
"pages": "153--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Russo, Tommaso Caselli, Francesco Rubino, Es- ter Boldrini, and Patricio Mart\u00ednez-Barco. 2011. EMOCause: An easy-adaptable approach to extract emotion cause contexts. In Proceedings of the 2nd Workshop on Computational Approaches to Subjec- tivity and Sentiment Analysis (WASSA 2.011), pages 153-160, Portland, Oregon. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "What are emotions? And how can they be measured?",
"authors": [
{
"first": "Klaus",
"middle": [
"R"
],
"last": "Scherer",
"suffix": ""
}
],
"year": 2005,
"venue": "Social Science Information",
"volume": "44",
"issue": "4",
"pages": "695--729",
"other_ids": {
"DOI": [
"10.1177/0539018405058216"
]
},
"num": null,
"urls": [],
"raw_text": "Klaus R. Scherer. 2005. What are emotions? And how can they be measured? Social Science Information, 44(4):695-729.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Sentence and clause level emotion annotation, detection, and classification in a multi-genre corpus",
"authors": [
{
"first": "Shabnam",
"middle": [],
"last": "Tafreshi",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "1246--1251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shabnam Tafreshi and Mona Diab. 2018. Sentence and clause level emotion annotation, detection, and classification in a multi-genre corpus. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 1246-1251, Miyazaki, Japan. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Transactions on Information Theory",
"volume": "13",
"issue": "2",
"pages": "260--269",
"other_ids": {
"DOI": [
"10.1109/TIT.1967.1054010"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew J. Viterbi. 1967. Error bounds for convolu- tional codes and an asymptotically optimum decod- ing algorithm. IEEE Transactions on Information Theory, 13(2):260-269.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Effective inter-clause modeling for end-to-end emotion-cause pair extraction",
"authors": [
{
"first": "Penghui",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jiahao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenji",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3171--3181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penghui Wei, Jiahao Zhao, and Wenji Mao. 2020. Effective inter-clause modeling for end-to-end emotion-cause pair extraction. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 3171-3181, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Emotion-cause pair extraction: A new task to emotion analysis in texts",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1003--1012",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1096"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Xia and Zixiang Ding. 2019. Emotion-cause pair extraction: A new task to emotion analysis in texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1003-1012, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "RTHN: A RNN-Transformer Hierarchical Network for Emotion Cause Extraction",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Mengran",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zixiang",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)",
"volume": "",
"issue": "",
"pages": "5285--5291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Xia, Mengran Zhang, and Zixiang Ding. 2019. RTHN: A RNN-Transformer Hierarchical Network for Emotion Cause Extraction. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), pages 5285-5291, Macao, China. International Joint Conferences on Artificial Intelligence.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Extracting emotion causes using learning to rank methods from an information retrieval perspective",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yufeng",
"middle": [],
"last": "Diao",
"suffix": ""
},
{
"first": "Lian",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kan",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "15573--15583",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2019.2894701"
]
},
"num": null,
"urls": [],
"raw_text": "Bo Xu, Hongfei Lin, Yuan Lin, Yufeng Diao, Lian Yang, and Kan Xu. 2019. Extracting emotion causes using learning to rank methods from an information retrieval perspective. IEEE Access, 7:15573-15583.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "An ensemble approach for emotion cause detection with event extraction and multi-kernel SVMs",
"authors": [
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiannan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dongyin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
}
],
"year": 2017,
"venue": "Tsinghua Science and Technology",
"volume": "22",
"issue": "6",
"pages": "646--659",
"other_ids": {
"DOI": [
"10.23919/TST.2017.8195347"
]
},
"num": null,
"urls": [],
"raw_text": "Ruifeng Xu, Jiannan Hu, Qin Lu, Dongyin Wu, and Lin Gui. 2017. An ensemble approach for emo- tion cause detection with event extraction and multi- kernel svms. Tsinghua Science and Technology, 22(6):646-659.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A bootstrap method for automatic rule acquisition on emotion cause extraction",
"authors": [
{
"first": "Shuntaro",
"middle": [],
"last": "Yada",
"suffix": ""
},
{
"first": "Kazushi",
"middle": [],
"last": "Ikeda",
"suffix": ""
},
{
"first": "Keiichiro",
"middle": [],
"last": "Hoashi",
"suffix": ""
},
{
"first": "Kyo",
"middle": [],
"last": "Kageura",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Data Mining Workshops (ICDMW)",
"volume": "",
"issue": "",
"pages": "414--421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuntaro Yada, Kazushi Ikeda, Keiichiro Hoashi, and Kyo Kageura. 2017. A bootstrap method for au- tomatic rule acquisition on emotion cause extrac- tion. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pages 414-421.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Joint inference for fine-grained opinion extraction",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1640--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640-1649, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Multiple level hierarchical network-based clause selection for emotion cause extraction",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wenge",
"middle": [],
"last": "Rong",
"suffix": ""
},
{
"first": "Zhuo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuanxin",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Zhang",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "9071--9079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Yu, Wenge Rong, Zhuo Zhang, Yuanxin Ouyang, and Zhang Xiong. 2019. Multiple level hierarchical network-based clause selection for emotion cause extraction. IEEE Access, 7:9071-9079.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Different formulations for emotion stimulus detection. Framework for emotion stimulus detection. Tokens t are split into clauses for clause classification. Mapping ensures that both methods result in token sequences with labels (t, l)i and clause classifications (c, y)j.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "gaps = {0, |a b c|} = {0, 3}. Go over nodes tagged S, SBAR, ... On node SBARQ (a b): Add idxl (0) to gaps, new gaps: {0, 3}; Add idxr + 1 (2) to gaps, new gaps: {0, 3, 2}. On node S (a b c): Add idxl (0) to gaps, new gaps: {0, 3, 2}; Add idxr + 1 (3) to gaps, new gaps: {0, 3, 2}. segments = \u2205. For each pair i, j in sorted gaps ({0, 2, 3}): i=0, j=2: Append tokens[0:2] (a b) to segments, new segments: [a b]; i=2, j=3: Append tokens[2:3] (c) to segments, new segments: [a b, c]. Return segments: [a b, c]. Figure 3: Example for the application of Algorithm 1.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Comparable model architectures.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Results of the three different models across four different datasets.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"type_str": "table",
"text": "Clause-based Classification: No Stimulus Stimulus [ She's pleased at ] [ how things have turned out . ] She 's pleased at how things have turned out .",
"content": "<table><tr><td/><td/><td colspan=\"3\">Token Sequence Labeling:</td><td/><td/></tr><tr><td>O O</td><td>O</td><td>O B</td><td>I</td><td>I</td><td>I</td><td>I O</td></tr></table>",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "Data sets available for the Emotion Stimulus Detection task in English. Size: number of annotated instances, Stimuli : number of instances with stimuli annotated; \u00b5, \u03c3: mean/standard deviation of length of stimuli in tokens; \u00b5S/I: mean number of stimulus tokens per instance; \u00b5S/C: mean number of stimulus tokens per clause; Total: total number of clauses, w. S: number of clauses that contain a stimulus; \u00b5 I: average number of clauses per instance; \u00b5 w. all S/I: average number of clauses in which all tokens correspond to annotated stimuli.",
"content": "<table/>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "Evaluation of Clause Detection. Note that for EmotionCauseAnalysis, the clauses stem from the annotation provided in the original data and not from our automatic detection method.",
"content": "<table/>",
"num": null
},
"TABREF8": {
"html": null,
"type_str": "table",
"text": "Counts for each error type for each model across all data sets.",
"content": "<table/>",
"num": null
},
"TABREF9": {
"html": null,
"type_str": "table",
"text": "Steve talked to me a lot about being abandoned and the pain that caused. JCC ECA No what I told about the way they treated you and me made him angry. SL ECA Fuck Mitt Romney and Fuck Barack Obama ... God got me !!!!! ICC ET Maurice Mitchell wants you to do more than vote. ICC GNE And he started to despair that his exploration was going to be entirely unsuccessful ... ICC ECA Deeply ashamed of my wayward notions , I tried my best to contradict myself. ICC ES Anyone else find it weird I get excited about stuff like the RNC tonight ?! # polisciprobs SL ET Doesn't he do it well said the girl following with admiring eyes, every movement of him. JCC ECA If he feared that some terrible secret might evaporate from them , it was a mania with him. SL ECA I was furious because the Mac XL wasn't real said Hoffman . SL ECA With such obvious delight in food, it 's hard to see how Blanc remains so slim. SL ES Triad Thugs Use Clubs to Punish Hong Kong ' s Protesters .",
"content": "<table><tr><td>Err Example</td><td colspan=\"2\">Model Data set</td></tr><tr><td/><td>JCC</td><td>GNE</td></tr><tr><td>I'm glad to see you so happy Lupin</td><td>ICC</td><td>ES</td></tr><tr><td colspan=\"3\">Figure 6: Examples for error types for different models and data sets. Extracted clauses are separated by .</td></tr></table>",
"num": null
}
}
}
}