{
"paper_id": "Q14-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:47.907593Z"
},
"title": "Senti-LSSVM: Sentiment-Oriented Multi-Relation Extraction with Latent Structural SVM",
"authors": [
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "yi.zhang@nuance.com"
},
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lili",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {},
"email": "ljiang@mpi-inf.mpg.de"
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": "",
"affiliation": {},
"email": "rgemulla@mpi-inf.mpg.de"
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": "",
"affiliation": {},
"email": "weikum@mpi-inf.mpg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Extracting instances of sentiment-oriented relations from user-generated web documents is important for online marketing analysis. Unlike previous work, we formulate this extraction task as a structured prediction problem and design the corresponding inference as an integer linear program. Our latent structural SVM based model can learn from training corpora that do not contain explicit annotations of sentiment-bearing expressions, and it can simultaneously recognize instances of both binary (polarity) and ternary (comparative) relations with regard to entity mentions of interest. The empirical evaluation shows that our approach significantly outperforms stateof-the-art systems across domains (cameras and movies) and across genres (reviews and forum posts). The gold standard corpus that we built will also be a valuable resource for the community.",
"pdf_parse": {
"paper_id": "Q14-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Extracting instances of sentiment-oriented relations from user-generated web documents is important for online marketing analysis. Unlike previous work, we formulate this extraction task as a structured prediction problem and design the corresponding inference as an integer linear program. Our latent structural SVM based model can learn from training corpora that do not contain explicit annotations of sentiment-bearing expressions, and it can simultaneously recognize instances of both binary (polarity) and ternary (comparative) relations with regard to entity mentions of interest. The empirical evaluation shows that our approach significantly outperforms stateof-the-art systems across domains (cameras and movies) and across genres (reviews and forum posts). The gold standard corpus that we built will also be a valuable resource for the community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment-oriented relation extraction (Choi et al., 2006) is concerned with recognizing sentiment polarities and comparative relations between entities from natural language text. Identifying such relations often requires syntactic and semantic analysis at both sentence and phrase level. Most prior work on sentiment analysis consider either i) subjective sentence detection (Yu and K\u00fcbler, 2011) , ii) polarity classification (Johansson and Moschitti, 2011; , or iii) comparative relation identification (Jindal and Liu, 2006; Ganapathibhotla and Liu, 2008) . In practice, however, differ-ent types of sentiment-oriented relations frequently coexist in documents. In particular, we found that more than 38% of the sentences in our test corpus contain more than one type of relations. The isolated analysis approach is inappropriate because i) it sacrifices acuracy by ignoring the intricate interplay among different types of relations; ii) it could lead to conflicting predictions such as estimating a relation candidate as both negative and comparative. Therefore, in this paper, we identify instances of both sentiment polarities and comparative relations for entities of interest simultaneously. We assume that all the mentions of entities and attributes are given, and entities are disambiguated. It is a widely used assumption when evaluating a module in a pipeline system that the outputs of preceding modules are error-free.",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "(Choi et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 377,
"end": 398,
"text": "(Yu and K\u00fcbler, 2011)",
"ref_id": "BIBREF38"
},
{
"start": 429,
"end": 460,
"text": "(Johansson and Moschitti, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 507,
"end": 529,
"text": "(Jindal and Liu, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 530,
"end": 560,
"text": "Ganapathibhotla and Liu, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, the only existing system capable of extracting both comparisons and sentiment polarities is a rule-based system proposed by Ding et al. (2009) . We argue that it is better to tackle the task by using a unified model with structured outputs. It allows us to consider a set of correlated relation instances jointly and characterize their interaction through a set of soft and hard constraints. For example, we can encode constraints to discourage an attribute to participate in a polarity relation and a comparative relation at the same time. As a result, the system extracts a set of correlated instances of sentiment-oriented relations from a given sentence. For example, with the sentence about the camera Canon 7D, \"The sensor is great, but the price is higher than Nikon D7000.\" the expected output is positive (Canon 7D, sensor) and preferred(Nikon D7000, Canon 7D, textitprice).",
"cite_spans": [
{
"start": 154,
"end": 172,
"text": "Ding et al. (2009)",
"ref_id": "BIBREF5"
},
{
"start": 844,
"end": 854,
"text": "(Canon 7D,",
"ref_id": null
},
{
"start": 855,
"end": 862,
"text": "sensor)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, constructing a fully annotated training corpus for this task is labor-intensive and requires strong linguistic background. We minimize this overhead by applying a simplified annotation scheme, in which annotators mark mentions of entities and attributes, disambiguate the entities, and label instances of relations for each sentence. Based on the new scheme, we have created a small Sentiment Relation Graph (SRG) corpus for the domains of cameras and movies, which significantly differs from the corpora used in prior work (Wei and Gulla, 2010; Kessler et al., 2010; Toprak et al., 2010; Hu and Liu, 2004) in the following ways: i) both sentiment polarities and comparative relations are annotated; ii) all mentioned entities are disambiguated; and iii) no subjective expressions are annotated, unless they are part of entity mentions.",
"cite_spans": [
{
"start": 533,
"end": 554,
"text": "(Wei and Gulla, 2010;",
"ref_id": "BIBREF31"
},
{
"start": 555,
"end": 576,
"text": "Kessler et al., 2010;",
"ref_id": "BIBREF14"
},
{
"start": 577,
"end": 597,
"text": "Toprak et al., 2010;",
"ref_id": "BIBREF29"
},
{
"start": 598,
"end": 615,
"text": "Hu and Liu, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The new annotation scheme raises a new challenge for learning algorithms in that they need to automatically find textual evidences for each annotated relation during training. For example, with the sentence \"I like the Rebel a little better, but that is another price jump\", simply assigning a sentimentbearing expression to the nearest relation candidate is insufficient, especially when the sentiment is not explicitly expressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose SENTI-LSSVM, a latent structural SVM based model for sentiment-oriented relation extraction. SENTI-LSSVM is applied to find the most likely set of the relation instances expressed in a given sentence, where the latent variables are used to assign the most appropriate textual evidences to the respective instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, the contributions of this paper are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose SENTI-LSSVM: the first unified statistical model with the capability of extracting instances of both binary and ternary sentimentoriented relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We design a task-specific integer linear programming (ILP) formulation for inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We construct a new SRG corpus as a valuable asset for the evaluation of sentiment relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct extensive experiments with online reviews and forum posts, showing that SENTI-LSSVM model can effectively learn from a training corpus without explicitly annotated subjective expressions and that its performance significantly outperforms state-of-the-art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are ample works on analyzing sentiment polarities and entity comparisons, but the majority of them studied the two tasks in isolation. Most prior approaches for fine-grained sentiment analysis focus on polarity classification. Supervised approaches on expression-level analysis require the annotation of sentiment-bearing expressions as training data (Jin et al., 2009; Choi and Cardie, 2010; Johansson and Moschitti, 2011; Yessenalina and Cardie, 2011; Wei and Gulla, 2010) . However, the corresponding annotation process is time-consuming. Although sentence-level annotations are easier to obtain, the analysis at this level cannot cope with sentences conveying relations of multiple types (McDonald et al., 2007; T\u00e4ckstr\u00f6m and McDonald, 2011; Socher et al., 2012) . Lexiconbased approaches require no training data (Ku et al., 2006; Kim and Hovy, 2006; Godbole et al., 2007; Ding et al., 2008; Popescu and Etzioni, 2005; Liu et al., 2005) but suffer from inferior performance Qu et al., 2012) . In contrast, our method requires no annotation of sentiment-bearing expressions for training and can predict both sentiment polarities and comparative relations.",
"cite_spans": [
{
"start": 357,
"end": 375,
"text": "(Jin et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 376,
"end": 398,
"text": "Choi and Cardie, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 399,
"end": 429,
"text": "Johansson and Moschitti, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 430,
"end": 459,
"text": "Yessenalina and Cardie, 2011;",
"ref_id": "BIBREF36"
},
{
"start": 460,
"end": 480,
"text": "Wei and Gulla, 2010)",
"ref_id": "BIBREF31"
},
{
"start": 698,
"end": 721,
"text": "(McDonald et al., 2007;",
"ref_id": "BIBREF20"
},
{
"start": 722,
"end": 751,
"text": "T\u00e4ckstr\u00f6m and McDonald, 2011;",
"ref_id": "BIBREF28"
},
{
"start": 752,
"end": 772,
"text": "Socher et al., 2012)",
"ref_id": "BIBREF25"
},
{
"start": 824,
"end": 841,
"text": "(Ku et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 842,
"end": 861,
"text": "Kim and Hovy, 2006;",
"ref_id": "BIBREF15"
},
{
"start": 862,
"end": 883,
"text": "Godbole et al., 2007;",
"ref_id": "BIBREF8"
},
{
"start": 884,
"end": 902,
"text": "Ding et al., 2008;",
"ref_id": "BIBREF4"
},
{
"start": 903,
"end": 929,
"text": "Popescu and Etzioni, 2005;",
"ref_id": "BIBREF22"
},
{
"start": 930,
"end": 947,
"text": "Liu et al., 2005)",
"ref_id": "BIBREF18"
},
{
"start": 985,
"end": 1001,
"text": "Qu et al., 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Sentiment-oriented comparative relations have been studied in the context of user-generated discourse (Jindal and Liu, 2006; Ganapathibhotla and Liu, 2008) . Approaches rely on linguistically motivated rules and assume the existence of independent keywords in sentences which indicate comparative relations. Therefore, these methods fall short of extracting comparative relations based on domain dependent information.",
"cite_spans": [
{
"start": 102,
"end": 124,
"text": "(Jindal and Liu, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 125,
"end": 155,
"text": "Ganapathibhotla and Liu, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Both Johansson and Moschitti (2011) and Wu et al. (2011) formulate fine-grained sentiment analysis as a learning problem with structured outputs. However, they focus only on polarity classification of expressions and require annotation of sentimentbearing expressions for training as well.",
"cite_spans": [
{
"start": 5,
"end": 35,
"text": "Johansson and Moschitti (2011)",
"ref_id": "BIBREF13"
},
{
"start": 40,
"end": 56,
"text": "Wu et al. (2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While ILP has been previously applied for inference in sentiment analysis (Choi and Cardie, 2009; Somasundaran and Wiebe, 2009; Wu et al., 2011) , our task requires a complete ILP reformulation due to 1) the absence of annotated sentiment expressions and 2) the constraints imposed by the joint extraction of both sentiment polarity and comparative relations.",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "(Choi and Cardie, 2009;",
"ref_id": "BIBREF0"
},
{
"start": 98,
"end": 127,
"text": "Somasundaran and Wiebe, 2009;",
"ref_id": "BIBREF26"
},
{
"start": 128,
"end": 144,
"text": "Wu et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section gives an overview of the whole system for extracting sentiment-oriented relation instances. Prior to presenting the system architecture, we introduce the essential concepts and the definitions of two kinds of directed hypergraphs as the representation of correlated relation instances extracted from sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "Entity. An entity is an abstract or concrete thing, which needs not be of material existence. An entity in this paper refers to either a product or a brand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concepts and Definitions",
"sec_num": "3.1"
},
{
"text": "Attribute. An attribute is an object closely associated with or belonging to an entity, such as the lens of digital camera.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concepts and Definitions",
"sec_num": "3.1"
},
{
"text": "A sentimentoriented relation is either a sentiment polarity or a comparative relation, defined on tuples of entities and attributes. A sentiment polarity relation conveys either a positive or a negative attitude towards entities or their attributes, whereas a comparative relation indicates the preference of one entity over the other entity w.r.t. an attribute. Relation Instance. An instance of sentiment polarity takes the form r(entity, attribute) with r \u2208 {positive, negative}, such as positive(Canon 7D, sensor). The polarity instances expressed in the form of unary relations, such as \"Nikon D7000 is excellent.\", are denoted as binary relations r(entity, whole), where the attribute whole indicates the entity as a whole. In contrast, an instance of comparative relation is in the form of preferred{entity, entity, attribute}, e.g. preferred(Canon 7D, Nikon D7000, price). For brevity, we refer to an instance set of sentiment-oriented relations extracted from a sentence as an sSoR. To represent the instances of the remaining relations, we represent them as other{entity, attribute}, such as textitpartOf{wheel, car}. These relations include objective relations and the subjective relations other than sentimentoriented relations. Mention-Based Relation Instances. A mentionbased relation instance refers to a tuple of entity mentions with a certain relation. This concept is introduced as the representation of instances in a sentence by replacing entities with the corresponding entity mentions, such as positive(\"Canon SD880i\", \"wide angle view\"). Mention-Based Relation Graph. A mention-based relation graph (or MRG ) represents a collection of mention-based relation instances expressed in a sentence. As illustrated in Figure 1 , an MRG is a directed hypergraph G = M, E with a vertex set M and an edge set E. A vertex m i \u2208 M denotes a mention of an entity or an attribute occurring either within the sentence or in its context. 
We say that a mention is from the context if it is mentioned in the previous sentence or is an attribute implied in the current sentence. An instance of a binary relation in an MRG takes the form of a binary edge e_l = (m_i, m_a), where m_i and m_a denote an entity mention and an attribute mention respectively, and the type l \u2208 {positive, negative, other}. A ternary edge e_l indicating a comparative relation is represented as e_l = (m_i, m_j, m_a), where two entity mentions m_i and m_j are compared with respect to the attribute mention m_a. We define the type l \u2208 {better, worse} to indicate the two possible directions of the relation and assume m_i occurs before m_j. As a result, we have a set L of five relation types: positive, negative, better, worse, or other. According to these definitions, the annotations in the SRG corpus are actually MRGs and disambiguated entities. If there are multiple mentions referring to the same entity, annotators are asked to choose the most obvious one because it saves annotation time and is less demanding for the entity recognition and disambiguation modules. Evidentiary Mention-Based Relation Graph. An evidentiary mention-based relation graph, coined eMRG, extends an MRG by associating each edge with a textual evidence to support the corresponding relation assertions (see Figure 2). Consequently, an edge in an eMRG is denoted by a pair (a, c), where a represents a mention-based relation instance and c is the associated textual evidence. It is also referred to as an evidentiary edge. As illustrated by Figure 3, at the core of our system is the SENTI-LSSVM model, which extracts sets of mention-based relation instances in the form of eMRGs from sentences.
For a given sentence with known entity mentions, we select all possible mention sets as relation candidates, where each set includes at least one entity mention. Then we associate each relation candidate with a set of constituents or the whole sentence as the textual evidence candidates (cf. Section 6.1). Subsequently, the inference component aims to find the most likely eMRG from all possible combinations of mention-based relation instances and their textual evidences (cf. Section 6.2). The representation eMRG is chosen because it characterizes exactly the model outputs by letting each edge correspond to an instance of a mention-based relation and the associated textual evidence. Finally, the parameters of this model are learned by an online algorithm (cf. Section 7).",
"cite_spans": [],
"ref_spans": [
{
"start": 1735,
"end": 1743,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 3272,
"end": 3280,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 3622,
"end": 3630,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 3651,
"end": 3659,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sentiment-Oriented Relation.",
"sec_num": null
},
{
"text": "Since instance sets of sentiment-oriented relations (sSoRs) are the expected outputs, we can obtain sSoRs from MRGs by using a simple rule-based algorithm. The algorithm essentially maps the mentions from an MRG into entities and attributes in an sSoR and label the corresponding tuples with the relation types of the edges from an MRG. For instances of comparative relation, the label better or worse is mapped to the relation type preferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.2"
},
{
"text": "The task of sentiment-oriented relation extraction is to determine the most likely sSoR in a sentence. Since sSoRs are derived from the corresponding MRGs as described in Section 3, the task is reduced to find the most likely MRG for each sentence. Since an MRG is created by assigning relation types to a subset of all relation candidates, which are possible tuples of mentions with unknown relation types, the number of MRGs can be extremely high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "To tackle the task, one solution is to employ an edge-factored linear model in the framework of structural SVM (Martins et al., 2009; Tsochantaridis et al., 2004) . The model suggests that a bag of features should be specified for each relation candidate, and then the model predicts the most likely candidate sets along with their relation types to form the optimal MRGs. As we observed, for a relation candidate, the most informative features are the words near its entity mentions in the original text. How-ever, if we represent a candidate by all these words, it is very likely that the instances of different relation types share overly similar features, because a mention is often involved in more than one relation candidate, as shown in Figure 2 . As a consequence, the instances of different relations represented by overly similar features can easily confuse the learning algorithm. Thus, it is critical to select proper constituents or sentences as textual evidences for each relation candidate in both training and testing.",
"cite_spans": [
{
"start": 107,
"end": 133,
"text": "SVM (Martins et al., 2009;",
"ref_id": null
},
{
"start": 134,
"end": 162,
"text": "Tsochantaridis et al., 2004)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 745,
"end": 753,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "Consequently, we divide the task of sentimentoriented relation extraction into two subtasks : i) identifying the most likely MRGs; ii) assigning proper textual evidences to each edge of MRGs to support their relation assertions. It is desirable to carry out the two subtasks jointly as these two subtasks could enhance each other. First, the identification of relation types requires proper textual evidences; second, the soft and hard constraints imposed by the correlated relation instances facilitate the recognition of the corresponding textual evidences. Since the eMRGs are created by attaching every MRG with a set of textual evidences, tackling the two subtasks simultaneously is equivalent to selecting the most likely eMRG from a set of eMRG candidates. It is challenging because our SRG corpus does not contain any annotation of textual evidences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "Formally, let X denote the set of all available sentences, and we define y \u2208 Y(x)(x \u2208 X ) as the set of labeled edges of an MRG and Y = \u222a x\u2208X Y(x). Since the assignments of textual evidences are not observed, an assignment of evidences to y is denoted by a latent variable h \u2208 H(x) and H = \u222a x\u2208X H(x). Then (y, h) corresponds to an eMRG, and (a, c) \u2208 (y, h) is a labeled edge a attached with a textual evidence c. Given a labeled dataset D = {(x 1 , y 1 ), ..., (x n , y n )} \u2208 (X \u00d7 Y) n , we aim to learn a discriminant function f : X \u2192 Y \u00d7 H that outputs the optimal eMRG (y, h) \u2208 Y(x) \u00d7 H(x) for a given sentence x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "Due to the introduction of latent variables, we adopt the latent structural SVM (Yu and Joachims, 2009) for structural classification. Our discriminant function is defined as",
"cite_spans": [
{
"start": 80,
"end": 103,
"text": "(Yu and Joachims, 2009)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "f (x) = argmax (y,h)\u2208Y(x)\u00d7H(x) \u03b2 \u03a6(x, y, h) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "where \u03a6(x, y, h) is the feature function of an eMRG (y, h) and \u03b2 is the corresponding weight vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "To ensure tractability, we also employ edge-based factorization for our model. Let M p denote a set of entity mentions and y r (m i ) be a set of edges labeled with sentiment-oriented relations incident to m i , the factorization of \u03a6(x, y, h) is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "\u03a6(x, y, h) = (a,c)\u2208(y,h) \u03a6 e (x, a, c) + (2) mi\u2208Mp a,a \u2208yr(mi),a =a \u03a6 c (a, a )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "where \u03a6 e (x, a, c) is a local edge feature function for a labeled edge a attached with a textual evidence c and \u03a6 c (a, a ) is a feature function capturing cooccurrence of two labeled edges a m i and a m i incident to an entity mention m i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTI-LSSVM Model",
"sec_num": "4"
},
{
"text": "The following features are used in the feature functions (Equation 2): Unigrams: As mentioned before, a textual evidence attached to an edge in MRG is either a word, phrase or sentence. We consider all lemmatized unigrams in the textual evidence as unigram features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "5"
},
{
"text": "Context: Since web users usually express related sentiments about the same entity across sentence boundaries, we describe the sentiment flow using a set of contextual binary features. For example, if entity A is mentioned in both the previous sentence and the current sentence, a set of contextual binary features are used to indicate all possible combinations of the current and the previous mentioned sentimentoriented relations regarding to entity A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "5"
},
{
"text": "Co-occurrence: We have mentioned the cooccurrence feature in Equation 2, indicated by \u03a6 c (a, a ). It captures the co-occurrence of two labeled edges incident to the same entity mention. Note that the co-occurrence feature function is considered only if there is a contrast conjunction such as \"but\" between the non-shared entity mentions incident to the two labeled edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "5"
},
{
"text": "Senti-predictors: Following the idea of (Qu et al., 2012), we encode the prediction results from the rule-based phrase-level multi-relation predictor (Ding et al., 2009) and from the bag-of-opinions predictor (Qu et al., 2010) as features based on the textual evidence. The output of the first predictor is an integer value, while the output of the second predictor is a sentiment relation, such as \"positive\", \"negative\", \"better\" or \"worse\". We map the relational outputs into integer values and then encode the outputs from both predictors as senti-predictor features.",
"cite_spans": [
{
"start": 150,
"end": 169,
"text": "(Ding et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 209,
"end": 226,
"text": "(Qu et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "5"
},
{
"text": "Others: The commonly used part-of-speech tags are also included as features. Moreover, for an edge candidate, a set of binary features are used to denote the types of the edge and its entity mentions. For instance, a binary feature indicates whether an edge is a binary edge related to an entity mentioned in context. To characterize the syntactic dependencies between two adjacent entity mentions, we use the path in the dependency tree between the heads of the corresponding constituents, the number of words and other mentions in-between as features. Additionally, if the textual evidence is a constituent, its feature w.r.t. an edge is the dependency path to the closest mention of the edge that does not overlap with this constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "5"
},
{
"text": "In order to find the best eMRG for a given sentence with a well trained model, we need to determine the most likely relation type for each relation candidate and support the corresponding assertions with proper textual evidences. We formulate this task as an Integer Linear Programming (ILP). Instead of considering all constituents of a sentence, we empirically select a subset as textual evidences for each relation candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Inference",
"sec_num": "6"
},
{
"text": "Textual evidences are selected based on the constituent trees of sentences parsed by the Stanford parser (Klein and Manning, 2003) . For each mention in a sentence, we first locate a constituent in the tree with the maximal overlap by Jaccard similarity. Starting from this constituent, we consider two types of candidates: type I candidates are constituents at the highest level which contain neither any word of another mention nor any contrast conjunctions such as \"but\"; type II candidates are constituents at the highest level which cover exactly two mentions of an edge and do not overlap with any other mentions. For a binary edge connecting an entity mention and an attribute mention, we consider a type I candidate starting from the attribute men-tion. For a binary edge connecting two entity mentions, we consider type I candidates starting from both mentions. Moreover, for a comparative ternary edge, we consider both type I and type II candidates starting from the attribute mention. This strategy is based on our observation that these candidates often cover the most important information w.r.t. the covered entity mentions.",
"cite_spans": [
{
"start": 105,
"end": 130,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Evidence Candidates Selection",
"sec_num": "6.1"
},
{
"text": "We formulate the inference problem of finding the best eMRG as an ILP problem due to its convenient integration of both soft and hard constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "Given the model parameters \u03b2, we reformulate the score of an eMRG in the discriminant function (1) as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "\u03b2 \u03a6(x, y, h) = (a,c)\u2208(y,h) s ac z ac + m i \u2208Mp a,a \u2208yr(m i ),a =a s aa z aa",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "where s ac = \u03b2 \u03a6 e (x, a, c) denotes the score of a labeled edge a attached with a textual evidence c, s aa = \u03b2 \u03a6 c (a, a ) is the edge co-occurrence score, the binary variable z ac indicates the presence or absence of the corresponding edge, and z aa indicates if two edges co-occurr. As not every edge set can form an eMRG, we require that a valid eMRG should satisfy a set of linear constraints, which form our constraint space. Then function (1) is equivalent to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "max z\u2208B s z + \u00b5z d s.t. A \uf8ee \uf8f0 z \u03b7 \u03c4 \uf8f9 \uf8fb \u2264 d z, \u03b7, \u03c4 \u2208 B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "where B = 2 S with S = {0, 1}, and \u03b7 and \u03c4 are auxiliary binary variables that help define the constraint space. The above optimization problem takes exactly the form of an ILP because both the constraints and the objective function are linear, and all variables take only integer values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
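The ILP form above can be sanity-checked on toy instances by brute force over binary assignments. The sketch below is illustrative only: the names s, A, d are assumptions, not the paper's implementation, and a real system would hand the problem to an ILP solver rather than enumerate.

```python
from itertools import product

def solve_binary_ilp(s, A, d):
    """Brute-force maximization of s.z over binary z subject to A z <= d
    (row-wise). Only feasible for tiny toy instances; shown here just to
    illustrate the objective/constraint shape of the inference ILP."""
    n = len(s)
    best_z, best_val = None, float("-inf")
    for z in product((0, 1), repeat=n):
        feasible = all(
            sum(a_ij * z_j for a_ij, z_j in zip(row, z)) <= b
            for row, b in zip(A, d)
        )
        if feasible:
            val = sum(s_j * z_j for s_j, z_j in zip(s, z))
            if val > best_val:
                best_val, best_z = val, z
    return best_z, best_val

# Toy instance: two candidate evidence assignments for one edge;
# "at most one evidence per edge" is encoded as z1 + z2 <= 1.
z, val = solve_binary_ilp(s=[0.7, 0.4], A=[[1, 1]], d=[1])
```

With these made-up scores the solver keeps only the higher-scoring assignment, mirroring how the hard constraints prune invalid eMRGs.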
{
"text": "In the following, we consider two types of constraint space, 1) an eMRG with only binary edges and 2) an eMRG with both binary and ternary edges. eMRG with only Binary Edges: An eMRG has only binary edges if a sentence contains no attribute mention or at most one entity mention. We expect that each edge has only one relation type and is supported by a single textual evidence. To facilitate the formulation of constraints, we introduce \u03b7 e l to denote the presence or absence of a labeled edge e l , and \u03b7 ec to indicate if a textual evidence c is assigned to an unlabeled edge e. Then the binary variable for the corresponding evidentiary edge z e l c = \u03b7 ec \u2227 \u03b7 e l , where the ILP formulation of conjunction can be found in (Martins et al., 2009) .",
"cite_spans": [
{
"start": 729,
"end": 751,
"text": "(Martins et al., 2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "Let C e denote the set of textual evidence candidates of an unlabeled edge e. The constraint of at most one textual evidence per edge is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c\u2208Ce \u03b7 ec \u2264 1",
"eq_num": "(3)"
}
],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "Once a textual evidence is assigned to an edge, their relation labels should match and the number of labeled edges must agree with the number of attached textual evidences. Further, we assume that a textual evidence c conveys at most one relation so that an evidence will not be assigned to the relations of different types, which is the main problem for the structural SVM based model. Let \u03b7 cl indicate that the textual evidence c is labeled by the relation type l. The corresponding constraints are expressed as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "l\u2208Le \u03b7 e l = c\u2208Ce \u03b7 ec ; z e l c \u2264 \u03b7 cl ; l\u2208L \u03b7 cl \u2264 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "where L e denotes the set of all possible labels for an unlabeled edge e, and L is the set of all relation types of MRGs (cf. Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "In order to avoid a textual evidence being overly reused by multiple relation candidates, we first penalize the assignment of a textual evidence c to a labeled edge a by associating the corresponding z ac with a fixed negative cost \u2212\u00b5 in the objective function. Then the selection of one textual evidence per edge a is encouraged by associating \u00b5 to This soft constraint not only encourages one textual evidence per edge, but also keeps it eligible for multiple assignments. For any two labeled edge a and a incident to the same entity mention, the edge-to-edge cooccurrence is described by z c a,a = z a \u2227 z a . eMRG with both Binary and Ternary Edges: If there are more than one entity mentions and at least one attribute mention in a sentence, an eMRG can potentially have both binary and ternary edges. In this case, we assume that each mention of attributes can participate either in binary relations or in ternary relations. The assumption holds in more than 99.9% of the sentences in our SRG corpus, thus we describe it as a set of hard constraints. Geometrically, the assumption can be visualized as the selection between two alternative structures incident to the same attribute mention, as shown in Figure 4 . Note that, in the binary edge structure, we include not only the edges incident to the attribute mention but also the edge between the two entity mentions.",
"cite_spans": [],
"ref_spans": [
{
"start": 1209,
"end": 1217,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "Let S b m i be the set of all possible labeled edges in a binary edge structure of an attribute mention \u03b7 e l indicates whether the attribute mention is associated with a binary edge structure or not. In the same manner, we use \u03c4 t m i = e l \u2208S t m i \u03b7 e l to indicate the association of the an attribute mention m i with an ternary edge structure from the set of all incident ternary edges S t m i . The selection between two alternative structures is formulated as \u03c4 b m i + \u03c4 t m i = 1. As this influences only the edges incident to an attribute mention, we keep all the constraints introduced in the previous section unchanged except for constraint (3), which is modified as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "c\u2208Ce \u03b7 ec \u2264 \u03c4 b m i ; c\u2208Ce \u03b7 ec \u2264 \u03c4 t m i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "Therefore, we can have either binary edges or ternary edges for an attribute mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP Formulation",
"sec_num": "6.2"
},
{
"text": "Given a set of training sentences D = {(x 1 , y 1 ), . . . , (x n , y n )}, the best weight vector \u03b2 of the discriminant function (1) is found by solving the following optimization problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "min \u03b2 1 n n i=1 [ max (\u0177,\u0125)\u2208Y(x)\u00d7H(x) (\u03b2 \u03a6(x,\u0177,\u0125)+\u03b4(\u0125,\u0177, y)) \u2212 max h\u2208H(x) \u03b2 \u03a6(x, y,h)] + \u03c1|\u03b2|] (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "where \u03b4(\u0125,\u0177, y) is a loss function measuring the discrepancies between an eMRG (y,h) with gold standard edge labels y and an eMRG (\u0177,\u0125) with inferred labeled edges\u0177 and textual evidences\u0125. Due to the sparse nature of the lexical features, we apply L1 regularizer to the weight vector \u03b2, and the degree of sparsity is controlled by the hyperparameter \u03c1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "Since the L1 norm in the above optimization problem is not differentiable at zero, we apply the online forward-backward splitting (FOBOS) algorithm (Duchi and Singer, 2009) . It requires two steps for updating the weight vector \u03b2 by using a single training sentence x on each iteration t.",
"cite_spans": [
{
"start": 148,
"end": 172,
"text": "(Duchi and Singer, 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "\u03b2 t+ 1 2 = \u03b2 t \u2212 \u03b5 t \u2206 t \u03b2 t+1 = arg min \u03b2 1 2 \u03b2 \u2212 \u03b2 t 2 + \u03b5 t \u03c1|\u03b2|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "where \u2206 t is the subgradient computed without considering the L1 norm and \u03b5 t is the learning rate. For a labeled sentence x, \u2206 t = \u03a6(x,\u0177 * ,\u0125 * ) \u2212 \u03a6(x, y,h * ), where the feature functions of the corresponding eMRGs are inferred by solving (\u0177 * ,\u0125 * ) = arg max (\u0125,\u0177)\u2208H(x)\u00d7Y(x) [\u03b2 \u03a6(x,\u0177,\u0125) + \u03b4(\u0125,\u0177, y)] and (y,h * ) = arg maxh \u2208H(x) \u03b2 \u03a6(x, y,h), as indicated in the optimization problem (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
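The second FOBOS step has a closed-form coordinate-wise solution, soft-thresholding, which is what produces sparse weights. The sketch below is illustrative only, not the authors' implementation; the names and the toy values are assumptions.

```python
def fobos_l1_step(beta, grad, lr, rho):
    """One FOBOS iteration: a gradient step, then the closed-form L1
    proximal step, which shrinks every coordinate toward zero by
    lr * rho and truncates at zero (soft-thresholding)."""
    half = [b - lr * g for b, g in zip(beta, grad)]  # beta_{t+1/2}
    thresh = lr * rho
    return [
        (abs(v) - thresh) * (1 if v > 0 else -1) if abs(v) > thresh else 0.0
        for v in half
    ]

# Toy update: the small coordinate is driven exactly to zero.
beta = fobos_l1_step([0.5, -0.02, 0.0], [0.1, 0.0, 0.0], lr=1.0, rho=0.05)
```

Coordinates whose magnitude falls below lr * rho after the gradient step are zeroed out, which is why the L1-regularized model stays sparse over the sparse lexical features.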
{
"text": "The former inference problem is similar to the one we considered in the previous section except the inclusion of the loss function. We incorporate the loss function into the ILP formulation by defining the loss between an MRG (y, h) and a gold standard MRG as the sum of per-edge costs. In our experiments, we consider a positive cost \u03d5 for each wrongly labeled edge a, so that if an edge a has a different label from the gold standard, we add \u03d5 to the coefficient s ac of the corresponding variable z ac in the objective function of the ILP formulation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "In addition, since the non-positive weights of edge labels in the initial learning phrase often lead to eMRGs with many unlabeled edges, which harms the learning performance, we fix it by adding a constraint for the minimal number of labeled edges in an eMRG, a\u2208A c\u2208Ca",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "\u03b7 ac \u2265 \u03b6 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "where A is the set of all labeled edge candidates and \u03b6 denotes the minimal number of labeled edges. Empirically, the best way to determine \u03b6 is to make it equal to the maximal number of labeled edges in an eMRG with the restriction that a textual evidence can be assigned to at most one edge. By considering all the edge candidates A and all the textual evidence candidates C as two vertex sets in a bipartite graph\u011c = V = (A, C), E (with edges in E indicating which textual evidence can be assigned to which edge), \u03b6 corresponds to exactly the size of a maximum matching of the bipartite graph 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
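The computation of \u03b6 as a maximum bipartite matching can be sketched as follows. This is a hypothetical minimal implementation using Kuhn's augmenting-path algorithm rather than the Hopcroft-Karp algorithm the authors cite; both return the same matching size, Hopcroft-Karp just does so asymptotically faster.

```python
def max_matching_size(adj, n_right):
    """Size of a maximum matching in a bipartite graph via repeated
    augmenting-path search (Kuhn's algorithm). adj[u] lists the
    right-side vertices (evidence candidates) reachable from left
    vertex u (an edge candidate)."""
    match_r = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Toy instance: edge candidates A = {0, 1, 2}, evidences C = {0, 1};
# only two edges can be supported, so zeta = 2.
zeta = max_matching_size([[0], [0, 1], [1]], n_right=2)
```
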
{
"text": "To find the optimal eMRG (y,h * ), for the gold label k of each edge, we consider the following set of constraints for inference since the labels of the edges are known for the training data,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "c\u2208Ce \u03b7 ec \u2264 1; \u03b7 ec \u2264 l ck k \u2208L l ck \u2264 1; e\u2208Sc \u03b7 ec \u2264 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "We include also the soft constraints, which avoid a textual evidence being overly reused by multiple relations, and the constraints similar to (5) to ensure a minimal number of labeled edges and a minimal number of sentiment-oriented relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Model Parameters",
"sec_num": "7"
},
{
"text": "For evaluation we constructed the SRG corpus, which in total consists of 1686 manually annotated online reviews and forum posts in the digital camera and movie domains 2 . For each domain, we maintain a set of attributes and a list of entity names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRG Corpus",
"sec_num": "8"
},
{
"text": "The annotation scheme for the sentiment representation asserts minimal linguistic knowledge from our annotators. By focusing on the meanings of the sentences, the annotators make decisions based on their language intuition, not restricted by specific syntactic structures. Taking the example in Figure 2 , the annotators only need to mark the mentions of entities and attributes from both the sentences and the context, disambiguate them, and label (\"Canon 7D\", \"Nikon D7000\", price) as worse and (\"Canon 7D\", \"sensor\") as positive, whereas in prior work, people have annotated the sentiment-bearing expressions such as \"great\" and link them to the respective relation instances as well. This also enables them to annotate instances of both sentiment polarity and comparative relaton, which are conveyed by not only explicit sentiment-bearing expressions like \"excellent performance\", but also factual expressions implying evaluations such as \"The 7V has 10x optical zoom and the 9V has 16x.\". 14 annotators participated in the annotation project. After a short training period, annotators worked on randomly assigned documents one at a time. For product reviews, the system lists all relevant information about the entity and the predefined attributes. For forum posts, the system shows only the attribute list. For each sentence in a document, the annotator first determines if it refers to an entity of interest. If not, the sentence is marked as off-topic. Otherwise, the annotator will identify the most obvious mentions, disambiguate them, and mark the MRGs. We evaluate the inter-annotator agreement on sSoRs in terms of Cohen's Kappa (\u03ba) (Cohen, 1968 ). An average Kappa value of 0.698 was achieved on a randomly selected set consisting of 412 sentences. Table 1 shows the corpus distribution after normalizing them into sSoRs. Camera forum posts contain the largest proportion of comparisons because they are mainly about the recommendation of digital cameras. 
In contrast, web users are much less interested in comparing movies, in both reviews and forums. In all subsets, positive relations play a dominant role since web users intend to express more positive attitudes online than negative ones (Pang and Lee, 2007) .",
"cite_spans": [
{
"start": 1629,
"end": 1646,
"text": "Cohen's Kappa (\u03ba)",
"ref_id": null
},
{
"start": 1647,
"end": 1659,
"text": "(Cohen, 1968",
"ref_id": "BIBREF3"
},
{
"start": 2208,
"end": 2228,
"text": "(Pang and Lee, 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 295,
"end": 304,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1764,
"end": 1771,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "SRG Corpus",
"sec_num": "8"
},
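The reported inter-annotator agreement can be illustrated with a minimal unweighted Cohen's kappa computation. This is a sketch with made-up labels: the paper cites Cohen (1968), and the actual agreement study may use a weighted variant of the statistic.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is
    the observed agreement and p_e is the chance agreement derived from
    the two annotators' marginal label distributions."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-sentence relation labels from two annotators.
k = cohens_kappa(["pos", "pos", "neg", "pos"], ["pos", "neg", "neg", "pos"])
```
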
{
"text": "This section describes the empirical evaluation of SENTI-LSSVM together with two competitive baselines on the SRG corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "9"
},
{
"text": "We implemented a rule-based baseline (DING-RULE) and a structural SVM (Tsochantaridis et al., 2004) baseline (SENTI-SSVM) for comparison. The former system extends the work of Ding et al. (2009) , which designed several linguisticallymotivated rules based on a sentiment polarity lexicon for relation identification and assumes there is only one type of sentiment relation in a sentence. In our implementation, we keep all the rules of (Ding et al., 2009 ) and add one phrase-level rule when there are more than one mention in a sentence. The additional rule assigns sentiment-bearing words and negators to its nearest relation candidates based on the absolute surface distance between the words and the corresponding mentions. In this case, the phraselevel sentiment-oriented relations depend only on the assigned sentiment words and negators. The latter system is based on a structural SVM and does not consider the assignment of textual evidences to relation instances during inference. The textual features of a relation candidate are all lexical and sentiment predictor features within a surface distance of four words from the mentions of the candidate.",
"cite_spans": [
{
"start": 70,
"end": 99,
"text": "(Tsochantaridis et al., 2004)",
"ref_id": "BIBREF30"
},
{
"start": 176,
"end": 194,
"text": "Ding et al. (2009)",
"ref_id": "BIBREF5"
},
{
"start": 436,
"end": 454,
"text": "(Ding et al., 2009",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "9.1"
},
{
"text": "Thus, this baseline does not need the inference constraints of SENTI-LSSVM for the selection of textual evidences. To gain more insights into the model, we also evaluate the contribution of individual features of SENTI-LSSVM. In addition, to show if identifying sentiment polarities and comparative relations jointly works better than tackling each task on its own, we train SENTI-LSSVM for each task separately and combine their predictions according to compatibility rules and the corresponding graph scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "9.1"
},
{
"text": "For each domain and text genre, we withheld 15% documents for development and use the remaining for cross validation. The hyperparameters of all systems are tuned on the development datasets. For all experiments of SENTI-LSSVM, we use \u03c1 = 0.0001 for the L1 regularizer in Eq. 4and \u03d5 = 0.05 for the loss function; and for SENTI-SSVM, \u03c1 = 0.0001 and \u03d5 = 0.01. Since the relation type of off-topic sentences is certainly other, we evaluate all systems with 5-fold cross-validation only on the on-topic sentences in the evaluation dataset. Since the same sSoR can have several equivalent MRGs and the relation type other is not of our interest, we evaluate the sSoRs in terms of precision, recall and F-measure. All reported numbers are averages over the 5 folds. Table 2 shows the complete results of all systems. Here our model SENTI-LSSVM outperformed all baselines in terms of the average F-measure scores and recalls by a large margin. The F-measure on movie reviews is about 14% over the best baseline. The rule-based system has higher precision than recall in most cases. However, simply increasing the coverage of the domain independent sentiment polarity lexicon might lead to worse performance (Taboada et al., 2011 ) because many sentiment oriented relations are conveyed by domain dependent expressions and factual expressions implying evaluations, such as \"This camera does not have manual control.\" Compared to DING-RULE, SENTI-SSVM performs better in the camera domain but worse for the movies due to many misclassification of negative relation instances as other. It also wrongly predicted more positive instances as other than SENTI-LSSVM. We found that the recalls of these instances are low because they often have overly similar features with the instances of the type other linking to the same mentions. 
The problem gets worse in the movie domain since i) many sentences contain no explicit sentiment-bearing words; ii) the prior polarity of the sentiment-bearing words do not agree with their contextual polarity in the sentences. Consider the following example from a forum post about the movie \"Superman Returns\": \"Have a look at Superman: the Animated Series or Justice League Unlimited . . . that is how the characters of Superman and Lex Luthor should be.\". In contrast, our model minimizes the overlapping features by assigning them to the most likely relation candidates. This leads to significantly better performance. Although SENTI-SSVM has low recall for both positive and negative relations, it achieves the highest recall for the comparative relation among all systems in the movie domain and camera reviews. Since less than 1% of all instances are for comparative relations in these document sets and all models are trained to optimize the overall accuracy, SENTI-LSSVM intends to trade off the minority class for the overall better performance. This advantage disappears on the camera forum posts, where the number of instances of comparative relation is 12 times more than that in the other data sets.",
"cite_spans": [
{
"start": 1200,
"end": 1221,
"text": "(Taboada et al., 2011",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 760,
"end": 767,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "9.1"
},
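The precision, recall and F-measure scores reported above can be illustrated by a minimal micro-averaged computation over pooled counts. This is a sketch with made-up counts, not the paper's evaluation code.

```python
def micro_prf(tp, fp, fn):
    """Micro-averaged precision, recall and F-measure from pooled
    true-positive / false-positive / false-negative counts across
    all relation types and folds."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical pooled counts over the extracted sSoRs of one fold.
p, r, f = micro_prf(tp=30, fp=10, fn=20)
```
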
{
"text": "All systems perform better in predicting positive relations than the negative ones. This corresponds well to the empirical findings in (Wilson, 2008) that people intend to use more complex expressions for negative sentiments than their affirmative counterparts. It is also in accordance with the distribution of these relations in our SRG corpus which is randomly sampled from the online documents. For learning systems, it can also be explained by the fact that the training data for positive relations are considerably more than those for negative ones. The comparative relation is the hardest one to process since we found that many corresponding expressions do not contain explicit keywords for comparison.",
"cite_spans": [
{
"start": 135,
"end": 149,
"text": "(Wilson, 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "9.2"
},
{
"text": "To understand the performance of the key feature groups in our model better, we remove each group from the full SENTI-LSSVM system and evaluate the variations with movie reviews and camera forum posts, which have relatively balanced distribution of relation types. As shown in Table 3 , the features from the sentiment predictors make significant contributions for both datasets. The different drops of the performance indicate that the po- -8.4) 46.0 (+0.6) \u00acco-occurrence 62.6 (-0.3) 44.9 (-0.5) \u00acsenti-predictors 61.3 (-1.6) 34.3 (-11.1) Table 3 : Micro-average F-measure of SENTI-LSSVM with different feature models larities predicted by rules are more consistent in camera forum posts than in movie reviews. Due to the complexity of expressions in the movie reviews our model cannot benefit from the unigram features but these features are a good compensation for the sentiment predictor features in camera forum posts. The sharp drop by removing the context features from our model on movie reviews indicates that the sentiments in movie reviews depend highly on the relations of the previous sentences. In contrast, the sentiment-oriented relations of the previous sentences could be a reason of overfitting for camera forum data. The edge co-occurrence features do not play an important role in our model since the number of co-occurred sentiment-oriented relations in the sentences with contrast conjunctions like \"but\" is small. However, we found that allowing the co-occurrence of any sentiment-oriented relations would harm the performance of the model. In addition, our experiments showed that the sep-arated approach, which trains a model for sentiment polarities and comparative relations respectively, leads to a decrease by almost 1% in terms of the F-measure averaged over all four datasets. The largest drop of F-measure is 3% on camera forum posts, since this dataset contains the largest proportion of comparative relations. 
We found that the errors are increased when the trained models make conflicting predictions. In this case, the joint approach can take all factors into account and make more consistent decisions than the separated approaches.",
"cite_spans": [
{
"start": 441,
"end": 446,
"text": "-8.4)",
"ref_id": null
},
{
"start": 521,
"end": 527,
"text": "(-1.6)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 3",
"ref_id": null
},
{
"start": 541,
"end": 548,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "9.2"
},
{
"text": "We proposed SENTI-LSSVM model for extracting instances of both sentiment polarities and comparative relations. For evaluating and training the model, we created an SRG corpus by using a lightweight annotation scheme. We showed that our model can automatically find textual evidences to support its relation predictions and achieves significantly better F-measure scores than alternative state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "10"
},
{
"text": "Transactions of the Association for Computational Linguistics, 2 (2014) 155-168. Action Editor: Janyce Wiebe.Submitted 6/2013; Revised 11/2013; Published 4/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is computed by the Hopcroft-Karp algorithm(Hopcroft and Karp, 1973) in our implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The 107 camera reviews are from bestbuy.com and Amazon.com; the 667 camera forum posts are downloaded from forum.digitalcamerareview.com; the 138 movie reviews and 774 forum posts are from imdb.com and boards.ie respectively",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adapting a polarity lexicon using integer linear programming for domainspecific sentiment classification",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "590--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain- specific sentiment classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 -Volume 2, EMNLP '09, pages 590-598, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical sequential learning for extracting opinions and their attributes",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "269--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2010. Hierarchical se- quential learning for extracting opinions and their at- tributes. In Proceedings of the Annual meeting of the Association for Computational Linguistics, pages 269-274. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Joint extraction of entities and relations for opinion recognition",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "431--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recog- nition. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 431- 439, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Weighted Kappa: Nominal Scale Agreement Provision for Scaled Disagreement or Partial Credit",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1968,
"venue": "Psychological bulletin",
"volume": "70",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1968. Weighted Kappa: Nominal Scale Agreement Provision for Scaled Disagreement or Par- tial Credit. Psychological bulletin, 70(4):213.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A holistic lexicon-based approach to opinion mining",
"authors": [
{
"first": "Xiaowen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "231--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 International Conference on Web Search and Data Mining, pages 231-240, New York, NY, USA. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Entity discovery and assignment for opinion mining applications",
"authors": [
{
"first": "Xiaowen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1125--1134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaowen Ding, Bing Liu, and Lei Zhang. 2009. Entity discovery and assignment for opinion mining applica- tions. In Proceedings of the ACM SIGKDD Confer- ence on Knowledge Discovery and Data Mining, pages 1125-1134.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Efficient online and batch learning using forward backward splitting",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2009,
"venue": "The Journal of Machine Learning Research",
"volume": "10",
"issue": "",
"pages": "2899--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi and Yoram Singer. 2009. Efficient online and batch learning using forward backward splitting. The Journal of Machine Learning Research, 10:2899- 2934.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Mining opinions in comparative sentences",
"authors": [
{
"first": "Murthy",
"middle": [],
"last": "Ganapathibhotla",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "241--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murthy Ganapathibhotla and Bing Liu. 2008. Mining opinions in comparative sentences. In Proceedings of the 22nd International Conference on Computational Linguistics -Volume 1, pages 241-248, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Large-scale sentiment analysis for news and blogs (system demonstration)",
"authors": [
{
"first": "Namrata",
"middle": [],
"last": "Godbole",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Srinivasaiah",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Namrata Godbole, Manjunath Srinivasaiah, and Steven Skiena. 2007. Large-scale sentiment analysis for news and blogs (system demonstration). In Proceed- ings of the International AAAI Conference on Weblogs and Social Media.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An n\u02c65/2 algorithm for maximum matchings in bipartite graphs",
"authors": [
{
"first": "John",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"M"
],
"last": "Karp",
"suffix": ""
}
],
"year": 1973,
"venue": "SIAM Journal on computing",
"volume": "2",
"issue": "4",
"pages": "225--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John E Hopcroft and Richard M Karp. 1973. An n\u02c65/2 algorithm for maximum matchings in bipartite graphs. SIAM Journal on computing, 2(4):225-231.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, Proceedings of the ACM SIGKDD Conference on Knowledge Discov- ery and Data Mining, pages 168-177, New York, NY, USA. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Opinionminer: a novel machine learning system for web opinion mining and extraction",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Hung",
"middle": [
"Hay"
],
"last": "Ho",
"suffix": ""
},
{
"first": "Rohini",
"middle": [
"K"
],
"last": "Srihari",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1195--1204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Jin, Hung Hay Ho, and Rohini K. Srihari. 2009. Opinionminer: a novel machine learning system for web opinion mining and extraction. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1195- 1204, New York, NY, USA. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mining comparative sentences and relations",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1331--1336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Jindal and Bing Liu. 2006. Mining comparative sentences and relations. In Proceedings of the 21st In- ternational Conference on Artificial Intelligence -Vol- ume 2, AAAI'06, pages 1331-1336. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Extracting opinion expressions and their polaritiesexploration of pipelines and joint models",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Annual meeting of the Association for Computational Linguistics",
"volume": "11",
"issue": "",
"pages": "101--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Alessandro Moschitti. 2011. Extracting opinion expressions and their polarities- exploration of pipelines and joint models. In Proceed- ings of the Annual meeting of the Association for Com- putational Linguistics, volume 11, pages 101-106.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The 2010 icwsm jdpa sentment corpus for the automotive domain",
"authors": [
{
"first": "Jason",
"middle": [
"S"
],
"last": "Kessler",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "Lyndsie",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Nicolov",
"suffix": ""
}
],
"year": 2010,
"venue": "4th International AAAI Conference on Weblogs and Social Media Data Workshop Challenge (ICWSM-DWC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason S. Kessler, Miriam Eckert, Lyndsie Clark, and Nicolas Nicolov. 2010. The 2010 icwsm jdpa sent- ment corpus for the automotive domain. In 4th Inter- national AAAI Conference on Weblogs and Social Me- dia Data Workshop Challenge (ICWSM-DWC 2010).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Extracting opinions, opinion holders, and topics expressed in online news media text",
"authors": [
{
"first": "Soo-Min",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Sentiment and Subjectivity in Text, SST '06",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2006. Extracting opin- ions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, SST '06, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st An- nual Meeting on Association for Computational Lin- guistics -Volume 1, ACL '03, pages 423-430, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Opinion extraction, summarization and tracking in news and blog corpora",
"authors": [
{
"first": "Lun-Wei",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Yu-Ting",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2006,
"venue": "AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs",
"volume": "",
"issue": "",
"pages": "100--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006. Opinion extraction, summarization and tracking in news and blog corpora. In AAAI Spring Sympo- sium: Computational Approaches to Analyzing We- blogs, pages 100-107.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Opinion observer: analyzing and comparing opinions on the web",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Junsheng",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "342--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opinions on the web. In Proceedings of the 14th international conference on World Wide Web, pages 342-351, New York, NY, USA. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Concise integer linear programming formulations for dependency parsing",
"authors": [
{
"first": "Andr\u00e9",
"middle": [
"L"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "342--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 L. Martins, Noah A. Smith, and Eric P. Xing. 2009. Concise integer linear programming formula- tions for dependency parsing. In Proceedings of the Annual meeting of the Association for Computational Linguistics, pages 342-350.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Structured models for fine-to-coarse sentiment analysis",
"authors": [
{
"first": "Ryan",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Kerry",
"middle": [],
"last": "Hannan",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Neylon",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Wells",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"C"
],
"last": "Reynar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan T. McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeffrey C. Reynar. 2007. Structured mod- els for fine-to-coarse sentiment analysis. In Proceed- ings of the Annual meeting of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2007. Opinion mining and sentiment analysis. Foundations and Trends in Infor- mation Retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Extracting product features and opinions from reviews",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extract- ing product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Lan- guage Processing, HLT '05, pages 339-346, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The bag-of-opinions method for review rating prediction from sparse text patterns",
"authors": [
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Ifrim",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "913--921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lizhen Qu, Georgiana Ifrim, and Gerhard Weikum. 2010. The bag-of-opinions method for review rat- ing prediction from sparse text patterns. In Chu-Ren Huang and Dan Jurafsky, editors, Proceedings of the 23rd International Conference on Computational Lin- guistics (Coling 2010), ACL Anthology, pages 913- 921, Beijing, China. Tsinghua University Press.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A weakly supervised model for sentence-level semantic orientation analysis with multiple experts",
"authors": [
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "149--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lizhen Qu, Rainer Gemulla, and Gerhard Weikum. 2012. A weakly supervised model for sentence-level seman- tic orientation analysis with multiple experts. In Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL), pages 149-159, Jeju Island, Korea, July. Proceedings of the Annual meeting of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semantic compositionality through recursive matrix-vector spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceed- ings of the Conference on Empirical Methods in Natu- ral Language Processing, pages 1201-1211.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Recognizing stances in online debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "226--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2009. Rec- ognizing stances in online debates. In Proceedings of the Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Lan- guage Processing, pages 226-234.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Lexiconbased methods for sentiment analysis",
"authors": [
{
"first": "Maite",
"middle": [],
"last": "Taboada",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Tofiloski",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [
"D"
],
"last": "Voll",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "2",
"pages": "267--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly D. Voll, and Manfred Stede. 2011. Lexicon- based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Discovering fine-grained sentiment with latent variable structured prediction models",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 33rd European conference on Advances in information retrieval, ECIR'11",
"volume": "",
"issue": "",
"pages": "368--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan McDonald. 2011. Discov- ering fine-grained sentiment with latent variable struc- tured prediction models. In Proceedings of the 33rd European conference on Advances in information re- trieval, ECIR'11, pages 368-374, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Sentence and expression level annotation of opinions in user-generated discourse",
"authors": [
{
"first": "Cigdem",
"middle": [],
"last": "Toprak",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "575--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cigdem Toprak, Niklas Jakob, and Iryna Gurevych. 2010. Sentence and expression level annotation of opinions in user-generated discourse. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 575-584, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Support vector machine learning for interdependent and structured output spaces",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "104--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vec- tor machine learning for interdependent and structured output spaces. In Proceedings of the International Conference on Machine Learning, pages 104-112.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sentiment learning on product reviews via sentiment ontology tree",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"Atle"
],
"last": "Gulla",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "404--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wei and Jon Atle Gulla. 2010. Sentiment learn- ing on product reviews via sentiment ontology tree. In Proceedings of the Annual meeting of the Association for Computational Linguistics, pages 404-413.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Annotating expressions of opinions and emotions in language",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "Language Resources and Evaluation",
"volume": "39",
"issue": "2-3",
"pages": "165--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2- 3):165-210.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Recognizing contextual polarity in phrase-level sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the confer- ence on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 347-354, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Fine-grained subjectivity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states",
"authors": [
{
"first": "Theresa",
"middle": [
"Ann"
],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Ann Wilson. 2008. Fine-grained subjectivity and sentiment analysis: recognizing the intensity, po- larity, and attitudes of private states. Ph.D. thesis, UNIVERSITY OF PITTSBURGH.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Structural opinion mining for graph-based sentiment representation",
"authors": [
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lide",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1332--1341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2011. Structural opinion mining for graph-based sen- timent representation. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 1332-1341.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Compositional matrix-space models for sentiment analysis",
"authors": [
{
"first": "Ainur",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "172--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ainur Yessenalina and Claire Cardie. 2011. Composi- tional matrix-space models for sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 172-182.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning structural svms with latent variables",
"authors": [
{
"first": "Chun-Nam John",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural svms with latent variables. In Pro- ceedings of the International Conference on Machine Learning, page 147.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Filling the gap: Semi-supervised learning for opinion detection across domains",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "200--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ning Yu and Sandra K\u00fcbler. 2011. Filling the gap: Semi-supervised learning for opinion detection across domains. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 200-209. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "An example of MRG."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "An example of eMRG. The textual evidences are wrapped by green dashed boxes."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "System architecture."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "z d c in the objective function, where z d c = e\u2208Sc \u03b7 ec and S c is the set of edges that the textual evidence c serves as a candidate. The disjunction z d c is expressed as: z d c \u2265 \u03b7 e , e \u2208 S c z Alternative structures associated with an attribute mention."
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "m i . Variable \u03c4 b m i = e l \u2208S b m i"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Distribution of relation instances in SRG corpus."
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>P</td><td>Positive R</td><td>F</td><td>P</td><td>Negative R</td><td>F</td><td>P</td><td>Comparison R</td><td>F</td><td>Micro-average P R F</td></tr><tr><td colspan=\"11\">Camera Forum 56.4 Movie DING-RULE Forum DING-RULE 63.7 37.4 47.1 27.6 34.3 30.6 8.9 SENTI-SSVM 66.2 30.1 41.3 25.6 17.3 20.7 44.2 56.7 49.7 53.3 27.9 36.6 5.6 6.8 48.2 35.9 41.2 SENTI-LSSVM 63.3 44.2 52.1 29.7 45.6 36.0 40.1 45.0 42.4 49.7 44.6 47.0</td></tr><tr><td>Movie Re-view</td><td>DING-RULE SENTI-SSVM SENTI-LSSVM</td><td colspan=\"9\">66.5 47.2 55.2 42.0 39.1 40.5 31.4 12.0 17.4 56.2 44.0 49.4 61.3 54.0 57.4 45.2 13.7 21.1 24.5 63.3 35.3 54.6 39.2 45.7 59.0 79.1 67.6 53.3 51.4 52.3 28.3 34.0 30.9 57.9 68.8 62.9</td></tr></table>",
"text": "39.0 46.1 46.2 24.0 31.6 42.6 14.0 21.0 53.4 30.8 39.0 SENTI-SSVM 60.2 35.6 44.8 44.2 38.5 41.2 28.0 40.1 32.9 43.7 36.7 39.9 SENTI-LSSVM 69.2 38.9 49.8 50.8 39.3 44.3 42.6 35.1 38.5 56.5 38.0 45.4 Camera Re-view DING-RULE 83.6 69.0 75.6 68.6 38.8 49.6 30.0 16.9 21.6 81.1 58.6 68.1 SENTI-SSVM 72.6 75.4 74.0 63.9 62.5 63.2 28.0 38.9 32.5 68.1 70.4 69.3 SENTI-LSSVM 77.3 85.4 81.2 68.9 61.3 64.9 22.3 20.7 21.6 73.1 73.4 73.7"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Feature Models</td><td colspan=\"2\">Movie Reviews Camera Forums</td></tr><tr><td>full system</td><td>62.9</td><td>45.4</td></tr><tr><td>\u00acunigram \u00accontext</td><td>63.2 (+0.3) 54.5 (</td><td>41.2 (-4.2)</td></tr></table>",
"text": "Evaluation results for DING-RULE, SENTI-SSVM and SENTI-LSSVM. Boldface figures are statistically significantly better than all others in the same comparison group under t-test with p = 0.05."
}
}
}
}