{
"paper_id": "E14-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:40:43.057116Z"
},
"title": "Sentiment Propagation via Implicature Constraints",
"authors": [
{
"first": "Lingjia",
"middle": [],
"last": "Deng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": "wiebe@cs.pitt.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Opinions may be expressed implicitly via inference over explicit sentiments and events that positively/negatively affect entities (goodFor/badFor events). We investigate how such inferences may be exploited to improve sentiment analysis, given goodFor/badFor event information. We apply Loopy Belief Propagation to propagate sentiments among entities. The graph-based model improves over explicit sentiment classification by 10 points in precision and, in an evaluation of the model itself, we find it has an 89% chance of propagating sentiments correctly.",
"pdf_parse": {
"paper_id": "E14-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "Opinions may be expressed implicitly via inference over explicit sentiments and events that positively/negatively affect entities (goodFor/badFor events). We investigate how such inferences may be exploited to improve sentiment analysis, given goodFor/badFor event information. We apply Loopy Belief Propagation to propagate sentiments among entities. The graph-based model improves over explicit sentiment classification by 10 points in precision and, in an evaluation of the model itself, we find it has an 89% chance of propagating sentiments correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Previous research in sentiment analysis and opinion extraction has largely focused on the interpretation of explicitly stated opinions. However, many opinions are expressed implicitly via opinion implicature (i.e., opinion-oriented defeasible inference). Consider the following sentence: EX (1) The bill would lower health care costs, which would be a tremendous positive change across the entire health-care system.",
"cite_spans": [
{
"start": 288,
"end": 290,
"text": "EX",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The writer is clearly positive toward the idea of lowering health care costs. But how does s/he feel about the costs? If s/he is positive toward the idea of lowering them, then, presumably, she is negative toward the costs themselves (specifically, how high they are). The only explicit sentiment expression, tremendous positive change, is positive, yet we can infer a negative attitude toward the object of the event itself (i.e., health care costs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Going further, since the bill is the agent of an event toward which the writer is positive, we may (defeasibly) infer that the writer is positive toward the bill, even though there are no explicit sentiment expressions describing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Now, consider The bill would curb skyrocketing health care costs. The writer expresses an explicit negative sentiment (skyrocketing) toward the object (health care costs) of the event. Note that curbing costs, like lowering them, is bad for them (the costs are reduced). We can reason that, because the event is bad for something toward which the writer is negative, the writer is positive toward the event. We can reason from there, as above, that the writer is positive toward the bill, since it is the agent of the positive event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These examples illustrate how explicit sentiments toward one entity may be propagated to other entities via opinion implicature rules. The rules involve events that positively or negatively affect entities. We call such events good-For/badFor (hereafter gfbf )events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work investigates how gfbf event interactions among entities, combined with opinion inferences, may be exploited to improve classification of the writer's sentiments toward entities mentioned in the text. We introduce four rule schemas which reveal sentiment constraints among gfbf events and their agents and objects. Those constraints are incorporated into a graph-based model, where a node represents an entity (agent/object), and an edge exists between two nodes if the two entities participate in one or more gfbf events with each other. Scores on the nodes represent the explicit sentiments, if any, expressed by the writer toward the entities. Scores on the edges are based on constraints derived from the rules. Loopy Belief Propagation (LBP) (Pearl, 1982) is applied to accomplish sentiment propagation in the graph.",
"cite_spans": [
{
"start": 756,
"end": 769,
"text": "(Pearl, 1982)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two evaluations are performed. The first shows that the graph-based model improves over an explicit sentiment classification system. The second evaluates the graph-based model itself (and hence the implicature rules), assessing its ability to correctly propagate sentiments to nodes whose polarities are unknown. We find it has an 89% chance of propagating sentiment values correctly. This is the first paper to address this type of sentiment propagation to improve sentiment analysis. To eliminate interference introduced by other components, we use manually annotated gfbf information to build the graph. Thus, the evaluations in this paper are able to demonstrate the promise of the overall framework itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Much work in sentiment analysis has been on document-level classification. Since different sentiments may be expressed toward different entities in a document, fine-grained analysis may be more informative for applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, fine-grained sentiment analysis remains a challenging task for NLP systems. For fully-automatic systems evaluated on the MPQA corpus , for example, a recent paper (Johansson and Moschitti, 2013) reports results that improve over previous work, yet the Fmeasures are in the 40s and 50s.",
"cite_spans": [
{
"start": 172,
"end": 203,
"text": "(Johansson and Moschitti, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most work in NLP addresses explicit sentiment, but some address implicit sentiment. For example, identify noun product features that imply opinions, and (Feng et al., 2013) identify objective words that have positive or negative connotations. However, identifying terms that imply opinions is a different task than sentiment propagation between entities. (Dasigi et al., 2012) search for implicit attitudes shared between authors, while we address inferences within a single text.",
"cite_spans": [
{
"start": 153,
"end": 172,
"text": "(Feng et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 355,
"end": 376,
"text": "(Dasigi et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several papers apply compositional semantics to determine polarity (e.g., (Moilanen and Pulman, 2007; Choi and Cardie, 2008; Moilanen et al., 2010) ; see (Liu, 2012) for an overview). The goal of such work is to determine one overall polarity of an expression or sentence. In contrast, our framework commits to a holder having sentiments toward various events and entities in the sentence, possibly of different polarities.",
"cite_spans": [
{
"start": 74,
"end": 101,
"text": "(Moilanen and Pulman, 2007;",
"ref_id": "BIBREF9"
},
{
"start": 102,
"end": 124,
"text": "Choi and Cardie, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 125,
"end": 147,
"text": "Moilanen et al., 2010)",
"ref_id": "BIBREF10"
},
{
"start": 154,
"end": 165,
"text": "(Liu, 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The idea of gfbf events in sentiment analysis is not entirely new. For example, two papers mentioned above Choi and Cardie, 2008) include linguistic patterns for the tasks that they address that include gfbf events, but they don't define general implicature rules relating sentiments and gfbf events, agents, and objects as we do. Recently, in linguistics, Anand and Reschke (2010; identify classes of gfbf terms, and carry out studies involving artificially constructed gfbf triples and corpus examples matching fixed linguistic templates. Our work focuses on gfbf triples in naturally-occurring data and uses generalized implicature rules. Goyal et al. (2012) generate a lexicon of patient polarity verbs, which correspond to gfbf events whose spans are verbs. Riloff et al. (2013) investigate sarcasm where the writer holds a positive sentiment toward a negative situation. However, neither of these works performs sentiment inference. Graph-based models have been used for various tasks in sentiment analysis. Some work (Wang et al., 2011; Tan et al., 2011) apply LBP on a graph capturing the relations between users and tweets in Twitter data . However, they assume the nodes and the neighbors of nodes share the same sentiments. In contrast, we don't assume that neighbors share the same sentiment, and the task we address is different.",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "Choi and Cardie, 2008)",
"ref_id": "BIBREF1"
},
{
"start": 357,
"end": 381,
"text": "Anand and Reschke (2010;",
"ref_id": "BIBREF0"
},
{
"start": 642,
"end": 661,
"text": "Goyal et al. (2012)",
"ref_id": "BIBREF5"
},
{
"start": 763,
"end": 783,
"text": "Riloff et al. (2013)",
"ref_id": "BIBREF13"
},
{
"start": 1024,
"end": 1043,
"text": "(Wang et al., 2011;",
"ref_id": "BIBREF16"
},
{
"start": 1044,
"end": 1061,
"text": "Tan et al., 2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section describes the opinion-implicature framework motivating the design of the graphbased method for sentiment analysis proposed below. The components of the framework are gfbf events, explicit sentiments, and rules operating over gfbf events and sentiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "The definition of a gfbf event is from (Deng et al., 2013) . A GOODFOR event is an event that positively affects an entity (similarly, for BADFOR events). (Deng et al., 2013) point out that gfbf objects are not equivalent to benefactive/malefactive semantic roles. An example they give is She baked a cake for me: a cake is the object of GOOD-FOR event baked (creating something is good for it (Anand and Reschke, 2010) ), while me is the filler of its benefactive semantic role (Z\u00fa\u00f1iga and Kittil\u00e4, 2010) .",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "(Deng et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 155,
"end": 174,
"text": "(Deng et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 394,
"end": 419,
"text": "(Anand and Reschke, 2010)",
"ref_id": "BIBREF0"
},
{
"start": 479,
"end": 505,
"text": "(Z\u00fa\u00f1iga and Kittil\u00e4, 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "Four implicature rule schemas are relevant for this paper. 1 Four individual rules are covered by each schema. sent(\u03b1) = \u03b2 means that the writer's sentiment toward \u03b1 is \u03b2, where \u03b1 is a GOODFOR event, a BADFOR event, or the agent or object of a gfbf event, and \u03b2 is either positive or negative (pos or neg, for short). P \u2192 Q is to infer Q from P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "Rule1: sent(gfbf event) \u2192 sent(object) 1.1 sent(GOODFOR) = pos \u2192 sent(object) = pos 1.2 sent(GOODFOR) = neg \u2192 sent(object) = neg 1.3 sent(BADFOR) = pos \u2192 sent(object) = neg 1.4 sent(BADFOR) = neg \u2192 sent(object) = pos Suppose a sentiment analysis system recognizes only one explicit sentiment expression, skyrocketing. According to the annotations, there are several gfbf events. Each is listed below in the form agent, gfbf, object .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "E 1 : reform, lower, costs E 2 : reform, prohibit, E 3 E 3 : companies, overcharge, patients E 4 : Obama, support, reform",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "In E 1 , from the negative sentiment expressed by skyrocketing (the writer is negative toward the (p. 39).\" (Huddleston and Pullum, 2002) costs because they are too high), and the fact that costs is the object of a BADFOR event (lower), Rule2.4 infers a positive attitude toward E 1 . Now, Rule3.3 applies. We infer the writer is positive toward the reform, since it is the agent of E 1 , toward which the writer is positive.",
"cite_spans": [
{
"start": 108,
"end": 137,
"text": "(Huddleston and Pullum, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "E 2 illustrates the case where the object is an event. Specifically, the object of E 2 is E 3 , a BAD-FOR event (overcharging). As we can see, E 2 keeps E 3 from happening. Events such as E 2 are REVERSERs, because they reverse the polarity of a gfbf event (from BADFOR to GOODFOR, or vice versa). Note that REVERSERs may be seen as BADFOR events, because they make their objects irrealis (i.e., not happen). Similarly, a RE-TAINER such as help in \"help Mary save Bill\" can be viewed as a GOODFOR event. (We call a RE-VERSER or a RETAINER an INFLUENCER.) In this paper, RETAINERS are treated as GOODFOR events and REVERSERS are treated as BADFOR events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "Above, we inferred that the writer is positive toward reform, the agent of E 2 . By Rule 4.3, the writer is positive toward E 2 ; then by Rule 1.3, the writer is negative toward E 3 , the object of E 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "For E 3 , using Rule 1.4 we know the writer is positive toward patients and using Rule 3.4 we know the writer is negative toward companies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "Turning to E 4 , support health care reform is GOODFOR reform. We already inferred the writer is positive toward reform. Rule 2.1 infers that the writer is positive toward E 4 . Rule 3.1 then infers that the writer is positive toward the agent of E 4 , Obama.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
{
"text": "In summary, we infer that the writer is positive toward E 1 , health care reform, E 2 , patients, E 4 , and Obama, and negative toward E 3 and private insurance companies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Implicatures",
"sec_num": "3"
},
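The rule schemas and the E 1 -E 4 walkthrough above can be replayed as a small executable sketch. This is our own illustration, not the authors' code: the function names and the triple encoding are assumptions; the rule logic follows the schemas (GOODFOR preserves polarity between event and object, BADFOR flips it, and agent and event always share polarity).

```python
# Rule schemas 1-4 from Section 3, as tiny functions. "pos"/"neg" polarities;
# event types are "GOODFOR"/"BADFOR". Names are our own (hypothetical).

def flip(p):
    return "neg" if p == "pos" else "pos"

def rule1(event_type, event_sent):   # sent(gfbf event) -> sent(object)
    return event_sent if event_type == "GOODFOR" else flip(event_sent)

def rule2(event_type, object_sent):  # sent(object) -> sent(gfbf event)
    return object_sent if event_type == "GOODFOR" else flip(object_sent)

def rule3(event_sent):               # sent(gfbf event) -> sent(agent)
    return event_sent

def rule4(agent_sent):               # sent(agent) -> sent(gfbf event)
    return agent_sent

# Walkthrough: "skyrocketing" gives the one explicit sentiment, sent(costs) = neg.
sent = {"costs": "neg"}
sent["E1"] = rule2("BADFOR", sent["costs"])     # Rule 2.4: pos
sent["reform"] = rule3(sent["E1"])              # Rule 3.3: pos
sent["E2"] = rule4(sent["reform"])              # Rule 4.3: pos
sent["E3"] = rule1("BADFOR", sent["E2"])        # Rule 1.3: neg
sent["patients"] = rule1("BADFOR", sent["E3"])  # Rule 1.4: pos
sent["companies"] = rule3(sent["E3"])           # Rule 3.4: neg
sent["E4"] = rule2("GOODFOR", sent["reform"])   # Rule 2.1: pos
sent["Obama"] = rule3(sent["E4"])               # Rule 3.1: pos
print(sent)
```

The output reproduces the summary above: positive toward E 1 , reform, E 2 , patients, E 4 , and Obama; negative toward E 3 and companies.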
{
"text": "We use the data described in (Deng et al., 2013 ), 2 which consists of 134 documents about a controversial topic, \"the Affordable Care Act.\" The documents are editorials and blogs, and are full of opinions.",
"cite_spans": [
{
"start": 29,
"end": 47,
"text": "(Deng et al., 2013",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "In the data, gfbf triples are annotated specifying the spans of the gfbf event, its agent, and its object, as well as the polarity of the gfbf event (GOODFOR or BADFOR), and the writer's attitude toward the agent and object (positive, negative, or neutral). Influencers are also annotated. The agents of gfbf and influencer events are noun phrases. The object of a gfbf event is a noun phrase, but the object of an influencer is a gfbf event or another influencer. A triple chain is a chain of zero or more influencers ending in a gfbf event, where the object of each element of the chain is the following element in the chain. (e.g. in EX(2), the two event prohibit and overcharging is a triple chain.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "In total, there are 1,762 annotated gfbf triples, out of which 692 are GOODFOR or RETAINER and 1,070 are BADFOR or REVERSER. From the writer's perspective, 1,495 noun phrases are annotated positive, 1,114 are negative and the remaining 8 are neutral. This is not surprising, given that most of the sentences in the data are opinionated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "We propose a graph-based model of entities and the gfbf relations between them to enable sentiment propagation between entities. In this section, we introduce the definition of the graph (in 5.1), the LBP algorithm (in 5.2), and the definition of its functions for our task (in 5.3 and 5.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Model",
"sec_num": "5"
},
{
"text": "We define a gfbf entity graph EG = {N, E}, in which the node set N consists of nodes, each representing an annotated noun phrase agent or object span. The edge set E consists of edges, each linking two nodes if they co-occur in a triple chain with each other. Consider the triples of EX(2) in Section 3 below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Entity Graph",
"sec_num": "5.1"
},
{
"text": "E 1 : reform, lower, costs E 2 : reform, prohibit, E 3 E 3 : companies, overcharge, patients E 4 : Obama, support, reform",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Entity Graph",
"sec_num": "5.1"
},
{
"text": "The node of reform is linked to nodes of costs via E 1 and Obama via E 4 . 3 Note that, for E 2 and E 3 , the two are linked in a chain: reform, prohibit, companies, overcharge, patients . The three nodes reform, companies and patients participate in this triple chain; thus, pairwise edges exist among them. The edge linking companies and patients is BADFOR (because of overcharging). The edge linking reform and companies is also a BADFOR since we treat a REVERSER as BADFOR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Entity Graph",
"sec_num": "5.1"
},
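The edge-polarity bookkeeping just described can be sketched in a few lines. This is our own formulation (the helper name is hypothetical): a RETAINER counts as GOODFOR, a REVERSER as BADFOR, and an edge spanning a chain of events composes polarities multiplicatively, so two BADFORs yield a GOODFOR edge.

```python
# Compose gfbf polarities along a triple chain: GOODFOR acts like +1 and
# BADFOR like -1, so an even number of BADFORs yields a GOODFOR edge.
# (Illustrative sketch; not the authors' code.)

def compose(*event_polarities):
    sign = 1
    for p in event_polarities:
        sign *= 1 if p == "GOODFOR" else -1
    return "GOODFOR" if sign > 0 else "BADFOR"

# reform --prohibit (REVERSER = BADFOR)--> [companies --overcharge (BADFOR)--> patients]
print(compose("BADFOR"))            # edge reform-companies
print(compose("BADFOR", "BADFOR"))  # edge reform-patients
```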
{
"text": "The edge linking reform and patients encodes two BADFOR events (prohibit-overcharge); computationally we say two BADFORs result in a GOOD-FOR, so the edge linking the two is GOODFOR. 4 Given a text, we get the spans of gfbf events and their agents and objects plus the polarities of the events (GOODFOR/BADFOR) from the manual annotations, and then build the graph upon them. However, the manual annotations of the writer's sentiments toward the agents and objects are used as the gold standard for evaluation.",
"cite_spans": [
{
"start": 174,
"end": 184,
"text": "GOODFOR. 4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Entity Graph",
"sec_num": "5.1"
},
{
"text": "initialize all mi\u2192j(pos) = mi\u2192j(neg) = 1 repeat foreach ni \u2208 N do foreach nj \u2208 N eighbor(ni) do foreach y \u2208 pos, neg do calculate mi\u2192j(y) normalize mi\u2192j(pos) + mi\u2192j(neg) = 1 until all mi\u2192j stop changing; for each ni \u2208 N assign its polarity as argmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "y\u2208pos,neg \u03a6i(y) * n k \u2208N eighbor(n i ) m k\u2192i (y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "neutral, in case of a tie With graph EG containing cycles and no apparent structure, we utilize an approximate collective classification algorithm, loopy belief propagation (LBP) (Pearl, 1982; Yedidia et al., 2005) , to classify nodes through belief message passing. The algorithm is shown in Table 1 .",
"cite_spans": [
{
"start": 179,
"end": 192,
"text": "(Pearl, 1982;",
"ref_id": "BIBREF11"
},
{
"start": 193,
"end": 214,
"text": "Yedidia et al., 2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "In LBP, each node has a score, \u03a6 i (y), and each edge has a score, \u03a8 ij (y i , y j ). In our case, \u03a6 i (y) represents the writer's explicit sentiment toward n i . \u03a8 ij (y i , y j ) is the score on edge e ij , representing the likelihood that node n i has polarity y i and n j has polarity y j . The specific definitions of the two functions are given in Sections 5.3 and 5.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "LBP is an iterative message passing algorithm. A message from n i to n j over edge e ij has two values: m i\u2192j (pos) is how much information from node n i indicates node n j is positive, and m i\u2192j (neg) is how much information from node n i indicates node n j is negative. In each iteration, the two are normalized such that m i\u2192j (pos) + m i\u2192j (neg) = 1. The message from n i to its neighbor n j is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "mi\u2192j(pos) = \u03a8ij(pos, pos) * \u03a6i(pos) * n k \u2208N eighbor(n i )/n j m k\u2192i (pos)+ \u03a8ij(neg, pos) * \u03a6i(neg) * n k \u2208N eighbor(n i )/n j m k\u2192i (neg) (1) mi\u2192j(neg) = \u03a8ij(neg, neg) * \u03a6i(neg) * n k \u2208N eighbor(n i )/n j m k\u2192i (neg)+ \u03a8ij(pos, neg) * \u03a6i(pos) * n k \u2208N eighbor(n i )/n j m k\u2192i (pos) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "For example, the first part of Equation 1means that the positive message n i conveys to n j (i.e., m i\u2192j (pos)) comes from n i being positive itself (\u03a6 i (pos)), the likelihood of edge e ij with its nodes n i being positive and n j being positive (\u03a8 ij (pos, pos)), and the positive message n i 's neighbors (besides n j ) convey to it ( k\u2208N eighbor(n i )/n j m k\u2192i (pos)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
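The update equations can be run end-to-end on the small E 1 -E 4 entity graph. The sketch below is our own illustration, not the authors' code: the hard 0/1 edge potentials follow Section 5.3, the edge polarities follow the Section 5.1 example (REVERSER treated as BADFOR, two BADFORs composing to GOODFOR), and the explicit score for costs (negative, from skyrocketing) is an assumed classifier output.

```python
import math

# Entity graph for E1-E4; edge labels are the composed gfbf polarities.
NODES = ["reform", "costs", "Obama", "companies", "patients"]
EDGES = {
    ("reform", "costs"): "BADFOR",       # E1: lower
    ("reform", "Obama"): "GOODFOR",      # E4: support
    ("reform", "companies"): "BADFOR",   # prohibit (REVERSER = BADFOR)
    ("reform", "patients"): "GOODFOR",   # prohibit + overcharge compose
    ("companies", "patients"): "BADFOR", # E3: overcharge
}

# Phi: explicit sentiment scores; 0.5/0.5 means no explicit sentiment.
PHI = {n: {"pos": 0.5, "neg": 0.5} for n in NODES}
PHI["costs"] = {"pos": 0.2, "neg": 0.8}  # assumed score for "skyrocketing"

def neighbors(n):
    return [b if a == n else a for (a, b) in EDGES if n in (a, b)]

def psi(ni, nj, yi, yj):
    """Hard constraint: GOODFOR edges want equal labels, BADFOR opposite."""
    pol = EDGES.get((ni, nj)) or EDGES.get((nj, ni))
    return 1.0 if (yi == yj) == (pol == "GOODFOR") else 0.0

# m[(i, j)][y]: how much node i's information says node j has polarity y.
m = {(i, j): {"pos": 0.5, "neg": 0.5} for i in NODES for j in neighbors(i)}

for _ in range(30):  # fixed iteration budget in place of a convergence check
    new = {}
    for i in NODES:
        for j in neighbors(i):
            msg = {yj: sum(psi(i, j, yi, yj) * PHI[i][yi] *
                           math.prod(m[(k, i)][yi]
                                     for k in neighbors(i) if k != j)
                           for yi in ("pos", "neg"))
                   for yj in ("pos", "neg")}
            z = msg["pos"] + msg["neg"]
            new[(i, j)] = {y: v / z for y, v in msg.items()}
    m = new

def polarity(n):
    """Final assignment, as at the end of Table 1 (neutral on a tie)."""
    belief = {y: PHI[n][y] * math.prod(m[(k, n)][y] for k in neighbors(n))
              for y in ("pos", "neg")}
    if belief["pos"] == belief["neg"]:
        return "neutral"
    return max(belief, key=belief.get)

print({n: polarity(n) for n in NODES})
```

Starting from the single explicit sentiment on costs, propagation recovers the polarities inferred by hand in Section 3: reform, Obama, and patients positive; costs and companies negative.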
{
"text": "After convergence, the polarity of each node is determined by its explicit sentiment and the messages its neighbors convey to it, as shown at the end of the algorithm in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "By this method, we take into account both sentiments and the interactions between entities via gfbf events in order to discover implicit attitudes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "Note that the node and edge scores are determined initially and do not change. Only m i\u2192j changes from iteration to iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Inference via LBP",
"sec_num": "5.2"
},
{
"text": "The score \u03a8 i,j encodes constraints based on the gfbf relationships that nodes n i and n j participate in, together with the implicature rules given above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03a8 ij (y i , y j ): GFBF Implicature Relations",
"sec_num": "5.3"
},
{
"text": "Rule schemas 1 and 3 infer sentiments toward entities (agent/object) from sentiments toward gfbf events. All cases covered by them are shown in Table 2 (use s(\u03b1) to represent sent(\u03b1)). A table of Rule schemas 2 and 4 would be exactly the same, except that the inference (\u2192) would be in the opposite direction (\u2190).",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "\u03a8 ij (y i , y j ): GFBF Implicature Relations",
"sec_num": "5.3"
},
{
"text": "From Table 2 , we see that, regardless of the writer's sentiment toward the event, if the event is GOODFOR, then the writer's sentiment toward the agent and object are the same, while if the event is BADFOR, the writer's sentiment toward the agent and object are opposite. Thus, the event type and the writer's sentiments toward the agents and objects give us constraints. Therefore, we define \u03a8 ij (pos, pos) and \u03a8 ij (neg, neg) to be 1 if the two nodes are linked by a GOODFOR edge; otherwise, it is 0; and we define \u03a8 ij (neg, pos) and \u03a8 ij (pos, neg) to be 1 if the two nodes are linked by a BADFOR edge; otherwise, it is 0.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "\u03a8 ij (y i , y j ): GFBF Implicature Relations",
"sec_num": "5.3"
},
{
"text": "The score of a node, \u03a6 i (y), represents the sentiment explicitly expressed by the writer toward that entity in the document. Since y ranges over (pos, neg), each node has a positive and a negative score; the scores sum to 1. If it is a positive node, then its positive value ranges from 0.5 to 1, and its negative value ranges from 0 to 0.5 (similarly for negative nodes). For any node without explicit sentiment, both the positive and negative values are 0.5, indicating a neutral node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03a6 i (y): Explicit Sentiment Classifier",
"sec_num": "5.4"
},
{
"text": "Thus, we build a sentiment classifier that takes a node as input and outputs a positive and a negative score. It is built from widely-used, freely available resources: the OpinionFinder and General Inquirer (Stone et al., 1966) lexicons and the OpinionFinder system. 5 We also use a new Opinion Extraction system (Johansson and Moschitti, 2013 ) that shows better performance than previous work on fine-grained sentiment analysis, 6 and a new automatically developed connotation lexicon (Feng et al., 2013) . 7 We implement a weighted voting method among these various sentiment resources. After that, for nodes that have not yet been assigned polar values (positive or negative), we implement a simple local discourse heuristic to try to assign them polar values.",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "(Stone et al., 1966)",
"ref_id": "BIBREF14"
},
{
"start": 267,
"end": 268,
"text": "5",
"ref_id": null
},
{
"start": 313,
"end": 343,
"text": "(Johansson and Moschitti, 2013",
"ref_id": "BIBREF7"
},
{
"start": 431,
"end": 432,
"text": "6",
"ref_id": null
},
{
"start": 487,
"end": 506,
"text": "(Feng et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 509,
"end": 510,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u03a6 i (y): Explicit Sentiment Classifier",
"sec_num": "5.4"
},
{
"text": "The particular strategies were chosen based only on a separate development set, which is not included in the data used in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03a6 i (y): Explicit Sentiment Classifier",
"sec_num": "5.4"
},
{
"text": "Opinion Extraction outputs a polarity expression with its source, and OpinionFinder outputs a polarity word. But neither of the tools extracts the target. To extract the target, for each word in the opinion expression, we select other words in the sentence which are in a mod, obj dependency parsing relation with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sentiment Tools",
"sec_num": "5.4.1"
},
{
"text": "We match up the extracted expressions and the gfbf annotations according to their offsets in the text. For an opinion expression appearing in the sentence with no gfbf annotation, if the root word (in the dependency parse) of the expression span is the same as the root word of a gfbf span, or the root word of an agent span, or the root word of an object span, we assume they match up. Then we assign polarity as follows. If the expression refers only to the agent or object, then the agent or object is assigned the polarity of the expression. If the expression covers the gfbf event and its object, we assume the sentiment is toward the gfbf event and then assign sentiment according to Rule schema 1 (sent(gfbf event) \u2192 sent(object)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sentiment Tools",
"sec_num": "5.4.1"
},
{
"text": "To classify the sentiment expressed within the span of an agent or object, we check whether the words in the span appear in one or more of the lexicons. 8 If a lexicon finds both positive and negative words in the span, we resolve the conflict by choosing the polarity of the root word in the span. If the root word does not have a polar value, we choose the majority polarity of the sentiment words. If there are an equal number of positive and negative words, the polarity is neutral.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicons",
"sec_num": "5.4.2"
},
{
"text": "All together we have two sentiment systems and three lexicons. Before explicit sentiment classifying, each node has a positive value of 0.5 and a negative value of 0.5. We give the five votes equal weight (0.1), and add the number of positive votes multiplied by 0.1 to the positive value, and the number of negative votes multiplied by 0.1 to the negative value. After this addition, both values are in the range 0.5 to 1. If the positive value is larger, we maintain the positive value and assign the negative value to be 1-positive value (similarly if the negative value is larger).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting Scheme among Resources",
"sec_num": "5.4.3"
},
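The voting arithmetic can be sketched directly. This is our own rendering under stated assumptions: the helper name is hypothetical, and the behavior on a positive/negative tie (stay neutral at 0.5/0.5) is our reading, since the text does not spell it out.

```python
# Equal-weight voting over the five sentiment resources: each vote adds 0.1
# on top of the neutral 0.5/0.5 start; the winner keeps its value and the
# loser becomes 1 minus the winner. (Illustrative sketch.)

def vote_score(votes):
    """votes: one of "pos", "neg", or None (abstain) per resource."""
    pos = 0.5 + 0.1 * sum(v == "pos" for v in votes)
    neg = 0.5 + 0.1 * sum(v == "neg" for v in votes)
    if pos > neg:
        return (pos, 1.0 - pos)
    if neg > pos:
        return (1.0 - neg, neg)
    return (0.5, 0.5)  # tie: remain neutral (our assumption)

# Three positive votes, one negative, one abstention -> roughly (0.8, 0.2).
p, n = vote_score(["pos", "pos", "pos", "neg", None])
print(p, n)
```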
{
"text": "For a sentence s, we assume the writer's sentiments toward the gfbf events in the clauses of s, the previous sentence, and the next sentence, are the same. Consider EX 3 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse",
"sec_num": "5.4.4"
},
{
"text": "(3) has three clauses, (a)-(c). Suppose the explicit sentiment classifier recognizes that event (a), denying coverage for pre-existing conditions, is negative and it does not find any other explicit sentiments in the sentence. The system assumes the writer's sentiments toward (b) and (c) are negative as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EX",
"sec_num": null
},
{
"text": "After assigning all possible polarities to events within a sentence, polarities are propagated to the other still-neutral gfbf events in the previous and next sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EX",
"sec_num": null
},
{
"text": "Finally, event-level polarities are propagated to still-neutral objects using Rule schema 1. 9 If there is a conflict, we take the majority sentiment; if there is a tie, the object remains neutral.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EX",
"sec_num": null
},
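Rule schema 1 itself is not reproduced in this excerpt; reading it as the inverse of Rule schema 2 (a GOODFOR event passes its polarity through to the object, a BADFOR event flips it), the final propagation step with majority voting and a neutral tie-break can be sketched as follows. The function and its interface are our own illustration, not the paper's implementation.

```python
from collections import Counter

def object_polarity(events):
    """Sketch of the propagation to still-neutral objects (Sec 5.4.4).

    `events` is a list of (event_polarity, event_type) pairs for the
    gfbf events the object participates in. A GOODFOR event votes its
    own polarity; a BADFOR event votes the flipped polarity (our
    reading of Rule schema 1). Conflicts are resolved by majority;
    ties leave the object neutral.
    """
    flip = {'pos': 'neg', 'neg': 'pos'}
    votes = Counter()
    for polarity, etype in events:
        votes[polarity if etype == 'GOODFOR' else flip[polarity]] += 1
    if votes['pos'] > votes['neg']:
        return 'pos'
    if votes['neg'] > votes['pos']:
        return 'neg'
    return 'neutral'
```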
{
"text": "However, the confidence of the discourse voting is smaller than the explicit sentiment voting, since discourse structure is complex. If by discourse an object node is classified as positive, the positive value is 0.5 + random(0, 0.1) and the negative value is 1-positive value. Thus, the positive value of a positive node is larger than its negative value, but not exceeding too much (similarly for negative nodes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EX",
"sec_num": null
},
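The weaker discourse-based initialization can be sketched directly from the formula above; `discourse_value` is our own name for this helper.

```python
import random

def discourse_value(polarity):
    """Sketch of the low-confidence discourse initialization.

    A node labeled only via discourse gets a value just above 0.5,
    reflecting lower confidence than the explicit-sentiment voting,
    whose values range up to 1.0.
    """
    v = 0.5 + random.uniform(0.0, 0.1)
    return (v, 1.0 - v) if polarity == 'pos' else (1.0 - v, v)
```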
{
"text": "Of the 134 documents in the dataset, 6 were used as a development set, and 3 do not have any annotation. We use the remaining 125 for experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Data",
"sec_num": "6.1"
},
{
"text": "To evaluate the performance of classifying the writer's sentiments toward agents and objects, we define three metrics to evaluate performance. For the entire dataset, accuracy evaluates the percentage of nodes that are classified correctly. Precision and recall are defined to evaluate polar (nonneutral) classification. In the equations, auto is the system's output and gold is the gold-standard label from annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
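The paper's exact equations are not shown in this excerpt; under our reading of the definitions (precision over nodes the system labels polar, recall over nodes whose gold label is polar), the metrics can be sketched as:

```python
def polar_metrics(auto, gold):
    """Sketch of the Section 6.2 metrics (our reading, not the
    paper's code). `auto` and `gold` are parallel lists of
    'pos'/'neg'/'neutral' labels, one per node."""
    # Accuracy: fraction of all nodes classified correctly.
    accuracy = sum(a == g for a, g in zip(auto, gold)) / len(gold)
    # Precision: correct among nodes the system labeled polar.
    auto_polar = [(a, g) for a, g in zip(auto, gold) if a != 'neutral']
    precision = sum(a == g for a, g in auto_polar) / len(auto_polar)
    # Recall: correct among nodes whose gold label is polar.
    gold_polar = [(a, g) for a, g in zip(auto, gold) if g != 'neutral']
    recall = sum(a == g for a, g in gold_polar) / len(gold_polar)
    return accuracy, precision, recall
```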
{
"text": "In this section, we evaluate the performance of the overall system. In 6.5, we evaluate the graph model itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "6.3"
},
{
"text": "Two baselines are defined. One is assigning the majority class label, which is positive, to all agents/objects (M ajority(+)). The second is assuming that agents/objects in a GOODFOR relation are positive and agents/objects in a BADFOR relation are negative (GF BF ). In addition, we evaluate the explicit sentiment classifier introduced in Section 5.4 (Explicit). The results are shown in Table 3 . Table 3 : Performance of baselines and graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 3",
"ref_id": null
},
{
"start": 400,
"end": 407,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "6.3"
},
{
"text": "As can be seen, M ajority and GF BF give approximately 56% precision. Explicit sentiment classification alone performs hardly better in precision and much lower in recall. As mentioned in Section 2, fine-grained sentiment analysis is still very difficult for NLP systems. However, the graph model improves greatly over Explicit in both precision and recall. While recall of the graph model is comparable to the M ajority, precision is much higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Precision Recall",
"sec_num": null
},
{
"text": "During the experiment, if the LBP does not converge until 100 iterations, it is forced to stop. The average number of iteration is 34.192. Table 4 shows the results of an error analysis to determine what contributes to the graph model's errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy Precision Recall",
"sec_num": null
},
{
"text": "1 wrong sentiment from voting 0.2132 2 wrong sentiment from discourse 0.0462 3 subgraph with wrong polarity 0.3189 4 subgraph with no polarity 0.4160 5 other 0.0056 Table 4 : Errors for graph model.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "Rows 1-2 are the error sources for nodes assigned a polar value before graph propagation. Row 1 errors are due to the sentiment-voting system, Row 2 are due to discourse processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "Rows 3-4 are the error sources for nodes that have not been assigned a polar value by Explicit. Such a node receives a polar value only via propagation from other nodes in its subgraph (i.e., the connected component of the graph containing the node). Row 5 is the percentage of other errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "As shown in Rows 1-2, 25.94% of the errors are due to Explicit. These may propagate incorrect labels to other nodes in the graph. As shown in Row 3, 31.89% of the errors are due to nodes not classified polar by Explicit, but given incorrect values because their subgraph has an incorrect polarity. Row 4 shows that 41.60% of the errors are due to nodes that are not assigned any polar value. Given non-ideal input from sentiment analysis, how does the graph model increase precision by 10 percentage points?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "There are two main ways. For nodes which remain neutral after Explicit, they might be classified correctly via the graph. For nodes which are given incorrect polar labels by Explicit, they might be fixed by the graph. Table 5 shows the best the graph model could do, given the noisy input from Explicit. Over all of the nodes, more propagated labels are incorrect than correct. However, if there are no incorrect, or more correct than incorrect sentiments in the subgraph (connected component), then many more of the propagated labels are correct than incorrect. In all cases, more of the changed labels are correct than incorrect.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "The implicature rules are defeasible. In this section we introduce an experiment to valid the con- propagated propagated changed changed label correct label incorrect correctly incorrectly all subgraphs 399 536 424 274 subgraphs having no incorrect sentiment 347 41 260 23 subgraphs having more correct than incorrect sentiment 356 42 288 35 Table 5 : Effects of graph model given Explicit input sistency of implicature rule. Recall that in Section 5.3, the definition of \u03a8 i,j is based on implicature rules and sentiment is propagated based on \u03a8 i,j . Thus, this is also an evaluation of the performance of the graph model itself. We performed an experiment to assess the chance of a node being correctly classified only via the graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 371,
"text": "propagated propagated changed changed label correct label incorrect correctly incorrectly all subgraphs 399 536 424 274 subgraphs having no incorrect sentiment 347 41 260 23 subgraphs having more correct than incorrect sentiment 356 42 288 35 Table 5",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Consistency and Isolated Performance of Graph Model",
"sec_num": "6.5"
},
{
"text": "In each subgraph (connected component), we assign one of the nodes in the subgraph with its gold-standard polarity. Then we run LBP on the subgraph and record whether the other nodes in the subgraph are classified correctly or not. The experiment is run on the subgraph |S| times, where |S| is the number of nodes in the subgraph, so that each node is assigned its gold-standard polarity exactly once. Each node is given a propagated value |S| \u2212 1 times, as each of the other nodes in its subgraph receives its gold-standard polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency and Isolated Performance of Graph Model",
"sec_num": "6.5"
},
{
"text": "To evaluate the chance of a node given a correct propagated label, we use Equations (6) and (7). correct(a|b) = 1 a is correct 0 otherwise (6) correctness(a) = b\u2208Sa,b =a correct(a|b) |Sa| \u2212 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency and Isolated Performance of Graph Model",
"sec_num": "6.5"
},
{
"text": "where S a is the set of nodes in a's subgraph. Given b being assigned its gold-standard polarity, if a is classified correctly, then correct(a|b) is 1; otherwise 0. |S a | is the number of nodes in a's subgraph. correctness(a) is the percentage of assignments to a that are correct. If it is 1, then a is correctly classified given the correct classification of any single node in its subgraph. For example, suppose there are three nodes in a subgraph, A, B and C. For A we (1) assign B its gold label and carry out propagation on the subgraph, (2) assign C its gold label and carry out propagation again, then (3) calculate correctness(A). Then the same process is repeated for B and C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency and Isolated Performance of Graph Model",
"sec_num": "6.5"
},
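Equation (7) is straightforward to express as a function. In this sketch, `classify(a, b)` is our own stand-in interface for one LBP run with node b clamped to its gold polarity, returning True if node a then receives its correct label; it is not part of the paper.

```python
def correctness(a, subgraph, classify):
    """Equation (7): the fraction of leave-one-in runs in which node
    `a` receives the correct propagated label. `subgraph` is the list
    of nodes in a's connected component; `classify(a, b)` stands in
    for running LBP with b clamped to gold (hypothetical interface)."""
    return sum(classify(a, b) for b in subgraph if b != a) / (len(subgraph) - 1)

# Toy three-node subgraph {A, B, C}: propagation succeeds for every
# (target, clamped) pair except when A is predicted from C.
toy = ['A', 'B', 'C']
ok = lambda a, b: (a, b) != ('A', 'C')
```

With this toy setup, correctness('A') averages one success and one failure over its two runs, while correctness('B') is 1.0.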
{
"text": "Some subgraphs contain only two nodes, the agent and the object. In this case, graph propagation corresponds to single applications of two implicature rules. Other subgraphs contain more nodes. Two results are shown in Table 6 . One is the result on the whole experiment data, the other is the result for all nodes whose subgraphs have more than two nodes.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Consistency and Isolated Performance of Graph Model",
"sec_num": "6.5"
},
{
"text": "# subgraph correctness all subgraphs 983 0.8874 multi-node subgraphs 169 0.9030 Table 6 : Performance of graph model itself.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "As we can see, a node has an 89% chance of being correct if there is one correct explicit subjectivity node in its subgraph. If we only consider subgraphs with more than two nodes, the correctness chance is higher. The results indicate that, if given correct sentiments, the graph model will assign the unknown nodes with correct labels 90% of the time. Further, the results indicate that the implicature rules are consistent for most of the times across the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "We developed a graph-based model based on implicature rules to propagate sentiments among entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The model improves over explicit sentiment classification by 10 points in precision and, in an evaluation of the model itself, we find it has an 89% chance of propagating sentiments correctly. An important question for future work is under what conditions do the implicatures not go through in context. Two cases we have discovered involve Rule schema 3: the inference toward the agent is defeated if the action was accidental or if the agent was forced to perform it. We are investigating lexical clues for recognizing such cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Implicatures \"normally accompany the utterances of a given sentence unless special factors exclude that possibility",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://mpqa.cs.pitt.edu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This assumes that the two instances of \"reform\" co-refer. However, the system does not resolve co-reference -the methods that we tried did not improve overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Also, GOODFOR+BADFOR=BADFOR; GOOD-FOR+GOODFOR=GOODFOR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://mpqa.cs.pitt.edu and http://www.wjh.harvard.edu/ inquirer/ 6 As evaluated on the MPQA corpus. Note that the authors ran their system for us on the data we use.7 http://www.cs.stonybrook.edu/\u223cychoi/connotation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The comparison is done after lemmatization, using the wordNet lemmatization in NLTK, and with the same POS, according to the Stanford POStagger toolkit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that, in the gfbf entity graph, sentiments can be propagated from objects to agents, conceptually via Rule schemas 2 and 3. Thus, here we only classify objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments. This work was supported in part by DARPA-BAA-12-47 DEFT grant #12475008 and National Science Foundation grant #IIS-0916046. We would like to thank Richard Johansson and Alessandro Moschitti for running their Opinion Extraction systems on our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Verb classes as evaluativity functor classes",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Reschke",
"suffix": ""
}
],
"year": 2010,
"venue": "Interdisciplinary Workshop on Verbs. The Identification and Representation of Verb Features",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Anand and Kevin Reschke. 2010. Verb classes as evaluativity functor classes. In Interdisciplinary Workshop on Verbs. The Identification and Repre- sentation of Verb Features.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of the 2008 Conference on Empirical Methods in Nat- ural Language Processing, pages 793-801, Hon- olulu, Hawaii, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Genre independent subgroup detection in online discussion threads: A study of implicit attitude using textual latent semantics",
"authors": [
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "65--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradeep Dasigi, Weiwei Guo, and Mona Diab. 2012. Genre independent subgroup detection in online dis- cussion threads: A study of implicit attitude us- ing textual latent semantics. In Proceedings of the 50th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 65-69, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Benefactive and malefactive event and writer attitude annotation",
"authors": [
{
"first": "Lingjia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Yoonjung",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2013,
"venue": "51st Annual Meeting of the Association for Computational Linguistics (ACL-2013",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingjia Deng, Yoonjung Choi, and Janyce Wiebe. 2013. Benefactive and malefactive event and writer attitude annotation. In 51st Annual Meeting of the Association for Computational Linguistics (ACL- 2013, short paper).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Connotation lexicon: A dash of sentiment beneath the surface meaning",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jun Sak",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song Feng, Jun Sak Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceed- ings of the 51th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), Sofia, Bulgaria, Angust. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A computational model for plot units",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Intelligence",
"volume": "",
"issue": "",
"pages": "466--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Goyal, Ellen Riloff, and Hal Daum III. 2012. A computational model for plot units. Computational Intelligence, pages 466-488.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Cambridge Grammar of the English Language",
"authors": [
{
"first": "D",
"middle": [],
"last": "Rodney",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"K"
],
"last": "Huddleston",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pullum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodney D. Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press, April.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Relational features in fine-grained opinion analysis",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Alessandro Moschitti. 2013. Relational features in fine-grained opinion analysis. Computational Linguistics, 39(3).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Min- ing. Morgan & Claypool.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentiment composition",
"authors": [
{
"first": "Karo",
"middle": [],
"last": "Moilanen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of RANLP 2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karo Moilanen and Stephen Pulman. 2007. Senti- ment composition. In Proceedings of RANLP 2007, Borovets, Bulgaria.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Packed feelings and ordered sentiments: Sentiment parsing with quasi-compositional polarity sequencing and compression",
"authors": [
{
"first": "Karo",
"middle": [],
"last": "Moilanen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 1st Workshop on Computational Approaches to Subjectivity and Sentiment Analysis",
"volume": "",
"issue": "",
"pages": "36--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karo Moilanen, Stephen Pulman, and Yue Zhang. 2010. Packed feelings and ordered sentiments: Sentiment parsing with quasi-compositional polar- ity sequencing and compression. In Proceedings of the 1st Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2010), pages 36-43.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Reverend bayes on inference engines: A distributed hierarchical approach",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1982,
"venue": "Proceedings of the American Association of Artificial Intelligence National Conference on AI",
"volume": "",
"issue": "",
"pages": "133--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pearl. 1982. Reverend bayes on inference engines: A distributed hierarchical approach. In Proceedings of the American Association of Artificial Intelligence National Conference on AI, pages 133-136, Pitts- burgh, PA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extracting contextual evaluativity",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Reschke",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Ninth International Conference on Computational Semantics, IWCS '11",
"volume": "",
"issue": "",
"pages": "370--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Reschke and Pranav Anand. 2011. Extracting contextual evaluativity. In Proceedings of the Ninth International Conference on Computational Seman- tics, IWCS '11, pages 370-374, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sarcasm as contrast between a positive sentiment and negative situation",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Surve",
"suffix": ""
},
{
"first": "Lalindra De",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "704--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalin- dra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sen- timent and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 704-714, Seattle, Washington, USA, October. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The General Inquirer: A Computer Approach to Content Analysis",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
},
{
"first": "D",
"middle": [
"C"
],
"last": "Dunphy",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Smith",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Ogilvie",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.J. Stone, D.C. Dunphy, M.S. Smith, and D.M. Ogilvie. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT Press, Cam- bridge.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "User-level sentiment analysis incorporating social networks",
"authors": [
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1397--1405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. User-level sentiment analysis incorporating social networks. In Proceed- ings of the 17th ACM SIGKDD international con- ference on Knowledge discovery and data mining, pages 1397-1405. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Topic sentiment anaylsis in twitter: A graph-based hashtag sentiment classification appraoch",
"authors": [
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2011,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "1031--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaolong Wang, Furu Wei, Xiaohua Liu, Ming zhou, and Ming Zhang. 2011. Topic sentiment anaylsis in twitter: A graph-based hashtag sentiment classifica- tion appraoch. In CIKM, pages 1031-1040.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Annotating expressions of opinions and emotions in language ann",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "Language Resources and Evaluation",
"volume": "39",
"issue": "2/3",
"pages": "164--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language ann. Language Resources and Evaluation, 39(2/3):164-210.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recognizing contextual polarity in phraselevel sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase- level sentiment analysis. In HLT/EMNLP, pages 347-354.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Constructing free-energy approximations and generalized belief propagation algorithms. Information Theory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Jonathan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yedidia",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Freeman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on",
"volume": "51",
"issue": "7",
"pages": "2282--2312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan S Yedidia, William T Freeman, and Yair Weiss. 2005. Constructing free-energy approx- imations and generalized belief propagation algo- rithms. Information Theory, IEEE Transactions on, 51(7):2282-2312.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Identifying noun product features that imply opinions",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "575--580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Zhang and Bing Liu. 2011. Identifying noun prod- uct features that imply opinions. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 575-580, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Benefactives and malefactives, Typological studies in language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Z\u00fa\u00f1iga",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kittil\u00e4",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Z\u00fa\u00f1iga and S. Kittil\u00e4. 2010. Introduction. In F. Z\u00fa\u00f1iga and S. Kittil\u00e4, editors, Benefactives and malefactives, Typological studies in language. J. Benjamins Publishing Company.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Rule2: sent(object) \u2192 sent(gfbf event) 2.1 sent(object) = pos \u2192 sent(GOODFOR) = pos 2.2 sent(object) = neg \u2192 sent(GOODFOR) = neg 2.3 sent(object) = pos \u2192 sent(BADFOR) = neg 2.4 sent(object) = neg \u2192 sent(BADFOR) = posRule3: sent(gfbf event) \u2192 sent(agent) 3.1 sent(GOODFOR) = pos \u2192 sent(agent) = pos 3.2 sent(GOODFOR) = neg \u2192 sent(agent) = neg 3.3 sent(BADFOR) = pos \u2192 sent(agent) = pos 3.4 sent(BADFOR) = neg \u2192 sent(agent) = neg Rule4: sent(agent) \u2192 sent(gfbf event) 4.1 sent(agent) = pos \u2192 sent(GOODFOR) = pos 4.2 sent(agent) = neg \u2192 sent(GOODFOR) = neg 4.3 sent(agent) = pos \u2192 sent(BADFOR) = pos 4.4 sent(agent) = neg \u2192 sent(BADFOR) = neg To explain the rules, we step through an example: EX(2) Why would [President Obama] support [health care reform]? Because [reform] could lower [skyrocketing health care costs], and prohibit [private insurance companies] from overcharging [patients]."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ": EX(3) ... health-insurance regulations that will prohibit (a) denying coverage for pre-existing conditions, (b) dropping coverage if the client gets sick, and (c) capping insurance company reimbursement..."
},
"TABREF0": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>"
}
}
}
}