{
"paper_id": "P16-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:56:27.961120Z"
},
"title": "Connotation Frames: A Data-Driven Investigation",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": "",
"affiliation": {},
"email": "hrashkin@cs.washington.edu"
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {},
"email": "sameer@cs.washington.edu"
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Through a particular choice of a predicate (e.g., \"x violated y\"), a writer can subtly connote a range of implied sentiment and presupposed facts about the entities x and y: (1) writer's perspective: projecting x as an \"antagonist\" and y as a \"victim\", (2) entities' perspective: y probably dislikes x, (3) effect: something bad happened to y, (4) value: y is something valuable, and (5) mental state: y is distressed by the event. We introduce connotation frames as a representation formalism to organize these rich dimensions of connotation using typed relations. First, we investigate the feasibility of obtaining connotative labels through crowdsourcing experiments. We then present models for predicting the connotation frames of verb predicates based on their distributional word representations and the interplay between different types of connotative relations. Empirical results confirm that connotation frames can be induced from various data sources that reflect how language is used in context. We conclude with analytical results that show the potential use of connotation frames for analyzing subtle biases in online news media.",
"pdf_parse": {
"paper_id": "P16-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "Through a particular choice of a predicate (e.g., \"x violated y\"), a writer can subtly connote a range of implied sentiment and presupposed facts about the entities x and y: (1) writer's perspective: projecting x as an \"antagonist\" and y as a \"victim\", (2) entities' perspective: y probably dislikes x, (3) effect: something bad happened to y, (4) value: y is something valuable, and (5) mental state: y is distressed by the event. We introduce connotation frames as a representation formalism to organize these rich dimensions of connotation using typed relations. First, we investigate the feasibility of obtaining connotative labels through crowdsourcing experiments. We then present models for predicting the connotation frames of verb predicates based on their distributional word representations and the interplay between different types of connotative relations. Empirical results confirm that connotation frames can be induced from various data sources that reflect how language is used in context. We conclude with analytical results that show the potential use of connotation frames for analyzing subtle biases in online news media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "People commonly express their opinions through subtle and nuanced language (Thomas et al., 2006; Somasundaran and Wiebe, 2010) . Often, through seemingly objective statements, the writer can influence the readers' judgments toward an event and its participants. Even by choosing a particular predicate, the writer can indicate rich connotative information about the entities that interact through the predicate. More specifically, through a simple statement such as \"x violated y\", the writer can convey:",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(Thomas et al., 2006;",
"ref_id": "BIBREF35"
},
{
"start": 97,
"end": 126,
"text": "Somasundaran and Wiebe, 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) writer's perspective: the writer is projecting x as an \"antagonist\" and y as a \"victim\", eliciting a negative perspective from readers toward x (i.e., blaming x) and a positive perspective toward y (i.e., being sympathetic or supportive toward y). (2) entities' perspective: y most likely feels negatively toward x as a result of being violated. (3) effect: something bad happened to y. (4) value: y is something valuable, since it does not make sense to violate something worthless; in other words, the writer is presupposing a positive value of y as a fact. (5) mental state: y is distressed by the event. A hearing is scheduled to make a decision on whether to uphold the clinic's suspension. Table 1: Example typed relations (perspective P(x \u2192 y), effect E(x), value V(x), and mental state S(x)). Not all typed relations are shown due to space constraints. The example sentences demonstrate the usage of the predicates in left [L] or right [R] leaning news sources.",
"cite_spans": [
{
"start": 878,
"end": 881,
"text": "[L]",
"ref_id": null
},
{
"start": 891,
"end": 894,
"text": "[R]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 642,
"end": 649,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Even though the writer might not explicitly state any of the interpretations (1)-(5) above, the readers will be able to interpret these intentions as a part of their comprehension. In this paper, we present an empirical study of how to represent and induce the connotative interpretations that can be drawn from a verb predicate, as illustrated above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "We introduce connotation frames as a representation framework to organize the rich dimensions of the implied sentiment and presupposed facts. Figure 1 shows an example of a connotation frame for the predicate violate. We define four different typed relations: P(x \u2192 y) for the perspective of x towards y, E(x) for the effect on x, V(x) for the value of x, and S(x) for the mental state of x. Each of these relations can be positive (+), neutral (=), or negative (\u2212).",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 150,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "Our work is the first study to investigate frames as a representation formalism for connotative meanings. This contrasts with previous computational studies and resource development for frame semantics, where the primary focus was almost exclusively on denotational meanings of language (Baker et al., 1998; Palmer et al., 2005) . Our formalism nevertheless draws inspiration from the earlier work on frame semantics, in that we investigate the connection between a word and the related world knowledge associated with it (Fillmore, 1976) , which is essential for the readers to interpret the many layers of implied sentiment and presupposed value judgments.",
"cite_spans": [
{
"start": 287,
"end": 307,
"text": "(Baker et al., 1998;",
"ref_id": "BIBREF3"
},
{
"start": 308,
"end": 328,
"text": "Palmer et al., 2005)",
"ref_id": "BIBREF28"
},
{
"start": 525,
"end": 541,
"text": "(Fillmore, 1976)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "We also build upon the extensive literature in sentiment analysis (Pang and Lee, 2008; Liu and Zhang, 2012) , especially the recent emerging efforts on implied sentiment analysis (Feng et al., 2013; Greene and Resnik, 2009) , entity-entity sentiment inference, opinion role induction (Wiegand and Ruppenhofer, 2015) and effect analysis (Choi and Wiebe, 2014) . However, our work is the first to organize various aspects of the connotative information into coherent frames.",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Pang and Lee, 2008;",
"ref_id": "BIBREF29"
},
{
"start": 97,
"end": 117,
"text": "Liu and Zhang, 2012)",
"ref_id": "BIBREF24"
},
{
"start": 189,
"end": 208,
"text": "(Feng et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 209,
"end": 233,
"text": "Greene and Resnik, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 349,
"end": 380,
"text": "(Wiegand and Ruppenhofer, 2015)",
"ref_id": "BIBREF39"
},
{
"start": 401,
"end": 423,
"text": "(Choi and Wiebe, 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "More concretely, our contributions are threefold: (1) a new formalism, model, and annotated dataset for studying connotation frames from large-scale natural language data and statistics, (2) new data-driven insights into the dynamics among different typed relations within each frame, and (3) an analytic study showing the potential use of connotation frames for analyzing subtle biases in journalism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "The rest of the paper is organized as follows: in \u00a72, we provide the definitions and data-driven insights for connotation frames. In \u00a73, we introduce models for inducing the connotation frames, followed by empirical results, annotation studies, and analysis on news media in \u00a74. We discuss related work in \u00a75 and conclude in \u00a76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R",
"sec_num": null
},
{
"text": "Given a predicate v, we define a connotation frame F(v) as a collection of typed relations and their polarity assignments: (i) perspective Pv(ai \u2192 aj): a directed sentiment from the entity ai to the entity aj, (ii) value Vv(ai): whether ai is presupposed to be valuable, (iii) effect Ev(ai): whether the event denoted by the predicate v is good or bad for the entity ai, and (iv) mental state Sv(ai): the likely mental state of the entity ai as a result of the event. We assume that each typed relation can have one of three connotative polarities \u2208 {+, \u2212, =}, i.e., positive, negative, or neutral. Our goal in this paper is to focus on the general connotation of the predicate considered out of context; we leave contextual interpretation of connotation as future work. Table 1 shows example typed relations for the verbs suffer, guard, and uphold, along with example sentences. For instance, for the verb suffer, the writer is likely to have a positive perspective towards the agent (e.g., being supportive or sympathetic toward the \"17-year-old girl\" in the example shown on the right) and a negative perspective towards the theme (e.g., being negative towards the \"botched abortion\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation Frame",
"sec_num": "2"
},
{
"text": "Since the meaning of language is ultimately contextual, the exact connotation will vary depending on the context of each utterance. Nonetheless, there are still common shifts or biases in the connotative polarities, as we found from two data-driven analyses. First, we looked at words from the Subjectivity Lexicon that are used in the argument positions of a small selection of predicates in Google Syntactic N-grams (Goldberg and Orwant, 2013) . For this analysis, we assumed that the word in the subject position is the agent while the object is the theme. We found that 64% of the words in the agent position of suffer are positive, and 94% of the words in the theme position are negative, which is consistent with the polarities of the writer's perspective towards these arguments, as shown in Table 1 . For guard, 57% of the subjects and 76% of the objects are positive, and in the case of uphold, 56% of the subjects and 72% of the objects are positive.",
"cite_spans": [
{
"start": 420,
"end": 447,
"text": "(Goldberg and Orwant, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 796,
"end": 803,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data-driven Motivation",
"sec_num": "2.1"
},
{
"text": "We also investigated how media bias can potentially be analyzed through connotation frames. From the Stream Corpus 2014 dataset (KBA, 2014), we selected all articles from news outlets with known political biases, and compared how they use polarized words such as \"accuse\", \"attack\", and \"criticize\" differently in light of the P(w \u2192 agent) and P(w \u2192 theme) relations of the connotation frames. Table 2 shows interesting contrasts. Obama, for example, is portrayed as someone who attacks or criticizes others according to the right-leaning sources, whereas the left-leaning sources portray Obama as the victim of harsh acts like \"attack\" or \"criticize\". Furthermore, by knowing the perspective relationships P(w \u2192 ai) associated with a predicate, we can make predictions about how the left-leaning and right-leaning sources feel about specific people or issues. For example, because left-leaning sources frequently use McCain, Trump, and Limbaugh in the subject position of attack, we might predict that these sources have a negative sentiment towards these entities.",
"cite_spans": [],
"ref_spans": [
{
"start": 392,
"end": 399,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data-driven Motivation",
"sec_num": "2.1"
},
{
"text": "Given a predicate, the polarity assignments of typed relations are interdependent. For example, if the writer feels positively towards the agent but negatively towards the theme, then it is likely that the agent and the theme do not feel positively towards each other. This insight is related to prior work on entity-entity sentiment inference, but differs in that our polarities are predicate-specific and do not rely on knowledge of prior sentiment towards the arguments. These and other possible interdependencies are summarized in Table 3 . These interdependencies serve as general guidelines for which properties we expect to depend on one another, especially when the polarities are non-neutral. We will promote these internal consistencies in our factor graph model ( \u00a73) as soft constraints.",
"cite_spans": [],
"ref_spans": [
{
"start": 495,
"end": 502,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "There also exist other interdependencies that we will use to simplify our task. Perspective Triad: If A is positive towards B, and B is positive towards C, then we expect that A is also positive towards C. Similar dynamics hold for the negative case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Pw\u2192a1 = \u00ac(Pw\u2192a2 \u2295 Pa1\u2192a2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Perspective-Effect: If a predicate has a positive effect on the Subject, then we expect that the interaction between the Subject and the Object was positive. Similar dynamics hold for the negative case and for the other perspective relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Ea1 = Pa2\u2192a1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Perspective-Value: If A is presupposed to be valuable, then we expect that the writer also views A positively. Similar dynamics hold for the negative case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Va1 = Pw\u2192a1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Effect-Mental State: If the predicate has a positive effect on A, then we expect that A will gain a positive mental state. Similar dynamics hold for the negative case. First, the directed sentiments between the agent and the theme are likely to be reciprocal, or at least not to conflict directly with + and \u2212 simultaneously. Therefore, we assume that P(a1 \u2192 a2) = P(a2 \u2192 a1) = P(a1 \u2194 a2), and we only measure these binary relationships in one direction. In addition, we assume that the predicted perspective from the reader r to an argument, P(r \u2192 a), is likely to be the same as the implied perspective from the writer w to the same argument, P(w \u2192 a), so we only try to learn the perspective of the writer. Lifting these assumptions will be future work. For simplicity, our model only explores the polarities involving the agent and the theme roles. We will assume that these roles are correlated with the subject and object positions, and henceforth refer to them as the \"Subject\" and \"Object\" of the event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Sa1 = Ea1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamics between Typed Relations",
"sec_num": "2.2"
},
{
"text": "Our task is essentially that of lexicon induction (Akkaya et al., 2009; Feng et al., 2013 ) in that we want to induce the connotation frames of previously unseen verbs. For each predicate, we infer a connotation frame composed of 9 relationship aspects that represent:",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "(Akkaya et al., 2009;",
"ref_id": "BIBREF1"
},
{
"start": 72,
"end": 89,
"text": "Feng et al., 2013",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Connotation Frames",
"sec_num": "3"
},
{
"text": "perspective {P(w \u2192 o), P(w \u2192 s), P(s \u2192 o)}, effect {E(o), E(s)}, value {V(o), V(s)}, and mental state {S(o), S(s)} polarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Connotation Frames",
"sec_num": "3"
},
{
"text": "We propose two models: an aspect-level model that makes the prediction for each typed relation independently based on the distributional representation of the context in which the predicate appears (\u00a73.1), and a frame-level model that makes the prediction over the connotation frame collectively, taking into consideration the dynamics between typed relations (\u00a73.2). Footnote 4: Surely, different readers can and will form varying opinions after reading the same text; here we concern ourselves with the most likely perspective of the general audience as a result of reading the text. Figure 2: A factor graph for predicting the polarities of the typed relations that define a connotation frame for a given verb predicate. The factor graph also includes unary factors (\u03c8emb), which we left out for brevity.",
"cite_spans": [],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modeling Connotation Frames",
"sec_num": "3"
},
{
"text": "Our aspect-level model predicts labels for each of these typed relations separately. As input, we use the 300-dimensional dependency-based word embeddings from Levy and Goldberg (2014) . For each aspect, a separate MaxEnt (maximum entropy) classifier predicts the label of that aspect from a given word embedding, which is treated as a 300-dimensional input vector to the classifier. The MaxEnt classifiers learn their weights using L-BFGS on the training examples, with re-weighting of samples to maximize the average F1 score.",
"cite_spans": [
{
"start": 160,
"end": 184,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect-Level",
"sec_num": "3.1"
},
{
"text": "Next we present a factor graph model ( Figure 2 ) of the connotation frames that parameterizes the dynamics between typed relations. Specifically, for each verb predicate, the factor graph contains 9 nodes representing the different aspects of the connotation frame. All of these variables take polarity values from the set {\u2212, =, +}.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "We define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "Yi := {Pwo, Pws, Pso, Eo, Es, Vo, Vs, So, Ss}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "as the set of relational aspects for the i-th verb. The factor graph for Yi is illustrated in Figure 2 , and we describe the factor potentials in more detail in the rest of this section. The probability of an assignment of polarities to the nodes in Yi is:",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 105,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "P(Yi) \u221d \u03c8PV(Pws, Vs) \u03c8PV(Pwo, Vo) \u03c8PE(Pso, Es) \u03c8PE(Pso, Eo) \u03c8ES(Es, Ss) \u03c8ES(Eo, So) \u03c8PT(Pwo, Pws, Pso) \u220f_{y \u2208 Yi} \u03c8emb(y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "Embedding Factors: We include unary factors on all nodes to represent the results of the aspect-level classifier. Incorporating this knowledge as factors, as opposed to fixing the variables as observed, affords us the flexibility of representing noise in the labels as soft evidence. The potential function \u03c8emb is a log-linear function of a feature vector f, which is a one-hot feature vector representing the polarity of a node (+, \u2212, or =). For example, for the node representing the value of the object (Vo):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "\u03c8emb(Vo) = e^{wVo \u2022 f(Vo)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "The potential \u03c8 emb is defined similarly for the other 8 remaining nodes. All weights were learned using stochastic gradient descent (SGD) over training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "Interdependency Factors: We include interdependency factors to promote the properties defined by the dynamics between relations ( \u00a72.2). The potentials for the Perspective Triad, Perspective-Value, Perspective-Effect, and Effect-State relationships (\u03c8PT, \u03c8PV, \u03c8PE, and \u03c8ES, respectively) are all defined using log-linear functions of one-hot feature vectors that encode the combination of polarities of the neighboring nodes. The potential \u03c8PT is therefore:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "\u03c8PT(Pwo, Pws, Pso) = e^{wPT \u2022 f(Pwo, Pws, Pso)}. Footnote 5: We consider only verb predicates here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "We define the potentials for \u03c8PV, \u03c8PE, and \u03c8ES for the subject nodes as \u03c8PV(Pws, Vs) = e^{wPV,s \u2022 f(Pws, Vs)}, \u03c8PE(Pso, Es) = e^{wPE,s \u2022 f(Pso, Es)}, and \u03c8ES(Es, Ss) = e^{wES,s \u2022 f(Es, Ss)}, and we define the potentials for the object nodes similarly. As with the unary seed factors, weights were learned using SGD over training data. Belief Propagation: We use belief propagation to induce the connotation frames of previously unseen verbs. In the belief propagation algorithm, messages are iteratively passed from nodes to their neighboring factors and vice versa. Each message \u00b5, containing a scalar for each value x \u2208 {\u2212, =, +}, is defined from each node v to a neighboring factor a as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "\u00b5v\u2192a(x) \u221d \u220f_{a* \u2208 N(v) \u2216 {a}} \u00b5a*\u2192v(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "and from each factor a to a neighboring node v as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "\u00b5a\u2192v(x) \u221d \u2211_{x\u2032 : x\u2032v = x} \u03c8(x\u2032) \u220f_{v* \u2208 N(a) \u2216 {v}} \u00b5v*\u2192a(x\u2032v*)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "At the conclusion of message passing, the probability that the polarity of node v equals x is proportional to \u220f_{a \u2208 N(v)} \u00b5a\u2192v(x). Our factor graph does not contain any loops, so we are able to perform exact inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frame-Level",
"sec_num": "3.2"
},
{
"text": "We first describe crowd-sourced annotations ( \u00a74.1), then present the empirical results of predicting connotation frames ( \u00a74.2), and conclude with qualitative analysis of a large corpus ( \u00a74.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In order to understand how humans interpret connotation frames, we designed an Amazon Mechanical Turk (AMT) annotation study. We gathered a set of transitive verbs commonly used in the New York Times corpus (Sandhaus, 2008) , selecting the 2400 verbs that are used more than 200 times in the corpus. Of these, AMT workers annotated the 1000 most frequently used verbs. Annotation Design: In a pilot annotation experiment, we found that annotators have difficulty judging subtle connotative polarities when shown predicates without any context. Therefore, we designed the AMT task to provide a generic context as follows. We first split each verb predicate into 5 separate tasks that each gave workers a different generic sentence using the verb. To create generic sentences, we used Google Syntactic N-grams (Goldberg and Orwant, 2013) to come up with frequently seen Subject-Verb-Object tuples, each of which served as a simple three-word sentence with generic arguments. For each of the 5 sentences, we asked 3 annotators to answer questions like \"How do you think the Subject feels about the event described in this sentence?\" In total, each verb has 15 annotations aggregated over 5 different generic sentences containing the verb.",
"cite_spans": [
{
"start": 207,
"end": 223,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF31"
},
{
"start": 814,
"end": 841,
"text": "(Goldberg and Orwant, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Crowdsourcing",
"sec_num": "4.1"
},
{
"text": "In order to help the annotators, some of the questions also allowed annotators to choose sentiment using the additional classes \"positive or neutral\" and \"negative or neutral\", for when they were less confident but still felt that a sentiment might exist. When computing inter-annotator agreement, we count \"positive or neutral\" as agreeing with either the \"positive\" or the \"neutral\" class. Annotator Agreement: Table 4 shows agreements and data statistics. The non-conflicting (NC) agreement only counts opposite polarities as disagreement. From this study, we can see that non-expert annotators are able to perceive these sorts of relationships based on their understanding of how language is used. From the NC agreement, we see that annotators do not frequently choose completely opposite polarities, indicating that even when they disagree, their disagreements are based on the degree of connotation rather than the polarity itself. The average Krippendorff alpha for all of the questions posed to the workers is 0.25, indicating stronger than random agreement. Considering the subtlety of the implicit sentiments that we are asking them to annotate, it is reasonable that some annotators will pick up on more nuances than others. Overall, the percent agreement is encouraging evidence that the connotative relationships are visible to human annotators.",
"cite_spans": [
{
"start": 531,
"end": 532,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 401,
"end": 408,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Data and Crowdsourcing",
"sec_num": "4.1"
},
{
"text": "We aggregated the crowdsourced labels (fifteen annotations per verb) to create a polarity label for each aspect of a verb. Footnote 6: Annotators were asked yes/no questions related to Value, so this aspect does not have a corresponding NC agreement score. Footnote 7: We take the average to obtain a scalar value in [\u22121, 1] for each aspect of a verb's connotation frame; for simplicity, we cut off the ranges of negative, neutral, and positive polarities at [\u22121, \u22120.25), [\u22120.25, 0.25], and (0.25, 1], respectively. Table 4 caption: ... inter-annotator agreement. The strict agreement counts agreement over 3 classes (\"positive or neutral\" was counted as agreeing with either + or neutral), while non-conflicting (NC) agreement also allows agreements between neutral and \u2212/+ (no direct conflicts). Distribution shows the final class distribution of \u2212/+ labels created by averaging annotations. Final distributions of the aggregated labels are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aggregating Annotations",
"sec_num": null
},
{
"text": "included in the right-hand columns of Table 4 . Notably, the distributions are skewed toward positive and neutral labels. The most skewed connotation frame aspect is the value V(x), which tends to be positive, especially for the subject argument. This makes intuitive sense: since the subject actively causes the predicate event to occur, it most likely has some intrinsic potential to be valuable. An example of a verb whose subject was labeled as not valuable is \"contaminate\". In the most generic case, the writer is using contaminate to frame the subject as being worthless (and even harmful) with regard to the other event participants. For example, in the sentence \"his touch contaminated the food,\" it is clear that the writer considers \"his touch\" to be of negative value in the context of how it impacts the rest of the event.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Aggregating Annotations",
"sec_num": null
},
{
"text": "Using our crowdsourced labels, we randomly divided the annotated verbs into training, dev, and held-out test sets of equal size (300 verbs each). For evaluation we measured average accuracy and F1 score over the 9 different Connotation Frame relationship types for which we have annotations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": "P(w \u2192 o), P(w \u2192 s), P(s \u2192 o), V(o), V(s), E(o), E(s), S(o)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": ", and S(s). Baselines: To show the non-trivial challenge of learning connotation frames, we include a simple majority-class baseline. The MAJORITY classifier assigns each of the 9 relationships the majority label of that relationship type found in the training data. Some of these relationships (in particular, the Value of the subject/object) have skewed distributions, so we expect this classifier to achieve a much higher accuracy than random but a much lower overall F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": "Additionally, we add a GRAPH PROP baseline that is comparable to algorithms such as graph propagation or label propagation, which are often used for (sentiment) lexicon induction. We use a factor graph with nodes representing the polarity of each typed relation for each verb. Binary factors connect nodes representing a particular type of relation for two similar verbs (e.g., P(w \u2192 o) for the verbs persuade and convince). These binary factors have hand-tuned potentials that are proportional to the cosine similarity of the verbs' embeddings, encouraging similar verbs to have the same polarity for the various relational aspects. We use the words in the training data as the seed set and run loopy belief propagation to propagate polarities from known nodes to the unknown relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": "Finally, we use a 3-NEAREST NEIGHBOR baseline that labels relationships for a verb based on the predicate's 300-dimensional word embedding representation, using the same embeddings as in our aspect-level model. 3-NEAREST NEIGHBOR labels each verb using the polarities of the three closest verbs found in the training set, where the most similar verbs are determined using the cosine similarity between word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": "Results As shown in Table 5 , the aspect-level and frame-level models consistently outperform all three baselines (MAJORITY, 3-NN, and GRAPH PROP) on the development set across the different types of relationships. In particular, the improved F1 scores show that these models perform better across all three label classes even in the most skewed cases. The frame-level model also frequently improves on the F1 scores of the aspect-level model. A summarized comparison of the classifiers' performance on the test set is shown in Table 6 . As on the development set, the aspect-level and frame-level models both outperform the baselines. Furthermore, the frame-level formulation improves over the results of the aspect-level classification, indicating that modeling the interdependencies between relationships did help correct some of the mistakes.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 561,
"end": 568,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": "One point of interest about the frame-level results is whether the learned weights over the consistency factors match our initial intuitions about interdependencies between relationships. The weights learned in our algorithm do tell us something interesting about the degree to which these interdependencies are actually found in our data. We show the heat maps for some of the learned weights in Figure 3 . In 3a, we show the weights of one of the embedding factors and how the polarities are weighted more strongly when they match the relation-level output. In the rest of the figure, we show the weights for the other perspective relationships when P(w \u2192 o) is negative (3b), neutral (3c), and positive (3d), respectively. Based on the expected interdependencies, when P(w \u2192 o) : \u2212, the model should favor P(w \u2192 s) = \u2212P(s \u2192 o), and when P(w \u2192 o) : +, the model should favor P(w \u2192 s) = P(s \u2192 o). Our model does, in fact, learn a similar trend, with slightly higher weights along these two diagonals in maps 3b and 3d. Interestingly, when P(w \u2192 o) is neutral, the weights slightly prefer the other two perspectives to resemble one another, with the highest weights occurring when the other perspectives are also neutral.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Connotation Frame Prediction",
"sec_num": "4.2"
},
{
"text": "Using connotation frames, we now present measurements of implied sentiment in online journalism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of a Large News Corpus",
"sec_num": "4.3"
},
{
"text": "Data From the Stream Corpus (KBA, 2014), we select 70 million news articles. We extract subject-verb-object relations for this subset using the direct dependencies between noun phrases and verbs as identified by the BBN Serif system, obtaining 1.2 billion unique tuples of the form (url,subject,verb,object,count) . We also extract subject-verb-object tuples from news articles found in the Annotated English Gigaword Corpus (Napoles et al., 2012) , which contains nearly 10 million articles. From the Gigaword corpus we extract a further 120 million unique tuples.",
"cite_spans": [
{
"start": 282,
"end": 313,
"text": "(url,subject,verb,object,count)",
"ref_id": null
},
{
"start": 426,
"end": 448,
"text": "(Napoles et al., 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of a Large News Corpus",
"sec_num": "4.3"
},
{
"text": "Estimating Entity Polarities Using connotation frames, we can also measure entity-to-entity sentiment at a large scale. Figure 4 , for example, presents the polarity of the entities \"Democrats\" and \"Republicans\" towards a selected set of nouns, computed as the average estimated polarity (using our lexicon) over triples where one of these entities appears as part of the subject (e.g., \"Democrats\" or \"Republican party\"). Apart from nouns towards which both entities are positive (\"business\", \"constitution\") or negative (\"the allegations\", \"veto threat\"), we also see interesting examples in which Democrats are more positive (below the line: \"nancy pelosi\", \"unions\", \"gun control\", etc.) and ones in which Republicans are more positive (\"the pipeline\", \"gop leaders\", \"budget cuts\", etc.). Both entities are also neutral towards \"idea\" and \"the proposal\", probably because ideas or proposals can be good or bad for either entity depending on the context.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Analysis of a Large News Corpus",
"sec_num": "4.3"
},
{
"text": "Most prior work on sentiment lexicons focused on the overall polarity of words without taking into account their semantic arguments (Baccianella et al., 2010; Velikovich et al., 2010; Kaji and Kitsuregawa, 2007; Kamps et al., 2004; Takamura et al., 2005; Adreevskaia and Bergler, 2006) . Several recent studies began exploring more specific and nuanced aspects of sentiment such as connotation (Feng et al., 2013) , good and bad effects (Choi and Wiebe, 2014) , and evoked sentiment (Mohammad and Turney, 2010). Drawing inspiration from them, we present connotation frames as a unifying representation framework that encodes the rich dimensions of implied sentiment, presupposed value judgements, and effect evaluation, and propose a factor graph formulation that captures the interplay among different types of connotation relations. Goyal et al. (2010a; 2010b) investigated how characters (protagonists, villains, victims) in children's stories are affected by certain predicates, which is related to the effect relations studied in this work. While Klenner et al. (2014) similarly investigated the relation between the polarity of verbs and their arguments, our work introduces new perspective types and proposes a unified representation and inference model. Wiegand and Ruppenhofer (2015) also looked at perspective-based relationships induced by verb predicates, with a focus on opinion roles. Building on this concept, our framework also incorporates information about the perspectives' polarities as well as about other typed relations. There has been growing interest in modeling framing (Greene and Resnik, 2009; Hasan and Ng, 2013) , biased language (Recasens et al., 2013) , and ideology detection (Yano et al., 2010) . All these tasks are relatively understudied, and we hope our connotation frame lexicon will be useful for them.",
"cite_spans": [
{
"start": 132,
"end": 157,
"text": "Baccianella et al., 2010;",
"ref_id": "BIBREF2"
},
{
"start": 158,
"end": 182,
"text": "Velikovich et al., 2010;",
"ref_id": "BIBREF36"
},
{
"start": 183,
"end": 210,
"text": "Kaji and Kitsuregawa, 2007;",
"ref_id": "BIBREF19"
},
{
"start": 211,
"end": 230,
"text": "Kamps et al., 2004;",
"ref_id": "BIBREF20"
},
{
"start": 231,
"end": 253,
"text": "Takamura et al., 2005;",
"ref_id": "BIBREF34"
},
{
"start": 254,
"end": 284,
"text": "Adreevskaia and Bergler, 2006)",
"ref_id": "BIBREF0"
},
{
"start": 393,
"end": 412,
"text": "(Feng et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 436,
"end": 458,
"text": "(Choi and Wiebe, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 832,
"end": 852,
"text": "Goyal et al. (2010a;",
"ref_id": "BIBREF14"
},
{
"start": 853,
"end": 859,
"text": "2010b)",
"ref_id": "BIBREF15"
},
{
"start": 888,
"end": 921,
"text": "(protagonists, villains, victims)",
"ref_id": null
},
{
"start": 1049,
"end": 1070,
"text": "Klenner et al. (2014)",
"ref_id": "BIBREF22"
},
{
"start": 1257,
"end": 1287,
"text": "Wiegand and Ruppenhofer (2015)",
"ref_id": "BIBREF39"
},
{
"start": 1605,
"end": 1630,
"text": "(Greene and Resnik, 2009;",
"ref_id": "BIBREF16"
},
{
"start": 1631,
"end": 1650,
"text": "Hasan and Ng, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 1669,
"end": 1692,
"text": "(Recasens et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 1716,
"end": 1735,
"text": "(Yano et al., 2010)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Sentiment inference rules have been explored in recent work (e.g., Deng and Wiebe, 2014). In contrast, we make a novel conceptual connection between inferred sentiments and frame semantics, organized as connotation frames, and present a unified model that integrates different aspects of the connotation frames. Finally, in a broader sense, what we study as connotation frames draws a connection to schema and script theory (Schank and Abelson, 1975) . Unlike most prior work that focused on directly observable actions (Chambers and Jurafsky, 2009; Frermann et al., 2014; Bethard et al., 2008) , we focus on implied sentiments that are framed by predicate verbs.",
"cite_spans": [
{
"start": 407,
"end": 433,
"text": "(Schank and Abelson, 1975)",
"ref_id": "BIBREF32"
},
{
"start": 503,
"end": 532,
"text": "(Chambers and Jurafsky, 2009;",
"ref_id": "BIBREF7"
},
{
"start": 533,
"end": 555,
"text": "Frermann et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 556,
"end": 577,
"text": "Bethard et al., 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we presented a novel system of connotation frames that defines a set of implied sentiments and presupposed facts for a predicate. Our work also empirically explores different methods of inducing and modeling these connotation frames, incorporating the interplay between relations within frames. Our work suggests new research avenues on learning connotation frames and their applications to a deeper understanding of social and political discourse. All the learned connotation frames and annotations will be shared at http://homes.cs.washington.edu/~hrashkin/connframe.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "To be more precise, y is most likely in a negative state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The articles come from 30 news sources indicated by others as exhibiting liberal or conservative leanings (Mitchell et al., 2014; Center for Media and Democracy, 2013; Center for Media and Democracy, 2012; HWC Library, 2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "That is, even if someone truly deserves criticism from Obama, left-leaning sources would choose slightly different wordings to avoid a potentially harsh cast on Obama.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for many insightful comments. We also thank members of UW NLP for discussions and support. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082. The work is also supported in part by NSF grants IIS-1408287, IIS-1524371 and gifts by Google and Facebook.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mining wordnet for fuzzy sentiment: Sentiment tag extraction from wordnet glosses",
"authors": [
{
"first": "Alina",
"middle": [],
"last": "Adreevskaia",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": ""
}
],
"year": 2006,
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "209--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alina Adreevskaia and Sabine Bergler. 2006. Mining wordnet for fuzzy sentiment: Sentiment tag extrac- tion from wordnet glosses. In 11th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 209-216.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Subjectivity word sense disambiguation",
"authors": [
{
"first": "Cem",
"middle": [],
"last": "Akkaya",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "190--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cem Akkaya, Janyce Wiebe, and Rada Mihalcea. 2009. Subjectivity word sense disambiguation. In Pro- ceedings of the 2009 Conference on Empirical Meth- ods in Natural Language Processing, volume 2, pages 190-199.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. Sentiwordnet 3.0: An enhanced lexi- cal resource for sentiment analysis and opinion min- ing. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The berkeley framenet project",
"authors": [
{
"first": "F",
"middle": [],
"last": "Collin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Charles",
"suffix": ""
},
{
"first": "John B",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 17th international conference on Compu- tational linguistics, volume 1, pages 86-90.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Building a corpus of temporal-causal structure",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Corvey",
"suffix": ""
},
{
"first": "James H",
"middle": [],
"last": "Klingenstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, William J Corvey, Sara Klingenstein, and James H Martin. 2008. Building a corpus of temporal-causal structure. In Proceedings of the Sixth International Conference on Language Re- sources and Evaluation (LREC'08).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Center for Media and Democracy. 2012. Sourcewatch: Conservative news outlets",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Center for Media and Democracy. 2012. Sourcewatch: Conservative news outlets.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Center for Media and Democracy. 2013. Sourcewatch: Liberal news outlets",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "http://www.sourcewatch.org/index. php/Conservative_news_outlets. Center for Media and Democracy. 2013. Sourcewatch: Liberal news outlets. http: //www.sourcewatch.org/index.php/ Liberal_news_outlets.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised learning of narrative schemas and their participants",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2009. Unsu- pervised learning of narrative schemas and their par- ticipants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th In- ternational Joint Conference on Natural Language Processing of the AFNLP, volume 2 of ACL '09, pages 602-610.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "+/-effectwordnet: Sense-level lexicon acquisition for opinion inference",
"authors": [
{
"first": "Yoonjung",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1181--1191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoonjung Choi and Janyce Wiebe. 2014. +/- effectwordnet: Sense-level lexicon acquisition for opinion inference. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181-1191. Associa- tion for Computational Linguistics, October.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentiment propagation via implicature constraints",
"authors": [
{
"first": "Lingjia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingjia Deng and Janyce Wiebe. 2014. Sentiment propagation via implicature constraints. In Pro- ceedings of the Conference of the European Chap- ter of the Association for Computational Linguistics (EACL).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Connotation lexicon: A dash of sentiment beneath the surface meaning",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jun Seok",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "1774--1784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Pro- ceedings of the 51st Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), vol- ume 1, pages 1774-1784. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Frame semantics and the nature of language",
"authors": [
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1976,
"venue": "Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech",
"volume": "280",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles J. Fillmore. 1976. Frame semantics and the nature of language. In In Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech, volume 280, pages 2032.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A hierarchical bayesian model for unsupervised induction of script knowledge",
"authors": [
{
"first": "Lea",
"middle": [],
"last": "Frermann",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lea Frermann, Ivan Titov, and Manfred Pinkal. 2014. A hierarchical bayesian model for unsupervised in- duction of script knowledge. In Proceedings of the Conference of the European Chapter of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A dataset of syntactic-ngrams over time from a very large corpus of english books",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Orwant",
"suffix": ""
}
],
"year": 2013,
"venue": "Second Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "1",
"issue": "",
"pages": "241--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In Second Joint Conference on Lexical and Computational Semantics (*SEM), vol- ume 1, pages 241-247, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatically producing plot unit representations for narrative text",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Goyal, Ellen Riloff, and Hal Daum\u00e9, III. 2010a. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Con- ference on Empirical Methods in Natural Language Processing, Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 77-86.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Toward plot units: Automatic affect state analysis",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of HLT/NAACL Workshop on Computational Approaches to Analysis and Generation of Emotion in Text (CAET)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Goyal, Ellen Riloff, Hal Daum\u00e9 III, and Nathan Gilbert. 2010b. Toward plot units: Automatic affect state analysis. In Proceedings of HLT/NAACL Work- shop on Computational Approaches to Analysis and Generation of Emotion in Text (CAET).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "More than words: Syntactic packaging and implicit sentiment",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Greene",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "503--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 503-511.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Frame semantics for stance classification",
"authors": [
{
"first": "Saidul",
"middle": [],
"last": "Kazi",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning (CONLL)",
"volume": "",
"issue": "",
"pages": "124--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazi Saidul Hasan and Vincent Ng. 2013. Frame se- mantics for stance classification. Proceedings of the Seventeenth Conference on Computational Natural Language Learning (CONLL), pages 124-132.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Consider the Source: A Resource Guide to Liberal, Conservative, and Nonpartisan Periodicals. www.ccc.edu/colleges/ washington/departments/Documents/ PeriodicalsPov.pdf. Compiled by HWC Librarians",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwc Library",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HWC Library. 2011. Consider the Source: A Resource Guide to Liberal, Conservative, and Nonpartisan Periodicals. www.ccc.edu/colleges/ washington/departments/Documents/ PeriodicalsPov.pdf. Compiled by HWC Librarians in January 2011.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Building lexicon for sentiment analysis from massive collection of html documents",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "1075--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobuhiro Kaji and Masaru Kitsuregawa. 2007. Build- ing lexicon for sentiment analysis from massive col- lection of html documents. In Proceedings of the 2007 Joint Conference on Empirical Methods in Nat- ural Language Processing and Computational Nat- ural Language Learning (EMNLP-CoNLL), pages 1075-1083.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using wordnet to measure semantic orientations of adjectives",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Marx",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Mokken",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation(LREC'04)",
"volume": "4",
"issue": "",
"pages": "1115--1118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaap Kamps, Maarten Marx, Robert J Mokken, and Maarten De Rijke. 2004. Using wordnet to mea- sure semantic orientations of adjectives. In Pro- ceedings of the Fourth International Conference on Language Resources and Evaluation(LREC'04), vol- ume 4, pages 1115-1118.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Knowledge Base Acceleration Stream Corpus",
"authors": [
{
"first": "",
"middle": [],
"last": "Trec Kba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TREC KBA. 2014. Knowledge Base Accelera- tion Stream Corpus. http://trec-kba.org/ kba-stream-corpus-2014.shtml.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Verb polarity frames: a new resource and its application in target-specific polarity classification",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Klenner",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Amsler",
"suffix": ""
},
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of KONVENS 2014",
"volume": "",
"issue": "",
"pages": "106--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Klenner, Michael Amsler, and Nora Hollen- stein. 2014. Verb polarity frames: a new resource and its application in target-specific polarity classi- fication. In Proceedings of KONVENS 2014, pages 106-115.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (ACL), pages 302-308.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A survey of opinion mining and sentiment analysis",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Mining text data",
"volume": "",
"issue": "",
"pages": "415--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining text data, pages 415-463. Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Political Polarization & Media Habits",
"authors": [
{
"first": "Amy",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Gottfried",
"suffix": ""
},
{
"first": "Jocelyn",
"middle": [],
"last": "Kiley",
"suffix": ""
},
{
"first": "Katerina",
"middle": [
"Eva"
],
"last": "Matsa",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Mitchell, Jeffrey Gottfried, Jocelyn Kiley, and Katerina Eva Matsa. 2014. Political Polarization & Media Habits. www.journalism.org/2014/10/21/ political-polarization-media-habits/.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Peter D Turney. 2010. Emo- tions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Gener- ation of Emotion in Text, pages 26-34. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Annotated gigaword",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Pro- ceedings of the Joint Workshop on Automatic Knowl- edge Base Construction and Web-scale Knowledge Extraction, pages 95-100. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated cor- pus of semantic roles. Computational linguistics, 31(1):71-106.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Foundations and Trends in Information Retrieval",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in infor- mation retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Linguistic models for analyzing and detecting biased language",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1650--1659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for an- alyzing and detecting biased language. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1650- 1659.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The New York Times annotated corpus",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "Linguistic Data Consortium, Philadelphia",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Scripts, plans, and knowledge",
"authors": [
{
"first": "Roger",
"middle": [
"C"
],
"last": "Schank",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"P"
],
"last": "Abelson",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger C Schank and Robert P Abelson. 1975. Scripts, plans, and knowledge. Yale University.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Recognizing stances in ideological on-line debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "116--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2010. Rec- ognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Genera- tion of Emotion in Text, pages 116-124. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Extracting semantic orientations of words using spin model",
"authors": [
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 43rd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. In Proceedings of 43rd Annual Meeting of the Association for Computational Lin- guistics (ACL).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Get out the vote: Determining support or opposition from Congressional floor-debate transcripts",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "327--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceed- ings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 327-335.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The viability of web-derived polarity lexicons",
"authors": [
{
"first": "Leonid",
"middle": [],
"last": "Velikovich",
"suffix": ""
},
{
"first": "Sasha",
"middle": [],
"last": "Blair-Goldensohn",
"suffix": ""
},
{
"first": "Kerry",
"middle": [],
"last": "Hannan",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "777--785",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viabil- ity of web-derived polarity lexicons. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 777- 785.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "An account of opinion implicatures",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Lingjia",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2014,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe and Lingjia Deng. 2014. An account of opinion implicatures. CoRR, abs/1404.6491.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Annotating expressions of opinions and emotions in language",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "Language Resources and Evaluation",
"volume": "39",
"issue": "",
"pages": "165--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language resources and evalua- tion, 39(2-3):165-210.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Opinion holder and target extraction based on the induction of verbal categories",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand and Josef Ruppenhofer. 2015. Opin- ion holder and target extraction based on the in- duction of verbal categories. Proceedings of the 2015 Conference on Computational Natural Lan- guage Learning (CoNLL), page 215.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Recognizing contextual polarity in phraselevel sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase- level sentiment analysis. In Proceedings of the con- ference on human language technology and empiri- cal methods in natural language processing, pages 347-354.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Shedding (a thousand points of) light on biased language",
"authors": [
{
"first": "Tae",
"middle": [],
"last": "Yano",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, CSLDAMT '10",
"volume": "",
"issue": "",
"pages": "152--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tae Yano, Philip Resnik, and Noah A. Smith. 2010. Shedding (a thousand points of) light on biased lan- guage. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, CSLDAMT '10, pages 152-158.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"text": "An example connotation frame of \"violate\" as a set of typed relations: perspective P(x \u2192 y), effect E(x), value V(x), and mental state S(x).",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "(5) mental state: y is most likely unhappy about the outcome. The story begins in Illinois in 1987, when a 17-year-old girl suffered a botched abortion.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "(a) w emb for P(s \u2192 o) (b) P(w \u2192 o): - (c) P(w \u2192 o): = (d) P(w \u2192 o): + Learned weights of the embedding factor for the perspective of subject to object, and the weights of the perspective triad (PT) factor. Red indicates weights that are more positive, whereas blue indicates weights that are more negative.",
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"num": null,
"text": "Average sentiment of Democrats and Republicans (as subjects) to selected nouns (as their objects), aggregated over a large corpus using the learned lexicon ( \u00a74.2). The line indicates identical sentiments, i.e. Republicans are more positive towards the nouns that are above the line.",
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>shows examples of connotation frame</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "Media Bias in Connotation Frames: Obama, for example, is portrayed as someone who attacks or criticizes others by the right-leaning sources, whereas the left-leaning sources portray Obama as the victim of harsh acts like \"attack\" and \"criticize\".",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "Potential Dynamics among Typed Relations: we propose models that parameterize these dynamics using log-linear models (frame-level model in \u00a73).",
"num": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "Label Statistics: % Agreement refers to pairwise",
"num": null,
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"text": "Detailed breakdown of results on the development set using accuracy and average F1 over the three class labels (+,-,=).",
"num": null,
"content": "<table><tr><td>Algorithm</td><td>Acc. Avg F 1</td></tr><tr><td>Graph Prop</td><td>58.81 41.46</td></tr><tr><td>3-nn</td><td>63.71 47.30</td></tr><tr><td colspan=\"2\">Aspect-Level 67.93 53.17</td></tr><tr><td colspan=\"2\">Frame-Level 68.26 53.50</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"html": null,
"text": "Performance on the test set. Results are averaged over the different aspects.",
"num": null,
"content": "<table/>"
}
}
}
}