{
"paper_id": "P15-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:12:26.199572Z"
},
"title": "Joint Models of Disagreement and Stance in Online Debate",
"authors": [
{
"first": "Dhanya",
"middle": [],
"last": "Sridhar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California Santa Cruz",
"location": {}
},
"email": "dsridhar@ucsc.edu"
},
{
"first": "James",
"middle": [],
"last": "Foulds",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California Santa Cruz",
"location": {}
},
"email": "jfoulds@ucsc.edu"
},
{
"first": "Bert",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": "bhuang@vt.edu"
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California Santa Cruz",
"location": {}
},
"email": "getoor@ucsc.edu"
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California Santa Cruz",
"location": {}
},
"email": "mawalker@ucsc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Online debate forums present a valuable opportunity for the understanding and modeling of dialogue. To understand these debates, a key challenge is inferring the stances of the participants, all of which are interrelated and dependent. While collectively modeling users' stances has been shown to be effective (Walker et al., 2012c; Hasan and Ng, 2013), there are many modeling decisions whose ramifications are not well understood. To investigate these choices and their effects, we introduce a scalable unified probabilistic modeling framework for stance classification models that 1) are collective, 2) reason about disagreement, and 3) can model stance at either the author level or at the post level. We comprehensively evaluate the possible modeling choices on eight topics across two online debate corpora, finding accuracy improvements of up to 11.5 percentage points over a local classifier. Our results highlight the importance of making the correct modeling choices for online dialogues, and having a unified probabilistic modeling framework that makes this possible.",
"pdf_parse": {
"paper_id": "P15-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "Online debate forums present a valuable opportunity for the understanding and modeling of dialogue. To understand these debates, a key challenge is inferring the stances of the participants, all of which are interrelated and dependent. While collectively modeling users' stances has been shown to be effective (Walker et al., 2012c; Hasan and Ng, 2013), there are many modeling decisions whose ramifications are not well understood. To investigate these choices and their effects, we introduce a scalable unified probabilistic modeling framework for stance classification models that 1) are collective, 2) reason about disagreement, and 3) can model stance at either the author level or at the post level. We comprehensively evaluate the possible modeling choices on eight topics across two online debate corpora, finding accuracy improvements of up to 11.5 percentage points over a local classifier. Our results highlight the importance of making the correct modeling choices for online dialogues, and having a unified probabilistic modeling framework that makes this possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Understanding stance and opinion in dialogues can provide critical insight into the theoretical underpinnings of discourse, argumentation, and sentiment. Systems for predicting the stances of individuals can potentially have positive social impact and are of practical interest to non-profits, governmental organizations, and companies. For exam- Figure 1 : Example of a debate dialogue turn between two users on the gun control topic, from 4FORUMS.COM.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 355,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ple, stance predictions may be used to target public awareness and advocacy campaigns, direct political fundraising and get-out-the vote efforts, and improve personalized recommendations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Online debate websites are a particularly rich source of argumentative dialogic data ( Fig. 1 ). On these websites, users debate and share their opinions on a variety of social and political issues. Previous work (Somasundaran and Wiebe, 2010; Walker et al., 2012c) has shown that stance classification in online debates is a challenging problem. While collective approaches that jointly predict user stance seem promising (Walker et al., 2012c; Hasan and Ng, 2013) , the rich structure of online debate forums necessitates many modeling choices. For example, users publish opinions and reply and respond to each other's posts. In so doing, they may agree or disagree with either all or a portion of another user's post, suggesting that collective classifiers for stance may benefit from text-based disagreement modeling. Furthermore, one can model stance either at the author level-assuming that an author's stance is based on all of their posts on a topic (Burfoot et al., 2011 )-or at the post level-assuming that an author's stance is post-specific and may vary across posts (Hasan and Ng, 2013) . These decisions can drastically change the nature of stance models, so understanding their implications is critical.",
"cite_spans": [
{
"start": 213,
"end": 243,
"text": "(Somasundaran and Wiebe, 2010;",
"ref_id": "BIBREF18"
},
{
"start": 244,
"end": 265,
"text": "Walker et al., 2012c)",
"ref_id": "BIBREF23"
},
{
"start": 423,
"end": 445,
"text": "(Walker et al., 2012c;",
"ref_id": "BIBREF23"
},
{
"start": 446,
"end": 465,
"text": "Hasan and Ng, 2013)",
"ref_id": "BIBREF11"
},
{
"start": 958,
"end": 979,
"text": "(Burfoot et al., 2011",
"ref_id": "BIBREF7"
},
{
"start": 1079,
"end": 1099,
"text": "(Hasan and Ng, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 87,
"end": 93,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we develop a flexible modeling framework for stance classification using probabilistic soft logic (PSL) (Bach et al., 2013; Bach et al., 2015) , a recently introduced probabilistic modeling framework. PSL is a probabilistic programming system that allows models to be specified using a declarative, rule-like language. The resulting models are a special form of conditional random field, called a hinge-loss Markov random field, which admits highly scalable exact inference (Bach et al., 2013) . Modeling stance in large, richly connected online debate forums requires a careful exploration of many modeling choices. This complex domain especially benefits from PSL's flexibility and scalability. PSL makes it easy to develop model variations and extensions, as one can readily incorporate new factors capturing additional intuitions about dependencies in a domain.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Bach et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 139,
"end": 157,
"text": "Bach et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 489,
"end": 508,
"text": "(Bach et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our models on data from two debate sites, 4FORUMS and CREATEDEBATE (Walker et al., 2012b; Hasan and Ng, 2013) , which we describe in detail in Section 2. Our experimental results show that there are important ramifications of several modeling decisions, including whether to use collective or non-collective models, to represent stance at the post level or the author level, and how to model disagreement. We find that with appropriate modeling choices, our approach leads to improvements of up to 11.5 percentage points of accuracy over simple classification approaches.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Walker et al., 2012b;",
"ref_id": "BIBREF22"
},
{
"start": 102,
"end": 121,
"text": "Hasan and Ng, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions include (1) a flexible, unified framework for modeling online debates, (2) extensive experimental study of many possible models on eight forum datasets, collected across two different debate websites, and (3) general modeling recommendations resulting from our empirical studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Online debate forums represent richly structured argumentative dialogues. On these forums, users debate with each other in discussion threads on a variety of topics or issues, such as gun control, gay marriage, and marijuana legalization. Each discussion consists of a number of posts, which are short text documents authored by users of the forum. A post is either a reply to a previous post, or it is the start (root) of a thread. As users engage with each other, a thread branches out into a tree of argumentative interactions between the users. Forum users often post numerous times and across multiple discussions and topics, which creates a richly structured interaction graph. Online debates present different challenges than more controlled dialogic settings such as congressional debates. Posts are short and informal, there is limited external information about authors, and debate topics admit many modes of argumentation ranging from serious, to tangential, to sarcastic. The reply graph in online debates also has substantially different semantics to networks in other debate settings, such as the graph of speaker mentions in congressional debates. To illustrate this setting, Fig. 1 shows an example dialogue between two users who are debating their opinions on the topic of gun control.",
"cite_spans": [],
"ref_spans": [
{
"start": 1191,
"end": 1197,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "In the context of online debate forums, stance classification (Thomas et al., 2006; Somasundaran and Wiebe, 2009) is the task of assigning stance labels with respect to a discussion topic, either at the level of the user or the level of the post. Stance is typically treated as a binary classification problem, with labels PRO and ANTI. In Fig. 1 , both users' stances toward gun control are ANTI.",
"cite_spans": [
{
"start": 62,
"end": 83,
"text": "(Thomas et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 84,
"end": 113,
"text": "Somasundaran and Wiebe, 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 340,
"end": 346,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "Previous work on stance in online debates has shown that contextual information given by reply links is important for stance classification (Walker et al., 2012a) , and that collective classification often outperforms methods which treat each post independently. Hasan and Ng (2013) use conditional random fields (CRFs) to encourage opposite stances between sequences of posts, and Walker et al. (2012c) use MaxCut over explicitly given rebuttal links between posts to separate them into PRO and ANTI clusters. Sridhar et al. (2014) use hinge-loss Markov random fields (HL-MRFs) to encourage consistency between post level stance labels and observed post-level textual agreements and disagreements.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Walker et al., 2012a)",
"ref_id": "BIBREF21"
},
{
"start": 382,
"end": 403,
"text": "Walker et al. (2012c)",
"ref_id": "BIBREF23"
},
{
"start": 511,
"end": 532,
"text": "Sridhar et al. (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "While the first two approaches leverage rebuttal or reply links, they model reply links as being indicative of opposite stances. However, as shown in Fig. 1 , responses-even rebuttals-can occur between users with the same stance, which suggests the benefit of a more nuanced treatment of reply links. The approach of Sridhar et al. (2014) considers text-based agreement annotations between posts, though it requires that reply links are labeled. Accurate reply polarity labels are likely to be as expensive to obtain as the stance labels that we aim to predict. Noisy or sparse reply labels are cheaper, though likely to reduce performance. In this work, we show how to reason over uncertain reply label predictions to improve stance classification.",
"cite_spans": [
{
"start": 317,
"end": 338,
"text": "Sridhar et al. (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 150,
"end": 156,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "Also in the online debate setting, Hasan and Ng (2014) show the benefits of joint modeling to classify post-level stance and the authors' reasons for their stances. In contrast, in this work we focus on the dependencies between stance and polarity of replies.",
"cite_spans": [
{
"start": 35,
"end": 54,
"text": "Hasan and Ng (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "In the context of opinion subgroup discovery, Abu-Jbara and Radev (2013) demonstrate the effectiveness of clustering users by opiniontarget similarity. In contrast, Murakami and Raymond (2010) use simple recurring patterns such as \"that's a good idea\" to categorize reply links as agree, disagree or neutral, prior to using Max-Cut for subgroup clustering of comment streams on government websites. This approach improves over a MaxCut approach that casts all reply links as disagreements. Building on this work, Lu et al. (2012) model unsupervised discovery of supporting and opposing groups of users for topics in online military forums. They improve upon a Max-Cut baseline by formulating a linear program (LP) to combine multiple textual and reply-link signals, suggesting the benefits of jointly modeling textual and reply-link features.",
"cite_spans": [
{
"start": 46,
"end": 72,
"text": "Abu-Jbara and Radev (2013)",
"ref_id": "BIBREF1"
},
{
"start": 513,
"end": 529,
"text": "Lu et al. (2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "In a different line of work, while Somasundaran and Wiebe (2010) do not use relational information between users or posts, their approach shows the benefit of modeling opinions and their targets at a fine-grained level using relational sentiment analysis techniques. Similarly, Wang and Cardie (2014) demonstrate the effectiveness of using sentiment analysis to identify disputes on Wikipedia Talk pages. Boltu\u017ei\u0107 and \u0160najder (2014) and Ghosh et al. (2014) study various linguistic features to model stance and agreement interactions, respectively.",
"cite_spans": [
{
"start": 278,
"end": 300,
"text": "Wang and Cardie (2014)",
"ref_id": "BIBREF24"
},
{
"start": 437,
"end": 456,
"text": "Ghosh et al. (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "In the congressional debate setting, approaches using CRFs and similar collective techniques such as minimum-cut have also leveraged reply link polarity for improvements in stance classification (Thomas et al., 2006; Bansal et al., 2008; Balahur et al., 2009; Burfoot et al., 2011) . However, these methods rely heavily on features specific to the congressional setting in order to predict link polarity, and make little use of textual features. In contrast, Abbott et al. (2011) use a range of linguistic features from the text of posts and their parents to classify agreement or disagreement between posts on the online debate website 4FORUMS.COM, without the goal of classifying stance.",
"cite_spans": [
{
"start": 195,
"end": 216,
"text": "(Thomas et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 217,
"end": 237,
"text": "Bansal et al., 2008;",
"ref_id": "BIBREF5"
},
{
"start": 238,
"end": 259,
"text": "Balahur et al., 2009;",
"ref_id": "BIBREF4"
},
{
"start": 260,
"end": 281,
"text": "Burfoot et al., 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "In this work, we study datasets from two online debate websites: 4FORUMS.COM, from the Internet Argument Corpus (Walker et al., 2012b) , and CREATEDEBATE.COM (Hasan and Ng, 2013). Table 1 shows statistics about these datasets including the average number of users per discussion topic and average number of posts authored. The best stance classification accuracy to date for online debate forums ranges from 70.1% on CONVINCEME.NET to 75.4% on CREATEDEBATE.COM (Walker et al., 2012c; Hasan and Ng, 2013) . The web interface for CONVINCEME.NET enforces opposite stances for reply posts, making this dataset inapplicable for text-based disagreement modeling, and so we do not consider it in our experiments. In the more typical online debate forum corpora that we study, the presence of a reply, or even a textual disagreement between posts, does not necessarily indicate opposite stance (e.g. in gun control debates on 4Forums, 23% of disagreements correspond with same stance).",
"cite_spans": [
{
"start": 112,
"end": 134,
"text": "(Walker et al., 2012b)",
"ref_id": "BIBREF22"
},
{
"start": 461,
"end": 483,
"text": "(Walker et al., 2012c;",
"ref_id": "BIBREF23"
},
{
"start": 484,
"end": 503,
"text": "Hasan and Ng, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "For our unified framework, we specify a hinge-loss Markov random field to reason jointly about stance and reply-link polarity labels. A closely related line of work focuses on improving structured prediction with domain knowledge modeled as constraints in the objective function (Chang et al., 2012; Ganchev et al., 2010; Mann and McCallum, 2010) . Though more often used in semi-supervised settings, constraint-based learning can be especially appropriate for supervised learning when commonly used feature functions for linear models do not capture the richness of the data. Our HL-MRF formulation admits highly expressive features while maintaining a convex objective, thereby enjoying both tractability and a fully probabilistic interpretation.",
"cite_spans": [
{
"start": 279,
"end": 299,
"text": "(Chang et al., 2012;",
"ref_id": "BIBREF8"
},
{
"start": 300,
"end": 321,
"text": "Ganchev et al., 2010;",
"ref_id": "BIBREF9"
},
{
"start": 322,
"end": 346,
"text": "Mann and McCallum, 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Debate Forums",
"sec_num": "2"
},
{
"text": "We face multiple modeling decisions that may impact predictive performance when classifying stance in online debates. A key contribution of this work is the exploration of the ramifications of these choices. We consider the following variations on modeling: collective (C) versus local (L) classifiers, whether to explicitly model disagreement (D), and author-level (A) versus post-level (P) models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Choices",
"sec_num": "3"
},
{
"text": "Collective versus Local. Both collective and non-collective methods for stance prediction require a strong local text classifier. The methods proposed in this paper build upon the state-of-the-art local classification approach of Walker et al. (2012a) , which trains a supervised classifier using features including n-grams, lexical category counts, and text lengths. We use logistic regression for the local classifier. These models will be referred to as local (L). In collective (C) classification approaches for stance prediction, the stance labels are all predicted jointly, leveraging relationships along the graph of replies. The simplest way to make use of reply links is to encode that the stance of posts (or authors) that reply to each other is likely to be opposite (Walker et al., 2012c; Hasan and Ng, 2013) . Collective approaches attempt to find the most likely joint stance labeling that is consistent with both the local classifier's predictions and the alternation of stance along response threads. The alternating stance assumption is not necessarily a hard constraint, and may potentially be overridden by the local predictions. C and L models can be constructed at either A- or P-level granularity as described below, resulting in four modeling combinations.",
"cite_spans": [
{
"start": 230,
"end": 251,
"text": "Walker et al. (2012a)",
"ref_id": "BIBREF21"
},
{
"start": 778,
"end": 800,
"text": "(Walker et al., 2012c;",
"ref_id": "BIBREF23"
},
{
"start": 801,
"end": 820,
"text": "Hasan and Ng, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Choices",
"sec_num": "3"
},
{
"text": "Modeling Disagreement. As seen in Fig. 1 and Table 1 , the assumption that reply links correspond to opposite stance is not always correct. This suggests the potential benefit of more nuanced models of agreement and disagreement. A natural disagreement modeling approach is to predict the polarity of reply links jointly with stance.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 40,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 45,
"end": 52,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Modeling Choices",
"sec_num": "3"
},
{
"text": "There are two variants of reply link polarity to consider. In textual disagreement, replying posts are coded as expressing agreement or disagreement with the text of the parent post. This may not correspond to a disagreement in stance relative to the thread topic. Some forum interfaces support user self-labeling of post reply links as rebuttals or agreements, thereby explicitly providing textual disagreement labels for posts. Alternatively, in the stance disagreement variant, reply links denote either same or opposite stance between users (posts). In Fig. 1 , User 1 and User 2 disagree in text but have the same stance. For collective modeling of stance and disagreement, it is useful to consider the stance disagreement variant, which identifies opposite and same-stance reply links, and to jointly encourage stance predictions to be consistent with the disagreement predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 557,
"end": 563,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modeling Choices",
"sec_num": "3"
},
{
"text": "As with the local classification of stance, we can construct local classifiers for stance disagreement. In this work, for each reply link instance, we use a copy of the local stance classification features for each author/post at the ends of the reply link. The linguistic features further include discourse markers such as \"actually\" and \"because\" from the disagreement classifier of Abbott et al. (2011) . Additionally, we use textual disagreement as a feature for stance disagreement, when available. When reply links are not explicitly labeled as rebuttals or agreements, or only rebuttals are known, we instead predict textual disagreement using the features given above, trained on a separate data set with textual-disagreement labels.",
"cite_spans": [
{
"start": 385,
"end": 405,
"text": "Abbott et al. (2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Choices",
"sec_num": "3"
},
{
"text": "Finally, with a stance disagreement classifier in hand, we can build collective models that predict stance based on predicted stance disagreement polarity. We denote these models as disagreement (D). When applied at either the A or P level, this yields two more possible modeling configurations. These models are certainly more complex than others we consider, but their design is consistent with intuition about the nature of discourse, so the added complexity may yield better accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Choices",
"sec_num": "3"
},
{
"text": "All models: localPro(X1) \u2192 pro(X1) ; \u00ac localPro(X1) \u2192 \u00ac pro(X1). Collective models only: disagree(X1, X2) \u2227 pro(X1) \u2192 \u00ac pro(X2) ; disagree(X1, X2) \u2227 \u00ac pro(X1) \u2192 pro(X2) ; \u00ac disagree(X1, X2) \u2227 pro(X1) \u2192 pro(X2) ; \u00ac disagree(X1, X2) \u2227 \u00ac pro(X1) \u2192 \u00ac pro(X2) ; for collective models without disagreement modeling, disagree(X1, X2) = 1. Disagreement models only: localDisagree(X1, X2) \u2192 disagree(X1, X2) ; \u00ac localDisagree(X1, X2) \u2192 \u00ac disagree(X1, X2) ; pro(X1) \u2227 \u00ac pro(X2) \u2192 disagree(X1, X2) ; pro(X1) \u2227 pro(X2) \u2192 \u00ac disagree(X1, X2) ; \u00ac pro(X1) \u2227 \u00ac pro(X2) \u2192 \u00ac disagree(X1, X2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "All models:",
"sec_num": null
},
{
"text": "Figure 2: PSL rules to define the collective classification models, both for post-level and author-level models. Each X is an author or a post, depending on the level of granularity that the model is applied at. The disagree(X1, X2) predicates apply to post reply links, and to pairs of authors connected by reply links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "All models:",
"sec_num": null
},
{
"text": "Author-Level versus Post-Level. When modeling debates, stance classifiers can predict either the stance of a debate participant (i.e. an author (A)) (Burfoot et al., 2011) , or the stance expressed by a specific dialogue act (i.e. a post (P)) (Hasan and Ng, 2013). The choice of prediction target may depend on the downstream goal, such as user modeling or the study of the dialogic expression of disagreement. From a philosophical perspective, authors are individuals who hold opinions, while posts are not. A post is simply a piece of text which may or may not express the opinions of its author. Nevertheless, given a prediction target, either author or post, it may be beneficial to consider modeling at a different level of granularity. For example, Hasan and Ng (2013) find that post-level prediction accuracy can be improved by \"clamping\" all posts by a given author to the same stance in order to smooth their labels. Alternatively, author-level predictions may potentially be improved by first treating each post separately, thereby effectively giving a classifier more training examples, i.e. the number of posts instead of the number of authors. With this procedure, a final author-level prediction can be obtained by averaging the predictions over the posts for the author, trading the noisiness of post-level instances against the smoothing afforded by the final aggregation. When designing a stance classifier, the modeler must decide the level of granularity for the prediction target and find the best model therein.",
"cite_spans": [
{
"start": 149,
"end": 171,
"text": "(Burfoot et al., 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "All models:",
"sec_num": null
},
{
"text": "To study these choices, we build a flexible stance classification framework that implements the above variations using probabilistic soft logic (PSL) (Bach et al., 2015; Bach et al., 2013) , a recently introduced probabilistic programming system. Like other probabilistic modeling frameworks, notably Markov logic (Richardson and Domingos, 2006) , PSL uses a logic-like language for defining the potential functions for a conditional random field. However, unlike Markov logic, PSL makes inference tractable, even in the loopy author-level networks and the very large post-level networks of online debates.",
"cite_spans": [
{
"start": 150,
"end": 169,
"text": "(Bach et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 170,
"end": 188,
"text": "Bach et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 314,
"end": 345,
"text": "(Richardson and Domingos, 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
{
"text": "PSL's tractability arises from the use of a special class of conditional random field models referred to as hinge-loss MRFs (HL-MRFs), which admit efficient, scalable and exact maximum a posteriori (MAP) inference (Bach et al., 2013) . These models are defined over continuous random variables, and MAP inference is a convex optimization problem over these variables. Formally, a hinge-loss MRF defines a probability density function of the form",
"cite_spans": [
{
"start": 214,
"end": 233,
"text": "(Bach et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(\\mathbf{Y} \\mid \\mathbf{X}) = \\frac{1}{Z} \\exp\\left( -\\sum_{r=1}^{M} \\lambda_r \\phi_r(\\mathbf{Y}, \\mathbf{X}) \\right),",
"eq_num": "(1)"
}
],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
{
"text": "where the entries of Y and X are in [0, 1], \u03bb is a vector of weight parameters, Z is a normalization constant, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
{
"text": "\u03c6_r(Y, X) = (max{l_r(Y, X), 0})^{\u03c1_r} (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
{
"text": "is a hinge-loss potential specified by a linear function l_r and optional exponent \u03c1_r \u2208 {1, 2}. Given a collection of first-order PSL rules, each instantiation of the rules maps to a hinge-loss potential function as in Equation 2, and the potential functions define an HL-MRF model. For example, a \u21d2 b maps to max(a \u2212 b, 0), where a and b are ground variables, and max(a \u2212 b, 0) is a convex relaxation of logical implication, which can be understood as its distance to satisfaction. For a full description of PSL, see (Bach et al., 2015) .",
"cite_spans": [
{
"start": 519,
"end": 538,
"text": "(Bach et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
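As an illustrative sketch (our own code, not from the paper; function names are hypothetical), the hinge-loss potential of Equation 2 and the unnormalized log-density of Equation 1 can be written as:

```python
def hinge_potential(l_value, rho=1):
    """Hinge-loss potential phi_r(Y, X) = (max{l_r(Y, X), 0})^rho (Equation 2)."""
    return max(l_value, 0.0) ** rho

def unnormalized_log_density(weights, potentials):
    """Log of the unnormalized density in Equation 1: -sum_r lambda_r * phi_r."""
    return -sum(lam * phi for lam, phi in zip(weights, potentials))

# The ground rule a => b yields the potential max(a - b, 0),
# its distance to satisfaction under the convex relaxation.
a, b = 0.9, 0.2
phi_violated = hinge_potential(a - b)        # rule far from satisfied
phi_satisfied = hinge_potential(0.1 - 0.8)   # b >= a: fully satisfied, potential 0
```

Because every potential is a convex function of the continuous variables, minimizing the weighted sum of potentials (i.e., MAP inference) is a convex optimization problem.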
{
"text": "The models we introduce are specified by the PSL rules in Fig. 2, with both post-level and author-level models following the same design. We denote the different modeling choices with the letters defined in Section 3. First, local logistic regression classifiers output stance probabilities based on textual features of posts or authors. All of the models begin with these real-valued stance predictions, encoded by the observed predicate lo-calPro(X i ). The rules listed for all models encourage the inferred global predictions pro(X i ) to match these local predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Fig. 2,",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
{
"text": "This defines the local classification models L, which are HL-MRFs with node potentials and no edge potentials, and which are equivalent to the local classifiers. The collective models extend the L models by adding edge potentials which encourage the stance labels to respect disagreement relationships along reply links. Specifically, every reply link between authors (for author-level models) or between posts (for post-level models) x 1 and x 2 is associated with a latent variable disagree(x 1 , x 2 ). The rules encourage the global stance variables to respect the polarity of the disagreement variables (same stance, or opposite stance) and while also trying to match the stance classifiers. For the models that do not explicitly model disagreement, it is assumed that every reply edge constitutes a disagreement, i.e. disagree(x 1 , x 2 ) = 1. These models are denoted C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
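A minimal sketch of how a C-model reply edge grounds into edge potentials (our own illustration under assumed rule forms; the actual rules are given in Fig. 2). With disagree(x1, x2) fixed to 1, each author's pro-stance implies the other's anti-stance:

```python
def implication_potential(body, head):
    """PSL relaxation of body => head: distance to satisfaction max(body - head, 0)."""
    return max(body - head, 0.0)

def c_model_edge_potentials(pro1, pro2):
    """Edge potentials for one reply link under the C models, which assume
    disagree(x1, x2) = 1: disagreement pushes the two stance variables apart."""
    return [
        implication_potential(pro1, 1.0 - pro2),  # pro(x1) => !pro(x2)
        implication_potential(pro2, 1.0 - pro1),  # pro(x2) => !pro(x1)
    ]

# Two linked authors both predicted PRO by the local classifier: both
# potentials fire, so MAP inference is pressured to flip one stance.
pots = c_model_edge_potentials(0.8, 0.9)
```

With opposite stances (e.g., 1.0 and 0.0) both potentials are zero, so such configurations incur no penalty.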
{
"text": "Otherwise, the disagreement variables are encouraged to match binary-valued predictions from the local disagreement classifiers. We binarize the predictions of the disagreement classifiers to encourage propagation. The disagreement variables are modeled jointly with the stance variables, and label information propagates in both directions between stance and disagreement variables. The full joint stance/disagreement collective models are denoted D. In the following, the models are denoted by pairs of letters according to their collectivity level and modeling granularity. For example, AC denotes collective classification performed at the author level, without joint modeling of disagreement. To train these models and use them for prediction, weight learning and MAP inference are performed using the structured perceptron algorithm and ADMM algorithm of Bach et al. (2013) .",
"cite_spans": [
{
"start": 861,
"end": 879,
"text": "Bach et al. (2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Collective Classification Framework",
"sec_num": "4"
},
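The weight-learning step can be sketched as a structured-perceptron update (our simplification; the learning rate and non-negativity clamp are assumptions, not details taken from Bach et al. (2013)):

```python
def perceptron_step(weights, phi_true, phi_map, lr=0.1):
    """One structured-perceptron update for the model in Equation 1: raise the
    weight of any potential that fires more under the (incorrect) MAP state
    than under the ground truth, lowering that state's probability.
    HL-MRF weights are kept non-negative."""
    return [max(w + lr * (pm - pt), 0.0)
            for w, pt, pm in zip(weights, phi_true, phi_map)]

# If MAP inference violates a rule (potential 0.5) that the gold labels
# satisfy (potential 0.0), that rule's weight increases.
new_w = perceptron_step([1.0], phi_true=[0.0], phi_map=[0.5])
```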
{
"text": "The goals of our experiments were to validate the proposed collective modeling framework, and to make substantive conclusions about the merits of the different possible modeling options described in Section 3. To this end, we evaluated the models on eight topics from 4FORUMS.COM (Walker et al., 2012b) and CREATEDEBATE.COM (Hasan and Ng, 2013), for classification tasks at both the author level and the post level. With comparison to Hasan and Ng (2013), our collective models (C) are essentially equivalent to their CRF, up to the form of the CRF potential function, which is not explicitly specified in the paper. A further goal of our experiments was to determine whether the modeling options in our more general CRF could improve performance over models with this structure.",
"cite_spans": [
{
"start": 280,
"end": 302,
"text": "(Walker et al., 2012b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "On average, each topic-wise data set contains hundreds of authors and thousands of posts. The 4FORUMS data sets are annotated for stance at the author level, while CREATEDEBATE has stance labels at the post level. To perform post-level evaluations on 4FORUMS we apply author labels to the posts of each author, and on CREATEDEBATE we computed author labels by selecting the majority label of their posts. For 4FORUMS, since postlevel stance labels correspond directly to authorlevel stance labels, we use averages of post-level predictions as the local classifier output for authors. Section 2 includes an overview of these debate forum data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
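The label-transfer steps described above can be sketched as follows (a minimal illustration; the function names are ours):

```python
from collections import Counter

def author_label_from_posts(post_labels):
    """CREATEDEBATE: an author's stance label is the majority label of their posts."""
    return Counter(post_labels).most_common(1)[0][0]

def author_local_score(post_scores):
    """4FORUMS: the author-level local classifier output is the average of the
    post-level stance probabilities over that author's posts."""
    return sum(post_scores) / len(post_scores)
```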
{
"text": "In the experiments, classification accuracy was estimated via five repeats of 5-fold crossvalidation. In each fold, we ran logistic regression using the scikit-learn software package, 2 using the default settings, except for the L1 regularization trade-off parameter C which was tuned on a within-fold hold-out set consisting of 20% of the discussions within the fold. For the collective models, weight learning was performed on the same in-fold tuning sets. We trained via 700 iterations of structured perceptron, and ran the ADMM MAP inference algorithm to convergence at test time. On average, weight learning and inference took around 1 minute per fold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
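The per-fold tuning of C can be sketched as follows (a hedged stand-in: `fit_and_score` is a hypothetical callback abstracting away training the scikit-learn logistic regression and returning hold-out accuracy):

```python
import random

def tune_C(discussions, candidate_Cs, fit_and_score, holdout_frac=0.2, seed=0):
    """Select the L1 trade-off parameter C on a within-fold hold-out set
    consisting of 20% of the fold's discussions."""
    rng = random.Random(seed)
    items = list(discussions)
    rng.shuffle(items)
    n_hold = max(1, int(holdout_frac * len(items)))
    heldout, train = items[:n_hold], items[n_hold:]
    # Keep the candidate C with the best hold-out score.
    return max(candidate_Cs, key=lambda C: fit_and_score(train, heldout, C))
```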
{
"text": "The full results for author-level and post-level predictions are given in Table 2 and Table 3 , respectively. In the tables, entries in bold identify statistically significant differences from the local classifier baseline under a paired t-test with significance level \u03b1 = 0.05. These results are summarized in Fig. 3 , which shows box plots for the six possible models, computed over the final crossvalidated accuracy scores of each of the four data : Overall accuracies per model for the author stance prediction task, computed over the final results for each of the four data sets per forum. Note that we expect significant variation in these plots, as the data sets are of varying degrees of difficulty.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 93,
"text": "Table 2 and Table 3",
"ref_id": null
},
{
"start": 311,
"end": 317,
"text": "Fig. 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "sets from each forum. The overall trends can be seen by reading the box plots in each figure from left to right. In general, collective models outperform local models, and modeling disagreement further improves accuracy. Author-level modeling is typically better than post-level, even for the post-level prediction task. The improvements shown by collective models and author-level models are consistent with Hasan and Ng (2013)'s conclusion about the benefits of user-level constraints. This may suggest that posts only provide relatively noisy observations of the underlying author-level stance. Modeling at the author level results in more stable predictions, as noisy posts are pooled together. But here we also show that the full joint disagreement model at the author level, AD, performs the best overall, for both prediction tasks and for both forums, gaining up to 11.5 percentage points of post-level accuracy over the local postlevel classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "A closer analysis reveals some subtleties. When comparing D models with C models in Fig. 3 , disagreement modeling makes a much bigger difference at the author level than at the post level. This is likely impacted by the level of class imbalance for disagreement classification in the different levels of modeling. Disagreement, rather than agreement, between authors prompts many responses. Thus, reply links are more likely disagreements when measured at the post level, as seen in Ta- Table 3 : Post stance classification accuracy and standard deviations for 4FORUMS (left) and CREAT-EDEBATE (right), estimated via 5 repeats of 5-fold cross-validation. Bolded figures indicate statistically significant (\u03b1 = 0.05) improvement over PL, the baseline model for the post stance classification task. ble 1. Therefore, enforcing disagreement may be a better assumption at the post level, and the nuanced disagreement model is not necessary in this case. The overall improvements in accuracy from disagreement modeling for post-level models were small.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 90,
"text": "Fig. 3",
"ref_id": "FIGREF1"
},
{
"start": 488,
"end": 495,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "On the other hand, the assumption that reply edges constitute disagreement is less accurate when modeling at the author level (see Table 1 ). In this case, the full joint disagreement model is necessary to obtain good performance. In an extreme example, the two datasets with the lowest disagreement rates at the author level are evolution (44.4%) and gun control (50.7%) from 4FORUMS. The AC classifier performed very poorly for these data sets, dropping to 46.9% accuracy in one instance, as the \"opposite stance\" assumption did not hold (Tables 2 and 3 ). The full joint disagreement model AD performed much better, in fact achieving an outstanding accuracy rates of 80.3% and 80.5% for posts on evolution and gay marriage respectively. To illustrate the benefits of authorlevel disagreement modeling, Fig. 4 shows a post for an author whose stance towards gun control is correctly predicted by AD but not the AC model,",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 540,
"end": 555,
"text": "(Tables 2 and 3",
"ref_id": null
},
{
"start": 805,
"end": 811,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "Post: I agree with everything except the last part. Safe gun storage is very important, and sensible storage requirements have two important factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Stance",
"sec_num": null
},
{
"text": "Reply: I can agree with this. And in case it seemed otherwise, I know full well how to store guns safely, and why it's necessary. My point was that I don't like the idea of such a law, especially when you consider the problem of enforcement. ANTI Figure 4 : A post-reply pair by 4FORUMS.COM authors whose gun control stance is correctly predicted by AD, but not by AC.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "ANTI",
"sec_num": null
},
{
"text": "along with a subsequent reply. The authors largely agree with each other's views, which the joint disagreement model leverages, while the simpler collective model encourages opposite stance due to the presence of reply links between them. To summarize our conclusions from these experiments, the results suggest that author-level modeling is the preferred strategy, regardless of the prediction task. In this scenario, it is essential to explicitly model disagreement in the collective classifier. Our top performing AD model statistically significantly outperforms the respective prediction task baseline on 6 out of 8 topics for both tasks with p-values less than 0.001. Based on our experimental results, we recommend the full author-disagreement model AD as the classifier of choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ANTI",
"sec_num": null
},
{
"text": "The prediction of user stance in online debate forums is a valuable task, and modeling debate dialogue is complex and requires many decisions such collective or non-collective reasoning, nuanced or naive use of disagreement information, and post versus author-level modeling granularity. We systematically explore each choice, and in doing so build a unified joint framework that incorporates each salient decision. Our method uses a hinge-loss Markov random field to encourage consistency between local classifier predictions for stance and disagreement information. We find that modeling at the author level gives better predictive performance regardless of the granularity of the prediction task, and that nuanced disagreement modeling is of particular importance for authorlevel collective modeling. The resulting collective classifier gives improved predictive performance over both the simple non-collective and standard collective approaches, with a running time overhead of only a few minutes, thanks to the efficient nature of hinge-loss MRFs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "There are many directions for future work. Our results have found that collective reasoning can also be beneficial at the post level, as previously reported by Hasan and Ng (2013) . It is likely that a multi-level model for a combination of post-and author-level collective modeling of both stance and disagreement could bring further improvements in performance. It would also be informative to explore dynamic models which elucidate trends of opinions over time. Another direction is to model influence between users in online debate forums, and to identify the most influential users who are able to convince other users to change their opinions. Finally, we note that stance and disagreement classification are both challenging and important problems, and going forward, there is likely to be much room for improvement in these prediction tasks.",
"cite_spans": [
{
"start": 160,
"end": 179,
"text": "Hasan and Ng (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "PSL is an open-source Java toolkit, available here: http://psl.cs.umd.edu.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://scikit-learn.org/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by NSF grant IIS1218488, and IARPA via DoI/NBC contract number D12PC00337. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Evolution Gay Gun Abortion Gay Marijuana Obama Marriage Control Rights PL 61.9 \u00b1 4.3 76.6 \u00b1 3.9 72.0 \u00b1 3.6 66.4 \u00b1 4.6 66.4 \u00b1 5.2 70.2 \u00b1 5.0 74.1 \u00b1 6.5 63.8 \u00b1 8.7 PC 63.4 \u00b1 5.9 74.6 \u00b1 4.1 73.7 \u00b1 4.3 68.3 \u00b1 5.5 68.7 \u00b1 5.7 72.6 \u00b1 5.6 75.4 \u00b1 7.4 66.1 \u00b1 8.5 PD 63.0 \u00b1 5.4 76.7 \u00b1 4.2 73.7 \u00b1 4.6 67.9 \u00b1 5.0 69.5 \u00b1 5.7 73.2 \u00b1 5.9 74.7 \u00b1 7.0 66.1 \u00b1 8.5 AL 64.9 \u00b1 4.2 77.3 \u00b1 2.9 74.5 \u00b1 2.9 67.1 \u00b1 4.5 65.2 \u00b1 6.5 69.5 \u00b1 4.4 74.0 \u00b1 6.6 59.0 \u00b1 7.5 AC 66.0 \u00b1 5.0 74.4 \u00b1 4.2 75.7 \u00b1 5.1 61.5 \u00b1 5.6 65.8 \u00b1 7.0 73.6 \u00b1 3.5 73.9 \u00b1 7.6 62.5 \u00b1 8.3 AD 65.8 \u00b1 4.4 78.7 \u00b1 3.3 77.1 \u00b1 4.4 67.1 \u00b1 5.4 67.4 \u00b1 7.5 74.0 \u00b1 5.3 74.8 \u00b1 7.5 63.0 \u00b1 8.3 ",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 63,
"text": "Evolution Gay Gun Abortion Gay Marijuana Obama Marriage",
"ref_id": null
}
],
"eq_spans": [],
"section": "CREATEDEBATE Models Abortion",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How can you say such things?!?: Recognizing disagreement in informal political argument",
"authors": [
{
"first": "Rob",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"E"
],
"last": "Fox Tree",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Robeson",
"middle": [],
"last": "Bowmani",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL Workshop on Language and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rob Abbott, Marilyn Walker, Jean E. Fox Tree, Pranav Anand, Robeson Bowmani, and Joseph King. 2011. How can you say such things?!?: Recognizing dis- agreement in informal political argument. In ACL Workshop on Language and Social Media.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Identifying opinion subgroups in Arabic online discussions",
"authors": [
{
"first": "Amjad",
"middle": [],
"last": "Abu",
"suffix": ""
},
{
"first": "-Jbara",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amjad Abu-Jbara and Dragomir R Radev. 2013. Iden- tifying opinion subgroups in Arabic online discus- sions. In ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hinge-loss Markov random fields: Convex inference for structured prediction",
"authors": [
{
"first": "H",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Bert",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "London",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2013,
"venue": "Uncertainty in Artificial Intelligence (UAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen H. Bach, Bert Huang, Ben London, and Lise Getoor. 2013. Hinge-loss Markov random fields: Convex inference for structured prediction. In Un- certainty in Artificial Intelligence (UAI).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Hinge-loss Markov random fields and probabilistic soft logic",
"authors": [
{
"first": "S",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Broecheler",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.04406"
]
},
"num": null,
"urls": [],
"raw_text": "S. H. Bach, M. Broecheler, B. Huang, and L. Getoor. 2015. Hinge-loss Markov random fields and proba- bilistic soft logic. arXiv:1505.04406 [cs.LG].",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Determining the polarity and source of opinions expressed in political debates",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Andres",
"middle": [],
"last": "Montoyo",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Balahur, Zornitsa Kozareva, and Andres Montoyo. 2009. Determining the polarity and source of opinions expressed in political debates. Computational Linguistics and Intelligent Text Pro- cessing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The power of negative thinking: Exploiting label disagreement in the min-cut classification framework",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Claire Cardie, and Lillian Lee. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification frame- work. COLING.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Back up your stance: recognizing arguments in online discussions",
"authors": [
{
"first": "Filip",
"middle": [],
"last": "Boltu\u017ei\u0107",
"suffix": ""
},
{
"first": "Jan\u0161najder",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filip Boltu\u017ei\u0107 and Jan\u0160najder. 2014. Back up your stance: recognizing arguments in online discussions. In ACL Workshop on Argumentation Mining.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Collective classification of congressional floor-debate transcripts",
"authors": [
{
"first": "Clinton",
"middle": [],
"last": "Burfoot",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clinton Burfoot, Steven Bird, and Timothy Baldwin. 2011. Collective classification of congressional floor-debate transcripts. In ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Structured learning with constrained conditional models",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "Machine learning",
"volume": "88",
"issue": "3",
"pages": "399--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012. Structured learning with constrained conditional models. Machine learning, 88(3):399-431.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Posterior regularization for structured latent variable models",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning",
"volume": "11",
"issue": "",
"pages": "2001--2049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Joao Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. Machine Learn- ing, 11:2001-2049.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Analyzing argumentative discourse units in online interactions",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Aakhus",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mitsui",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyz- ing argumentative discourse units in online interac- tions. In ACL Workshop on Argumentation Mining.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stance classification of ideological debates: Data, models, features, and constraints",
"authors": [
{
"first": "Saidul",
"middle": [],
"last": "Kazi",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazi Saidul Hasan and Vincent Ng. 2013. Stance clas- sification of ideological debates: Data, models, fea- tures, and constraints. International Joint Confer- ence on Natural Language Processing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Why are you taking this stance? Identifying and classifying reasons in ideological debates",
"authors": [
{
"first": "Saidul",
"middle": [],
"last": "Kazi",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? Identifying and classifying reasons in ideological debates. In EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised discovery of opposing opinion networks from forum discussions",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Lu, H. Wang, C. Zhai, and D. Roth. 2012. Unsuper- vised discovery of opposing opinion networks from forum discussions. In CIKM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generalized expectation criteria for semi-supervised learning with weakly labeled data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gideon",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning",
"volume": "11",
"issue": "",
"pages": "955--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S Mann and Andrew McCallum. 2010. Gener- alized expectation criteria for semi-supervised learn- ing with weakly labeled data. Machine Learning, 11:955-984.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Support or Oppose? Classifying positions in online debates from reply activities and opinion expressions",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Murakami",
"suffix": ""
},
{
"first": "Rudy",
"middle": [],
"last": "Raymond",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Murakami and Rudy Raymond. 2010. Support or Oppose? Classifying positions in online debates from reply activities and opinion expressions. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Markov logic networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine learning",
"volume": "62",
"issue": "",
"pages": "1--2",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning, 62(1-2).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recognizing stances in online debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL and AFNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2009. Rec- ognizing stances in online debates. In ACL and AFNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recognizing stances in ideological on-line debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2010. Rec- ognizing stances in ideological on-line debates. In NAACL HLT 2010 Workshop on Computational Ap- proaches to Analysis and Generation of Emotion in Text.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Collective stance classification of posts in online debate forums",
"authors": [
{
"first": "Dhanya",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL Joint Workshop on Social Dynamics and Personal Attributes in Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhanya Sridhar, Lise Getoor, and Marilyn Walker. 2014. Collective stance classification of posts in online debate forums. In ACL Joint Workshop on Social Dynamics and Personal Attributes in Social Media.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Get out the vote: Determining support or opposition from Congressional floor-debate transcripts",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "That's your evidence?: Classifying stance in online political debate",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"E"
],
"last": "Fox Tree",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Martell",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Pranav Anand, Rob Abbott, Jean E. Fox Tree, Craig Martell, and Joseph King. 2012a. That's your evidence?: Classifying stance in online political debate. Decision Support Sciences.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A corpus for research on deliberation and debate",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"E"
],
"last": "Fox Tree",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Pranav Anand, Robert Abbott, and Jean E. Fox Tree. 2012b. A corpus for research on deliberation and debate. In LREC.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Stance classification using dialogic properties of persuasion",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Grant",
"suffix": ""
}
],
"year": 2012,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Pranav Anand, Robert Abbott, and Richard Grant. 2012c. Stance classification using dialogic properties of persuasion. In NAACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A piece of my mind: A sentiment analysis approach for online dispute detection",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Wang and Claire Cardie. 2014. A piece of my mind: A sentiment analysis approach for online dis- pute detection. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Figure 3: Overall accuracies per model for the author stance prediction task, computed over the final results for each of the four data sets per forum. Note that we expect significant variation in these plots, as the data sets are of varying degrees of difficulty.",
"uris": null
},
"TABREF1": {
"num": null,
"text": "Structural statistics averages for 4FO-",
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}