arXiv:1601.02403
# Argumentation Mining in User-Generated Web Discourse
## Abstract
The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task.
## Introduction
The art of argumentation has been studied since the early work of Aristotle, dating back to the 4th century BC BIBREF0 . It has been examined exhaustively from different perspectives, such as philosophy, psychology, communication studies, cognitive science, formal and informal logic, linguistics, computer science, educational research, and many others. In a recent and widely acclaimed study, Mercier.Sperber.2011 even claim that argumentation is what drives humans to perform reasoning. From the pragmatic perspective, argumentation can be seen as a verbal activity oriented towards the realization of a goal BIBREF1 or, in more detail, as a verbal, social, and rational activity aimed at convincing a reasonable critic of the acceptability of a standpoint by putting forward a constellation of one or more propositions to justify this standpoint BIBREF2 .
Analyzing argumentation from the computational linguistics point of view has very recently led to a new field called argumentation mining BIBREF3 . Despite the lack of an exact definition, researchers within this field usually focus on analyzing discourse on the pragmatics level and applying a certain argumentation theory to model and analyze textual data at hand.
Our motivation for argumentation mining stems from a practical information-seeking perspective on user-generated content on the Web. For example, when users search user-generated Web content for information to facilitate their personal decision making on controversial topics, they lack tools to overcome the current information overload. One particular use-case example, dealing with a forum post discussing private versus public schools, is shown in Figure FIGREF4 . Here, the lengthy text on the left-hand side is transformed into an argument gist on the right-hand side by (i) analyzing argument components and (ii) summarizing their content. Figure FIGREF5 shows another use-case example, in which users search for reasons that underpin a certain standpoint in a given controversy (homeschooling in this case). In general, the output of automatic argument analysis performed at large scale on Web data can provide users with analyzed arguments on a given topic of interest, find evidence for a given controversial standpoint, or help to reveal flaws in the argumentation of others.
Satisfying the above-mentioned information needs cannot be directly tackled by current methods for, e.g., opinion mining, question answering, or summarization, and requires novel approaches within the argumentation mining field. Although user-generated Web content has already been considered in argumentation mining, many limitations and research gaps can be identified in the existing works. First, the scope of current approaches is restricted to a particular domain or register, e.g., hotel reviews BIBREF5 , Tweets related to local riot events BIBREF6 , student essays BIBREF7 , airline passenger rights and consumer protection BIBREF8 , or renewable energy sources BIBREF9 . Second, not all related works are tightly connected to argumentation theories, resulting in a gap between the substantial research in argumentation itself and its adaptation in NLP applications. Third, as an emerging research area, argumentation mining still suffers from a lack of labeled corpora, which are crucial for designing, training, and evaluating the algorithms. Although some works have dealt with creating new datasets, the reliability (in terms of inter-annotator agreement) of the annotated resources is often unknown BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 .
Annotating and automatically analyzing arguments in unconstrained user-generated Web discourse represent challenging tasks. So far, the research in argumentation mining “has been conducted on domains like news articles, parliamentary records and legal documents, where the documents contain well-formed explicit arguments, i.e., propositions with supporting reasons and evidence present in the text” BIBREF8 . [p. 50]Boltuzic.Snajder.2014 point out that “unlike in debates or other more formal argumentation sources, the arguments provided by the users, if any, are less formal, ambiguous, vague, implicit, or often simply poorly worded.” Another challenge stems from the different nature of argumentation theories and computational linguistics. Whereas computational linguistics is mainly descriptive, the empirical research carried out in argumentation theories does not constitute a test of the favored theoretical model, because the model of argumentation is a normative instrument for assessing argumentation BIBREF15 . So far, no fully fledged descriptive argumentation theory based on empirical research has been developed; thus, the feasibility of adapting argumentation models to Web discourse remains an open issue.
These challenges can be formulated into the following research questions:
In this article, we push the boundaries of the argumentation mining field by focusing on several novel aspects. We tackle the above-mentioned research questions as well as the previously discussed challenges and issues. First, we target user-generated Web discourse from several domains across various registers, to examine how argumentation is communicated in different contexts. Second, we bridge the gap between argumentation theories and argumentation mining by selecting an argumentation model grounded in research into argumentation theories and related fields in communication studies and psychology. In particular, we adapt normative models from argumentation theory to perform empirical research in NLP and support our application of argumentation theories with an in-depth reliability study. Finally, we use state-of-the-art NLP techniques in order to build robust computational models for analyzing arguments that are capable of dealing with a variety of genres on the Web.
## Our contributions
We create a new corpus which is, to the best of our knowledge, the largest corpus annotated within the argumentation mining field to date. We choose several target domains from educational controversies, such as homeschooling, single-sex education, or mainstreaming. A novel aspect of the corpus is its coverage of different registers of user-generated Web content, such as comments on articles, discussion forum posts, and blog posts, as well as professional newswire articles.
Since the data come from a variety of sources and no assumptions about their actual content with respect to argumentation can be made, we conduct two extensive annotation studies. In the first study, we tackle the problem of relatively high “noise” in the retrieved data; in particular, not all of the documents are related to the given topics in a way that makes them candidates for further deep analysis of argumentation (this study results in 990 annotated documents). In the second study, we discuss the selection of an appropriate argumentation model based on evidence in argumentation research and propose a model that is suitable for analyzing micro-level argumentation in user-generated Web content. Using this model, we annotate 340 documents (approx. 90,000 tokens), reaching substantial inter-annotator agreement. We provide a hand-analysis of all the phenomena typical of argumentation that are prevalent in our data. These findings may also serve as empirical evidence for issues at the center of current argumentation research.
From the computational perspective, we experiment on the annotated data using various machine learning methods in order to extract argument structure from documents. We propose several novel feature sets and identify configurations that perform best in in-domain and cross-domain scenarios. To foster research in the community, we provide the annotated data as well as all the experimental software under free licenses.
The rest of the article is structured as follows. First, we provide an essential background in argumentation theory in section SECREF2 . Section SECREF3 surveys related work in several areas. Then we introduce the dataset and two annotation studies in section SECREF4 . Section SECREF5 presents our experimental work and discusses the results and errors and section SECREF6 concludes this article.
## Theoretical background
Let us first present some definitions of the term argumentation itself. [p. 3]Ketcham.1917 defines argumentation as “the art of persuading others to think or act in a definite way. It includes all writing and speaking which is persuasive in form.” According to MacEwan.1898, “argumentation is the process of proving or disproving a proposition. Its purpose is to induce a new belief, to establish truth or combat error in the mind of another.” [p. 2]Freeley.Steinberg.2008 narrow the scope of argumentation to “reason giving in communicative situations by people whose purpose is the justification of acts, beliefs, attitudes, and values.” Although these definitions vary, the purpose of argumentation remains the same – to persuade others.
We would like to stress that our perception of argumentation goes beyond the somewhat limited notion of giving reasons BIBREF17 , BIBREF18 . Rather, we see the goal of argumentation as persuasion BIBREF19 , BIBREF20 , BIBREF21 . Persuasion can be defined as a successful intentional effort at influencing another's mental state through communication in a circumstance in which the persuadee has some measure of freedom BIBREF22 , although, as OKeefe2011 points out, there is no correct or universally endorsed definition of either `persuasion' or `argumentation'. However, this broader understanding of argumentation as a means of persuasion allows us to take into account not only reasoned discourse but also non-reasoned mechanisms of influence, such as emotional appeals BIBREF23 .
Since an argument is the product of the argumentation process, we should now define it. One typical definition is that an argument is a claim supported by reasons BIBREF24 . The term claim has been used since the 1950s, when it was introduced by Toulmin.1958; in argumentation theory it is a synonym for standpoint or point of view. It refers to what is at issue, in the sense of what is being argued about. The presence of a standpoint is thus crucial for argumentation analysis. However, the claim, as well as other parts of the argument, might be implicit; this is known as enthymematic argumentation, which is rather usual in ordinary argumentative discourse BIBREF25 .
One fundamental problem with the definition and formal description of arguments and argumentation is that there is no agreement even among argumentation theorists. As [p. 29]vanEmeren.et.al.2014 admit in their very recent and exhaustive survey of the field, “as yet, there is no unitary theory of argumentation that encompasses the logical, dialectical, and rhetorical dimensions of argumentation and is universally accepted. The current state of the art in argumentation theory is characterized by the coexistence of a variety of theoretical perspectives and approaches, which differ considerably from each other in conceptualization, scope, and theoretical refinement.”
## Argumentation models
Despite the missing consensus on the ultimate argumentation theory, various argumentation models have been proposed that capture argumentation on different levels. Argumentation models abstract from the language level to a concept level that stresses the links between the different components of an argument or how arguments relate to each other BIBREF26 . Bentahar.et.al.2010 propose a taxonomy of argumentation models that is horizontally divided into three categories: micro-level models, macro-level models, and rhetorical models.
In this article, we deal with argumentation on the micro-level (also called argumentation as a product, or monological models). Micro-level argumentation focuses on the structure of a single argument. By contrast, macro-level models (also called dialogical models) and rhetorical models highlight the process of argumentation in a dialogue BIBREF27 . In other words, we examine the structure of a single argument produced by a single author in terms of its components, not the relations that can exist among arguments and their authors in time. A detailed discussion of these different perspectives can be found, e.g., in BIBREF28 , BIBREF29 , BIBREF30 , BIBREF1 , BIBREF31 , BIBREF32 .
## Dimensions of argument
The above-mentioned models focus essentially on only one dimension of the argument, namely the logos dimension. According to Aristotle's classical theory BIBREF0 , an argument can exist in three dimensions: logos, pathos, and ethos. The logos dimension represents a proof by reason, an attempt to persuade by establishing a logical argument; for example, syllogism belongs to this dimension BIBREF34 , BIBREF25 . The pathos dimension appeals to the emotions of the receiver and impacts their cognition BIBREF35 . The ethos dimension relies on the credibility of the arguer. This distinction will have a practical impact later in section SECREF51 , which deals with argumentation on the Web.
## Original Toulmin's model
We conclude the theoretical section by presenting one (micro-level) argumentation model in detail: the widely used conceptual model of argumentation introduced by Toulmin.1958, which we will henceforth denote as Toulmin's original model. This model will play an important role later in the annotation studies (section SECREF51 ) and experimental work (section SECREF108 ). The model consists of six parts, referred to as argument components, where each component plays a distinct role.
The claim is an assertion put forward publicly for general acceptance BIBREF38 or the conclusion we seek to establish by our arguments BIBREF17 .

The data are the evidence that establishes the foundation of the claim BIBREF24 or, as simply put by Toulmin, “the data represent what we have to go on” BIBREF37 . The name of this concept was later changed to grounds in BIBREF38 .

The role of the warrant is to justify a logical inference from the grounds to the claim.

The backing is a set of information that stands behind the warrant; it assures its trustworthiness.

The qualifier limits the degree of certainty under which the argument should be accepted. It is the degree of force which the grounds confer on the claim in virtue of the warrant BIBREF37 .

The rebuttal presents a situation in which the claim might be defeated.
A schema of Toulmin's original model is shown in Figure FIGREF29 . The lines and arrows symbolize implicit relations between the components. An example of an argument rendered using Toulmin's scheme can be seen in Figure FIGREF30 .
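To make the six roles concrete, they can be represented as a simple data structure. The following Python sketch is illustrative only (it is not part of the original work); it is populated with Toulmin's well-known "Harry" example, and the optional fields reflect the fact that components other than the claim and data are frequently left implicit in real discourse:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """One argument in Toulmin's original model; all components
    except the claim and data are frequently left implicit."""
    claim: str                       # conclusion sought for general acceptance
    data: str                        # evidence, "what we have to go on"
    warrant: Optional[str] = None    # licenses the inference from data to claim
    backing: Optional[str] = None    # information standing behind the warrant
    qualifier: Optional[str] = None  # degree of certainty of the claim
    rebuttal: Optional[str] = None   # situation in which the claim is defeated

# Toulmin's classic example:
harry = ToulminArgument(
    claim="Harry is a British subject",
    data="Harry was born in Bermuda",
    warrant="A man born in Bermuda will generally be a British subject",
    qualifier="presumably",
    rebuttal="unless both his parents were aliens",
)
```

Note that the backing stays `None` here, just as it often remains unstated in ordinary argumentation (enthymematic argumentation, see above).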
We believe that this theoretical overview should provide sufficient background for the argumentation mining research covered in this article; for further references, we recommend for example BIBREF15 .
## Related work in computational linguistics
We structure the related work into three sub-categories, namely argumentation mining, stance detection, and persuasion and on-line dialogs, as these areas are closest to this article's focus. For a recent overview of general discourse analysis see BIBREF39 . Apart from these, research on computer-supported argumentation has also been very active; see, e.g., BIBREF40 for a survey of various models and argumentation formalisms from the educational perspective, or BIBREF41 , which examines argumentation in the Semantic Web.
## Argumentation Mining
The argumentation mining field has been evolving rapidly in recent years, resulting in several workshops co-located with major NLP conferences. We first present related work with a focus on annotations and then review experiments on classifying argument components, schemes, or relations.
One of the first papers dealing with annotating argumentative discourse introduced Argumentative Zoning for scientific publications BIBREF42 . Later, Teufel.et.al.2009 extended the original 7 categories to 15 and annotated 39 articles from two domains, assigning a category to each sentence. The obtained Fleiss' κ was 0.71 and 0.65. In their approach, they deliberately ignored domain knowledge and relied only on the general, rhetorical, and logical aspects of the annotated texts. In contrast to our work, argumentative zoning is specific to scientific publications and has been developed solely for that task.
Reed.Rowe.2004 presented Araucaria, a tool for argumentation diagramming which supports convergent and linked arguments, missing premises (enthymemes), and refutations. They also released the AraucariaDB corpus, which was later used for experiments in the argumentation mining field. However, the creation of the dataset in terms of annotation guidelines and reliability is not reported; these limitations, as well as its rather small size, have been pointed out BIBREF10 .
Biran.Rambow.2011 identified justifications for subjective claims in blog threads and Wikipedia talk pages. The data were annotated with claims and their justifications, reaching an inter-annotator agreement of 0.69, but a detailed description of the annotation approach was missing.
[p. 1078]Schneider.et.al.2013b annotated Wikipedia talk pages about deletion using 17 of Walton's schemes BIBREF43 , reaching a moderate agreement (Cohen's κ of 0.48), and concluded that their analysis technique can be reused, although “it is intensive and difficult to apply.”
Stab.Gurevych.2014 annotated 90 argumentative essays (about 30k tokens) with claims, major claims, and premises and their relations (support, attack). They reached Krippendorff's α of 0.72 for argument components and 0.81 for relations between components.
Rosenthal2012 annotated sentences that are opinionated claims, in which the author expresses a belief that should be adopted by others. Two annotators labeled sentences as claims without any context and achieved Cohen's κ of 0.50 (2,000 sentences from LiveJournal) and 0.56 (2,000 sentences from Wikipedia).
Aharoni.et.al.2014 performed an annotation study in order to find context-dependent claims and three types of context-dependent evidence in Wikipedia articles related to 33 controversial topics. The claims and evidence were annotated in 104 articles. The average Cohen's κ within a group of 20 expert annotators was 0.40. Compared to our work, the linguistic properties of Wikipedia are qualitatively different from other user-generated content, such as blogs or user comments BIBREF44 .
Wacholder.et.al.2014 annotated “argument discourse units” in blog posts and criticized Krippendorff's α as a measure. They proposed a new inter-annotator metric that takes the most overlapping part of one annotation as the “core” and all annotations as a “cluster”. The data were extended by Ghosh2014, who annotated “targets” and “callouts” on top of the units.
Park.Cardie.2014 annotated about 10k sentences from 1,047 documents with four types of argument propositions, reaching Cohen's κ of 0.73 on 30% of the dataset. Only 7% of the sentences were found to be non-argumentative.
Faulkner2014 used Amazon Mechanical Turk to annotate 8,179 sentences from student essays. Three annotators decided whether the given sentence offered reasons for or against the main prompt of the essay (or no reason at all; 66% of the sentences were found to be neutral and easy to identify). The achieved Cohen's κ was 0.70.
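Most of the agreement figures above are chance-corrected. As a reference point, Cohen's κ compares the observed agreement between two annotators with the agreement expected from their individual label distributions. A minimal stdlib sketch of that computation (the label names in the example are purely illustrative):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators
    who labeled the same items (kappa = (p_o - p_e) / (1 - p_e))."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # expected chance agreement from the marginal label distributions
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

k = cohens_kappa(["claim", "claim", "none", "none"],
                 ["claim", "none", "none", "none"])  # p_o = 0.75, p_e = 0.5 → 0.5
```

Krippendorff's α, used by several of the studies above, generalizes this idea to multiple annotators, missing values, and (in its unitized variant) to span boundaries.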
Research has also been active on non-English datasets. Goudas.et.al.2014 focused on user-generated Greek texts. They selected 204 documents and manually annotated sentences that contained an argument (760 out of 16,000). They distinguished claims and premises, but the claims were always implicit. However, the annotation agreement was not reported, nor were the number of annotators or the guidelines. A study on the annotation of arguments was conducted by Peldszus.Stede.2013, who evaluated agreement among 26 “naive” annotators (annotators with very little training). They manually constructed 23 German short texts, each containing exactly one central claim, two premises, and one objection (rebuttal or undercut), and analyzed annotator agreement on this artificial dataset. Peldszus.2014 later achieved higher inter-rater agreement with expert annotators on an extended version of the same data. Kluge.2014 built a corpus of argumentative German Web documents, containing 79 documents on 7 educational topics, which were annotated by 3 annotators according to the claim-premise argumentation model. The corpus comprises 70,000 tokens and the inter-annotator agreement was 0.40 (Krippendorff's α). Houy.et.al.2013 targeted argumentation mining of German legal cases.
Table TABREF33 gives an overview of annotation studies with their respective argumentation model, domain, size, and agreement. It also contains other studies outside of computational linguistics and a few proposals and position papers.
Arguments in the legal domain were targeted in BIBREF11 . Using an argumentation formalism inspired by Walton.2012, they employed a multinomial Naive Bayes classifier and a maximum entropy model for classifying argumentative sentences in the AraucariaDB corpus BIBREF45 . The same test dataset was used by Feng.Hirst.2011, who utilized the C4.5 decision tree classifier. Rooney.et.al.2012 investigated the use of convolution kernel methods for classifying whether a sentence belongs to an argumentative element or not, using the same corpus.
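Several of the systems discussed here and below rely on a multinomial Naive Bayes classifier over word features. The following from-scratch sketch shows that technique applied to argumentative-sentence detection; the training sentences, labels, and class names are invented for illustration and do not reproduce any cited authors' implementation:

```python
import math
from collections import Counter

class MultinomialNB:
    """Multinomial Naive Bayes over unigrams with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        n = len(labels)
        self.priors = {c: math.log(labels.count(c) / n) for c in set(labels)}
        self.counts = {c: Counter() for c in self.priors}
        for doc, label in zip(docs, labels):
            self.counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.counts for w in self.counts[c]}
        return self

    def predict(self, doc):
        def log_posterior(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.priors[c] + sum(
                math.log((self.counts[c][w] + 1) / total)
                for w in doc.lower().split() if w in self.vocab)
        return max(self.priors, key=log_posterior)

# Toy training data: "arg" = argumentative, "none" = non-argumentative.
nb = MultinomialNB().fit(
    ["we should ban it because it is harmful",
     "prayer helps because it calms children",
     "the school opens at eight",
     "the bus arrives at nine"],
    ["arg", "arg", "none", "none"])
```

On this toy data, sentences containing reason-giving cues such as "because" are pulled towards the "arg" class, which is the intuition behind lexical features in the classifiers surveyed above.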
Stab.Gurevych.2014b classified sentences into four categories (none, major claim, claim, premise) using their previously annotated corpus BIBREF7 and reached a macro-F1 score of 0.72. In contrast to our work, their documents are expected to comply with a certain structure of argumentative essays and are assumed to always contain argumentation.
Biran.Rambow.2011 identified justifications at the sentence level using a naive Bayes classifier over a feature set based on statistics from the RST Treebank, namely n-grams that were manually pruned by deleting those that “seemed irrelevant, ambiguous or domain-specific.”
Llewellyn2014 experimented with classifying tweets into several argumentative categories, namely claims and counter-claims (with and without evidence) and verification inquiries previously annotated by Procter.et.al.2013. They used unigrams, punctuation, and POS as features in three classifiers.
Park.Cardie.2014 classified propositions into three classes (unverifiable, verifiable non-experimental, and verifiable experimental) and ignored non-argumentative texts. Using a multi-class SVM and a wide range of features (n-grams, POS, sentiment clue words, tense, person), they achieved a macro-F1 of 0.69.
Peldszus.2014 experimented with a rather complex labeling schema of argument segments, but their data were created artificially for the task and manually cleaned, e.g., by removing segments that did not meet the criteria or were non-argumentative.
In the first step of their two-phase approach, Goudas.et.al.2014 sampled the dataset to make it balanced and identified argumentative sentences with 0.77 accuracy using a maximum entropy classifier. For identifying premises, they used BIO encoding of tokens and achieved an F1 score of 0.42 using CRFs.
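The BIO encoding mentioned above maps span annotations onto per-token labels so that sequence models such as CRFs can be trained. A minimal sketch of the conversion (the tag name `PREM` and the example sentence are illustrative, not taken from the cited work):

```python
def bio_encode(tokens, spans, label="PREM"):
    """Convert component spans, given as (start, end) token offsets with an
    exclusive end, into per-token BIO tags: B- opens a span, I- continues it,
    and O marks tokens outside any span."""
    tags = ["O"] * len(tokens)
    for start, end in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = "Homeschooling works because parents know their children".split()
tags = bio_encode(tokens, [(2, 7)])  # premise: "because parents know their children"
```

The B-/I- distinction lets the decoder recover exact span boundaries even when two annotated spans are adjacent.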
Saint-Dizier.2012 developed a Prolog engine using a lexicon of 1300 words and a set of 78 hand-crafted rules with the focus on a particular argument structure “reasons supporting conclusions” in French.
Taking the dialogical perspective, Cabrio.Villata.2012 built upon an argumentation framework proposed by Dung.1995, which models arguments within a graph structure and provides a reasoning mechanism for resolving accepted arguments. For identifying support and attack, they relied on existing research on textual entailment BIBREF46 , namely the off-the-shelf EDITS system. The test data were taken from the debate portal Debatepedia and covered 19 topics. Evaluation was performed by measuring the acceptance of the “main argument” using the automatically recognized entailments, yielding an F1 score of about 0.75. In contrast to our work, which deals with micro-level argumentation, Dung's model is an abstract framework intended to model dialogical argumentation.
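Dung's framework reduces a debate to a set of abstract arguments and a binary attack relation, from which acceptable arguments are computed. As one concrete reasoning mechanism, the grounded extension (the most skeptical set of collectively acceptable arguments) is the least fixed point of the characteristic function. A small sketch under those standard definitions (the argument names are invented):

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework:
    least fixed point of F(S) = {a : every attacker of a is itself
    attacked by some member of S}, reached by iterating F from the
    empty set (F is monotone, so the iteration converges)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    s = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# A attacks B, B attacks C: A is unattacked and defends C against B.
ext = grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})
```

In Cabrio and Villata's setting, the attack relation is instantiated from automatically recognized (non-)entailment between debate posts, and acceptance of the main argument is then read off such extensions.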
Finding a bridge between existing discourse research and argumentation has been targeted by several researchers. Peldszus2013a surveyed the literature on argumentation and proposed utilizing Rhetorical Structure Theory (RST) BIBREF47 . They claimed that RST is by design well-suited for studying argumentative texts, but empirical evidence has not yet been provided. Penn Discourse Treebank (PDTB) BIBREF48 relations have also been under examination by argumentation mining researchers. Cabrio2013b examined a connection between five of Walton's schemes and discourse markers in the PDTB; however, an empirical evaluation is missing.
## Stance detection
Research related to argumentation mining also involves stance detection. In this case, the whole document (discussion post, article) is assumed to represent the writer's standpoint on the discussed topic. Since the topic is stated as a controversial question, the author is either for or against it.
Somasundaran.Wiebe.2009 built a computational model for recognizing stances in dual-topic debates about named entities in the electronic products domain by combining preferences learned from Web data and discourse markers from the PDTB BIBREF48 . Hasan.Ng.2013 determined stance in on-line ideological debates on four topics using data from createdebate.com, employing supervised machine learning and features ranging from n-grams to semantic frames. Predicting the stance of posts in Debatepedia as well as of external articles using a probabilistic graphical model was presented in BIBREF49 . This approach also employed sentiment lexicons and named entity recognition as a preprocessing step and achieved an accuracy of about 0.80 in binary prediction of stances in debate posts.
Recent research has involved joint modeling, taking into account information about the users, the dialog sequences, and other factors. Hasan.Ng.2012 proposed a machine learning approach to debate stance classification by leveraging contextual information and the author's stance towards the topic. Qiu.et.al.2013 introduced a computational debate side model to cluster posts or users by sides for general threaded discussions, using a generative graphical model employing words from various subjectivity lexicons as well as all adjectives and adverbs in the posts. Qiu.Jiang.2013 proposed a graphical model for viewpoint discovery in discussion threads. Burfoot.et.al.2011 exploited the informal citation structure in U.S. Congressional floor-debate transcripts and used collective classification, which outperforms methods that consider documents in isolation.
Some works also utilize argumentation-motivated features. Park.et.al.2011 dealt with contentious issues in Korean newswire discourse. Although they annotated the documents with “argument frames”, the formalism remains unexplained and does not refer to any existing research in argumentation. Walker.et.al.2012b incorporated features capturing some limited aspects of argument structure, such as cue words signaling rhetorical relations between posts, POS-generalized dependencies, and a representation of the parent post (context), to improve stance classification over 14 topics from convinceme.net.
## Online persuasion
Another stream of research has been devoted to persuasion in online media, which we consider a more general research topic than argumentation.
Schlosser.2011 investigated the persuasiveness of online reviews and concluded that presenting two sides is not always more helpful and can even be less persuasive than presenting one side. Mohammadi.et.al.2013 explored the persuasiveness of speakers in YouTube videos and concluded that people are perceived as more persuasive in video than in audio and text. Miceli.et.al.2006 proposed a computational model that attempts to integrate emotional and non-emotional persuasion. In the study of Murphy.2001, persuasiveness was assigned to 21 articles (out of 100 manually preselected), and four of them were later analyzed in detail to compare the perception of persuasion between experts and students. Bernard.et.al.2012 experimented with children's perception of discourse connectives (namely “because”) used to link statements in arguments and found that 4- and 5-year-olds as well as adults are sensitive to the connectives. Le.2004 presented a study of persuasive texts and argumentation in newspaper editorials in French.
A coarse-grained view on dialogs in social media was examined by Bracewell.et.al.2013, who proposed a set of 15 social acts (such as agreement, disagreement, or supportive behavior) to infer the social goals of dialog participants and presented a semi-supervised model for their classification. Their social act types were inspired by research in psychology and organizational behavior and were motivated by work in dialog understanding. They annotated a corpus in three languages using in-house annotators and achieved inter-annotator agreement ranging from 0.13 to 0.53.
Georgila.et.al.2011 focused on cross-cultural aspects of persuasion and argumentation dialogs. They developed a novel annotation scheme stemming from different literature sources on negotiation and argumentation as well as from their original analysis of the phenomena. The annotation scheme is claimed to cover three dimensions of an utterance, namely speech act, topic, and response or reference to a previous utterance. They annotated 21 dialogs and reached Krippendorff's α between 0.38 and 0.57.
Given the broad landscape of approaches to argument analysis and persuasion studies presented in this section, we would like to stress some novel aspects of the current article. First, we aim at adapting a model of argument based on both theoretical and empirical research by argumentation scholars. We pose several pragmatic constraints, such as register independence (generalization over several registers). Second, we place emphasis on reliable annotations and a sufficient data size (about 90k tokens). Third, we deal with fairly unrestricted Web-based sources, so an additional step of distinguishing whether the texts are argumentative at all is required. Argumentation mining has been a rapidly evolving field with several major venues in 2015. We encourage readers to consult an upcoming survey article by Lippi.Torroni.2016 or the proceedings of the 2nd Argumentation Mining workshop BIBREF50 to keep up with recent developments. However, to the best of our knowledge, the main findings of this article have not yet been made obsolete by any related work.
## Annotation studies and corpus creation
This section describes the process of data selection, annotation, curation, and evaluation with the goal of creating a new corpus suitable for argumentation mining research in the area of computational linguistics. As argumentation mining is an evolving discipline without established and widely-accepted annotation schemes, procedures, and evaluation, we want to keep this overview detailed to ensure full reproducibility of our approach. Given the wide range of perspectives on argumentation itself BIBREF15 , variety of argumentation models BIBREF27 , and high costs of discourse or pragmatic annotations BIBREF48 , creating a new, reliable corpus for argumentation mining represents a substantial effort.
A motivation for creating a new corpus stems from the various use-cases discussed in the introduction, as well as some research gaps pointed out in section SECREF1 and further discussed in the survey in section SECREF31 (e.g., domain restrictions, missing connections to argumentation theories, or unreported reliability and annotation scheme details).
## Topics and registers
As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing the child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes.
Since we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. Throughout this work, we will refer to each article, blog post, comment, or forum post as a document. This variety of sources covers mainly user-generated content, except for newswire articles, which are written by professionals and undergo an editing procedure by the publisher. Since many publishers also host blog-like sections on their portals, we consider as blog posts all content that is hosted on personal blogs or clearly belongs to a blog category within a newswire portal.
## Raw corpus statistics
Given the six controversial topics and four different registers, we compiled a collection of plain-text documents, which we call the raw corpus. It contains 694,110 tokens in 5,444 documents. As a coarse-grained analysis of the data, we examined the lengths and the number of paragraphs (see Figure FIGREF43 ). Comments and forum posts follow a similar distribution, being shorter than 300 tokens on average. By contrast, articles and blogs are longer than 400 tokens and have 9.2 paragraphs on average. The process of compiling the raw corpus and its further statistics are described in detail in Appendix UID158 .
## Annotation study 1: Identifying persuasive documents in forums and comments
The goal of this study was to select documents suitable for a fine-grained analysis of arguments. In a preliminary study on annotating argumentation using a small sample (50 random documents) of forum posts and comments from the raw corpus, we found that many documents convey no argumentation at all, even in discussions about controversies. We observed that such contributions do not intend to persuade; these documents typically contain story-sharing, personal worries, user interaction (asking questions, expressing agreement), off-topic comments, and others. Such characteristics are typical of online discussions in general, but they have not been examined with respect to argumentation or persuasion. Indeed, we observed that there are (1) documents that are completely unrelated and (2) documents that are related to the topic, but do not contain any argumentation. This issue has also been identified by argumentation theorists, for example as external relevance by Paglieri.Castelfranchia.2014. Similar findings were also confirmed in related literature in argumentation mining, however never tackled empirically BIBREF53 , BIBREF8 . These documents are thus not suitable for analyzing argumentation.
In order to filter documents that are suitable for argumentation annotation, we defined a binary document-level classification task that distinguishes persuasive documents from non-persuasive ones (the latter includes all other sorts of texts, such as off-topic comments, story sharing, unrelated dialog acts, etc.).
The two annotated categories were on-topic persuasive and non-persuasive. Three annotators with near-native English proficiency annotated a set of 990 documents (a random subset of comments and forum posts), reaching 0.59 Fleiss' INLINEFORM0 . The annotation study took on average 15 hours per annotator, with approximately 55 annotated documents per hour. The final labels were derived by majority voting. Out of 990 documents, 524 (53%) were labeled as on-topic persuasive. We will refer to this corpus as gold data persuasive.
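The chance-corrected agreement reported above can be reproduced from raw category counts. The following is a minimal illustrative sketch of Fleiss' kappa for a fixed number of raters (function and variable names are ours, not part of the study's tooling):

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa, where counts[i][j] is the number of raters that
    assigned item i to category j (same number of raters per item)."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item observed agreement P_i
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_items) / n_items
    # Chance agreement from the marginal category proportions
    n_categories = len(counts[0])
    p_cat = [
        sum(row[j] for row in counts) / (n_items * n_raters)
        for j in range(n_categories)
    ]
    p_e = sum(p * p for p in p_cat)
    return (p_bar - p_e) / (1 - p_e)

# Three raters, two categories (persuasive / non-persuasive):
perfect = [[3, 0], [0, 3]]   # full agreement on both documents
print(fleiss_kappa(perfect)) # 1.0

split = [[2, 1], [1, 2]]     # 2-vs-1 splits in opposite directions
print(fleiss_kappa(split))   # -0.333...
```

As the second call shows, systematic 2-vs-1 splits can even push the chance-corrected score below zero, which is why the raw percentage of agreeing votes alone would be misleading.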
We examined all disagreements between annotators and discovered some typical problems, such as implicitness or topic relevance. First, the authors often express their stance towards the topic implicitly, so it must be inferred by the reader. To do so, certain common-ground knowledge is required. However, such knowledge heavily depends on many aspects, such as the reader's familiarity with the topic or her cultural background, as well as the context of the source website or the discussion forum thread. This also applies to sarcasm and irony. Second, the decision whether a particular document is persuasive was always made with respect to the controversial topic under examination. Some authors shift the focus to a particular aspect of the given controversy or a related issue, making the document less relevant.
We achieved moderate agreement between the annotators, although the definition of persuasiveness might seem a bit fuzzy. We found different amounts of persuasion in the specific topics. For instance, prayer in schools or private vs. public schools attract persuasive discourse, while other discussed controversies, represented by redshirting and mainstreaming, often contain non-persuasive discussions. Although these two topics are also highly controversial, the participants of on-line discussions seem not to attempt to persuade but rather exchange information, support others in their decisions, etc. This was also confirmed by socio-psychological researchers. Ammari.et.al.2014 show that parents of children with special needs rely on discussion sites for accessing information and social support and that, in particular, posts containing humor, achievement, or treatment suggestions are perceived to be more socially appropriate than posts containing judgment, violence, or social comparisons. According to Nicholson.Leask.2012, parents of autistic children in the online forum were seen to understand the issue because they had lived it. Assuming that participants in discussions related to young kids (e.g., redshirting or mainstreaming) are usually females (mothers), gender can also play a role. In a study of online persuasion, Guadagno.Cialdini.2002 conclude that women chose to bond rather than compete (women feel more comfortable cooperating, even in a competitive environment), whereas men are motivated to compete if necessary to achieve independence.
## Annotation study 2: Annotating micro-structure of arguments
The goal of this study was to annotate documents on a detailed level with respect to an argumentation model. First, we will present the annotation scheme. Second, we will describe the annotation process. Finally, we will evaluate the agreement and draw some conclusions.
Given the theoretical background briefly introduced in section SECREF2 , we motivate our selection of the argumentation model by the following requirements. First, the scope of this work is to capture argumentation within a single document, thus focusing on micro-level models. Second, there should exist empirical evidence that such a model has been used for analyzing argumentation in previous works, so it is likely to be suitable for our purposes of argumentative discourse analysis in user-generated content. Regarding the first requirement, two typical examples of micro-level models are Toulmin's model BIBREF36 and Walton's schemes BIBREF55 . Let us now elaborate on the second requirement.
Walton's argumentation schemes are claimed to be general and domain independent. Nevertheless, evidence from the computational linguistics field shows that the schemes lack coverage for analyzing real argumentation in natural language texts. In examining real-world political argumentation from BIBREF56 , Walton.2012 found out that 37.1% of the arguments collected did not fit any of the fourteen schemes they chose, so they created new schemes ad hoc. Cabrio2013b selected five argumentation schemes from Walton and mapped these patterns to discourse relation categories in the Penn Discourse TreeBank (PDTB) BIBREF48 , but later had to define two new argumentation schemes that they discovered in PDTB. Similarly, Song.et.al.2014 admitted that the schemes are ambiguous and hard to apply directly for annotation; therefore they modified the schemes and created new ones that matched the data.
Although Macagno.Konstantinidou.2012 show several examples of two argumentation schemes applied to a few selected arguments in classroom experiments, empirical evidence presented by Anthony.Kim.2014 reveals many practical and theoretical difficulties of annotating dialogues with schemes in classroom deliberation. They provide many details on the arbitrary selection of the subset of the schemes and the ambiguity of the scheme definitions, and conclude that the presence of the authors during the experiment was essential for inferring and identifying the argument schemes BIBREF57 .
Although Toulmin's model (refer to section SECREF21 ) was designed to be applicable to real-life argumentation, there are numerous studies criticizing both the clarity of the model definition and the differentiation between its elements. Ball1994 claims that the model can be used only for the simplest arguments and fails on complex ones. Freeman1991 and other argumentation theorists also criticize the usefulness of Toulmin's framework for the description of real-life argumentative texts. However, others have advocated the model and claimed that it can be applied to people's ordinary argumentation BIBREF58 , BIBREF59 .
A number of studies (outside the field of computational linguistics) used Toulmin's model as their backbone argumentation framework. Chambliss1995 experimented with analyzing 20 written documents in a classroom setting in order to find the argument patterns and parts. Simosi2003 examined employees' argumentation to resolve conflicts. Voss2006 analyzed experts' protocols dealing with problem-solving.
The model has also been used in research on computer-supported collaborative learning. Erduran2004 adapt Toulmin's model for coding classroom argumentative discourse among teachers and students. Stegmann2011 builds on a simplified Toulmin's model for scripted construction of arguments in computer-supported collaborative learning. Garcia-Mila2013 coded utterances into categories from Toulmin's model in persuasion and consensus-reaching among students. Weinberger.Fischer.2006 analyze asynchronous discussion boards in which learners engage in an argumentative discourse with the goal of acquiring knowledge. For coding the argument dimension, they created a set of argumentative moves based on Toulmin's model. Given this empirical evidence, we decided to build upon Toulmin's model.
In this annotation task, a sequence of tokens (e.g., a phrase, a sentence, or any arbitrary text span) is labeled with a corresponding argument component (such as the claim, the grounds, and others). There are no explicit relations between these annotation spans, as the relations are implicitly encoded in the pragmatic function of the components in Toulmin's model.
In order to verify the suitability of Toulmin's model, we analyzed 40 random documents from the gold data persuasive dataset using the original Toulmin's model as presented in section SECREF21 . We took into account several criteria for assessment, such as the frequency of occurrence of the components or their importance for the task. We proposed some modifications of the model based on the following observations.
Authors do not state the degree of cogency (the probability of their claim, as proposed by Toulmin). Thus we omitted qualifier from the model due to its absence in the data.
The warrant as a logical explanation why one should accept the claim given the evidence is almost never stated. As pointed out by BIBREF37 , “data are appealed to explicitly, warrants implicitly.” This observation has also been made by Voss2006. Also, according to [p. 205]Eemeren.et.al.1987, the distinction of warrant is perfectly clear only in Toulmin’s examples, but the definitions fail in practice. We omitted warrant from the model.
Rebuttal is a statement that attacks the claim, thus playing the role of an opposing view. In reality, the authors often attack the presented rebuttals by another counter-rebuttal in order to keep the whole argument's position consistent. Thus we introduced a new component – refutation – which is used for attacking the rebuttal. Annotation of refutation was conditioned on the explicit presence of a rebuttal and enforced by the annotation guidelines. The chain rebuttal–refutation is also known as the procatalepsis figure in rhetoric, in which the speaker raises an objection to his own argument and then immediately answers it. By doing so, the speaker hopes to strengthen the argument by dealing with possible counter-arguments before the audience can raise them BIBREF43 .
The claim of the argument should always reflect the main standpoint with respect to the discussed controversy. We observed that this standpoint is not always explicitly expressed, but remains implicit and must be inferred by the reader. Therefore, we allow the claim to be implicit. In such a case, the annotators must explicitly write down the (inferred) stance of the author.
By definition, Toulmin's model is intended to model a single argument, with the claim at its center. However, we observed in our data that some authors elaborate on both sides of the controversy equally and put forward an argument for each side (by argument here we mean the claim and its premises, backings, etc.). Therefore we allow multiple arguments to be annotated in one document. At the same time, we restrained the annotators from creating complex argument hierarchies.
Toulmin's grounds have an equivalent role to a premise in the classical view on an argument BIBREF15 , BIBREF60 in terms that they offer the reasons why one should accept the standpoint expressed by the claim. As this terminology has been used in several related works in the argumentation mining field BIBREF7 , BIBREF61 , BIBREF62 , BIBREF11 , we will keep this convention and denote the grounds as premises.
One of the main critiques of the original Toulmin's model was the vague distinction between grounds, warrant, and backing BIBREF63 , BIBREF64 , BIBREF65 . The role of backing is to give additional support to the warrant, but there is no warrant in our model anymore. What we observed during the analysis, however, was the presence of some additional evidence. Such evidence does not play the role of the grounds (premises), as it is not meant as a reason supporting the claim, but it also does not explain the reasoning, thus it is not a warrant either. It usually supports the whole argument and is stated by the author as a certain fact. Therefore, we extended the scope of backing to be an additional support to the whole argument.
The annotators were instructed to distinguish between premises and backing, such that premises cover generally applicable reasons for the claim, whereas backing is a single personal experience or a statement that gives credibility or attributes certain expertise to the author. As a sanity check, the argument should still make sense after removing backing (it would only be considered “weaker”).
We call this model the modified Toulmin's model. It contains five argument components, namely claim, premise, backing, rebuttal, and refutation. When annotating a document, any arbitrary token span can be labeled with an argument component; the components do not overlap. The spans are not known in advance and the annotator thus chooses the span and the component type at the same time. All components are optional (they do not have to be present in the argument) except the claim, which is either explicit or implicit (see above). If a token span is not labeled by any argument component, it is not considered as a part of the argument and is later denoted as none (this category is not assigned by the annotators).
An example analysis of a forum post is shown in Figure FIGREF65 . Figure FIGREF66 then shows a diagram of the analysis from that example (the content of the argument components was shortened or rephrased).
The annotation experiment was split into three phases. All documents were annotated by three independent annotators, who participated in two training sessions. During the first phase, 50 random comments and forum posts were annotated. Problematic cases were resolved after discussion and the guidelines were refined. In the second phase, we wanted to extend the range of annotated registers, so we selected 148 comments and forum posts as well as 41 blog posts. After the second phase, the annotation guidelines were final.
In the final phase, we extended the range of annotated registers and added newswire articles from the raw corpus in order to test whether the annotation guidelines (and inherently the model) are general enough. Therefore we selected 96 comments/forum posts, 8 blog posts, and 8 articles for this phase. A detailed inter-annotator agreement study on documents from this final phase will be reported in section UID75 .
The annotations were very time-consuming. In total, each annotator spent 35 hours annotating over the course of five weeks. Discussions and consolidation of the gold data took another 6 hours. Comments and forum posts required on average 4 minutes per document to annotate, while blog posts and articles took on average 14 minutes per document. Examples of annotated documents from the gold data are listed in Appendix UID158 .
We discarded 11 documents out of the total 351 annotated documents. Five forum posts, although annotated as persuasive in the first annotation study, turned out at a closer look to be a mixture of two or more posts with missing quotations, and were therefore unsuitable for analyzing argumentation. Three blog posts and two articles were found not to be argumentative (the authors took no stance to the discussed controversy) and one article was an interview, which the current model cannot capture (a dialogical argumentation model would be required).
For each of the 340 documents, the gold standard annotations were obtained using the majority vote. If simple majority voting was not possible (different boundaries of the argument component together with a different component label), the gold standard was set after discussion among the annotators. We will refer to this corpus as the gold standard Toulmin corpus. The distribution of topics and registers in this corpus is shown in Table TABREF71 , and Table TABREF72 presents some lexical statistics.
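The majority-vote step above can be sketched at the token level; a minimal illustration (the real procedure also had to reconcile component boundaries, which simple label voting cannot resolve, hence the discussion fallback):

```python
from collections import Counter
from typing import List, Optional

def majority_vote(labels: List[str]) -> Optional[str]:
    """Return the label chosen by a simple majority of annotators,
    or None when there is no majority (to be resolved by discussion)."""
    top_label, count = Counter(labels).most_common(1)[0]
    return top_label if count > len(labels) / 2 else None

# Three annotators labeling the same token:
print(majority_vote(["premise", "premise", "backing"]))  # premise
print(majority_vote(["claim", "premise", "backing"]))    # None -> discuss
```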
Based on pre-studies, we set the minimal unit for annotation to the token. The documents were pre-segmented using the Stanford Core NLP sentence splitter BIBREF69 embedded in the DKPro Core framework BIBREF70 . Annotators were asked to stick to the sentence level by default and label entire pre-segmented sentences. They should switch to annotations on the token level only if (a) a particular sentence contained more than one argument component, or (b) the automatic sentence segmentation was wrong. Given the “noise” in user-generated Web data (wrong or missing punctuation, casing, etc.), this was often the case.
Annotators were also asked to rephrase (summarize) each annotated argument component into a simple statement when applicable, as shown in Figure FIGREF66 . This was used as a first sanity checking step, as each argument component is expected to be a coherent discourse unit. For example, if a particular occurrence of a premise cannot be summarized/rephrased into one statement, this may require further splitting into two or more premises.
For the actual annotations, we developed a custom-made web-based application that allowed users to switch between different granularities of argument components (tokens or sentences), to annotate the same document in different argument “dimensions” (logos and pathos), and to write a summary for each annotated argument component.
As a measure of annotation reliability, we rely on Krippendorff's unitized alpha ( INLINEFORM0 ) BIBREF71 . To the best of our knowledge, this is the only agreement measure that is applicable when both labels and boundaries of segments are to be annotated.
Although the measure has been used in related annotation works BIBREF61 , BIBREF7 , BIBREF72 , there is one important detail that has not been properly communicated. The INLINEFORM0 is computed over a continuum of the smallest units, such as tokens. This continuum corresponds to a single document in the original Krippendorff's work. However, there are two possible extensions to multiple documents (a corpus), namely (a) computing INLINEFORM1 for each document first and then reporting the average value, or (b) concatenating all documents into one large continuum and computing INLINEFORM2 over it. The first approach, with averaging, yielded an extremely high standard deviation of INLINEFORM3 (e.g., avg. = 0.253; std. dev. = 0.886; median = 0.476 for the claim). This suggests that some documents are easy to annotate while others are harder, but the interpretation of such an averaged value has no support either in BIBREF71 or in other papers based upon it. Thus we use the other methodology and treat the whole corpus as a single long continuum (which, in the example of the claim, yields INLINEFORM4 = 0.541).
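The difference between the two aggregation strategies can be illustrated with the plain nominal variant of Krippendorff's alpha (a simplification: the unitized variant used in the study additionally scores segment boundaries; the toy data below is ours):

```python
from collections import defaultdict
from itertools import permutations
from typing import Dict, List, Tuple

def nominal_alpha(units: List[List[str]]) -> float:
    """Krippendorff's alpha for nominal data; units[i] holds the labels
    the annotators assigned to unit i (e.g., a token)."""
    coincidence: Dict[Tuple[str, str], float] = defaultdict(float)
    for values in units:
        m = len(values)
        if m < 2:
            continue  # units with fewer than two labels carry no information
        for c, k in permutations(values, 2):
            coincidence[(c, k)] += 1.0 / (m - 1)
    totals: Dict[str, float] = defaultdict(float)
    for (c, _), v in coincidence.items():
        totals[c] += v
    n = sum(totals.values())
    observed = sum(v for (c, k), v in coincidence.items() if c != k)
    expected = sum(totals[c] * totals[k]
                   for c in totals for k in totals if c != k)
    return 1.0 - (n - 1) * observed / expected

doc_a = [["Claim", "Claim"], ["Premise", "Premise"]]  # full agreement
doc_b = [["Claim", "Premise"], ["Claim", "Premise"]]  # full disagreement

# (a) average of per-document alphas vs. (b) one concatenated continuum:
avg = (nominal_alpha(doc_a) + nominal_alpha(doc_b)) / 2
concat = nominal_alpha(doc_a + doc_b)
print(avg, concat)  # 0.25 0.125
```

Even on this two-document toy corpus the two strategies disagree (0.25 vs. 0.125), which is why the choice of aggregation must be reported explicitly.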
Table TABREF77 shows the inter-annotator agreement as measured on documents from the last annotation phase (see section UID67 ). The overall INLINEFORM0 for all register types, topics, and argument components is 0.48 in the logos dimension (annotated with the modified Toulmin's model). Such agreement can be considered as moderate by the measures proposed by Landis.Koch.1977, however, direct interpretation of the agreement value lacks consensus BIBREF54 . Similar inter-annotator agreement numbers were achieved in the relevant works in argumentation mining (refer to Table TABREF33 in section SECREF31 ; although most of the numbers are not directly comparable, as different inter-annotator metrics were used on different tasks).
There is a huge difference in INLINEFORM0 regarding the registers between comments + forum posts ( INLINEFORM1 0.60, Table TABREF77 a) and articles + blog posts ( INLINEFORM2 0.09, Table TABREF77 b) in the logos dimension. If we break down the value with respect to the individual argument components, the agreement on claim and premise is substantial in the case of comments and forum posts (0.59 and 0.69, respectively). By contrast, these argument components were annotated only with a fair agreement in articles and blog posts (0.22 and 0.24, respectively).
As can be also observed from Table TABREF77 , the annotation agreement in the logos dimension varies regarding the document topic. While it is substantial/moderate for prayer in schools (0.68) or private vs. public schools (0.44), for some topics it remains rather slight, such as in the case of redshirting (0.14) or mainstreaming (0.08).
First, we examine the disagreement in annotations by posing the following research question: are there any measurable properties of the annotated documents that might systematically cause low inter-annotator agreement? We use Pearson's correlation coefficient between INLINEFORM0 on each document and the particular property under investigation. We investigated the following set of measures.
Full sentence coverage ratio represents a ratio of argument component boundaries that are aligned to sentence boundaries. The value is 1.0 if all annotations in the particular document are aligned to sentences and 0.0 if no annotations match the sentence boundaries. Our hypothesis was that automatic segmentation to sentences was often incorrect, therefore annotators had to switch to the token level annotations and this might have increased disagreement on boundaries of the argument components.
Document length, paragraph length and average sentence length. Our hypothesis was that the length of documents, paragraphs, or sentences negatively affects the agreement.
Readability measures. We tested four standard readability measures, namely ARI BIBREF73 , Coleman-Liau BIBREF74 , Flesch BIBREF75 , and Lix BIBREF76 , to find out whether the readability of the documents plays any role in annotation agreement.
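For illustration, the Flesch measure combines average sentence length with average word length in syllables. A minimal sketch follows; the syllable counter is a deliberately crude vowel-group heuristic of our own, whereas the study relied on the standard published formulas:

```python
def flesch_reading_ease(n_words: int, n_sentences: int,
                        n_syllables: int) -> float:
    """Flesch reading-ease score from raw counts; higher means easier."""
    return (206.835
            - 1.015 * (n_words / n_sentences)
            - 84.6 * (n_syllables / n_words))

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels.
    Real implementations use dictionaries or better rules."""
    groups, previous_was_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in "aeiouy"
        if is_vowel and not previous_was_vowel:
            groups += 1
        previous_was_vowel = is_vowel
    return max(groups, 1)

print(count_syllables("education"))    # 4
print(flesch_reading_ease(10, 1, 15))  # 69.785
```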
Correlation results are listed in Table TABREF82 . We observed the following statistically significant ( INLINEFORM0 ) correlations. First, document length negatively correlates with agreement in comments: the longer the comment, the lower the agreement. Second, average paragraph length negatively correlates with agreement in blog posts: the longer the paragraphs in blogs, the lower the agreement. Third, all readability scores negatively correlate with agreement in the public vs. private school domain, meaning that the more complicated the text is in terms of readability, the lower the agreement. We observed no significant correlation for the sentence coverage and average sentence length measures. We cannot draw any general conclusion from these results, but we can state that some registers and topics, given their properties, are more challenging to annotate than others.
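The correlation analysis itself can be sketched as follows; the per-document values below are purely hypothetical placeholders, not the study's actual data:

```python
import math
from typing import List

def pearson_r(x: List[float], y: List[float]) -> float:
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-document lengths and agreement values:
doc_lengths = [120.0, 250.0, 400.0, 610.0]
alphas = [0.71, 0.55, 0.40, 0.21]
print(pearson_r(doc_lengths, alphas))  # close to -1: longer, lower agreement
```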
Another qualitative analysis of disagreements between annotators was performed by constructing a probabilistic confusion matrix BIBREF77 on the token level. The biggest disagreements, as can be seen in Table TABREF85 , are caused by rebuttal and refutation being confused with none (0.27 and 0.40, respectively). This is another sign that these two argument components were very hard to annotate. As shown in Table TABREF77 , the INLINEFORM5 was also low – 0.08 for rebuttal and 0.17 for refutation.
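A row-normalized token-level confusion matrix of this kind can be built in a few lines; this is a simplified two-annotator sketch with made-up labels (the cited probabilistic variant generalizes to more annotators):

```python
from collections import Counter, defaultdict
from typing import Dict, List

def confusion_matrix(ann1: List[str],
                     ann2: List[str]) -> Dict[str, Dict[str, float]]:
    """Row-normalized token-level confusion matrix: for each label of the
    first annotator, the distribution over the second annotator's labels."""
    pair_counts = Counter(zip(ann1, ann2))
    row_totals = Counter(ann1)
    matrix: Dict[str, Dict[str, float]] = defaultdict(dict)
    for (a, b), count in pair_counts.items():
        matrix[a][b] = count / row_totals[a]
    return dict(matrix)

# Five tokens labeled by two annotators:
ann1 = ["Rebuttal", "Rebuttal", "Rebuttal", "None", "Premise"]
ann2 = ["Rebuttal", "None", "None", "None", "Premise"]
m = confusion_matrix(ann1, ann2)
print(m["Rebuttal"])  # rebuttal tokens mostly end up as None
```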
We analyzed the annotations and found the following phenomena that usually caused disagreements between annotators.
Each argument component (e.g., premise or backing) should express one consistent and coherent piece of information, for example a single reason in case of the premise (see Section UID73 ). However, the decision whether a longer text should be kept as a single argument component or segmented into multiple components is subjective and highly text-specific.
While rhetorical questions have been researched extensively in linguistics BIBREF78 , BIBREF79 , BIBREF80 , BIBREF81 , their role in argumentation represents a substantial research question BIBREF82 , BIBREF83 , BIBREF84 , BIBREF85 , BIBREF86 . Teninbaum.2011 provides a brief history of rhetorical questions in persuasion. In short, rhetorical questions should provoke the reader. From the perspective of our argumentation model, rhetorical questions might fall both into the logos dimension (and thus be labeled as, e.g., claim, premise, etc.) or into the pathos dimension (refer to Section SECREF20 ). Again, the decision is usually not clear-cut.
As introduced in section UID55 , rebuttal attacks the claim by presenting an opponent's view. In most cases, the rebuttal is again attacked by the author using refutation. From the pragmatic perspective, refutation thus supports the author's stance expressed by the claim. Therefore, it can be easily confused with premises, as the function of both is to provide support for the claim. Refutation thus only takes place if it is meant as a reaction to the rebuttal. It follows the discussed matter and contradicts it. Such a discourse is usually expressed as:
[claim: My claim.] [rebuttal: On the other hand, some people claim XXX which makes my claim wrong.] [refutation: But this is not true, because of YYY.]
However, the author might also take the following defensible approach to formulate the argument:
[rebuttal: Some people claim XXX-1 which makes my claim wrong.] [refutation: But this is not true, because of YYY-1.] [rebuttal: Some people claim XXX-2 which makes my claim wrong.] [refutation: But this is not true, because of YYY-2.] [claim: Therefore my claim.]
If this argument is formulated without stating the rebuttals, it would be equivalent to the following:
[premise: YYY-1.] [premise: YYY-2.] [claim: Therefore my claim.]
This example shows that rebuttal and refutation represent a rhetorical device for producing arguments, but the distinction between refutation and premise is context-dependent, and on the functional level both premise and refutation have a very similar role – to support the author's standpoint. Although introducing dialogical moves into a monological model, with the practical consequences described above, can be seen as a shortcoming of our model, this rhetorical figure has been identified by argumentation researchers as procatalepsis BIBREF43 . A broader view on incorporating opposing views (or the lack thereof) is discussed under the term confirmation bias by BIBREF21 , who claim that “[...] people are trying to convince others. They are typically looking for arguments and evidence to confirm their own claim, and ignoring negative arguments and evidence unless they anticipate having to rebut them.” The dialectical attack of possible counter-arguments may thus strengthen one's own argument.
One possible solution would be to refrain from capturing this phenomenon completely and to simplify the model to claims and premises, for instance. However, the following example would then miss an important piece of information, as the last two clauses would be left un-annotated. At the same time, annotating the last clause as premise would be misleading, because it does not support the claim directly (it supports it only indirectly by attacking the rebuttal; such indirect support is considered an admissible extension of an abstract argument graph by BIBREF87 ).
Doc#422 (forumpost, homeschooling) [claim: I try not to be anti-homeschooling, but... it's just hard for me.] [premise: I really haven't met any homeschoolers who turned out quite right, including myself.] I apologize if what I'm saying offends any of you - that's not my intention, [rebuttal: I know that there are many homeschooled children who do just fine,] but [refutation: that hasn't been my experience.]
To the best of our knowledge, these context-dependent dialogical properties of argument components using Toulmin's model have not been solved in the literature on argumentation theory and we suggest that these observations should be taken into account in the future research in monological argumentation.
Appeals to emotion, sarcasm, irony, and jokes are common in argumentation in user-generated Web content. We also observed documents in our data that were purely sarcastic (the pathos dimension), so that a logical analysis of the argument (the logos dimension) would make no sense. However, given the structure of such documents, some claims or premises might still be identified. Such an argument is a typical example of fallacious argumentation, which intentionally pretends to present a valid argument but whose persuasive force is conveyed purely by, for example, appealing to the emotions of the reader BIBREF88 .
We present some statistics of the annotated data that are important from the argumentation research perspective. Regardless of the register, 48% of claims are implicit. This means that the authors assume that their standpoint towards the discussed controversy can be inferred by the reader and give only reasons for that standpoint. Also, explicit claims are mostly written just once; in only 3% of the documents was the claim rephrased and stated multiple times.
In 6% of the documents, the reasons for an implicit claim are given only in the pathos dimension, making the argument purely persuasive without logical argumentation.
The “myside bias”, defined as a bias against information supporting another side of an argument BIBREF89 , BIBREF90 , can be observed in the presence of rebuttals to the author's claim or in the formulation of arguments for both sides when the overall stance is neutral. While 85% of the documents do not consider any opposing side, only 8% of the documents present a rebuttal, which is then attacked by a refutation in 4% of the documents. Multiple rebuttals and refutations were found in 3% of the documents. Only 4% of the documents were overall neutral and presented arguments for both sides, mainly in blog posts.
We were also interested whether mitigating linguistic devices are employed in the annotated arguments, namely in their main stance-taking components, the claims. Such devices typically include parenthetical verbs, syntactic constructions, token agreements, hedges, challenge questions, discourse markers, and tag questions, among others BIBREF91 . In particular, [p. 1]Kaltenbock.et.al.2010 define hedging as a discourse strategy that reduces the force or truth of an utterance and thus reduces the risk a speaker runs when uttering a strong or firm assertion or other speech act. We manually examined the use of hedging in the annotated claims.
Our main observation is that hedging is used differently across topics. For instance, about 30-35% of claims in homeschooling and mainstreaming signal the lack of a full commitment to the expressed stance, in contrast to prayer in schools (15%) or public vs. private schools (about 10%). Typical hedging cues include speculations and modality (“If I have kids, I will probably homeschool them.”), statements as neutral observations (“It's not wrong to hold the opinion that in general it's better for kids to go to school than to be homeschooled.”), or weasel phrases BIBREF92 (“In some cases, inclusion can work fantastically well.”, “For the majority of the children in the school, mainstream would not have been a suitable placement.”).
On the other hand, most claims used, for instance, in the prayer in schools arguments are very direct, with no attempt to diminish the commitment to the conveyed belief (for example, “NO PRAYER IN SCHOOLS!... period.”, “Get it out of public schools”, “Pray at home.”, or “No organized prayers or services anywhere on public school board property - FOR ANYONE.”). Moreover, some claims are clearly offensive, persuading by direct imperative clauses towards the opponents/audience (“TAKE YOUR KIDS PRIVATE IF YOU CARE AS I DID”, “Run, don't walk, to the nearest private school.”) or even accusing the opponents of taking a certain stance (“You are a bad person if you send your children to private school.”).
These observations are consistent with the findings from the first annotation study on persuasion (see section UID48 ), namely that some topics attract heated argumentation in which participants take very clear and firm standpoints (such as prayer in schools or private vs. public schools), while discussions about other topics are rather milder. It has been shown that the choices a speaker makes to express a position are informed by their social and cultural background, as well as their ability to speak the language BIBREF93 , BIBREF94 , BIBREF91 . However, given the uncontrolled settings of the user-generated Web content, we cannot draw any similar conclusions in this respect.
We investigated premises across all topics in order to find the type of support used in the argument. We followed the approach of Park.Cardie.2014, who distinguished three types of propositions in their study, namely unverifiable, verifiable non-experiential, and verifiable experiential.
Verifiable non-experiential and verifiable experiential propositions, unlike unverifiable propositions, contain an objective assertion, where objective means “expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations.” Such assertions have truth values that can be proved or disproved with objective evidence; the correctness of the assertion or the availability of the objective evidence does not matter BIBREF8 . A verifiable proposition can further be distinguished as experiential or not, depending on whether it is about the writer's personal state or experience or about something non-experiential. Verifiable experiential propositions, sometimes referred to as anecdotal evidence, provide the novel knowledge that readers are seeking BIBREF8 .
Table TABREF97 shows the distribution of the premise types with examples for each topic from the annotated corpus. As can be seen in the first row, arguments in prayer in schools contain a majority (73%) of unverifiable premises. Closer examination reveals that their content varies from general vague propositions to obvious fallacies, such as hasty generalization, straw man, or slippery slope. As Nieminen.Mustonen.2014 found, fallacies are very common in argumentation about religion-related issues. On the other side of the spectrum, arguments about redshirting rely mostly on anecdotal evidence (61% of verifiable experiential propositions). We will discuss the phenomenon of narratives in argumentation in more detail later in section UID98 . All the topics except private vs. public schools exhibit similar amounts of verifiable non-experiential premises (9%–22%), usually referring to expert studies or facts. However, this type of premise usually has the lowest frequency.
Manually analyzing argumentative discourse and reconstructing (annotating) the underlying argument structure and its components is difficult. As [p. 267]Reed2006 point out, “the analysis of arguments is often hard, not only for students, but for experts too.” According to [p. 81]Harrell.2011b, argumentation is a skill and “even for simple arguments, untrained college students can identify the conclusion but without prompting are poor at both identifying the premises and how the premises support the conclusion.” [p. 81]Harrell.2011 further claims that “a wide literature supports the contention that the particular skills of understanding, evaluating, and producing arguments are generally poor in the population of people who have not had specific training and that specific training is what improves these skills.” Some studies, for example, show that students perform significantly better on reasoning tasks when they have learned to identify premises and conclusions BIBREF95 or have learned some standard argumentation norms BIBREF96 .
One particular extra challenge in analyzing argumentation in Web user-generated discourse is that the authors produce their texts most likely without any existing argumentation theory or model in mind. We assume that argumentation or persuasion is inherent when users discuss controversial topics, but the true reasons why people participate in on-line communities and what drives their behavior are another research question BIBREF97 , BIBREF98 , BIBREF99 , BIBREF100 . When the analyzed texts have a clear intention to produce argumentative discourse, such as in argumentative essays BIBREF7 , the argumentation is much more explicit and a substantially higher inter-annotator agreement can be achieved.
The model seems to be suitable for short persuasive documents, such as comments and forum posts. Its applicability to longer documents, such as articles or blog posts, is problematic for several reasons.
The argument components of the (modified) Toulmin's model and their roles are not expressive enough to capture argumentation that not only conveys the logical structure (in terms of reasons put forward to support the claim) but also relies heavily on rhetorical power. This involves various stylistic devices, pervading narratives, direct and indirect speech, or interviews. While in some cases the argument components are easily recognizable, the vast majority of the discourse in articles and blog posts does not correspond to any distinguishable argumentative function in the logos dimension. As the purpose of such discourse relates more to rhetoric than to argumentation, an unambiguous analysis of such phenomena goes beyond the capabilities of the current argumentation model. For a discussion of metaphors in Toulmin's model of argumentation see, e.g., BIBREF102 , BIBREF103 .
Articles without a clear standpoint towards the discussed controversy cannot be easily annotated with the model either. Although the matter is viewed from both sides and there might be reasons presented for either of them, the overall persuasive intention is missing and fitting such data to the argumentation framework causes disagreements. One solution might be to break the document down to paragraphs and annotate each paragraph separately, examining argumentation on a different level of granularity.
As introduced in section SECREF20 , an argument has several dimensions. Toulmin's model focuses solely on the logos dimension. We decided to ignore the ethos dimension, because dealing with the author's credibility remains unclear, given the variety of the source Web data. However, exploiting the pathos dimension of an argument is prevalent in the Web data, for example as an appeal to emotions. Therefore we experimented with annotating appeal to emotions as a separate category, independent of the components in the logos dimension, and provided the annotators with guidelines for distinguishing it. Figurative language such as hyperbole, sarcasm, or obvious exaggeration to “spice up” the argument is a typical sign of pathos. In an extreme case, the whole argument might be purely emotional, as in the following example.
Doc#1698 (comment, prayer in schools) [app-to-emot: Prayer being removed from school is just the leading indicator of a nation that is ‘Falling Away’ from Jehovah. [...] And the disasters we see today are simply God’s finger writing on the wall: Mene, mene, Tekel, Upharsin; that is, God has weighed America in the balances, and we’ve been found wanting. No wonder 50 million babies have been aborted since 1973. [...]]
We kept annotations on the pathos dimension as simple as possible (with only a single appeal to emotions label), but the resulting agreement was unsatisfying ( INLINEFORM0 0.30) even after several annotation iterations. Appeal to emotions is considered a type of fallacy BIBREF104 , BIBREF18 . Given the results, we assume that a more carefully designed approach to fallacy annotation should be applied. To the best of our knowledge, there has been very little research on modeling fallacies on the discourse level similarly to arguments BIBREF105 . The question of in what detail and structure fallacies should be annotated therefore remains open. For the rest of the paper, we thus focus solely on the logos dimension.
Some of the educational topics under examination relate to young children (e.g., redshirting or mainstreaming); therefore we assume that the majority of participants in the discussions are their parents. We observed that many documents related to these topics contain narratives. Sometimes the storytelling is meant as support for the argument, but in other documents the narrative has no intention to persuade and is simply story sharing.
There is no widely accepted theory of the role of narratives among argumentation scholars. According to Fisher.1987, humans are storytellers by nature, and the “reason” in argumentation is therefore better understood in and through narratives. He found that good reasons often take the form of narratives. Hoeken.Fikkers.2014 investigated how the integration of explicit argumentative content into narratives influences issue-relevant thinking and concluded that identifying with a character who is in favor of the issue yielded a more positive attitude toward the issue. In recent research, Bex.2011 proposes an argumentative-narrative model of reasoning with evidence, further elaborated in BIBREF106 ; Niehaus.et.al.2012 also propose a computational model of narrative persuasion.
Stemming from another research field, LeytonEscobar2014 found that online community members who use and share narratives have higher participation levels and that narratives are useful tools to build cohesive cultures and increase participation. Betsch.et.al.2010 examined influencing vaccine intentions among parents and found that narratives carry more weight than statistics.
## Summary of annotation studies
This section described two annotation studies that deal with argumentation in user-generated Web content at different levels of detail. In section SECREF44 , we argued for the need of a document-level distinction of persuasiveness. We annotated 990 comments and forum posts, reaching moderate inter-annotator agreement (Fleiss' INLINEFORM0 0.59). Section SECREF51 motivated the selection of a model for micro-level argument annotation, proposed its extension based on pre-study observations, and outlined the annotation set-up. This annotation study resulted in 340 documents annotated with the modified Toulmin's model and reached moderate inter-annotator agreement in the logos dimension (Krippendorff's INLINEFORM1 0.48). These results make the annotated corpora suitable for training and evaluating computational models, and each of these two annotation studies has its experimental counterpart in the following section.
## Experiments
This section presents experiments conducted on the annotated corpora introduced in section SECREF4 . We put the main focus on identifying argument components in the discourse. To comply with the machine learning terminology, in this section we will use the term domain as an equivalent to a topic (remember that our dataset includes six different topics; see section SECREF38 ).
We evaluate three different scenarios. First, we report ten-fold cross validation over a random ordering of the entire data set. Second, we deal with in-domain ten-fold cross validation for each of the six domains. Third, in order to evaluate the domain portability of our approach, we train the system on five domains and test on the remaining one for all six domains (which we report as cross-domain validation).
## Identification of argument components
In the following experiment, we focus on automatic identification of arguments in the discourse. Our approach is based on supervised and semi-supervised machine learning methods on the gold data Toulmin dataset introduced in section SECREF51 .
An argument consists of different components (such as premises, backing, etc.) which are implicitly linked to the claim. In principle one document can contain multiple independent arguments. However, only 4% of the documents in our dataset contain arguments for both sides of the issue. Thus we simplify the task and assume there is only one argument per document.
Given the low inter-annotator agreement on the pathos dimension (Table TABREF77 ), we focus solely on recognizing the logical dimension of the argument. The pathos dimension of the argument remains an open problem, both for proper modeling and for its subsequent recognition.
Since the smallest annotation unit is a token and the argument components do not overlap, we approach identification of argument components as a sequence labeling problem. We use the BIO encoding, so each token belongs to one of the following 11 classes: O (not a part of any argument component), Backing-B, Backing-I, Claim-B, Claim-I, Premise-B, Premise-I, Rebuttal-B, Rebuttal-I, Refutation-B, Refutation-I. This is the minimal encoding that is able to distinguish two adjacent argument components of the same type. In our data, 48% of all adjacent argument components of the same type are direct neighbors (there are no "O" tokens in between).
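As a minimal illustration of this encoding, the following sketch converts token-indexed component spans into BIO labels; the span format and the example sentence are assumptions made for this illustration, not the paper's actual data format.

```python
# Sketch: convert token-indexed argument component spans into BIO labels.
# The (start, end, type) span format is an assumption for this example.

def to_bio(num_tokens, spans):
    """spans: list of (start, end_exclusive, component_type) over token indices."""
    labels = ["O"] * num_tokens
    for start, end, component in spans:
        labels[start] = component + "-B"   # beginning of the component
        for i in range(start + 1, end):
            labels[i] = component + "-I"   # inside the component
    return labels

tokens = "Homeschooling hurts kids because they miss socialization entirely".split()
print(to_bio(len(tokens), [(0, 3, "Claim"), (3, 8, "Premise")]))
# → ['Claim-B', 'Claim-I', 'Claim-I', 'Premise-B', 'Premise-I',
#    'Premise-I', 'Premise-I', 'Premise-I']
```

Note that two adjacent components of the same type remain distinguishable because the second one starts with a fresh -B tag, which is exactly why a plain IO encoding would not suffice here.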
We report Macro- INLINEFORM0 score and INLINEFORM1 scores for each of the 11 classes as the main evaluation metric. This evaluation is performed on the token level, and for each token the predicted label must exactly match the gold data label (classification of tokens into 11 classes).
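The metric can be sketched as follows; the tiny gold and predicted sequences are invented purely for illustration.

```python
# Sketch: token-level per-class F1, averaged over classes (macro-F1).
# A class absent from both gold and prediction contributes an F1 of 0 here;
# how such classes are treated is a design choice of the evaluation.

def macro_f1(gold, pred, classes):
    f1_scores = []
    for c in classes:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

gold = ["O", "Claim-B", "Claim-I", "O"]
pred = ["O", "Claim-B", "O", "O"]
print(macro_f1(gold, pred, ["O", "Claim-B", "Claim-I"]))  # approximately 0.6
```

Because every token counts towards exactly one of the 11 classes, a boundary error (e.g., Claim-I predicted as O) is penalized just as heavily as a wrong component type.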
As instances for the sequence labeling model, we chose sentences rather than tokens. During our initial experiments, we observed that building a sequence labeling model for recognizing argument components as sequences of tokens is too fine-grained, as a single token does not convey enough information that could be encoded as features for a machine learner. However, as discussed in section UID73 , the annotations were performed on data pre-segmented into sentences, and annotating tokens was necessary only when the sentence segmentation was wrong or one sentence contained multiple argument components. Our corpus consists of 3899 sentences, of which 2214 (57%) contain no argument component. Of the remaining ones, only 50 sentences (1%) contain more than one argument component. Although in 19 cases (0.5%) the sentence contains a Claim-Premise pair, which is an important distinction from the argumentation perspective, given the overall small number of such occurrences we simplify the task by treating each sentence as having either one argument component or none.
The approximation with sentence-level units is illustrated by the example in Figure FIGREF112 . In order to evaluate the expected performance loss of this approximation, we used an oracle that always predicts the correct label for each unit (sentence) and evaluated it against the true labels (recall that the evaluation against the true gold labels is always done on the token level). We lose only about 10% of the macro INLINEFORM0 score (0.906) and only about 2% of accuracy (0.984). This loss is acceptable, while allowing us to model sequences whose minimal unit is a sentence.
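Such an oracle evaluation can be sketched as follows: each sentence receives the majority component type of its gold tokens, which is then expanded back to token-level BIO labels and scored against the original gold labels. The toy labels and sentence boundaries below are invented for illustration.

```python
# Sketch: measure the loss introduced by the sentence-level approximation.
from collections import Counter

def strip_bio(label):
    return "O" if label == "O" else label.rsplit("-", 1)[0]

def oracle_approximation(gold_token_labels, sentence_boundaries):
    """Assign each sentence the majority component type of its gold token
    labels, then expand back to BIO token labels (B on the first token)."""
    approx = []
    for start, end in sentence_boundaries:
        comps = [strip_bio(l) for l in gold_token_labels[start:end]]
        comp = Counter(comps).most_common(1)[0][0]
        if comp == "O":
            approx.extend(["O"] * (end - start))
        else:
            approx.extend([comp + "-B"] + [comp + "-I"] * (end - start - 1))
    return approx

gold = ["Claim-B", "Claim-I", "O", "Premise-B", "Premise-I", "Premise-I"]
approx = oracle_approximation(gold, [(0, 3), (3, 6)])
accuracy = sum(a == g for a, g in zip(approx, gold)) / len(gold)
```

The only errors such an oracle can make come from sentences whose gold tokens mix several labels (here, the "O" token inside the first sentence), which is why the measured loss on the real corpus is small.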
Table TABREF114 shows the distribution of the classes in the gold data Toulmin, where the labeling was already mapped to the sentences. The scarce presence of rebuttal and refutation (these 4 classes account for only 3.4% of the data) makes this dataset very unbalanced.
We chose SVMhmm BIBREF111 implementation of Structural Support Vector Machines for sequence labeling. Each sentence ( INLINEFORM0 ) is represented as a vector of real-valued features.
We defined the following feature sets:
FS0: Baseline lexical features
word uni-, bi-, and tri-grams (binary)
FS1: Structural, morphological, and syntactic features
First and last 3 tokens. Motivation: these tokens may contain discourse markers or other indicators for argument components, such as “therefore” and “since” for premises or “think” and “believe” for claims.
Relative position in paragraph and relative position in document. Motivation: We expect that claims are more likely to appear at the beginning or at the end of the document.
Number of POS 1-3 grams, dependency tree depth, constituency tree production rules, and number of sub-clauses. Based on BIBREF113 .
FS2: Topic and sentiment features
30 features taken from a vector representation of the sentence obtained by using Gibbs sampling on LDA model BIBREF114 , BIBREF115 with 30 topics trained on unlabeled data from the raw corpus. Motivation: Topic representation of a sentence might be valuable for detecting off-topic sentences, namely non-argument components.
Scores for five sentiment categories (from very negative to very positive) obtained from Stanford sentiment analyzer BIBREF116 . Motivation: Claims usually express opinions and carry sentiment.
FS3: Semantic, coreference, and discourse features
Binary features from Clear NLP Semantic Role Labeler BIBREF117 . Namely, we extract agent, predicate + agent, predicate + agent + patient + (optional) negation, argument type + argument value, and discourse marker, which are based on PropBank semantic role labels. Motivation: Capturing the semantics of the sentences.
Binary features from Stanford Coreference Chain Resolver BIBREF118 , e.g., presence of the sentence in a chain, transition type (i.e., nominal–pronominal), distance to previous/next sentences in the chain, or number of inter-sentence coreference links. Motivation: Presence of coreference chains indicates links outside the sentence and thus may be informative, for example, for classifying whether the sentence is a part of a larger argument component.
Results of a PDTB-style discourse parser BIBREF119 , namely the type of discourse relation (explicit, implicit), presence of discourse connectives, and attributions. Motivation: It has been claimed that discourse relations play a role in argumentation mining BIBREF120 .
FS4: Embedding features
300 features from word embedding vectors using word embeddings trained on part of the Google News dataset BIBREF121 . In particular, we sum up the embedding vectors (dimensionality 300) of each word, resulting in a single vector for the entire sentence. This vector is then directly used as a feature vector. Motivation: Embeddings helped to achieve state-of-the-art results in various NLP tasks BIBREF116 , BIBREF122 .
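A minimal sketch of this sentence representation, using toy 4-dimensional vectors in place of the 300-dimensional Google News embeddings; the toy vocabulary and the skipping of out-of-vocabulary tokens are assumptions of this sketch.

```python
import numpy as np

# Sketch: sentence vector as the sum of its word embedding vectors.
embeddings = {
    "school": np.array([0.1, 0.3, -0.2, 0.5]),
    "prayer": np.array([0.4, -0.1, 0.0, 0.2]),
}

def sentence_vector(tokens, embeddings, dim=4):
    vec = np.zeros(dim)
    for tok in tokens:
        if tok in embeddings:  # out-of-vocabulary tokens are skipped
            vec += embeddings[tok]
    return vec

print(sentence_vector(["prayer", "in", "school"], embeddings))
```

In the actual feature set, the resulting 300-dimensional vector is used directly as the feature vector for the sentence.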
Except the baseline lexical features, all feature types are extracted not only for the current sentence INLINEFORM0 , but also for INLINEFORM1 preceding and subsequent sentences, namely INLINEFORM2 , INLINEFORM3 , INLINEFORM4 INLINEFORM5 , INLINEFORM6 , where INLINEFORM7 was empirically set to 4. Each feature is then represented with a prefix to determine its relative position to the current sequence unit.
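The context-window expansion can be sketched as follows; the feature dictionaries and the exact prefix format are illustrative assumptions.

```python
# Sketch: replicate each sentence's features for a window of k preceding
# and following sentences, with a prefix encoding the relative position.

def windowed_features(sentence_features, index, k=4):
    """sentence_features: list of dicts (one per sentence in the document)."""
    combined = {}
    for offset in range(-k, k + 1):
        j = index + offset
        if 0 <= j < len(sentence_features):
            prefix = "cur" if offset == 0 else "%+d" % offset
            for name, value in sentence_features[j].items():
                combined[prefix + ":" + name] = value
    return combined

feats = [{"len": 5}, {"len": 8}, {"len": 3}]
print(windowed_features(feats, 1, k=1))
# → {'-1:len': 5, 'cur:len': 8, '+1:len': 3}
```

The prefixes keep the copied features distinct, so the learner can weigh, say, a discourse marker in the preceding sentence differently from the same marker in the current one.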
Let us first discuss the upper bounds of the system. Performance of the three human annotators is shown in the first column of Table TABREF139 (results are obtained from a cumulative confusion matrix). The overall Macro- INLINEFORM0 score is 0.602 (accuracy 0.754). If we look closer at the different argument components, we observe that humans are good at predicting claims, premises, backing and non-argumentative text (about 0.60-0.80 INLINEFORM1 ), but on rebuttal and refutation they achieve rather low scores. Without these two components, the overall human Macro- INLINEFORM2 would be 0.707. This trend follows the inter-annotator agreement scores, as discussed in section UID75 .
In our experiments, the feature sets were combined in a bottom-up manner, starting with the simple lexical features (FS0), adding structural and syntactic features (FS1), then topic and sentiment features (FS2), then features reflecting the discourse structure (FS3), and finally enriching them with a completely unsupervised latent vector space representation (FS4). In addition, we gradually removed the simple features (e.g., without lexical features, without syntactic features, etc.) to test the system with more “abstract” feature sets (feature ablation). The results are shown in Table TABREF139 .
The overall best performance (Macro- INLINEFORM0 0.251) was achieved using the rich feature sets (01234 and 234) and significantly outperformed the baseline as well as the other feature sets. Classification of non-argumentative text (the "O" class) yields about 0.7 INLINEFORM1 score even in the baseline setting. The boundaries of claims (Cla-B), premises (Pre-B), and backing (Bac-B) reach on average lower scores than their respective inside tags (Cla-I, Pre-I, Bac-I). This can be interpreted as follows: the system is able to classify that a certain sentence belongs to a certain argument component, but deciding whether it is the beginning of that component is harder. The very low numbers for rebuttal and refutation have two reasons. First, these two argument components caused many disagreements in the annotations, as discussed in section UID86 , and were hard to recognize for the humans too. Second, these four classes have very few instances in the corpus (about 3.4%, see Table TABREF114 ), so the classifier suffers from a lack of training data.
The results for the in-domain cross-validation scenario are shown in Table TABREF140 . Similarly to the all-data cross-validation scenario, the overall best results were achieved using the largest feature set (01234). For mainstreaming and redshirting, the best results were achieved using only feature set 4 (embeddings). These two domains also contain fewer documents than the other domains (refer to Table TABREF71 ). We suspect that embedding-based features convey important information when not enough in-domain data are available. This observation becomes apparent in the next experiment.
The cross-domain experiments yield rather poor results for most of the feature combinations (Table TABREF141 ). However, using only feature set 4 (embeddings), the system performance increases substantially, becoming even comparable to the numbers achieved in the in-domain scenario. These results indicate that embedding features generalize well across domains in our task of argument component identification. We leave investigating better-performing vector representations, such as paragraph vectors BIBREF123 , for future work.
Error analysis based on the probabilistic confusion matrix BIBREF124 shown in Table TABREF142 reveals further details. About half of the instances of each class are misclassified as non-argumentative (the "O" prediction).
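A probabilistic confusion matrix in this sense can be obtained by row-normalizing the raw counts, so that each row shows how the instances of one gold class distribute over the predicted classes; the labels and counts below are invented for illustration.

```python
import numpy as np

# Sketch: row-normalized ("probabilistic") confusion matrix.
labels = ["O", "Claim-B", "Claim-I"]
counts = np.array([
    [80, 5, 15],   # gold O
    [10, 6, 4],    # gold Claim-B
    [12, 2, 6],    # gold Claim-I
], dtype=float)

row_normalized = counts / counts.sum(axis=1, keepdims=True)
print(np.round(row_normalized, 2))
```

Each row sums to 1, so the off-diagonal entries can be read directly as the percentages of misclassified instances reported in the error analysis.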
Backing-B is often confused with Premise-B (12%) and Backing-I with Premise-I (23%). Similarly, Premise-I is misclassified as Backing-I in 9% of cases. This shows that distinguishing between backing and premises is not easy, because these two components are similar in that they both support the claim, as discussed in section UID86 . We can also see that the misclassification is consistent among *-B and *-I tags.
Rebuttal is often misclassified as Premise (28% for Rebuttal-I and 18% for Rebuttal-B; notice again the consistency in *-B and *-I tags). This is rather surprising, as one would expect rebuttal to be confused with claim, because its role is to provide an opposing view.
Refutation-B and Refutation-I are misclassified as Premise-I in 19% and 27% of cases, respectively. This finding confirms the discussion in section UID86 , because the role of refutation is highly context-dependent. From a pragmatic perspective, it is put forward to indirectly support the claim by attacking the rebuttal, and thus has a function similar to that of a premise.
We manually examined misclassified examples produced by the best-performing system to find out which phenomena pose the biggest challenges. Properly detecting the boundaries of argument components caused problems, as shown in Figure FIGREF146 (a). This is in line with the granularity annotation difficulties discussed in section UID86 . The next example in Figure FIGREF146 (b) shows that even when the boundaries of components were detected precisely, the distinction between premise and backing failed. The example also shows that in some cases labeling on the clause level is required (the left-hand side claim and premise), but the approximation in the system cannot cope with this level of detail (as explained in section UID111 ). Confusing non-argumentative text with argument components is sometimes plausible, as in the case of the last rhetorical question in Figure FIGREF146 (c). On the other hand, the last example in Figure FIGREF146 (d) shows that some claims using figurative language were hard to identify. The complete predictions along with the gold data are publicly available.
SVMhmm offers many hyper-parameters with suggested default values, of which three are of importance. Parameter INLINEFORM0 sets the order of dependencies of transitions in the HMM, parameter INLINEFORM1 sets the order of dependencies of emissions in the HMM, and parameter INLINEFORM2 trades off slack versus the magnitude of the weight vector. For all experiments, we set all the hyper-parameters to their default values ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ). Using the best-performing feature set from Table TABREF139 , we experimented with a grid search over different values ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ), but the results did not outperform the system trained with the default parameter values.
The INLINEFORM0 scores might seem very low at first glance. One obvious reason is the actual performance of the system, which leaves plenty of room for future improvement. But the main cause of the low INLINEFORM2 numbers is the evaluation measure: using 11 classes on the token level is very strict, as it penalizes a mismatch in argument component boundaries in the same way as a wrongly predicted argument component type. Therefore we also report two other evaluation metrics that help put our results into context.
Krippendorff's INLINEFORM0 — It was also used for evaluating inter-annotator agreement (see section UID75 ).
Boundary similarity BIBREF125 — Using this metric, the problem is treated solely as a segmentation task without recognizing the argument component types.
As shown in Table TABREF157 (the Macro- INLINEFORM0 scores are repeated from Table TABREF139 ), the best-performing system achieves a score of 0.30 using Krippendorff's INLINEFORM1 , which lies midway between the baseline and human performance (0.48) but is considered poor from the inter-annotator agreement point of view BIBREF54 . The boundary similarity metric is not directly suitable for evaluating argument component classification, but it isolates the sub-task of finding component boundaries. The best system achieved 0.32 on this measure. Vovk2013MT used this measure for annotating argument spans, and his annotators achieved a 0.36 boundary similarity score; human annotators in BIBREF125 reached 0.53.
The overall performance of the system is also affected by the accuracy of the individual NLP tools used for extracting features. One particular problem is that the preprocessing models we rely on (POS, syntax, semantic roles, coreference, discourse; see section UID115 ) were trained on newswire corpora, so a performance drop is to be expected when they are applied to user-generated content. This is, however, a well-known issue in NLP BIBREF126 , BIBREF127 , BIBREF128 .
To give an impression of the actual performance of the system on the data, we also provide the complete output of our best-performing system in one PDF document, side by side with the gold annotations in the logos dimension, in the accompanying software package. We believe this will help the community to see the strengths of our model as well as the possible limitations of our current approaches.
## Conclusions
Let us begin by summarizing the answers to the research questions stated in the introduction. First, as we showed in section UID55 , existing argumentation theories do offer models for capturing argumentation in user-generated content on the Web. We built upon Toulmin's model and proposed some extensions.
Second, as compared to the negative experiences with annotating using Walton's schemes (see sections UID52 and SECREF31 ), our modified Toulmin's model offers a trade-off between expressiveness and annotation reliability. However, we found that the capability of the model to capture argumentation depends on the register and topic, the length of the document, and inherently on the literary devices and structures used for expressing argumentation, as these properties influenced the agreement among annotators.
Third, there are aspects of online argumentation that lack established theoretical counterparts, such as rhetorical questions, figurative language, narratives, and fallacies in general. We tried to model some of them in the pathos dimension of argument (section UID103 ), but no satisfying agreement was reached. Furthermore, we dealt with a step that precedes argument analysis by filtering documents given their persuasiveness with respect to the controversy. Finally, we proposed a computational model based on machine learning for identifying argument components (section SECREF108 ). In this identification task, we experimented with a wide range of linguistically motivated features and found that (1) the largest feature set (including n-grams, structural features, syntactic features, topic distribution, sentiment distribution, semantic features, coreference features, discourse features, and features based on word embeddings) performs best in both in-domain and all-data cross validation, while (2) features based only on word embeddings yield the best results in cross-domain evaluation.
Since there is no one-size-fits-all argumentation theory to be applied to actual data on the Web, the argumentation model and an annotation scheme for argumentation mining is a function of the task requirements and the corpus properties. Its selection should be based on the data at hand and the desired application. Given the proposed use-case scenarios (section SECREF1 ) and the results of our annotation study (section SECREF51 ), we recommend a scheme based on Toulmin's model for short documents, such as comments or forum posts.
# MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification
## Abstract
We introduce a novel, simple convolution neural network (CNN) architecture - multi-group norm constraint CNN (MGNC-CNN) that capitalizes on multiple sets of word embeddings for sentence classification. MGNC-CNN extracts features from input embedding sets independently and then joins these at the penultimate layer in the network to form a final feature vector. We then adopt a group regularization strategy that differentially penalizes weights associated with the subcomponents generated from the respective embedding sets. This model is much simpler than comparable alternative architectures and requires substantially less training time. Furthermore, it is flexible in that it does not require input word embeddings to be of the same dimensionality. We show that MGNC-CNN consistently outperforms baseline models.
## Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .
An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.
Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.
Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.
Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.
## Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity, however their focus was not on classification.
More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).
## Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.
Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .
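A minimal numpy sketch of the convolution-plus-1-max-pooling pipeline described above; the sentence and filter shapes are toy values, not from the paper:

```python
import numpy as np

def conv_feature_map(S, w):
    """Slide filter w (h x d) over sentence matrix S (s x d):
    element-wise multiply each h-row region with w and sum to a scalar."""
    h = w.shape[0]
    return np.array([np.sum(S[i:i + h] * w) for i in range(S.shape[0] - h + 1)])

def cnn_features(S, filters):
    """1-max pooling: keep the largest value of each feature map,
    then stack the maxima into the final feature vector o."""
    return np.array([conv_feature_map(S, w).max() for w in filters])

rng = np.random.default_rng(0)
S = rng.normal(size=(7, 4))                           # zero-padded sentence, s=7, d=4
filters = [rng.normal(size=(h, 4)) for h in (3, 4, 5)]  # one filter per height
o = cnn_features(S, filters)                          # one scalar per filter
```

In the full model $o$ would be fed through dropout and a softmax layer for classification.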
MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.
MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings.
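The grouped max-norm constraint amounts to a per-group projection onto an l2 ball before concatenation; the feature vectors and $\lambda$ values below are toy examples:

```python
import numpy as np

def max_norm_rescale(v, lam):
    """If ||v||_2 exceeds the threshold lam, rescale v back onto the ball."""
    n = np.linalg.norm(v)
    return v * (lam / n) if n > lam else v

# toy per-embedding feature vectors, as produced by MG-CNN's separate groups
o1 = np.array([3.0, 4.0])            # ||o1|| = 5, exceeds its constraint
o2 = np.array([0.3, 0.4])            # ||o2|| = 0.5, within its constraint
lam1, lam2 = 1.0, 2.0                # separately tuned constraints per group
o = np.concatenate([max_norm_rescale(o1, lam1), max_norm_rescale(o2, lam2)])
```

Tuning a small $\lambda$ for a less discriminative embedding set shrinks its contribution to the final feature vector, which is the intuition stated above.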
## Datasets
Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.
Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.
TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.
Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
## Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of the Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) a syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed” nodes with prepositions and notated inverse relations separately, e.g., “dog barks” emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.
## Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN (C-CNN). For all multiple-embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combinations of two embedding sets, word2vec+GloVe and word2vec+syntactic, and one combination of three embedding sets, word2vec+GloVe+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4 and 5, and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches with AdaDelta as the stochastic gradient descent (SGD) update rule, and set the mini-batch size to 50. In this work, we treat word embeddings as part of the parameters of the model and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters.
## Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.
We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.
We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this approach requires first training the optimal projection matrix on labeled data, which again incurs a large overhead.
Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for each set of word embeddings. As the number of word embeddings increases, so does the running time. However, this tuning procedure is embarrassingly parallel.
## Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, and also a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Furthermore, our model is much more flexible than previous approaches, because it can accommodate variable-size word embeddings.
## Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin.
# Dynamic Memory Networks for Visual and Textual Question Answering
## Abstract
Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the \babi-10k text question-answering dataset without supporting fact supervision.
## Introduction
Neural network based methods have made tremendous progress in image and text classification BIBREF0 , BIBREF1 . However, only recently has progress been made on more complex tasks that require logical reasoning. This success is based in part on the addition of memory and attention components to complex neural networks. For instance, memory networks BIBREF2 are able to reason over several facts written in natural language or (subject, relation, object) triplets. Attention mechanisms have been successful components in both machine translation BIBREF3 , BIBREF4 and image captioning models BIBREF5 .
The dynamic memory network BIBREF6 (DMN) is one example of a neural network model that has both a memory component and an attention mechanism. The DMN yields state of the art results on question answering with supporting facts marked during training, sentiment analysis, and part-of-speech tagging.
We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. The model learns to select the important facts from a larger set.
In addition, we introduce a new input module to represent images. This module is compatible with the rest of the DMN architecture and its output is fed into the memory module. We show that the changes in the memory module that improved textual question answering also improve visual question answering. Both tasks are illustrated in Fig. 1 .
## Dynamic Memory Networks
We begin by outlining the DMN for question answering and the modules as presented in BIBREF6 .
The DMN is a general architecture for question answering (QA). It is composed of modules that allow different aspects such as input representations or memory components to be analyzed and improved independently. The modules, depicted in Fig. 1 , are as follows:
Input Module: This module processes the input data about which a question is being asked into a set of vectors termed facts, represented as $F=[f_1,\hdots ,f_N]$ , where $N$ is the total number of facts. These vectors are ordered, resulting in additional information that can be used by later components. For text QA in BIBREF6 , the module consists of a GRU over the input words.
As the GRU is used in many components of the DMN, it is useful to provide the full definition. For each time step $i$ with input $x_i$ and previous hidden state $h_{i-1}$ , we compute the updated hidden state $h_i = GRU(x_i,h_{i-1})$ by
$$u_i &=& \sigma \left(W^{(u)}x_{i} + U^{(u)} h_{i-1} + b^{(u)} \right)\\
r_i &=& \sigma \left(W^{(r)}x_{i} + U^{(r)} h_{i-1} + b^{(r)} \right)\\
\tilde{h}_i &=& \tanh \left(Wx_{i} + r_i \circ U h_{i-1} + b^{(h)}\right)\\
h_i &=& u_i\circ \tilde{h}_i + (1-u_i) \circ h_{i-1}$$ (Eq. 2)
where $\sigma $ is the sigmoid activation function, $\circ $ is an element-wise product, $W^{(u)}, W^{(r)}, W \in \mathbb {R}^{n_H \times n_I}$ , $U^{(u)}, U^{(r)}, U \in \mathbb {R}^{n_H \times n_H}$ , $n_H$ is the hidden size, and $n_I$ is the input size.
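The GRU update above can be sketched as a single step in numpy; the parameter dictionary and toy sizes below are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One update h_i = GRU(x_i, h_{i-1}) following Eq. 2."""
    u = sigmoid(p['Wu'] @ x + p['Uu'] @ h_prev + p['bu'])   # update gate u_i
    r = sigmoid(p['Wr'] @ x + p['Ur'] @ h_prev + p['br'])   # reset gate r_i
    h_tilde = np.tanh(p['W'] @ x + r * (p['U'] @ h_prev) + p['bh'])
    return u * h_tilde + (1 - u) * h_prev

nH, nI = 3, 2                     # toy hidden and input sizes
rng = np.random.default_rng(1)
p = {'Wu': rng.normal(size=(nH, nI)), 'Uu': rng.normal(size=(nH, nH)), 'bu': np.zeros(nH),
     'Wr': rng.normal(size=(nH, nI)), 'Ur': rng.normal(size=(nH, nH)), 'br': np.zeros(nH),
     'W':  rng.normal(size=(nH, nI)), 'U':  rng.normal(size=(nH, nH)), 'bh': np.zeros(nH)}
h = gru_step(np.ones(nI), np.zeros(nH), p)
```

Since the new state is a convex combination of $\tilde{h}_i$ and $h_{i-1}$, starting from a zero state keeps every component inside $(-1, 1)$.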
Question Module: This module computes a vector representation $q$ of the question, where $q \in \mathbb {R}^{n_H}$ is the final hidden state of a GRU over the words in the question.
Episodic Memory Module: Episode memory aims to retrieve the information required to answer the question $q$ from the input facts. To improve our understanding of both the question and input, especially if questions require transitive reasoning, the episode memory module may pass over the input multiple times, updating episode memory after each pass. We refer to the episode memory on the $t^{th}$ pass over the inputs as $m^t$ , where $m^t \in \mathbb {R}^{n_H}$ , the initial memory vector is set to the question vector: $m^0 = q$ .
The episodic memory module consists of two separate components: the attention mechanism and the memory update mechanism. The attention mechanism is responsible for producing a contextual vector $c^t$ , where $c^t \in \mathbb {R}^{n_H}$ is a summary of relevant input for pass $t$ , with relevance inferred by the question $q$ and previous episode memory $m^{t-1}$ . The memory update mechanism is responsible for generating the episode memory $m^t$ based upon the contextual vector $c^t$ and previous episode memory $m^{t-1}$ . By the final pass $T$ , the episodic memory $m^T$ should contain all the information required to answer the question $q$ .
Answer Module: The answer module receives both $q$ and $m^T$ to generate the model's predicted answer. For simple answers, such as a single word, a linear layer with softmax activation may be used. For tasks requiring a sequence output, an RNN may be used to decode $a = [q ; m^T]$ , the concatenation of vectors $q$ and $m^T$ , to an ordered set of tokens. The cross entropy error on the answers is used for training and backpropagated through the entire network.
## Improved Dynamic Memory Networks: DMN+
We propose and compare several modeling choices for the crucial components: input representation, attention mechanism, and memory update. The final DMN+ model obtains the highest accuracy on the bAbI-10k dataset without supporting facts and on the VQA dataset BIBREF8 . Several design choices are motivated by intuition and by accuracy improvements on that dataset.
## Input Module for Text QA
In the DMN specified in BIBREF6 , a single GRU is used to process all the words in the story, extracting sentence representations by storing the hidden states produced at end-of-sentence markers. The GRU also provides a temporal component by allowing a sentence to know the content of the sentences that came before it. Whilst this input module worked well for bAbI-1k with supporting facts, as reported in BIBREF6 , it did not perform well on bAbI-10k without supporting facts (Sec. "Model Analysis" ).
We speculate that there are two main reasons for this performance disparity, both exacerbated by the removal of supporting facts. First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow these distant sentences to interact through the word-level GRU.
Input Fusion Layer
For the DMN+, we propose replacing this single GRU with two different components. The first component is a sentence reader, responsible only for encoding the words into a sentence embedding. The second component is the input fusion layer, allowing for interactions between sentences. This resembles the hierarchical neural auto-encoder architecture of BIBREF9 and allows content interaction between sentences. We adopt the bi-directional GRU for this input fusion layer because it allows information from both past and future sentences to be used. As gradients do not need to propagate through the words between sentences, the fusion layer also allows for distant supporting sentences to have a more direct interaction.
Fig. 2 shows an illustration of an input module, where a positional encoder is used for the sentence reader and a bi-directional GRU is adopted for the input fusion layer. Each sentence encoding $f_i$ is the output of an encoding scheme taking the word tokens $[w^i_1, \hdots , w^i_{M_i}]$ , where $M_i$ is the length of the sentence.
The sentence reader could be based on any variety of encoding schemes. We selected positional encoding described in BIBREF10 to allow for a comparison to their work. GRUs and LSTMs were also considered but required more computational resources and were prone to overfitting if auxiliary tasks, such as reconstructing the original sentence, were not used.
For the positional encoding scheme, the sentence representation is produced by $f_i = \sum _{j=1}^{M} l_j \circ w^i_j$ , where $\circ $ is element-wise multiplication and $l_j$ is a column vector with structure $l_{jd} = (1 - j / M) - (d / D) (1 - 2j / M)$ , where $d$ is the embedding index and $D$ is the dimension of the embedding.
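The positional encoding above can be computed in a few lines; the toy word matrix below is illustrative only:

```python
import numpy as np

def positional_encoding(words):
    """Sentence representation f_i = sum_j l_j ∘ w_j, with
    l_jd = (1 - j/M) - (d/D)(1 - 2j/M), using 1-based j and d as in the text."""
    M, D = words.shape                      # M words, embedding dimension D
    j = np.arange(1, M + 1)[:, None]        # position index, column vector
    d = np.arange(1, D + 1)[None, :]        # embedding index, row vector
    l = (1 - j / M) - (d / D) * (1 - 2 * j / M)
    return (l * words).sum(axis=0)

# a toy two-word "sentence" with 2-dimensional all-ones embeddings
f = positional_encoding(np.ones((2, 2)))    # → [1.0, 1.5]
```

Unlike a plain sum of word vectors, the weights $l_{jd}$ make the representation sensitive to word order.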
The input fusion layer takes these input facts and enables an information exchange between them by applying a bi-directional GRU.
$$\overrightarrow{f_i} = GRU_{fwd}(f_i, \overrightarrow{f_{i-1}}) \\
\overleftarrow{f_{i}} = GRU_{bwd}(f_{i}, \overleftarrow{f_{i+1}}) \\
\overleftrightarrow{f_i} = \overleftarrow{f_i} + \overrightarrow{f_i}$$ (Eq. 5)
where $f_i$ is the input fact at timestep $i$ , $ \overrightarrow{f_i}$ is the hidden state of the forward GRU at timestep $i$ , and $\overleftarrow{f_i}$ is the hidden state of the backward GRU at timestep $i$ . This allows contextual information from both future and past facts to impact $\overleftrightarrow{f_i}$ .
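The fusion step above can be sketched as follows; note the recurrence here is a simple decaying stand-in for a real GRU pass, used only to show the forward/backward sweep and the element-wise sum:

```python
import numpy as np

def toy_pass(xs, reverse=False):
    """Stand-in for one GRU pass (toy decaying recurrence, illustration only)."""
    h, out = np.zeros_like(xs[0]), []
    for x in (xs[::-1] if reverse else xs):
        h = np.tanh(x + 0.5 * h)
        out.append(h)
    return out[::-1] if reverse else out    # keep outputs in sentence order

facts = [np.array([1.0, -1.0]), np.array([0.5, 0.5]), np.array([-1.0, 1.0])]
fwd = toy_pass(facts)                       # forward hidden states
bwd = toy_pass(facts, reverse=True)         # backward hidden states
fused = [f + b for f, b in zip(fwd, bwd)]   # element-wise sum, as in Eq. 5
```

Each fused fact thus carries context from both earlier and later sentences.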
## Input Module for VQA
To apply the DMN to visual question answering, we introduce a new input module for images. The module splits an image into small local regions and considers each region equivalent to a sentence in the input module for text. The input module for VQA is composed of three parts, illustrated in Fig. 3 : local region feature extraction, visual feature embedding, and the input fusion layer introduced in Sec. "Input Module for Text QA" .
Local region feature extraction: To extract features from the image, we use a convolutional neural network BIBREF0 based upon the VGG-19 model BIBREF11 . We first rescale the input image to $448 \times 448$ and take the output from the last pooling layer which has dimensionality $d = 512 \times 14 \times 14$ . The pooling layer divides the image into a grid of $14 \times 14$ , resulting in 196 local regional vectors of $d = 512$ .
Visual feature embedding: As the VQA task involves both image features and text features, we add a linear layer with tanh activation to project the local regional vectors to the textual feature space used by the question vector $q$ .
Input fusion layer: The local regional vectors extracted from above do not yet have global information available to them. Without global information, their representational power is quite limited, with simple issues like object scaling or locational variance causing accuracy problems.
To solve this, we add an input fusion layer similar to that of the textual input module described in Sec. "Input Module for Text QA" . First, to produce the input facts $F$ , we traverse the image in a snake like fashion, as seen in Figure 3 . We then apply a bi-directional GRU over these input facts $F$ to produce the globally aware input facts $\overleftrightarrow{F}$ . The bi-directional GRU allows for information propagation from neighboring image patches, capturing spatial information.
## The Episodic Memory Module
The episodic memory module, as depicted in Fig. 4 , retrieves information from the input facts $\overleftrightarrow{F} = [\overleftrightarrow{f_1}, \hdots , \overleftrightarrow{f_N}]$ provided to it by focusing attention on a subset of these facts. We implement this attention by associating a single scalar value, the attention gate $g^t_i$ , with each fact $\overleftrightarrow{f}_i$ during pass $t$ . This is computed by allowing interactions between the fact and both the question representation and the episode memory state.
$$z^t_i &=& [\overleftrightarrow{f_i} \circ q; \overleftrightarrow{f_i} \circ m^{t-1}; \vert \overleftrightarrow{f_i} - q \vert ; \vert \overleftrightarrow{f_i} - m^{t-1} \vert ] \\
Z^t_i &=& W^{(2)} \tanh \left(W^{(1)}z^t_i + b^{(1)} \right)+ b^{(2)} \\
g^t_i &=& \frac{\exp (Z^t_i)}{\sum _{k=1}^{M_i} \exp (Z^t_k)} $$ (Eq. 10)
where $\overleftrightarrow{f_i}$ is the $i^{th}$ fact, $m^{t-1}$ is the previous episode memory, $q$ is the original question, $\circ $ is the element-wise product, $|\cdot |$ is the element-wise absolute value, and $;$ represents concatenation of the vectors.
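The gate computation of Eq. 10 can be sketched directly; the weight shapes below are illustrative, and the softmax is taken over all facts:

```python
import numpy as np

def attention_gates(F, q, m, W1, b1, W2, b2):
    """Compute z^t_i, Z^t_i and the softmax-normalized gates g^t_i (Eq. 10)."""
    Z = np.array([
        (W2 @ np.tanh(W1 @ np.concatenate([f * q, f * m,
                                           np.abs(f - q), np.abs(f - m)]) + b1)
         + b2).item()
        for f in F])
    e = np.exp(Z - Z.max())                 # numerically stable softmax
    return e / e.sum()

n, k = 4, 5                                 # toy hidden and attention sizes
rng = np.random.default_rng(2)
F = [rng.normal(size=n) for _ in range(3)]  # three facts
q, m = rng.normal(size=n), rng.normal(size=n)
W1, b1 = rng.normal(size=(k, 4 * n)), np.zeros(k)
W2, b2 = rng.normal(size=(1, k)), np.zeros(1)
g = attention_gates(F, q, m, W1, b1, W2, b2)
```

The element-wise products capture similarity between a fact and the question/memory, while the absolute differences capture their disagreement.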
The DMN implemented in BIBREF6 involved a more complex set of interactions within $z$ , containing the additional terms $[f; m^{t-1}; q; f^T W^{(b)} q; f^T W^{(b)} m^{t-1}]$ . After an initial analysis, we found these additional terms were not required.
## Attention Mechanism
Once we have the attention gate $g^t_i$ we use an attention mechanism to extract a contextual vector $c^t$ based upon the current focus. We focus on two types of attention: soft attention and a new attention based GRU. The latter improves performance and is hence the final modeling choice for the DMN+.
Soft attention: Soft attention produces a contextual vector $c^t$ through a weighted summation of the sorted list of vectors $\overleftrightarrow{F}$ and corresponding attention gates $g_i^t$ : $c^t = \sum _{i=1}^N g^t_i \overleftrightarrow{f}_i$ This method has two advantages. First, it is easy to compute. Second, if the softmax activation is spiky it can approximate a hard attention function by selecting only a single fact for the contextual vector whilst still being differentiable. However the main disadvantage to soft attention is that the summation process loses both positional and ordering information. Whilst multiple attention passes can retrieve some of this information, this is inefficient.
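The weighted summation itself is a one-liner; the sketch below (hypothetical names) also illustrates the point that a one-hot, spiky gate reduces to hard selection of a single fact:

```python
import numpy as np

def soft_attention(F, g):
    """Contextual vector c^t as the gate-weighted sum of facts.

    F: (N, d) facts; g: (N,) attention gates summing to one.
    """
    return (g[:, None] * F).sum(axis=0)
```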
Attention based GRU: For more complex queries, we would like the attention mechanism to be sensitive to both the position and ordering of the input facts $\overleftrightarrow{F}$ . An RNN would be advantageous in this situation, except that it cannot make use of the attention gate from Eq. 10.
We propose a modification to the GRU architecture by embedding information from the attention mechanism. The update gate $u_i$ in Equation 2 decides how much of each dimension of the hidden state to retain and how much should be updated with the transformed input $x_i$ from the current timestep. As $u_i$ is computed using only the current input and the hidden state from previous timesteps, it lacks any knowledge from the question or previous episode memory.
By replacing the update gate $u_i$ in the GRU (Equation 2 ) with the output of the attention gate $g^t_i$ (Eq. 10), the GRU can now use the attention gate for updating its internal state. This change is depicted in Fig 5 .
$$h_i &=& g^t_i \circ \tilde{h}_i + (1-g^t_i) \circ h_{i-1}$$ (Eq. 12)
An important consideration is that $g^t_i$ is a scalar, generated using a softmax activation, as opposed to the vector $u_i \in \mathbb {R}^{n_H}$ , generated using a sigmoid activation. This allows us to easily visualize how the attention gates activate over the input, later shown for visual QA in Fig. 6 . Though not explored, replacing the softmax activation in Eq. 10 with a sigmoid activation would result in $g^t_i \in \mathbb {R}^{n_H}$ . To produce the contextual vector $c^t$ used for updating the episodic memory state $m^t$ , we use the final hidden state of the attention based GRU.
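One attention-GRU step could be sketched as follows, assuming the standard GRU reset-gate and candidate-state equations with the scalar gate $g$ substituted for the update gate (all names and weight shapes here are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attn_gru_step(h_prev, x, g, Wr, Ur, br, Wh, Uh, bh):
    """One attention-based GRU step (sketch of Eq. 12).

    The scalar attention gate g replaces the usual vector update
    gate u_i, so a fact with g ~ 0 leaves the hidden state untouched.
    h_prev, x: (d,) previous hidden state and current input fact.
    """
    r = sigmoid(Wr @ x + Ur @ h_prev + br)                # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)    # candidate state
    return g * h_tilde + (1.0 - g) * h_prev               # Eq. 12
```

Running this step over the fact sequence and taking the final hidden state yields the contextual vector $c^t$.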
## Episode Memory Updates
After each pass through the attention mechanism, we wish to update the episode memory $m^{t-1}$ with the newly constructed contextual vector $c^t$ , producing $m^t$ . In the DMN, a GRU with the initial hidden state set to the question vector $q$ is used for this purpose. The episodic memory for pass $t$ is computed by
$$m^t = GRU(c^t, m^{t-1})$$ (Eq. 13)
The work of BIBREF10 suggests that using different weights for each pass through the episodic memory may be advantageous. When the model contains only one set of weights for all episodic passes over the input, it is referred to as a tied model, as in the “Mem Weights” row in Table 1 .
Following the memory update component used in BIBREF10 and BIBREF12 we experiment with using a ReLU layer for the memory update, calculating the new episode memory state by
$$m^t = ReLU\left(W^t [m^{t-1} ; c^t ; q] + b\right)$$ (Eq. 14)
where $;$ is the concatenation operator, $W^t \in \mathbb {R}^{n_H \times 3 n_H}$ , $b \in \mathbb {R}^{n_H}$ , and $n_H$ is the hidden size. The untying of weights and using this ReLU formulation for the memory update improves accuracy by another 0.5% as shown in Table 1 in the last column. The final output of the memory network is passed to the answer module as in the original DMN.
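A sketch of this untied ReLU update (Eq. 14), with hypothetical argument names; each pass $t$ would use its own `W` and `b`:

```python
import numpy as np

def memory_update(m_prev, c, q, W, b):
    """Untied ReLU memory update (sketch of Eq. 14).

    m_prev, c, q: (n_H,) previous memory, contextual vector, question.
    W: (n_H, 3*n_H) per-pass weight matrix, b: (n_H,) bias.
    """
    return np.maximum(0.0, W @ np.concatenate([m_prev, c, q]) + b)
```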
## Related Work
The DMN is related to two major lines of recent work: memory and attention mechanisms. We work on both visual and textual question answering which have, until now, been developed in separate communities.
Neural Memory Models The earliest recent work with a memory component that is applied to language processing is that of memory networks BIBREF2 which adds a memory component for question answering over simple facts. They are similar to DMNs in that they also have input, scoring, attention and response mechanisms. However, unlike the DMN their input module computes sentence representations independently and hence cannot easily be used for other tasks such as sequence labeling. Like the original DMN, this memory network requires that supporting facts are labeled during QA training. End-to-end memory networks BIBREF10 do not have this limitation. In contrast to previous memory models with a variety of different functions for memory attention retrieval and representations, DMNs BIBREF6 have shown that neural sequence models can be used for input representation, attention and response mechanisms. Sequence models naturally capture position and temporality of both the inputs and transitive reasoning steps.
Neural Attention Mechanisms Attention mechanisms allow neural network models to use a question to selectively pay attention to specific inputs. They have been shown to benefit image classification BIBREF13 , caption generation for images BIBREF5 , and machine translation BIBREF14 , BIBREF3 , BIBREF4 , among other tasks mentioned below. Other recent neural architectures with memory or attention that have been proposed include neural Turing machines BIBREF15 , neural GPUs BIBREF16 and stack-augmented RNNs BIBREF17 .
Question Answering in NLP Question answering involving natural language can be solved in a variety of ways, not all of which we can do justice to here. If the potential input is a large text corpus, QA becomes a combination of information retrieval and extraction BIBREF18 . Neural approaches can include reasoning over knowledge bases BIBREF19 , BIBREF20 or directly over sentences for trivia competitions BIBREF21 .
Visual Question Answering (VQA) In comparison to QA in NLP, VQA is still a relatively young task that is feasible only now that objects can be identified with high accuracy. The first large scale database with unconstrained questions about images was introduced by BIBREF8 . While VQA datasets existed before, they did not include open-ended, free-form questions about general images BIBREF22 . Others were too small to be viable for a deep learning approach BIBREF23 . The only VQA model which also has an attention component is the stacked attention network BIBREF24 . Their work also uses CNN based features. However, unlike our input fusion layer, they use a single layer neural network to map the features of each patch to the dimensionality of the question vector. Hence, the model cannot easily incorporate adjacency of local information in its hidden state. A model that also uses neural modules, albeit logically inspired ones, is that by BIBREF25 who evaluate on knowledge-base reasoning and visual question answering. We compare directly to their method on the latter task and dataset.
Related to visual question answering is the task of describing images with sentences BIBREF26 . BIBREF27 used deep learning methods to map images and sentences into the same space in order to describe images with sentences and to find images that best visualize a sentence. This was the first work to map both modalities into a joint space with deep learning methods, but it could only select an existing sentence to describe an image. Shortly thereafter, recurrent neural networks were used to generate often novel sentences based on images BIBREF28 , BIBREF29 , BIBREF30 , BIBREF5 .
## Datasets
To analyze our proposed model changes and compare our performance with other architectures, we use three datasets.
## bAbI-10k
For evaluating the DMN on textual question answering, we use bAbI-10k English BIBREF31 , a synthetic dataset which features 20 different tasks. Each example is composed of a set of facts, a question, the answer, and the supporting facts that lead to the answer. The dataset comes in two sizes, referring to the number of training examples each task has: bAbI-1k and bAbI-10k. The experiments in BIBREF10 found that their lowest error rates on the smaller bAbI-1k dataset were on average three times higher than on bAbI-10k.
## DAQUAR-ALL visual dataset
The DAtaset for QUestion Answering on Real-world images (DAQUAR) BIBREF23 consists of 795 training images and 654 test images. Based upon these images, 6,795 training questions and 5,673 test questions were generated. Following the previously defined experimental method, we exclude multiple word answers BIBREF32 , BIBREF33 . The resulting dataset covers 90% of the original data. The evaluation method uses classification accuracy over the single words. We use this as a development dataset for model analysis (Sec. "Model Analysis" ).
## Visual Question Answering
The Visual Question Answering (VQA) dataset was constructed using the Microsoft COCO dataset BIBREF34 , which contains 123,287 training/validation images and 81,434 test images. Each image has several related questions, with each question answered by multiple people. The dataset contains 248,349 training questions, 121,512 validation questions, and 244,302 test questions. The testing data was split into test-development, test-standard and test-challenge in BIBREF8 .
Evaluation on both test-standard and test-challenge are implemented via a submission system. test-standard may only be evaluated 5 times and test-challenge is only evaluated at the end of the competition. To the best of our knowledge, VQA is the largest and most complex image dataset for the visual question answering task.
## Model Analysis
To understand the impact of the proposed module changes, we analyze the performance of a variety of DMN models on textual and visual question answering datasets.
The original DMN (ODMN) is the architecture presented in BIBREF6 without any modifications. DMN2 only replaces the input module with the input fusion layer (Sec. "Input Module for Text QA" ). DMN3, based upon DMN2, replaces the soft attention mechanism with the attention based GRU proposed in Sec. "The Episodic Memory Module" . Finally, DMN+, based upon DMN3, is an untied model, using a unique set of weights for each pass and a linear layer with a ReLU activation to compute the memory update. We report the performance of the model variations in Table 1 .
A large improvement to accuracy on both the bAbI-10k textual and DAQUAR visual datasets results from updating the input module, seen when comparing ODMN to DMN2. On both datasets, the input fusion layer improves interaction between distant facts. In the visual dataset, this improvement is purely from providing contextual information from neighboring image patches, allowing it to handle objects of varying scale or questions with a locality aspect. For the textual dataset, the improved interaction between sentences likely helps the path finding required for logical reasoning when multiple transitive steps are required.
The addition of the attention GRU in DMN3 helps answer questions where complex positional or ordering information may be required. This change impacts the textual dataset the most as few questions in the visual dataset are likely to require this form of logical reasoning. Finally, the untied model in the DMN+ overfits on some tasks compared to DMN3, but on average the error rate decreases.
From these experimental results, we find that the combination of all the proposed model changes, culminating in DMN+, achieves the highest performance across both the visual and textual datasets.
## Comparison to state of the art using bAbI-10k
We trained our models using the Adam optimizer BIBREF35 with a learning rate of 0.001 and batch size of 128. Training runs for up to 256 epochs with early stopping if the validation loss had not improved within the last 20 epochs. The model from the epoch with the lowest validation loss was then selected. Xavier initialization was used for all weights except for the word embeddings, which used random uniform initialization with range $[-\sqrt{3}, \sqrt{3}]$ . Both the embedding and hidden dimensions were of size $d = 80$ . We used $\ell _2$ regularization on all weights except bias and used dropout on the initial sentence encodings and the answer module, keeping the input with probability $p=0.9$ . The last 10% of the training data on each task was chosen as the validation set. For all tasks, three passes were used for the episodic memory module, allowing direct comparison to other state of the art methods. Finally, we limited the input to the last 70 sentences for all tasks except QA3 for which we limited input to the last 130 sentences, similar to BIBREF10 .
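The early-stopping schedule described above can be sketched generically as follows; the `train_epoch` and `validate` callables are hypothetical stand-ins for the actual model code, which is not shown in the paper:

```python
def train_with_early_stopping(train_epoch, validate,
                              max_epochs=256, patience=20):
    """Early-stopping schedule used in the bAbI-10k experiments (sketch).

    train_epoch() runs one epoch of optimization; validate() returns
    the validation loss. Stops once the validation loss has not
    improved for `patience` epochs and returns (best_epoch, best_loss),
    identifying the model checkpoint that would be selected.
    """
    best_loss, best_epoch = float("inf"), -1
    for epoch in range(max_epochs):
        train_epoch()
        loss = validate()
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best_loss
```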
On some tasks, the accuracy was not stable across multiple runs. This was particularly problematic on QA3, QA17, and QA18. To solve this, we repeated training 10 times using random initializations and evaluated the model that achieved the lowest validation set loss.
## Text QA Results
We compare our best performing approach, DMN+, to two state of the art question answering architectures: the end to end memory network (E2E) BIBREF10 and the neural reasoner framework (NR) BIBREF12 . Neither approach uses supporting facts for training.
The end-to-end memory network is a form of memory network BIBREF2 tested on both textual question answering and language modeling. The model features both explicit memory and a recurrent attention mechanism. We select the model from the paper that achieves the lowest mean error over the bAbI-10k dataset. This model utilizes positional encoding for input, RNN-style tied weights for the episode module, and a ReLU non-linearity for the memory update component.
The neural reasoner framework is an end-to-end trainable model which features a deep architecture for logical reasoning and an interaction-pooling mechanism for allowing interaction over multiple facts. While the neural reasoner framework was only tested on QA17 and QA19, these were two of the most challenging question types at the time.
In Table 2 we compare the accuracy of these question answering architectures, both as mean error and error on individual tasks. The DMN+ model reduces mean error by 1.4% compared to the end-to-end memory network, achieving a new state of the art for the bAbI-10k dataset.
One notable deficiency in our model is that of QA16: Basic Induction. In BIBREF10 , an untied model using only summation for memory updates was able to achieve a near perfect error rate of $0.4$ . When the memory update was replaced with a linear layer with ReLU activation, the end-to-end memory network's overall mean error decreased but the error for QA16 rose sharply. Our model experiences the same difficulties, suggesting that the more complex memory update component may prevent convergence on certain simpler tasks.
The neural reasoner model outperforms both the DMN and end-to-end memory network on QA17: Positional Reasoning. This is likely as the positional reasoning task only involves minimal supervision - two sentences for input, yes/no answers for supervision, and only 5,812 unique examples after removing duplicates from the initial 10,000 training examples. BIBREF12 add an auxiliary task of reconstructing both the original sentences and question from their representations. This auxiliary task likely improves performance by preventing overfitting.
## Comparison to state of the art using VQA
For the VQA dataset, each question is answered by multiple people, and the answers may not be the same; the generated answers are therefore evaluated using human consensus. For each predicted answer $a_i$ for the $i_{th}$ question with target answer set $T^{i}$ , the accuracy of VQA is: $Acc_{VQA} = \frac{1}{N}\sum _{i=1}^N \min (\frac{\sum _{t\in T^i}{1}_{(a_i==t)}}{3},1)$ where ${1}_{(\cdot )}$ is the indicator function. Simply put, the answer $a_i$ is only 100 $\%$ accurate if at least 3 people provide that exact answer.
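The consensus metric can be computed directly from the formula; this is a sketch with a hypothetical function name, assuming answers are pre-normalized strings:

```python
def vqa_accuracy(predictions, answer_sets):
    """VQA consensus accuracy: min(#matching human answers / 3, 1),
    averaged over all questions (sketch of the Acc_VQA formula).

    predictions: list of predicted answer strings.
    answer_sets: list of lists of human-provided answer strings.
    """
    total = 0.0
    for a, answers in zip(predictions, answer_sets):
        matches = sum(1 for t in answers if t == a)
        total += min(matches / 3.0, 1.0)
    return total / len(predictions)
```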
Training Details We use the Adam optimizer BIBREF35 with a learning rate of 0.003 and batch size of 100. Training runs for up to 256 epochs with early stopping if the validation loss has not improved in the last 10 epochs. For weight initialization, we sampled from a random uniform distribution with range $[-0.08, 0.08]$ . Both the word embedding and hidden layers were vectors of size $d=512$ . We apply dropout on the initial image output from the VGG convolutional neural network BIBREF11 as well as the input to the answer module, keeping input with probability $p=0.5$ .
## Results and Analysis
The VQA dataset is composed of three question domains: Yes/No, Number, and Other. This enables us to analyze the performance of the models on various tasks that require different reasoning abilities.
The comparison models are separated into two broad classes: those that utilize a fully-connected image feature for classification and those that perform reasoning over multiple small image patches. Only the SAN and DMN approaches use small image patches, while the rest use the fully-connected whole-image feature approach.
Here, we show the quantitative and qualitative results in Table 3 and Fig. 6 , respectively. The images in Fig. 6 illustrate how the attention gate $g^t_i$ selectively activates over relevant portions of the image according to the query. In Table 3 , our method outperforms baseline and other state-of-the-art methods across all question domains (All) in both test-dev and test-std, and especially for Other questions, achieves a wide margin compared to the other architectures, which is likely as the small image patches allow for finely detailed reasoning over the image.
However, the granularity offered by small image patches does not always offer an advantage. The Number questions may not be solvable by either the SAN or DMN architecture, potentially because counting objects is not a simple task when an object crosses image patch boundaries.
## Conclusion
We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state of the art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains.
# A Bayesian Model of Multilingual Unsupervised Semantic Role Induction
## Abstract
We propose a Bayesian model of unsupervised semantic role induction in multiple languages, and use it to explore the usefulness of parallel corpora for this task. Our joint Bayesian model consists of individual models for each language plus additional latent variables that capture alignments between roles across languages. Because it is a generative Bayesian model, we can do evaluations in a variety of scenarios just by varying the inference procedure, without changing the model, thereby comparing the scenarios directly. We compare using only monolingual data, using a parallel corpus, using a parallel corpus with annotations in the other language, and using small amounts of annotation in the target language. We find that the biggest impact of adding a parallel corpus to training is actually the increase in mono-lingual data, with the alignments to another language resulting in small improvements, even with labeled data for the other language.
## Introduction
Semantic Role Labeling (SRL) has emerged as an important task in Natural Language Processing (NLP) due to its applicability in information extraction, question answering, and other NLP tasks. SRL is the problem of finding predicate-argument structure in a sentence, as illustrated below:
INLINEFORM0
Here, the predicate WRITE has two arguments: `Mike' as A0 or the writer, and `a book' as A1 or the thing written. The labels A0 and A1 correspond to the PropBank annotations BIBREF0 .
As the need for SRL arises in different domains and languages, the existing manually annotated corpora become insufficient to build supervised systems. This has motivated work on unsupervised SRL BIBREF1 , BIBREF2 , BIBREF3 . Previous work has indicated that unsupervised systems could benefit from the word alignment information in parallel text in two or more languages BIBREF4 , BIBREF5 , BIBREF6 . For example, consider the German translation of sentence INLINEFORM0 :
INLINEFORM0
If sentences INLINEFORM0 and INLINEFORM1 have the word alignments: Mike-Mike, written-geschrieben, and book-Buch, the system might be able to predict A1 for Buch, even if there is insufficient information in the monolingual German data to learn this assignment. Thus, in languages where the resources are sparse or not good enough, or the distributions are not informative, SRL systems could be made more accurate by using parallel data with resource rich or more amenable languages.
In this paper, we propose a joint Bayesian model for unsupervised semantic role induction in multiple languages. The model consists of individual Bayesian models for each language BIBREF3 , and crosslingual latent variables to incorporate soft role agreement between aligned constituents. This latent variable approach has been demonstrated to increase the performance in a multilingual unsupervised part-of-speech tagging model based on HMMs BIBREF4 . We investigate the application of this approach to unsupervised SRL, presenting the performance improvements obtained in different settings involving labeled and unlabeled data, and analyzing the annotation effort required to obtain similar gains using labeled data.
We begin by briefly describing the unsupervised SRL pipeline and the monolingual semantic role induction model we use, and then describe our multilingual model.
## Unsupervised SRL Pipeline
As established in previous work BIBREF7 , BIBREF8 , we use a standard unsupervised SRL setup, consisting of the following steps:
The task we model, unsupervised semantic role induction, is step 4 of this pipeline.
## Monolingual Model
We use the Bayesian model of garg2012unsupervised as our base monolingual model. The semantic roles are predicate-specific. To model the role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows:
For example, the complete role sequence in a frame could be: INLINEFORM0 INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 INLINEFORM9 . The ordering is defined as the sequence of PRs, INLINEFORM10 INLINEFORM11 , INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 INLINEFORM16 . Each pair of consecutive PRs in an ordering is called an interval. Thus, INLINEFORM17 is an interval that contains two SRs, INLINEFORM18 and INLINEFORM19 . An interval could also be empty, for instance INLINEFORM20 contains no SRs. When we evaluate, these roles get mapped to gold roles. For instance, the PR INLINEFORM21 could get mapped to a core role like INLINEFORM22 , INLINEFORM23 , etc. or to a modifier role like INLINEFORM24 , INLINEFORM25 , etc. garg2012unsupervised reported that, in practice, PRs mostly get mapped to core roles and SRs to modifier roles, which conforms to the linguistic motivations for this distinction.
Figure FIGREF16 illustrates two copies of the monolingual model, on either side of the crosslingual latent variables. The generative process is as follows:
All the multinomial and binomial distributions have symmetric Dirichlet and beta priors respectively. Figure FIGREF7 gives the probability equations for the monolingual model. This formulation models the global role ordering and repetition preferences using PRs, and limited context for SRs using intervals. Ordering and repetition information was found to be helpful in supervised SRL as well BIBREF9 , BIBREF8 , BIBREF10 . More details, including the motivations behind this model, are in BIBREF3 .
## Multilingual Model
The multilingual model uses word alignments between sentences in a parallel corpus to exploit role correspondences across languages. We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables. Figure FIGREF16 illustrates this model. The generative process, as explained below, remains the same as the monolingual model for the most part, with the exception of aligned roles which are now generated by both the monolingual process as well as the CLV.
Every predicate-tuple has its own inventory of CLVs specific to that tuple. Each CLV INLINEFORM0 is a multi-valued variable where each value defines a distribution over role labels for each language (denoted by INLINEFORM1 above). These distributions over labels are trained to be peaky, so that each value INLINEFORM2 for a CLV represents a correlation between the labels that INLINEFORM3 predicts in the two languages. For example, a value INLINEFORM4 for the CLV INLINEFORM5 might give high probabilities to INLINEFORM6 and INLINEFORM7 in language 1, and to INLINEFORM8 in language 2. If INLINEFORM9 is the only value for INLINEFORM10 that gives high probability to INLINEFORM11 in language 1, and the monolingual model in language 1 decides to assign INLINEFORM12 to the role for INLINEFORM13 , then INLINEFORM14 will predict INLINEFORM15 in language 2, with high probability. We generate the CLVs via a Chinese Restaurant Process BIBREF11 , a non-parametric Bayesian model, which allows us to induce the number of CLVs for every predicate-tuple from the data. We continue to train on the non-parallel sentences using the respective monolingual models.
The multilingual model is deficient, since the aligned roles are being generated twice. Ideally, we would like to add the CLV as additional conditioning variables in the monolingual models. The new joint probability can be written as equation UID11 (Figure FIGREF7 ), which can be further decomposed following the decomposition of the monolingual model in Figure FIGREF7 . However, having this additional conditioning variable breaks the Dirichlet-multinomial conjugacy, which makes it intractable to marginalize out the parameters during inference. Hence, we use an approximation where we treat each of the aligned roles as being generated twice, once by the monolingual model and once by the corresponding CLV (equation ).
This is the first work to incorporate the coupling of aligned arguments directly in a Bayesian SRL model. This makes it easier to see how to extend this model in a principled way to incorporate additional sources of information. First, the model scales gracefully to more than two languages. If there are a total of INLINEFORM0 languages, and there is an aligned argument in INLINEFORM1 of them, the multilingual latent variable is connected to only those INLINEFORM2 aligned arguments.
Second, having one joint Bayesian model allows us to use the same model in various semi-supervised learning settings, just by fixing the annotated variables during training. Section SECREF29 evaluates a setting where we have some labeled data in one language (called source), while no labeled data in the second language (called target). Note that this is different from a classic annotation projection setting (e.g. BIBREF12 ), where the role labels are mapped from source constituents to aligned target constituents.
## Inference and Training
The inference problem consists of predicting the role labels and CLVs (the hidden variables) given the predicate, its voice, and syntactic features of all the identified arguments (the visible variables). We use a collapsed Gibbs-sampling based approach to generate samples for the hidden variables (model parameters are integrated out). The sample counts and the priors are then used to calculate the MAP estimate of the model parameters.
For the monolingual model, the role at a given position is sampled as:
DISPLAYFORM0
where the subscript INLINEFORM0 refers to all the variables except at position INLINEFORM1 , INLINEFORM2 refers to the variables in all the training instances except the current one, and INLINEFORM3 refers to all the model parameters. The above integral has a closed form solution due to Dirichlet-multinomial conjugacy.
For sampling roles in the multilingual model, we also need to consider the probabilities of roles being generated by the CLVs:
DISPLAYFORM0
For sampling CLVs, we need to consider three factors: two corresponding to probabilities of generating the aligned roles, and the third one corresponding to selecting the CLV according to CRP.
DISPLAYFORM0
where the aligned roles INLINEFORM0 and INLINEFORM1 are connected to INLINEFORM2 , and INLINEFORM3 refers to all the variables except INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 .
We use the trained parameters to parse the monolingual data using the monolingual model. The crosslingual parameters are ignored even if they were used during training. Thus, the information coming from the CLVs acts as a regularizer for the monolingual models.
## Evaluation
Following the setting of titovcrosslingual, we evaluate only on the arguments that were correctly identified, as the incorrectly identified arguments do not have any gold semantic labels. Evaluation is done using the metric proposed by lang2011unsupervised, which has 3 components: (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. For each predicate, let INLINEFORM0 denote the total number of argument instances, INLINEFORM1 the instances in the induced cluster INLINEFORM2 , and INLINEFORM3 the instances having label INLINEFORM4 in gold annotations. INLINEFORM5 , INLINEFORM6 , and INLINEFORM7 . The score for each predicate is weighted by the number of its argument instances, and a weighted average is computed over all the predicates.
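A sketch of the per-predicate metric computation follows (hypothetical `cluster_f1`; the instance-weighted averaging over predicates described above is omitted):

```python
from collections import Counter

def cluster_f1(induced, gold):
    """Purity, collocation, and F1 for one predicate (sketch).

    induced, gold: parallel lists of induced-cluster and gold-role
    labels, one entry per argument instance.
    """
    n = len(induced)
    pairs = Counter(zip(induced, gold))
    # purity: each induced cluster is credited with its dominant gold role
    pu = sum(max(c for (i2, _), c in pairs.items() if i2 == i)
             for i in set(induced)) / n
    # collocation: each gold role is credited with its dominant cluster
    co = sum(max(c for (_, g2), c in pairs.items() if g2 == g)
             for g in set(gold)) / n
    f1 = 2 * pu * co / (pu + co)
    return pu, co, f1
```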
## Baseline
We use the same baseline as used by lang2011unsupervised which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e. the dependency relation to its head. If there is a total of INLINEFORM0 clusters, INLINEFORM1 most frequent syntactic functions get a cluster each, and the rest are assigned to the INLINEFORM2 th cluster.
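A sketch of this baseline, assuming each argument instance is represented by its dependency-relation string (function and variable names hypothetical):

```python
from collections import Counter

def syntactic_baseline(instances, n_clusters):
    """Syntactic-function baseline (sketch): the n_clusters-1 most
    frequent dependency relations get a cluster each; all remaining
    relations share the last cluster.

    instances: list of dependency-relation strings, one per argument.
    Returns a list of 0-based cluster ids, parallel to instances.
    """
    frequent = [rel for rel, _ in
                Counter(instances).most_common(n_clusters - 1)]
    cluster = {rel: i for i, rel in enumerate(frequent)}
    return [cluster.get(rel, n_clusters - 1) for rel in instances]
```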
## Closest Previous Work
This work is closely related to the cross-lingual unsupervised SRL work of titovcrosslingual. Their model has separate monolingual models for each language and an extra penalty term which tries to maximize INLINEFORM0 and INLINEFORM1 , i.e. for all the aligned arguments with role label INLINEFORM2 in language 1, it tries to find a role label INLINEFORM3 in language 2 such that the given proportion is maximized, and vice versa. However, there is no efficient way to optimize the objective with this penalty term, and the authors used an inference method similar to annotation projection. Further, the method does not scale naturally to more than two languages. Their algorithm first does monolingual inference in one language ignoring the penalty, and then does the inference in the second language taking the penalty term into account. In contrast, our model adds the latent variables as a part of the model itself, rather than as an external penalty, which enables us to use standard Bayesian learning methods such as sampling.
The monolingual model we use BIBREF3 also has two main advantages over titovcrosslingual. First, the former incorporates a global role ordering probability that is missing in the latter. Secondly, the latter defines argument-keys as a tuple of four syntactic features and all the arguments having the same argument-keys are assigned the same role. This kind of hard clustering is avoided in the former model where two constituents having the same set of features might get assigned different roles if they appear in different contexts.
## Data
Following titovcrosslingual, we run our experiments on the English (EN) and German (DE) sections of the CoNLL 2009 corpus BIBREF13 , and the EN-DE section of the Europarl corpus BIBREF14 . We get about 40k EN and 36k DE sentences from the CoNLL 2009 training set, and about 1.5M parallel EN-DE sentences from Europarl. For appropriate comparison, we keep the same setting as in BIBREF6 for automatic parses and argument identification, which we briefly describe here. The EN sentences are parsed syntactically using MaltParser BIBREF15 and the DE sentences using the LTH parser BIBREF16 . All the non-auxiliary verbs are selected as predicates. In the CoNLL data, this gives us about 3k EN and 500 DE predicates. The total number of predicate instances is 3.4M in EN (89k CoNLL + 3.3M Europarl) and 2.62M in DE (17k CoNLL + 2.6M Europarl). The arguments for EN are identified using the heuristics proposed by lang2011unsupervised. However, we get an F1 score of 85.1% for argument identification on the CoNLL 2009 EN data, as opposed to the 80.7% reported by titovcrosslingual. This could be due to implementation differences, which unfortunately makes our EN results incomparable. For DE, the arguments are identified using the LTH system BIBREF16 , which gives an F1 score of 86.5% on the CoNLL 2009 DE data. The word alignments for the EN-DE parallel Europarl corpus are computed using GIZA++ BIBREF17 . For high precision, only the intersecting alignments in the two directions are kept. We define two semantic arguments as aligned if their head words are aligned. In total, we get 9.3M arguments for EN (240k CoNLL + 9.1M Europarl) and 4.43M for DE (32k CoNLL + 4.4M Europarl). Out of these, 0.76M arguments are aligned.
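The high-precision alignment step can be sketched as follows (our illustration; `e2f` and `f2e` stand for the two GIZA++ alignment directions, with `f2e` links given in target-to-source order):

```python
def intersect_alignments(e2f, f2e):
    """Keep only alignment links present in both directions, the
    standard high-precision intersection of bidirectional alignments.

    Each input is a set of (source, target) word-index pairs; `f2e`
    is given in the reverse (target, source) order and is flipped
    before intersecting.
    """
    return e2f & {(s, t) for (t, s) in f2e}

def arguments_aligned(head_src, head_tgt, links):
    """Two arguments count as aligned if their head words are aligned."""
    return (head_src, head_tgt) in links
```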
## Main Results
Since the CoNLL annotations have 21 semantic roles in total, we use 21 roles in our model as well as the baseline. Following garg2012unsupervised, we set the number of PRs to 2 (excluding INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ), and SRs to 21-2=19. Table TABREF27 shows the results.
In the first setting (Line 1), we train and test the monolingual model on the CoNLL data. We observe significant improvements in F1 score over the Baseline (Line 0) in both languages. Using the CoNLL 2009 dataset alone, titovcrosslingual report an F1 score of 80.9% (PU=86.8%, CO=75.7%) for German. Thus, our monolingual model outperforms their monolingual model in German. For English, they report an F1 score of 83.6% (PU=87.5%, CO=80.1%), but note that our English results are not directly comparable to theirs due to differences in argument identification, as discussed in section SECREF25 . As their argument identification score is lower, their system is perhaps discarding “difficult” arguments, which leads to a higher clustering score.
In the second setting (Line 2), we use the additional monolingual Europarl (EP) data for training. We get equivalent results in English and a significant improvement in German compared to our previous setting (Line 1). The German dataset in CoNLL is quite small and benefits from the additional EP training data. In contrast, the English model is already quite good due to the relatively large CoNLL dataset and accurate syntactic parsers. Unfortunately, titovcrosslingual do not report results with this setting.
The third setting (Line 3) gives the results of our multilingual model, which adds the word alignments in the EP data. Comparing with Line 2, we get non-significant improvements in both languages. titovcrosslingual obtain an F1 score of 82.7% (PU=85.0%, CO=80.6%) for German, and 83.7% (PU=86.8%, CO=80.7%) for English. Thus, for German, our multilingual Bayesian model is able to capture the cross-lingual patterns at least as well as the external penalty term in BIBREF6 . We cannot compare the English results unfortunately due to differences in argument identification.
We also compared monolingual and bilingual training data using a setting that emulates the standard supervised setup of separate training and test data sets. We train only on the EP dataset and test on the CoNLL dataset. Lines 4 and 5 of Table TABREF27 give the results. The multilingual model obtains small improvements in both languages, which confirms the results from the standard unsupervised setup, comparing lines 2 to 3.
These results indicate that little information can be learned about semantic roles from this parallel data setup. One possible explanation is that the setup itself is inadequate: given the definition of aligned arguments, only 8% of English arguments and 17% of German arguments are aligned. Together with our experiments, this suggests that improving the alignment model, for example by modeling it jointly with SRI, is a necessary step towards making effective use of parallel data in multilingual SRI. We leave this exploration to future work.
## Multilingual Training with Labeled Data for One Language
Another motivation for jointly modeling SRL in multiple languages is the transfer of information from a resource rich language to a resource poor language. We evaluated our model in a very general annotation transfer scenario, where we have a small labeled dataset for one language (source), and a large parallel unlabeled dataset for the source and another (target) language. We investigate whether this setting improves the parameter estimates for the target language. To this end, we clamp the role annotations of the source language in the CoNLL dataset using a predefined mapping, and do not sample them during training. This data gives us good parameters for the source language, which are used to sample the roles of the source language in the unlabeled Europarl data. The CLVs aim to capture this improvement and thereby improve sampling and parameter estimates for the target language. Table TABREF28 shows the results of this experiment. We obtain small improvements in the target languages. As in the unsupervised setting, the small percentage of aligned roles probably limits the impact of the cross-lingual information.
## Labeled Data in Monolingual Model
We explored the improvement in the monolingual model in a semi-supervised setting. To this end, we randomly selected INLINEFORM0 of the sentences in the CoNLL dataset as “supervised sentences” and the rest INLINEFORM1 were kept unsupervised. Next, we clamped the role labels of the supervised sentences using the predefined mapping from Section SECREF29 . Sampling was done on the unsupervised sentences as usual. We then measured the clustering performance using the trained parameters.
To assess the contribution of partial supervision better, we constructed a “supervised baseline” as follows. For predicates seen in the supervised sentences, a MAP estimate of the parameters was calculated using the predefined mapping. For unseen predicates, the standard baseline was used.
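Our reading of this supervised baseline can be sketched as follows (hypothetical code, not the authors'): for a seen predicate, each syntactic function is mapped to the gold role most frequently observed with it in the supervised sentences, a simple MAP-style estimate, while unseen predicates fall back to the standard baseline:

```python
from collections import Counter, defaultdict

def supervised_baseline(train, test, fallback):
    """Sketch of a supervised baseline with fallback for unseen predicates.

    `train` is a list of (predicate, syn_function, gold_role) triples
    from the supervised sentences; `test` is a list of
    (predicate, syn_function) pairs; `fallback` handles any pair not
    covered by the supervised counts.
    """
    counts = defaultdict(Counter)
    seen = set()
    for pred, fn, role in train:
        counts[(pred, fn)][role] += 1
        seen.add(pred)
    out = []
    for pred, fn in test:
        if pred in seen and counts[(pred, fn)]:
            # Most frequent gold role for this (predicate, function) pair.
            out.append(counts[(pred, fn)].most_common(1)[0][0])
        else:
            out.append(fallback(pred, fn))
    return out
```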
Figures FIGREF33 and FIGREF33 show the performance variation with INLINEFORM0 . We make the following observations:
- In both languages, at around INLINEFORM0 , the supervised baseline starts outperforming the semi-supervised model, which suggests that manually labeling about 10% of the sentences is a good enough alternative to our training procedure. Note that 10% amounts to about 3.6k sentences in German and 4k in English. We noticed that the proportion of seen predicates increases dramatically as we increase the proportion of supervised sentences. At 10% supervised sentences, the model has already seen 63% of the predicates in German and 44% in English. This explains to some extent why only 10% labeled sentences are enough.
- For German, it takes about 3.5% or 1260 supervised sentences to achieve the same performance increase as 1.5M unlabeled sentences (Line 1 to Line 2 in Table TABREF27 ). Adding about 180 more supervised sentences also covers the benefit obtained by alignments in the multilingual model (Line 2 to Line 3 in Table TABREF27 ). There is no noticeable performance difference in English.
We also evaluated the performance variation on a completely unseen CoNLL test set. Since the test set is very small compared to the training set, the clustering evaluation is not as reliable. Nonetheless, we broadly obtained the same pattern.
## Related Work
As discussed in section SECREF24 , our work is closely related to the crosslingual unsupervised SRL work of titovcrosslingual. The idea of using superlingual latent variables to capture cross-lingual information was proposed for POS tagging by naseem2009multilingual, which we use here for SRL. In a semi-supervised setting, pado2009cross used a graph based approach to transfer semantic role annotations from English to German. furstenau2009graph used a graph alignment method to measure the semantic and syntactic similarity between dependency tree arguments of known and unknown verbs.
For monolingual unsupervised SRL, swier2004unsupervised presented the first work on a domain-general corpus, the British National Corpus, using 54 verbs taken from VerbNet. garg2012unsupervised proposed a Bayesian model for this problem that we use here. titov2012bayesian also proposed a closely related Bayesian model. grenager2006unsupervised proposed a generative model but their parameter space consisted of all possible linkings of syntactic constituents and semantic roles, which made unsupervised learning difficult and a separate language-specific rule based method had to be used to constrain this space. Other proposed models include an iterative split-merge algorithm BIBREF18 and a graph-partitioning based approach BIBREF1 . marquez2008semantic provide a good overview of the supervised SRL systems.
## Conclusions
We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora. The crosslingual latent variables capture correlations between roles in different languages, and regularize the parameter estimates of the monolingual models. Because this is a joint Bayesian model of multilingual SRI, we can apply the same model to a variety of training scenarios just by changing the inference procedure appropriately. We evaluate monolingual SRI with a large unlabeled dataset, bilingual SRI with a parallel corpus, bilingual SRI with annotations available for the source language, and monolingual SRI with a small labeled dataset. Increasing the amount of monolingual unlabeled data significantly improves SRI in German but not in English. Adding word alignments in parallel sentences results in small, non-significant improvements, even if some labeled data is available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, under which only a small percentage of roles are aligned. Further analysis reveals that annotating small amounts of data can easily surpass the gains obtained by adding large unlabeled datasets or parallel corpora.
Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment.
## Acknowledgments
This work was funded by the Swiss NSF grant 200021_125137 and EC FP7 grant PARLANCE.
🍵 Sencha: Scientific Paper Chunking Assessment
Scientific Challenges - A dataset for evaluating chunking algorithms on academic papers.
Overview
Sencha is designed to test how well chunking algorithms handle long-form scientific documents. It contains full-text NLP research papers with questions that require finding specific information across multiple sections.
Key Challenges
- Handling structured sections (Abstract, Methods, Results, etc.)
- Preserving citation context (BIBREF tags)
- Managing hierarchical section headers
- Chunking technical content with equations and terminology
Dataset Structure
Corpus
The corpus config contains 250 full-text NLP papers.
| Column | Type | Description |
|---|---|---|
| `id` | string | ArXiv paper ID |
| `title` | string | Paper title |
| `text` | string | Full paper text in markdown format |
| `num_sections` | int | Number of sections in the paper |
Questions
The questions config contains 1,146 questions about paper content.
| Column | Type | Description |
|---|---|---|
| `id` | string | Unique question identifier |
| `paper_id` | string | Reference to corpus document (ArXiv ID) |
| `question` | string | Question about the paper content |
| `answer` | string | Answer to the question |
| `chunk-must-contain` | string | Evidence passage that answers the question |
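Given the `chunk-must-contain` field, a chunk can be judged relevant with a simple containment check. A minimal sketch (hypothetical helper, not part of any released library), with whitespace normalized so that line-wrapping differences between the chunk and the stored passage do not cause spurious misses:

```python
def relevant_chunks(chunks, must_contain):
    """Return indices of chunks containing the evidence passage."""
    def norm(s):
        # Collapse all runs of whitespace (including newlines) to spaces.
        return " ".join(s.split())
    needle = norm(must_contain)
    return [i for i, chunk in enumerate(chunks) if needle in norm(chunk)]
```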
Statistics
| Metric | Value |
|---|---|
| Papers | 250 |
| Questions | 1,146 |
| Avg paper length | |
| Min paper length | ~5,600 chars |
| Max paper length | ~98,500 chars |
| Avg must-contain length | 613 chars |
| Domain | NLP/Computational Linguistics |
Usage
```python
from datasets import load_dataset

# Load the corpus
corpus = load_dataset("chonkie-ai/sencha", "corpus", split="train")

# Load the questions
questions = load_dataset("chonkie-ai/sencha", "questions", split="train")

# Use with the MTCB evaluator
from mtcb import SenchaEvaluator
from chonkie import RecursiveChunker

evaluator = SenchaEvaluator(
    chunker=RecursiveChunker(chunk_size=512),
    embedding_model="voyage-3-large",
)
result = evaluator.evaluate(k=[1, 3, 5, 10])
```
Sample Topics
The papers cover various NLP topics including:
- Sentiment analysis and affective computing
- Word embeddings and language models
- Text classification and NER
- Question answering systems
- Machine translation
- Social media analysis
- Clinical NLP
Source
Derived from QASPER (NAACL 2021) by Allen AI - a dataset for question answering on scientific research papers.
License
CC-BY-4.0 (following QASPER license)