{
"paper_id": "K19-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:18.197605Z"
},
"title": "Policy Preference Detection in Parliamentary Debate Motions",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Abercrombie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Manchester",
"location": {}
},
"email": "gavin.abercrombie@manchester.ac.uk"
},
{
"first": "Riza",
"middle": [],
"last": "Batista-Navarro",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Manchester",
"location": {}
},
"email": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Debate motions (proposals) tabled in the UK Parliament contain information about the stated policy preferences of the Members of Parliament who propose them, and are key to the analysis of all subsequent speeches given in response to them. We attempt to automatically label debate motions with codes from a pre-existing coding scheme developed by political scientists for the annotation and analysis of political parties' manifestos. We develop annotation guidelines for the task of applying these codes to debate motions at two levels of granularity and produce a dataset of manually labelled examples. We evaluate the annotation process and the reliability and utility of the labelling scheme, finding that inter-annotator agreement is comparable with that of other studies conducted on manifesto data. Moreover, we test a variety of ways of automatically labelling motions with the codes, ranging from similarity matching to neural classification methods, and evaluate them against the gold standard labels. From these experiments, we note that established supervised baselines are not always able to improve over simple lexical heuristics. At the same time, we detect a clear and evident benefit when employing BERT, a state-of-the-art deep language representation model, even in classification scenarios with over 30 different labels and limited amounts of training data.",
"pdf_parse": {
"paper_id": "K19-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "Debate motions (proposals) tabled in the UK Parliament contain information about the stated policy preferences of the Members of Parliament who propose them, and are key to the analysis of all subsequent speeches given in response to them. We attempt to automatically label debate motions with codes from a pre-existing coding scheme developed by political scientists for the annotation and analysis of political parties' manifestos. We develop annotation guidelines for the task of applying these codes to debate motions at two levels of granularity and produce a dataset of manually labelled examples. We evaluate the annotation process and the reliability and utility of the labelling scheme, finding that inter-annotator agreement is comparable with that of other studies conducted on manifesto data. Moreover, we test a variety of ways of automatically labelling motions with the codes, ranging from similarity matching to neural classification methods, and evaluate them against the gold standard labels. From these experiments, we note that established supervised baselines are not always able to improve over simple lexical heuristics. At the same time, we detect a clear and evident benefit when employing BERT, a state-of-the-art deep language representation model, even in classification scenarios with over 30 different labels and limited amounts of training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Commonly known as the Hansard record, transcripts of debates that take place in the House of Commons of the United Kingdom (UK) Parliament are of interest to scholars of political science as well as the media and members of the public who wish to monitor the actions of their elected representatives. Debate motions (the proposals tabled for debate) are expressions of the policy positions taken by the governments, political parties, and individual Members of Parliament (MPs) who propose them. As all speeches given and all votes cast in the House are responses to one of these proposals, the motions are key to any understanding and analysis of the opinions and positions expressed in the subsequent speeches given in parliamentary debates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By definition, debate motions convey the stated policy preferences of the MPs or parties who propose them. They therefore express polaritypositive or negative-towards some target, such as a piece of legislation, policy, or state of affairs. As noted by Thomas et al. (2006) , the polarity of a debate proposal can strongly affect the language used by debate participants to either support or oppose it, effectively acting as a polarity shifter on the ensuing speeches. Analysis of debate motions is therefore a key first step in automatically determining the positions presented and opinions expressed by all speakers in the wider debates.",
"cite_spans": [
{
"start": 253,
"end": 273,
"text": "Thomas et al. (2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Additionally, there are further challenges associated with this task that differentiate it from the forms of sentiment analysis typically performed in other domains. Under Parliament's Rules of Behaviour, 1 debate participants use an esoteric speaking style that is not only laden with opaque procedural language and parliamentary jargon, but is also indirect, containing few explicitly negative words or phrases, even where negative positions are being expressed (Abercrombie and Batista-Navarro, 2018a) .",
"cite_spans": [
{
"start": 464,
"end": 504,
"text": "(Abercrombie and Batista-Navarro, 2018a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The topics discussed in these debates revolve around policies and policy domains. Topic modelling or detection methods, which tend to produce coarse overviews and output neutral topics such as 'education' or 'transport' (as in Menini et al. (2017) , for instance), are therefore not suitable for our purposes. Rather, we seek to find the proposer of a motion's position or policy preference towards each topic-in other words, an opiniontopic. Topic labels do exist for the Hansard transcripts, such as those produced by the House of Commons Library or parliamentary monitoring organsitions such as Public Whip. 2 However, these are unsuitable due to, in the former case, the fact that they incorporate no opinion or policy preference information, and for the latter, being unsystematic, insufficient in both quantity and coverage of the topics that appear in Hansard, and not future-proof (that is, they do not cover unseen topics that may arise (Abercrombie and Batista-Navarro, 2018b) ).",
"cite_spans": [
{
"start": 227,
"end": 247,
"text": "Menini et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 946,
"end": 986,
"text": "(Abercrombie and Batista-Navarro, 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use the coding scheme devised by the Manifesto Project, 3 because: (a) it is systematic, having been developed by political scientists over a 40 year period, (b) it is comprehensive and designed to cover any policy preference that may be expressed by any political party in the world, (c) it has been devised to cover any policies that may arise in the future, and (d) there exist many expert-coded examples of manifestos, which we can use as reference documents and/or for validation purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We approach automatic policy preference labelling at both the motion and (quasi-)sentence levels (see Section 2). We envisage that the output could therefore be used for downstream tasks, such as sentiment and stance analysis and agreement assessment of debate speeches, which may be performed at different levels of granularity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions This paper makes the following contributions to the literature surrounding natural language processing of political documents and civic technology applications:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We develop a corpus of English language debate motions from the UK Parliament, annotated with policy position labels at two levels of granularity. We also produce annotation guidelines for this task, analysis of inter-annotator agreement rates, and further evaluation of the difficulty of the task on data from both parliamentary debates and the manifestos. We make these resources publicly available for the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We test and evaluate two different ways of automatically labelling debate motions with Manifesto Project codes: lexical similarity matching and supervised classification. For the former, we compare a baseline of unigram overlap with cosine similarity measurement of vector representations of the texts. For the latter, we test a range of established baselines and state-of-the-art deep learning methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rather than being forums in which speakers attempt to persuade one another of their points of view, as the word 'debate' may imply, parliamentary speeches are displays of position-taking that MPs use to communicate their policy preferences to 'other members within their own party, to members of other parties, and, most important, to their voters' (Proksch and Slapin, 2015) . Debate motions are proposals put forward in Parliament, and as such are the objects of all votes and decisions made by MPs, and, in theory at least, of all speeches and utterances made in the House. 4 Each parliamentary debate begins with such a motion, and may include further amendment motions (usually designed to alter or reverse the meaning of the original) as it progresses. Motions routinely begin with the words 'I beg to move That this House ...', and may include multiple parts, as in Example 1, 5 which consists of two clauses, and appears to take a positive position towards international peace:",
"cite_spans": [
{
"start": 349,
"end": 375,
"text": "(Proksch and Slapin, 2015)",
"ref_id": "BIBREF23"
},
{
"start": 577,
"end": 578,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "I beg to move That this House notes the worsening humanitarian crisis in Yemen; and calls upon the Government to take a lead in passing a resolution at the UN Security Council that would give effect to an immediate ceasefire in Yemen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The concept of policy preferences is widely used in the political science literature (e.g. Budge et al., 2001; Lowe et al., 2011; Volkens et al., 2013) to represent the positions of political actors expressed in text or speech. The Manifesto Project is an ongoing venture that spans four decades of work in this area and consists of a collection of party political documents annotated by trained experts with codes (labels) representing such preferences. Organised under seven 'domains', the coding scheme comprises 57 policy preference codes, all but one of which (408: Economic goals) are 'positional', encoding a positive or negative position towards a policy issue (Mikhaylov et al., 2008) . Indeed, many of these codes exist in polar opposite pairs, such as 504: Welfare State Expansion and 505: Welfare State Limitation. The included manifestos are coded at the quasi-sentence level-that is, units of text that span a sentence or part of a sentence, and which have been judged by the annotators to contain 'exactly one statement or \"message\"' (Werner et al., 2011) , as in Example 2, in which a single sentence has been annotated as four quasi-sentences: 6",
"cite_spans": [
{
"start": 91,
"end": 110,
"text": "Budge et al., 2001;",
"ref_id": "BIBREF3"
},
{
"start": 111,
"end": 129,
"text": "Lowe et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 130,
"end": 151,
"text": "Volkens et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 669,
"end": 693,
"text": "(Mikhaylov et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 1049,
"end": 1070,
"text": "(Werner et al., 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To secure your first job we will create 3 million new apprenticeships; 411: Technology and Infrastructure take everyone earning less than 12,500 out of Income Tax ",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 162,
"text": "Tax",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "There exists a large body of work concerning the analysis of opinions and policy positions in the related domains of legislative debate transcripts (for a survey, see Abercrombie and Batista-Navarro, 2019) and party political manifestos (see Volkens et al., 2015) . Inspired by work on analysis of text from other domains, such as product reviews and social media, much of the computer science research in this area has concentrated on classifying the sentiment polarity of individual speeches (e.g. Burford et al., 2015; Thomas et al., 2006; Yogatama et al., 2015) . Political scientists meanwhile, have tended to focus on position scalingthe task of placing the combined contributions of a political actor on a (usually) one-dimensional scale, such as Left-Right (e.g. Glava\u0161 et al., 2017b; Laver et al., 2003; Nanni et al., 2019a; Proksch and Slapin, 2010) . In either case, the majority of this work does not take into consideration the topics or policy areas addressed in the speeches.",
"cite_spans": [
{
"start": 167,
"end": 205,
"text": "Abercrombie and Batista-Navarro, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 242,
"end": 263,
"text": "Volkens et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 500,
"end": 521,
"text": "Burford et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 522,
"end": 542,
"text": "Thomas et al., 2006;",
"ref_id": "BIBREF26"
},
{
"start": 543,
"end": 565,
"text": "Yogatama et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 771,
"end": 792,
"text": "Glava\u0161 et al., 2017b;",
"ref_id": "BIBREF9"
},
{
"start": 793,
"end": 812,
"text": "Laver et al., 2003;",
"ref_id": "BIBREF14"
},
{
"start": 813,
"end": 833,
"text": "Nanni et al., 2019a;",
"ref_id": "BIBREF19"
},
{
"start": 834,
"end": 859,
"text": "Proksch and Slapin, 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Supervised classification approaches to opinion-topic identification have been explored in a number of papers. Abercrombie and Batista-Navarro (2018b) obtain good performance in classifying debate motions as belonging to one of 13 'policies' or opinion-topics. However, this approach is somewhat limited in that they use a set of pre-existing labelled examples which does not extend to cover the whole Hansard corpus or any new policies that may arise in the future. A similar setting to ours is that of Herzog et al. (2018) , who use labels from the Comparative Agendas Project (CAP). 7 However, while they seek to discover latent topics present in the corpus, we wish to determine the policy-topic of each individual debate/motion. Rather than employ labelled manifesto data, as we do, they use the descriptions of the CAP codes.",
"cite_spans": [
{
"start": 504,
"end": 524,
"text": "Herzog et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Concerning policy identification in party political manifestos, previous studies have focused on topical segmentation and classification of sentences into the seven coarsegrained policy domains (Glava\u0161 et al., 2017a; Zirn et al., 2016) . Meanwhile, Subramanian et al. (2018) recently presented a deep learning model that classifies manifesto sentences with the finer-grained code-level scheme of the Manifesto Project, as well as placing them on a Left-Right scale. In order to contribute to these research efforts and following recent advancements in deep language representation models (Devlin et al., 2018; Peters et al., 2018) , we test the potential of BERT (Bidirectional Encoder Representations from Transformers) for policy-topic classification on both debate motions and manifestos.",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "(Glava\u0161 et al., 2017a;",
"ref_id": "BIBREF8"
},
{
"start": 217,
"end": 235,
"text": "Zirn et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 249,
"end": 274,
"text": "Subramanian et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 588,
"end": 609,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 610,
"end": 630,
"text": "Peters et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "There is also a growing body of research on the evaluation of annotations for this domain. While the Manifesto Project relies on trained individual annotators to label manifestos, Mikhaylov et al. (2008) report the results of experiments which show that agreement between annotators is difficult to achieve, casting doubts on the reliability of the Project's codes. However, in similar experiments, Lacewell and Werner (2013) report greater inter-annotator agreement, and claim that with ongoing training, annotators can produce reliable labels. An extended analysis of the validity and reproducibility of the coding scheme is offered by Gemenis (2013) , who remarks on the fact that 'the problem of unreliability does not lie with the coders but with the complex nature of the CMP (Comparative Manifesto Project) coding scheme'. Aware of such challenges, and in order to offer an additional comparison to these previous studies, in this work we provide a detailed analysis of the agreement rates of our annotators on both manifestos and debate motions.",
"cite_spans": [
{
"start": 180,
"end": 203,
"text": "Mikhaylov et al. (2008)",
"ref_id": "BIBREF17"
},
{
"start": 638,
"end": 652,
"text": "Gemenis (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "In the experimental section we report on the use of codes from the Manifesto Project as policy preference labels, with the goal of applying them to debate motions. These labels are convenient because: (a) like debate transcripts, they have been collected over time; and (b) the Project is ongoing, meaning that new example manifestos will continue to be added to it, mitigating potential concept drift problems (in which the language used to refer to aspects of different policy areas may change diachronically).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "To construct our corpus, we made use of the data sources described below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "The Manifesto Project",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "We used annotated manifestos (1) as reference texts for labelling of debate motions by similarity matching, and (2) training a neural network for cross-domain classification of the motions. We downloaded all fifteen of the annotated United Kingdom (including Northern Ireland) manifestos from the Manifesto Corpus Version 2018-1 (Krause et al., 2018) Table 2 : The parties and years of publication of the manifestos that we use as reference texts and training data, and the number of labelled quasi-sentences (QSs) by party in this subset of the manifesto data.",
"cite_spans": [
{
"start": 329,
"end": 350,
"text": "(Krause et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "In this subset, the number of UK manifesto quasi-sentences labelled with codes in each domain varies considerably (see Table 1 ). These manifestos were written by a variety of political parties for elections over an 18 year period ( Table 2) . The most prevalent code in these manifestos is 504: Welfare State Expansion (2,691 examples), and the least used is 103: Anti-Imperialism (3 examples). Two codes, 102: Foreign Special Rela-tionships: Negative and 415: Marxist Analysis: Positive, do not appear at all in manifestos from the United Kingdom.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 233,
"end": 242,
"text": "Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "The Hansard record of House of Commons debates is available for each day on which debates have taken place from 1919 to the present day in xml format at https://www. theyworkforyou.com, where it is updated daily with the most recent debates. As the record is more complete for recent years, we downloaded all files from May 7th 1997 (the start of that year's session of Parliament) to February 28th 2019. From these we extracted 1,156 motions together with the titles of the debates and the dates on which they were tabled. We manually removed procedural motions (those concerned solely with the workings of Parliament) from the dataset as these do not concern policy preferences and have no equivalents in political manifestos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Debate transcripts",
"sec_num": null
},
{
"text": "In order to approximate the format of the data in the Manifesto Project, and to investigate policy preference detection at different levels of granularity, we divided each motion into smaller units. For convenience, we approximated quasi-sentences in the Hansard data by automatically dividing motions into clauses, which are separated by semicolons in the transcripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Debate transcripts",
"sec_num": null
},
{
"text": "We adapt the Project's Coding Instructions (Werner et al., 2011) to provide guidelines for the annotation of debate motions. We use version 4 of these instructions because, although a more recent, more finely grained version exists, there are as yet few example manifestos coded under the newer scheme. To complete the annotation task, we recruited three Political Science Master's students from the University of Mannheim, who worked for a total of 40 hours each over a two month period.",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "(Werner et al., 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "5"
},
{
"text": "Annotations were carried out in two stages: an initial training phase, followed by labelling of the main dataset. We used the coding instructions of version 4 of the Manifesto Project handbook 9 supplemented by debate motion-specific guidelines including notes based on the annotators' discussions during training. 10 For the training phase, after being introduced to the data and the coding instructions, the annotators individually labelled three batches of motions and their quasi-sentences. In addition to labelling each of these with one of the codes, they were instructed to note examples which they found difficult to decide upon. Between each batch we met to discuss these instances, as well as other examples on which the annotators disagreed, adding notes to the annotation guidelines based on the observations made. Interannotator agreement during training ranged from 'fair' to 'substantial', following common interpretation of Fleiss' kappa scores (Landis and Koch, 1977 ) (see Table 3 ).",
"cite_spans": [
{
"start": 973,
"end": 983,
"text": "Koch, 1977",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 991,
"end": 998,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Debate motions",
"sec_num": null
},
{
"text": "The final corpus includes 386 hand-annotated motions and 1,683 quasi-sentences. 11 The majority of these have been labelled by two of the three annotators. Inter-annotator agreement is within the ranges generally interpreted as being 'moderate' to 'substantial' (see Table 4 ). The slightly higher agreement at the quasi-sentence level than on overall motion labels suggests that it may be difficult in some cases to select a single policy preference code for a whole motion. A subsection of the corpus (41 motions, 180 quasi-sentences) was labelled by all three annotators. Fleiss' kappa scores for this subsection are 0.46 at both levels, which indicates 'moderate' agreement. Following Pustejovsky and Stubbs (2012) , the gold standard label for each example is obtained by adjudication, which was carried out by the first author.",
"cite_spans": [
{
"start": 689,
"end": 718,
"text": "Pustejovsky and Stubbs (2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Debate motions",
"sec_num": null
},
{
"text": "To validate our labelling procedure, and for comparison with other work, we also asked the annotators to label a small quantity (120) of quasisentences from the Manifesto Project. We calculate Fleiss' kappa for these annotations to be 0.48, which is comparable to that obtained on the main dataset of debate motions, and higher than those reported by Mikhaylov et al. (2008) on manifestos.",
"cite_spans": [
{
"start": 351,
"end": 374,
"text": "Mikhaylov et al. (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Manifestos",
"sec_num": null
},
{
"text": "Again, we asked the annotators to mark any examples which they considered to be difficult to decide upon. Agreement (Fleiss' kappa) on these 'difficult' cases is only 0.17, with only one ex- 10 These guidelines are available along with the corpus. 11 These constitute examples with 'gold standard' labels. The corpus also includes examples labelled by a sole annotator ('silver standard') and further unlabelled motions (see Table 5 ). ample marked as such by all three annotators. In this case, two of them used the 'correct' Manifesto Project gold label, while the third annotator applied a different code from the same domain. Overall, of the 47 examples (39.2%) on which all three annotators agree, 36 of these agree with the gold label (30% of the total). Domain-level agreement is 0.56, which is also similar to that achieved on the debate motions.",
"cite_spans": [
{
"start": 191,
"end": 193,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Manifestos",
"sec_num": null
},
{
"text": "We make the corpus available for download at https://madata.bib.uni-mannheim. de/308. The number of labelled and unlabelled examples it contains can be seen in Table 5 . For the gold-labelled data, motions range in length from one to 13 quasi-sentences (mean = 4.3), with each of these consisting of between four and 163 tokens (mean = 28.7).",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "The Motion Policy Preference Corpus",
"sec_num": null
},
{
"text": "We investigated two ways of automatically labelling debate motions with the codes from the Manifesto Project: (1) similarity matching and (2) supervised classification. We tested both at the quasi-sentence level and we additionally ex- periment with similarity matching methods at the whole motion level, where the lack of sufficient training data prevents application of supervised learning methods. In pre-processing we filtered out any motions that have gold standard labels that appear less than ten times in the corpus, leaving 370 motions and 1,634 quasi-sentences, each annotated with one of the 32 remaining class labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Labelling Methods",
"sec_num": "6"
},
{
"text": "We tested two methods of matching debate motions to codes from the Manifesto Project, comparing a baseline of unigram overlap scores with cosine similarity measurement. In each case, we measured the similarity of the list of tokens A = A 1 , A 2 , ..., A n in each motion or quasi-sentence text and the list of tokens in each collection of concatenated manifesto extracts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity matching",
"sec_num": null
},
{
"text": "B = B 1 , B 2 , ...B n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity matching",
"sec_num": null
},
{
"text": "For unigram overlap, we simply counted the union of the sets of tokens from A and B. For the latter method, each text was represented by its term frequency-inverse document frequency vector (tf-idf), and cosine similarity calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity matching",
"sec_num": null
},
{
"text": "A \u2022 B || A|||| B||",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity matching",
"sec_num": null
},
{
"text": "With both of these approaches, we explored the use of the following combinations of sources of textual unigram features: the debate titles, which have been shown to be highly predictive of a motion's opinion-topic in a supervised classification setting (Abercrombie and Batista-Navarro, 2018b) , the debate motions themselves, and both the titles and motions together.",
"cite_spans": [
{
"start": 253,
"end": 293,
"text": "(Abercrombie and Batista-Navarro, 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity matching",
"sec_num": null
},
{
"text": "We tested a range of supervised machine learning algorithms for the policy preference classification task, ranging from traditional approaches to recently developed pre-trained deep language representation models. We were particularly interested in assessing the performance of such approaches: (1) despite the limited training data available (1.6k motion quasi-sentences); and (2) in a cross-domain application (training on over 16k manifesto quasi-sentences, and testing on the motion quasi-sentences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Classification",
"sec_num": null
},
{
"text": "First, we examined the performance of Support Vector Machines (SVM) trained using lexical (tfidf) or word embedding (w-emb) features, which act as strong traditional baselines. We tested both pre-trained general purpose word embeddings from https://fasttext.cc (Mikolov et al., 2018) and in-domain vectors generated on the Hansard transcripts from Nanni et al. (2019b) .",
"cite_spans": [
{
"start": 261,
"end": 283,
"text": "(Mikolov et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 348,
"end": 368,
"text": "Nanni et al. (2019b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Classification",
"sec_num": null
},
{
"text": "We also report the results of a widely adopted neural network baseline for topic classification (see for instance Glava\u0161 et al. (2017a) and Subramanian et al. (2018) in the context of manifesto quasi-sentences classification): a Convolutional Neural Network (CNN) with single convolution layer and a single max-pooling layer. We again tested the CNN with general purpose and indomain embeddings.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "Glava\u0161 et al. (2017a)",
"ref_id": "BIBREF8"
},
{
"start": 140,
"end": 165,
"text": "Subramanian et al. (2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Classification",
"sec_num": null
},
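The CNN baseline amounts to one 1-D convolution over the word-vector sequence followed by global max-pooling, producing one feature per filter. The sketch below illustrates that forward pass only; the window width, tanh activation, and the toy filter weights are assumptions for the example (in a real model the filters are learned).

```python
import math

def conv1d_maxpool(word_vecs, filters, width=3):
    """One convolutional layer (window size `width`, tanh activation)
    followed by global max-pooling: one feature per filter."""
    feats = []
    for f in filters:
        best = -math.inf
        for i in range(len(word_vecs) - width + 1):
            # flatten the window of word vectors and take a dot product with the filter
            window = [x for vec in word_vecs[i:i + width] for x in vec]
            best = max(best, math.tanh(sum(w * x for w, x in zip(f, window))))
        feats.append(best)  # global max over all window positions
    return feats
```

The pooled feature vector would then feed a final soft-max layer over the policy (or domain) codes.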
{
"text": "As final skyline comparisons, we present the performance of (1) a pre-trained BERT (large, cased) model (Devlin et al., 2018) , with a final soft-max layer; and (2) the same pre-trained BERT model, with a CNN and max-pooling layers before the soft-max layer. We additionally experimented with the latter two models in a fine-tuning setting: after training on manifestos, they have been further fine-tuned on motions.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Classification",
"sec_num": null
},
{
"text": "We tested all approaches with a 80/20 split of the dataset, and trained all the neural models for three iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Classification",
"sec_num": null
},
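A deterministic version of such a split can be sketched as follows; the seed and helper name are arbitrary assumptions, since the paper does not specify how the split was drawn.

```python
import random

def train_test_split(items, test_ratio=0.2, seed=13):
    """Shuffle a labelled dataset reproducibly and split it, e.g. 80/20."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]
```

Fixing the seed matters here: with only 1.6k motion quasi-sentences and over 30 labels, different splits can noticeably shift the scores being compared.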
{
"text": "We evaluated the predicted labels of each experimental model against the gold standard labels produced by the annotation process. For the machine learning methods, we report F1 scores with both macro and micro weightings in order to offer an understanding of the quality overall, as well as for the different classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
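For reference, macro F1 averages per-class F1 scores (so rare policy codes count as much as frequent ones), while micro F1 pools all decisions across classes and, for single-label classification, reduces to accuracy. A stdlib sketch of both averages:

```python
from collections import Counter

def f1_scores(gold, pred):
    """Macro- and micro-averaged F1 for single-label predictions."""
    labels = sorted(set(gold) | set(pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    per_class = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_class) / len(per_class)
    # micro pools all decisions: 2*TP / (2*TP + FP + FN)
    t, f_p, f_n = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * t / (2 * t + f_p + f_n) if t + f_p + f_n else 0.0
    return macro, micro
```

With a skewed label distribution like ours, macro F1 is the stricter number, which is why both are reported.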
{
"text": "We evaluate labelling of motions by similarity matching at two levels of granularity: quasisentence and whole motion. Cosine similarity matching comfortably outperforms the baseline at both levels of granularity and at both the policy and domain levels (see Figure 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 267,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motions: Similarity Matching",
"sec_num": null
},
{
"text": "Unlike the findings of Abercrombie and Batista-Navarro (2018b), in most settings, we do not find the debate titles to be as powerful indicators of class labels as features derived from the texts of the motions, perhaps due to our larger set of class labels containing more similar (same domain) policy preference codes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motions: Similarity Matching",
"sec_num": null
},
{
"text": "Best performances at both policy and domain levels (F1 macro = 0.59) are obtained using tf-idf features derived from both motion titles and texts, although performance using the texts only is comparable. For most combinations of feature input and similarity measurement method, F1 scores are around twice as good at the domain level as at the policy level. Figure 1 : F1 macro scores for unigram overlap and cosine similarity matching at the policy and domain levels using textual features from whole motions. Use of cosine similarity leads to markedly better performance than unigram overlap, and the best performance is achieved using features derived from both the titles and motion texts at policy and domain levels.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motions: Similarity Matching",
"sec_num": null
},
{
"text": "We tested the supervised pipelines at the quasisentence level and at the two levels of class label granularity (policy and domain), which allows us to compare the results with previous work on the Manifesto Project (e.g., Zirn et al. (2016) ). As can be seen in Table 6 , the use of machine learning methods generally (but not always) leads to a substantial improvement (especially for Micro F1), in comparison to the heuristics that we have discussed above. Concerning the SVM and CNN baselines, training the classifiers on the large collection of annotated manifestos and then applying them to the motions does not lead to improvements in comparison to the performance of the same architectures on the motions alone. Similarly, we notice that in most cases the use of in-domain embeddings does not improve the results. These two findings might be due to the fact that the style of communication and vocabulary of the employed resources are very different. The size of the training data may also play a role, as can be noticed in particular with the weak performances of the CNNs, especially in comparison to more traditional approaches; in the next section, we return to this issue.",
"cite_spans": [
{
"start": 222,
"end": 240,
"text": "Zirn et al. (2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 262,
"end": 269,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Motions: Quasi-sentence Classification",
"sec_num": null
},
{
"text": "Finally, to further confirm the large potential of BERT, even in tasks which involve many labels, a lack of training data, and a very specific style of communication, we have obtained a clear improvement over all other systems when employing this state-of-the-art architecture, trained on manifesto quasi-sentences and further fine-tuned on motions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motions: Quasi-sentence Classification",
"sec_num": null
},
{
"text": "As a final comparison of the presented systems for quasi-sentence classification, we report their performance on the corpus of 16k manifesto quasisentences, again with an 80/20 train-test split. The results (see Table 7 ) are consistent with the performance of supervised pipelines on the Manifesto Corpus presented in previous literature (Glava\u0161 et al., 2017a; Subramanian et al., 2018; Zirn et al., 2016) and in line with the performances we obtained on the motion corpus in Table 6 .",
"cite_spans": [
{
"start": 339,
"end": 361,
"text": "(Glava\u0161 et al., 2017a;",
"ref_id": "BIBREF8"
},
{
"start": 362,
"end": 387,
"text": "Subramanian et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 388,
"end": 406,
"text": "Zirn et al., 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 212,
"end": 219,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 477,
"end": 484,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Manifestos: Quasi-sentence Classification",
"sec_num": null
},
{
"text": "Interestingly, we once again notice the weak performances of the CNNs on the collection, even with ten times as much training data. This could be due to a necessity to extend the architecture (for example, by adding more convolutional layers) rather than a simple lack of training data. Con- versely, traditional SVM baselines offer reasonable results, and we achieve state-of-the-art performances when employing BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manifestos: Quasi-sentence Classification",
"sec_num": null
},
{
"text": "Through this work we have been able to make a number of observations about the validity and reliability of the annotations produced and the difficulty of the tasks of labelling both debate motions and manifestos. In labelling the manifestos, our annotators agreed with each other to roughly the same extent that they agree with the gold labels provided by the Manifesto Project's expert annotators. This level of agreement is also similar to that reported in Mikhaylov et al. (2008) , though not as good as that of MARPOR 12 itself (Lacewell and Werner, 2013) .",
"cite_spans": [
{
"start": 459,
"end": 482,
"text": "Mikhaylov et al. (2008)",
"ref_id": "BIBREF17"
},
{
"start": 546,
"end": 559,
"text": "Werner, 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "8"
},
{
"text": "The task does seem to be transferable to parliamentary debate motions, with our inter-annotator agreement scores comparable on both domains. Although automatic labelling with lexical similarity matching is more succesful at the quasisentence level than at the motion level, the annotators do not seem to find the coarser grained task much easier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "8"
},
{
"text": "Overall, this is a hard task for humans. However, despite the issue of annotation reproducibility, political scientists continue to find these labels useful-as evidenced by Volkens et al. (2015) , who find 230 articles that use this data in the eight journals they examine. With comparable reliabilty (inter-annotator agreement), the labelled motions could prove equally suitable for many automatic analysis applications.",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "Volkens et al. (2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "8"
},
{
"text": "Concerning automation of the labeling process, we can derive three general findings. The first is that a very simple approach-matching debate motions to coded manifestos using cosine similarity measurement-appears to produce potentially useful outputs, particularly at the domain level, with supervised baselines not necessarily offering consistently better results (especially the CNN architectures). The second is that cross-domain applications (from manifestos to motions) seem to necessitate a further fine-tuning step, perhaps due to the very different styles of communication involved. The third is the significant contribution that the use of BERT provides our supervised pipelines, which are able to achieve state-of-theart performance on both the motions and manifesto quasi-sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "8"
},
{
"text": "The generated dataset of topically labelled motions along with the trained BERT+CNN classifier can now pave the way for further work at the intersection of natural language processing and political science, which can benefit from these fine-grained policy position annotations: from analysing the sentiment of the motions to measuring the level of disagreement between members of the same party, and up to full-blown argumentation mining of each debate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "8"
},
{
"text": "https://www.parliament.uk/documents/ rules-of-behaviour.pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.publicwhip.org.uk 3 https://manifestoproject.wzb.eu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.parliament.uk/ site-information/glossary/motion 5 https://hansard.parliament. uk/commons/2017-03-28/debates/ F81005F8-5593-49F8-82F7-7A62CB62394A/ Yemen",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Conservative Party manifesto 2015.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.comparativeagendas.net",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at https://manifestoproject.wzb. eu/down/papers/handbook_2011_version_4. pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Manifesto Research on Political Representation, the research team behind the Manifesto Project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the SFB 884 on the Political Economy of Reforms at the University of Mannheim (projects B6 and C4), funded by the German Research Foundation (DFG). The authors would like to thank Melis Ince, Olga Sokolova, and Stefan Tasic for their diligent work on annotation, and the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Aye' or 'no'? Speech-level sentiment analysis of Hansard UK parliamentary debate transcripts",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Abercrombie",
"suffix": ""
},
{
"first": "Riza",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavin Abercrombie and Riza Batista-Navarro. 2018a. 'Aye' or 'no'? Speech-level sentiment analysis of Hansard UK parliamentary debate transcripts. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sentiment and position-taking analysis of parliamentary debates: A systematic literature review",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Abercrombie",
"suffix": ""
},
{
"first": "Riza",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.04126"
]
},
"num": null,
"urls": [],
"raw_text": "Gavin Abercrombie and Riza Batista-Navarro. 2019. Sentiment and position-taking analysis of parlia- mentary debates: A systematic literature review. arXiv preprint arXiv:1907.04126.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying opinion-topics and polarity of parliamentary debate motions",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Abercrombie",
"suffix": ""
},
{
"first": "Riza Theresa",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "280--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavin Abercrombie and Riza Theresa Batista-Navarro. 2018b. Identifying opinion-topics and polarity of parliamentary debate motions. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 280-285, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mapping policy preferences: Estimates for parties, electors, and governments",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Budge",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hans-Dieter Klingemann",
"suffix": ""
}
],
"year": 1945,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Budge, Hans-Dieter Klingemann, et al. 2001. Map- ping policy preferences: Estimates for parties, elec- tors, and governments, 1945-1998, volume 1. Ox- ford University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Collective document classification with implicit inter-document semantic relationships",
"authors": [
{
"first": "Clint",
"middle": [],
"last": "Burford",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "106--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clint Burford, Steven Bird, and Timothy Baldwin. 2015. Collective document classification with im- plicit inter-document semantic relationships. In Pro- ceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 106-116.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What to do (and not to do) with the Comparative Manifestos Project data",
"authors": [
{
"first": "Kostas",
"middle": [],
"last": "Gemenis",
"suffix": ""
}
],
"year": 2013,
"venue": "Political Studies",
"volume": "61",
"issue": "",
"pages": "3--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kostas Gemenis. 2013. What to do (and not to do) with the Comparative Manifestos Project data. Political Studies, 61:3-23.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised text segmentation using semantic relatedness graphs",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation us- ing semantic relatedness graphs. In Proceedings of the Fifth Joint Conference on Lexical and Computa- tional Semantics (*SEM).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cross-lingual classification of topics in political texts",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Workshop on NLP and Computational Social Science (NLP+CSS)",
"volume": "",
"issue": "",
"pages": "42--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Federico Nanni, and Simone Paolo Ponzetto. 2017a. Cross-lingual classification of top- ics in political texts. In Proceedings of the Second Workshop on NLP and Computational Social Sci- ence (NLP+CSS), pages 42-46.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised cross-lingual scaling of political texts",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "688--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Federico Nanni, and Simone Paolo Ponzetto. 2017b. Unsupervised cross-lingual scal- ing of political texts. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 2, Short Papers, pages 688-693.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transfer topic labeling with domain-specific knowledge base: An analysis of UK House of Commons speeches",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Herzog",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Slava Jankin",
"middle": [],
"last": "Mikhaylov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "1935--2014",
"other_ids": {
"arXiv": [
"arXiv:1806.00793"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Herzog, Peter John, and Slava Jankin Mikhaylov. 2018. Transfer topic labeling with domain-specific knowledge base: An analysis of UK House of Commons speeches 1935-2014. arXiv preprint arXiv:1806.00793.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Manifesto Corpus. version",
"authors": [
{
"first": "Werner",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Pola",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Jirka",
"middle": [],
"last": "Lewandowski",
"suffix": ""
},
{
"first": "Theres",
"middle": [],
"last": "Matthie",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Merz",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Regel",
"suffix": ""
},
{
"first": "Annika",
"middle": [],
"last": "Werner",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2018--2019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Werner Krause, Pola Lehmann, Jirka Lewandowski, Theres Matthie, Nicolas Merz, Sven Regel, and Annika Werner. 2018. Manifesto Corpus. version: 2018-1. Berlin: WZB Berlin Social Science Center.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Coder training: key to enhancing reliability and validity. Mapping Policy Preferences from Texts",
"authors": [
{
"first": "Onawa",
"middle": [
"P"
],
"last": "Lacewell",
"suffix": ""
},
{
"first": "Annika",
"middle": [],
"last": "Werner",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "3",
"issue": "",
"pages": "169--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Onawa P Lacewell and Annika Werner. 2013. Coder training: key to enhancing reliability and validity. Mapping Policy Preferences from Texts, 3:169-194.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The measurement of observer agreement for categorical data",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Landis",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "33",
"issue": "1",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, 33(1):159-174.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting policy positions from political texts using words as data",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Laver",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Benoit",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Garry",
"suffix": ""
}
],
"year": 2003,
"venue": "American Political Science Review",
"volume": "97",
"issue": "2",
"pages": "311--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Laver, Kenneth Benoit, and John Garry. 2003. Extracting policy positions from political texts using words as data. American Political Science Review, 97(2):311-331.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Scaling policy preferences from coded political texts",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Benoit",
"suffix": ""
},
{
"first": "Slava",
"middle": [],
"last": "Mikhaylov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Laver",
"suffix": ""
}
],
"year": 2011,
"venue": "Legislative Studies Quarterly",
"volume": "36",
"issue": "",
"pages": "123--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Lowe, Kenneth Benoit, Slava Mikhaylov, and Michael Laver. 2011. Scaling policy preferences from coded political texts. Legislative Studies Quar- terly, 36(1):123-155.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Topic-based agreement and disagreement in US electoral manifestos",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2938--2944",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Menini, Federico Nanni, Simone Paolo Ponzetto, and Sara Tonelli. 2017. Topic-based agreement and disagreement in US electoral mani- festos. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2938-2944.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Coder reliability and misclassification in Comparative Manifesto Project codings",
"authors": [
{
"first": "Slava",
"middle": [],
"last": "Mikhaylov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Laver",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Benoit",
"suffix": ""
}
],
"year": 2008,
"venue": "the 66th MPSA Annual National Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slava Mikhaylov, Michael Laver, and Kenneth Benoit. 2008. Coder reliability and misclassification in Comparative Manifesto Project codings. In the 66th MPSA Annual National Conference.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Advances in pre-training distributed word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Puhrsch",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Ad- vances in pre-training distributed word representa- tions. In Proceedings of the International Confer- ence on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Political text scaling meets computational semantics",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Stuckenschmidt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.06217"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Nanni, Goran Glava\u0161, Simone Paolo Ponzetto, and Heiner Stuckenschmidt. 2019a. Political text scaling meets computational semantics. arXiv preprint arXiv:1904.06217.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Proceedings of the Joint Conference on Digital Libraries",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 1918,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Nanni, Stefano Menini, Sara Tonelli, and Si- mone Paolo Ponzetto. 2019b. Semantifying the UK Hansard (1918-2018). In Proceedings of the Joint Conference on Digital Libraries.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227-2237.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Position taking in European Parliament speeches",
"authors": [
{
"first": "Sven-Oliver",
"middle": [],
"last": "Proksch",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"B"
],
"last": "Slapin",
"suffix": ""
}
],
"year": 2010,
"venue": "British Journal of Political Science",
"volume": "40",
"issue": "3",
"pages": "587--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven-Oliver Proksch and Jonathan B Slapin. 2010. Position taking in European Parliament speeches. British Journal of Political Science, 40(3):587-611.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The politics of parliamentary debate",
"authors": [
{
"first": "Sven-Oliver",
"middle": [],
"last": "Proksch",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"B"
],
"last": "Slapin",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven-Oliver Proksch and Jonathan B Slapin. 2015. The politics of parliamentary debate. Cambridge Uni- versity Press.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Natural Language Annotation for Machine Learning: A guide to corpus-building for applications",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Amber",
"middle": [],
"last": "Stubbs",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky and Amber Stubbs. 2012. Natu- ral Language Annotation for Machine Learning: A guide to corpus-building for applications. O'Reilly Media, Inc.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hierarchical structured model for fine-to-coarse manifesto text analysis",
"authors": [
{
"first": "Shivashankar",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1964--1974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivashankar Subramanian, Trevor Cohn, and Timothy Baldwin. 2018. Hierarchical structured model for fine-to-coarse manifesto text analysis. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1964-1974.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "327--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceed- ings of the 2006 conference on Empirical Methods in Natural Language Processing, pages 327-335. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Scope, range, and extent of Manifesto Project data usage: A survey of publications in eight high-impact journals",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Volkens",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Ares",
"suffix": ""
},
{
"first": "Radostina",
"middle": [],
"last": "Bratanova",
"suffix": ""
},
{
"first": "Lea",
"middle": [],
"last": "Kaftan",
"suffix": ""
}
],
"year": 2015,
"venue": "Handbook for Data Users and Coders",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Volkens, Cristina Ares, Radostina Bratanova, and Lea Kaftan. 2015. Scope, range, and extent of Manifesto Project data usage: A survey of publica- tions in eight high-impact journals. In Handbook for Data Users and Coders. WZB.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Mapping policy preferences from texts: statistical solutions for manifesto analysts",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Volkens",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Bara",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Budge",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "McDonald",
"suffix": ""
},
{
"first": "Hans-Dieter",
"middle": [],
"last": "Klingemann",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Volkens, Judith Bara, Ian Budge, Michael D McDonald, and Hans-Dieter Klingemann. 2013. Mapping policy preferences from texts: statistical solutions for manifesto analysts, volume 3. OUP Oxford.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Manifesto coding instructions: 4th fully re",
"authors": [
{
"first": "Annika",
"middle": [],
"last": "Werner",
"suffix": ""
},
{
"first": "Onawa",
"middle": [],
"last": "Lacewell",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Volkens",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annika Werner, Onawa Lacewell, and Andrea Volkens. 2011. Manifesto coding instructions: 4th fully re- vised edition.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bayesian optimization of text representations",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2100--2105",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1251"
]
},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Lingpeng Kong, and Noah A. Smith. 2015. Bayesian optimization of text representations. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 2100-2105, Lisbon, Portugal. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Classifying topics and detecting topic shifts in political manifestos",
"authors": [
{
"first": "C\u00e4cilia",
"middle": [],
"last": "Zirn",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eichorts",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Stuckenschmidt",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First International Conference on the Advances in Computational Analysis of Political Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00e4cilia Zirn, Goran Glava\u0161, Federico Nanni, Jason Ei- chorts, and Heiner Stuckenschmidt. 2016. Classify- ing topics and detecting topic shifts in political man- ifestos. In Proceedings of the First International Conference on the Advances in Computational Anal- ysis of Political Text (PolText).",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "The number of quasi-sentences (QSs) coded under each domain in the UK manifestos that we use as reference texts and training data and the number of debate motions and quasi-sentences that we label under each domain in the motion policy preference corpus.",
"num": null,
"content": "<table><tr><td>Party</td><td>Year(s)</td><td>QSs</td></tr><tr><td>Conservative</td><td/><td>1589</td></tr><tr><td>DUP</td><td/><td>229</td></tr><tr><td>Green Party</td><td/><td>2235</td></tr><tr><td>Labour</td><td>2001, 2015</td><td>2503</td></tr><tr><td>Liberal Democrats</td><td>1997, 2015</td><td>2759</td></tr><tr><td>Plaid Cymru</td><td/><td>776</td></tr><tr><td>SDLP</td><td/><td>407</td></tr><tr><td>Sinn F\u00e9in</td><td/><td>272</td></tr><tr><td>SNP</td><td colspan=\"2\">1997, 2001, 2015 2309</td></tr><tr><td>UKIP</td><td/><td>1349</td></tr><tr><td>UUP</td><td/><td>417</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "Annotator agreement (Fleiss's kappa) at two levels of granularity during three iterations of training and development of annotation guidelines for labelling debate motions with codes from the Manifesto Project.",
"num": null,
"content": "<table><tr><td/><td>Annotators</td><td>No.</td><td>k</td></tr><tr><td>Motion</td><td>All 3</td><td colspan=\"2\">41 0.46</td></tr><tr><td>QS</td><td>All 3</td><td colspan=\"2\">180 0.46</td></tr><tr><td/><td>1 &amp; 2</td><td colspan=\"2\">139 0.51</td></tr><tr><td>Motion</td><td>2 &amp; 3 1 &amp; 3</td><td colspan=\"2\">155 0.50 169 0.49</td></tr><tr><td/><td>All pairs</td><td colspan=\"2\">463 0.50</td></tr><tr><td/><td>1 &amp; 2</td><td colspan=\"2\">622 0.58</td></tr><tr><td>QS</td><td>2 &amp; 3 1 &amp; 3</td><td colspan=\"2\">650 0.51 731 0.62</td></tr><tr><td/><td>All pairs</td><td colspan=\"2\">2003 0.58</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "Fleiss' kappa scores for three-way agreement and Cohen's kappa scores for two-way agreement on the debate motions dataset.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF7": {
"text": "",
"num": null,
"content": "<table><tr><td>: Statistics for the motion policy preference cor-</td></tr><tr><td>pus. Gold standard examples have been labelled by two</td></tr><tr><td>or three annotators initially and adjudicated on in a fi-</td></tr><tr><td>nal round of annotation. Silver standard examples have</td></tr><tr><td>been labelled by a single annotator only.</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF9": {
"text": "F1 scores for similarity matching and classification of debate motions at the quasi-sentence level.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF11": {
"text": "F1 scores for classification of party political manifestos at the quasi-sentence level.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}