ACL-OCL / Base_JSON /prefixR /json /R19 /R19-1006.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "R19-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:02:49.311170Z"
},
"title": "Multilingual Sentence-Level Bias Detection in Wikipedia",
"authors": [
{
"first": "Desislava",
"middle": [],
"last": "Aleksandrova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Montr\u00e9al",
"location": {}
},
"email": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Lareau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Montr\u00e9al",
"location": {}
},
"email": ""
},
{
"first": "Pierre-Andr\u00e9",
"middle": [],
"last": "M\u00e9nard",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a multilingual method for the extraction of biased sentences from Wikipedia, and use it to create corpora in Bulgarian, French and English. Sifting through the revision history of the articles that at some point had been considered biased and later corrected, we retrieve the last tagged and the first untagged revisions as the before/after snapshots of what was deemed a violation of Wikipedia's neutral point of view policy. We extract the sentences that were removed or rewritten in that edit. The approach yields sufficient data even in the case of relatively small Wikipedias, such as the Bulgarian one, where 62k articles produced 5k biased sentences. We evaluate our method by manually annotating 520 sentences for Bulgarian and French, and 744 for English. We assess the level of noise and analyze its sources. Finally, we exploit the data with well-known classification methods to detect biased sentences. Code and datasets are hosted at https://github.com/ crim-ca/wiki-bias.",
"pdf_parse": {
"paper_id": "R19-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a multilingual method for the extraction of biased sentences from Wikipedia, and use it to create corpora in Bulgarian, French and English. Sifting through the revision history of the articles that at some point had been considered biased and later corrected, we retrieve the last tagged and the first untagged revisions as the before/after snapshots of what was deemed a violation of Wikipedia's neutral point of view policy. We extract the sentences that were removed or rewritten in that edit. The approach yields sufficient data even in the case of relatively small Wikipedias, such as the Bulgarian one, where 62k articles produced 5k biased sentences. We evaluate our method by manually annotating 520 sentences for Bulgarian and French, and 744 for English. We assess the level of noise and analyze its sources. Finally, we exploit the data with well-known classification methods to detect biased sentences. Code and datasets are hosted at https://github.com/ crim-ca/wiki-bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Our goal is to automatically detect neutral point of view (NPOV) violations at the sentence level with a procedure replicable in multiple languages. Sentence-level bias detection is a type of sentiment analysis, closely related to subjectivity detection (Riloff and Wiebe, 2003; Wiebe and Riloff, 2005; Wilson and Raaijmakers, 2008; Murray and Carenini, 2009; Lin et al., 2011; Al Khatib et al., 2012) , where an opinion is considered subjective, and a fact, objective. Yet, as far as bias in writing is concerned, both subjective opinions and objective fact reporting (cf. \u00a75) may, in some cases, be sources of impartiality. The importance of the context is one of the main difficulties in detecting bias at the sentence level. Some types of point-of-view bias are equally challenging for humans to detect. Partisanship in editorials, for example, tends to go unnoticed when in line with the reader's own ideas and beliefs (Yano et al., 2010) . A further complication arises from the ambiguity of the term bias, which stands for a lack of fairness or neutrality in realms as varied as human cognition (Tversky and Kahneman, 1974) , society (Ross et al., 1977) , media (Entman, 2007) , internet (Baeza-Yates, 2018; Pitoura et al., 2018) or statistical models and algorithms (O'Neil, 2016; Shadowen, 2019) , to name a few. With so many different types of bias and their varying definitions, it is not trivial to set the scope of a bias-detection study.",
"cite_spans": [
{
"start": 254,
"end": 278,
"text": "(Riloff and Wiebe, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 279,
"end": 302,
"text": "Wiebe and Riloff, 2005;",
"ref_id": "BIBREF32"
},
{
"start": 303,
"end": 332,
"text": "Wilson and Raaijmakers, 2008;",
"ref_id": "BIBREF33"
},
{
"start": 333,
"end": 359,
"text": "Murray and Carenini, 2009;",
"ref_id": "BIBREF21"
},
{
"start": 360,
"end": 377,
"text": "Lin et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 378,
"end": 401,
"text": "Al Khatib et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 924,
"end": 943,
"text": "(Yano et al., 2010)",
"ref_id": "BIBREF34"
},
{
"start": 1102,
"end": 1130,
"text": "(Tversky and Kahneman, 1974)",
"ref_id": "BIBREF30"
},
{
"start": 1141,
"end": 1160,
"text": "(Ross et al., 1977)",
"ref_id": "BIBREF27"
},
{
"start": 1169,
"end": 1183,
"text": "(Entman, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 1195,
"end": 1214,
"text": "(Baeza-Yates, 2018;",
"ref_id": "BIBREF2"
},
{
"start": 1215,
"end": 1236,
"text": "Pitoura et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 1274,
"end": 1288,
"text": "(O'Neil, 2016;",
"ref_id": "BIBREF22"
},
{
"start": 1289,
"end": 1304,
"text": "Shadowen, 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The majority of the work on this task is performed on news articles (Hirning et al., 2017; Baly et al., 2018; Bellows, 2018) and political blogs (Yano et al., 2010; Iyyer et al., 2014) rather than Wikipedia, because of the relative scarcity of examples an encyclopedia provides. Yet, unlike alternative data sources, Wikipedia comes with a definition of bias outlined in its content policy for neutrality of point of view (NPOV) . The core guidelines in NPOV are to: (1) avoid stating opinions as facts, (2) avoid stating seriously contested assertions as facts, (3) avoid stating facts as opinions, (4) prefer nonjudgemental language, and (5) indicate the relative prominence of opposing views. In addition, Wikipedia provides lists of bias-inducing words to avoid, 1 such as positively loaded language (puffery) in the form of peacock words (e.g., best, great, iconic); unsupported attributions, or weasel words (e.g., some people say, it is believed, science says); uncertainty markers, known as hedges (e.g., very, much, a bit, often, approximately), editorializing (e.g., without a doubt, arguably, however) and more. When an article is considered biased, an editor can flag it by adding a tag such as {{POV}} to its source, which displays a disputed neutrality warning banner on the page. These explicit guidelines (and the editors who apply them) help reduce biased language in Wikipedia over time through a continuous process of collaborative content revision (Pavalanathan et al., 2018) . Still, new instances of bias are introduced just as often as old ones are overlooked because of humans' inherent difficulty with subtle expressions of point-of-view partiality. Recasens et al. (2013) showed that when presented with a biased sentence from Wikipedia, annotators manage to correctly identify the loaded word in only 37% of the cases.",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "(Hirning et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 91,
"end": 109,
"text": "Baly et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 110,
"end": 124,
"text": "Bellows, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 145,
"end": 164,
"text": "(Yano et al., 2010;",
"ref_id": "BIBREF34"
},
{
"start": 165,
"end": 184,
"text": "Iyyer et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 422,
"end": 428,
"text": "(NPOV)",
"ref_id": null
},
{
"start": 1468,
"end": 1495,
"text": "(Pavalanathan et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 1675,
"end": 1697,
"text": "Recasens et al. (2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bias detection approaches vary primarily in terms of corpora, vectorization methods, and classification algorithms. We present a review of the related literature along this division.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Among those who tackle NPOV violations in Wikipedia, some rely on available datasets (Vincze, 2013) , others perform manual annotation (Hube and Fetahu, 2018; Ganter and Strube, 2009; Herzig et al., 2011; Al Khatib et al., 2012) , still others attempt to automatically extract labeled examples (Ganter and Strube, 2009; Recasens et al., 2013; Hube and Fetahu, 2018) . Our approach is in line with the latter.",
"cite_spans": [
{
"start": 85,
"end": 99,
"text": "(Vincze, 2013)",
"ref_id": "BIBREF31"
},
{
"start": 135,
"end": 158,
"text": "(Hube and Fetahu, 2018;",
"ref_id": "BIBREF15"
},
{
"start": 159,
"end": 183,
"text": "Ganter and Strube, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 184,
"end": 204,
"text": "Herzig et al., 2011;",
"ref_id": "BIBREF11"
},
{
"start": 205,
"end": 228,
"text": "Al Khatib et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 294,
"end": 319,
"text": "(Ganter and Strube, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 320,
"end": 342,
"text": "Recasens et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 343,
"end": 365,
"text": "Hube and Fetahu, 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "2.1"
},
{
"text": "Using existing corpora, while being the cheapest method, predetermines which types of bias will be explored and in which languages. Vincze (2013) uses WikiWeasel, the Wikipedia subset of the CoNLL-2010 Shared Task corpora (Farkas et al., 2010) to study discourse-level uncertainty by manually annotating linguistic cues for three overt manifestations of bias: weasel, hedge and peacock words. Ganter and Strube (2009) focus on detecting hedges in a corpus of 1000 extracted sentences tagged with {{weasel}}, Bhosale et al. (2013) try to detect promotional content, while Kuang and Davison (2016) train their model on the English corpus of Recasens et al. (2013) .",
"cite_spans": [
{
"start": 132,
"end": 145,
"text": "Vincze (2013)",
"ref_id": "BIBREF31"
},
{
"start": 222,
"end": 243,
"text": "(Farkas et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 393,
"end": 417,
"text": "Ganter and Strube (2009)",
"ref_id": "BIBREF8"
},
{
"start": 508,
"end": 529,
"text": "Bhosale et al. (2013)",
"ref_id": "BIBREF5"
},
{
"start": 571,
"end": 595,
"text": "Kuang and Davison (2016)",
"ref_id": "BIBREF18"
},
{
"start": 639,
"end": 661,
"text": "Recasens et al. (2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "2.1"
},
{
"text": "Manual annotation ensures higher quality but is too costly for large multilingual datasets. Hube and Fetahu (2018) learn to detect bias in Wikipedia on a manually annotated corpus of sentences from the inherently biased Conservapedia, with a precision of 0.74. When tested on an unlabeled dataset extracted from Wikipedia however, the classifier obtains a precision of 0.66 for the sentences classified with a certainty over 0.8. Recasens et al. (2013) first propose a heuristic to automatically build a labeled corpus with biased sentences. Out of all revisions of NPOV-tagged articles, they identify the bias-driven edits based on the comments the editors left at commit. Although reliable, this method yields a fairly small set of examples for English (2,235 sentences) and none for smaller Wikipedias, first because of its dependence on revision comments (which are optional), and second, because it limits the examples to bias-driven edits containing five or fewer words.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "Hube and Fetahu (2018)",
"ref_id": "BIBREF15"
},
{
"start": 430,
"end": 452,
"text": "Recasens et al. (2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "2.1"
},
{
"text": "As for data vectorization, previous work on bias detection relies either on features from pre-trained language models, custom feature-engineering or both. Bellows (2018) finds no significant difference in performance for classifiers trained on Word2vec, GloVe, or fastText representations. Several studies (Recasens et al., 2013; Ganter and Strube, 2009; Hube and Fetahu, 2018) employ multiple lexical, contextual, and linguistic features which, while boosting performance, remain dependant on handcrafted word lists, specialized lexical resources such as SentiWordNet (Baccianella et al., 2010) , subjClue (Gitari et al., 2015), etc., and grammatical parsers that often cover only English. Yano et al. (2010) combine word vector representations from GloVe (as semantic features), 32 boolean lexicon-based features from Recasens et al. (2013) and document vector representations (as contextual features) to distinguish between different uses of the same word. They find that when training a logistic regression classifier, the semantic features alone perform better than both the contextual and the combination of the two.",
"cite_spans": [
{
"start": 155,
"end": 169,
"text": "Bellows (2018)",
"ref_id": "BIBREF4"
},
{
"start": 306,
"end": 329,
"text": "(Recasens et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 330,
"end": 354,
"text": "Ganter and Strube, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 355,
"end": 377,
"text": "Hube and Fetahu, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 569,
"end": 595,
"text": "(Baccianella et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 691,
"end": 709,
"text": "Yano et al. (2010)",
"ref_id": "BIBREF34"
},
{
"start": 820,
"end": 842,
"text": "Recasens et al. (2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vectorization",
"sec_num": "2.2"
},
{
"text": "Also performing bias classification at the sentence level, Vincze (2013) detects sentences containing weasel, hedge or peacock words from the Wiki-Weasel corpus with a precision of 0.74, recall of 0.69 and F 1 of 0.71, by using a dictionary lookup approach. Bellows (2018) reports an accuracy of 0.68 on a corpus of 2,143 biased sentences from news articles, vectorized using tf-idf and classified with a Mutlinomial Naive Bayes, and an accuracy of 0.77 for a CNN and 0.78 with a RNN. Finally, Hube and Fetahu (2018) achieve an F 1 measure of 0.70 using Random Forest on 686 manually annotated sentences from Conservapedia.",
"cite_spans": [
{
"start": 258,
"end": 272,
"text": "Bellows (2018)",
"ref_id": "BIBREF4"
},
{
"start": 494,
"end": 516,
"text": "Hube and Fetahu (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Algorithms",
"sec_num": "2.3"
},
{
"text": "We propose a procedure to semi-automatically derive a labeled corpus of biased sentences from a Wikipedia dump in any language, which, for this paper, we applied to the April 2019 dumps 2 for Bulgarian, French and English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},
{
"text": "First, we manually compile a list of NPOVrelated tags for each of the target languages using the names of relevant Wikipedia maintenance templates 3 ({{POV}}, {{NPOV}}, {{neutral point of view}}, {{peacock}}, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagset Curation",
"sec_num": "3.1"
},
{
"text": "Most tags, however, vary in spelling, not only based on the context (e.g., inline or at the beginning of an article), but also because of the open and collaborative nature of Wikipedia. Table 1 shows the sixteen most frequent \"weasel\" tag variations, only five of which (in bold) are documented on Wikipedia. While the official tag is the most frequently used, the unofficial variations account for almost 35% of the most frequent ways to tag a page containing weasel words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagset Curation",
"sec_num": "3.1"
},
{
"text": "While it may be effortless for human editors to interpret the meaning of these variations, it is not trivial to automatically identify all NPOVrelated ones. Simply extracting all the tags starting with the official form of \"weasel\" yields unrelated tags such as \"weasel, back-striped\" (an animal) or \"weasel, ben\" (a punk singer). For that reason, we automatically compiled exhaustive tag frequency lists in each language, and then manually selected the relevant variations of each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagset Curation",
"sec_num": "3.1"
},
{
"text": "We look for occurrences of the selected tags across all revisions of each page, going forward from the oldest one. When a biased revision is found, we follow its evolution until the POV tag disappears, at which point we assume the problematic content has been either rewritten or edited out. Next, Table 1 : \"Weasel\" tag variation in English we extract the tag together with the pair of adjacent revisions, where the older one is tagged as biased and the newer one is not. We opted for this diachronic retrieval method, rather than relying on the repertoire of articles in Wikipedia's \"NPOV dispute\" section (Herzig et al., 2011; Recasens et al., 2013 ) since the latter only features currently tagged articles, while our method digs NPOV violations from revision histories.",
"cite_spans": [
{
"start": 608,
"end": 629,
"text": "(Herzig et al., 2011;",
"ref_id": "BIBREF11"
},
{
"start": 630,
"end": 651,
"text": "Recasens et al., 2013",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Revision Extraction",
"sec_num": "3.2"
},
{
"text": "Each of these revision pairs undergoes a cleaning process using regular expressions to strip as much of the Wikipedia markup, links, and page references as possible, while preserving visible text and essential punctuation. At this point, we proceed to tokenize the text and split it into sentences using the rule-based tokenizer and sentencizer methods of spaCy (Honnibal and Montani, 2017) , whose 2.1.3 version supports 51 languages.",
"cite_spans": [
{
"start": 362,
"end": 390,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Processing and Filtering",
"sec_num": "3.3"
},
{
"text": "Finally, we replace all numbers with a special token (numtkn), strip all remaining punctuation, and convert everything to lowercase. Our algorithm also extracts revision pairs where the second member was the subject of a redirect or vandalism, which we filter out. We then compare the revisions to obtain the lists of deleted and inserted sentences for each pair. In about 20% of the cases, the difference consists in simply deleting the NPOV tag, which we believe is an artifact of editorial wars (Sumi et al., 2011; Yasseri et al., 2012) , given the contentiousness of most NPOVflagged topics. Another 20% of the revision differences we set aside are punctuation or case-related.",
"cite_spans": [
{
"start": 498,
"end": 517,
"text": "(Sumi et al., 2011;",
"ref_id": "BIBREF29"
},
{
"start": 518,
"end": 539,
"text": "Yasseri et al., 2012)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Processing and Filtering",
"sec_num": "3.3"
},
{
"text": "We further clear the dataset from outliers (mostly acts of vandalism) by removing those with more than 400 edited sentences. Finally, we exclude revision pairs with minor differences (character-based Levenshtein distance of 1), which are spelling corrections rather than bias resolution. Table 2 To build the final corpora, we take all removed and added sentences (under 300 tokens) from the pre-filtered revisions for the positive and negative classes respectively. We balance the dataset by using unchanged sentences (also treated as negatives), as shown in Table 3 ",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 560,
"end": 567,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Processing and Filtering",
"sec_num": "3.3"
},
{
"text": "Once we have collected the tagged/untagged revision pairs for each language (as per \u00a73.2), we evaluate their potential for automatic bias detection. Our intuition is that the sentences that were removed together with the NPOV tag in the same edit likely contain some form of bias. Insertions, on the other hand, come with little guarantee of neutrality, so we focus on the removed sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Evaluation",
"sec_num": "4"
},
{
"text": "For each language, we distribute the tagged/ untagged revision pairs into four bins, based on the number of sentences that were removed in the edit (bin 1: 1 or 2 sentences removed, bin 2: 3-6, bin 3: 7-15, bin 4: 16 or more; these values were determined empirically to yield balanced bins in terms of revision pairs). Each annotator labeled 296 randomly picked sentences for a given language, distributed equally across the four bins. 72 of these sentences (24%) were shared by all annotators working on the same language, while the remaining 224 were labeled by a single annotator (cf. The annotators were given identical instructions. For each sentence in their sample, they had to say whether it violated any of the NPOV principles stated in \u00a71. The annotators were always presented with the full revision pair, so they had access to the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Protocol",
"sec_num": "4.1"
},
{
"text": "Since we had three annotators for English, we used Fleiss' \u03ba to measure IAA. Tables 5 and 6 give the rate of positive annotations and IAA per language and per bin. On average, across all languages and bins, the annotators found 48% of positives in their samples, with an overall IAA of 0.41. Leaving out BG bin 4 (the only one with a negative \u03ba), we get an average positive rate of 47% (std = 0.08) and an average \u03ba value of 0.46 (std = 0.14). Our IAA coefficients are consistent with Vincze (2013), who had 200 English Wikipedia articles annotated by two linguists for weasel, peacock and hedge words, with IAA rates of 0.48, 0.45 and 0.46, respectively, and higher than the 0.35 reported by Hube and Fetahu (2018) Table 6 : Inter-annotator agreement (Fleiss' \u03ba)",
"cite_spans": [
{
"start": 693,
"end": 715,
"text": "Hube and Fetahu (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 77,
"end": 91,
"text": "Tables 5 and 6",
"ref_id": null
},
{
"start": 716,
"end": 723,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset Evaluation Results",
"sec_num": "4.2"
},
{
"text": "About half of the annotated sentences turn out to be neutral. Below, we discuss the sources of the noise we have observed in our dataset (including the added sentences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Evaluation Results",
"sec_num": "4.2"
},
{
"text": "We identified two types of noise: pipeline-related and human-related. Pipeline-related noise is either noise introduced at the pre-processing phase (e.g., due to inconsistent sentence segmentation) or noise that remains despite our filtering and cleaning efforts (e.g., NPOV-unrelated edits longer than one character, differences resulting from the introduction of an infobox, differences consisting in changing the spelling of numbers).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Noise",
"sec_num": "4.3"
},
{
"text": "Human editor-related noise comes from the data itself and stems from the behaviour of Wikipedia's editors. It includes edits which introduce bias (often intentionally, as in (1) below), vandalism, corrections of factual mistakes unrelated to bias, replacing bias with another bias (cf. (2)), and collateral edits, i.e., neutral sentences neighbouring biased ones indirectly targeted by a large-scope edit (cf. (3)). Below are some examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Noise",
"sec_num": "4.3"
},
{
"text": "(1) a. (before) cardinal health inc is a holding company b. (after) cardinal health is a healthcare company dedicated to making healthcare safer and more productive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Noise",
"sec_num": "4.3"
},
{
"text": "(2) a. (before) its support is low only in the cholla province which has for nearly numtkn years supported kim dae jung a well known leftist politician born in that province who also served as president Active voice may be used in cases like (10) to stress the agency of a participant in a situation, alongside a positively loaded support verb. To state a fact as an opinion is to use a weasel word to undermine the fact (11) or hide its source. While previous research shows the success of word-lists in detecting this particular type of bias (Recasens et al., 2013; Ganter and Strube, 2009) , Vincze (2013) warns against the ambiguity of many of them. For example, most can be a weasel word (Most agree that...), a hedge (most of his time), a peacock (the most touristic beach) or neutral (He did the most he could.) To state an opinion as a fact may be done with the use of an adverb (12) or an omission (13). Intentional vagueness or the omission of factual information 14, is arguably the hardest type of bias expression to detect not only for machines, which are expected to recognize the lack of data as an informative feature, but also for humans, since filling factual gaps requires a fair amount of domain-specific knowledge. The goal of the experiments is to assess the usefulness of the dataset in a sentence classification task. Our hypothesis is that having similar examples in both the biased and non-biased classes would help to single out discriminative words targeted by the NPOV-related edits.",
"cite_spans": [
{
"start": 544,
"end": 567,
"text": "(Recasens et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 568,
"end": 592,
"text": "Ganter and Strube, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 595,
"end": 608,
"text": "Vincze (2013)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Noise",
"sec_num": "4.3"
},
{
"text": "Each dataset was split into a training set (80%), a development set (10%) on which we tuned the parameters, and a test set (10%) on which we ran a single evaluation with the best parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Noise",
"sec_num": "4.3"
},
{
"text": "We used fastText's classification function (Joulin et al., 2017) , which implements a multinomial logistic regression algorithm on top of pretrained word embeddings. It uses word and character level embeddings to predict the class value of an instance. The parameter optimization was done by altering values for epoch (5, 10, 25) , learning rate (0.1, 0.01, 0.05), word n-grams (1 to 5), minimum count (1-5), embedding dimensions (100, 300), loss function (softmax, ns, hs), minimum character level n-gram size (2, 3), using pretrained vectors or not, and learning rate update rate (50, 100).",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "(Joulin et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 318,
"end": 321,
"text": "(5,",
"ref_id": null
},
{
"start": 322,
"end": 325,
"text": "10,",
"ref_id": null
},
{
"start": 326,
"end": 329,
"text": "25)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "6.1"
},
{
"text": "When applying fastText's pretrained vectors, 5 we obtained comparable results for English and French without any significant gain, and with lower performance on Bulgarian. Thus, the final model chosen for its overall best performance across all three languages was trained without the use of an additional language model. The best performing values were then tried out on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "6.1"
},
{
"text": "We also experimented with classic bag-of-word vectorization with the stochastic gradient descent (SGD) (LeCun et al., 1998) and logistic regression (Hosmer and Lemeshow, 2000) algorithms. Each algorithm was run with the same settings on all three datasets to get the best average overall performances for precision, recall and F 1 measure. Parameter optimization was done using a grid search. Stop word lists were used for each language, which is the only language-specific aspect of the experiment.",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "(LeCun et al., 1998)",
"ref_id": "BIBREF19"
},
{
"start": 148,
"end": 175,
"text": "(Hosmer and Lemeshow, 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "The optimization for SGD ran 72 permutations with the following parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "\u2022 Bag-of-word n-gram size: unigrams only, unigrams and bigrams, unigrams to trigrams. \u2022 Bag-of-word size: 100, 150, 300, 500, 1,000 and 3,000. \u2022 Use idf reweighting or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "\u2022 \u03b1 value: 0.01, 0.001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "All the other parameters were set to their default values. 6 For logistic regression, 504 permutations were tested using the following settings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "\u2022 Same BOW n-gram size and BOW size and value type as SGD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "\u2022 C: 1.0e-3, 1.0e-2, 1.0e-1, 1.0e0, 1.0e+1, 1.0e+2 and 1.0e+3. \u2022 Solver: sag, saga.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "Using the training and development sets to run the grid search optimization on all three languages, the average F 1 measure was used to see which parameter values offered the best average performance across the board. The selected values were then used to run the same algorithm once on each language's training and test sets. Table 7 shows the results for the experiments detailed in \u00a76 for the SGD, fastText and logistic regression (LR) algorithms. For each performance measure, dataset section, algorithm and language, we provide results with respect to the biased class. The highest performance obtained on the test dataset of each language is in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Bag-of-Words Vectorization",
"sec_num": "6.2"
},
{
"text": "For the LR algorithm, the best performances were obtained using a C value of 0.001 with the saga solver using a unigram model of 100 features without inverse document frequency (idf) reweighting. The best parameters for the SGD used a model of unigrams to trigrams, with an \u03b1 of 0.001 and idf reweighting. For fastText, the best performing parameter set used the default values 7 and a minimum of 5 occurrences per token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "Overall, the similar results between the development and test sets for each algorithm confirm that they did not overfit. Furthermore, all three measures have relatively low variance across languages, except for recall with SGD, which is considerably lower for Bulgarian (also impacting F 1 ) than for the other two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "We observe that FastText's vectorization and classification methods deliver higher precision upon larger datasets, but SGD and LR assure a much higher recall regardless of the number of examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "While relatively better, the SGD performance level on the test set leaves room for improvement. This is likely due to the noise level in the sentences labeled as biased, which count many non-biased examples (see \u00a74.2). The results are equally likely affected by the lexical and contextual ambiguity of the biased expressions, as discussed in \u00a75. However, we do observe comparable best performance On the test set, our best overall average F 1 measure ranged between 0.56 and 0.62. This is lower than Vincze (2013)'s 0.71 or Hube and Fetahu (2018)'s 0.70, but our approach uses a large corpus, automatically derived from Wikipedia in any language with minimal language-specific input, applied to sentence-level bias detection, while Vincze (2013) used a monolingual, dictionarybased approach, and Hube and Fetahu (2018) relied on language-specific resources to extract multiple lexical and grammatical features. Our results set the baseline for sentence-level bias detection across the three languages of this corpus. Higher performance for a specific language may be achieved by a reconfiguration of the parameters or by the introduction of additional features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "We presented a semi-automatic method to extract biased sentences from Wikipedia in Bulgarian, French and English. As this method does not rely on language-specific features, apart from the NPOV tag list and a stop word list, it can be easily applied to Wikipedia archives in other languages. It relies on the tags added by human editors in the articles that they considered biased. We retrieve the last tagged revision and the untagged revision following it and regard them respectively as biased and unbiased. By comparing the revisions, we get the lists of removed and added sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "We manually annotated 1,784 of the removed sentences, for all three languages combined, and found that only about half of them were actually biased. An average Fleiss' \u03ba of 0.41 (0.46 if ignoring an outlier), consistent with the literature, indicates that the task is not trivial even for humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Using our corpora, we tested three classification algorithms: bag-of-word vectorization with SGD, fastText, and logistic regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In future work, we would like to improve the quality of the dataset by addressing issues uncovered during the human evaluation, such as incoherent sentence segmentation, enumerations, minor edits and remaining noise. Another conceivable optimization is to segment the dataset into two or more subsets according to the main forms of bias expression (e.g., explicit vs implicit). It would allow to explore and evaluate different forms of bias separately, which in turn might motivate differential classification techniques. Finally, populating the negative examples class with sentences from Wikipedia's Featured Articles (in line with Bhosale et al. 2013) might help reduce class ambiguity by reinforcing the contrast between neutral encyclopedic tone and expressions of bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "https://en.wikipedia.org/wiki/ Wikipedia:Manual_of_Style/Words_to_watch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dumps.wikimedia.org 3 For English, see https://en.wikipedia.org/ wiki/Category:Neutrality_templates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Examples are taken from the English evaluation subsets, where sentences are in lowercase, stripped of punctuation and numbers are replaced by numtkn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available for 157 languages, pretrained on Common Crawl and Wikipedia(Grave et al., 2018) https:// fasttext.cc/docs/en/crawl-vectors.html 6 Version 0.21.2 of the sklearn toolkit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For version 0.8.3 of https://github.com/ facebookresearch/fastText",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the Minist\u00e8re de l'\u00c9conomie et de l'Innovation du Qu\u00e9bec (MEI). We would like to thank the annotators for their help with the quality evaluation process and the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic detection of point of view differences in Wikipedia",
"authors": [
{
"first": "Al",
"middle": [],
"last": "Khalid",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Khatib",
"suffix": ""
},
{
"first": "Cathleen",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kantner",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012. The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "33--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalid Al Khatib, Hinrich Sch\u00fctze, and Cathleen Kant- ner. 2012. Automatic detection of point of view dif- ferences in Wikipedia. In Proceedings of COLING 2012. The COLING 2012 Organizing Committee, Mumbai, India, pages 33-50.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. SentiWordNet 3.0: An enhanced lexi- cal resource for sentiment analysis and opinion min- ing. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10). European Languages Resources Asso- ciation (ELRA), Valletta, Malta.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bias on the web",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
}
],
"year": 2018,
"venue": "Communications of the ACM",
"volume": "61",
"issue": "6",
"pages": "54--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Baeza-Yates. 2018. Bias on the web. Commu- nications of the ACM 61(6):54-61.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Predicting factuality of reporting and bias of news media sources",
"authors": [
{
"first": "Ramy",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Alexandrov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramy Baly, Georgi Karadzhov, Dimitar Alexandrov, James Glass, and Preslav Nakov. 2018. Predict- ing factuality of reporting and bias of news media sources. EMNLP-2018 .",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Exploration of Classifying Sentence Bias in News Articles with Machine Learning Models",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Bellows",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Bellows. 2018. Exploration of Classifying Sen- tence Bias in News Articles with Machine Learning Models. Ph.D. thesis, University of Rhode Island.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Detecting promotional content in wikipedia",
"authors": [
{
"first": "Shruti",
"middle": [],
"last": "Bhosale",
"suffix": ""
},
{
"first": "Heath",
"middle": [],
"last": "Vinicombe",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1851--1857",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shruti Bhosale, Heath Vinicombe, and Raymond Mooney. 2013. Detecting promotional content in wikipedia. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Pro- cessing. pages 1851-1857.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Framing bias: Media in the distribution of power",
"authors": [
{
"first": "",
"middle": [],
"last": "Robert M Entman",
"suffix": ""
}
],
"year": 2007,
"venue": "J. Commun",
"volume": "57",
"issue": "1",
"pages": "163--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert M Entman. 2007. Framing bias: Media in the distribution of power. J. Commun. 57(1):163-173.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The conll-2010 shared task: learning to detect hedges and their scope in natural language text",
"authors": [
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "M\u00f3ra",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Csirik",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning-Shared Task",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich\u00e1rd Farkas, Veronika Vincze, Gy\u00f6rgy M\u00f3ra, J\u00e1nos Csirik, and Gy\u00f6rgy Szarvas. 2010. The conll-2010 shared task: learning to detect hedges and their scope in natural language text. In Proceedings of the Fourteenth Conference on Computational Nat- ural Language Learning-Shared Task. Association for Computational Linguistics, pages 1-12.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Finding hedges by chasing weasels: Hedge detection using wikipedia tags and shallow linguistic features",
"authors": [
{
"first": "Viola",
"middle": [],
"last": "Ganter",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "173--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viola Ganter and Michael Strube. 2009. Finding hedges by chasing weasels: Hedge detection using wikipedia tags and shallow linguistic features. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers. pages 173-176.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Lexicon-based Approach for Hate Speech Detection",
"authors": [
{
"first": "Njagi",
"middle": [],
"last": "Dennis Gitari",
"suffix": ""
},
{
"first": "Zuping",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hanyurwimfura",
"middle": [],
"last": "Damien",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Long",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Multimedia and Ubiquitous Engineering",
"volume": "10",
"issue": "4",
"pages": "215--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Njagi Dennis Gitari, Zuping Zhang, Hanyurwimfura Damien, and Jun Long. 2015. A Lexicon-based Ap- proach for Hate Speech Detection. International Journal of Multimedia and Ubiquitous Engineering 10(4):215-230.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Re- sources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An annotation scheme for automated bias detection in wikipedia",
"authors": [
{
"first": "Livnat",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "Batia",
"middle": [],
"last": "Snir",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Livnat Herzig, Alex Nunes, and Batia Snir. 2011. An annotation scheme for automated bias detection in wikipedia. In Proceedings of the 5th Linguistic An- notation Workshop. Association for Computational Linguistics, Stroudsburg, PA, USA, LAW V '11, pages 47-55.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Detecting and identifying bias-heavy sentences in news articles",
"authors": [
{
"first": "P",
"middle": [],
"last": "Nicholas",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Hirning",
"suffix": ""
},
{
"first": "Shreya",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shankar",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas P Hirning, Andy Chen, and Shreya Shankar. 2017. Detecting and identifying bias-heavy sen- tences in news articles. Technical report, Stanford University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Applied logistic regression",
"authors": [
{
"first": "W",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Hosmer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lemeshow",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David W. Hosmer and Stanley Lemeshow. 2000. Ap- plied logistic regression. John Wiley and Sons.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Detecting biased statements in wikipedia",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Hube",
"suffix": ""
},
{
"first": "Besnik",
"middle": [],
"last": "Fetahu",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion Proceedings of the The Web Conference 2018. International World Wide Web Conferences Steering Committee",
"volume": "",
"issue": "",
"pages": "1779--1786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Hube and Besnik Fetahu. 2018. Detecting biased statements in wikipedia. In Companion Pro- ceedings of the The Web Conference 2018. Interna- tional World Wide Web Conferences Steering Com- mittee, pages 1779-1786.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Political ideology detection using recursive neural networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Enns",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1113--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection us- ing recursive neural networks. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers). vol- ume 1, pages 1113-1122.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, Valen- cia, Spain, pages 427-431.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic and context-aware linguistic model for bias detection",
"authors": [
{
"first": "Sicong",
"middle": [],
"last": "Kuang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brian D Davison",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of the Natural Language Processing meets Journalism IJCAI-16 Workshop",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sicong Kuang and Brian D Davison. 2016. Semantic and context-aware linguistic model for bias detec- tion. In Proc. of the Natural Language Processing meets Journalism IJCAI-16 Workshop. pages 57-62.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient backprop",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B"
],
"last": "Orr",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 1998,
"venue": "Neural Networks: Tricks of the Trade, This Book is an Outgrowth of a 1996 NIPS Workshop",
"volume": "",
"issue": "",
"pages": "9--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, L\u00e9on Bottou, Genevieve B. Orr, and Klaus-Robert M\u00fcller. 1998. Efficient backprop. In Neural Networks: Tricks of the Trade, This Book is an Outgrowth of a 1996 NIPS Workshop. Springer- Verlag, London, UK, UK, pages 9-50.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sentence subjectivity detection with weaklysupervised learning",
"authors": [
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Everson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1153--1161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenghua Lin, Yulan He, and Richard Everson. 2011. Sentence subjectivity detection with weakly- supervised learning. In Proceedings of 5th Interna- tional Joint Conference on Natural Language Pro- cessing. pages 1153-1161.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Detecting subjectivity in multiparty speech",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
}
],
"year": 2009,
"venue": "Tenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Murray and Giuseppe Carenini. 2009. Detect- ing subjectivity in multiparty speech. In Tenth An- nual Conference of the International Speech Com- munication Association.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy",
"authors": [
{
"first": "O'",
"middle": [],
"last": "Cathy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neil",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cathy O'Neil. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mind your POV: Convergence of articles and editors towards wikipedia's neutrality norm",
"authors": [
{
"first": "Umashanthi",
"middle": [],
"last": "Pavalanathan",
"suffix": ""
},
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ACM Hum. -Comput. Interact",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Umashanthi Pavalanathan, Xiaochuang Han, and Ja- cob Eisenstein. 2018. Mind your POV: Conver- gence of articles and editors towards wikipedia's neutrality norm. Proc. ACM Hum. -Comput. Inter- act. 2(CSCW):137:1-137:23.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "On measuring bias in online information",
"authors": [
{
"first": "Evaggelia",
"middle": [],
"last": "Pitoura",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Tsaparas",
"suffix": ""
},
{
"first": "Giorgos",
"middle": [],
"last": "Flouris",
"suffix": ""
},
{
"first": "Irini",
"middle": [],
"last": "Fundulaki",
"suffix": ""
},
{
"first": "Panagiotis",
"middle": [],
"last": "Papadakos",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Abiteboul",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM SIG-MOD Record",
"volume": "46",
"issue": "4",
"pages": "16--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evaggelia Pitoura, Panayiotis Tsaparas, Giorgos Flouris, Irini Fundulaki, Panagiotis Papadakos, Serge Abiteboul, and Gerhard Weikum. 2018. On measuring bias in online information. ACM SIG- MOD Record 46(4):16-21.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Linguistic models for analyzing and detecting biased language",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1650--1659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for an- alyzing and detecting biased language. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers). volume 1, pages 1650-1659.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning extraction patterns for subjective expressions",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 conference on Empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Janyce Wiebe. 2003. Learning extrac- tion patterns for subjective expressions. In Proceed- ings of the 2003 conference on Empirical methods in natural language processing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Social roles, social control, and biases in social-perception processes",
"authors": [
{
"first": "Teresa",
"middle": [
"M"
],
"last": "Lee D Ross",
"suffix": ""
},
{
"first": "Julia",
"middle": [
"L"
],
"last": "Amabile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steinmetz",
"suffix": ""
}
],
"year": 1977,
"venue": "J. Pers. Soc. Psychol",
"volume": "35",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee D Ross, Teresa M Amabile, and Julia L Steinmetz. 1977. Social roles, social control, and biases in social-perception processes. J. Pers. Soc. Psychol. 35(7):485.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Ethics and bias in machine learning: A technical study of what makes us \"good",
"authors": [
{
"first": "Nicole",
"middle": [],
"last": "Shadowen",
"suffix": ""
}
],
"year": 2019,
"venue": "The Transhumanism Handbook",
"volume": "",
"issue": "",
"pages": "247--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicole Shadowen. 2019. Ethics and bias in ma- chine learning: A technical study of what makes us \"good\". In Newton Lee, editor, The Transhu- manism Handbook, Springer International Publish- ing, Cham, pages 247-261.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Edit wars in wikipedia",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sumi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yasseri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rung",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kornai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kertesz",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing. ieeexplore.ieee.org",
"volume": "",
"issue": "",
"pages": "724--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Sumi, T Yasseri, A Rung, A Kornai, and J Kertesz. 2011. Edit wars in wikipedia. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Con- ference on Social Computing. ieeexplore.ieee.org, pages 724-727.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Judgment under uncertainty: Heuristics and biases",
"authors": [
{
"first": "A",
"middle": [],
"last": "Tversky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kahneman",
"suffix": ""
}
],
"year": 1974,
"venue": "Science",
"volume": "185",
"issue": "4157",
"pages": "1124--1131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Tversky and D Kahneman. 1974. Judgment un- der uncertainty: Heuristics and biases. Science 185(4157):1124-1131.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Weasels, hedges and peacocks: Discourse-level uncertainty in wikipedia articles",
"authors": [
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "383--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veronika Vincze. 2013. Weasels, hedges and peacocks: Discourse-level uncertainty in wikipedia articles. In Proceedings of the Sixth International Joint Confer- ence on Natural Language Processing. pages 383- 391.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Creating subjective and objective sentence classifiers from unannotated texts",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2005,
"venue": "International conference on intelligent text processing and computational linguistics",
"volume": "",
"issue": "",
"pages": "486--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe and Ellen Riloff. 2005. Creating sub- jective and objective sentence classifiers from unan- notated texts. In International conference on intel- ligent text processing and computational linguistics. Springer, pages 486-497.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Comparing word, character, and phoneme n-grams for subjective utterance recognition",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Raaijmakers",
"suffix": ""
}
],
"year": 2008,
"venue": "Ninth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson and Stephan Raaijmakers. 2008. Com- paring word, character, and phoneme n-grams for subjective utterance recognition. In Ninth Annual Conference of the International Speech Communi- cation Association.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Shedding (a thousand points of) light on biased language",
"authors": [
{
"first": "Tae",
"middle": [],
"last": "Yano",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk",
"volume": "",
"issue": "",
"pages": "152--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tae Yano, Philip Resnik, and Noah A Smith. 2010. Shedding (a thousand points of) light on biased lan- guage. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk. Association for Computational Linguistics, Stroudsburg, PA, USA, CSLDAMT '10, pages 152-158.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Andr\u00e1s Rung, Andr\u00e1s Kornai, and J\u00e1nos Kert\u00e9sz",
"authors": [
{
"first": "Taha",
"middle": [],
"last": "Yasseri",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Sumi",
"suffix": ""
}
],
"year": 2012,
"venue": "PLoS One",
"volume": "7",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taha Yasseri, Robert Sumi, Andr\u00e1s Rung, Andr\u00e1s Kor- nai, and J\u00e1nos Kert\u00e9sz. 2012. Dynamics of conflicts in wikipedia. PLoS One 7(6):e38869.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "10) a. (before) the united states department of justice indicted the company but amway secured an acquittal b. (after) the united states department of justice indicted the company but amway were acquitted"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "a. (before) in the first invasion operation litani in numtkn the israeli military and south lebanon army sla occupied a narrow strip of land ostensibly as a security zone b. (after) in the first operation litani in numtkn the israel defense forces and south lebanon army occupied a narrow strip of land described as the security zone"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "12) a. (before) in fact the need for fast and secure fund transfers is growing and in the next year instant payments will quickly become the new normal for electronic fund transfers b. (after) it is predicted that in the next year instant payments will become the standard for electronic fund transfers (13) a. (before) in numtkn the journal won the praise of fascist leaders b. (after) there are some authors who retain that the journal won the praise of fascist leaders"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "14) a. (before) as of numtkn it is the ethnic minority party in romania with representation in the romanian parliament b. (after) as of numtkn it is the ethnic minority party in romania with representation in the romanian parliament and is part of the governing coalition along with the justice and truth alliance and the conservatives 6 Classification Experiments"
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Revision pairs</td><td>BG</td><td>FR</td><td>EN</td></tr><tr><td>initial number</td><td>1,021</td><td colspan=\"2\">46,331 197,953</td></tr><tr><td>tag removal</td><td colspan=\"3\">-257 -10,255 -61,397</td></tr><tr><td>punct./case</td><td>-194</td><td colspan=\"2\">-5,967 -44,345</td></tr><tr><td>redir./vandalism</td><td>-56</td><td colspan=\"2\">-1,524 -17,154</td></tr><tr><td>deletions only</td><td>-33</td><td colspan=\"2\">-2,740 -11,331</td></tr><tr><td>insertions only</td><td>-28</td><td>-2,819</td><td>-2,938</td></tr><tr><td>spelling</td><td>-3</td><td>-136</td><td>-400</td></tr><tr><td>outliers</td><td>-2</td><td>-153</td><td>-609</td></tr><tr><td>Total pairs</td><td>448</td><td>22,737</td><td>59,779</td></tr></table>",
"num": null,
"text": "gives the number of initial, final and excluded revisions per language.",
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Number of revision pairs per language",
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Sentences</td><td>BG</td><td>FR</td><td>EN</td></tr><tr><td>Removed</td><td colspan=\"2\">4,756 105,939</td><td>800,191</td></tr><tr><td>Added</td><td>3,288</td><td>72,183</td><td>494,993</td></tr><tr><td colspan=\"2\">Unchanged 1,468</td><td>33,756</td><td>305,198</td></tr><tr><td>Total</td><td colspan=\"3\">9,512 211,878 1,600,382</td></tr></table>",
"num": null,
"text": ".",
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Number of sentences per language",
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"4\">), thus allowing us to annotate</td></tr><tr><td colspan=\"6\">more sentences while maintaining enough over-</td></tr><tr><td colspan=\"6\">lap to measure inter-annotator agreement (IAA).</td></tr><tr><td colspan=\"6\">The Bulgarian sample was annotated by two native</td></tr><tr><td colspan=\"6\">speakers, English by three with near-native profi-</td></tr><tr><td colspan=\"4\">ciency, and French by two natives.</td><td/><td/></tr><tr><td colspan=\"6\">Lang All Ann1 Ann2 Ann3 Total</td></tr><tr><td>BG</td><td>72</td><td>224</td><td>224</td><td>-</td><td>520</td></tr><tr><td>FR</td><td>72</td><td>224</td><td>224</td><td>-</td><td>520</td></tr><tr><td>EN</td><td>72</td><td>224</td><td>224</td><td>224</td><td>744</td></tr></table>",
"num": null,
"text": "",
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "",
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Bin BG EN</td><td>FR avg</td><td>std</td></tr><tr><td colspan=\"4\">1 0.34 0.51 0.47 0.44 0.07</td></tr><tr><td colspan=\"4\">2 0.64 0.45 0.45 0.52 0.09</td></tr><tr><td colspan=\"4\">3 0.63 0.45 0.38 0.48 0.11</td></tr><tr><td colspan=\"4\">4 0.63 0.52 0.34 0.50 0.12</td></tr><tr><td colspan=\"4\">avg 0.56 0.48 0.41 0.48 0.06</td></tr><tr><td colspan=\"4\">std 0.13 0.03 0.05 0.03 0.10</td></tr><tr><td/><td colspan=\"2\">Table 5: Positives in annotations</td><td/></tr><tr><td>Bin</td><td>BG EN</td><td>FR avg</td><td>std</td></tr><tr><td>1</td><td colspan=\"3\">0.32 0.55 0.67 0.51 0.15</td></tr><tr><td>2</td><td colspan=\"3\">0.22 0.58 0.44 0.41 0.15</td></tr><tr><td>3</td><td colspan=\"3\">0.32 0.31 0.61 0.41 0.14</td></tr><tr><td colspan=\"4\">4 -0.23 0.39 0.68 0.28 0.38</td></tr><tr><td>avg</td><td colspan=\"3\">0.16 0.46 0.60 0.41 0.18</td></tr><tr><td>std</td><td colspan=\"3\">0.23 0.11 0.10 0.08 0.21</td></tr></table>",
"num": null,
"text": ", who crowdsourced the annotation of sentences from Conservapedia into biased and unbiased. Identifying such phenomena is thus not trivial but reasonable agreement can be expected.",
"html": null
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>which has for nearly numtkn years supported kim dae jung a well known progressive politician born in that province who also served as president of south korea numtkn numtkn (3) a. (before) from the numtkn th century confucianism was losing its influence on vietnamese society monetary economy began to develop but unfortu-nately in negative ways b. (after) from the numtkn th century confucianism was losing its influence on vietnamese society and a monetary economy began to develop 5 Expressions of Bias The manual annotation also highlighted the vari-ety of bias expression. Previously, Recasens et al. (2013) had identified two major classes: episte-mological and framing bias (subjective intensi-fiers and one-sided terms), where they considered the first one to group more implicit expressions such as factive and assertive verbs, entailment and hedges. Based on their work and Wikipedia's Manual of Style, we present biased examples from our corpus 4 and discuss them in terms of the overt/ covert nature of the biased statement, its length (one or more words), and its level of ambiguity. Subjective intensifiers are mostly expressed through single-word verbal and nominal modifiers (adverbs and adjectives) as in (4) and (5), but may also take the form of superlatives or quantifiers. They explicitly undermine tone neutrality by in-troducing overstatements and exaggerations (6). (4) a. (before) some prominent liberals including scott reid were strongly critical of volpe s response b. (after) some prominent liberals including scott reid criticized volpe s response (5) Clich\u00e9s and jargon tend to be non-ambiguous but introduce low-frequency words in the corpus, as a result of being discouraged by Wikipedia. (7) (before) x force was concocted by illustrator rob liefeld who started penciling the new mutants comic book in numtkn Describing or analyzing rather than reporting events is a form of partiality harder to model, as it may not necessarily contain explicitly proscribed vocabulary. (8) (before) he was a former club rugby and an opening batsman in club cricket but did not have the ability to make it all the way to the top level these two sports have become his particular area of expertise however he is very knowledgable on all sports that are played (9) (before) however the most important consequence of the battle was that president lincoln was able to sieze upon the victory claim it as a strategic victory for the north and release his emancipation proclamation</td></tr></table>",
"num": null,
"text": "of south korea numtkn numtkn b. (after) its support is low only in the jeolla province (before) he is truly one of the greatest americans (6) a. (before) this is an absurd statement because the cavalry of any age is designed first and foremost to run over the enemy and separate them as to make them far more vulnerable to being overwhelmed and overrun b. (after) this is wrong because the cavalry of any age is designed first and foremost to run over the enemy and separate them as to make them far more vulnerable to being overwhelmed and overrun",
"html": null
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Precision BG</td><td>0.5387</td><td>0.5886</td><td>0.5324</td><td>0.5330</td><td>0.5182</td><td>0.5032</td></tr><tr><td/><td>FR</td><td>0.5059</td><td>0.5087</td><td>0.5533</td><td>0.5520</td><td>0.5151</td><td>0.5161</td></tr><tr><td/><td>EN</td><td>0.5112</td><td>0.5083</td><td>0.5656</td><td>0.5634</td><td>0.5230</td><td>0.5224</td></tr><tr><td>Recall</td><td>BG</td><td>0.4318</td><td>0.5049</td><td>0.4752</td><td>0.4937</td><td>0.6219</td><td>0.6303</td></tr><tr><td/><td>FR</td><td>0.8877</td><td>0.8363</td><td>0.5724</td><td>0.5721</td><td>0.6751</td><td>0.6739</td></tr><tr><td/><td>EN</td><td>0.8357</td><td>0.8277</td><td>0.5686</td><td>0.5718</td><td>0.5344</td><td>0.5354</td></tr><tr><td>F 1</td><td>BG</td><td>0.4794</td><td>0.5435</td><td>0.5022</td><td>0.5126</td><td>0.5653</td><td>0.5596</td></tr><tr><td/><td>FR</td><td>0.6444</td><td>0.6146</td><td>0.5627</td><td>0.5619</td><td>0.5844</td><td>0.5845</td></tr><tr><td/><td>EN</td><td>0.6334</td><td>0.6291</td><td>0.5671</td><td>0.5676</td><td>0.5286</td><td>0.5288</td></tr></table>",
"num": null,
"text": "Measure Lang. Dev-SGD Test-SGD Dev-fastText Test-fastText Dev-LR Test-LR",
"html": null
},
"TABREF10": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Results for each language, dataset and classification method for the biased class across corpora of varying size and languages from different families.",
"html": null
}
}
}
}