{
"paper_id": "P19-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:27:39.195010Z"
},
"title": "Evidence-based Trustworthiness",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "yizhang5@cis.upenn.edu"
},
{
"first": "Zachary",
"middle": [
"G"
],
"last": "Ives",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "zives@cis.upenn.edu"
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "danroth@cis.upenn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The information revolution brought with it information pollution. Information retrieval and extraction help us cope with abundant information from diverse sources. But some sources are of anonymous authorship, and some are of uncertain accuracy, so how can we determine what we should actually believe? Not all information sources are equally trustworthy, and simply accepting the majority view is often wrong. This paper develops a general framework for estimating the trustworthiness of information sources in an environment where multiple sources provide claims and supporting evidence, and each claim can potentially be produced by multiple sources. We consider two settings: one in which information sources directly assert claims, and a more realistic and challenging one, in which claims are inferred from evidence provided by sources, via (possibly noisy) NLP techniques. Our key contribution is to develop a family of probabilistic models that jointly estimate the trustworthiness of sources, and the credibility of claims they assert. This is done while accounting for the (possibly noisy) NLP needed to infer claims from evidence supplied by sources. We evaluate our framework on several datasets, showing strong results and significant improvement over baselines.",
"pdf_parse": {
"paper_id": "P19-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "The information revolution brought with it information pollution. Information retrieval and extraction help us cope with abundant information from diverse sources. But some sources are of anonymous authorship, and some are of uncertain accuracy, so how can we determine what we should actually believe? Not all information sources are equally trustworthy, and simply accepting the majority view is often wrong. This paper develops a general framework for estimating the trustworthiness of information sources in an environment where multiple sources provide claims and supporting evidence, and each claim can potentially be produced by multiple sources. We consider two settings: one in which information sources directly assert claims, and a more realistic and challenging one, in which claims are inferred from evidence provided by sources, via (possibly noisy) NLP techniques. Our key contribution is to develop a family of probabilistic models that jointly estimate the trustworthiness of sources, and the credibility of claims they assert. This is done while accounting for the (possibly noisy) NLP needed to infer claims from evidence supplied by sources. We evaluate our framework on several datasets, showing strong results and significant improvement over baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The emergence of social networks and news aggregators -combined with ill-informed posts, deliberate efforts to create and spread sensationalized information, and a strongly polarized political environment -makes it very difficult to establish what is really known. Therefore, fact checking seeks to assess whether the claim is true or false, or to provide a confidence level for the claim given textual evidence (Hassan et al., 2017; Wang, 2017; Wang et al., 2018) . A typical fact checking pipeline consists of document retrieval, sentence-level evidence selection, and textual entailment stages (Thorne et al., 2018) . However, this pipeline is local in that it applies to a given claim. The missing step here is to assess the trustworthiness of the sources producing the claims and evidence. This is a global step that, in principle, accounts for all claims made by a source and all sources making a claim.",
"cite_spans": [
{
"start": 412,
"end": 433,
"text": "(Hassan et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 434,
"end": 445,
"text": "Wang, 2017;",
"ref_id": "BIBREF24"
},
{
"start": 446,
"end": 464,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 597,
"end": 618,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work has studied how to estimate the trustworthiness or credibility of information sources for fact-finding (Vydiswaran et al.; Pasternack and Roth, 2013), truth discovery (Dong et al.; Pochampally et al., 2014; Dong et al., 2015; Li et al., 2016) and crowdsourcing (Sabou et al., 2012; Hovy et al., 2013; Gao et al., 2015) . Usually, given a list of conflicting facts, e.g. \"source s asserts claim c\", or \"annotator x labels data item t by label y\", we detect the true claims or correct labels for the data item by resolving conflicts, and then compute the trustworthiness of sources.",
"cite_spans": [
{
"start": 195,
"end": 220,
"text": "Pochampally et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 221,
"end": 239,
"text": "Dong et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 240,
"end": 256,
"text": "Li et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 295,
"text": "(Sabou et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 296,
"end": 314,
"text": "Hovy et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 315,
"end": 332,
"text": "Gao et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, many sources do not directly assert claims, but rather generate articles as evidence, expecting readers to infer claims from this evidence. In practice, given a claim of interest, people may search for related articles from multiple sources and collect evidence for the claim; they can then determine the veracity of the claim by deciding whether the evidence found supports or refutes the claim. However, most existing work that attempted to study trustworthiness of sources assumed that sources make assertions directly. Even when intermediate text was accounted for (Vydiswaran et al.; Nakashole and Mitchell, 2014) , it was assumed that clean evidence and clear connections between evidence and conflicting claims are provided, disregarding the fact that NLP systems attempting to support these tasks are noisy.",
"cite_spans": [
{
"start": 598,
"end": 627,
"text": "Nakashole and Mitchell, 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper considers two situations when eval- Figure 1 : Claim with assertions from multiple sources (from http://www.emergent.info/). Direct assertions specify their stance; indirect assertions provide related articles, and we can leverage (noisy) text entailment tools to collect their stances. We want to assess whether to believe the stance and articles.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "uating the trustworthiness of information sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) the source directly asserts claims, and (2) the source indirectly asserts claims by proposing evidence. The first case is similar to previous work; the second case is more challenging but more important in practice. Both cases are depicted in Figure 1. A multitude of sources is given and each may assert multiple claims or propose multiple pieces of evidence. At the same time, multiple claims are observed, some of which are directly asserted by sources and some are supported by evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goals are to identify true claims and to estimate the trustworthiness of each source. The key challenge is that this global inference task is influenced by the knowledge of which claims are made by which sources; however, establishing links -from evidence generated by a source to claims -requires NLP techniques such as textual entailment (TE) (Dagan et al., 2013) . Such TE tools, which assess whether a given textual evidence (premise) entails a given claim (hypothesis), are often noisy -making the evaluation of sources more difficult.",
"cite_spans": [
{
"start": 349,
"end": 369,
"text": "(Dagan et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The key contributions of this work are as follows: (1) It proposes a probabilistic model, JELTA, which jointly estimates the credibility of claims and the trustworthiness of sources, when claims are made by sources directly, indirectly, or both. (2) Our framework incorporates a TE model as part of the global inference framework as a way to link evidence (and thus, sources) to claims. 3This is the first work to distinguish between direct and indirect assertions made by information sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments on both synthetic and natural datasets show solid results that are significantly better than baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to evaluate the trustworthiness of information sources by detecting the true claims while accounting for noise in the links between claims and evidence for them. While direct assertions are straightforward to deal with (since it is clear which source generates which claim), the challenge is to incorporate \"noisy assertions\" into our problem formulation. We first describe our setting, and then elaborate on the probabilistic modeling. (2) the source provides evidence e k by multiple articles, and the proposed evidence can support or refute claims via some noisy NLP tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trustworthiness Analysis",
"sec_num": "2"
},
{
"text": "We are given a set of claims to validate and a text corpus (pieces of evidence) generated by multiple sources that are believed to have generated the claims. Given the claim text, we issue a set of searches over the corpus, to find evidence in support of the claims. The result is a a set of (noisy) assertions. A (noisy) assertion consists of a claim, a sentence in the corpus, and a label (\"entailment\", \"contradiction\", \"neutral\"). The claim is a real world input we attempt to determine the truth value of. E.g., in Figure 1 , \"Tom Brokaw wants Brian Williams fired\" is such a claim. An assertion, on the other hand, is an artifact of our framework. As we search the corpus generated by the sources for evidence supporting the claim, we identify candidate sentences ('Related Articles\" in the figure) and use a pretrained textual entailment model (e.g. the decomposable attention model (Parikh et al., 2016)) to provide an entailment label and complete the triple (claim, sentence, label). The generation of noisy assertions as described above follows a typical fact-checking pipeline mentioned in Thorne et al. (2018) . Given noisy assertions, Figure 2 illustrates our problem setting. Overall, there are two situations. In the upper part of the figure, we show the case in which information sources make direct assertions: the source directly states that some claims are true or false. The alternative case, indicated in the lower part of the figure, involves the source indirectly asserting claims by making noisy assertions: the source first generates articles that contain sentences, and the sentences may entail or refute related claims. An entailment tool can then be used to assert the claims to be true or false, based on those sentences. A claim can be supported by multiple sources or multiple pieces of evidence from different sources. We now propose our model, JELTA, which handles both cases described above.",
"cite_spans": [
{
"start": 1102,
"end": 1122,
"text": "Thorne et al. (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1149,
"end": 1157,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Noisy Assertions",
"sec_num": "2.1"
},
{
"text": "Our probabilistic model denotes an information source as s \u2208 S, a claim as c \u2208 C, and m as a mutual exclusion set of claims (exactly one of the claims in each mutual exclusion set is true). Here m is a fact to be checked, and c is a statement that m is true or false. w s,m , w s,e and w e,m are binary indicators -respectively telling us if s asserts claims of m, if s provides evidence e, and if evidence e supports claims in m. We denote evidence e \u2208 E, and for each entailment result, we use b s,c and b s,e,c to represent the observed probability that s asserts c and s provides e to assert c respectively. Here, c\u2208m b s,c = 1 and c\u2208m b s,e,c = 1. We summarize our notation in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 682,
"end": 689,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Fundamentals",
"sec_num": "2.2"
},
{
"text": "Our work models a joint distribution that reflects a \"story\" of how sources generate observations. Intuitively, given an estimation of the verdict of the claims and the factors, including the trustworthiness of sources providing claims and evidence, we want to maximize the probability that we can observe the claims and evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "We represent the verdict of a claim m as a latent variable, y m , and associate a parameter H s with each source s, reflecting the probability of s telling the truth, which we use to measure the trustworthiness of s. We now describe how y m and H s are used to compute the probability of observing the claims and evidence. Starting with the probability that source s makes a direct assertion: intuitively, if s asserts a true claim c =\u0109 in m, then the probability that s asserts c is H s , the probability s telling the truth. Otherwise, s chooses uniformly from other claims in m with probability 1\u2212Hs |m|\u22121 . Besides the term H s , we require another (hidden) factor related to s, namely, the probability of s telling the truth as evidence. We denote this as P s , the precision of s generating evidence. Here we allow P s can be different with H s , since providing true evidence for a true claim is more difficult than just providing a true claim. However, considering that those all reflect the trustworthiness of s, we assume they share a similar distribution over sources in our problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "P s can then be represented by two other parameters, R s and Q s (Dong et al., 2015). These represent the true-and false-positive rates of s producing evidence, respectively. We denote \u03b3 as the probability of a claim being true, then P s can be represented by Q s and R s as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P s = \u03b3R s \u03b3R s + (1 \u2212 \u03b3)Q s",
"eq_num": "(1)"
}
],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "We assume that the probability of the claim being true or false is equal. Since H s and P s share similar distributions, H s relates to Q s and R s as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "H s \u223c P s = R s R s + Q s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "(2) Now we discuss how to compute the probability of observing the noisy assertions. Intuitively, when source s wants to assert a claim with the NLP tool (textual entailment model) by proposing evidence: if s wants to support a true claim indirectly, s will recall true evidence with probability R s . This requires the NLP tool to do textual entailment correctly, otherwise s will also uniformly choose false or unrelated evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "This paper considers the simplest way to generate a false claim or false evidence, and the choice may not always follow random sampling in practice. Our prior work Pasternack and Roth (2013) discusses some other options, which could alternatively be used here.",
"cite_spans": [
{
"start": 164,
"end": 190,
"text": "Pasternack and Roth (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "In the remainder of this section, we formally model those processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "Direct Assertion. Modeling the generative process of direct assertions by sources is very similar to Simple LCA (Pasternack and Roth, 2013) . As above, if the claim c \u2208 m asserted by s is the true claim y m , the probability of observing the source s asserting claim c of m is H s . Otherwise, the probability of s asserting a false claim of m is 1\u2212Hs |m|\u22121 . Therefore, the joint probability of the observation over X d and y m can be modeled as follows:",
"cite_spans": [
{
"start": 112,
"end": 139,
"text": "(Pasternack and Roth, 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "P (y m , X d |H s ) = P (y m ) H bs,y m s c\u2208m\\ym ( 1 \u2212 H s |m| \u2212 1 ) 1\u2212bs,c ws,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "(3) Then, given all sources S and \u03b8 = {H s }, we can write the full joint of direct observations as: Indirect Assertion. Here the sources provide articles containing possible evidence rather than making direct assertions. Besides the parameters Q s and R s , the observation also depends on the noisy entailment results given by the textual entailment model. Therefore, we introduce a function \u03c6 w (e, m, c) \u2208 R 1 to measure the reliability of an entailment result. Here \u03c6 w (e, m, c) is a linear combination of feature values in a sigmoid function, so that we can scale it to [0, 1]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Y, X d |\u03b8) = m P (y m ) s H bs,y m s ( 1 \u2212 H s |m| \u2212 1 ) 1\u2212bs,y m ws,m",
"eq_num": "(4)"
}
],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 w (e, m, c) = exp( i w i z i ) 1 + exp( i w i z i )",
"eq_num": "(5)"
}
],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "where z i is a feature for each given e, m, c , and w = {w i } are the weights of each z i learned by our model. For each observation s, e, m, c , the source generates true evidence with probability R s , and with probability of \u03c6 w (e, m, c), the proposed evidence e supports claim c of m. This means that we have probability of R s \u2022 \u03c6 w (e, m, c) to observe the tuple when c = y m . If c = y m , either the source does not provide true evidence, or the entailment model provides an unreliable entailment result -which means we have probability of 1 N 1 \u2212 P s \u2022 \u03c6 w (e, m, c) to observe a false evidence-claim pair. Here N is the total number of such false evidence-claim pairs. Therefore, the joint probability of the observation over y m and X i (indirect assertion observa-tions) is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "P (y m , X i |R s , Q s , W ) = P (y m ) e R s \u2022 \u03c6 w (e, m, c) bs,e,y m 1 \u2212 Rs Rs+Qs \u2022 \u03c6 w (e, m, c) N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "1\u2212bs,e,y m ws,e,we,m (6) Here, we also use c\u2208m\\ym b s,e,c = 1\u2212b s,e,ym . Then, given all sources S and \u03b8 = {Q s , R s , W }, the full joint probability of indirect assertions is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Y,X i |\u03b8) = m P (y m ) s e R s \u2022 \u03c6 w (e, m, c) bs,e,y m 1 \u2212 Rs Rs+Qs \u2022 \u03c6 w (e, m, c) N 1\u2212bs,e,y m ws,ewe,m",
"eq_num": "(7)"
}
],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "Joint Modeling. Now to consider direct and indirect assertions together, we multiply Equations 4 and 7 together with two hyper-parameters, \u03b7 d and \u03b7 i , which give different weights to direct and indirect assertions. If \u03b7 d > \u03b7 i , this means we believe that a direct assertion is more accurate than an indirect assertion, and vice versa. Therefore, observing that all sources propose their evidence and make their assertions independently, and taking \u03b8 = {H s , R s , Q s , W }, we can write the full joint as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "P (X, Y |\u03b8) = m P (y m ) s H bs,y m s ( 1 \u2212 H s |m| \u2212 1 ) 1\u2212bs,y m ws,m\u03b7 d e R s \u2022 \u03c6 bs,e,y m 1 \u2212 Rs Rs+Qs \u2022 \u03c6 N 1\u2212bs,e,y m ws,ewe,m\u03b7 i (8) where \u03c6 = \u03c6 w (e, m, c) for abbreviation. Mean- while, since H s \u223c Rs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "Rs+Qs , we model it by minimizing their KL divergence. Therefore, we also minimize:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E Hs [log H s P s ] = s H s log H s P s = s H s log H s (R s + Q s ) R s",
"eq_num": "(9)"
}
],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "2.4 Inference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "The true claim, y m , is a latent variable that is unknown in our problem, so we solve this ap-proximately by using the EM algorithm (Dempster et al., 1977) to first estimate the true claim, then find the maximum a posterior point estimate of the parameters. Therefore, the E-step is \u2200m:",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "P (y m = c|X, \u03b8 t ) = P (y m = c|X, \u03b8 t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "v\u2208m P (y m = v|X, \u03b8 t ) (10) In the M-step, besides maximizing the posterior of parameters, we should also consider the interactions between H s and R s , Q s . We include it as a regularization term with a parameter \u03bb that controls the importance of the interactions. Thus, the M-step is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 t+1 = argmax \u03b8 E Y |X,\u03b8 t [log P (X, Y |\u03b8)P (\u03b8)] \u2212 \u03bbE Hs [log H s P s ]",
"eq_num": "(11)"
}
],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "Since there are no closed form solutions for those parameters, we use gradient ascent to solve them parameter-by-parameter. We leave the computation of derivatives to the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JELTA",
"sec_num": "2.3"
},
{
"text": "In our model, \u03c6 w (e, m, c) evaluates the reliability of an entailment result given by the entailment model. As we described in Section 2.3, \u03c6 w (e, m, c) is a sigmoid function of a linear combination of feature values, and we include following features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Entailment Results",
"sec_num": "2.5"
},
{
"text": "Entailment Score. For each prediction of the given entailment model, the model will predict a label, i.e. entailment, contradiction or neutral as well as a score to support its conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Entailment Results",
"sec_num": "2.5"
},
{
"text": "Text Similarity. This feature is computed by the cosine similarity between numerical representations of the evidence and the claim. In this work, we use tf-idf and Glove (Pennington et al., 2014) to represent sentences respectively. To represent a sentence, we use the pre-trained Glove 1 with a simple method proposed in (Arora et al., 2017) .",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 322,
"end": 342,
"text": "(Arora et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Entailment Results",
"sec_num": "2.5"
},
{
"text": "Entity Similarity. We identify entities for each pair of evidence and claim, and compute the overlap of entities by jaccard similarity and entity similarity by NESim (Do et al., 2009) as two features.",
"cite_spans": [
{
"start": 166,
"end": 183,
"text": "(Do et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Entailment Results",
"sec_num": "2.5"
},
{
"text": "We evaluate the effectiveness of our joint model JELTA and compare it with baselines. We first de-scribe our datasets and the methods we compare with, then elaborate on the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "3"
},
{
"text": "Data Sets We use both synthetic and natural datasets to evaluate our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Synthetic Dataset: FEVER. We use the training file of FEVER 2 to create the synthetic dataset. FEVER is a dataset for verification of claims. We augment FEVER with sources and other information using following steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Step 1: Assign Veracity for Claims. Fever provides evidence-claim pairs with their textual entailment labels. Considering our running example, Fever provides sentence pairs such as \"...NBC's Tom Brokaw reportedly...\" and \"Tom Brokaw wants Brian Williams fired.\" as evidence and claim. For each experimental round, we sample 200 claims from those pairs, then randomly assign half as true and half as false.These labels will be the ground truth of claims' veracity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Step 2: Create Sources with Accuracy. Next, we create sources with corresponding accuracy as the ground truth of trustworthiness. In our each experimental round, we create 200 sources and for each source s i , we associate an accuracy denoted as H s i . To generate H s i , we sample a decimal number from a normal distribution (\u00b5 = 0.5, \u03c3 = 1) in [0, 1]. A normal distribution is used here because we assume most sources mix true and false claims, and a few of them are highly trustworthy or totally unbelievable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Step 3: Associate Sources with Evidence and Claims. The last step is to assign claims and evidence to each source. In our experiments, each source makes 30 assertions. Each source s i , with probability H s i , picks a true claim; otherwise it picks a false claim. For evidence, since we assume that the distribution of precision generating evidence over sources shares a similar distribution with {H s i }, the source s i picks a piece of evidence either supporting a true claim or refuting a false claim with H s i + , where epsilon is a small Gaussian noise (\u00b5 = 0, \u03c3 = 1). Considering the running example, if claim \"Tom Brokaw wants Brian Williams fired.\" is associated with \"True\" in Step 1, and Fever provides the pair with label \"Entailment\", \"...NBC's Tom Brokaw reportedly...\" is therefore a piece of evidence supporting a true claim. Otherwise s i picks a piece of evi-2 http://fever.ai/data.html dence supporting a false claim or refuting a true claim. Note we assume that each source provides more pieces of evidence than claims, and set the ratio of direct assertions to indirect assertions as 1 4 in our expeirments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "We run 10 rounds of experiments and report the average performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Natural Dataset: Emergent. We use Emergent (Ferreira and Vlachos, 2016) directly; it is derived from a digital journalism project for rumor debunking. It contains 300 rumored claims and 2,595 associated news articles from different websites, collected and labeled by journalists with an estimate of their veracity (true, false or unverified). We eliminated the claims that are unverified, leaving 201 claims and 589 effective sources. For each source, the dataset provides the claims it supports or refutes, which we use as direct assertions. It also provides the articles generated by the source, and we use them as possible evidence repository that may support or refute the claims. The ground truth of the trustworthiness is generated by computing the accuracy of sources based on the veracity label provided.",
"cite_spans": [
{
"start": 43,
"end": 71,
"text": "(Ferreira and Vlachos, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Entailment Model. We need a textual entailment model to tell us if evidence (a sentence) supports or refutes a claim. We use a pre-trained decomposable attention model (Parikh et al., 2016) with Elmo embedding (Peters et al., 2018) trained on the SNLI dataset 3 . The model's performance is not good on either FEVER or Emergent: when we use majority voting over the evidence to estimate the veracity of a claim, the accuracy is under 40%. To improve the textual entailment model, we adapt the pre-trained model with additional training data. For the experiment on FEVER, we randomly sample 100 training examples from labeled development dataset of FEVER. (There is no overlap between the additional training data and our created test data.) For the experiments on Emergent, we construct additional training data by article headlines and article headline stance provided by Emergent. Here, the article headline is generated by each article, and the dataset tells us if the article headline can support or refute the claim, which is a good source of additional training data.",
"cite_spans": [
{
"start": 168,
"end": 189,
"text": "(Parikh et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Metrics. To evaluate the performance of our method as well as the baselines, we report:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "(1) the accuracy of the estimated veracity of the claims, (2) the accuracy of the estimated trustworthiness of the source. Here, we evaluate trustworthiness by two typical correlation scores, the Spearman correlation coefficient and Pearson correlation coefficient (Fieller et al., 1957) . Spearman's correlation assesses monotonic relationships, whereas Pearson's correlation assesses linear relationships; when computing Pearson's correlation, we therefore normalize the estimated accuracy of the sources.",
"cite_spans": [
{
"start": 265,
"end": 287,
"text": "(Fieller et al., 1957)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "MJ-Claim. In this case, we only consider direct assertions made by sources; for each claim, we collect all related assertions and take a majority vote to estimate the claim's veracity. Given the estimated veracity, we compute the accuracy of each source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
{
"text": "MJ-EVI. We only consider indirect assertions in this case. Given the textual entailment model's output, each piece of evidence provided by an article either supports, refutes, or abstains on the claim. Here, we also use a majority vote to estimate the veracity of the claims, and we estimate the trustworthiness of a source by the mean, over claims, of the ratio between the number of evidence items supporting the true claim and the total number of evidence items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
{
"text": "Sim-LCA. We leverage the model proposed in (Pasternack and Roth, 2013) to estimate the credibility of the sources. Here, the model only considers direct assertions.",
"cite_spans": [
{
"start": 43,
"end": 70,
"text": "(Pasternack and Roth, 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
{
"text": "Sim-Com. We propose a simple solution that considers both direct and indirect assertions. Here, we use MJ-EVI to estimate the truth of the claims, based on which we calculate the accuracy of each source. Note that its estimates of claim veracity are identical to MJ-EVI's, while its estimates of source trustworthiness differ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
{
"text": "Accuracy of Veracity. Figure 4 (a) reports, for each method, the accuracy of the estimated claim veracity over our synthetic dataset. JELTA achieves the highest accuracy, exceeding the next best method by around 5%, and shows a low standard deviation over the 10 rounds of experiments. Figure 4 (b) reports the accuracy on Emergent. We again observe a 4% improvement in accuracy compared with MJ-EVI, and around a 16% improvement over the methods considering direct assertions only. It makes sense that evidence-based methods (leveraging indirect assertions) can beat claim-based methods (leveraging direct assertions only): they use more information, which reduces potential noise. Using sources and claims alone is noisier, especially when many information sources are bad. Figure 4 (a) shows that when the distribution of sources changes, the performance of MJ-Claim and Sim-LCA also varies widely: their performance greatly depends on the distribution of trustworthiness over sources. Besides offering higher accuracy, evidence-based methods are more robust to varying source trustworthiness.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 34,
"text": "Figure 4 (a)",
"ref_id": "FIGREF3"
},
{
"start": 263,
"end": 275,
"text": "Figure 4 (b)",
"ref_id": "FIGREF3"
},
{
"start": 752,
"end": 764,
"text": "Figure 4 (a)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Trustworthiness Estimation. Figure 5 (a) reports the Pearson and Spearman scores for each method's estimate of each source's trustworthiness on FEVER. JELTA is consistently better than the other baselines, whether we use the Spearman or the Pearson score to compute the correlation between the estimates and the ground truth. JELTA also has a lower standard deviation over different rounds. This is consistent with the results for estimating the veracity of claims: evidence-based methods are relatively more stable than methods considering direct assertions only. MJ-Claim and Sim-LCA depend heavily on the distribution of trustworthiness over sources. If most sources are trustworthy, we can both estimate the true claims more accurately and better estimate the trustworthiness of sources, and vice versa; this is why both MJ-Claim and Sim-LCA have high standard deviations over different rounds. From the results of MJ-EVI, we observe that simply calculating accuracy from estimated \"correct\" evidence cannot achieve a highly correlated estimate of sources: the entailment tool provides noisy evidence. However, Sim-Com, which directly counts claims estimated as \"correct\" by MJ-EVI, improves the estimation. Thus, if we can estimate the veracity of claims accurately, estimating trustworthiness from claims is more accurate than estimating it from noisy evidence. This is also why joint modeling significantly improves performance: intuitively, we use evidence to better estimate the veracity of claims, and leverage claims to better estimate the trustworthiness of sources, in an iterative fashion. Figure 5 (b) leads to similar conclusions; since there are more trustworthy sources there, the claim-based methods outperform MJ-EVI.",
"cite_spans": [
{
"start": 1696,
"end": 1708,
"text": "Figure 5 (b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 28,
"end": 40,
"text": "Figure 5 (a)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Influence of textual entailment model. Figures 4 and 5 show that our method, which jointly considers direct and indirect assertions, significantly improves the estimation. Among the different factors, evidence contributes the most when estimating the veracity of claims, which in turn helps the estimation of trustworthiness. However, the usefulness of evidence depends heavily on the quality of the NLP tool. To quantify the effect of the noise it introduces, we report the Pearson and Spearman scores while varying a noise rate r: for each entailment result, with probability r, we flip the answer of the textual entailment. For example, if the result is \"entailment\", we change it randomly to either \"contradiction\" or \"neutral\", and vice versa. The results are shown in (c) of Figures 4 and 5. As noise increases, the accuracy and the Pearson and Spearman scores all drop. However, JELTA is consistently better than the alternatives: its accuracy decreases more slowly, and its correlation remains positive even when we flip 95% of the entailment results. This demonstrates that jointly considering direct and indirect assertions better avoids the skew caused by either evidence or claims alone.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 54,
"text": "Figures 4 and 5",
"ref_id": "FIGREF3"
},
{
"start": 785,
"end": 793,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Evaluating the trustworthiness of sources has been studied in fact-finding, truth discovery, and crowdsourcing. In the context of fact-finding (Vydiswaran et al.; Pasternack and Roth, 2013) and truth discovery (Yin et al., 2008; Zhao et al., 2012; Li et al., 2014; Pochampally et al., 2014; Dong et al., 2015; Li et al., 2016) , the solutions estimate the trustworthiness or credibility of sources by resolving conflicts among claims provided by multiple sources. The claims are usually in structured form, and conflicting values can be captured easily and without noise. Works in (Vydiswaran et al.; Nakashole and Mitchell, 2014; Popat et al., 2017) further take text into consideration; however, (Vydiswaran et al.; Nakashole and Mitchell, 2014) still depend on a structured input form, so the connection between evidence and conflicting claims is given, which is usually not practical. Popat et al. (2017) leverage text as evidence for fact-checking, but their estimation of source credibility neglects the reliability of the sources generating the evidence. In crowdsourced labeling (Sabou et al., 2012; Hovy et al., 2013; Gao et al., 2015) , the system is given noisy labels produced by different annotators. The input is again in structured form, and there is no evidence to consider, making this a limited setting compared with our problem. Our problem is also related to fact-checking (Wang et al., 2018; Thorne et al., 2018; Yin and Roth, 2018; Zhao et al., 2018) ; however, those works only consider whether the evidence can support the claim, without tracking the sources of the claim and evidence.",
"cite_spans": [
{
"start": 163,
"end": 189,
"text": "Pasternack and Roth, 2013)",
"ref_id": "BIBREF16"
},
{
"start": 210,
"end": 228,
"text": "(Yin et al., 2008;",
"ref_id": "BIBREF27"
},
{
"start": 229,
"end": 247,
"text": "Zhao et al., 2012;",
"ref_id": "BIBREF28"
},
{
"start": 248,
"end": 264,
"text": "Li et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 265,
"end": 290,
"text": "Pochampally et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 291,
"end": 309,
"text": "Dong et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 310,
"end": 326,
"text": "Li et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 599,
"end": 628,
"text": "Nakashole and Mitchell, 2014;",
"ref_id": "BIBREF14"
},
{
"start": 629,
"end": 648,
"text": "Popat et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 719,
"end": 748,
"text": "Nakashole and Mitchell, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 903,
"end": 922,
"text": "Popat et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 1102,
"end": 1122,
"text": "(Sabou et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 1123,
"end": 1141,
"text": "Hovy et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 1142,
"end": 1159,
"text": "Gao et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 1415,
"end": 1434,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 1435,
"end": 1455,
"text": "Thorne et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 1456,
"end": 1475,
"text": "Yin and Roth, 2018;",
"ref_id": "BIBREF26"
},
{
"start": 1476,
"end": 1494,
"text": "Zhao et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "This paper studied the problem of estimating the trustworthiness of given information sources. Sources make claims either directly, or indirectly by generating evidence that implies those claims.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "We proposed a probabilistic framework, JELTA, which jointly considers both kinds of assertions to better estimate claims' veracity and sources' trustworthiness. We evaluated JELTA over both synthetic and real datasets, and our results show significant improvements over baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "While we presented the framework here as applying to claims with two truth values, we believe that this framework can apply more broadly. For example, rather than considering a claim as being True or False, (Chen et al., 2019) suggests that one needs to view a claim from a diverse, yet comprehensive, set of perspectives. Our framework can be extended to handle sources that generate a spectrum of perspectives, each with a stance relative to the claim and with evidence supporting it. We leave this for future work.",
"cite_spans": [
{
"start": 207,
"end": 226,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "To infer the values of the latent variables and parameters in our model, we use the EM algorithm: we first estimate the true claims, and then find the maximum a posteriori point estimates of the parameters. As shown in Section 2.4, given parameters \u03b8 t and X, the E-step is easy to compute, while the M-step is more complicated. Since there are no closed-form solutions for those parameters, we solve for them by gradient ascent, parameter by parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Inference",
"sec_num": null
},
{
"text": "For H s , we have: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Inference",
"sec_num": null
},
{
"text": "\u2202P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Inference",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/snli/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is partly supported by a Google gift and by DARPA, under agreement number HR0011-18-2-0052.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Seeing things from a different angle: Discovering diverse perspectives about claims",
"authors": [
{
"first": "Sihao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019. Seeing things from a different angle: Discovering diverse perspec- tives about claims. In NAACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recognizing textual entailment: Models and applications",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzoto",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Mas- simo Zanzoto. 2013. Recognizing textual entail- ment: Models and applications.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Maximum likelihood from incomplete data via the em algorithm",
"authors": [
{
"first": "Arthur",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the royal statistical society. Series B (methodological)",
"volume": "",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society. Series B (methodological), pages 1-38.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Robust, light-weight approaches to compute lexical similarity",
"authors": [
{
"first": "Quang",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Yuancheng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vydiswaran",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Science Research and Technical Reports",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quang Do, Dan Roth, Mark Sammons, Yuancheng Tu, and V Vydiswaran. 2009. Robust, light-weight ap- proaches to compute lexical similarity. Computer Science Research and Technical Reports.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Integrating conflicting data: the role of source dependence",
"authors": [
{
"first": "Xin",
"middle": [
"Luna"
],
"last": "Dong",
"suffix": ""
},
{
"first": "Laure",
"middle": [],
"last": "Berti-Equille",
"suffix": ""
},
{
"first": "Divesh",
"middle": [],
"last": "Srivastava",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the VLDB Endowment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Luna Dong, Laure Berti-Equille, and Divesh Sri- vastava. Integrating conflicting data: the role of source dependence. Proceedings of the VLDB En- dowment.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Knowledge-based trust: Estimating the trustworthiness of web sources",
"authors": [
{
"first": "Xin",
"middle": [
"Luna"
],
"last": "Dong",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Van",
"middle": [],
"last": "Dang",
"suffix": ""
},
{
"first": "Wilko",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Camillo",
"middle": [],
"last": "Lugaresi",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the VLDB Endowment",
"volume": "8",
"issue": "",
"pages": "938--949",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Luna Dong, Evgeniy Gabrilovich, Kevin Murphy, Van Dang, Wilko Horn, Camillo Lugaresi, Shaohua Sun, and Wei Zhang. 2015. Knowledge-based trust: Estimating the trustworthiness of web sources. Pro- ceedings of the VLDB Endowment, 8(9):938-949.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Emergent: a novel data-set for stance classification",
"authors": [
{
"first": "William",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "1163--1168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Ferreira and Andreas Vlachos. 2016. Emer- gent: a novel data-set for stance classification. In Proceedings of the 2016 conference of the North American chapter of the association for computa- tional linguistics: Human language technologies, pages 1163-1168.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tests for rank correlation coefficients. i",
"authors": [
{
"first": "Edgar",
"middle": [
"C"
],
"last": "Fieller",
"suffix": ""
},
{
"first": "Herman",
"middle": [
"O"
],
"last": "Hartley",
"suffix": ""
},
{
"first": "Egon",
"middle": [
"S"
],
"last": "Pearson",
"suffix": ""
}
],
"year": 1957,
"venue": "Biometrika",
"volume": "44",
"issue": "3/4",
"pages": "470--481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar C Fieller, Herman O Hartley, and Egon S Pear- son. 1957. Tests for rank correlation coefficients. i. Biometrika, 44(3/4):470-481.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Truth discovery and crowdsourcing aggregation: A unified perspective",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the VLDB Endowment",
"volume": "8",
"issue": "",
"pages": "2048--2049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Gao, Qi Li, Bo Zhao, Wei Fan, and Jiawei Han. 2015. Truth discovery and crowdsourcing aggre- gation: A unified perspective. Proceedings of the VLDB Endowment, 8(12):2048-2049.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Claimbuster: the first-ever end-to-end fact-checking system",
"authors": [
{
"first": "Naeemul",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Gensheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fatma",
"middle": [],
"last": "Arslan",
"suffix": ""
},
{
"first": "Josue",
"middle": [],
"last": "Caraballo",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Siddhant",
"middle": [],
"last": "Gawsane",
"suffix": ""
},
{
"first": "Shohedul",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Minumol",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Aaditya",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Anil",
"middle": [
"Kumar"
],
"last": "Nayak",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the VLDB Endowment",
"volume": "10",
"issue": "",
"pages": "1945--1948",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naeemul Hassan, Gensheng Zhang, Fatma Arslan, Jo- sue Caraballo, Damian Jimenez, Siddhant Gawsane, Shohedul Hasan, Minumol Joseph, Aaditya Kulka- rni, Anil Kumar Nayak, et al. 2017. Claimbuster: the first-ever end-to-end fact-checking system. Pro- ceedings of the VLDB Endowment, 10(12):1945- 1948.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning whom to trust with mace",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1120--1130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with mace. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Resolving conflicts in heterogeneous data by truth discovery and source reliability estimation",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yaliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM SIG-MOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "1187--1198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Yaliang Li, Jing Gao, Bo Zhao, Wei Fan, and Jiawei Han. 2014. Resolving conflicts in heteroge- neous data by truth discovery and source reliability estimation. In Proceedings of the 2014 ACM SIG- MOD international conference on Management of data, pages 1187-1198. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A survey on truth discovery",
"authors": [
{
"first": "Yaliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chuishi",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "17",
"issue": "2",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaliang Li, Jing Gao, Chuishi Meng, Qi Li, Lu Su, Bo Zhao, Wei Fan, and Jiawei Han. 2016. A sur- vey on truth discovery. ACM SIGKDD Explorations Newsletter, 17(2):1-16.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Language-aware truth assessment of fact candidates",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1009--1019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole and Tom M Mitchell. 2014. Language-aware truth assessment of fact candidates. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1009-1019.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01933"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur P Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Latent credibility analysis",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Pasternack",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1009--1020",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Pasternack and Dan Roth. 2013. Latent credibility analysis. In Proceedings of the 22nd international conference on World Wide Web, pages 1009-1020. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Fusing data with correlations",
"authors": [
{
"first": "Ravali",
"middle": [],
"last": "Pochampally",
"suffix": ""
},
{
"first": "Anish",
"middle": [
"Das"
],
"last": "Sarma",
"suffix": ""
},
{
"first": "Xin",
"middle": [
"Luna"
],
"last": "Dong",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Meliou",
"suffix": ""
},
{
"first": "Divesh",
"middle": [],
"last": "Srivastava",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "433--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravali Pochampally, Anish Das Sarma, Xin Luna Dong, Alexandra Meliou, and Divesh Srivastava. 2014. Fusing data with correlations. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 433-444. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Where the truth lies: Explaining the credibility of emerging claims on the web and social media",
"authors": [
{
"first": "Kashyap",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Subhabrata",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion",
"volume": "",
"issue": "",
"pages": "1003--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kashyap Popat, Subhabrata Mukherjee, Jannik Str\u00f6tgen, and Gerhard Weikum. 2017. Where the truth lies: Explaining the credibility of emerging claims on the web and social media. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 1003-1012. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Crowdsourcing research opportunities: lessons from natural language processing",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Sabou",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Arno",
"middle": [],
"last": "Scharl",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 12th International Conference on Knowledge Management and Knowledge Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Sabou, Kalina Bontcheva, and Arno Scharl. 2012. Crowdsourcing research opportunities: lessons from natural language processing. In Proceedings of the 12th International Conference on Knowledge Management and Knowledge Technologies, page 17. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fever: a large-scale dataset for fact extraction and verification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.05355"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Content-driven trust propagation framework",
"authors": [
{
"first": "VG",
"middle": [],
"last": "Vydiswaran",
"suffix": ""
},
{
"first": "ChengXiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "VG Vydiswaran, ChengXiang Zhai, and Dan Roth. Content-driven trust propagation framework. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "\"Liar, liar pants on fire\": A new benchmark dataset for fake news detection",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.00648"
]
},
"num": null,
"urls": [],
"raw_text": "William Yang Wang. 2017. \"Liar, liar pants on fire\": A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Relevant document discovery for fact-checking articles",
"authors": [
{
"first": "Xuezhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Baumgartner",
"suffix": ""
},
{
"first": "Flip",
"middle": [],
"last": "Korn",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion of the The Web Conference 2018 on The Web Conference",
"volume": "",
"issue": "",
"pages": "525--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhi Wang, Cong Yu, Simon Baumgartner, and Flip Korn. 2018. Relevant document discovery for fact-checking articles. In Companion of the The Web Conference 2018 on The Web Conference 2018, pages 525-533. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "TwoWingOS: a two-wing optimization strategy for evidential claim verification",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.03465"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Dan Roth. 2018. Twowingos: A two-wing optimization strategy for evidential claim verification. arXiv preprint arXiv:1808.03465.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Truth discovery with multiple conflicting information providers on the web",
"authors": [
{
"first": "Xiaoxin",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "20",
"issue": "6",
"pages": "796--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoxin Yin, Jiawei Han, and Philip S. Yu. 2008. Truth discovery with multiple conflicting information providers on the web. IEEE Transactions on Knowledge and Data Engineering, 20(6):796-808.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A bayesian approach to discovering truth from conflicting sources for data integration",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"IP"
],
"last": "Rubinstein",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "Gemmell",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the VLDB Endowment",
"volume": "5",
"issue": "6",
"pages": "550--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Zhao, Benjamin IP Rubinstein, Jim Gemmell, and Jiawei Han. 2012. A bayesian approach to discovering truth from conflicting sources for data integration. Proceedings of the VLDB Endowment, 5(6):550-561.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "An end-to-end multi-task learning model for fact checking",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "138--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Zhao, Bo Cheng, Hao Yang, et al. 2018. An end-to-end multi-task learning model for fact checking. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 138-144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Our solution considers two settings: (1) source s_i directly asserts multiple claims c_j;",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Plate diagram for probabilistic model describing the generation of direct and indirect assertions. Shaded parts are the observations, y m is the latent variable, and H s , R s , Q s and \u03c6 w are groups of parameters. Dotted lines describe the interactions between H s and R s , Q s .",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Note that we simplify the expression by leveraging Σ_{c ∈ m \\ y_m} b_{s,c} = 1 − b_{s,y_m}.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "The performance of estimating the veracity of claims by different methods. On both FEVER and Emergent, JELTA achieves the highest accuracy, and a low standard deviation in the 10-round evaluation on FEVER. (c) reports the accuracy variation when we add different rates of noise to the textual entailment results, and JELTA is consistently better.",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "The performance of estimating the trustworthiness of sources by Pearson and Spearman correlation scores. The methods",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "∂L(X, Y|θ)/∂H_s = Σ_m Σ_{y_m} P(y_m|X, θ^t) w_{s,m} η_d (b_{s,y_m} − H_s)/(H_s − H_s²) + λ (log(R_s/(R_s + Q_s)) − log H_s) H_s⁻¹ (12). Then, for R_s, Q_s and W, the derivatives are given in Eqs. (13)-(15).",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Direct</td><td>Source: pagesix.com</td><td>Textual</td></tr><tr><td>Assertion</td><td>Stance: For</td><td>Entailment for</td></tr><tr><td/><td>Source: foxnews.com</td><td>Indirect Assertion</td></tr><tr><td/><td>Related Articles:</td><td/></tr><tr><td/><td>\u2026 NBC's Tom Brokaw reportedly</td><td>For</td></tr><tr><td/><td>wants Brian Williams fired over</td><td>(Entailment)</td></tr><tr><td/><td>fabricated Iraq helicopter story \u2026</td><td/></tr><tr><td>Indirect</td><td/><td/></tr><tr><td>Assertion</td><td>Source: m.huffpost.com</td><td/></tr><tr><td/><td>Related Articles:</td><td/></tr><tr><td/><td>\u2026 Brian Williams' Future Uncertain As NBC News Investigates Iraq,</td><td>Against (Contradiction)</td></tr><tr><td/><td>Katrina Coverage \u2026</td><td/></tr></table>",
"type_str": "table",
"text": "Tom Brokaw wants Brian Williams fired.",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">Notation Description</td></tr><tr><td>s</td><td>an information source</td></tr><tr><td>c</td><td>a claim</td></tr><tr><td>m</td><td>a mutually exclusive set of claims</td></tr><tr><td>e</td><td>a piece of evidence</td></tr><tr><td>ym</td><td>the true claim in m</td></tr><tr><td>bs,c</td><td>the (observed) probability of c asserted by s</td></tr><tr><td>bs,e,c</td><td>the (observed) probability of c asserted by e from s</td></tr><tr><td>ws,m</td><td>whether s asserts claims of m</td></tr><tr><td>ws,e</td><td>whether s provides e</td></tr><tr><td>we,m</td><td>whether e supports or refutes claims of m</td></tr><tr><td>Hs</td><td>the probability s makes an honest claim</td></tr><tr><td>Ps</td><td>the (hidden) probability s produces true evidence</td></tr><tr><td>Rs</td><td>the probability s recalls true evidence (true-positive rate)</td></tr><tr><td>Qs</td><td>the probability s recalls false evidence (false-positive rate)</td></tr><tr><td>Xd</td><td>set of all (observed) direct assertions</td></tr><tr><td>Xi</td><td>set of all (observed) indirect assertions</td></tr><tr><td>Y</td><td>set of all true claims</td></tr></table>",
"type_str": "table",
"text": "Notation Table",
"html": null,
"num": null
}
}
}
}