{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:25:56.056240Z"
},
"title": "Improving Evidence Retrieval for Automated Explainable Fact-Checking",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Samarinas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": ""
},
{
"first": "Wynne",
"middle": [],
"last": "Hsu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": ""
},
{
"first": "Mong",
"middle": [
"Li"
],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automated fact-checking on a large-scale is a challenging task that has not been studied systematically until recently. Large noisy document collections like the web or news articles make the task more difficult. In this paper, we describe the components of a threestage automated fact-checking system, named Quin+. We demonstrate that using dense passage representations increases the evidence recall in a noisy setting. We experiment with two sentence selection approaches, an embeddingbased selection using a dense retrieval model, and a sequence labeling approach for contextaware selection. Quin+ is able to verify opendomain claims using a large-scale corpus or web search results.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automated fact-checking on a large-scale is a challenging task that has not been studied systematically until recently. Large noisy document collections like the web or news articles make the task more difficult. In this paper, we describe the components of a threestage automated fact-checking system, named Quin+. We demonstrate that using dense passage representations increases the evidence recall in a noisy setting. We experiment with two sentence selection approaches, an embeddingbased selection using a dense retrieval model, and a sequence labeling approach for contextaware selection. Quin+ is able to verify opendomain claims using a large-scale corpus or web search results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the emergence of social media and many individual news sources online, the spread of misinformation has become a major problem with potentially harmful social consequences. Fake news can manipulate public opinion, create conflicts, elicit unreasonable fear and suspicion. The vast amount of unverified online content led to the establishment of external post-hoc fact-checking organizations, such as PolitiFact, FactCheck.org, Snopes etc, with dedicated resources to verify claims online. However, manual fact-checking is time consuming and intractable on a large scale. The ability to automatically perform fact-checking is critical to minimize negative social impact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automated fact checking is a complex task involving evidence extraction followed by evidence reasoning and entailment. For the retrieval of relevant evidence from a corpus of documents, existing systems typically utilize traditional sparse retrieval which may have poor recall, especially when the relevant passages have few overlapping words with the claims to be verified. Dense retrieval models have proven effective in question answering as these models can better capture the latent semantic content of text. The work in (Samarinas et al., 2020) is the first to use dense retrieval for fact checking. The authors constructed a new dataset called Factual-NLI comprising of claim-evidence pairs from the FEVER dataset (Thorne et al., 2018) as well as synthetic examples generated from benchmark Question Answering datasets (Kwiatkowski et al., 2019; Nguyen et al., 2016) . They demonstrated that using Factual-NLI to train a dense retriever can improve evidence retrieval significantly.",
"cite_spans": [
{
"start": 526,
"end": 550,
"text": "(Samarinas et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 721,
"end": 742,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 826,
"end": 852,
"text": "(Kwiatkowski et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 853,
"end": 873,
"text": "Nguyen et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While the FEVER dataset has enabled the systematic evaluation of automated fact-checking systems, it does not reflect well the noisy nature of real-world data. Motivated by this, we introduce the Factual-NLI+ dataset, an extension of the FEVER dataset with synthetic examples from question answering datasets and noise passages from web search results. We examine how dense representations can improve the first-stage retrieval recall of passages for fact-checking in a noisy setting, and make the retrieval of relevant evidence more tractable on a large scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the selection of relevant evidence sentences for accurate fact-checking and explainability remains a challenge. Figure 1 shows an example of a claim and the retrieved passage which has three sentences, of which only the last sentence provides the critical evidence to refute the claim. We propose two ways to select the relevant sentences, an embedding-based selection using a dense retrieval model, and a sequence labeling approach for context-aware selection. We show that the former generalizes better with a high recall, while the latter has higher precision, making them suitable for the identification of relevant evidence sentences. Our fact-checking system Quin+ is able to verify open-domain claims using a large corpus or web search results.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automated claim verification using a large corpus has not been studied systematically until the availability of the Fact Extraction and VERification dataset (FEVER) (Thorne et al., 2018) . This dataset contains claims that are supported or refuted by specific evidence from Wikipedia articles. Prior to the work in (Samarinas et al., 2020) , fact-checking solutions have relied on sparse passage retrieval, followed by a claim verification (entailment classification) model (Nie et al., 2019) . Other approaches used the mentions of entities in a claim and/or basic entity linking to retrieve documents and a machine learning model such as logistic regression or an enhanced sequential inference model to decide whether an article most likely contains the evidence (Yoneda et al.; Chen et al., 2017; Hanselowski et al., 2018) .",
"cite_spans": [
{
"start": 165,
"end": 186,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 315,
"end": 339,
"text": "(Samarinas et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 474,
"end": 492,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 765,
"end": 780,
"text": "(Yoneda et al.;",
"ref_id": "BIBREF23"
},
{
"start": 781,
"end": 799,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 800,
"end": 825,
"text": "Hanselowski et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, retrieval based on sparse representations and exact keyword matching can be rather restrictive for various queries. This restriction can be mitigated by dense representations using BERTbased language models . The works in Karpukhin et al., 2020; Xiong et al., 2020; Chang et al., 2020) have successfully used such models and its variants for passage retrieval in open-domain question answering. The results can be further improved using passage re-ranking with cross-attention BERT-based models (Nogueira et al., 2019) . The work in (Samarinas et al., 2020) is the first to propose a dense model to retrieve passages for fact-checking.",
"cite_spans": [
{
"start": 231,
"end": 254,
"text": "Karpukhin et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 255,
"end": 274,
"text": "Xiong et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 275,
"end": 294,
"text": "Chang et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 504,
"end": 527,
"text": "(Nogueira et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 542,
"end": 566,
"text": "(Samarinas et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Apart from passage retrieval, sentence selection is also a critical task in fact-checking. These evidence sentences provide an explanation why a claim has been assessed to be credible or not. Re-cent works have proposed a BERT-based model for extracting relevant evidence sentences from multi-sentence passages (Atanasova et al., 2020) . The authors observe that joint training on veracity prediction and explanation generation performs better than training separate models. The work in (Stammbach and Ash, 2020) investigates how the few-shot learning capabilities of the GPT-3 model (Brown et al., 2020) can be used for generating fact-checking explanations.",
"cite_spans": [
{
"start": 311,
"end": 335,
"text": "(Atanasova et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 487,
"end": 512,
"text": "(Stammbach and Ash, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The automated claim verification task can be defined as follows: given a textual claim c and a cor-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Quin+ System",
"sec_num": "3"
},
{
"text": "pus D = {d 1 , d 2 , ..., d n }, where every passage d is comprised of sentences s j , 1 \u2264 j \u2264 k, a system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Quin+ System",
"sec_num": "3"
},
{
"text": "will return a set of evidence sentences\u015c \u2282 d i and a label\u0177 \u2208 {probably true, probably false, inconclusive}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Quin+ System",
"sec_num": "3"
},
{
"text": "We have developed an automated fact-checking system, called Quin+, that verifies a given claim in three stages: passage retrieval from a corpus, sentence selection and entailment classification as shown in Figure 2 . The label is determined as follows: we first perform entailment classification on the set of evidence sentences. When the number of retrieved evidence sentences that entail or contradict the claim is low, we label the claim as \"inconclusive\". If the number of evidence sentences that support the claim exceeds the number of sentences that refute the claim, we assign the label \"probably true\". Otherwise, we assign the label \"probably false\".",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Quin+ System",
"sec_num": "3"
},
{
"text": "The passage retrieval model in Quin+ is based on a dense retrieval model called QR-BERT (Samarinas et al., 2020) . This model is based on BERT and creates dense vectors for passages by calculating their average token embedding. The relevance of a passage d to a claim c is then given by their dot product:",
"cite_spans": [
{
"start": 88,
"end": 112,
"text": "(Samarinas et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r(c, d) = \u03c6(c) T \u03c6(d)",
"eq_num": "(1)"
}
],
"section": "Passage Retrieval",
"sec_num": "3.1"
},
{
"text": "Dot product search can run efficiently using an approximate nearest neighbors index implemented using the FAISS library (Johnson et al., 2019) . QR-BERT maximizes the sampled softmax loss: The work in (Samarinas et al., 2020) introduced the Factual-NLI dataset that extends the FEVER dataset (Thorne et al., 2018) with more diverse synthetic examples derived from question answering datasets. There are 359,190 new entailed claims with evidence and additional contradicted claims from a rule-based approach. To ensure robustness, we compile a new large-scale noisy version of Factual-NLI called Factual-NLI+ 1 . This dataset includes all the 5 million Wikipedia passages in the FEVER dataset. We add 'noise' passages as follows. For every claim c in the FEVER dataset, we retrieve the top 30 web results from the Bing search engine and keep passages with the highest BM25 score that are classified as neutral by the entailment model. For claims generated from MSMARCO queries (Nguyen et al., 2016) , we include the irrelevant passages that are found in the MSMARCO dataset for those queries. This results in 418,650 additional passages. The new dataset reflects better the nature of a largescale corpus that would be used by real-world factchecking system. We trained a dense retrieval model using this extended dataset.",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Johnson et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 201,
"end": 225,
"text": "(Samarinas et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 292,
"end": 313,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 976,
"end": 997,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "3.1"
},
{
"text": "L \u03b8 = (c,d)\u2208D + b r \u03b8 (c, d) \u2212 log d i \u2208D b e r \u03b8 (c,d i ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "3.1"
},
{
"text": "The Quin+ system utilizes a hybrid model that combines the results from the dense retrieval model described above and BM25 sparse retrieval to obtain the final list of retrieved passages. For efficient sparse retrieval, we used the Rust-based Tantivy full text search engine 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "3.1"
},
{
"text": "We propose and experiment with two sentence selection methods: an embedding-based selection and context-aware sentence selection method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.2"
},
{
"text": "The embedding-based selection method relies on the dense representations learned by the dense passage retrieval model QR-BERT. For a given claim c, we select the sentences s i from a given passage d = {s 1 , s 2 , ..., s k } whose relevance score r(c, s i ) is greater than some threshold \u03bb which is set experimentally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.2"
},
{
"text": "The context-aware sentence selection method uses a BERT-based sequence labeling model. The input of the model is the concatenation of the tokenized claim C = {C 1 , C 2 , ..., C k }, the special [SEP] token and the tokenized evidence passage E = {E 1 , E 2 , ..., E m } (see Figure 3 ). For the output of the model, we adopt the BIO tagging format so that all the irrelevant tokens are classified as O, the first token of an evidence sentence classified as B evidence and the rest tokens of an evidence sentence as I evidence. We trained a model based on RoBERTa-large (Liu et al., 2019) , minimizing the cross-entropy loss:",
"cite_spans": [
{
"start": 569,
"end": 587,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L \u03b8 = \u2212 N i=1 l i j=1 log(p \u03b8 (y i j ))",
"eq_num": "(3)"
}
],
"section": "Sentence Selection",
"sec_num": "3.2"
},
{
"text": "where N is the number of examples in the training batch, l i the number of non-padding tokens of the i th example, and p \u03b8 (y i j ) is the estimated softmax probability of the correct label for the j th token of the i th example. We trained this model on Factual-NLI with batch size 64, Adam optimizer and initial learning rate 5 \u00d7 10 \u22125 until convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.2"
},
{
"text": "Natural Language Inference (NLI), also known as textual entailment classification, is the task of detecting whether a hypothesis statement is entailed by a premise passage. It is essentially a text classification problem, where the input is a pair of premise-hypothesis (P, H) and the output a label y \u2208 {entailment, contradiction, neu-tral}. An NLI model is often a core component of many automated fact-checking systems. Datasets like the Stanford Natural Language Inference corpus (SNLI) (Bowman et al., 2015), Multi-Genre Natural Language Inference corpus (Multi-NLI) (Williams et al., 2018) and Adversarial-NLI (Nie et al., 2020) have facilitated the development of models for this task.",
"cite_spans": [
{
"start": 572,
"end": 595,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 616,
"end": 634,
"text": "(Nie et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment Classification",
"sec_num": "3.3"
},
{
"text": "Even though pre-trained NLI models seem to perform well on the two popular NLI datasets (SNLI and Multi-NLI), they are not as effective in a real-world setting. This is possibly due to the bias in these two datasets, which has a negative effect in the generalization ability of the trained models (Poliak et al., 2018) . Further, these datasets are comprised of short single-sentence premises. As a result, models trained on these datasets usually do not perform well on noisy realworld data involving multiple sentences. These issues have led to the development of additional more challenging datasets such as Adversarial NLI (Nie et al., 2020 ).",
"cite_spans": [
{
"start": 297,
"end": 318,
"text": "(Poliak et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 627,
"end": 644,
"text": "(Nie et al., 2020",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment Classification",
"sec_num": "3.3"
},
{
"text": "Our Quin+ system utilizes an NLI model based on RoBERTa-large with a linear transformation of the [CLS] token embedding :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment Classification",
"sec_num": "3.3"
},
{
"text": "o = sof tmax(W \u2022 BERT [CLS] ([P ; H]) + a) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment Classification",
"sec_num": "3.3"
},
{
"text": "where P ; H is the concatenation of the premise with the hypothesis, W 3\u00d71024 is a linear transformation matrix, and a 3\u00d71 is the bias. We trained the entailment model by minimizing the cross-entropy loss on the concatenation of the three popular NLI datasets (SNLI, Multi-NLI and Adversarial-NLI) with batch size 64, Adam optimizer and initial learning rate 5 \u00d7 10 \u22125 until convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment Classification",
"sec_num": "3.3"
},
{
"text": "We evaluate the three individual components of Quin+ (retrieval, sentence selection and entailment classification) and finally perform an end-toend evaluation using various configurations. Table 1 gives the recall@k and Mean Reciprocal Rank (MRR@100) of the passage retrieval models on FEVER and Factual-NLI+. We also compare the performance on a noisy extension of the FEVER dataset where additional passages from the Bing search engine are included as 'noise' passages. We see that when noise passages are added to the FEVER dataset, the gap between the hybrid passage retrieval model in Quin+ and sparse retrieval widens. This demonstrates the limitations of using sparse retrieval, and why it is crucial to have a dense retrieval model to surface relevant passages from a noisy corpus. Overall, the hybrid passage retrieval model in Quin+ gives the best performance compared to BM25 and the dense retrieval model. Table 2 shows the token-level precision, recall and F1 score of the proposed sentence selection methods on the Factual-NLI dataset and a domainspecific (medical) claim verification dataset, Sci-Fact (Wadden et al., 2020) . We also compare the performance to a baseline sentence-level NLI approach, where we perform entailment classification (using the model described in Section 3.3) on each sentence of a passage and select the nonneutral sentences as evidence. We observe that the sequence labeling model gives the highest precision, recall and F1 score when tested on the Factual-NLI dataset. Further, the precision is significantly higher than the other methods.",
"cite_spans": [
{
"start": 1117,
"end": 1138,
"text": "(Wadden et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 918,
"end": 925,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Performance of Quin+",
"sec_num": "4"
},
{
"text": "On the other hand, for the SciFact dataset, we see that sequence labeling method remains the top performer in terms of precision and F1 score after fine-tuning, although its recall is lower than the embedding-based method. This shows that sequence labeling model is able to mitigate the high false positive rate observed with the embeddingbased selection method by taking into account the surrounding context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of Quin+",
"sec_num": "4"
},
{
"text": "The Factual-NLI+ dataset contains claims with passages that either support or refute the claims with some sentences highlighted as ground truth specific evidence. Table 3 shows the performance of the entailment model to classify the input evidence as supporting or refuting the claims. The input evidence can be in the form of the whole passage, ground truth evidence sentences, or sentences selected by our sequence labeling model. We observe that the entailment classification model performs poorly when whole passages are passed as input evidence. However, when the specific sentences are passed as input, the precision, recall, and F1 measures improve. The reason is that our entailment classification model is trained mostly on short premises. As a result, it does better on sentence-level evidence compared to the longer passages. Finally, we carry out an end-to-end evaluation of our fact-checking system on Factual-NLI+ using various configurations of top-k passage retrieval (BM25, dense, hybrid, for various values of k \u2208 [5, 100]) and evidence selection approaches (embdedding-based and sequence labeling). Table 4 shows the macro-average F1 score for the three classes (supporting, refuting, neutral) for some of the tested configurations. We see that dense or hybrid retrieval with evidence selection using the proposed sequence labeling model gives the best results. Even though hybrid retrieval seems to lead to slightly worse performance, it requires much fewer passages (6 instead of 50) and makes the system more efficient.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1118,
"end": 1125,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Performance of Quin+",
"sec_num": "4"
},
{
"text": "We have created a demo for verifying opendomain claims using the top 20 results from a web search engine. For a given claim, Quin+ returns relevant text passages with highlighted sentences. The passages are grouped into two sets, supporting and refuting. It computes a veracity rating based on the number of supporting and refuting evidence. It returns \"probably true\" if there are more supporting evidence, otherwise it returns \"probably false\". When the number of retrieved evidence is low, it returns \"inconclusive\". Figure 4 shows a screen dump of the system with a claim that has been assessed to be probably false based on the overwhelming number of refuting sentence evidence (21 refute versus 0 support). Quin+ can also be used on a large-scale corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "System Demonstration",
"sec_num": "5"
},
{
"text": "In this work, we have presented a three-stage factchecking system. We have demonstrated how a dense retrieval model can lead to higher recall when retrieving passages for fact-checking. We have also proposed two schemes to select relevant sentences: an embedding-based approach and a sequence labeling model to improve the claim verification accuracy. Quin+ gave promising results in our extended Factual-NLI+ corpus, and is also able to verify open-domain claims using web search results. The source code of our system is publicly available 3 . Even though our system is able to verify multiple open-domain claims successfully, it has some limitations. Quin+ is not able to effectively verify multi-hop claims that require the retrieval of multiple pieces of evidence. For the verification of multi-hop claims, methodologies inspired by multi-hop question answering could be utilized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "For the future development of large-scale factchecking systems we believe that a new benchmark needs to be introduced. The currently available datasets, including Factual-NLI+, are not suitable for evaluating the verification of claims using multiple sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "https://archive.org/details/factual-nli 2 https://github.com/tantivy-search/tantivy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/algoprog/Quin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating fact checking explanations",
"authors": [
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pepa Atanasova, Jakob Grue Simonsen, Christina Li- oma, and Isabelle Augenstein. 2020. Generating fact checking explanations. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- ence. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pre-training tasks for embedding-based large-scale retrieval",
"authors": [
{
"first": "Wei-Cheng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Felix",
"middle": [
"X"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Yin-Wen",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yim- ing Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representa- tions (ICLR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enhanced lstm for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "UKP-Athene: Multi-sentence textual entailment for claim verification",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zile",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniil",
"middle": [],
"last": "Sorokin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "103--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. Ukp-athene: Multi-sentence textual entailment for claim verification. In Pro- ceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 103-108.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Billion-scale similarity search with gpus",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Big Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. Proceedings of the Annual Meeting of the Association for Computational Lin- guistics (ACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [
"N."
],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: a benchmark for question answering research. Transactions of the Association of Com- putational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "MS MARCO: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.09268"
]
},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated ma- chine reading comprehension dataset. CoRR, abs/1611.09268.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Combining fact extraction and verification with neural semantic matching networks",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Haonan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neu- ral semantic matching networks. In Proceedings of the AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics (ACL).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-stage document ranking with BERT",
"authors": [
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.14424"
]
},
"num": null,
"urls": [],
"raw_text": "Rodrigo Nogueira, W. Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with bert. ArXiv, abs/1910.14424.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hypothesis only baselines in natural language inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- ence. In Proceedings of the Joint Conference on Lexical and Computational Semantics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Latent retrieval for large-scale fact-checking and question answering with nli training",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Samarinas",
"suffix": ""
},
{
"first": "Wynne",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Mong Li",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE International Conference on Tools with Artificial Intelligence (ICTAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Samarinas, Wynne Hsu, and Mong Li Lee. 2020. Latent retrieval for large-scale fact-checking and question answering with nli training. In IEEE In- ternational Conference on Tools with Artificial In- telligence (ICTAI).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "e-FEVER: Explanations and summaries for automated fact checking",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Stammbach",
"suffix": ""
},
{
"first": "Elliott",
"middle": [],
"last": "Ash",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Conference on Truth and Trust Online (TTO)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Stammbach and Elliott Ash. 2020. e-fever: Explanations and summaries for automated fact checking. In Proceedings of the Conference on Truth and Trust Online (TTO).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "FEVER: a large-scale dataset for fact extraction and verification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In NAACL-HLT.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fact or fiction: Verifying scientific claims",
"authors": [
{
"first": "David",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Shanchuan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Van Zuylen",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Wadden, Kyle Lo, Lucy Lu Wang, Shanchuan Lin, Madeleine van Zuylen, Arman Cohan, and Han- naneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the Conference of the North American Chap- ter of the Association for Computational Linguistics (NAACL-HLT).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval",
"authors": [
{
"first": "Lee",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kwok-Fung",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jialin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Junaid",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Arnold",
"middle": [],
"last": "Overwijk",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.00808"
]
},
"num": null,
"urls": [],
"raw_text": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "UCL machine reading group: Four factor framework for fact finding (HexaF)",
"authors": [
{
"first": "Takuma",
"middle": [],
"last": "Yoneda",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pon- tus Stenetorp, and Sebastian Riedel. Ucl machine reading group: Four factor framework for fact find- ing (hexaf). In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Sample claim and the retrieved evidence passage where only the last sentence is relevant.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Three stages of claim verification in Quin+. where D b is the set of passages in a training batch b, D + b is the set of positive claim-passage pairs in the batch b, and \u03b8 represents the parameters of the BERT model.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Sequence labeling model for evidence selection from a passage for a given claim.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "The Quin+ system returning relevant evidence and a veracity rating for a claim.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"text": "Performance of passage retrieval models.",
"num": null,
"content": "<table><tr><td colspan=\"3\">(a) Factual-NLI Dataset</td><td/></tr><tr><td>Model</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>Baseline</td><td>67.74</td><td>91.87</td><td>77.98</td></tr><tr><td>Sequence labeling</td><td>94.78</td><td>92.11</td><td>93.43</td></tr><tr><td>Embedding-based</td><td>66.12</td><td>90.29</td><td>76.34</td></tr><tr><td colspan=\"3\">(b) SciFact Dataset</td><td/></tr><tr><td>Model</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>Baseline</td><td>62.21</td><td colspan=\"2\">71.54 66.55</td></tr><tr><td>Sequence labeling</td><td>69.38</td><td colspan=\"2\">68.45 68.91</td></tr><tr><td>Embedding-based</td><td>43.30</td><td colspan=\"2\">92.36 58.96</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "Performance of sentence selection methods.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Performance of entailment classification model on different forms of input evidence.",
"num": null,
"content": "<table><tr><td colspan=\"2\">Passage retrieval Sentence selection</td><td>F1</td></tr><tr><td>BM25, k=5</td><td>Embedding-based</td><td>52.76</td></tr><tr><td>BM25, k=20</td><td>Embedding-based</td><td>47.65</td></tr><tr><td>BM25, k=5</td><td>Sequence labeling</td><td>49.65</td></tr><tr><td>Dense, k=5</td><td>Embedding-based</td><td>49.03</td></tr><tr><td>Dense, k=5</td><td>Sequence labeling</td><td>52.83</td></tr><tr><td>Dense, k=50</td><td colspan=\"2\">Sequence labeling 58.22</td></tr><tr><td>Hybrid, k=6</td><td>Embedding-based</td><td>50.29</td></tr><tr><td>Hybrid, k=6</td><td colspan=\"2\">Sequence labeling 57.24</td></tr><tr><td>Hybrid, k=50</td><td>Sequence labeling</td><td>52.60</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"text": "End-to-end claim verification on Factual-NLI+ for different configurations.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}