{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:26.653575Z"
},
"title": "Verdict Inference with Claim and Retrieved Elements Using RoBERTa",
"authors": [
{
"first": "In-Zu",
"middle": [],
"last": "Gi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Central University",
"location": {
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Ting-Yu",
"middle": [],
"last": "Fang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Central University",
"location": {
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Richard",
"middle": [
"Tzong-Han"
],
"last": "Tsai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Central University",
"location": {
"country": "Taiwan"
}
},
"email": "thtsai@g.ncu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic fact verification has attracted recent research attention with the increasing dissemination of disinformation on social media platforms. The FEVEROUS shared task introduces a benchmark for fact verification, in which a system is challenged to verify a given claim using evidential elements extracted from Wikipedia documents. In this paper, we propose our 3rd-place three-stage system consisting of document retrieval, element retrieval, and verdict inference for the FEVEROUS shared task. By considering context relevance in the fact extraction and verification task, our system achieves a 0.29 FEVEROUS score on the development set and a 0.25 FEVEROUS score on the blind test set, both outperforming the FEVEROUS baseline.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic fact verification has attracted recent research attention with the increasing dissemination of disinformation on social media platforms. The FEVEROUS shared task introduces a benchmark for fact verification, in which a system is challenged to verify a given claim using evidential elements extracted from Wikipedia documents. In this paper, we propose our 3rd-place three-stage system consisting of document retrieval, element retrieval, and verdict inference for the FEVEROUS shared task. By considering context relevance in the fact extraction and verification task, our system achieves a 0.29 FEVEROUS score on the development set and a 0.25 FEVEROUS score on the blind test set, both outperforming the FEVEROUS baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The large-scale dissemination of disinformation on social media platforms, intended to mislead or deceive the general population, has become a major societal problem (Tan et al., 2020). For example, widespread disinformation about the Covid-19 vaccine has fueled anti-vaccination sentiment online and led to declining vaccination coverage. As early verification is the best way to stop disinformation from going viral, researchers have recently put effort into automatic fact verification systems.",
"cite_spans": [
{
"start": 164,
"end": 182,
"text": "(Tan et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To answer the increasing demand for such systems, the FEVER (Fact Extraction and VERification) dataset (Thorne et al., 2018) was introduced and used for the shared task of the FEVER Workshop 2018. It consists of 185,445 claims annotated with a label of \"SUPPORTED\", \"REFUTED\", or \"NOT ENOUGH INFO\", as well as sets of evidential sentences from the given pre-processed Wikipedia pages. Among the teams that participated in the shared task, Nie et al. (2019) proposed a system consisting of three connected homogeneous networks for document retrieval, sentence selection, and claim verification. Yoneda et al. (2018) proposed a four-stage system that utilizes logistic regression models for the document retrieval and sentence retrieval stages, the Enhanced Sequential Inference Model (ESIM) (Chen et al., 2017) for the natural language inference stage, and a Multi-Layer Perceptron (MLP) for the aggregation stage.",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 433,
"end": 450,
"text": "Nie et al. (2019)",
"ref_id": "BIBREF5"
},
{
"start": 587,
"end": 607,
"text": "Yoneda et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 779,
"end": 798,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To explore the ability of automatic fact verification systems over both unstructured sentences and structured table-based information, Aly et al. (2021) introduced the Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) dataset. The 2021 shared task uses the FEVEROUS dataset and, unlike the 2018 task, further requires a system to retrieve structured information from Wikipedia as evidence for each claim. The two shared tasks nevertheless share a similar setting as a fact extraction and verification problem, which makes the pipelines and methods of earlier systems worth referring to. In sum, the FEVEROUS shared task in 2021 challenges a system to extract evidential elements, primarily sentences and table cells, from the given 5.4M Wikipedia documents and to label each claim as \"SUPPORTS\", \"REFUTES\", or \"NOT ENOUGH INFO\". Systems are evaluated by jointly considering how completely the relevant Wikipedia elements are retrieved and how accurate the final verdicts are.",
"cite_spans": [
{
"start": 135,
"end": 152,
"text": "Aly et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a three-stage system, as Figure 1 shows, to improve on the FEVEROUS baseline in two aspects. First, while the baseline retriever focuses on literal relevance and word frequency through a combination of entity matching and TF-IDF, we fine-tune the BERT model (Devlin et al., 2019) to integrate context relevance for finding evidential elements and for downstream verdict inference. Second, the baseline predictor takes the claim and the concatenation of retrieved elements as input, which imposes a maximum-input-length constraint. We experiment with several ways to include more elements for verdict inference. These improvements allow us to achieve substantially higher performance than the baseline.",
"cite_spans": [
{
"start": 290,
"end": 311,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system is a three-stage model consisting of document retrieval, element retrieval, and verdict inference. Document retrieval selects the Wikipedia documents most related to the claim, given only the claim. The claim and the set of candidate elements from the most related Wikipedia documents are then passed to element retrieval, which finds the elements most evidential with respect to the claim. The final stage utilizes an NLI model for verdict inference, predicting the final verdict based on the most evidential elements and the claim.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Document retrieval extracts the most related documents from the 5.4M Wikipedia documents, given only the claim. A Wikipedia document is considered related if any of its elements is included among the evidential elements for the given claim.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Retrieval",
"sec_num": "2.1"
},
{
"text": "Our document retrieval utilizes Anserini (Yang et al., 2018), an information retrieval toolkit built on Lucene that provides an easy-to-use querying interface. Experiments have shown that Anserini is efficient at indexing large document collections and provides modern ranking methods that meet research requirements.",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Retrieval",
"sec_num": "2.1"
},
{
"text": "Based on the observation that the claim often refers to the title and the introductory section of the related Wikipedia document, we take the title and the first 10 elements of each Wikipedia document, normalize them by removing links, and then build the indices of our Wikipedia document collection. We then use Anserini to query the built indices with each claim and retrieve the k Wikipedia documents most related to the claim, along with their relatedness scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Retrieval",
"sec_num": "2.1"
},
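The BM25 ranking that Anserini applies at this stage can be illustrated with a minimal, self-contained sketch. This is an illustrative re-implementation using Anserini's default BM25 parameters (k1 = 0.9, b = 0.4), not Anserini's actual code; `bm25_scores` and its pre-tokenized inputs are our own hypothetical names.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=0.9, b=0.4):
    """Score each document against the query with BM25.

    `docs` is a list of pre-tokenized documents (lists of terms) and
    `query` a list of query terms. Illustrative sketch only; Anserini's
    Lucene-backed implementation differs in tokenization and details.
    """
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs  # average document length
    # Document frequency of each term across the collection.
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

# Rank two toy "documents" against a claim-derived query.
ranking = bm25_scores(["travis", "hafner"],
                      [["travis", "hafner", "player"],
                       ["vaccine", "claim", "online"]])
```

A document sharing terms with the query scores higher, which is the keyword-frequency behavior the paper contrasts with BERT's context-aware retrieval.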
{
"text": "For element retrieval, we experiment with two approaches, Anserini and a BERT model, to select relevant elements from the documents retrieved in the previous stage. Both methods require every element to be a sequence input, including the table elements. We apply two techniques to linearize the table elements. One is converting each cell element to a sequence of the form \"[Header] is [Cell].\" (Oguz et al., 2021), and the other is prepending the Wikipedia document title of the element to the converted sequence. Take the evidential cell element \"Travis Hafner\" as an example: we first convert it to the sequence \"Player is Travis Hafner.\", and then prepend the title to obtain \"2005 Cleveland Indians season Player is Travis Hafner.\".",
"cite_spans": [
{
"start": 396,
"end": 402,
"text": "[Cell]",
"ref_id": null
},
{
"start": 406,
"end": 425,
"text": "(Oguz et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Element Retrieval",
"sec_num": "2.2"
},
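The two linearization techniques above can be sketched as a small helper. This is a hypothetical illustration of the "[Header] is [Cell]." conversion with optional title prepending; the function and argument names are our own, not the authors' code.

```python
def linearize_cell(title, header, cell, prepend_title=True):
    """Convert a table cell to the "[Header] is [Cell]." sequence format,
    optionally prepending the Wikipedia page title (illustrative helper)."""
    seq = f"{header} is {cell}."
    return f"{title} {seq}" if prepend_title and title else seq

# The paper's example cell:
linearize_cell("2005 Cleveland Indians season", "Player", "Travis Hafner")
# -> "2005 Cleveland Indians season Player is Travis Hafner."
```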
{
"text": "For Anserini in the element retrieval stage, we use every element in the entire Wikipedia document collection to build the indices of a separate Wikipedia elements collection. Due to the mechanism of Anserini, we first retrieve the l top related Wikipedia elements given only the claim, and then apply a filter keeping only elements from the k most related documents, to utilize the benefits of the document retrieval stage and obtain the finally retrieved m elements. This post-filtering leads to a different number m of finally retrieved elements for each claim. To improve performance, we separate the retrieval procedure for sentences and tables by building two additional separate collections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element Retrieval",
"sec_num": "2.2"
},
{
"text": "For the second approach in the element retrieval stage, we fine-tune the BERT model as a binary classifier, taking the ground truth elements as positive instances and the other elements from the k most related Wikipedia documents retrieved by Anserini in the document retrieval stage as negative instances. Our BERT model takes the concatenation of the claim and the element sequence as input, and we use the output predictions to calculate normalized relatedness scores. We rank the elements by their normalized relatedness scores and select the m Wikipedia elements most related to the claim. The normalized relatedness score between the claim c_i and the j-th element is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element Retrieval",
"sec_num": "2.2"
},
{
"text": "p(x = 1|c_i, j) = e^{p_+} / (e^{p_+} + e^{p_\u2212})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element Retrieval",
"sec_num": "2.2"
},
{
"text": "where x \u2208 {0, 1} indicates whether the j-th element is positive or negative, p_+ is the prediction score for the positive class, p_\u2212 is the prediction score for the negative class, and p(x = 1|c_i, j) is the normalized score for p_+.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element Retrieval",
"sec_num": "2.2"
},
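The normalized relatedness score is a two-way softmax over the classifier's positive and negative prediction scores. A minimal sketch of the formula above (our own illustrative code, with the usual max-subtraction for numerical stability; the function name is hypothetical):

```python
import math

def relatedness_score(p_pos, p_neg):
    """Two-class softmax: p(x=1|c_i, j) = e^{p+} / (e^{p+} + e^{p-}).

    Subtracting the max logit before exponentiating leaves the ratio
    unchanged but avoids overflow for large prediction scores.
    """
    m = max(p_pos, p_neg)
    e_pos = math.exp(p_pos - m)
    e_neg = math.exp(p_neg - m)
    return e_pos / (e_pos + e_neg)

# Equal scores give 0.5; a large positive margin pushes the score toward 1.
scores = [relatedness_score(0.5, 0.5), relatedness_score(2.0, -1.0)]
```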
{
"text": "Our BERT model uses the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e\u22125, a batch size of 16, and 1 training epoch due to time constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Element Retrieval",
"sec_num": "2.2"
},
{
"text": "For the third stage of the FEVEROUS shared task, NLI is a task that matches the scenario of classifying the semantic relationship between the claim and the retrieved elements as \"SUPPORTS\", \"REFUTES\", or \"NOT ENOUGH INFO\" (NEI). Therefore, we adopt the RoBERTa (Liu et al., 2019) NLI model pre-trained on well-known NLI datasets, including SNLI, MNLI, FEVER-NLI, and ANLI (Nie et al., 2020), and experiment with variants that add an aggregation method on top of it to fully utilize the semantic information of the elements retrieved in the previous stage.",
"cite_spans": [
{
"start": 261,
"end": 279,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 368,
"end": 386,
"text": "(Nie et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verdict Inference",
"sec_num": "2.3"
},
{
"text": "For simplicity, we name our RoBERTa NLI model without aggregation RoBERTa, the RoBERTa NLI model with logical aggregation RoBERTa-LOG, and the RoBERTa NLI model with an MLP for aggregation RoBERTa-MLP, following Yoneda et al. (2018). Our RoBERTa takes the claim and the concatenation of all retrieved elements as input, while our RoBERTa-LOG and RoBERTa-MLP take the claim and each retrieved element as input.",
"cite_spans": [
{
"start": 200,
"end": 220,
"text": "Yoneda et al. (2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verdict Inference",
"sec_num": "2.3"
},
{
"text": "The RoBERTa NLI models are fine-tuned with ground truth labels in the FEVEROUS training set and additionally sampled NEI instances to mitigate the label imbalance problem. We use the Adam optimizer with a learning rate of 1e\u22126, a batch size of 8, a scheduler monitoring the development loss, and a total of 7 training epochs. Our RoBERTa-LOG simply merges the NLI predictions and outputs the label obtaining the highest score. For our RoBERTa-MLP, the MLP, containing two fully connected layers and ReLU, is trained jointly with the RoBERTa NLI model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verdict Inference",
"sec_num": "2.3"
},
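The paper does not spell out RoBERTa-LOG's merge rule beyond "the label obtaining the highest score"; one plausible reading is majority voting over the per-element NLI predictions, sketched below. This is our own guessed reconstruction, not the authors' code, and `logical_aggregate` is a hypothetical name.

```python
from collections import Counter

LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")

def logical_aggregate(per_element_preds):
    """Merge one NLI prediction per (claim, element) pair by counting
    votes and returning the label with the most support (an assumed
    reading of RoBERTa-LOG's aggregation; ties break in LABELS order)."""
    votes = Counter(per_element_preds)
    return max(LABELS, key=lambda label: votes[label])

verdict = logical_aggregate(["SUPPORTS", "SUPPORTS", "NOT ENOUGH INFO"])
```

The RoBERTa-MLP variant would instead feed the per-element prediction scores into a small trained network rather than hard votes.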
{
"text": "To evaluate document retrieval performance, we measure recall; the results are shown in Table 1. Our document retriever achieves a document coverage of 69% when retrieving the top 5 documents and 73% when retrieving the top 10 documents. Deciding the value of k is a trade-off between retrieval performance and computational resources. As a result, we set k = 5 for the downstream element retrieval using the BERT model, and experiment with different settings for the downstream element retrieval using Anserini. Table 2 shows the development set results using Anserini. The retriever with l = 5000 and prepending achieves better performance than the retrievers without prepending. From the results, we observe that prepending the title improves recall. Nevertheless, to meet the submission requirement of at most 5 sentences and 25 cells per claim, the average of 58 obtained elements requires further control over the number of retrieved elements of each type. Experiments show that different combinations of l and k, as well as separating the retrieval for sentences and tables, achieve performance comparable to the retriever with l = 5000 and prepending, while providing better control over the number and type of the retrieved elements. Table 3 shows the development set results using our BERT model. We observe results similar to Anserini: the retriever with prepending achieves better performance than the retrievers without prepending.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 568,
"end": 575,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1310,
"end": 1317,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Document Retrieval Results",
"sec_num": "3.1"
},
{
"text": "We use Anserini with l = 5000 and prepending and our BERT with prepending for a relatively fair comparison between our two element retrieval approaches. Anserini with l = 5000 and prepending covers 55% of all elements, while our BERT with prepending covers 59%, showing that our BERT substantially outperforms Anserini. Table 4 shows the development set results of our models trained on a training subset. We observe that RoBERTa-LOG reaches a 0.52 F1 score and RoBERTa-MLP a 0.22 F1 score, both Table 4: Performance of different verdict inference methods trained on a training subset. Scores are reported on the development set as per-class F1, where S denotes \"SUPPORTS\", R denotes \"REFUTES\", and NEI denotes \"NOT ENOUGH INFO\". The overall score is the macro-averaged F1.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 4",
"ref_id": null
},
{
"start": 523,
"end": 530,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Element Retrieval Results",
"sec_num": "3.2"
},
{
"text": "are much lower than RoBERTa. This indicates that, while each claim in the development set requires an average of only 4.6 elements to reach the ground truth label according to our analysis, it is inappropriate for RoBERTa-LOG and RoBERTa-MLP to weight all thirty elements evenly for each claim. Therefore, we choose our RoBERTa and simplify the input by removing potentially repeated words, allowing more elements to be included for verdict inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verdict Inference Results",
"sec_num": "3.3"
},
{
"text": "We also test the performance of our RoBERTa with different element retrieval methods using the FEVEROUS scorer, as shown in Table 5. Evidence performance is reported with a restriction of at most 5 sentences and 25 cells, as the FEVEROUS scorer limits. We observe that the quality of the upstream data is crucial to the performance of the downstream task: our RoBERTa taking elements from Anserini and from our BERT reaches 0.58 and 0.60 accuracy, respectively. Experiments also show that our RoBERTa taking elements from our BERT is more robust than from Anserini, with improvements in evidence precision, recall, and F1 score. Our RoBERTa taking elements from our BERT also outperforms the baseline with a 0.1 improvement in FEVEROUS score.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Verdict Inference Results",
"sec_num": "3.3"
},
{
"text": "Based on our observations of the performance at all three stages, we choose as our final system the combination of Anserini for document retrieval, our BERT for element retrieval, and our RoBERTa for verdict inference. The blind test results of our final system are presented in Table 6. Our final system proves robust and outperforms the FEVEROUS baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Verdict Inference Results",
"sec_num": "3.3"
},
{
"text": "Our system proves that performing document retrieval, element retrieval and verdict inference in the three-phase procedure is a proper pipeline for Table 6 : Performance of systems on blind test results. Our final system is Anserini for document retrieval with k = 5, BERT for element retrieval with prepending, and RoBERTa for verdict inference. The candidate system is Anserini for both document and separated retrieval of different element types as well as RoBERTa for verdict inference.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4"
},
{
"text": "the fact extraction and verification shared task. For evidence retrieval (document and element retrieval), our proposed methods, the BM25-based Anserini and the context-aware BERT model, consider both the presence of certain keywords and the semantic context. Hence, the system is able to extract related elements over both unstructured sentences and structured table cells.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4"
},
{
"text": "Nevertheless, retrieval performance still has room for improvement in two aspects. One is tuning the weighting between the presence of certain keywords and the semantic context. The other is carefully designing the fine-tuning of the BERT model for element retrieval. During fine-tuning, we use the output of the document retrieval stage and label as negative all elements from the 5 most related documents that are not annotated as evidence for the corresponding claim. Since each claim has at most 3 evidence sets and an average of nearly 4 elements per set, our BERT model suffers from label imbalance: the process includes many negative and few positive instances. The positive instances in fine-tuning are also rather few because we use the output of the document retrieval stage, which achieves only 69% document coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4"
},
{
"text": "We describe our 3rd-place system for the FEVEROUS shared task, built on the three-stage setup of document retrieval, element retrieval, and verdict inference. By considering context relevance in the fact extraction and verification task, our system achieves a 0.29 FEVEROUS score on the development set and a 0.25 FEVEROUS score on the blind test set, both outperforming the FEVEROUS baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "FEVEROUS: fact extraction and verification over unstructured and structured information",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Aly",
"suffix": ""
},
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Sejr"
],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Cocarascu",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: fact extraction and verification over unstructured and structured information. CoRR, abs/2106.05707.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enhanced lstm for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/p17-1152"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Combining fact extraction and verification with neural semantic matching networks",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Haonan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu",
"volume": "",
"issue": "",
"pages": "6859--6866",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016859"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neu- ral semantic matching networks. In The Thirty- Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Ad- vances in Artificial Intelligence, EAAI 2019, Hon- olulu, Hawaii, USA, January 27 -February 1, 2019, pages 6859-6866. AAAI Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unik-qa: Unified representations of structured and unstructured knowledge for open-domain question answering",
"authors": [
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Peshterliev",
"suffix": ""
},
{
"first": "Dmytro",
"middle": [],
"last": "Okhonko",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2021. Unik-qa: Unified representations of structured and unstructured knowledge for open-domain question answering.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Detecting cross-modal inconsistency to defend against neural fake news",
"authors": [
{
"first": "Reuben",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Plummer",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2081--2106",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.163"
]
},
"num": null,
"urls": [],
"raw_text": "Reuben Tan, Bryan Plummer, and Kate Saenko. 2020. Detecting cross-modal inconsistency to de- fend against neural fake news. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2081-2106, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "FEVER: a large-scale dataset for fact extraction and verification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1074"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": 2018,
"venue": "",
"volume": "1",
"issue": "",
"pages": "809--819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 809-819. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Anserini: Reproducible ranking baselines using lucene",
"authors": [
{
"first": "Peilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM J. Data Inf. Qual",
"volume": "10",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3239571"
]
},
"num": null,
"urls": [],
"raw_text": "Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible ranking baselines using lucene. ACM J. Data Inf. Qual., 10(4):16:1-16:20.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "UCL machine reading group: Four factor framework for fact finding (HexaF)",
"authors": [
{
"first": "Takuma",
"middle": [],
"last": "Yoneda",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5515"
]
},
"num": null,
"urls": [],
"raw_text": "Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pon- tus Stenetorp, and Sebastian Riedel. 2018. UCL ma- chine reading group: Four factor framework for fact finding (HexaF). In Proceedings of the First Work- shop on Fact Extraction and VERification (FEVER), pages 97-102, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "System Overview: Document Retrieval, Element Retrieval, and Verdict Inference.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Both aggregation methods have been proved effective in",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "Recall is calculated by the frequency of the ground truth document occurrence in the retrieved documents. k indicates the number of retrieved documents.",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"5\">Prepend m Sentence Table Overall</td></tr><tr><td>-</td><td>40</td><td>0.73</td><td>0.16</td><td>0.37</td></tr><tr><td/><td>40</td><td>0.71</td><td>0.52</td><td>0.59</td></tr></table>",
"type_str": "table",
"text": "Recall is calculated on an element level with our Anserini. l indicates the number of retrieved elements by our Anserini. (s) and (c) indicate separated retrieval for sentence and cells respectively. Prepend indicates whether the title is prepended to the linearized element sequence. k indicates the number of retrieved documents used to filter the elements. avg-m indicates the averagely retrieved m elements eventually after the filter of k most related documents on l elements.",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"text": "Coverage for sentence, table, and overall performance with our BERT using the 5 most related Wikipedia documents retrieved by our Anserini in the document retrieval stage. m indicates the number of retrieved elements. Prepend indicates whether the title is prepended to the linearized element sequence.",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Evidence</td></tr></table>",
"type_str": "table",
"text": "Performance of different element retrieval methods using our RoBERTa. Scores are reported on the development set using the FEVEROUS scorer. The Anserini uses l = 5000 with prepending and a filter of k = 10 documents. The BERT uses k = 5 documents retrieved from the previous stage and utilizes prepending for element retrieval.",
"html": null,
"num": null
}
}
}
}