{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:27.831960Z"
},
"title": "The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Aly",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Oana",
"middle": [],
"last": "Cocarascu",
"suffix": "",
"affiliation": {},
"email": "oana.cocarascu@kcl.ac.uk"
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": "",
"affiliation": {},
"email": "arpitmittal@fb.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was \"Bust a move!\", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was \"Bust a move!\", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automated fact verification has become an important field of research, as fact-checkers and journalists are facing an ever-increasing volume of claims to verify (Thorne and Vlachos, 2018) . This task has been explored by the NLP community through forums and shared tasks such as CLEF CheckThat! (Nakov et al., 2021) , SemEval (Wang et al., 2021) and FEVER (Thorne et al., 2018b) , as well as a number of datasets aimed at modelling parts of the task (Karadzhov et al., 2017; Wang, 2017; Augenstein et al., 2019; Gupta et al., 2020) .",
"cite_spans": [
{
"start": 161,
"end": 187,
"text": "(Thorne and Vlachos, 2018)",
"ref_id": null
},
{
"start": 295,
"end": 315,
"text": "(Nakov et al., 2021)",
"ref_id": "BIBREF15"
},
{
"start": 318,
"end": 345,
"text": "SemEval (Wang et al., 2021)",
"ref_id": null
},
{
"start": 356,
"end": 378,
"text": "(Thorne et al., 2018b)",
"ref_id": "BIBREF19"
},
{
"start": 450,
"end": 474,
"text": "(Karadzhov et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 475,
"end": 486,
"text": "Wang, 2017;",
"ref_id": "BIBREF22"
},
{
"start": 487,
"end": 512,
"text": "Augenstein et al., 2019;",
"ref_id": null
},
{
"start": 513,
"end": 532,
"text": "Gupta et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While these previous works focus on claims that are verified against a single type of evidence, such as text or structured information, the new FEVEROUS dataset (Aly et al., 2021) we study in this shared task requires the models to reason about both types of evidence. This helps better approximate real-world fact checking, where both the claims and the sources of evidence are more complex in nature.",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "(Aly et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "FEVEROUS models the task of Fact Extraction and VERification Over Unstructured and Structured information, overcoming some limitations of the previous FEVER dataset (Thorne et al., 2018a) , which only considers text as evidence, and improving the quality of the annotations as well as removing known biases from the dataset. FEVEROUS contains 87,026 new claims which are more complex (25.3 words/claim on average compared to 9.4 for FEVER), and a larger pool of evidence (tables, lists, and sentences from the entirety of Wikipedia), bringing us closer to real-world scenarios, while maintaining the experimental control of an artificially designed dataset.",
"cite_spans": [
{
"start": 165,
"end": 187,
"text": "(Thorne et al., 2018a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a short description of the task and dataset, the final test phase leaderboard, and a summary of the submissions with a comparison to previous FEVER shared tasks, an analysis of current challenges and a discussion around interesting research directions for this task. The shared task received 13 entries in total, with the winning team, \"Bust a move!\", achieving a score of 27%, 9 percentage points higher than the baseline system we released. While considerable progress was made by the participants of the task, there are still plenty of opportunities for systems to improve. We will leave the scoring system open to allow future work to build upon the advances made in this shared task. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a human-authored claim, systems had to first retrieve evidence from Wikipedia in the form of sentences and table cells, each accompanied by the page/section titles and column headers they were found under, respectively. They then had to classify whether the claim is SUPPORTED or REFUTED based on the evidence, or NOTENOUGHINFO if the claim cannot be verified. System responses would be scored both on the evidence retrieval and the label classification. Note that unlike in the original FEVER shared task, (partial) evidence needs to be provided for the NOTENOUGHINFO claims. Each claim in the FEVEROUS dataset could have multiple ways of being verified, which is represented in the different evidence sets, each with potentially multiple pieces of evidence. The participating systems only had to provide one complete evidence set for their response to be considered correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "We provided the training and development datasets through the FEVER website (https://fever.ai/dataset/feverous.html) and as an open source dataset (https://doi.org/10.5281/zenodo.4911507). A reserved portion of the dataset was released as a blind test set without the gold annotations (labels + evidence) to be used in the final phase of the challenge. The training data and the blind test set are described in (Aly et al., 2021), with each split's label distribution thus known in advance. The label distribution of the dataset is only roughly balanced for the blind test set. The number of evidence sets with only textual evidence is slightly higher than that of sets that contain only tabular evidence or sets that require a combination of different evidence types (c.f. Table 1). ",
"cite_spans": [
{
"start": 411,
"end": 429,
"text": "(Aly et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "Figure 2: The pipeline of the FEVEROUS baseline, illustration taken from (Aly et al., 2021) .",
"cite_spans": [
{
"start": 73,
"end": 91,
"text": "(Aly et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Retrieval",
"sec_num": null
},
{
"text": "The FEVEROUS shared task was hosted as a challenge on EvalAI (https://eval.ai/web/challenges/challenge-page/1091), where participants were invited to submit predictions against the blind test set. Participants had about three days (24th July to 27th July 2021), starting from the release of the unlabeled blind test set, to make up to three submissions. Only a team's final submission was considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submissions",
"sec_num": "2.2"
},
{
"text": "The platform was open for submissions on the development split one month prior, to allow participants to become familiar with the submission environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submissions",
"sec_num": "2.2"
},
{
"text": "The FEVEROUS baseline, shown schematically in Figure 2 , employs a multi-stage retrieval pipeline, followed by a verdict prediction module. The most relevant documents are retrieved using a combination of entity matching and TF-IDF. The latter is then used to extract the most relevant sentences and tables from the selected documents. To retrieve relevant cells from extracted tables, a cell extraction model linearizes them and treats the extraction as a sequence labelling task. A RoBERTa classifier (Liu et al., 2019) pre-trained on multiple NLI datasets, and fine-tuned on the FEVEROUS data, then predicts the veracity of the claim using the retrieved evidence and its context. (For early reported scores of the baseline, the classifier was not trained on the entire FEVEROUS data; these preliminary scores were almost identical to the final ones, with accuracy, evidence precision, and F1 being around 0.01 points higher than here, but the FEVEROUS score being marginally lower.) Since the FEVEROUS dataset is imbalanced regarding NEI labels (5% of claims), the baseline additionally samples artificial NEI instances for training by partially removing evidence pieces from annotations. This baseline substantially outperforms the sentence-only and table-only baselines (Aly et al., 2021).",
"cite_spans": [
{
"start": 503,
"end": 521,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1275,
"end": 1293,
"text": "(Aly et al., 2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "2.3"
},
{
"text": "Similar to the previous shared tasks of the FEVER Workshop (Thorne et al., 2018b) , the scoring in FEVEROUS considers both the evidence retrieval and the claim labels. While we track the scores for each of these aspects individually (label accuracy, evidence P/R/F1), we use the FEVEROUS score defined in Aly et al. (2021) as the primary score for the challenge, defined as follows. For a given claim, a prediction is considered correct only if at least one complete gold evidence set E is a subset of the predicted evidence \u00ca and the predicted label is correct. We recognise that the evidence annotations are unlikely to be exhaustive, and measuring precision would penalise correct evidence missed by the annotators. Instead, we set a limit on the number of evidence pieces systems can return for each claim and allow only 5 predicted sentence items and 25 predicted table cells (the latter include table captions and list items). If additional evidence was returned, it was discarded without penalty.",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "(Thorne et al., 2018b)",
"ref_id": "BIBREF19"
},
{
"start": 305,
"end": 322,
"text": "Aly et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FEVEROUS score",
"sec_num": "2.4"
},
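The correctness rule above can be sketched in a few lines of Python (an illustrative sketch only, not the official scorer; all function and variable names here are ours):

```python
# Sketch of the FEVEROUS correctness rule described above.
# Illustrative only -- not the official scorer; all names are ours.

MAX_SENTENCES = 5  # cap on predicted sentence items per claim
MAX_CELLS = 25     # cap on predicted cells (incl. table captions / list items)

def truncate(predicted, kind_of):
    """Keep the first 5 sentence items and 25 cell items; any additional
    evidence is discarded without penalty, as in the shared task."""
    sentences = [e for e in predicted if kind_of(e) == "sentence"][:MAX_SENTENCES]
    cells = [e for e in predicted if kind_of(e) == "cell"][:MAX_CELLS]
    return set(sentences) | set(cells)

def feverous_correct(pred_label, gold_label, pred_evidence, gold_sets, kind_of):
    """Correct iff the label matches AND at least one complete gold
    evidence set is a subset of the (truncated) predicted evidence."""
    retained = truncate(pred_evidence, kind_of)
    return pred_label == gold_label and any(set(s) <= retained for s in gold_sets)

def feverous_score(examples, kind_of):
    """Fraction of claims judged correct under the rule above."""
    return sum(feverous_correct(*ex, kind_of) for ex in examples) / len(examples)
```

Under this rule a system may return up to 30 evidence pieces per claim, but only a fully covered gold evidence set (together with a correct label) counts toward the score.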
{
"text": "The results for all submissions to the shared task are shown in Table 2. Further breakdowns are provided per class (SUPPORTED, REFUTED, NOTENOUGHINFO) in Table 2, and per type of evidence needed (textual-only, tabular-only, or both) in Table 4. The latter results are further analysed in Table 6 for each type of evidence. In Table 3 we summarise the participating systems; we asked each team for a short description of their approach. 7 teams sent us system descriptions, six of which also submitted a paper. The descriptions appear in Appendix A (as sent by the authors except for minor typographic corrections), with the accompanying paper citation if one was submitted. In the remainder of this section we present our observations on the techniques used by the participants. The architecture followed by participating teams consisted of evidence retrieval followed by verdict prediction. Evidence retrieval was decomposed into page retrieval, followed by selecting textual (i.e. sentences) and tabular evidence (i.e. table cells) from the retrieved pages. Verdict prediction combined the retrieved evidence to return a label for the claim.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 154,
"end": 161,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 236,
"end": 243,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 288,
"end": 295,
"text": "Table 6",
"ref_id": "TABREF13"
},
{
"start": 326,
"end": 333,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Page retrieval Page retrieval was mostly kept simple, relying on term-matching for efficiency. Bust a move!, NCU, and METUIS used BM25 (the latter two using the implementation of Anserini), while Martin Funkquist used vanilla TF-IDF matching. Papelo and Albatross, following the baseline, combined TF-IDF matching with entity matching, and EURECOM_Fever reranked its results with a BERT model pre-trained on MS MARCO (Nguyen et al., 2016) . Overall, focusing on the entities in the claim for page retrieval was found to be beneficial by the participants.",
"cite_spans": [
{
"start": 417,
"end": 438,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Sentence selection In order to select sentences to be used as evidence, many teams used continuous representations to capture semantic affinity with the claim. Bust a move! applied a three-stage process consisting of multi-hop dense passage retrieval (Xiong et al., 2021) trained on FEVEROUS data, followed by BM25 filtering to ensure that sentences containing named entities mentioned in the claim are not ranked below sentences semantically related to the claim but about different entities, and a final re-ranking step using a fine-tuned RoBERTa model. The latter is trained iteratively using a scheme that identifies hard negative examples using previous versions of the model. Their method performs better than other systems on claims where the disambiguation of a claim's entity was a major challenge or where an article's name is not mentioned in the claim itself (c.f. Table 3 ), suggesting that their negative sampling method is effective. Papelo used a fine-tuned RoBERTa model for sentence selection combined with a next-hop predictor based on T5 (Raffel et al., 2020) that aims to retrieve evidence complementing the pieces already retrieved. NCU used a BERT evidence classifier fine-tuned on FEVEROUS data, while METUIS developed a BERT-based QA model using the data provided. Martin Funkquist and EURECOM_Fever used TF-IDF matching following the baseline. Table and cell selection The cells from the tables were often chosen using the same approach teams used to select sentences from documents, but trained on tabular data from the task (Bust a move!, Papelo, NCU, EURECOM_Fever). Cells are treated as text by linearizing them, concatenating their content and context with special markup. As context, teams considered elements such as the page and section titles and the column headers under which a cell appears. Evidence retrieval from different locations Claims requiring information from two or more different sections or articles (termed multi-hop reasoning in FEVEROUS) were a challenge to all systems, with neither of the two systems employing multi-hop evidence retrieval (Bust a move!, Papelo) scoring better on them (c.f. Table 3 ). However, we note that both teams' multi-hop evidence retrieval focuses on the iterative retrieval of evidence, while for multi-hop claims labelled in FEVEROUS, direct semantic matching with only the claim is sufficient in many cases.",
"cite_spans": [
{
"start": 269,
"end": 289,
"text": "(Xiong et al., 2021)",
"ref_id": "BIBREF23"
},
{
"start": 1075,
"end": 1095,
"text": "(Raffel et al., 2020",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 894,
"end": 901,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 2055,
"end": 2062,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
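The cell linearization described above can be sketched as follows (a hedged illustration: the [PAGE]/[SECTION]/[HEADER]/[CELL] markup tokens and the function name are our own assumptions, not any team's exact format; the example values echo the paper's Roberto Fico claim):

```python
def linearize_cell(page_title, section_title, headers, cell_value):
    """Flatten a table cell and its context into a single string.
    The [PAGE]/[SECTION]/[HEADER]/[CELL] tokens are illustrative markup,
    not the exact format used by any participating team."""
    parts = [f"[PAGE] {page_title}", f"[SECTION] {section_title}"]
    parts += [f"[HEADER] {h}" for h in headers]
    parts.append(f"[CELL] {cell_value}")
    return " ".join(parts)
```

The resulting string can then be scored against the claim by the same text-based ranking models used for sentence selection.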
{
"text": "Verdict prediction For verdict prediction, the top two teams developed models taking into account the fact that the evidence during testing will be noisy given that retrieval is imperfect. Thus they trained models using retrieved evidence (Papelo) or combining it with gold evidence from the data (Bust a move!). The latter considered all evidence to be of tabular form and used two instantiations of a TAPAS model, whose predictions were aggregated using an MLP. On the other hand, Papelo considered all evidence to be of sentence form using a simple markup to encode the table structure, and trained a T5 model on FEVEROUS data. In addition they facilitated the handling of mathematical reasoning at this stage by encoding numbers and relations between them into \"math hints\" that were added as a prefix to the input. By using these hints, Papelo achieves by far the highest scores on claims that require numerical reasoning, as seen in Table 3 . As this type of reasoning is typically also more relevant to claims requiring tabular evidence, it is possibly part of the reason Papelo performs substantially better on such claims (Table 4 ). Yet, all systems appear to struggle when a claim requires both textual and tabular evidence, as seen in Table 4 . With the exception of Papelo, the score tends to follow the tabular-only performance, which was more challenging for most systems. Following the baseline, NCU, EURECOM_Fever and Albatross treated all evidence as text and relied on some form of pretrained NLI model fine-tuned to the data from the shared task. Albatross fine-tuned several existing NLI models and used majority voting to obtain the final verdict. METUIS used a pre-trained NLI model without further tuning which was applied to each piece of evidence retrieved, combining the predictions heuristically. Finally, Martin Funkquist handled textual evidence using RoBERTa and tabular evidence using TAPAS, aggregating their results using an MLP. ",
"cite_spans": [],
"ref_spans": [
{
"start": 939,
"end": 946,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 1131,
"end": 1139,
"text": "(Table 4",
"ref_id": "TABREF8"
},
{
"start": 1247,
"end": 1254,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Handling NOTENOUGHINFO (NEI) claims in FEVEROUS is substantially more challenging than in the FEVER shared task for two reasons: i) (partial) evidence must be retrieved for a prediction to be considered correct, ii) the dataset is very imbalanced, with relatively few NEI instances in the training set. As seen in Table 5 , NEI performance was poor on the whole, with the top two systems opting not to predict any NEI instances. In contrast to Bust a move!, Papelo by design does not predict any NEI instances, replacing instances in the training set labelled as NEI with Supported, turning the task into a binary classification task. While Papelo explored sampling artificial NEI instances, by labelling any instance with incomplete extracted evidence as NEI, their model performs much worse in this scenario, overpredicting NEI. A possible cause is that their prediction model is still trained on noisy evidence, creating the additional challenge of distinguishing a complete evidence set with possibly irrelevant evidence from an incomplete evidence set. This further presents an explanation for Bust a move!'s performance on NEI, as they also train their prediction model on both complete and incomplete evidence, which makes it more robust to imperfect retrieval for supported and refuted instances, yet making it impossible for the model to correctly distinguish these from instances with not enough information. Interestingly, worse overall systems did better on NEI predictions, with Albatross and METUIS receiving a relatively balanced F1 score across all classes. This can possibly be attributed to their explicit treatment of the NEI class, with METUIS using a verdict heuristic to predict the NEI class if none of the extracted evidence pieces provides enough confidence in supporting or refuting a claim. They also report that their model overpredicts NEI instances on the development split, suggesting that it should be fine-tuned on the dataset. We measured how the metric of the FEVEROUS shared task correlates with performance on both components of the task, namely evidence retrieval and veracity prediction, and how the FEVEROUS scores compare to the scores obtained in FEVER (Thorne et al., 2018b) . As seen in Figure 3 , both FEVER and FEVEROUS scores for systems participating in the respective shared tasks strongly positively correlate with an increased label accuracy (Pearson correlation of \u03c1 = 0.92 for both). However, concerning the retrieval component, it can be seen that the FEVEROUS scores correlate with the evidence F1 (\u03c1 = 0.83) more strongly than the FEVER scores (\u03c1 = 0.41), especially in terms of recall (\u03c1 = 0.97 and \u03c1 = 0.53, respectively). Again, this is a consequence of correct NEI predictions not requiring any evidence in FEVER.",
"cite_spans": [
{
"start": 2195,
"end": 2217,
"text": "(Thorne et al., 2018b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 2231,
"end": 2239,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
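The Pearson correlations reported above can be computed with a short helper (a generic sketch; the toy inputs in the usage below are made up, not the shared task's actual scores):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied to, e.g., the per-system FEVEROUS scores and label accuracies, a value near 1 indicates the strong positive correlation described above; perfectly linear inputs such as pearson([1, 2, 3], [2, 4, 6]) give 1.0.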
{
"text": "Figure 3 : Correlation between FEVER/FEVEROUS score and Label Accuracy, F1, and Recall, respectively. The FEVEROUS baseline system is stronger than the one proposed for FEVER relative to the respective submitted systems, achieving a higher score than more than half of participating teams, while the FEVER baseline only performed better than around a fifth of all systems. FEVEROUS scores are generally much lower than FEVER scores, with retrieval scores being particularly low for FEVEROUS. While this is likely partly due to the NEIs being easier to predict and more numerous in FEVER (by only predicting NEI, a system would already get a score of 0.33), it might also be a result of artefacts in the original FEVER dataset, as a claim-only baseline was able to get an accuracy score of about 62% (Schuster et al., 2019), compared to a majority-class baseline of 33%. In contrast, the claim-only baseline on FEVEROUS achieves a score of 58% against a majority-class baseline of 56%. Yet, peculiarities of the FEVEROUS dataset observed by participants are a higher number of redundant evidence pieces than in FEVER (Malon, 2021) (likely a result of the higher complexity of evidence annotation), as well as a considerable number of claims that are refuted/NEI due to a single piece of information being incorrect/missing in an otherwise supported claim (Bouziane et al., 2021) . Since claims are much longer in FEVEROUS than in FEVER, such cases are much harder to identify. Similar to the FEVER 2.0 challenge (Thorne et al., 2019) where in a build-it, break-it, fix-it competition teams created adversarial attacks (breakers) against systems that were trained on the FEVER dataset (builders), to identify biases and weaknesses and address them (fixers), such a challenge might provide highly valuable insights into the FEVEROUS dataset to foster further research on this task. ",
"cite_spans": [
{
"start": 1116,
"end": 1129,
"text": "(Malon, 2021)",
"ref_id": "BIBREF13"
},
{
"start": 1354,
"end": 1377,
"text": "(Bouziane et al., 2021)",
"ref_id": "BIBREF2"
},
{
"start": 1511,
"end": 1532,
"text": "(Thorne et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "A.1 Bust a move! (Bouziane et al., 2021) We proposed a novel architecture to handle the joint retrieval and entailment over unstructured and structured information. To verify a claim, we first retrieve documents and filter the most relevant tables using BM25. Therefrom, our passage retriever extracts the relevant pieces of evidence, which can be either sentences or table cells. Finally, we obtain the verdict prediction by performing entailment using a TAPAS-based ensemble model. For retrieval, we proposed a novel training paradigm, Reinforced Adaptive Retrieval Embedding (RARE), which is inspired by reinforcement learning. It consists of re-ranking the BM25 retrieved hard-negative samples based on a snapshot of the embedding model of the last epoch. RARE samples better hard negatives, helping the model correct itself and preventing overfitting. For entailment, we proposed Noisy Entailment through Adapted Training (NEAT) that consists of two models trained on golden and noisy evidence sets, respectively. Together, they will see both relevant and irrelevant passages during training to make the ensemble more robust to noisy inputs at inference.",
"cite_spans": [
{
"start": 17,
"end": 40,
"text": "(Bouziane et al., 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A System Description Summaries Submitted by Participants",
"sec_num": null
},
{
"text": "A.2 Papelo (Malon, 2021) We develop a system for the FEVEROUS fact extraction and verification task that ranks an initial set of potential evidence and then pursues missing evidence in subsequent hops by trying to generate it, with a \"next hop prediction module\" whose output is matched against page elements in a predicted article. Seeking evidence with the next hop prediction module continues to improve FEVEROUS score for up to seven hops. Label classification is trained on possibly incomplete extracted evidence chains, utilizing hints that facilitate numerical comparison. The system achieves .281 FEVEROUS score and .658 label accuracy on the development set, and finishes in second place with .259 FEVEROUS score and .576 label accuracy on the test set.",
"cite_spans": [
{
"start": 11,
"end": 24,
"text": "(Malon, 2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A System Description Summaries Submitted by Participants",
"sec_num": null
},
{
"text": "A.3 NCU (Gi et al., 2021) Our 3rd place FEVEROUS system is a three-stage model consisting of document retrieval, element retrieval, and verdict inference. Our document retrieval utilizes Anserini , an information retrieval toolkit built on Lucene. For element retrieval, we experiment with two different approaches, the Anserini and the BERT model, to select relevant elements from documents retrieved in the previous stage. For the third stage, we adopt the RoBERTa NLI model pre-trained on well-known NLI datasets, including SNLI, MNLI, FEVER-NLI, ANLI (Nie et al., 2020) , and experiment on its variants with an aggregation method to fully utilize the semantic information of the elements retrieved in the previous stage. Our system improves the FEVEROUS baseline in two aspects. First, while the baseline retriever pays attention to literal relevance with a combination of entity matching and TF-IDF, we fine-tune the BERT model to integrate more semantic relevance for finding evidential elements and downstream verdict inference. Second, the baseline predictor uses the concatenation of the claim and elements as input, having a maximum length constraint. We experiment with several ways to include more elements for verdict inference. These improvements allow us to achieve a 0.29 FEVEROUS score on the development set and a 0.25 FEVEROUS score on the blind test set, both outperforming the FEVEROUS baseline.",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Gi et al., 2021)",
"ref_id": "BIBREF6"
},
{
"start": 555,
"end": 573,
"text": "(Nie et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A System Description Summaries Submitted by Participants",
"sec_num": null
},
{
"text": "A.4 EURECOM_Fever (Saeed et al., 2021) It is clear that enhancing evidence retrieval plays a vital role in any fact-checking system. In our submission, we focus on enhancing the identification of Wikipedia pages by utilizing advances in the information retrieval (IR) community, where neural ranking models have been proposed for better data retrieval (Mitra et al., 2017) . We extend the baseline by providing a two-stage re-ranking process in the spirit of simple IR systems: (a) first, numerous pages relevant to a given query are retrieved from a corpus using entity-matching and TF-IDF (Chen et al., 2017) and (b) second, the pages are scored and reranked using a more computationally-demanding method. Given that neural ranking methods have shown success in the IR community (Guo et al., 2020) , we used one as part of our extension to the baseline, where a re-ranker provides a score for every (query, table) pair. We then retain the tables with the top scores. The re-ranker is based on a pre-trained BERT model that is fine-tuned on the passage re-ranking task of the MS MARCO (Nguyen et al., 2016) dataset to minimize the binary cross-entropy loss. This extension to the baseline was enough to beat it, showing that having higher recall with a computationally-demanding method would be more effective for evidence retrieval than standard mechanisms.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "(Saeed et al., 2021)",
"ref_id": null
},
{
"start": 353,
"end": 373,
"text": "(Mitra et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 583,
"end": 602,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 773,
"end": 791,
"text": "(Guo et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 1078,
"end": 1099,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A System Description Summaries Submitted by Participants",
"sec_num": null
},
{
"text": "A.5 Martin Funkquist (Funkquist, 2021) The proposed system consists of three main parts: document retrieval, evidence retrieval, and label prediction. The first part retrieves the most relevant documents using TF-IDF vector similarity scores between the claim and the title and body text of the documents. Evidence is then retrieved from these documents: similarity scores between TF-IDF vectors select the textual evidence, and similarity scores between dense vectors created by fine-tuned TaPaS models select the tabular evidence. Finally, the evidence is passed through a dense neural network to produce a veracity label, where the input consists of vectors created by a pre-trained RoBERTa model for the sentence evidence and a TaPaS model for the table evidence.",
"cite_spans": [
{
"start": 21,
"end": 38,
"text": "(Funkquist, 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A System Description Summaries Submitted by Participants",
"sec_num": null
},
{
"text": "For the retrieval part, we experimented with spaCy's transformer-based NER and FastText matching instead of TF-IDF. We also analyzed how performance changes across parameters such as page count and table count in the TF-IDF module of the baseline implementation. For verdict prediction, we fine-tuned several publicly available NLI models on the competition data. We also tried a majority-vote strategy for creating the test predictions, using the various verdict prediction models that we had trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.6 Albatross",
"sec_num": null
},
{
"text": "A.7 METUIS (Temiz et al., 2021) We propose a pipeline that retrieves documents by using Anserini indexing on top of the Wikipedia dump. After document retrieval, evidence related to the claim is selected using a BERT-Large-Cased question answering model, and the results of the QA model are sorted using the Universal Sentence Encoder score, which measures the similarity between the claim and the document portion. The final verdict for the claim is determined by an XLNet natural language inference model, which compares the evidence and the claim. Besides the sentence evidence, cell evidence is obtained with the TAPAS table question answering model and by looking at the match score between the entities of the claim and the cell values. The pipeline is fully unsupervised, and the models used in the pipeline require no additional training.",
"cite_spans": [
{
"start": 12,
"end": 32,
"text": "(Temiz et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.6 Albatross",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Amazon for sponsoring the dataset generation and supporting the FEVER workshop and the FEVEROUS shared task. Rami Aly is supported by the Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership. James Thorne is supported by an Amazon Alexa Graduate Research Fellowship. Zhijiang Guo, Michael Schlichtkrull and Andreas Vlachos are supported by the ERC grant AVeriTeC (GA 865958).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "FEVEROUS: Fact extraction and VERification over unstructured and structured information",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Aly",
"suffix": ""
},
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Sejr"
],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Cocarascu",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2021,
"venue": "Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: Fact extraction and VERification over unstructured and structured information. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Dongsheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"Chaves"
],
"last": "Lima",
"suffix": ""
},
{
"first": "Casper",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4685--4697",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1475"
]
},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Chris- tian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4685-4697, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "FaBULOUS: Fact-checking based on understanding of language over unstructured and structured information",
"authors": [
{
"first": "Mostafa",
"middle": [],
"last": "Bouziane",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Perrin",
"suffix": ""
},
{
"first": "Amine",
"middle": [],
"last": "Sadq",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Aur\u00e9lien",
"middle": [],
"last": "Cluzeau",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Mardas",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mostafa Bouziane, Hugo Perrin, Amine Sadq, Thanh Nguyen, Aur\u00e9lien Cluzeau, and Julien Mardas. 2021. FaBULOUS: Fact-checking based on understanding of language over unstructured and structured infor- mation. In Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER), pages 31 -40. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading Wikipedia to answer open-domain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00051"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and An- toine Bordes. 2017. Reading wikipedia to an- swer open-domain questions. arXiv preprint arXiv:1704.00051.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "TabFact: A large-scale dataset for table-based fact verification",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yunkai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shiyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. TabFact: A Large-scale Dataset for Table-based Fact Verification. In ICLR, pages 1-14.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Funkquist",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "92--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Funkquist. 2021. Combining sentence and ta- ble evidence to predict veracity of factual claims us- ing TaPaS and RoBERTa. In Proceedings of the Fourth Workshop on Fact Extraction and VERifica- tion (FEVER), pages 92 -101. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Verdict inference with claim and retrieved elements using RoBERTa",
"authors": [
{
"first": "In-Zu",
"middle": [],
"last": "Gi",
"suffix": ""
},
{
"first": "Ting-Yu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Richard Tzong-Han",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "60--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In-Zu Gi, Ting-Yu Fang, and Richard Tzong-Han Tsai. 2021. Verdict inference with claim and retrieved elements using roberta. In Proceedings of the Fourth Workshop on Fact Extraction and VERifica- tion (FEVER), pages 60 -66. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A deep look into neural ranking models for information retrieval",
"authors": [
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Liu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Qingyao",
"middle": [],
"last": "Ai",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Zamani",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "W.",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2020,
"venue": "Information Processing & Management",
"volume": "57",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W Bruce Croft, and Xueqi Cheng. 2020. A deep look into neural ranking models for information re- trieval. Information Processing & Management, 57(6):102067.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "INFOTABS: Inference on tables as semi-structured data",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Maitrey",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Pegah",
"middle": [],
"last": "Nokhiz",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2309--2324",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.210"
]
},
"num": null,
"urls": [],
"raw_text": "Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 2309-2324, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Open domain question answering over tables via dense retrieval",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Syrine",
"middle": [],
"last": "Krichene",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "512--519",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.43"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Herzig, Thomas M\u00fcller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain ques- tion answering over tables via dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 512-519, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TaPas: Weakly supervised table parsing via pre-training",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Pawel",
"middle": [
"Krzysztof"
],
"last": "Nowak",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Piccinno",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4320--4333",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.398"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas M\u00fcller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4320-4333, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fully automated fact checking using external sources",
"authors": [
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Koychev",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "344--353",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_046"
]
},
"num": null,
"urls": [],
"raw_text": "Georgi Karadzhov, Preslav Nakov, Llu\u00eds M\u00e0rquez, Alberto Barr\u00f3n-Cede\u00f1o, and Ivan Koychev. 2017. Fully automated fact checking using external sources. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 344-353, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Team Papelo at FEVEROUS: Multi-hop evidence pursuit",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Malon",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "40--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Malon. 2021. Team Papelo at feverous: Multi-hop evidence pursuit. In Proceedings of the Fourth Workshop on Fact Extraction and VERifica- tion (FEVER), pages 40-46. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning to match using local and distributed representations of text for web search",
"authors": [
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web, WWW '17",
"volume": "",
"issue": "",
"pages": "1291--1299",
"other_ids": {
"DOI": [
"10.1145/3038912.3052579"
]
},
"num": null,
"urls": [],
"raw_text": "Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceed- ings of the 26th International Conference on World Wide Web, WWW '17, page 1291-1299, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The CLEF-2021 CheckThat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San Martino",
"suffix": ""
},
{
"first": "Tamer",
"middle": [],
"last": "Elsayed",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Rub\u00e9n",
"middle": [],
"last": "M\u00edguez",
"suffix": ""
},
{
"first": "Shaden",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Fatima",
"middle": [],
"last": "Haouari",
"suffix": ""
},
{
"first": "Maram",
"middle": [],
"last": "Hasanain",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Babulkov",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "Gautam Kishore",
"middle": [],
"last": "Shahi",
"suffix": ""
},
{
"first": "Julia",
"middle": [
"Maria"
],
"last": "Stru\u00df",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
}
],
"year": 2021,
"venue": "Advances in Information Retrieval -43rd European Conference on IR Research, ECIR 2021, Virtual Event",
"volume": "12657",
"issue": "",
"pages": "639--649",
"other_ids": {
"DOI": [
"10.1007/978-3-030-72240-1_75"
]
},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Giovanni Da San Martino, Tamer Elsayed, Alberto Barr\u00f3n-Cede\u00f1o, Rub\u00e9n M\u00edguez, Shaden Shaar, Firoj Alam, Fatima Haouari, Maram Hasanain, Nikolay Babulkov, Alex Nikolov, Gau- tam Kishore Shahi, Julia Maria Stru\u00df, and Thomas Mandl. 2021. The CLEF-2021 checkthat! lab on detecting check-worthy claims, previously fact- checked claims, and fake news. In Advances in In- formation Retrieval -43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part II, volume 12657 of Lecture Notes in Computer Science, pages 639-649. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "MS MARCO: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "30th Conference on Neural Information Processing Systems (NIPS 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. In 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4885--4901",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.441"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "FEVER: a large-scale dataset for fact extraction and VERification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "809--819",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1074"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The fact extraction and VERification (FEVER) shared task",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Cocarascu",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5501"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018b. The fact extraction and VERification (FEVER) shared task. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 1-9, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The FEVER2.0 shared task",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Cocarascu",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6601"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2019. The FEVER2.0 shared task. In Proceedings of the Second Workshop on Fact Extraction and VERifica- tion (FEVER), pages 1-6, Hong Kong, China. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "SemEval-2021 task 9: Fact verification and evidence finding for tabular data in scientific documents (SEM-TAB-FACTS)",
"authors": [
{
"first": "Nancy",
"middle": [
"X",
"R"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Diwakar",
"middle": [],
"last": "Mahajan",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Danilevsky",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
"volume": "",
"issue": "",
"pages": "317--326",
"other_ids": {
"DOI": [
"10.18653/v1/2021.semeval-1.39"
]
},
"num": null,
"urls": [],
"raw_text": "Nancy X. R. Wang, Diwakar Mahajan, Marina Danilevsky, and Sara Rosenthal. 2021. SemEval- 2021 task 9: Fact verification and evidence finding for tabular data in scientific documents (SEM-TAB- FACTS). In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 317-326, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "\"Liar, liar pants on fire\": A new benchmark dataset for fake news detection",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "422--426",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2067"
]
},
"num": null,
"urls": [],
"raw_text": "William Yang Wang. 2017. \"liar, liar pants on fire\": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 422-426, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Answering complex open-domain questions with multi-hop dense retrieval",
"authors": [
{
"first": "Wenhan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Srini",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Anserini: Enabling the use of Lucene for information retrieval research",
"authors": [
{
"first": "Peilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '17",
"volume": "",
"issue": "",
"pages": "1253--1256",
"other_ids": {
"DOI": [
"10.1145/3077136.3080721"
]
},
"num": null,
"urls": [],
"raw_text": "Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Devel- opment in Information Retrieval, SIGIR '17, page 1253-1256, New York, NY, USA. Association for Computing Machinery.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "FEVEROUS sample instances. Evidence in tables is highlighted in red. Each piece of evidence e i has associated context, i.e. page, section title(s) and the closest row/column headers (highlighted in dark gray). Left: evidence consists of two table cells refuting the claim. Right: Evidence consists of two table cells and one sentence from two different pages, supporting the claim."
},
"TABREF1": {
"text": "Quantitative characteristics in each split of FEVEROUS, with E Sentences , E Cells , and E Sentences+Cells being claims requiring only sentence evidence, cell evidence, or both, respectively.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Claim: Red Sundown</td></tr><tr><td>screenplay was written by</td></tr><tr><td>Martin Berkeley; based</td></tr><tr><td>on a story by Lewis...</td></tr></table>",
"num": null
},
"TABREF3": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF4": {
"text": "Table/cell selection Table selectionwas often done term-based using the same approaches as for page selection, i.e. TF-IDF as in the baseline (Papelo, EURECOM_Fever) and BM25 (Bust a move!), while Martin Funkquist used the dense table retriever ofHerzig et al. (2021). NCU considers all tables in retrieved documents for cell extrac-",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Team Name</td><td colspan=\"4\">Numerical Reasoning Multi-hop Reasoning Entity Disambiguation Search terms not in Claim</td></tr><tr><td>Bust a move!</td><td>0.11</td><td>0.14</td><td>0.20</td><td>0.15</td></tr><tr><td>Papelo</td><td>0.21</td><td>0.10</td><td>0.10</td><td>0.02</td></tr><tr><td>NCU</td><td>0.10</td><td>0.14</td><td>0.20</td><td>0.12</td></tr><tr><td>Z team</td><td>0.10</td><td>0.13</td><td>0.18</td><td>0.11</td></tr><tr><td>EURECOM_Fever</td><td>0.07</td><td>0.12</td><td>0.17</td><td>0.10</td></tr><tr><td>Baseline</td><td>0.07</td><td>0.11</td><td>0.12</td><td>0.12</td></tr><tr><td>Saturday_Night_Fever</td><td>0.07</td><td>0.12</td><td>0.12</td><td>0.11</td></tr><tr><td>Martin Funkquist</td><td>0.01</td><td>0.14</td><td>0.12</td><td>0.09</td></tr><tr><td>Albatross</td><td>0.02</td><td>0.09</td><td>0.10</td><td>0.12</td></tr><tr><td>METUIS</td><td>0.04</td><td>0.00</td><td>0.04</td><td>0.01</td></tr><tr><td>ChaCha</td><td>0.03</td><td>0.01</td><td>0.01</td><td>0.00</td></tr><tr><td>seda_kaist</td><td>0.02</td><td>0.01</td><td>0.01</td><td>0.00</td></tr><tr><td>qmul_uou_iiith</td><td>0.02</td><td>0.01</td><td>0.01</td><td>0.01</td></tr></table>",
"num": null
},
"TABREF5": {
"text": "FEVEROUS scores on the blind test phase, requiring numerical reasoning (740 samples), multi-hop reasoning (1,195 samples), entity disambiguation (200 samples) and search terms beyond entities mentioned in claim (193 samples).",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF6": {
"text": "table's caption, table headers, and the page name, with the latter improving scores substantially for NCU. While Bust a move! and Papelo train a separate model for sentence and cell retrieval, NCU trains a single model on the joint tabular and textual data. Martin Funkquist and METUIS used the TAPAS QA model(Herzig et al., 2020) for retrieving cells. Using continuous representations to retrieve sentences and cells from retrieved documents have generally been successful, however, using specialised methods (i.e. TAPAS) explored by participants for table retrieval and cell selection seems to be have been less successful. Instead, term-based table retrieval, and treating tables as sequences of cells was overall more successful.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF8": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF10": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF11": {
"text": "the limits of transfer learning with a unified text-totext transformer. Journal of Machine Learning Research, 21(140):1-67.Mohammed Adel Saeed, Giulio Alfarano,Khai Nguyen, Duc Pham, Raphael Troncy, and Paolo Papotti. 2021. Neural re-rankers for evidence retrieval in the FEVEROUS task.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>In Proceedings of</td></tr><tr><td>the Fourth Workshop on Fact Extraction and VERi-</td></tr><tr><td>fication (FEVER), pages 108 -113. Association for</td></tr><tr><td>Computational Linguistics.</td></tr><tr><td>Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel</td></tr><tr><td>Roberto Filizzola Ortiz, Enrico Santus, and Regina</td></tr><tr><td>Barzilay. 2019. Towards debiasing fact verification</td></tr><tr><td>models. In Proceedings of the 2019 Conference on</td></tr><tr><td>Empirical Methods in Natural Language Processing</td></tr><tr><td>and the 9th International Joint Conference on Natu-</td></tr><tr><td>ral Language Processing (EMNLP-IJCNLP), pages</td></tr><tr><td>3419-3425, Hong Kong, China. Association for</td></tr><tr><td>Computational Linguistics.</td></tr><tr><td>Orkun Temiz, \u00d6zg\u00fcn Ozan K\u0131l\u0131\u00e7, Arif Ozan K\u0131z\u0131ldag,</td></tr><tr><td>and Tugba Ta\u015fkaya Temizel. 2021. A fact check-</td></tr><tr><td>ing and verification system for FEVEROUS using</td></tr><tr><td>a zero-shot learning approach. In Proceedings of</td></tr><tr><td>the Fourth Workshop on Fact Extraction and VER-</td></tr><tr><td>ification (FEVER), pages 113 -121. Association for</td></tr><tr><td>Computational Linguistics.</td></tr><tr><td>James Thorne and Andreas Vlachos. 2018. Automated</td></tr><tr><td>fact checking: Task formulations, methods and fu-</td></tr><tr><td>ture directions. In Proceedings of the 27th Inter-</td></tr><tr><td>national Conference on Computational Linguistics,</td></tr><tr><td>pages 3346-3359, Santa Fe, New Mexico, USA. As-</td></tr><tr><td>sociation for Computational Linguistics.</td></tr></table>",
"num": null
},
"TABREF13": {
"text": "Results of the blind test phase of the FEVEROUS challenge, only considering samples that require exclusively textual evidence (top), tabular evidence (middle), and both (bottom), respectively.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Team Name</td><td colspan=\"3\">Label Accuracy Precision Recall Evidence</td><td>F1</td><td>FEVEROUS Score</td></tr><tr><td/><td colspan=\"3\">Numerical Reasoning (740)</td><td/><td/></tr><tr><td>Bust a move</td><td>0.55</td><td>0.09</td><td>0.22</td><td>0.13</td><td>0.11</td></tr><tr><td>Papelo</td><td>0.62</td><td>0.11</td><td>0.32</td><td>0.16</td><td>0.21</td></tr><tr><td>NCU</td><td>0.46</td><td>0.10</td><td>0.20</td><td>0.13</td><td>0.10</td></tr><tr><td>Z team</td><td>0.41</td><td>0.09</td><td>0.22</td><td>0.13</td><td>0.08</td></tr><tr><td>EURECOM_Fever</td><td>0.42</td><td>0.09</td><td>0.20</td><td>0.13</td><td>0.07</td></tr><tr><td>Baseline</td><td>0.38</td><td>0.08</td><td>0.16</td><td>0.11</td><td>0.07</td></tr><tr><td>Saturday_Night_Fever</td><td>0.44</td><td>0.08</td><td>0.16</td><td>0.11</td><td>0.07</td></tr><tr><td>Martin Funkquist</td><td>0.32</td><td>0.04</td><td>0.10</td><td>0.06</td><td>0.01</td></tr><tr><td>Albatross</td><td>0.31</td><td>0.02</td><td>0.05</td><td>0.03</td><td>0.02</td></tr><tr><td>METUIS</td><td>0.33</td><td>0.03</td><td>0.06</td><td>0.04</td><td>0.04</td></tr><tr><td>ChaCha</td><td>0.41</td><td>0.02</td><td>0.07</td><td>0.03</td><td>0.03</td></tr><tr><td>seda_kaist</td><td>0.41</td><td>0.02</td><td>0.07</td><td>0.03</td><td>0.02</td></tr><tr><td>qmul_uou_iiith</td><td>0.51</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.02</td></tr><tr><td/><td colspan=\"3\">Multi-hop Reasoning (1195)</td><td/><td/></tr><tr><td>Bust a move</td><td>0.55</td><td>0.09</td><td>0.20</td><td>0.12</td><td>0.14</td></tr><tr><td>Papelo</td><td>0.48</td><td>0.07</td><td>0.13</td><td>0.09</td><td>0.10</td></tr><tr><td>NCU</td><td>0.46</td><td>0.12</td><td>0.21</td><td>0.15</td><td>0.14</td></tr><tr><td>Z 
team</td><td>0.59</td><td>0.09</td><td>0.20</td><td>0.12</td><td>0.13</td></tr><tr><td>EURECOM_Fever</td><td>0.47</td><td>0.18</td><td>0.16</td><td>0.17</td><td>0.12</td></tr><tr><td>Baseline</td><td>0.44</td><td>0.13</td><td>0.15</td><td>0.14</td><td>0.11</td></tr><tr><td>Saturday_Night_Fever</td><td>0.48</td><td>0.14</td><td>0.15</td><td>0.15</td><td>0.12</td></tr><tr><td>Martin Funkquist</td><td>0.65</td><td>0.09</td><td>0.18</td><td>0.12</td><td>0.14</td></tr><tr><td>Albatross</td><td>0.38</td><td>0.10</td><td>0.11</td><td>0.10</td><td>0.09</td></tr><tr><td>METUIS</td><td>0.32</td><td>0.05</td><td>0.01</td><td>0.02</td><td>0.00</td></tr><tr><td>ChaCha</td><td>0.47</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.01</td></tr><tr><td>seda_kaist</td><td>0.46</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.01</td></tr><tr><td>qmul_uou_iiith</td><td>0.23</td><td>0.03</td><td>0.01</td><td>0.02</td><td>0.01</td></tr><tr><td/><td colspan=\"3\">Entity Disambiguation (200)</td><td/><td/></tr><tr><td>Bust a move</td><td>0.40</td><td>0.07</td><td>0.34</td><td>0.11</td><td>0.20</td></tr><tr><td>Papelo</td><td>0.41</td><td>0.05</td><td>0.14</td><td>0.08</td><td>0.10</td></tr><tr><td>NCU</td><td>0.49</td><td>0.09</td><td>0.27</td><td>0.14</td><td>0.20</td></tr><tr><td>Z team</td><td>0.38</td><td>0.07</td><td>0.35</td><td>0.11</td><td>0.18</td></tr><tr><td>EURECOM_Fever</td><td>0.41</td><td>0.13</td><td>0.26</td><td>0.17</td><td>0.17</td></tr><tr><td>Baseline</td><td>0.42</td><td>0.10</td><td>0.21</td><td>0.14</td><td>0.12</td></tr><tr><td>Saturday_Night_Fever</td><td>0.40</td><td>0.12</td><td>0.22</td><td>0.15</td><td>0.12</td></tr><tr><td>Martin 
Funkquist</td><td>0.37</td><td>0.07</td><td>0.25</td><td>0.11</td><td>0.12</td></tr><tr><td>Albatross</td><td>0.43</td><td>0.08</td><td>0.17</td><td>0.11</td><td>0.10</td></tr><tr><td>METUIS</td><td>0.37</td><td>0.04</td><td>0.06</td><td>0.05</td><td>0.04</td></tr><tr><td>ChaCha</td><td>0.38</td><td>0.02</td><td>0.06</td><td>0.03</td><td>0.01</td></tr><tr><td>seda_kaist</td><td>0.37</td><td>0.02</td><td>0.05</td><td>0.03</td><td>0.01</td></tr><tr><td>qmul_uou_iiith</td><td>0.24</td><td>0.02</td><td>0.01</td><td>0.01</td><td>0.01</td></tr><tr><td/><td colspan=\"3\">Search terms not in Claim (195)</td><td/><td/></tr><tr><td>Bust a move</td><td>0.28</td><td>0.05</td><td>0.33</td><td>0.09</td><td>0.15</td></tr><tr><td>Papelo</td><td>0.26</td><td>0.02</td><td>0.09</td><td>0.03</td><td>0.02</td></tr><tr><td>NCU</td><td>0.38</td><td>0.07</td><td>0.24</td><td>0.11</td><td>0.12</td></tr><tr><td>Z team</td><td>0.26</td><td>0.05</td><td>0.35</td><td>0.09</td><td>0.11</td></tr><tr><td>EURECOM_Fever</td><td>0.33</td><td>0.08</td><td>0.23</td><td>0.12</td><td>0.10</td></tr><tr><td>Baseline</td><td>0.38</td><td>0.06</td><td>0.19</td><td>0.09</td><td>0.12</td></tr><tr><td>Saturday_Night_Fever</td><td>0.30</td><td>0.07</td><td>0.20</td><td>0.10</td><td>0.11</td></tr><tr><td>Martin Funkquist</td><td>0.27</td><td>0.05</td><td>0.25</td><td>0.08</td><td>0.09</td></tr><tr><td>Albatross</td><td>0.48</td><td>0.05</td><td>0.17</td><td>0.07</td><td>0.12</td></tr><tr><td>METUIS</td><td>0.37</td><td>0.02</td><td>0.03</td><td>0.02</td><td>0.01</td></tr><tr><td>ChaCha</td><td>0.27</td><td>0.01</td><td>0.05</td><td>0.01</td><td>0.00</td></tr><tr><td>seda_kaist</td><td>0.26</td><td>0.01</td><td>0.05</td><td>0.01</td><td>0.00</td></tr><tr><td>qmul_uou_iiith</td><td>0.22</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td></tr></table>",
"num": null
},
"TABREF14": {
"text": "FEVEROUS scores on the blind test phase, grouped into their different challenges, with samples numbers in brackets.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}