| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:44:52.082929Z" |
| }, |
| "title": "SF-QA: Simple and Fair Evaluation Library for Open-domain Question Answering", |
| "authors": [ |
| { |
| "first": "Xiaopeng", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "xiaopen2@andrew.cmu.edu" |
| }, |
| { |
| "first": "Kyusong", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "kyusongl@soco.ai" |
| }, |
| { |
| "first": "Tiancheng", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "tianchez@soco.ai" |
| }, |
| { |
| "first": "Inc", |
| "middle": [], |
| "last": "Soco", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Although open-domain question answering (QA) has drawn great attention in recent years, building a full system requires large amounts of resources, and it is often difficult to reproduce previous results due to complex configurations. In this paper, we introduce SF-QA: a simple and fair evaluation framework for open-domain QA. SF-QA modularizes the pipeline of open-domain QA systems, which makes the task easily accessible and reproducible for research groups without large computing resources. The proposed evaluation framework is publicly available and anyone can contribute to the code and evaluations.",
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Although open-domain question answering (QA) has drawn great attention in recent years, building a full system requires large amounts of resources, and it is often difficult to reproduce previous results due to complex configurations. In this paper, we introduce SF-QA: a simple and fair evaluation framework for open-domain QA. SF-QA modularizes the pipeline of open-domain QA systems, which makes the task easily accessible and reproducible for research groups without large computing resources. The proposed evaluation framework is publicly available and anyone can contribute to the code and evaluations.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Open-domain Question Answering (QA) is the task of answering open-ended questions by utilizing knowledge from a large body of unstructured text, such as Wikipedia or the world-wide web. This task is challenging because researchers have to address issues in both scalability and accuracy. In the last few years, rapid progress has been made and the performance of open-domain QA systems has improved significantly (Chen et al., 2017; Qi et al., 2019; Yang et al., 2019). Several different approaches have been proposed, including two-stage ranker-reader systems (Chen et al., 2017), end-to-end models (Seo et al., 2019), and retrieval-free models (Raffel et al., 2019). Despite the increasing interest in open-domain QA research, two main limitations make research in this area not easily accessible:",
| "cite_spans": [ |
| { |
| "start": 418, |
| "end": 437, |
| "text": "(Chen et al., 2017;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 438, |
| "end": 454, |
| "text": "Qi et al., 2019;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 455, |
| "end": 473, |
| "text": "Yang et al., 2019)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 561, |
| "end": 580, |
| "text": "(Chen et al., 2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 601, |
| "end": 619, |
| "text": "(Seo et al., 2019)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 645, |
| "end": 666, |
| "text": "(Raffel et al., 2019)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The first issue is the high cost of ranking large knowledge sources. Most prior research used Wikipedia dumps as the knowledge source. The English Wikipedia, for example, has more than 7 million articles and 100 million sentences. For many researchers, indexing data of this size with a classic search engine (e.g., Apache Lucene (McCandless et al., 2010)) is feasible, but it becomes impractical with a neural ranker, which can take weeks to index even with GPU acceleration and consumes very large memory space for vector search. As a result, research that innovates in ranking mostly originates from industry.",
| "cite_spans": [ |
| { |
| "start": 336, |
| "end": 362, |
"text": "(McCandless et al., 2010)",
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The second issue is reproducibility. Open-domain QA datasets were collected at different times and therefore depend on different versions of Wikipedia as the underlying knowledge source. For example, SQuAD (Rajpurkar et al., 2016) uses the 2016 Wikipedia dump, while Natural Questions uses the 2018 dump. Our experiments found that a system's performance can vary greatly when the wrong version of Wikipedia is used. Moreover, indexing the entire Wikipedia with neural methods is expensive, so it is hard for researchers to build on others' new rankers in future work. Lastly, the performance of an open-domain QA system depends on many hyperparameters, e.g., the number of passages passed to the reader and the fusion strategy, which is another confounding factor when reproducing a system's results.",
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 228, |
| "text": "(Rajpurkar et al., 2016)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Thus, this work proposes SF-QA (Simple and Fair Question-Answering), a Python library that addresses the above challenges for two-stage QA systems. The key idea of SF-QA is to provide pre-indexed large knowledge sources as public APIs or cached ranking results; a hub of reader models; and a configuration file that can be used to precisely reproduce an open-domain QA system for a task. The pre-indexed knowledge sources enable researchers to build on top of previously proposed rankers without the tedious work of indexing the entire Wikipedia, and the executable configuration file provides a complete snapshot that captures all of the hyperparameters needed to reproduce a result.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Experiments are conducted to validate the effectiveness of SF-QA. We show that one can easily reproduce previous state-of-the-art open-domain QA results on four QA datasets, namely Open SQuAD, Open Natural Questions, Open CMRC, and Open DRCD; more datasets will be included in the future. We also illustrate several use cases of SF-QA, such as efficient reader comparison, reproducible research, open-source community building, and knowledge-empowered applications.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "SF-QA is also completely open-sourced 1 , and we encourage the research community to contribute their rankers and readers to the repository so that their methods can be used by the rest of the community.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In short, the contributions of this paper include: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Existing deep learning open-domain QA approaches can be broadly divided into three categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Recent open-domain QA systems mostly use a two-stage ranker-reader approach. Dr.QA (Chen et al., 2017) pioneered this design, pairing a TF-IDF ranker with an RNN-based reader. Later systems adopted BERT (Devlin et al., 2018) instead of the previous RNN-based reader, which significantly improved end-to-end performance. To deal with span extraction in a multi-document setting, later work uses the global normalization approach (Clark and Gardner, 2017) to make span scores comparable across candidate documents, improving performance by a large margin.",
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 102, |
| "text": "(Chen et al., 2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 103, |
| "end": 124, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 322, |
| "end": 347, |
| "text": "(Clark and Gardner, 2017)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Two-stage Approach", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "The graph-based ranker-reader approach has also been explored recently. Asai et al. (2019) propose a graph-based retriever that recursively retrieves supporting documents based on entity-link evidence, and then use a BERT-based reader model to complete the open-domain QA task.",
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 90, |
| "text": "Asai et al. (2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Two-stage Approach", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Open-domain QA with an end-to-end approach was not feasible for a long time, because it requires enormous memory to index the corpus and perform vector search. With the emergence of large pre-trained language models (PLMs), researchers revisited this idea and made end-to-end open-domain QA feasible. The Open-Retrieval QA (ORQA) model updates the ranker and reader in an end-to-end fashion by pre-training the model with an Inverse Cloze Task (ICT). Seo et al. (2019) experiment with treating open-domain QA as a one-stage problem and indexing the corpus directly at the phrase level. This approach shows promising inference speed at the cost of worse accuracy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "End-to-End Approach", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "Pre-trained language models have developed rapidly in recent years, and querying a language model directly for phrase-level answers has become a possibility. The T5 model (11B version) (Raffel et al., 2019) can reach scores on several open-domain QA datasets competitive with two-stage approaches that use far fewer parameters (\u223c330M). However, as reported in Guu et al. (2020), decreasing the number of parameters hurts the model performance drastically. This leaves large room for future research on how to make retrieval-free open-domain QA feasible in real-world settings. In a typical two-stage pipeline, the knowledge base is first indexed by a ranker, e.g. a full-text search engine. Given a query, the ranker returns a list of relevant passages that may contain the correct answer. How to choose the size of a passage is still an open research question, and many choices are available, e.g. paragraphs, fixed-size chunks, and sentences. Note that the ranker does not need to return the final passages in one shot: advanced rankers can iteratively refine the passage list to support multi-hop reasoning (Yang et al., 2018; Asai et al., 2019).",
| "cite_spans": [ |
| { |
| "start": 375, |
| "end": 392, |
| "text": "Guu et al. (2020)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1102, |
| "end": 1121, |
| "text": "(Yang et al., 2018;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1122, |
| "end": 1140, |
| "text": "Asai et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval-free Approach", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "Given the returned passages, a machine reader model then processes all passages jointly and extracts potential phrase-level answers from them. A fusion strategy is needed to combine the candidate answers and scores from each passage into a final list of N-best phrase-level answers. The reason to combine a ranker with the reader is scalability: state-of-the-art readers are prohibitively slow at processing a very large corpus in real time (Chen et al., 2017; Devlin et al., 2018).",
| "cite_spans": [ |
| { |
| "start": 498, |
| "end": 517, |
| "text": "(Chen et al., 2017;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 518, |
| "end": 538, |
| "text": "Devlin et al., 2018)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval-free Approach", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "SF-QA is a library designed to make it easy to evaluate and reproduce open-domain QA systems that use the ranker-reader architecture. SF-QA decreases the cost of indexing, hosting, and querying a large unstructured text knowledge base, e.g. Wikipedia, and also provides a complete configuration snapshot that can be used to replicate a QA system's performance. It is also a place for open-domain QA researchers to share their work, whether it innovates in information retrieval or in machine reading comprehension.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Proposed Library Overview", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "There are four main components in SF-QA: ranker service, reader hub, evaluation, and pipeline configuration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Proposed Library Overview", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The goal of the ranker service is to reduce the cost and time of indexing and querying large knowledge sources for open-domain QA research using a variety of ranking technologies. To date, we have included the BM25 (Robertson et al., 2009) and SPARTA (Zhao et al., 2020) ranking methods with several configurations detailed below. More methods will be included, and we welcome community contributions.",
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 235, |
| "text": "(Robertson et al., 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 247, |
| "end": 266, |
| "text": "(Zhao et al., 2020)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ranker Service", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "Currently, SF-QA supports four ways of splitting documents for indexing. The returned passages are in the following JSON format: {\"question_id\": [{\"score\": 42.86, \"answer\": \"Super Bowl V, the fifth edition of the Super Bowl...\"}, ...]}, which maps each question id to its top-k retrieved passages and their scores.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ranker Service", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "There are two ways to use the ranking results: cached ranking results and a ranking API.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ranker Service", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "The fastest way to use the ranking service for experiments is via cached ranking results. SF-QA provides the top-K ranked passages in JSON format for the training, validation, and test sets (if publicly available). One can directly use the cached results for training or testing, saving the time and resources needed to process the raw data. Alternatively, one may apply more computationally expensive re-ranking methods to the top-K passages before feeding them into the reader component.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cached Ranking Results", |
| "sec_num": "3.3.1" |
| }, |
| { |
"text": "The cached results are very useful for researchers who work on existing datasets and do not need a live system. However, cached results alone do not work for new datasets or for live QA systems that need to handle user queries. Therefore, SF-QA also provides a public API as a service. The API is RESTful and can be reached via HTTPS; detailed documentation can be found on GitHub.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ranking APIs", |
| "sec_num": "3.3.2" |
| }, |
| { |
"text": "The reader hub allows SF-QA users to specify which reader model to use to extract phrase-level answers. One can either use their own model by implementing an abstract function or directly load any reader model that is compatible with the Hugging Face Transformers library (Wolf et al., 2019). SF-QA also includes its own reader models optimized for open-domain QA. For example, it offers a BERT reader that is globally normalized, which provides more reliable answer scores when comparing candidate answers from different passages.",
| "cite_spans": [ |
| { |
| "start": 272, |
| "end": 291, |
| "text": "(Wolf et al., 2019)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reader Hub", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "Moreover, the reader hub allows the user to define the fusion mechanism that combines the ranking results with the reading results. The current implementation supports a linear combination with two free variables, namely the type of score and the weight on the reader score. Concretely, the final answer score is computed as follows:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reader Hub", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "y = (1 \u2212 \u03b1) y_reader + \u03b1 y_ranker (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reader Hub", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "where \u03b1 is a coefficient between 0 and 1. y_reader is the reader score, which can be either the logits or the probability after the softmax layer. y_ranker is the ranker score, which depends on the ranking method. One may also specify different strategies to normalize the scores from the ranker or the reader, e.g., z-normalization and floor normalization. Lastly, one may easily add their own strategy by overriding the fusion function.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reader Hub", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "SF-QA evaluation is designed to offer a multilingual and comprehensive evaluation script that computes the performance of an open-domain QA system and outputs intermediate metrics that are useful for analysis and visualization. For language support, SF-QA currently covers English and Chinese. For the final performance, it reports the most common metrics, EM (exact match) and F1, along with other relevant metrics. The following metrics are included in the output:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\u2022 Exact match (EM)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\u2022 F-1 Score", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\u2022 Ranking recall at K", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\u2022 Oracle ranker score", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\u2022 Mean reciprocal rank (MRR)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.5" |
| }, |
| { |
"text": "The pipeline configuration file is in YAML format and defines all the hyperparameters needed for an open-domain QA system to perform forward inference. One can set the configuration for the data, ranker ID, reader ID, fusion strategy, etc. Therefore, the easiest way to share an open-domain QA system for results replication is to provide the corresponding YAML configuration; example configurations are included with the library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pipeline Configurations",
"sec_num": "3.6"
},
{
"text": "SF-QA is designed to be modular and ready to use, with the hope that it can connect researchers interested in Question Answering (QA) and Information Retrieval (IR) with developers from industry. In this section, we illustrate several use cases of SF-QA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "4"
| }, |
| { |
"text": "In open-domain QA, the first-stage ranker consumes enormous resources in time, memory, and storage. SF-QA provides pre-indexed rankers, currently BM25 (Robertson et al., 2009) and SPARTA (Zhao et al., 2020), both with different granularity options. Researchers can call the RESTful API to get cached ranking results directly if they want to focus on existing open-domain QA datasets, for example Open SQuAD, Open CMRC, etc. Alternatively, they can call the backend live ranker to retrieve the top results for an input query. We design SF-QA to be completely modular: a researcher can pick a cached ranker and plug in their own reader model to evaluate open-domain QA results.",
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 130, |
| "text": "(Robertson et al., 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 144, |
| "end": 163, |
| "text": "(Zhao et al., 2020)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Reader Comparison", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Reproducibility is another problem in current open-domain QA research. Since the first-stage retriever requires researchers to collect large-scale data by themselves, it is hard to keep all settings the same to make fair comparisons. In SF-QA, we collected data following the settings of the earliest works (Chen et al., 2017; Yang et al., 2019). Therefore, researchers can check SF-QA to get the data specifications for existing models. Moreover, the parameter settings for different models are recorded and saved in a separate configuration file, as shown in Section 3.6. Therefore, any existing model in the current SF-QA project can be directly reproduced, which greatly facilitates establishing benchmark scores and making fair comparisons.",
| "cite_spans": [ |
| { |
| "start": 319, |
| "end": 338, |
| "text": "(Chen et al., 2017;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 339, |
| "end": 357, |
| "text": "Yang et al., 2019;", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reproducible Research", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "The SF-QA framework also considers needs from an industry perspective. To show the potential of open-domain QA and to encourage more people to join the development of this task, we provide a RESTful API (with a ready-to-use open-domain QA model in the backend) for users to ask questions and get phrase-level answers directly as output. We also provide a tutorial demonstrating that SF-QA can be seamlessly incorporated into RASA (Bocklisch et al., 2017), a popular open-source chatbot building platform, with only a few lines of code. We hope that this effort can attract people from different backgrounds to open-domain QA research.",
| "cite_spans": [ |
| { |
| "start": 440, |
| "end": 464, |
| "text": "(Bocklisch et al., 2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge-empowered Applications", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "Reported vs. reproduced performance (EM / F1): Bertserini (Yang et al., 2019): reported 38.6 / 46.1, reproduced 41.2 / 48.6; +DS (Xie et al., 2020): reported 51.2 / 59.4, reproduced 51.6 / 59.2.",
| "cite_spans": [ |
| { |
| "start": 43, |
| "end": 62, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 87, |
| "end": 105, |
| "text": "(Xie et al., 2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Results", |
| "sec_num": "5" |
| }, |
| { |
"text": "Multi-passage BERT: reported 53.0 / 60.9, reproduced 53.2 / 60.7.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Results", |
| "sec_num": "5" |
| }, |
| { |
"text": "SpartaQA (Zhao et al., 2020): reported 59.3 / 66.5, reproduced 59.3 / 66.5. Table 1 : Comparison between reported performance and reproduced performance on Open SQuAD.",
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 28, |
| "text": "(Zhao et al., 2020)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 49, |
| "end": 56, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment Results", |
| "sec_num": "5" |
| }, |
| { |
"text": "Results in Table 1 show the performance comparison between several reported open-domain QA systems and our reproduced results. The first experiment reproduces some prior results using SF-QA. We choose Bertserini (Yang et al., 2019), Bertserini with distant supervision (Xie et al., 2020), Multi-passage BERT, and SPARTA (Zhao et al., 2020) as four benchmark systems to reproduce.",
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 247, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 286, |
| "end": 304, |
| "text": "(Xie et al., 2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 339, |
| "end": 358, |
| "text": "(Zhao et al., 2020)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 18, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reproducing Prior Art", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "To reproduce Bertserini (Yang et al., 2019), we follow the implementation described in the original paper and first index the 2016 English Wikipedia at the paragraph level, yielding 29.5M documents in total. A BERT-base-cased model is trained with global normalization, following the descriptions in the paper. We observe a slight improvement in the open-domain QA result, which may be due to the use of a newer version of the BM25 retriever; a similar phenomenon has also been reported in (Xie et al., 2020). For Bertserini with distant supervision (Xie et al., 2020), we follow the two-stage distant supervision strategy proposed by the original authors, where the model is first fine-tuned on the original SQuAD dataset and then fine-tuned on distantly supervised data retrieved from the full Wikipedia. The score we obtain matches the score reported by the original authors.",
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 43, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 468, |
| "end": 486, |
| "text": "(Xie et al., 2020)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 529, |
| "end": 547, |
| "text": "(Xie et al., 2020)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reproducing Prior Art", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "To reproduce Multi-passage BERT, we first index the Wikipedia corpus using a chunk size of 100 words with a stride of 50 words. A BERT re-ranker is then trained to re-rank the retrieved top 100 documents, and the top 30 documents are passed to the reader. In the reader training stage, we train a BERT-large-cased model, also with global normalization to make the span scores comparable. Our reproduced score matches the score reported in the original paper.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reproducing Prior Art", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "For SpartaQA, we follow the original authors' implementation of the SPARTA retriever and index Wikipedia at the context level with a size of 150. In the reader stage, a SpanBERT (Joshi et al., 2020) reader is trained on distantly supervised data retrieved from Wikipedia with the global normalization strategy. The score matches the reported score.",
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 203, |
| "text": "(Joshi et al., 2020)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reproducing Prior Art", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "This experiment measures the elapsed time to evaluate open-domain question answering with and without the SF-QA evaluation framework (Table 2). Traditionally, one needs to build the complete pipeline to evaluate open-domain QA, with the following steps: (1) Indexing: converting the full Wikipedia into sparse or dense representations;",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Time saved by SF-QA", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "(2) Uploading: inserting the text and representations into Elasticsearch (or a similar database); (3) Retrieval: retrieving the n-best candidates from Elasticsearch; (4) Reading: span prediction using machine reading comprehension. We use a GeForce RTX 2080 Ti GPU to index the entire Wikipedia dump of 89,544,689 sentences in total. Without SF-QA, the total elapsed time for one experimental setting is 29 hours. In comparison, using the cached retrieval results provided by SF-QA avoids the repetitive heavy indexing, and it takes only \u223c4 hours to get the final scores.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Time saved by SF-QA", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "We conduct a final experiment to test the robustness of the state-of-the-art system against temporal shift. Results are reported in Table 3 . We observe that model accuracy is strongly affected by the version of the Wikipedia dump, showing that it is essential to track the version of the input data and to make sure that open-domain QA research is reproducible starting from the data input level.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 136, |
| "end": 143, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Accuracy v.s. Corpus release year", |
| "sec_num": "5.3" |
| }, |
| { |
"text": "In conclusion, this paper presents SF-QA, a novel evaluation framework that makes open-domain QA research simple and fair. The framework bridges the gap between researchers from different fields and makes open-domain QA more accessible. We show its robustness by successfully reproducing several existing open-domain QA models. We hope that SF-QA makes open-domain QA research more accessible and evaluation easier. We plan to further improve the framework by including more models on both the ranker and reader sides, and we encourage community contributions to the project.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "https://github.com/soco-ai/SF-QA.git", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Learning to retrieve reasoning paths over wikipedia graph for question answering", |
| "authors": [ |
| { |
| "first": "Akari", |
| "middle": [], |
| "last": "Asai", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuma", |
| "middle": [], |
| "last": "Hashimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Hannaneh", |
| "middle": [], |
| "last": "Hajishirzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1911.10470" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2019. Learning to retrieve reasoning paths over wikipedia graph for question answering. arXiv preprint arXiv:1911.10470.",
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Rasa: Open source language understanding and dialogue management", |
| "authors": [ |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Bocklisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Joey", |
| "middle": [], |
| "last": "Faulkner", |
| "suffix": "" |
| }, |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "Pawlowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Nichol", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1712.05181" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom Bocklisch, Joey Faulkner, Nick Pawlowski, and Alan Nichol. 2017. Rasa: Open source language understanding and dialogue management. arXiv preprint arXiv:1712.05181.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Reading Wikipedia to answer open-domain questions",
| "authors": [ |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Fisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL).",
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Simple and effective multi-paragraph reading comprehension", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1710.10723" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723.",
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Realm: Retrieval-augmented language model pre-training",
| "authors": [ |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Guu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Zora", |
| "middle": [], |
| "last": "Tung", |
| "suffix": "" |
| }, |
| { |
| "first": "Panupong", |
| "middle": [], |
| "last": "Pasupat", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2002.08909" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.",
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Spanbert: Improving pre-training by representing and predicting spans", |
| "authors": [ |
| { |
| "first": "Mandar", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| {
| "first": "Daniel",
| "middle": [
| "S"
| ],
| "last": "Weld",
| "suffix": ""
| },
| {
| "first": "Luke",
| "middle": [],
| "last": "Zettlemoyer",
| "suffix": ""
| },
| {
| "first": "Omer",
| "middle": [],
| "last": "Levy",
| "suffix": ""
| }
| ], |
| "year": 2020, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "8", |
| "issue": "", |
| "pages": "64--77", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Natural questions: a benchmark for question answering research", |
| "authors": [ |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Kwiatkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennimaria", |
| "middle": [], |
| "last": "Palomaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivia", |
| "middle": [], |
| "last": "Redfield", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Alberti", |
| "suffix": "" |
| }, |
| { |
| "first": "Danielle", |
| "middle": [], |
| "last": "Epstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "7", |
| "issue": "", |
| "pages": "453--466", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Ranking paragraphs for improving answer recall in open-domain question answering", |
| "authors": [ |
| { |
| "first": "Jinhyuk", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Seongjun", |
| "middle": [], |
| "last": "Yun", |
| "suffix": "" |
| }, |
| { |
| "first": "Hyunjae", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Miyoung", |
| "middle": [], |
| "last": "Ko", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaewoo", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.00494" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. arXiv preprint arXiv:1810.00494.",
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Latent retrieval for weakly supervised open domain question answering", |
| "authors": [ |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1906.00300" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Lucene in action", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mccandless", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Hatcher", |
| "suffix": "" |
| }, |
| { |
| "first": "Otis", |
| "middle": [], |
| "last": "Gospodneti\u0107", |
| "suffix": "" |
| }
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael McCandless, Erik Hatcher, and Otis Gospodneti\u0107. 2010. Lucene in action, volume 2. Manning Greenwich.",
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Answering complex open-domain questions through iterative query generation", |
| "authors": [ |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaowen", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Mehr", |
| "suffix": "" |
| }, |
| { |
| "first": "Zijian", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1910.07000" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D Manning. 2019. Answering complex open-domain questions through iterative query generation. arXiv preprint arXiv:1910.07000.",
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter J", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1910.10683" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.",
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Squad: 100,000+ questions for machine comprehension of text", |
| "authors": [ |
| { |
| "first": "Pranav", |
| "middle": [], |
| "last": "Rajpurkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Konstantin", |
| "middle": [], |
| "last": "Lopyrev", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.05250" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The probabilistic relevance framework: Bm25 and beyond", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Robertson", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Zaragoza", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Foundations and Trends\u00ae in Information Retrieval",
| "volume": "3", |
| "issue": "4", |
| "pages": "333--389", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends\u00ae in Information Retrieval, 3(4):333-389.",
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Real-time open-domain question answering with dense-sparse phrase index", |
| "authors": [ |
| { |
| "first": "Minjoon", |
| "middle": [], |
| "last": "Seo", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinhyuk", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Kwiatkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Farhadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hannaneh", |
| "middle": [], |
| "last": "Hajishirzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "4430--4441", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430-4441.",
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "R 3: Reinforced ranker-reader for open-domain question answering", |
| "authors": [ |
| { |
| "first": "Shuohang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mo", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoxiao", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiguo", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Klinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shiyu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerry", |
| "middle": [], |
| "last": "Tesauro", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R 3: Reinforced ranker-reader for open-domain question answering. In Thirty-Second AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Multi-passage bert: A globally normalized bert model for open-domain question answering",
| "authors": [
| {
| "first": "Zhiguo",
| "middle": [],
| "last": "Wang",
| "suffix": ""
| },
| {
| "first": "Patrick",
| "middle": [],
| "last": "Ng",
| "suffix": ""
| },
| {
| "first": "Xiaofei",
| "middle": [],
| "last": "Ma",
| "suffix": ""
| },
| {
| "first": "Ramesh",
| "middle": [],
| "last": "Nallapati",
| "suffix": ""
| },
| {
| "first": "Bing",
| "middle": [],
| "last": "Xiang",
| "suffix": ""
| }
| ],
| "year": 2019,
| "venue": "",
| "volume": "",
| "issue": "",
| "pages": "",
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1908.08167" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage bert: A globally normalized bert model for open-domain question answering. arXiv preprint arXiv:1908.08167.",
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Huggingface's transformers: Stateof-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, pages arXiv-1910.",
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Distant supervision for multistage fine-tuning in retrieval-based question answering", |
| "authors": [ |
| { |
| "first": "Yuqing", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Luchen", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kun", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [ |
| "Jing" |
| ], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Baoxing", |
| "middle": [], |
| "last": "Huai", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of The Web Conference 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "2934--2940", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuqing Xie, Wei Yang, Luchen Tan, Kun Xiong, Nicholas Jing Yuan, Baoxing Huai, Ming Li, and Jimmy Lin. 2020. Distant supervision for multi-stage fine-tuning in retrieval-based question answering. In Proceedings of The Web Conference 2020, pages 2934-2940.",
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "End-to-end open-domain question answering with bertserini", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuqing", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Aileen", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Xingyu", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Luchen", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kun", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1902.01718" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", |
| "authors": [ |
| { |
| "first": "Zhilin", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Saizheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| {
| "first": "William",
| "middle": [
| "W"
| ],
| "last": "Cohen",
| "suffix": ""
| },
| {
| "first": "Ruslan",
| "middle": [],
| "last": "Salakhutdinov",
| "suffix": ""
| },
| {
| "first": "Christopher D",
| "middle": [],
| "last": "Manning",
| "suffix": ""
| }
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1809.09600" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.",
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Sparta: Efficient open-domain question answering via sparse transformer matching retrieval", |
| "authors": [ |
| { |
| "first": "Tiancheng", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaopeng", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyusong", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2009.13013" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2020. Sparta: Efficient open-domain question answering via sparse transformer matching retrieval. arXiv preprint arXiv:2009.13013.",
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "A typical ranker-reader-based open-domain QA system operates as follows: first, a large text", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Overall pipeline for open-domain QA", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "1. Sentence: sentence-level indexing; 2. Paragraph: paragraph-level indexing; 3. Chunk: fixed-word-size indexing; 4. Context: context-level indexing, where full sentences are always kept, up to a maximum number of tokens. In addition, Wikipedia dumps from different years are indexed separately, so users can choose the same dump that the benchmark datasets used. The following versions are included: 1. English Wikipedia: 2016/2018/2020; 2. Chinese Wikipedia: 2017/2018/2020.",
| "uris": null |
| }, |
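The "context-level" indexing described in the figure above (whole sentences packed greedily into chunks under a token budget) can be sketched as follows. The function name `context_chunks` and the whitespace token count are illustrative assumptions, not SF-QA's implementation.

```python
def context_chunks(sentences, max_tokens=100):
    """Greedily pack whole sentences into chunks of at most max_tokens tokens.

    A sentence is never split: if adding it would exceed the budget, the
    current chunk is flushed and the sentence starts a new chunk (so a
    single over-long sentence still forms its own chunk intact).
    """
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())  # crude whitespace token count (assumption)
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks


sents = ["one two three.", "four five.", "six seven eight nine."]
print(context_chunks(sents, max_tokens=5))
```

By contrast, the "Chunk" granularity would split on a fixed word count regardless of sentence boundaries; keeping sentences whole trades slightly uneven chunk sizes for spans the reader can consume without truncated sentences.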
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "text": "index_name: en-wiki-2016\nreader:\n  model_id: squad-context-spanbert\n  param:\n    n_gpu: 2\n    score_weight: 0.8\n    top_k: 10",
| "uris": null |
| }, |
| "TABREF4": { |
| "text": "Time elapsed to evaluate open-domain QA using Open SQuAD development set", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>open-domain QA setting</td><td>wiki 2016* EM F1</td><td colspan=\"2\">wiki 2018 R@1 EM F1</td><td colspan=\"2\">wiki 2020 R@1 EM F1</td><td>R@1</td></tr><tr><td>BM25 + SpanBERT</td><td colspan=\"2\">49.2 56.7 41.9</td><td colspan=\"2\">45.8 53.8 39.4</td><td>41.5 49.5 35.4</td></tr><tr><td>Sparta + SpanBERT</td><td colspan=\"2\">59.3 66.5 50.8</td><td colspan=\"2\">46.5 54.4 39.3</td><td>46.4 53.9 42.2</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "Open SQuAD performance using Wikipedia dumps from different years. * represents the dump which SQuAD originally used for annotation.", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |