{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:27:19.887889Z"
},
"title": "Answering Chinese Elementary School Social Studies Multiple Choice Questions",
"authors": [
{
"first": "Chao-Chun",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "ccliang@iis.sinica.edu.tw"
},
{
"first": "Daniel",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Michigan",
"location": {
"addrLine": "Ann Arbor",
"region": "Michigan",
"country": "USA"
}
},
"email": ""
},
{
"first": "Meng-Tse",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hsin-Min",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Keh-Yih",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "kysu@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present several novel approaches to answer Chinese elementary school social studies multiple choice questions. Although BERT shows excellent performance on various reading comprehension tasks, it handles some kinds of questions poorly, in particular negation, all-of-the-above, and none-of-the-above questions. We thus propose a novel framework to cascade BERT with preprocessor and answer-picker/selector modules to address these cases. Experimental results show the proposed approaches effectively improve the performance of BERT, and thus demonstrate the feasibility of supplementing BERT with additional modules.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present several novel approaches to answer Chinese elementary school social studies multiple choice questions. Although BERT shows excellent performance on various reading comprehension tasks, it handles some kinds of questions poorly, in particular negation, all-of-the-above, and none-of-the-above questions. We thus propose a novel framework to cascade BERT with preprocessor and answer-picker/selector modules to address these cases. Experimental results show the proposed approaches effectively improve the performance of BERT, and thus demonstrate the feasibility of supplementing BERT with additional modules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine reading comprehension (MRC) is a challenge for AI research, and is frequently adopted to seek desired information from knowledge sources such as company document collections, Wikipedia or the Web for a given question. To evaluate the capability of a MRC system, different test forms have been adopted in the literature (Qiu et al., 2019; such as binary choice, multiple choice (MC), multiple selection (MS), and cloze. Which test form to adopt usually depends on the format of the given benchmark/dataset. In this paper, (3) \u9694\u4ee3\u6559\u990a\u5bb6\u5ead (4) \u5bc4\u990a\u5bb6\u5ead",
"cite_spans": [
{
"start": 327,
"end": 345,
"text": "(Qiu et al., 2019;",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1) \u4e09\u4ee3\u540c\u5802\u5bb6\u5ead we solve MC questions about traditional Chinese primary school social studies. In this Chinese Social Studies MC (CSSMC) QA task, the system selects the correct answer from several candidate options based on a given question and its associated lesson manually constructed by Taiwan book publishers. Table 1 shows an example of CSSMC, where the passage is the corresponding supporting evidence (SE).",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 317,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "Previous work on answering MC questions can be divided into statistics-based approaches (Kouylekov & Magnini, 2005; Heilman & Smith, 2010) and neural-network-based approaches (Parikh et al., 2016; Chen et al., 2017) . Recent pre-trained language models such as BERT (Devlin et al., 2019) , XLNET (Yang et al., 2019) , RoBERTa , and ALBERT (Lan et al., 2019) show excellent performance on different RC MC tasks. As BERT shows excellent performance on various English datasets (e.g., SQuAD 1.1 (Rajpurkar et al., 2016) , GLUE (Wang et al., 2018) , etc.), it is adopted as our baseline. Table 6 shows its performance given the gold SE.",
"cite_spans": [
{
"start": 88,
"end": 115,
"text": "(Kouylekov & Magnini, 2005;",
"ref_id": "BIBREF12"
},
{
"start": 116,
"end": 138,
"text": "Heilman & Smith, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 175,
"end": 196,
"text": "(Parikh et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 197,
"end": 215,
"text": "Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 266,
"end": 287,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 296,
"end": 315,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 339,
"end": 357,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 492,
"end": 516,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 524,
"end": 543,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 584,
"end": 591,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "After analyzing error cases, we observed that BERT handles the following question types poorly: (1) Negation questions, that is, questions with negation phrases such as \u4e0d\u53ef\u80fd (unlikely). For this type of question, BERT selects the same answer for \"\u5c0f\u654f\u7684\u5abd\u5abd\u76ee\u524d\u5728 \u90f5\u5c40\u670d\u52d9\uff0c\u8acb\u554f\u5c0f\u654f\u7684\u5abd\u5abd\u53ef\u80fd\u6703\u70ba\u5c45\u6c11\u63d0\u4f9b\u4ec0\u9ebc\u670d\u52d9\uff1f (Xiaomin's mother serves at the post office. What kind of services could Xiaomin's mother provide to the residents?)\" and \"\u5c0f \u654f\u7684\u5abd\u5abd\u76ee\u524d\u5728\u90f5\u5c40\u670d\u52d9\uff0c\u8acb\u554f\u5c0f\u654f\u7684\u5abd\u5abd\u4e0d\u53ef\u80fd\u6703\u70ba\u5c45\u6c11\u63d0\u4f9b\u4ec0\u9ebc\u670d\u52d9\uff1f (Xiaomin's mother serves at the post office. What kind of service could not Xiaomin's mother provide to the residents?)\" (which differ only in the negation word \u4e0d (not)). BERT evidently pays no special attention to negative words; however, any one of them would change the desired answer;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "(2) All-of-the-above (\u4ee5\u4e0a\u7686\u662f) and none-of-the-above (\u4ee5\u4e0a\u7686\u975e) questions, choices for which include either All of the above or None of the above. In both cases, the answer cannot be handled by simply by selecting the most likely choice without preprocessing (1)\u8001\u4eba (2)\u5c0f\u5b69 (3)\u9752\u58ef\u5e74 (4)\u4ee5\u4e0a\u7686\u975e the given choices. Table 2 shows an example of these question types.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "The above phenomenon was also observed by Wu & Su (2020) , who reported that BERT achieves superior results mainly by utilizing surface features, and that its performance degrades significantly when the dataset involves negation words. Moreover, it is difficult for BERT to learn the semantic meaning of all-of-the-above and none-of-the-above questions, which suggests that the listed candidate options are all correct or all incorrect, with a small amount of data.",
"cite_spans": [
{
"start": 42,
"end": 56,
"text": "Wu & Su (2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "However, it is difficult to pinpoint the sources of the problem and then find corresponding remedies within BERT, due to its complicated architecture (even its basic version includes 12 heads and 12 stacked layers). We thus prefer to keep its implementation untouched if the problem can be fixed by coupling BERT with external modules. Accordingly, we here propose a framework that cascades BERT with a preprocessor module and an answer-picker/selector module. The preprocessor module revises the choices for all-of-the-above and none-of-the-above questions, and the answer-picker/selector module (a postprocessor) determines the appropriate choices under the cases mentioned above. The above approach is inspired by Lin & Su (2021) , who demonstrate that BERT learns natural language inference inefficiently, even for simple binary prediction; however, they also point out that task-related features and domain knowledge significantly help to improve BERT's learning efficiency.",
"cite_spans": [
{
"start": 717,
"end": 732,
"text": "Lin & Su (2021)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "For negation-type questions, instead of picking the highest-scoring choice as usual, the answer-picker/selector module selects the candidate with the lowest score. On the other hand, for all-of-the-above or none-of-the-above questions, we use a decision tree to select the answer, as illustrated in Figure 2 . In these cases, the preprocessor module first replaces the original \"all of the above\" or \"none of the above\" choices with a new choice generated by concatenating all other choices together (before those candidates are sent to BERT). Take for example the second last row in Table 2 : we replace \"\u4ee5\u4e0a\u7686\u662f (all of the above)\", the original last choice, with \"\u5236\u5b9a\u8001\u4eba\u798f\u5229\u653f\u7b56^\u63d0\u4f9b\u826f\u597d\u7684\u5b89\u990a\u7167\u9867^\u5efa\u7acb\u5065\u5168\u7684\u91ab\u7642\u9ad4\u7cfb (Make welfare policies for elderly people^ Provide good nursing care^ Establish a sound medical system)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 307,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 584,
"end": 591,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "We evaluate the proposed framework on a CSSMC dataset. The experimental results show the proposed approaches outperform the pure BERT model. This thus constitutes a new way to supplement BERT with additional modules. We believe the same strategy could be applied to other DNN models, which -despite good overall performance -are too complicated to customize for specific problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "In summary, in this paper we make the following contributions: (1) We propose several novel approaches to supplement BERT to solve negation, all-of-the-above, and none-of-the-above questions. (2) Experimental results show that the proposed approach effectively improves performance, and thus demonstrate the feasibility of supplementing BERT with additional modules to fix given problems. 3We construct and release a new Traditional Chinese Machine Reading Question and Answering dataset to assess the performance of RC MC models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "In comparison with our previous conference version (Lee et al., 2020) , this article describes additional \"Separately Judge then Select\" and \"Separately Judge Concatenation then Select\" experiments, which adopt a BERT entailment prediction model to handle each candidate option separately (details are provided in Sections 2.2.1 and 2.2.2) instead of jointly processing all candidate options together. We have also added Section 3 to describe the construction of the CSSMC dataset, which we adopt to compare different approaches.",
"cite_spans": [
{
"start": 51,
"end": 69,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "Given a social studies problem Q and its corresponding supporting evidence SE, our goal is to find the most likely answer from the given candidate set A = {A 1 , A 2 , \u2026 A n }, where n is the total number of available choices or candidates, and A i denotes the i-th answer candidate. This task is formulated as follows, where \u00c2 is the answer to be chosen. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.1"
},
{
"text": "1,..., arg m ax (A | , , ) i i n A P Q SE A \uf03d \uf03d (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2.1"
},
{
"text": "Three different approaches are proposed in which we use entailment prediction (Dagan et al., 2005) to determine whether the candidate option is the correct answer to the question:",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Dagan et al., 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "2.2"
},
{
"text": "(1) Separately judge then select (SJS), which considers each individual candidate option separately and then selects the final answer based on their output scores;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "2.2"
},
{
"text": "(2) Separately judge with concatenation then select (SJCS), which adopts the framework of the first approach but first replaces the all-of-the-above (\u4ee5\u4e0a\u7686\u662f) and none-of-the-above (\u4ee5\u4e0a\u7686\u975e) answer choices with the concatenation of all the other remaining candidate options before entailment judgment;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "2.2"
},
{
"text": "(3) Jointly judge then select (JJS), which jointly considers all candidate options to make the final decision. Details are provided below. Figure 1 shows the architecture of the proposed SJS approach, which consists of two main components: (1) the YN-BERT module, a fine-tuned BERT entailment prediction model (where YN denotes its output is a yes-no binary entailment judgment), and (2) the answer-picker module, which determines the final answer given the entailment judgment scores from four different YN-BERT modules. The input sequence is the concatenation of the associated supporting evidence, a given question, and a specific individual answer candidate/option. For each answer candidate, YN-BERT outputs an entailment judgment score used to select either Entail or Not-entail (i.e., the judgment is Entail if the score exceeds 0.5, and Not-entail otherwise). Entail implies that the given answer candidate is entailed by the combination of the question and its associated supporting evidence. The answer-picker module considers the entailment judgment scores of the various choices and selects the most appropriate one based on the decision tree shown in Figure 2 . Note that this decision tree is used only by the answer picker to make the final decision and is not involved in BERT's fine-tuning process.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1164,
"end": 1172,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "2.2"
},
{
"text": "A given question is classified as negative-type if it includes a negation word within a pre-specified negation word list, which is obtained from the CSSMC training data, and currently consists of {\"\u4e0d\u6703 (will not)\", \"\u4e0d\u80fd (cannot)\", \"\u4e0d\u5f97 (not allow)\", \"\u4e0d\u662f (is not)\", \"\u4e0d\u61c9\u8a72 (should not)\", \"\u4e0d\u53ef\u80fd (unlikely)\", \"\u4e0d\u9700 (do not need)\", \"\u4e0d\u5fc5 (do not need)\", \"\u4e0d\u7528 (do not need)\", \"\u6c92\u6709 (without)\"}. Since the proposed approaches aim to supplement BERT, these negation words are manually picked from the error cases in the training data-set, on which BERT model make mistakes. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Separately Judge then Select (SJS)",
"sec_num": "2.2.1"
},
{
"text": "Another approach adopts the framework of the first approach but first recasts \"\u4ee5\u4e0a\u7686\u662f (all of the above)\" and \"\u4ee5\u4e0a\u7686\u975e (none of the above)' answer candidates as the concatenation of all of the other options. Take for example the last row in Table 2 : we replace \"\u4ee5\u4e0a\u7686\u975e\", the original last choice, with \"\u8001\u4eba^\u5c0f\u5b69^\u9752\u58ef\u5e74 (elderly people^children^young people)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 243,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Separately Judge with Concatenation then Select (SJCS)",
"sec_num": "2.2.2"
},
{
"text": "Afterwards, the answer-picker module selects the most appropriate choice based on the following rule: For negation questions, we select the answer candidate with the lowest entailment score; otherwise, we select that with the highest entailment score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Separately Judge with Concatenation then Select (SJCS)",
"sec_num": "2.2.2"
},
{
"text": "Shown in Figure 4 , the system architecture of the JJS approach consists of three main components: (1) the preprocessor, which recasts \"\u4ee5\u4e0a\u7686\u662f (all of the above)\" and \"\u4ee5\u4e0a\u7686 \u975e (none of the above)\" answer candidates as the concatenation of the other options (associated with the same question), as shown above, before inputting the question-choice-evidence combination into the BERT model; (2) the BERT-MC model, a typical fine-tuned BERT multiple-choice prediction model (Xu et al., 2019) described in Section 4.1; and (3) the answer selector, a candidate re-selector which for negation-type questions picks that answer candidate with the lowest score as opposed to that with the highest score (as for other question types). ",
"cite_spans": [
{
"start": 467,
"end": 484,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "2.2.3"
},
{
"text": "To evaluate the proposed approaches, we constructed a Chinese Social Studies Machine Reading and Question Answering (CSSMRQA) dataset, which is a superset of the CSSMC dataset mentioned above, to assess the capability of different Q&A systems (not just MC questions). This dataset consists of three question types: (1) yes/no questions, which ask whether the given question is a correct statement judged from the supporting evidence; (2) multiple-choice (MC) questions, which include four answer choices from which the correct one is to be chosen (here, this is the CSSMC dataset adopted in this paper); and (3) multiple-selection (MS) questions, which are similar to the multiple-choice questions but can contain more than one correct answer. Below we describe how they are constructed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Social Studies MRQA Dataset Construction",
"sec_num": "3."
},
{
"text": "We first collected lessons for grades 3 to 6 from elementary-school social studies textbooks published in Taiwan. For each lesson, we collected relevant questions from leading publishing houses in Taiwan. We thus obtained 14,103 yes/no questions, 5347 MC questions, and 340 MS questions from a total of 255 lessons. We then annotated the supporting evidence to indicate what information is needed to answer each question. This is described in detail below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Collection",
"sec_num": "3.1"
},
{
"text": "We hired two annotators to annotate the supporting evidence for each question. Supporting evidence is the content in the lesson (associated with the given question) which contains just the information necessary to answer the question. In the CSSMRQA dataset, each lesson comprises several paragraphs, and each paragraph comprises several sentences. Supporting evidence consists of one or more sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supporting Evidence (SE) Annotation",
"sec_num": "3.2"
},
{
"text": "We used Doccano (Nakayama et al., 2018) , an open-source text annotation tool, as the platform for annotation. Doccano allows the user to highlight supporting words in the text (i.e., those words that provide hints to find the related passage). Given a question and its corresponding answer (also the lesson associated with the question), the annotators highlighted supporting words necessary to answer the question. Usually, these supporting words were words within the given question. Annotators were not allowed to annotate supporting words across sentence splitters or delimiters. Nonetheless, some questions lack suitable supporting evidence in the lesson. For example, students may rely on common sense (instead of textbook context) to answer the question, \"\u73ed\u4e0a\u540c\u5b78\u6709\u4eba\u4e82\u4e1f\u5783\u573e\uff0c\u8eab\u70ba\u885b\u751f \u80a1\u9577\u7684\u5c0f\u7389\u53ef\u4ee5\u600e\u9ebc\u505a\uff1f (1) \u9ed8\u9ed8\u7684\u8ddf\u5728\u4ed6\u5011\u5f8c\u9762\u64bf\u5783\u573e (2) \u52f8\u544a\u4e82\u4e1f\u5783\u573e\u7684\u540c\u5b78\uff0c\u4e26 \u8acb\u4ed6\u5011\u5c07\u5783\u573e\u64bf\u8d77\u4f86 (3) \u6c92\u95dc\u4fc2\uff0c\u7b49\u6253\u6383\u6642\u9593\u518d\u6383\u5c31\u597d\u4e86 (4) \u628a\u5783\u573e\u85cf\u5728\u770b\u4e0d\u898b\u7684\u5730\u65b9 (What can Xiaoyu (the Chief of Health) do when her classmate litters? (1) Pick up trash after them silently; (2) Advise the classmate who litters and ask him/her to pick up the litter; (3) It doesn't matter, just wait until the cleaning time; or (4) Hide litter out of sight)\". In such cases, annotators found no suitable supporting words in the lesson and thus skipped SE annotation. Afterward, sentences that contain marked supporting words were annotated as supporting evidence. Table 3 shows the final results of SE annotation. Figure 5 shows an example of multiple-choice question annotation. 
Annotators first read both the question (qtext) and the correct answer (answer) from the right-hand side windows, and then highlight supporting words (marked with purple boxes) in the lesson. To prevent annotators from highlighting supporting word regions across sentences, we use special symbols as separators (||| for paragraphs and || for sentences).",
"cite_spans": [
{
"start": 16,
"end": 39,
"text": "(Nakayama et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1352,
"end": 1359,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1402,
"end": 1410,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Supporting Evidence (SE) Annotation",
"sec_num": "3.2"
},
{
"text": "We conducted experiments on the above CSSMC dataset with the three proposed approaches. Table 4 shows the dataset statistics. For comparison, we used a typical BERT multiple-choice implementation (Xu et al., 2019) as our baseline.",
"cite_spans": [
{
"start": 196,
"end": 213,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "For the baseline, we used the BERT-MC model, that is, BERT (Devlin et al., 2019) fine-tuned for the multiple-choice task as our baseline, as it is the most widely adopted state-of-the-art model (Xu et al., 2019) . It was built by exporting BERT's final hidden layer into a linear layer and then taking a softmax operation. For details on the BERT-MC model, please see Xu et al. (2019) . The BERT input sequence consists of \"[CLS] SE [SEP] Question [SEP] Option-#i [SEP]\", where Option-#i denotes the i-th option and [CLS] and [SEP] are special tokens representing the classification and the passage separators, respectively, as defined in Devlin et al. (2019) . Figure 6 shows the architecture of the BERT baseline model. ",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 194,
"end": 211,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 368,
"end": 384,
"text": "Xu et al. (2019)",
"ref_id": "BIBREF29"
},
{
"start": 639,
"end": 659,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 662,
"end": 670,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Baseline: BERT-MC",
"sec_num": "4.1"
},
{
"text": "SE is the corresponding shortest passage based on which the system can answer the given question. Given the annotation results described in Section 3.2, we find many questions that involve common-sense reasoning, for which no corresponding SEs can be found in the retrieved lesson. We denote as SE1 that set of questions for which SEs can be found in the retrieved lesson (this is termed GSE1 if it is also associated with gold SEs); the set of remaining questions is SE2. Table 5 shows the statistics for GSE1. ",
"cite_spans": [],
"ref_spans": [
{
"start": 473,
"end": 480,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Retrieved Supporting Evidence (SE) Dataset",
"sec_num": "4.2"
},
{
"text": "We conducted two sets of experiments on the CSSMC dataset: (i) GSE1, based on SE1 with gold SEs, to compare the QA component performance of different models; and (ii) LSE, based on the whole dataset with all SEs directly retrieved from the Lucene search engine, to compare different approaches under a real-world situation. Each set covers six different models: (1) BERT-MC Only, (2) SJS, (3) SJCS, (4) BERT-MC+Neg, (5) BERT-MC+AllAbv&NonAbv, and (6) BERT-MC+Neg+AllAbv&NonAbv, where BERT-MC Only is the baseline model and Neg and AllAbv&NonAbv denote additional answer-selector and preprocessor modules for the negation and all-of-the-above/none-of-the-above question-types, respectively. We adopted the setting specified in Xu et al. (2019) for BERT training. All other models were trained using the following hyperparameters: (1) a maximum sequence length of 300; (2) a learning rate of 5e-5 with the AdamW optimizer (Loshchilov & Hutter, 2019) ; (3) 3 to 5 epochs. Table 6 compares the accuracy of various approaches; we report test set performance using the settings that corresponded to the best dev set performance.",
"cite_spans": [
{
"start": 726,
"end": 742,
"text": "Xu et al. (2019)",
"ref_id": "BIBREF29"
},
{
"start": 920,
"end": 947,
"text": "(Loshchilov & Hutter, 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 969,
"end": 976,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this scenario we sought to evaluate the QA component performance of six different models on the GSE1 subset (i.e., with gold SEs). The GSE1 column in Table 6 gives the test set accuracy rates of various approaches. As the SJS model has special handling for negation and \"\u4ee5\u4e0a\u7686\u662f (all-of-the-above)\" or \"\u4ee5\u4e0a\u7686\u975e (none-of-the-above)\" questions, it yields better performance than BERT-MC Only (0.862 vs. 0.849). The SJCS model further replaces the \"\u4ee5 \u4e0a \u7686 \u662f (all-of-the-above) \" and \" \u4ee5 \u4e0a \u7686 \u975e (none-of-the-above) \" options with the concatenation of the three other options. However, this degrades the baseline performance significantly, from 0.849 to 0.822. This is because the \"\u4ee5\u4e0a\u7686\u662f (all-of-the-above)\" and b GSE1-Neg: Only negation-type questions within GSE1.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "4.3.1"
},
{
"text": "c GSE1-AllAbv&NonAbv: Only AllAbv&NonAbv-type questions within GSE1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "4.3.1"
},
{
"text": "d LSE: <SE1+SE2> with all SEs retrieved from the Lucene search engine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "4.3.1"
},
{
"text": "\"\u4ee5\u4e0a\u7686\u975e (none-of-the-above)\" options are closely related to the other three options. However, as it considers the concatenation option and the other three options independently, or separately, without using a complicated decision tree (specified in Figure 3 ), this approach is unable to take such correlation into account.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "4.3.1"
},
{
"text": "The JJS model (i.e., the last row in Table 6 ) addresses this problem by considering all of the options together simultaneously. Table 6 shows that it considerably outperforms the SJCS model by 5.7% (87.9% -82.2%) on the test set, which shows that jointly processing all options together is essential after the concatenation step. The BERT-MC+Neg and BERT-MC+AllAbv&NonAbv models are also evaluated as an ablation analysis. Table 6 indicates they also outperform the BERT-MC only baseline by 2.1% (87.0% -84.9%) and 3.0% (87.9% -84.9%) on the test set, respectively, which shows the necessity of both the preprocessor and answer-selector modules.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 129,
"end": 136,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 424,
"end": 431,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "4.3.1"
},
{
"text": "Last, to explore the effects of the proposed approaches on specific question types, we conducted two additional experiments on two GSE1 subsets: (1) the Neg-type only subset, which contains only negation questions, to compare the performance between the BERT-MC only and BERT-MC+Neg approaches to evaluate the effectiveness of the answer-selector module; (2) the AllAbv&NonAbv only subset, which contains only AllAbv or NonAbv questions, to compare the BERT-MC only and BERT-MC+AllAbv&NonAbv approaches to evaluate the effectiveness of the proposed preprocessor. Table 6 clearly shows Table 7 . Error case of \"BERT-MC+Neg\" on \"GSE1-Neg\" subset.",
"cite_spans": [],
"ref_spans": [
{
"start": 563,
"end": 593,
"text": "Table 6 clearly shows Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Jointly Judge then Select (JJS)",
"sec_num": "4.3.1"
},
{
"text": "Question: \u6e05\u671d\u7d71\u6cbb\u81fa\u7063\u6642\u671f\uff0c\u600e\u6a23\u7684\u4eba\u61c9\u8a72\u6bd4\u8f03\u6c92\u6709\u5171\u540c\u7684\u8840\u7de3\uff1f Options: (1)\u53c3\u52a0\u540c\u4e00\u500b\u5b97\u89aa\u6703 (2)\u53c3\u52a0\u540c\u4e00\u500b\u796d\u7940\u516c\u696d (3)\u53c3\u52a0\u540c\u4e00\u500b\u300c\u90ca\u300d (4) \u5728\u540c\u4e00\u5ea7\u5b97\u7960\u796d\u7940\u7956\u5148 Table 8 . Error case of \"BERT-MC+AllAbv&NonAbv\" on \"GSE1-AllAbv&NonAbv\" subset.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "SEs: \u53e6\u5916\uff0c\u96a8\u8457\u5546\u696d\u8208\u76db\uff0c\u5728\u5e9c\u57ce\u3001\u9e7f\u6e2f\u3001\u824b\u823a\u7b49\u5927\u57ce\u5e02\uff0c\u4e5f\u51fa\u73fe\u7531\u5546\u4eba\u7d44\u6210\u7684\u300c\u90ca\u300d\u3002 \u300c\u90ca\u300d\u985e\u4f3c\u73fe\u4ee3\u540c\u696d\u516c\u6703\uff0c\u6210\u54e1\u9664\u4e86\u7d93\u71df\u8cbf\u6613\u5916\uff0c\u4e5f\u7a4d\u6975\u53c3\u8207\u5730\u65b9\u7684\u516c\u5171\u4e8b\u52d9\u3002",
"sec_num": null
},
{
"text": "Question: \"\u5fd7\u5fe0\u5bb6\u9644\u8fd1\u6709\u4e00\u9593\u5de5\u5ee0\uff0c\u6642\u5e38\u5c07\u672a\u7d93\u8655\u7406\u7684\u6c59\u6c34\u6392\u5165\u6cb3\u5ddd\u4e2d\uff0c\u9019\u6a23\u53ef\u80fd\u6703\u9020 \u6210\u4ec0\u9ebc\u5f8c\u679c\uff1f\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SE:\u5de5\u696d\u751f\u7522\u5982\u679c\u6c92\u6709\u9069\u7576\u8655\u7406\uff0c\u5f88\u5bb9\u6613\u7834\u58de\u5468\u906d\u74b0\u5883\uff0c\u9020\u6210\u7a7a\u6c23\u6c59\u67d3\u3001\u566a\u97f3\u6c59\u67d3\u3001\u6c34 \u8cea\u6c59\u67d3\u3001\u571f\u5730\u6c59\u67d3\u7b49\u3002\u4f8b\u5982\uff1a\u5de5\u696d\u5ee2\u6c34\u6216\u662f\u5bb6\u5ead\u6c59\u6c34\u76f4\u63a5\u6392\u5165\u6cb3\u6d41\uff0c\u4e0d\u50c5\u5371\u5bb3\u6cb3 \u6d41\u751f\u614b\uff0c\u6709\u6bd2\u7269\u8cea\u5982\u679c\u6d41\u5165\u5927\u6d77\uff0c\u901a\u904e\u98df\u7269\u93c8\u9032\u5165\u4eba\u9ad4\uff0c\u66f4\u6703\u56b4\u91cd\u640d\u5bb3\u5065\u5eb7\u3002",
"sec_num": null
},
{
"text": "Options: (1)\u7a7a\u6c23\u6c59\u67d3 (2)\u566a\u97f3 (3)\u6c34\u8cea\u6c59\u67d3 (4)\u4ee5\u4e0a\u7686\u662f that the preprocessor (GSE1-Neg column) and answer-selector (GSE1-AllAbv&NonAbv column) modules effectively enhance BERT-MC on these two subsets (from 20% to 40%, and from 64.3% to 83.9%, respectively). The above experiments sufficiently demonstrate the effectiveness of our proposed approaches (unnecessary combinations are marked \"NA\" in Table 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "SE:\u5de5\u696d\u751f\u7522\u5982\u679c\u6c92\u6709\u9069\u7576\u8655\u7406\uff0c\u5f88\u5bb9\u6613\u7834\u58de\u5468\u906d\u74b0\u5883\uff0c\u9020\u6210\u7a7a\u6c23\u6c59\u67d3\u3001\u566a\u97f3\u6c59\u67d3\u3001\u6c34 \u8cea\u6c59\u67d3\u3001\u571f\u5730\u6c59\u67d3\u7b49\u3002\u4f8b\u5982\uff1a\u5de5\u696d\u5ee2\u6c34\u6216\u662f\u5bb6\u5ead\u6c59\u6c34\u76f4\u63a5\u6392\u5165\u6cb3\u6d41\uff0c\u4e0d\u50c5\u5371\u5bb3\u6cb3 \u6d41\u751f\u614b\uff0c\u6709\u6bd2\u7269\u8cea\u5982\u679c\u6d41\u5165\u5927\u6d77\uff0c\u901a\u904e\u98df\u7269\u93c8\u9032\u5165\u4eba\u9ad4\uff0c\u66f4\u6703\u56b4\u91cd\u640d\u5bb3\u5065\u5eb7\u3002",
"sec_num": null
},
{
"text": "The remaining errors in the GSE1-Neg and GSE1-AllAbv&NonAbv subsets are mainly due to that answering those questions requires further inference capability. Table 7 shows that we need to know that \"\u5546\u4eba (businessmen)\" are people without \"\u5171\u540c\u7684\u8840\u7de3 (blood relations)\". Similarly, Table 8 shows that we need to know that \"\u672a\u7d93\u8655\u7406\u7684\u6c59\u6c34\u6392\u5165\u6cb3\u5ddd (untreated sewage discharged into the river)\" causes \"\u6c34\u8cea\u6c59\u67d3 (water pollution)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 7",
"ref_id": null
},
{
"start": 272,
"end": 279,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "SE:\u5de5\u696d\u751f\u7522\u5982\u679c\u6c92\u6709\u9069\u7576\u8655\u7406\uff0c\u5f88\u5bb9\u6613\u7834\u58de\u5468\u906d\u74b0\u5883\uff0c\u9020\u6210\u7a7a\u6c23\u6c59\u67d3\u3001\u566a\u97f3\u6c59\u67d3\u3001\u6c34 \u8cea\u6c59\u67d3\u3001\u571f\u5730\u6c59\u67d3\u7b49\u3002\u4f8b\u5982\uff1a\u5de5\u696d\u5ee2\u6c34\u6216\u662f\u5bb6\u5ead\u6c59\u6c34\u76f4\u63a5\u6392\u5165\u6cb3\u6d41\uff0c\u4e0d\u50c5\u5371\u5bb3\u6cb3 \u6d41\u751f\u614b\uff0c\u6709\u6bd2\u7269\u8cea\u5982\u679c\u6d41\u5165\u5927\u6d77\uff0c\u901a\u904e\u98df\u7269\u93c8\u9032\u5165\u4eba\u9ad4\uff0c\u66f4\u6703\u56b4\u91cd\u640d\u5bb3\u5065\u5eb7\u3002",
"sec_num": null
},
{
"text": "Since the gold SE is not available for real-world applications, this scenario compares the system performance of different models in a real-world situation. That is, we evaluated various models with all the SEs retrieved from a search engine (i.e., Apache Lucene (https://lucene.apache.org/)). Furthermore, to support those questions for which no associated SEs from the lessons (i.e., the SE2 subset), we used Wikipedia as an external knowledge resource to provide SEs when possible. We first used Lucene to search the Taiwan elementary-school social studies textbook and Wikipedia separately to yield two different SEs, after which we constructed a fused SE by concatenating these two SEs with the format \"Textbook-SE [SEP] Wiki-SE\" where Textbook-SE and Wiki-SE denote the two SEs retrieved from the textbook and Wikipedia, respectively. Question: \"\u5c0f\u82b1\u5728\u8d85\u5e02\u8cb7\u5230\u904e\u671f\u7684\u9905\u4e7e\uff0c\u8acb\u554f\u8a72\u8d85\u5e02\u7684\u8ca9\u552e\u884c\u70ba\u9055 \u53cd\u4ec0\u9ebc\u6cd5\u5f8b\uff1f\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSE (SE1+SE2 with all SEs retrieved from Lucene)",
"sec_num": "4.3.2"
},
{
"text": "Options: (1)\u5211\u6cd5 (2)\u61b2\u6cd5 (3)\u6559\u80b2\u57fa\u672c\u6cd5 (4)\u98df\u54c1\u5b89\u5168\u885b\u751f\u7ba1\u7406\u6cd5 Experimental results (the LSE column in Table 6 ) show that both the preprocessor and the answer selector effectively supplement BERT-MC; performance is improved further when they are jointly adopted (3.3% = 72.5% -69.2%). Furthermore, the accuracy of the BERT-MC only model on LSE is significantly lower than that on GSE1 (69.2% vs. 84.9%), which clearly illustrates that extracting good SEs is essential in QA tasks. Last, to show the influence of incorporating Wikipedia, we conducted an experiment in which we used only Lucene to search the textbook. The BERT-MC+Neg+AllAbv&NonAbv model now drops to 70.4% (not shown in Table 6 ) from 72.5%, which shows that Wikipedia provides the required common sense for some cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 667,
"end": 674,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "LSE (SE1+SE2 with all SEs retrieved from Lucene)",
"sec_num": "4.3.2"
},
{
"text": "We randomly selected 40 error cases from the test set of the BERT-MC+Neg+AllAbv&NonAbv model under the \"all SEs retrieved from Lucene\" scenario. We found that all errors come from two sources: (1) the correct support evidence was not retrieved (52%), and (2) the answer requires deep inference (48%). Table 9 shows an example for each category. For the first example, the retrieved SE is irrelevant to the question; our model thus fails to produce the correct answer. The second example illustrates that the model requires further inference capability to know that both \"\u725b\u5976\u7684\u4fdd\u5b58\u671f\u9650\u904e\u4e86\u6c92 (Has the milk expired?)\" and \"\u5728\u8d85\u5e02\u8cb7\u5230\u904e\u671f\u7684\u9905\u4e7e (I bought expired cookies in the supermarket)\" are similar events related to \"\u98df\u54c1\u5b89\u5168\u885b\u751f\u7ba1\u7406\u6cd5 (Act Governing Food Safety and Sanitation)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 9",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Error Analysis and Discussion",
"sec_num": "5."
},
{
"text": "Before 2015, most work on entailment judgment adopted statistical approaches (Kouylekov & Magnini, 2005; Heilman & Smith, 2010) . In subsequent work, neural network models were widely adopted due to the availability of large datasets such as RACE (Lai et al., 2017) and SNLI (Bowman et al., 2015) . Parikh et al. (2017) propose the first alignment-and-attention mechanism, achieving state-of-the-art (SOTA) results on the SNLI dataset. Chen et al. (2017) further propose a sequential inference model based on chain LSTMs which outperforms previous models. In recent work, pre-trained language models such as BERT (Devlin et al., 2019) , XLNET (Yang et al., 2019) , RoBERTa and ALBERT (Lan et al., 2019) yield superior performance on MC RC tasks. However, these results are obtained mainly by utilizing surface features (Jiang & Marneffe, 2019) . Besides, Zhang et al. (2020) propose a dual co-matching network to model relationships among passages, questions, and answer candidates to achieve SOTA results for MC questions. Also, Jin et al. (2020) propose two-stage transfer learning for coarse-tuning on out-of-domain datasets and fine-tuning on larger in-domain datasets to further improve performance. In comparison with those previous approaches, instead of adopting a new inference NN, our proposed approaches supplement the original BERT with additional modules to address two specific problems that BERT handles poorly.",
"cite_spans": [
{
"start": 77,
"end": 104,
"text": "(Kouylekov & Magnini, 2005;",
"ref_id": "BIBREF12"
},
{
"start": 105,
"end": 127,
"text": "Heilman & Smith, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 247,
"end": 265,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 296,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 299,
"end": 319,
"text": "Parikh et al. (2017)",
"ref_id": null
},
{
"start": 436,
"end": 454,
"text": "Chen et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 613,
"end": 634,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 643,
"end": 662,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 684,
"end": 702,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 819,
"end": 843,
"text": "(Jiang & Marneffe, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 855,
"end": 874,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF31"
},
{
"start": 1030,
"end": 1047,
"text": "Jin et al. (2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "We present several novel approaches to supplement BERT with additional modules to address problems with three specific types of questions that BERT-MC handles poorly (i.e., negation, all-of-the-above, and none-of-the-above). The proposed approach constitutes a new way to enhance a complicated DNN model with additional modules to pinpoint problems found in error analysis. Experimental results show the proposed approaches effectively improve performance, and thus demonstrate the feasibility of supplementing BERT with additional modules to fix specific problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "S",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bowman, S. R., Angeli, G., Potts, C. & Manning, C. D. (2015). A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 632-642.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural Reading Comprehension and Beyond. (Doctoral Dissertation)",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, D. (2018). Neural Reading Comprehension and Beyond. (Doctoral Dissertation). Stanford Univ..",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enhanced LSTM for Natural Language Inference",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1657--1668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Q., Zhu, X., Ling, Z., Wei, S., Jiang, H. & Inkpen, D. (2017). Enhanced LSTM for Natural Language Inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 1657-1668.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The PASCAL Recognising Textual Entailment Challenge",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {
"DOI": [
"10.1007/11736790_9"
]
},
"num": null,
"urls": [],
"raw_text": "Dagan, I., Glickman, O., & Magnini, B. (2005) The PASCAL Recognising Textual Entailment Challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, Springer, 177-190. https://doi.org/10.1007/11736790_9",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M. W., Lee, K. & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186. https://doi.org/10.18653/v1/N19-1423",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Tree edit models for recognizing textual entailments, paraphrases, and answers to questions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1011--1019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heilman, M. & Smith, N. A. (2010). Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 1011-1019.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Evaluating BERT for natural language inference: A case study on the CommitmentBank",
"authors": [
{
"first": "N",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "M.-C",
"middle": [
"D"
],
"last": "Marneffe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "6086--6091",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, N. & Marneffe, M.-C. D. (2019). Evaluating BERT for natural language inference: A case study on the CommitmentBank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 6086-6091.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "MMM: Multi-stage multi-task learning for multi-choice reading comprehension",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "J",
"middle": [
"Y"
],
"last": "Kao",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "8010--8017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin, D., Gao, S., Kao, J. Y., Chung, T., & Hakkani-tur, D. (2020). MMM: Multi-stage multi-task learning for multi-choice reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8010-8017.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Recognizing textual entailment with tree edit distance algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kouylekov",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the First Challenge Workshop Recognising Textual Entailment",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kouylekov, M. & Magnini, B. (2005). Recognizing textual entailment with tree edit distance algorithms. In Proceedings of the First Challenge Workshop Recognising Textual Entailment 2005, 17-20.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "RACE: Large-scale ReAding Comprehension Dataset From Examinations",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lai, G., Xie, Q., Liu, H., Yang, Y. & Hovy, E. (2017). RACE: Large-scale ReAding Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 785-794.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P. & Soricut, R. (2019). ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Answering Chinese Elementary School Social Study Multiple Choice Questions",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [
"C"
],
"last": "Liang",
"suffix": ""
},
{
"first": "K",
"middle": [
"Y"
],
"last": "Su",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 International Conference on Technologies and Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, D., Liang, C. C. & Su, K. Y. (2020). Answering Chinese Elementary School Social Study Multiple Choice Questions. In Proceedings of the 2020 International Conference on Technologies and Applications of Artificial Intelligence.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How Fast can BERT Learn Simple Natural Language Inference?",
"authors": [
{
"first": "Y",
"middle": [
"C"
],
"last": "Lin",
"suffix": ""
},
{
"first": "K",
"middle": [
"Y"
],
"last": "Su",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "626--633",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.51"
]
},
"num": null,
"urls": [],
"raw_text": "Lin, Y.C. & Su, K.Y. (2021). How Fast can BERT Learn Simple Natural Language Inference? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, 626-633. https://doi.org/10.18653/v1/2021.eacl-main.51",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural Machine Reading Comprehension",
"authors": [
{
"first": "S",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Methods and Trends. Applied Sciences",
"volume": "9",
"issue": "18",
"pages": "",
"other_ids": {
"DOI": [
"10.3390/app9183698"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, S., Zhang, X., Zhang, S., Wang, H., & Zhang, W. (2019). Neural Machine Reading Comprehension: Methods and Trends. Applied Sciences, 9(18), 3698. https://doi.org/10.3390/app9183698",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Decoupled Weight Decay Regularization",
"authors": [
{
"first": "I",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Loshchilov, I. & Hutter, F. (2019). Decoupled Weight Decay Regularization. In Proceedings of International Conference on Learning Representations 2019.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Doccano: Text annotation tool for human",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kubo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kamura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Taniguchi",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakayama, H., Kubo, T., Kamura, J., Taniguchi, Y., & Liang, X. (2018). Doccano: Text annotation tool for human. Software available from https://github.com/doccano/doccano",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Decomposable Attention Model for Natural Language Inference",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Tackstrom",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parikh, A. P., Tackstrom, O., Das, D. & Uszkoreit, J. (2016). A Decomposable Attention Model for Natural Language Inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2249-2255.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Survey on Neural Machine Reading Comprehension",
"authors": [
{
"first": "B",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.03824"
]
},
"num": null,
"urls": [],
"raw_text": "Qiu, B., Chen, X., Xu, J., & Sun, Y. (2019). A Survey on Neural Machine Reading Comprehension. arXiv preprint arXiv:1906.03824.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajpurkar, P., Zhang, J., Lopyrev, K. & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2383-2392.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding",
"authors": [
{
"first": "A",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Blackbox NLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Wang, A., Singh, A., Michael, J., Hill, F., Levy, O. & Bowman, S. (2018). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop Blackbox NLP: Analyzing and Interpreting Neural Networks for NLP, 353-355. https://doi.org/10.18653/v1/W18-5446",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Making Negation-word Entailment Judgment via Supplementing BERT with Aggregative Pattern",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Wu",
"suffix": ""
},
{
"first": "K",
"middle": [
"Y"
],
"last": "Su",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Technologies and Applications of Artificial Intelligence",
"volume": "2020",
"issue": "",
"pages": "17--22",
"other_ids": {
"DOI": [
"10.1109/TAAI51410.2020.00012"
]
},
"num": null,
"urls": [],
"raw_text": "Wu, T. M. & Su, K. Y. (2020). Making Negation-word Entailment Judgment via Supplementing BERT with Aggregative Pattern. In International Conference on Technologies and Applications of Artificial Intelligence (TAAI 2020), 17-22. https://doi.org/10.1109/TAAI51410.2020.00012",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A BERT based model for Multiple-Choice Reading Comprehension",
"authors": [
{
"first": "K",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, K., Tin, J., & Kim, J. (2019). A BERT based model for Multiple-Choice Reading Comprehension. Retrieved from http://cs229.stanford.edu/proj2019spr/report/72.pdf",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Q",
"middle": [
"C"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Advances in neural information processing systems 32 (NIPS 2019",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. & Le, Q. C. (2019). XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Proceedings of Advances in neural information processing systems 32 (NIPS 2019), 5753-5763.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "DCMN+: Dual co-matching network for multi-choice reading comprehension",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9563--9570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, S., Zhao, H., Wu, Y., Zhang, Z., Zhou, X., & Zhou, X. (2020). DCMN+: Dual co-matching network for multi-choice reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9563-9570.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Architecture of proposed SJS approach.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Decision tree for SJS approach. Each \"act-xxx\" is a specific action to be taken.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Figure 3shows the examples under two different inference mechanisms: (1) for a negation-type question (left figure), and (2) a question with all of the above option (right figure).",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Two inference mechanisms under SJS framework.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "System architecture of proposed \"Jointly Judge then Select\" framework.",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Multiple-choice question annotation.",
"uris": null
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"text": "The architecture of the BERT-MC model(Xu et al., 2019).",
"uris": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Passage</td><td>\u4e09\u4ee3\u540c\u5802\u5bb6\u5ead\u662f\u5b50\u5973\u548c\u7236\u6bcd\u3001\u7956\u7236\u6bcd\u6216\u5916</td></tr><tr><td/><td>\u7956\u7236\u6bcd\u540c\u4f4f\u3002</td></tr><tr><td>Question</td><td>\u300c\u6211\u548c\u7238\u7238\u3001\u5abd\u5abd\u3001\u723a\u723a\u3001\u5976\u5976\u4f4f\u5728\u4e00\u8d77\u3002\u300d</td></tr><tr><td/><td>\u662f\u5c6c\u65bc\u54ea\u4e00\u7a2e\u985e\u578b\u7684\u5bb6\u5ead\uff1f</td></tr><tr><td>Options</td><td>(1) \u4e09\u4ee3\u540c\u5802\u5bb6\u5ead</td></tr><tr><td/><td>(2) \u55ae\u89aa\u5bb6\u5ead</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Problem type</td><td>Questions</td></tr><tr><td>Negation</td><td>Question: \u6d69\u6d69\u8ddf\u5bb6\u4eba\u5230\u81fa\u6771\u7e23\u95dc\u5c71\u93ae\u904a\u73a9\uff0c\u4ed6\u4e0d\u53ef\u80fd\u5728\u7576\u5730\u770b</td></tr><tr><td/><td>\u5230\u4ec0\u9ebc\uff1f</td></tr><tr><td/><td>Options: (1)\u963f\u7f8e\u65cf\u8c50\u5e74\u796d (2)\u74b0\u93ae\u81ea\u884c\u8eca\u9053 (3)\u6cb9\u6850\u82b1\u5a5a\u79ae</td></tr><tr><td/><td>(4)\u89aa\u6c34\u516c\u5712</td></tr><tr><td>All of the</td><td>Question: \u5728\u9ad8\u9f61\u5316\u7684\u793e\u6703\u88e1\uff0c\u6211\u5011\u61c9\u8a72\u5982\u4f55\u56e0\u61c9\u9ad8\u9f61\u5316\u793e\u6703\u7684</td></tr><tr><td>above</td><td>\u5230\u4f86\uff1f</td></tr><tr><td/><td>Options: (1)\u5236\u5b9a\u8001\u4eba\u798f\u5229\u653f\u7b56 (2)\u63d0\u4f9b\u826f\u597d\u7684\u5b89\u990a\u7167\u9867 (3)</td></tr><tr><td/><td>\u5efa\u7acb\u5065\u5168\u7684\u91ab\u7642\u9ad4\u7cfb (4)\u4ee5\u4e0a\u7686\u662f</td></tr><tr><td>None of the</td><td>Question: \u90fd\u5e02\u6709\u516c\u5171\u8a2d\u65bd\u5b8c\u5584\u3001\u5de5\u4f5c\u6a5f\u6703\u591a\u7b49\u512a\u9ede\uff0c\u5e38\u5438\u5f15\u9109</td></tr><tr><td>above</td><td>\u6751\u5730\u5340\u54ea\u4e00\u7a2e\u5e74\u9f61\u5c64\u7684\u5c45\u6c11\u524d\u5f80\uff1f</td></tr><tr><td/><td>Options:</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>Subset</td><td>Training</td><td>Dev</td><td>Test</td></tr><tr><td>Questions</td><td>3,879</td><td>780</td><td>778</td></tr><tr><td>Questions w/ SE</td><td>3,135</td><td>604</td><td>563</td></tr><tr><td>Questions w/o SE</td><td>744</td><td>176</td><td>215</td></tr><tr><td>Averaged SPs</td><td>1.09</td><td>1.16</td><td>1.14</td></tr><tr><td>Averaged SSs</td><td>3.17</td><td>2.94</td><td>2.73</td></tr><tr><td/><td colspan=\"3\">*Questions w/o SE: the number of questions without supporting evidence</td></tr><tr><td/><td colspan=\"3\">Averaged SPs: the average number of Supporting Paragraphs</td></tr><tr><td/><td colspan=\"3\">Averaged SSs: the average number of Supporting Sentences</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td/><td>Training</td><td>Dev</td><td>Test</td></tr><tr><td>Lessons</td><td>202</td><td>27</td><td>26</td></tr><tr><td>Questions</td><td>3,879</td><td>780</td><td>778</td></tr><tr><td>Averaged paragraphs/lesson</td><td>11.28</td><td>13.93</td><td>10.93</td></tr><tr><td>#Averaged entences/lesson</td><td>46.40</td><td>52.67</td><td>46.33</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td/><td>Training</td><td>Dev</td><td>Test</td></tr><tr><td>Lessons</td><td>196</td><td>27</td><td>26</td></tr><tr><td>Questions</td><td>3,135</td><td>604</td><td>563</td></tr><tr><td>( NEG a )</td><td>(53)</td><td>(14)</td><td>(15)</td></tr><tr><td>( AllAbv&amp;NonAbv b )</td><td>(332)</td><td>(69)</td><td>(56)</td></tr><tr><td>Averaged paragraphs/lesson</td><td>11.35</td><td>13.93</td><td>10.85</td></tr><tr><td>Averaged sentences/ Lesson</td><td>46.72</td><td>52.67</td><td>46.15</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td>GSE1 a</td><td colspan=\"2\">GSE1-Neg b GSE1-AllAbv&amp;NonAbv c</td><td>LSE d</td></tr><tr><td>BERT-MC</td><td>only</td><td/><td/><td/></tr><tr><td>(baseline)</td><td/><td>0.849</td><td>0.200</td><td>0.643</td><td>0.692</td></tr><tr><td>SJS</td><td/><td>0.862</td><td>NA</td><td>NA</td><td>0.694</td></tr><tr><td>SJCS</td><td/><td>0.822</td><td>NA</td><td>NA</td><td>0.661</td></tr><tr><td>BERT-MC</td><td/><td/><td/><td/></tr><tr><td>+ Neg</td><td/><td>0.870</td><td>0.400</td><td>NA</td><td>0.695</td></tr><tr><td>BERT-MC</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">+ AllAbv&amp;NonAbv</td><td>0.879</td><td>NA</td><td>0.839</td><td>0.719</td></tr><tr><td>BERT-MC</td><td/><td/><td/><td/></tr><tr><td>+ Neg</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">+ AllAbv&amp;NonAbv</td><td/><td/><td/></tr><tr><td>(also JJS)</td><td/><td>0.879</td><td>NA</td><td>NA</td><td>0.725</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>Error Type</td><td>Questions</td></tr><tr><td>Incorrect</td><td>Wrong SE: \u6e05\u671d\u7d71\u6cbb\u81fa\u7063\u521d\u671f\uff0c\u6f22\u4eba\u6e21\u6d77\u4f86\u81fa\u5f8c\uff0c\u5f80\u5f80\u540c\u9109\u4eba\u805a</td></tr><tr><td>supporting</td><td>\u5c45\u5728\u4e00\u8d77\uff0c\u4e26\u4e14\u5efa\u7bc9\u5edf\u5b87\u4f9b\u5949\u5171\u540c\u4fe1\u4ef0\u7684\u795e\u660e\u3002</td></tr><tr><td>evidence</td><td>Question: \u81fa\u7063\u6709\u8a31\u591a\u5f9e\u4e2d\u570b\u79fb\u6c11\u4f86\u7684\u6f22\u4eba\uff0c\u4f86\u81fa\u8981\u6e21\u904e\u5371\u96aa\u7684\u81fa</td></tr><tr><td>(52%)</td><td>\u7063\u6d77\u5cfd\uff0c\u6240\u4ee5\u4ec0\u9ebc\u795e\u660e\u5c31\u88ab\u6240\u6709\u79fb\u6c11\u6240\u5171\u540c\u4fe1\u4ef0\uff1f</td></tr><tr><td/><td>Options: (1)\u95dc\u516c (2)\u571f\u5730\u516c (3)\u5abd\u7956 (4)\u4e09\u5c71\u570b\u738b</td></tr><tr><td>Requires</td><td>SE: \u5211\u6cd5\u5c0d\u50b7\u5bb3\u4ed6\u4eba\u7684\u884c\u70ba\u52a0\u4ee5\u8655\u7f70\uff1b\u6c11\u6cd5\u5247\u4ee5\u640d\u5bb3\u8ce0\u511f\u7684\u65b9\u5f0f\uff0c</td></tr><tr><td>advanced inference capability</td><td>\u8acb\u554f\u725b\u5976\u7684\u4fdd\u5b58\u671f\u9650\u904e\u4e86\u6c92\uff1f(\u76f8\u95dc\u6cd5\u5f8b\uff1a\u6c11\u6cd5\u3001\u6d88\u8cbb\u8005\u4fdd\u8b77\u6cd5\u3001 \u98df\u54c1\u5b89\u5168\u885b\u751f\u7ba1\u7406\u6cd5)</td></tr><tr><td>(48%)</td><td/></tr></table>",
"num": null,
"html": null,
"text": ""
}
}
}
}