ACL-OCL / Base_JSON /prefixB /json /bea /2021.bea-1.17.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:56.565738Z"
},
"title": "Automatically Generating Cause-and-Effect Questions from Passages",
"authors": [
{
"first": "Katherine",
"middle": [],
"last": "Stasaski",
"suffix": "",
"affiliation": {},
"email": "katiestasaski@berkeley.edu"
},
{
"first": "Manav",
"middle": [],
"last": "Rathod",
"suffix": "",
"affiliation": {},
"email": "manav.rathod@berkeley.edu"
},
{
"first": "Tony",
"middle": [],
"last": "Tu",
"suffix": "",
"affiliation": {},
"email": "tonytu16@berkeley.edu"
},
{
"first": "Yunfang",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": "",
"affiliation": {},
"email": "hearst@berkeley.edu"
},
{
"first": "U",
"middle": [
"C"
],
"last": "Berkeley",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automated question generation has the potential to greatly aid in education applications, such as online study aids to check understanding of readings. The state-of-the-art in neural question generation has advanced greatly, due in part to the availability of large datasets of question-answer pairs. However, the questions generated are often surface-level and not challenging for a human to answer. To develop more challenging questions, we propose the novel task of cause-and-effect question generation. We build a pipeline that extracts causal relations from passages of input text, and feeds these as input to a state-of-the-art neural question generator. The extractor is based on prior work that classifies causal relations by linguistic category (Cao et al., 2016; Altenberg, 1984). This work results in a new, publicly available collection of cause-and-effect questions. We evaluate via both automatic and manual metrics and find performance improves for both question generation and question answering when we utilize a small auxiliary data source of cause-and-effect questions for finetuning. Our approach can be easily applied to generate cause-and-effect questions from other text collections and educational material, allowing for adaptable large-scale generation of cause-and-effect questions.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automated question generation has the potential to greatly aid in education applications, such as online study aids to check understanding of readings. The state-of-the-art in neural question generation has advanced greatly, due in part to the availability of large datasets of question-answer pairs. However, the questions generated are often surface-level and not challenging for a human to answer. To develop more challenging questions, we propose the novel task of cause-and-effect question generation. We build a pipeline that extracts causal relations from passages of input text, and feeds these as input to a state-of-the-art neural question generator. The extractor is based on prior work that classifies causal relations by linguistic category (Cao et al., 2016; Altenberg, 1984). This work results in a new, publicly available collection of cause-and-effect questions. We evaluate via both automatic and manual metrics and find performance improves for both question generation and question answering when we utilize a small auxiliary data source of cause-and-effect questions for finetuning. Our approach can be easily applied to generate cause-and-effect questions from other text collections and educational material, allowing for adaptable large-scale generation of cause-and-effect questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automated question generation (QG) can have a large educational impact since questions can be generated to check learners' comprehension and understanding of textbooks or other reading materials (Thalheimer, 2003; Kurdi et al., 2020) . A high-quality QG system could reduce the costly human effort required to generate questions as well as free up teachers' time to focus on other instructional activities (Kurdi et al., 2020) . Furthermore, a robust QG system could also expand the variety of",
"cite_spans": [
{
"start": 195,
"end": 213,
"text": "(Thalheimer, 2003;",
"ref_id": "BIBREF30"
},
{
"start": 214,
"end": 233,
"text": "Kurdi et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 406,
"end": 426,
"text": "(Kurdi et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Injuries can also be prevented by proper rest and recovery. If you do not get enough rest, your body will become injured and will not react well to exercise, or improve. You can also rest by doing a different activity. Extracted Cause you do not get enough rest Extracted Effect your body will become injured and will not react well to exercise, or improve Generated Cause Q why will your body become injured and not react well to exercise? Generated Effect Q what happens if you don't get enough rest? Table 1 : Example passage (taken from the TQA dataset), extracted cause/effect, and generated questions. The Extracted Cause is the intended answer for the Generated Cause Question; similarly for Effect. educational material used for formative assessment, allowing students more opportunities to cement their understanding of concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 503,
"end": 510,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "To be truly effective, automated question generation for education must have the ability to ask questions for students at different levels of development. A frequently used measure of question difficulty is Bloom's taxonomy (Bloom et al., 1956; Anderson et al., 2001) , which defines a framework of how to assess types of questions across different levels of mastery, progressing from simplest to most complex. Factual questions, which involve recalling information, fall on the lowest level (Recall) (Beatty Jr, 1975; Anderson et al., 2001) . By contrast, cause and effect questions are categorized at level 6 -Analysis -according to the original Bloom's taxonomy (Beatty Jr, 1975) or level 2 -Understanding -in the commonly used revised model (Anderson et al., 2001) .",
"cite_spans": [
{
"start": 224,
"end": 244,
"text": "(Bloom et al., 1956;",
"ref_id": "BIBREF4"
},
{
"start": 245,
"end": 267,
"text": "Anderson et al., 2001)",
"ref_id": "BIBREF1"
},
{
"start": 501,
"end": 518,
"text": "(Beatty Jr, 1975;",
"ref_id": "BIBREF3"
},
{
"start": 519,
"end": 541,
"text": "Anderson et al., 2001)",
"ref_id": "BIBREF1"
},
{
"start": 665,
"end": 682,
"text": "(Beatty Jr, 1975)",
"ref_id": "BIBREF3"
},
{
"start": 745,
"end": 768,
"text": "(Anderson et al., 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "The rise of large question answering datasets, such as SQuAD (Rajpurkar et al., 2016) , NewsQA , and HotPotQA (Yang et al., 2018) , have created the ability to train wellperforming neural QG systems. However, due to the nature of their training data, current question generation systems mainly generate factual questions characteristic of Bloom's level 1.",
"cite_spans": [
{
"start": 61,
"end": 85,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 110,
"end": 129,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "Questions which test knowledge of a causal relation can assess a deeper level of mastery beyond surface-level factual questions. Altenberg (1984) states that \"[a] causal relation can be said to exist between two events or states of affairs if one is understood as the cause of or reason for the other.\" We aim to generate cause-and-effect questions, which test knowledge of the relationship between these two events or states. An example of our generated questions along with corresponding input, cause, and effect can be seen in Table 1. To address this task, we propose a novel pipeline to generate and evaluate cause-and-effect questions directly from text. We improve upon a pre-existing causal extraction system (Cao et al., 2016) , which uses a series of syntactic rules to extract causes and effects from unstructured text. We utilize the resulting cause and effect as intended answers for a neural question generation system. For each cause and effect, we generate one question, to test each direction of the causal relationship.",
"cite_spans": [
{
"start": 129,
"end": 145,
"text": "Altenberg (1984)",
"ref_id": "BIBREF0"
},
{
"start": 717,
"end": 735,
"text": "(Cao et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 530,
"end": 538,
"text": "Table 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "Our work sets the stage for scalable generation of cause-and-effect questions because it automatically generates causal questions and answers from freeform text. In this paper, we evaluate our approach on two English datasets: SQuAD Wikipedia articles (Rajpurkar et al., 2018) and middle school science textbooks in the Textbook Question Answering (TQA) dataset (Kembhavi et al., 2017) .",
"cite_spans": [
{
"start": 252,
"end": 276,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 362,
"end": 385,
"text": "(Kembhavi et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "Our research contributions include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "\u2022 A novel cause-and-effect question generation pipeline, including an improved causal extraction system based on a linguistic typology (Cao et al., 2016; Altenberg, 1984) ,",
"cite_spans": [
{
"start": 135,
"end": 153,
"text": "(Cao et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 154,
"end": 170,
"text": "Altenberg, 1984)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "\u2022 An evaluation framework, accompanied by preliminary experimental results showing that fine-tuning on a small, auxiliary dataset of cause-and-effect questions substantially improves both question generation and question answering models,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "\u2022 A novel collection of 8,808 cause-and-effect questions, with open source code to apply the pipeline to other text collections, allowing for future work to examine the educational impact of automatically-generated cause-and-effect questions. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Passage",
"sec_num": null
},
{
"text": "In this section, we discuss past work in causal extraction, question answering, question generation, and applications of generated questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Causal extraction systems aim to identify whether a causal relation is expressed in text and to identify the cause and effect if so. However, the use of neural techniques in causal extraction is sparse and thus most of the work is still tied to a focus on extracting relations based on specific linguistic features (Asghar, 2016) . We utilize Cao et al. (2016) , which is aimed at extracting causal relations from academic papers via a series of structured syntactic patterns tied to a linguistic typology (Altenberg, 1984) . Causal relation extraction has been applied to inform question answering models (Girju, 2003; Breja and Jain, 2020) . Some work has used neural networks to generate explanations from opendomain \"why\" questions without using external knowledge sources (Nie et al., 2019) .",
"cite_spans": [
{
"start": 315,
"end": 329,
"text": "(Asghar, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 343,
"end": 360,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 506,
"end": 523,
"text": "(Altenberg, 1984)",
"ref_id": "BIBREF0"
},
{
"start": 606,
"end": 619,
"text": "(Girju, 2003;",
"ref_id": "BIBREF14"
},
{
"start": 620,
"end": 641,
"text": "Breja and Jain, 2020)",
"ref_id": "BIBREF5"
},
{
"start": 777,
"end": 795,
"text": "(Nie et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Causal Extraction",
"sec_num": "2.1"
},
{
"text": "Neural question answering (QA) is a widelyexplored area with many large datasets of crowdworker-created questions. SQuAD (Rajpurkar et al., 2016) and NewsQA each include over 100,000 crowdworkercreated questions from Wikipedia and CNN articles, respectively. NarrativeQA includes questions aimed at larger narrative events, which require reading an entire novel or movie script in order to answer (Ko\u010disk\u00fd et al., 2018) . However, cause-andeffect questions are infrequent (\"why\" questions compose 9.78% of the dataset), and questions are paired with long documents.",
"cite_spans": [
{
"start": 121,
"end": 145,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 397,
"end": 419,
"text": "(Ko\u010disk\u00fd et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "HotPotQA includes over 100,000 questions spanning multiple passages, where questions require combining multiple facts in order to correctly answer the question (Yang et al., 2018) . However, HotPotQA is still primarily a factual recall task. For instance, our inspection of the dataset finds that 30% of the questions expect a Person entity as an answer. The three most common question types are \"what,\" \"which,\" and \"who,\" which suggest a factual, information lookup style answer. While HotPotQA questions are potentially more difficult for machines to answer than questions from other QA datasets, the resulting questions are overwhelmingly factual and include entities as the intended answer.",
"cite_spans": [
{
"start": 160,
"end": 179,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "Question answering approaches trained on these datasets include gated attention-based methods (Wang et al., 2017b) as well as transformer-based methods (Yang et al., 2019; Lan et al., 2020) . Models which augment QA data with automaticallygenerated questions have also been explored (Duan et al., 2017) .",
"cite_spans": [
{
"start": 94,
"end": 114,
"text": "(Wang et al., 2017b)",
"ref_id": "BIBREF33"
},
{
"start": 152,
"end": 171,
"text": "(Yang et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 172,
"end": 189,
"text": "Lan et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 283,
"end": 302,
"text": "(Duan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "Past work has utilized purely syntactic cues to generate questions (Heilman and Smith, 2010) . However, these systems rely on rules which may be brittle. More recent work has combined a syntactic question generator with backtranslation, to improve robustness and reduce grammatical errors of generated questions (Dhole and Manning, 2020) . While Syn-QG is able to reliably generate types of causal questions using two specific patterns, the system is limited in diversity of question wording by syntactic rules.",
"cite_spans": [
{
"start": 67,
"end": 92,
"text": "(Heilman and Smith, 2010)",
"ref_id": "BIBREF15"
},
{
"start": 312,
"end": 337,
"text": "(Dhole and Manning, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation",
"sec_num": "2.3"
},
{
"text": "Question answering datasets have been used to train neural models to generate questions directly from input text. Question generation approaches include specialized attention mechanisms (Zhao et al., 2018) as well as generating via large transformer models (Qi et al., 2020; Chan and Fan, 2019) . Jointly training QA and QG models have also been explored (Wang et al., 2017a) . Past work has also trained neural QG systems on questions generated via a rule-based system (De Kuthy et al., 2020).",
"cite_spans": [
{
"start": 186,
"end": 205,
"text": "(Zhao et al., 2018)",
"ref_id": "BIBREF39"
},
{
"start": 257,
"end": 274,
"text": "(Qi et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 275,
"end": 294,
"text": "Chan and Fan, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 355,
"end": 375,
"text": "(Wang et al., 2017a)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation",
"sec_num": "2.3"
},
{
"text": "Past work has explored educational applications of question generation (Kurdi et al., 2020) . QG-Net utilizes a pointer-generator model trained on SQuAD to automatically generate questions from textbooks (Wang et al., 2018) . Additional work has aimed at generating educational questions from a structured ontology (Stasaski and Hearst, 2017) . QuizBot exposes students to questions via a dialogue interface, where pre-set questions are chosen in an intelligent order (Ruan et al., 2019) .",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Kurdi et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 204,
"end": 223,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 315,
"end": 342,
"text": "(Stasaski and Hearst, 2017)",
"ref_id": "BIBREF29"
},
{
"start": 468,
"end": 487,
"text": "(Ruan et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Applications",
"sec_num": "2.4"
},
{
"text": "3 Cause-and-Effect Question Generation Pipeline",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Applications",
"sec_num": "2.4"
},
{
"text": "We propose a novel pipeline which combines a causal extractor with a neural question generation system to produce cause-and-effect questions. The entire pipeline can be seen in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question Generation Applications",
"sec_num": "2.4"
},
{
"text": "Our pipeline can take in freeform text and automatically extract a cause and effect to generate questions. We feed passages from the SQuAD 2.0 development set (Rajpurkar et al., 2018) and the Textbook Question Answering (Kembhavi et al., 2017) datasets into our causal extraction pipeline for evaluation experiments. SQuAD 2.0 consists of over 100,000 questions from over 500 Wikipedia articles. This dataset is standard to train both QA and QG systems; we do not use the question portion of this dataset because it consists primarily of straightforward factual questions. A heuristic to determine the proportion of cause-and-effect questions in SQuAD is the number of questions in the dataset which begin with \"why.\" While this does not capture all possible ways of expressing causality, this can serve as a signal for the prevalence of cause-and-effect questions in each dataset. In SQuAD, only 1.3% of questions begin with \"why,\" indicating cause-and-effect questions are not a significant component of this dataset. A more extensive analysis which examines hand-labeled question n-grams finds similar results (Appendix A).",
"cite_spans": [
{
"start": 159,
"end": 183,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 220,
"end": 243,
"text": "(Kembhavi et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "TQA consists of 26,260 questions from Life Science, Earth Science and Physical Science textbooks. We choose this dataset because it contains educational textbook text, which often express causal relationships; however, we do not use the TQA questions since many are tied to an entire lesson in the textbook and include visual diagrams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We use and improve a pre-existing causal extractor to identify cause and effect pairs in unstructured input text, see Causal Extractor in Figure 1 (Cao et al., 2016 ). 2 The system achieves approximately 0.85 recall of hand-labeled cause-and-effect relationships over 3 academic articles, as reported in Cao et al. (2016) . The extractor relies on a series of hand-crafted patterns based on syntax cues and Figure 1 : Causal extraction and question generation pipeline. Input passage is passed to the causal extractor, which identifies a cause and effect. The cause and effect are each passed into the Question Generator as intended answers, to generate a resulting question. Questions are evaluated using automatic and human metrics. Example passages and questions can be seen in in Table 1. part-of-speech tags. An example pattern, where & indicates a class of tokens defined by Cao et al. (2016) , &C is the cause, &R is the effect, and optional arguments are shown in parentheses is: &C (,/;/./-) (&AND) as a (&ADJ) result (,) &R A match to this pattern from the TQA dataset (cause bolded, effect italicized, and other matched terms underlined) is:",
"cite_spans": [
{
"start": 147,
"end": 164,
"text": "(Cao et al., 2016",
"ref_id": "BIBREF6"
},
{
"start": 168,
"end": 169,
"text": "2",
"ref_id": null
},
{
"start": 304,
"end": 321,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 881,
"end": 898,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 1",
"ref_id": null
},
{
"start": 407,
"end": 415,
"text": "Figure 1",
"ref_id": null
},
{
"start": 784,
"end": 792,
"text": "Table 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Causal Extraction",
"sec_num": "3.2"
},
{
"text": "Unsaturated fatty acids have at least one double bond between carbon atoms. As a result, some carbon atoms are not bonded to as many hydrogen atoms as possible. They are unsaturated with hydrogens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Causal Extraction",
"sec_num": "3.2"
},
{
"text": "We pass 2 or 3 sentence passages into the causal extractor, since a cause and effect may span across adjacent sentences. We always try to select a 3sentence passage, but when a paragraph boundary would be crossed, we reduce to 2-sentence passage. We use a sliding 3-sentence window to examine all possible sentence combinations. Multiple causal relationships may be found in a single passage; the sliding window captures the multiple relationships across different passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Causal Extraction",
"sec_num": "3.2"
},
{
"text": "After an examination of 160 extracted causal relationships from SQuAD and TQA, we implement targeted modifications to improve the quality of extracted causal relationships. We omit causal relationships which include direct reference to a figure in a textbook or are part of a question. We additionally find that ambiguous causal linking phrases (\"as,\" \"so,\" and \"since\") are prevalent but have lower accuracy than more direct phrases (\"because\"). Thus, we filter out \"as,\" \"so,\" and \"since\" patterns where \"as,\" \"so,\" or \"since\" is not labeled as a conjunction or subordinate conjunction by the part-of-speech tagger. Further details are described in Appendix B. This modification reduces the total number of extracted relations from 3,976 to 3,359 for TQA and from 1,105 to 1,045 for SQuAD. Evaluation of these changes appears in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjustments to Cao et al.'s Extractor",
"sec_num": "3.2.1"
},
{
"text": "One motivation for choosing the Cao et al. system is that each pattern is tied to a typology of causal links (Altenberg, 1984) . The typology contains 4 main categories, based on the causal link which is utilized to express the causal relationship: Adverbial (\"so\", \"hence\", \"therefore\"), Prepositional (\"because of\", \"on account of\"), Subordination (\"because\", \"as\", \"since\"), and Clause-integrated linkage (\"that's why\", \"the result was\"). Examples of causal relations extracted using each part of the typology, using the TQA dataset, can be seen in Table 2 . By incorporating these patterns in our evaluation framework, we can examine potential gaps in model performance that might be tied to syntactic cues, allowing for the design of improved models.",
"cite_spans": [
{
"start": 109,
"end": 126,
"text": "(Altenberg, 1984)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 552,
"end": 559,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Causal Link Typology",
"sec_num": "3.2.2"
},
{
"text": "After extracting causes and effects from input text, we use each as an intended answer for a neural question generation system (see Question Generation in Figure 1 ). This results in two output questions",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cause-and-Effect Question Generation",
"sec_num": "3.3"
},
{
"text": "Different energy levels in the cloud have different numbers of orbitals. Therefore, different energy levels have different maximum numbers of electrons. Table 5 .1 lists the number of orbitals and electrons for the first four energy levels. Prepositional The function of these scales is for protection against predators. The shape of sharks teeth differ according to their diet. Species that feed on mollusks and crustaceans have dense flattened teeth for crushing, those that feed on fish have needle-like teeth for gripping, and those that feed on larger prey, such as mammals, have pointed lower teeth for gripping and triangular upper teeth with serrated edges for cutting. Subordination The spoon particles started moving faster and became warmer, causing the temperature of the spoon to rise. Because the coffee particles lost some of their kinetic energy to the spoon particles, the coffee particles started to move more slowly. This caused the temperature of the coffee to fall.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Type Example Adverbial",
"sec_num": null
},
{
"text": "The stars outer layers spread out and cool. The result is a larger star that is cooler on the surface, and red in color. Eventually a red giant burns up all of the helium in its core. (Altenberg, 1984) , from the TQA dataset, with causes bolded, effects italicized, and causal link words underlined.",
"cite_spans": [
{
"start": 184,
"end": 201,
"text": "(Altenberg, 1984)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clause-Integration",
"sec_num": null
},
{
"text": "for each causal relationship (one corresponding to cause, and one to effect). We use ProphetNet, a state-of-the-art question generation model, to generate these questions (Qi et al., 2020) . 3 The novelty of ProphetNet is its ability to generate text by predicting the next n-gram instead of just the next token, which helps to prevent overfitting on strong local correlations. ProphetNet is fine-tuned for question generation using SQuAD 1.1 (Rajpurkar et al., 2016) . Models specifications are described in Section 5, and model evaluation is described in Sections 6 and 7.",
"cite_spans": [
{
"start": 171,
"end": 188,
"text": "(Qi et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 191,
"end": 192,
"text": "3",
"ref_id": null
},
{
"start": 443,
"end": 467,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clause-Integration",
"sec_num": null
},
{
"text": "In order to ensure the extracted causes and effects are suitable to generate questions from, we evaluated the original Cao et al. (2016) and our improved causal extractor for accuracy via a crowdworking task. Workers evaluated if an extracted cause and effect were causal or not, judged from a passage with the extracted cause and effect highlighted (see Appendix C).",
"cite_spans": [
{
"start": 119,
"end": 136,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Extracted Causal Relations",
"sec_num": "4"
},
{
"text": "We utilized Amazon Mechanical Turk to recruit crowdworkers for data labeling. We required crowdworkers to have a 98% HIT approval rating, have completed at least 5,000 past HITs, and be located in the United States. We estimated the task would require no more than 10 minutes to complete; therefore, we paid $1.66, equivalent to $10 per hour. We asked crowdworkers to provide a one-sentence explanation for their classification decision for the first and last item in the HIT, which we manually checked to ensure quality. To increase confidence in the labels and to tiebreak disagreements, we acquired 5 labels for every passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Extracted Causal Relations",
"sec_num": "4"
},
{
"text": "We sampled 100 passages from TQA and 100 from SQuAD for both the original causal extraction system and our improved system. We used proportionate stratified sampling where the size of sample typology group is proportional to the size of population typology group, while also ensuring that all typology groups are represented in the sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Extracted Causal Relations",
"sec_num": "4"
},
{
"text": "After tiebreaking using majority vote, for the original Cao et al. (2016) causal extraction system we find an overall 70% of TQA and 68% of SQuAD are rated as causal. In comparison, for our improved system, we find an overall 83% of TQA and 79% of SQuAD are rated as causal. Results segmented by typology for our improved system can be seen in Table 3 (results for the original Cao et al. (2016) system can be seen in Appendix C). While the accuracies are higher for TQA than SQuAD, all accuracies range from 72 to 92%. This provides evidence that the extractor is able to reliably identify causes and effects, which we can utilize as intended answers for the downstream question generation system.",
"cite_spans": [
{
"start": 56,
"end": 73,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 378,
"end": 395,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluating Extracted Causal Relations",
"sec_num": "4"
},
{
"text": "Fleiss's kappa (Fleiss et al., 1971) however, is only slight to fair for this evaluation (0.21 for base extractor and 0.10 for our improved extractor). This could be due to a high prior probability of an extracted cause and effect being causal (62% of questions have 4 or 5 annotators in agreement) (Falotico and Quatto, 2015) . However, future work should additionally investigate alternatives to this task to achieve higher agreement. Overall, the high accuracy of our improved extractor provides evidence that we can reliably extract a cause and effect to pass into a question generation system.",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "(Fleiss et al., 1971)",
"ref_id": "BIBREF13"
},
{
"start": 299,
"end": 326,
"text": "(Falotico and Quatto, 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Extracted Causal Relations",
"sec_num": "4"
},
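The agreement analysis above can be reproduced with a few lines of code. The following is a minimal sketch of Fleiss's kappa for a fixed number of raters per item (the function name and example ratings are illustrative, not the paper's data); the second example shows how a skewed category prior can keep kappa low even when most raters agree, the effect discussed above.

```python
from collections import Counter

def fleiss_kappa(ratings, n_categories=2):
    """Fleiss's kappa for a list of per-item rating tuples.

    Each element of `ratings` gives the category chosen by each
    annotator for one item, e.g. (1, 1, 0, 1, 1) for five raters.
    """
    n_items = len(ratings)
    n_raters = len(ratings[0])

    # Per-item agreement: fraction of rater pairs that agree.
    p_items = []
    category_totals = Counter()
    for item in ratings:
        counts = Counter(item)
        category_totals.update(counts)
        agree_pairs = sum(c * (c - 1) for c in counts.values())
        p_items.append(agree_pairs / (n_raters * (n_raters - 1)))
    p_bar = sum(p_items) / n_items

    # Chance agreement from the marginal category proportions.
    total = n_items * n_raters
    p_e = sum((category_totals[c] / total) ** 2 for c in range(n_categories))
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement on every item yields kappa = 1.0.
print(fleiss_kappa([(1, 1, 1, 1, 1), (0, 0, 0, 0, 0)]))  # → 1.0

# 4-of-5 agreement on every item, but with a dominant category,
# yields kappa below zero: high raw agreement, low chance-corrected agreement.
print(fleiss_kappa([(1, 1, 1, 1, 0)] * 10))  # → -0.25
```

This mirrors the observation that 62% of questions had 4 or 5 annotators in agreement while kappa remained only slight to fair.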
{
"text": "This section describes the design of both the question generation (QG) and question answering (QA) models, which occur in the Question Generation and Evaluation Framework components of Figure 1 , respectively. We describe them together because they share an auxiliary data source. The QG models are evaluated in Sections 6 (automated) and 7 (manual).",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 194,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "QG and QA Models",
"sec_num": "5"
},
{
"text": "To generate cause-and-effect questions where the intended answers are the causes and effects from our improved extractor, we train and evaluate two question generation models based on Prophet-Net. We also make use of question answering models to assess the quality of the generated questions. We choose a transformer-based QA model from the Huggingface Transformers library (Wolf et al., 2020) which is BERT-large fine-tuned on SQuAD 2.0 with whole-word masking. 4 F1 performance for this model on the original SQuAD 2.0 test set is 0.93.",
"cite_spans": [
{
"start": 374,
"end": 393,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QG and QA Models",
"sec_num": "5"
},
{
"text": "For both QA and QG, we utilize an additional data source for fine-tuning, to examine the effect of augmenting the models with additional cause-andeffect question data. We use a syntactic question generation system combined with backtranslation, Syn-QG 5 (Dhole and Manning, 2020) , to generate questions and answers for fine-tuning. We limit Syn-QG to two patterns, Purpose (PNC and PRP) and Cause (CAU), which generate cause-and-effect questions. We input three-sentence passages which our system has identified as including a cause and effect to Syn-QG, resulting in 2,082 questions from TQA and 1,753 from SQuAD. While the wording and syntactic structure of Syn-QG questions lack diversity, we hypothesize that ProphetNet will benefit from fine-tuning on a set of cause-and-effect questions as this question type was not well-represented in the SQuAD training set.",
"cite_spans": [
{
"start": 254,
"end": 279,
"text": "(Dhole and Manning, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QG and QA Models",
"sec_num": "5"
},
{
"text": "Additionally, while Syn-QG uses its own syntactic patterns to generate cause-and-effect questions directly from text, we observe their system is not able to cover all causal relationships included in Cao et al. (2016) . From randomly-sampled passages which our improved extractor has identified as including a cause and effect, Syn-QG is able to generate cause-and-effect questions for 309 out of 500 passages for SQuAD and 233 out of 500 passages for TQA.",
"cite_spans": [
{
"start": 200,
"end": 217,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QG and QA Models",
"sec_num": "5"
},
{
"text": "We compare two models for QG and QA experiments in Sections 6 and 7: Base (with no finetuning) and Syn-QG fine-tuned to explore the effect of fine-tuning QG and QA models on a small auxiliary dataset of cause-and-effect questions. For all experiments, we split the Syn-QG auxiliary dataset into 80% training and 20% test data. The QG models' resulting questions are fed as input into the QA models (see Figure 1) . Additional model specifications are in Appendix D. Table 4 contains generated questions for each typology category for both the base and the Syn-QG fine-tuned QG models.",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 412,
"text": "Figure 1)",
"ref_id": null
},
{
"start": 466,
"end": 473,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "QG and QA Models",
"sec_num": "5"
},
{
"text": "As part of our Evaluation Framework, we develop two automated metrics to evaluate the generated cause-and-effect questions: (i) cause/effect presence, which measures whether the cause or effect is present in the question, and (ii) QA system performance, which measures whether a question answering system can answer the generated question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation of Generated Questions",
"sec_num": "6"
},
{
"text": "Because causal relationships contain both a cause and an effect, we assume that a question which assesses understanding of this relationship would include a direct mention of one of the two. For instance, a question which has the cause as the Table 4 : Randomly sampled generated question from each typology category from TQA, with corresponding base and Syn-QG questions. Passages include causes in bold (which are the intended answers for Cause Questions) and effects italicized (which are the intended answers for Effect Questions).",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cause/Effect Presence",
"sec_num": "6.1"
},
{
"text": "intended answer would contain the effect in the question text, and vice versa. Thus, we propose the Cause/Effect Presence metric: for questions where the cause is the intended answer, we measure the recall of the words in the extracted effect present in the question. Likewise, for questions where the effect is the intended answer, we measure the recall of the words in the extracted cause present in the question. Because the question could contain a subset of words expected for the cause/effect, we measure this in terms of recall. Furthermore, a question is necessarily going to have additional words, such as the opening phrase, e.g. \"why,\" which we do not want to penalize. Formally defined:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cause/Effect Presence",
"sec_num": "6.1"
},
{
"text": "Recall cause = |Q e \u2229 C| |C| Recall ef f ect = |Q c \u2229 E| |E|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cause/Effect Presence",
"sec_num": "6.1"
},
{
"text": "where C is a bag of all words in the cause extracted from the passage, E is a bag of all words in the effect extracted from the passage, Q e is a bag of all words in the effect question, and Q c is a bag of all words in the cause question. For words that appear n times in an extracted cause or effect, we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cause/Effect Presence",
"sec_num": "6.1"
},
{
"text": "give credit for each appearance in the question, up to n times. Results for Cause/Effect Presence on SQuAD and TQA can be seen in Table 5 . Results stratified by typology for the best performimg model (finetuned Syn-QG) can be seen in Appendix E.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cause/Effect Presence",
"sec_num": "6.1"
},
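The Cause/Effect Presence metric above can be sketched directly with multiset (bag-of-words) intersection, which also captures the "up to n times" credit for repeated words. The function name and example strings below are illustrative, not the paper's implementation:

```python
from collections import Counter

def presence_recall(question, relation_span):
    """Recall of the extracted cause/effect words inside a generated
    question, crediting a repeated word at most as often as it appears
    in the extracted span (the 'up to n times' rule)."""
    q = Counter(question.lower().split())
    span = Counter(relation_span.lower().split())
    overlap = sum((q & span).values())  # multiset intersection, capped counts
    return overlap / sum(span.values())

# For an effect question, recall is measured against the extracted cause:
cause = "the technology improves"
question = "why will the prices come down as the technology improves ?"
print(round(presence_recall(question, cause), 2))  # → 1.0
```

Because only recall over the span's words is measured, extra question words such as "why" are not penalized, matching the definition above.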
{
"text": "We note that after fine-tuning ProphetNet on Syn-QG data, performance increases from 0.55 to 0.72 for TQA and 0.37 to 0.56 for SQuAD. This is somewhat expected, as Syn-QG's syntax rules result in questions that are similarly-structured and include the cause or effect. We also note slightly higher scores when an effect is the intended answer. Future work should isolate whether this is due to the model's ability to generate better questions from effects or whether the extractor is better at extracting effects than causes from passages. In Section 7, we explore this further by having humans label the quality of the QG model's output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cause/Effect Presence",
"sec_num": "6.1"
},
{
"text": "Following past work on evaluating QG models , we measure whether a QA system is able to accurately answer the generated cause-and-effect questions from the passage. If the QG model produces an ill-formed or incorrect ques- Table 5 : Average cause and effect recall in questions generated by ProphetNet Base and fine-tuned on Syn-QG (out of 1.0, higher is better). C indicates questions where a cause is the intended answer, E indicates questions where the effect is the intended answer, and T indicates the total for both. Table 6 : F1 results for base and fine-tuned QA on questions generated from base and fine-tuned ProphetNet. C indicates questions where a cause is the intended answer, E indicates questions where the effect is the intended answer, and T indicates the total between both.",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "Table 5",
"ref_id": null
},
{
"start": 523,
"end": 530,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question Answering System Performance",
"sec_num": "6.2"
},
{
"text": "tion, the QA model will be less likely to produce the correct answer. The QA model must answer a cause question with an effect, and an effect question with a cause. We report the QA model's F1 performance on the set of cause-and-effect questions in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question Answering System Performance",
"sec_num": "6.2"
},
{
"text": "Fine-tuning the QA model with the Syn-QG data provides a large F1 improvement over the base model (0.21 to 0.54 for TQA, 0.18 to 0.53 for SQuAD), showing the benefit of fine-tuning on an auxiliary cause-and-effect dataset. However, the QA models do not perform as well at answering cause-and-effect questions as they do at answering factual ones; we see a 0.4-0.7 drop in F1 values from factual to causal questions. Future work can explore using our dataset to further improve QA performance on cause-and-effect questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering System Performance",
"sec_num": "6.2"
},
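The F1 scores above follow the standard token-overlap F1 used for SQuAD-style QA evaluation. The sketch below is a minimal version of that metric (the exact normalization used in the paper's evaluation may differ; the function name and example answers are illustrative):

```python
from collections import Counter

def token_f1(prediction, gold):
    """SQuAD-style token-level F1 between a predicted and a gold answer."""
    pred = prediction.lower().split()
    ref = gold.lower().split()
    common = Counter(pred) & Counter(ref)  # overlapping tokens, with counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the prices will come down", "prices come down"))  # → 0.75
```

Partial-credit overlap like this is why a QA model can score well on short factual answers yet drop sharply on longer cause/effect spans, where a missed clause removes many reference tokens at once.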
{
"text": "Results stratified by typology for the bestperforming QG and QA pair (both fine-tuned on Syn-QG) can be seen in Table 7 . The lowestperforming category for both TQA and SQuAD is Adverbial. However, Adverbial was ranked the second-highest for correct causal links by crowdworkers (in Section 4). It is unexpected that one of the more reliable extraction types resulted in the lowest QA performance. Future work can explore why QA models are not adept at handling this linguistic phrasing and how to improve model performance in this category. Table 7 : F1 scores of best QG-QA pair (both fine-tuned on Syn-QG) broken down by typology.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 7",
"ref_id": null
},
{
"start": 542,
"end": 549,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question Answering System Performance",
"sec_num": "6.2"
},
{
"text": "While automated metrics are scalable, they may not be as reliable as human evaluation. Thus, we conducted a crowdworker task to evaluate generated questions. To ensure we provide the model with high-quality causal relations, we evaluated questions generated from the extracted relations that were labeled as Causal in Section 4 (83 from TQA and 79 for SQuAD). We performed human evaluation for base ProphetNet, to establish a baseline human-rating of generated questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "Crowdworkers were presented with a passage (with the intended answer highlighted) and a generated question (see Appendix C). Workers were asked to label (1) if the question is a causal question and (2) whether the answer highlighted in the passage correctly answers the question. If a question was too ill-formed or ungrammatical to provide these classifications, crowdworkers were asked to select a checkbox and provide a text justification. The conditions for this task were the same as in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "Results can be seen in Table 8 . These high accuracy ratings (80%-90%) indicate ProphetNet is able to reliably generate questions which are correct for the intended answer and are valid cause-and-effect questions. For both tasks, questions generated from the TQA dataset were higher-ranked than those generated from SQuAD. For the task of determining whether a question was causal, there is not a consistent winner between the set of cause and effect questions. This indicates ProphetNet is consistently better at generating questions for both directions of the cause-and-effect relationship.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "For the task of determining whether an intended answer is correct, when the effect is the intended answer, the proportion that are rated as correct is higher than for cause. This is consistent with the higher performance for effect questions on the Cause/Effect Presence metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "Results stratified by typology can be seen in Table 9 . For rating answer correctness, the Preposi- Table 8 : Percentage of Base ProphetNet generated questions (n=324) classified by crowdworkers as causal (% Causal) and matching the intended answer (% Answer), by dataset and whether the intended answer was cause or effect. \"Total\" indicates the combination of cause and effect questions and \"Overall\" indicates the combination of datasets. : Percentage of crowdworking ratings indicating a cause-and-effect question (% C), a correct answer (% A), and the total number of questions evaluated segmented by typology category.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 9",
"ref_id": "TABREF9"
},
{
"start": 100,
"end": 107,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "tional category was the lowest for both datasets, indicating a potential model shortcoming. The Subordination questions from SQuAD were the least causal over all categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "In 22 cases out of 1,620, a crowdworker indicated that a generated question was too ill-formed to provide a rating. Fleiss's kappa for determining whether a question was causal was 0.07 and for determining whether the answer was correct was 0.14, both slight agreement. However, this is likely due to the high prior probability distribution (Falotico and Quatto, 2015) . For 74.1% of questions, 4 or 5 annotators all agreed on the same rating for determining whether a question was causal; likewise, 80.9% of the time 4 or 5 annotators all agreed for determining whether an answer is correct.",
"cite_spans": [
{
"start": 341,
"end": 368,
"text": "(Falotico and Quatto, 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "Overall, human ratings indicate that 80%-90% of the time, the generated question is both causal and correct for the intended answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Generated Questions",
"sec_num": "7"
},
{
"text": "Our proposed pipeline is generalizable to a wide range of corpora, allowing for the generation of cause-and-effect questions from other educational texts at-scale. Human evaluation ratings indicate our model can reliably generate questions which are both causal and correct for the intended answer. However, more work needs to be done to generate questions which sound more natural and are less reliant on the wording of the input passage. The types of grammar errors made should also be examined before assessing the educational benefits of the generated questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "Additionally, our goal in generating cause-andeffect questions was to generate questions which fall higher on Bloom's taxonomy. While our results indicate cause-and-effect questions are more challenging than straightforward factual questions for a QA model, further work should assess the difficulty of these questions in an educational setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "The complexity of causal relationships can be further explored. We observe instances of overlapping causal relationships extracted from our system, such as the Subordination example in Table 2 . This can be leveraged to generate questions which require knowledge of longer logic chains to answer, similar to past work which accomplished this (Stasaski and Hearst, 2017) .",
"cite_spans": [
{
"start": 343,
"end": 370,
"text": "(Stasaski and Hearst, 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 185,
"end": 193,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "Future work can also examine diversifying question wording. By generating multiple questions testing the same causal relationship, students can have multiple opportunities to solidify their knowledge. Our pipeline allows for straightforward experimentation with the question generation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "We propose a new task, cause-and-effect question generation, along with a novel pipeline to utilize extracted causes and effects as intended answers for a question generation system. We provide automatic and manual evaluation metrics and show that fine-tuning QG and QA models on an auxiliary dataset of cause-and-effect questions improves performance. Our publicly-released pipeline can automatically generate cause-and-effect questions for educational resources at scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We conduct a manual analysis of the 68 mostfrequent n-gram question openers from SQuAD, to investigate the prevalence of causal questions which exist. We restrict the question openers to the most frequent n-grams which cover 80% of the dataset, excluding unigrams other than \"why\" because they were uninformative. Two researchers hand-labeled each opener as one of: cause-andeffect, not cause-and-effect, and unknown, where unknown question openers could be the start of cause-and-effect questions but would require reading the entire question to determine. A third researcher tie-broke disagreements. The average Cohen's Kappa (Cohen, 1960) is 0.72 (substantialagreement). Of the 87,599 questions in SQuAD which could be labeled by the 68 most-frequent n-gram question openers, 1,194 (1.4%) were causeand-effect, 29,540 (33.7%) were not cause-andeffect, and 56,865 (64.9%) were unknown. The full list of labeled question openers is found in our Github repo.",
"cite_spans": [
{
"start": 628,
"end": 641,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Manual Analysis of SQuAD Questions",
"sec_num": null
},
{
"text": "For the causal extraction section of the pipeline, we modified the \"as\" pattern. The original pattern is formulated as: &R@Complete@ (,) (-such/-same/seem/-regard/-regards/-regarded/view/-views/-viewed/-denote/-denoted/denotes) as (-if/-follow/-follows/-&adv) &C@Complete@ Where @Complete@ indicates that the text piece is a clause which must have predicate and subject, \"-\" indicates tokens followed should not be matched, and \"()\" indicates tokens that are not required. &R and &C represent the extracted cause and effect. However, the original pattern assumes that the cause is always before \"as.\" In reality, \"as\" can be included before both the cause and the effect, such as in the following example: Some renewable resources are too expensive to be widely used. As the technology improves and more people use renewable energy, the prices will come down. The cost of renewable resources will go down relative to fossil fuels as we use fossil fuels up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Causal Extraction Details",
"sec_num": null
},
{
"text": "For this example, the causal phrase extracted by original pattern is \"Some renewable resources are too expensive to be widely used.\" The effect phrase extracted by original pattern is \"The technology improves and more people use renewable energy, the prices will come down.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Causal Extraction Details",
"sec_num": null
},
{
"text": "We implement a new pattern (pattern-id = 145): \",/;/./-As &C , &R\". For each cause-and-effect extracted in the original pattern, if the new pattern is also a match, we replace the cause and effect with the output from the new pattern. For the example sentences above, the causal phrases extracted by our new pattern is \"the technology improves and more people use renewable energy.\" The corresponding effect phrase extracted by new pattern is \"The prices will come down.\" C Crowdworking Task Interface Figure 2 contains the cause (bolded and highlighted in orange) and effect (underlined and highlighted in blue) shown to workers when evaluating the quality of an extracted cause and effect. Figure 3 contains a sample stimulus showing the intended answer and generated question. Cao et al. (2016) causal extraction system, stratified by typology. Each main typology category is further stratified by the type and sub-type of link. For example, the Adverbial link category contains two types: (A) Anaphoric and (B) Cataphoric. The Anaphoric category is further segmented into three sub-types: (1) Implicit Cohesion (e.g.,\"therefore\") (2) Pronominal Cohesion (e.g., \"for this reason\"), and (3) Pronominal + Lexical Cohesion (e.g., \"because of\" NP) (Altenberg, 1984) . We refer to the main category by name, with the subcategories denoted with codes, e.g., Adv.1.a. ",
"cite_spans": [
{
"start": 780,
"end": 797,
"text": "Cao et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 1247,
"end": 1264,
"text": "(Altenberg, 1984)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 502,
"end": 510,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 692,
"end": 700,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "B Causal Extraction Details",
"sec_num": null
},
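The new pattern \",/;/./-As &C , &R\" can be approximated with a regular expression: a clause-initial \"As\" introduces the cause, and the clause after the comma is the effect. This is an illustrative sketch only; the actual extractor uses its own pattern language and applies clause-completeness checks that are omitted here.

```python
import re

# Rough approximation of pattern 145 (",/;/./-As &C , &R"): "As" at the
# start of a sentence or after ,/;/. begins the cause clause; the text
# after the comma is the effect clause.
AS_PATTERN = re.compile(r'(?:^|[,;.]\s*)As\s+(?P<cause>[^,]+),\s*(?P<effect>[^.;]+)')

sentence = ("As the technology improves and more people use renewable "
            "energy, the prices will come down.")
m = AS_PATTERN.search(sentence)
print(m.group("cause"))   # → the technology improves and more people use renewable energy
print(m.group("effect"))  # → the prices will come down
```

This recovers the cause/effect assignment described for the example above, which the original pattern reversed.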
{
"text": "Causal Extraction: The approximate runtime for this algorithm on an Intel(R) Core(TM) i7-6850K CPU machine is 72 hours for the TQA dataset and 24 hours for SQuAD. Question Generation: ProphetNet has 391,324,672 parameters; our version is unchanged from Qi et al. (2020) . We finetune the provided question generation model checkpoint, which is a 16 GB model fine-tuned on SQuAD. The approximate runtime to fine tune this model on an auxiliary dataset on a p3.2xlarge AWS ec2 6 machine is 0.5 hours. For our fine-tuning process, we train for 3 epochs with a learning rate of 1e-6 with a batch size of 1. The rest of parameters are kept the same as what is found in the examples provided by the ProphetNet GitHub repository README. Approximate inference time is 10 minutes for TQA and 5 minutes for SQuAD. We utilize the Fairseq library (Ott et al., 2019) to facilitate the training and inference processes. Comparing the fine-tuned model's generated 6 https://aws.amazon.com/ec2/ Table 11 : Average cause/effect presence recall in the TQA and SQuAD datasets, categorized by typology. Questions are generated by ProphetNet fine-tuned on Syn-QG. #T refers to number of questions in TQA; #S the same for SQuAD. '*' indicates no relation found.",
"cite_spans": [
{
"start": 253,
"end": 269,
"text": "Qi et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 835,
"end": 853,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 979,
"end": 987,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Model specifics",
"sec_num": null
},
{
"text": "questions to the Syn-QG questions, the fine-tuned QG model achieves 57.01 training BLEU and 53.89 test BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 108,
"end": 131,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D Model specifics",
"sec_num": null
},
{
"text": "Question Answering: The QA model we utilize has 334,094,338 parameters. The approximate runtime to fine tune this model on an auxiliary dataset on a p3.2xlarge AWS ec2 machine is 0.5 hours. For our fine-tuning process, we train for 10 epochs with an initial learning rate of 1e-5, a batch size of 4, 500 warm-up steps, and a weight decay of 0.01. The rest of the parameters are the defaults set by the HuggingFace TrainingArguments class. We also truncate each example to a max of 512 tokens. Approximate inference time is 1 minutes for TQA and 1 minute for SQuAD. On the Syn-QG dataset, the fine-tuned QA model achieves 0.97 training F1 and 0.95 test F1. Table 11 shows the results for the automatic cause/effect present metric segmented by typology categories. For SQuAD, the lowest-performing category is Subordination, which corresponds to the category with the lowest proportion of extracted relationships labeled as causal by crowdworkers (Section 4).",
"cite_spans": [],
"ref_spans": [
{
"start": 656,
"end": 664,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Model specifics",
"sec_num": null
},
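The QA fine-tuning hyperparameters above map directly onto HuggingFace's TrainingArguments. The fragment below is a sketch of that configuration under the stated settings (the `output_dir` value is hypothetical; remaining arguments are library defaults, as described above):

```python
from transformers import TrainingArguments

# Hyperparameters as reported for QA fine-tuning; all other
# arguments are left at the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="qa_finetune",          # hypothetical output path
    num_train_epochs=10,
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    warmup_steps=500,
    weight_decay=0.01,
)
```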
{
"text": "https://github.com/kstats/CausalQG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Angela7126/CE_ extractor--Patterns_Based",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/deepset/bertlarge-uncased-whole-word-masking-squad25 https://bitbucket.org/kaustubhdhole/ syn-qg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by an AWS Machine Learning Research Award, an NVIDIA Corporation GPU grant, an AI2 Key Scientific Challenge Proposal grant, and a National Science Foundation (NSF) Graduate Research Fellowship (DGE 1752814). We thank the three anonymous reviewers as well as Nate Weinman, Philippe Laban, Dongyeop Kang, and the Hearst Lab Research Group for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Causal linking in spoken and written english",
"authors": [
{
"first": "Bengt",
"middle": [],
"last": "Altenberg",
"suffix": ""
}
],
"year": 1984,
"venue": "Studia Linguistica",
"volume": "38",
"issue": "1",
"pages": "20--69",
"other_ids": {
"DOI": [
"10.1111/j.1467-9582.1984.tb00734.x"
]
},
"num": null,
"urls": [],
"raw_text": "Bengt Altenberg. 1984. Causal linking in spoken and written english. Studia Linguistica, 38(1):20-69.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lorin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"Samuel"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bloom",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorin W Anderson, Benjamin Samuel Bloom, et al. 2001. A taxonomy for learning, teaching, and as- sessing: A revision of Bloom's taxonomy of educa- tional objectives. Longman,.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic extraction of causal relations from natural language texts: a comprehensive survey",
"authors": [
{
"first": "Nabiha",
"middle": [],
"last": "Asghar",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07895"
]
},
"num": null,
"urls": [],
"raw_text": "Nabiha Asghar. 2016. Automatic extraction of causal relations from natural language texts: a comprehen- sive survey. arXiv preprint arXiv:1605.07895.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading comprehension skills and bloom's taxonomy. Literacy Research and Instruction",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Beatty",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "15",
"issue": "",
"pages": "101--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Beatty Jr. 1975. Reading comprehension skills and bloom's taxonomy. Literacy Research and In- struction, 15(2):101-108.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain",
"authors": [
{
"first": "Benjamin",
"middle": [
"S"
],
"last": "Bloom",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Krathwohl",
"suffix": ""
},
{
"first": "Bertram",
"middle": [
"B"
],
"last": "Masia",
"suffix": ""
}
],
"year": 1956,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin S. Bloom, David Krathwohl, and Bertram B. Masia. 1956. Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. Green.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Causality for question answering",
"authors": [
{
"first": "Manvi",
"middle": [],
"last": "Breja",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sanjay Kumar Jain",
"suffix": ""
}
],
"year": 2020,
"venue": "COLINS",
"volume": "",
"issue": "",
"pages": "884--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manvi Breja and Sanjay Kumar Jain. 2020. Causality for question answering. In COLINS, pages 884-893.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The role of cause-effect link within scientific paper",
"authors": [
{
"first": "Mengyun",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaoping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhuge",
"suffix": ""
}
],
"year": 2016,
"venue": "12th International Conference on Semantics, Knowledge and Grids (SKG)",
"volume": "",
"issue": "",
"pages": "32--39",
"other_ids": {
"DOI": [
"10.1109/SKG.2016.013"
]
},
"num": null,
"urls": [],
"raw_text": "Mengyun Cao, Xiaoping Sun, and Hai Zhuge. 2016. The role of cause-effect link within scientific paper. 12th International Conference on Semantics, Knowl- edge and Grids (SKG), pages 32-39.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A recurrent BERT-based model for question generation",
"authors": [
{
"first": "Ying-Hong",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Yao-Chung",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
"volume": "",
"issue": "",
"pages": "154--162",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5821"
]
},
"num": null,
"urls": [],
"raw_text": "Ying-Hong Chan and Yao-Chung Fan. 2019. A recur- rent BERT-based model for question generation. In Proceedings of the 2nd Workshop on Machine Read- ing for Question Answering, pages 154-162, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A coefficient of agreement for nominal scales. Educational and psychological measurement",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological mea- surement, 20(1):37-46.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards automatically generating questions under discussion to link information and discourse structure",
"authors": [
{
"first": "Kordula",
"middle": [],
"last": "De Kuthy",
"suffix": ""
},
{
"first": "Madeeswaran",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Haemanth",
"middle": [],
"last": "Santhi Ponnusamy",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5786--5798",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.509"
]
},
"num": null,
"urls": [],
"raw_text": "Kordula De Kuthy, Madeeswaran Kannan, Haemanth Santhi Ponnusamy, and Detmar Meurers. 2020. To- wards automatically generating questions under dis- cussion to link information and discourse structure. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5786-5798, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syn-QG: Syntactic and shallow semantic rules for question generation",
"authors": [
{
"first": "Kaustubh",
"middle": [],
"last": "Dhole",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "752--765",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.69"
]
},
"num": null,
"urls": [],
"raw_text": "Kaustubh Dhole and Christopher D. Manning. 2020. Syn-QG: Syntactic and shallow semantic rules for question generation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 752-765, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Question generation for question answering",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "866--874",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1090"
]
},
"num": null,
"urls": [],
"raw_text": "Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 866-874, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fleiss' kappa statistic without paradoxes",
"authors": [
{
"first": "Rosa",
"middle": [],
"last": "Falotico",
"suffix": ""
},
{
"first": "Piero",
"middle": [],
"last": "Quatto",
"suffix": ""
}
],
"year": 2015,
"venue": "Quality & Quantity",
"volume": "49",
"issue": "2",
"pages": "463--470",
"other_ids": {
"DOI": [
"10.1007/s11135-014-0003-1"
]
},
"num": null,
"urls": [],
"raw_text": "Rosa Falotico and Piero Quatto. 2015. Fleiss' kappa statistic without paradoxes. Quality & Quantity, 49(2):463-470.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.L. Fleiss et al. 1971. Measuring nominal scale agree- ment among many raters. Psychological Bulletin, 76(5):378-382.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic detection of causal relations for question answering",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {
"DOI": [
"10.3115/1119312.1119322"
]
},
"num": null,
"urls": [],
"raw_text": "Roxana Girju. 2003. Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 Workshop on Multilingual Summa- rization and Question Answering, pages 76-83, Sap- poro, Japan. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Good question! statistical ranking for question generation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question genera- tion. In Human Language Technologies: The 2010",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "609--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 609-617, Los Angeles, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Jonghyun",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "5376--5384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniruddha Kembhavi, Minjoon Seo, D. Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Ha- jishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal ma- chine comprehension. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376-5384.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The NarrativeQA reading comprehension challenge",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u00fd, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The NarrativeQA read- ing comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317- 328.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Systematic Review of Automatic Question Generation for Educational Purposes",
"authors": [
{
"first": "Ghader",
"middle": [],
"last": "Kurdi",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Leo",
"suffix": ""
},
{
"first": "Bijan",
"middle": [],
"last": "Parsia",
"suffix": ""
},
{
"first": "Uli",
"middle": [],
"last": "Sattler",
"suffix": ""
},
{
"first": "Salam",
"middle": [],
"last": "Al-Emari",
"suffix": ""
}
],
"year": 2020,
"venue": "International Journal of Artificial Intelligence in Education",
"volume": "30",
"issue": "1",
"pages": "121--204",
"other_ids": {
"DOI": [
"10.1007/s40593-019-00186-y"
]
},
"num": null,
"urls": [],
"raw_text": "Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. 2020. A Systematic Review of Automatic Question Generation for Educational Purposes. International Journal of Artificial Intelli- gence in Education, 30(1):121-204.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ALBERT: A lite BERT for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning to explain: Answering why-questions via rephrasing",
"authors": [
{
"first": "Allen",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on NLP for Conversational AI",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4113"
]
},
"num": null,
"urls": [],
"raw_text": "Allen Nie, Erin Bennett, and Noah Goodman. 2019. Learning to explain: Answering why-questions via rephrasing. In Proceedings of the First Workshop on NLP for Conversational AI, pages 113-120, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training",
"authors": [
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Dayiheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Jiusheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "2401--2410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2401-2410, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2124"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784- 789, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Quizbot: A dialogue-based adaptive learning system for factual knowledge",
"authors": [
{
"first": "",
"middle": [],
"last": "Murnane",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Brunskill",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Landay",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.1145/3290605.3300587"
]
},
"num": null,
"urls": [],
"raw_text": "Murnane, Emma Brunskill, and James A. Landay. 2019. Quizbot: A dialogue-based adaptive learning system for factual knowledge. In Proceedings of the 2019 CHI Conference on Human Factors in Com- puting Systems, CHI '19, page 1-13, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multiple choice question generation utilizing an ontology",
"authors": [
{
"first": "Katherine",
"middle": [],
"last": "Stasaski",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "303--312",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5034"
]
},
"num": null,
"urls": [],
"raw_text": "Katherine Stasaski and Marti A. Hearst. 2017. Multi- ple choice question generation utilizing an ontology. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 303-312, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The learning benefits of questions",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Thalheimer",
"suffix": ""
}
],
"year": 2003,
"venue": "Work Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Thalheimer. 2003. The learning benefits of ques- tions. Work Learning Research.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2623"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. NewsQA: A machine compre- hension dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A joint model for question answering and question generation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [
"(Eric)"
],
"last": "Yuan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
}
],
"year": 2017,
"venue": "Learning to generate natural language workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Wang, Xingdi (Eric) Yuan, and Adam Trischler. 2017a. A joint model for question answering and question generation. In Learning to generate natu- ral language workshop, ICML 2017.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Gated self-matching networks for reading comprehension and question answering",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "189--198",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017b. Gated self-matching net- works for reading comprehension and question an- swering. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189-198, Vancou- ver, Canada. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "QG-Net: A data-driven question generation model for educational content",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"S"
],
"last": "Lan",
"suffix": ""
},
{
"first": "Weili",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"E"
],
"last": "Waters",
"suffix": ""
},
{
"first": "Phillip",
"middle": [
"J"
],
"last": "Grimaldi",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"G"
],
"last": "Baraniuk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Annual ACM Conference on Learning at Scale, L@S '18",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3231644.3231654"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Wang, Andrew S. Lan, Weili Nie, Andrew E. Waters, Phillip J. Grimaldi, and Richard G. Bara- niuk. 2018. Qg-net: A data-driven question gener- ation model for educational content. In Proceedings of the Fifth Annual ACM Conference on Learning at Scale, L@S '18, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Machine comprehension by text-to-text neural question generation",
"authors": [
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "15--25",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2603"
]
},
"num": null,
"urls": [],
"raw_text": "Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessan- dro Sordoni, Philip Bachman, Saizheng Zhang, Sandeep Subramanian, and Adam Trischler. 2017. Machine comprehension by text-to-text neural ques- tion generation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 15-25, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Paragraph-level neural question generation with maxout pointer and gated self-attention networks",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaochuan",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Yuanyuan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Qifa",
"middle": [],
"last": "Ke",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3901--3910",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1424"
]
},
"num": null,
"urls": [],
"raw_text": "Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question gener- ation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3901-3910, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Example crowdworking presentation of passage, cause, and effect, from TQA dataset.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Example crowdworking presentation of passage, intended answer, and generated question, from TQA dataset.",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Examples of causal relationships from different typology categories",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Percent of extractions labeled as causal by crowdworkers, for samples of 100 each from TQA and SQuAD datasets, by linguistic category, for our improved system.",
"num": null
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"num": null
},
"TABREF10": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "contains the crowdworker ratings for the",
"num": null
},
"TABREF12": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Number of extracted relations that were labeled as causal by crowdworkers for originalCao et al. (2016) system, organized by linguistic category. #T is total number in TQA and #S is total number in SQuAD.",
"num": null
}
}
}
}