{
"paper_id": "J19-4002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:58:15.704352Z"
},
"title": "Discourse in Multimedia: A Case Study in Extracting Geometry Knowledge from Textbooks",
"authors": [
{
"first": "Mrinmaya",
"middle": [],
"last": "Sachan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "mrinmays@cs.cmu.edu"
},
{
"first": "Avinava",
"middle": [],
"last": "Dubey",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "J19-4002",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "To ensure readability, text is often written and presented with due formatting. These text formatting devices help the writer to effectively convey the narrative. At the same time, these help the readers pick up the structure of the discourse and comprehend the conveyed information. There have been a number of linguistic theories on discourse structure of text. However, these theories only consider unformatted text. Multimedia text contains rich formatting features that can be leveraged for various NLP tasks. In this article, we study some of these discourse features in multimedia text and what communicative function they fulfill in the context. As a case study, we use these features to harvest structured subject knowledge of geometry from textbooks. We conclude that the discourse and text layout features provide information that is complementary to lexical semantic information. Finally, we show that the harvested structured knowledge can be used to improve an existing solver for geometry problems, making it more accurate as well as more explainable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The study of discourse focuses on the properties of text as a whole and how meaning is conveyed by making connections between component sentences. Writers often use certain linguistic devices to make a discourse structure that enables them to effectively communicate their narrative. The readers, too, comprehend text by picking up these linguistic devices and recognizing the discourse structure. There are a number of linguistic theories on discourse relations (Van Dijk 1972; Longacre 1983; Grosz and Sidner 1986; Cohen 1987; Mann and Thompson 1988; Polanyi 1988; Moser and Moore 1996) that specify relations between discourse units and how to represent the discourse structure of a piece of text (i.e., discourse parsing; Duverle and Prendinger 2009; Subba and Di Eugenio 2009; Feng and Hirst 2012; Gosh, Riccardi, and Johansson 2012; Feng and Hirst 2014; Ji and Eisenstein 2014; Li et al. 2014; Li, Ng, and Kan 2014; Wang and Lan 2015) . These discourse features have been shown to be useful in a number of NLP applications such as summarization (Dijk 1979; Marcu 2000; Boguraev and Neff 2000; Louis, Joshi, and Nenkova 2010; Gerani et al. 2014) , information retrieval (Wang et al. 2006; Lioma, Larsen, and Lu 2012) , information extraction (Kitani, Eriguchi, and Hara 1994; Conrath et al. 2014) , and question answering (Chai and Jin 2004; Sun and Chai 2007; Narasimhan and Barzilay 2015; Sachan et al. 2015) .",
"cite_spans": [
{
"start": 463,
"end": 478,
"text": "(Van Dijk 1972;",
"ref_id": "BIBREF108"
},
{
"start": 479,
"end": 493,
"text": "Longacre 1983;",
"ref_id": "BIBREF74"
},
{
"start": 494,
"end": 516,
"text": "Grosz and Sidner 1986;",
"ref_id": "BIBREF48"
},
{
"start": 517,
"end": 528,
"text": "Cohen 1987;",
"ref_id": "BIBREF25"
},
{
"start": 529,
"end": 552,
"text": "Mann and Thompson 1988;",
"ref_id": "BIBREF78"
},
{
"start": 553,
"end": 566,
"text": "Polanyi 1988;",
"ref_id": "BIBREF92"
},
{
"start": 567,
"end": 588,
"text": "Moser and Moore 1996)",
"ref_id": "BIBREF85"
},
{
"start": 726,
"end": 754,
"text": "Duverle and Prendinger 2009;",
"ref_id": "BIBREF33"
},
{
"start": 755,
"end": 781,
"text": "Subba and Di Eugenio 2009;",
"ref_id": "BIBREF105"
},
{
"start": 782,
"end": 802,
"text": "Feng and Hirst 2012;",
"ref_id": "BIBREF41"
},
{
"start": 803,
"end": 838,
"text": "Gosh, Riccardi, and Johansson 2012;",
"ref_id": null
},
{
"start": 839,
"end": 859,
"text": "Feng and Hirst 2014;",
"ref_id": "BIBREF42"
},
{
"start": 860,
"end": 883,
"text": "Ji and Eisenstein 2014;",
"ref_id": "BIBREF54"
},
{
"start": 884,
"end": 899,
"text": "Li et al. 2014;",
"ref_id": "BIBREF64"
},
{
"start": 900,
"end": 921,
"text": "Li, Ng, and Kan 2014;",
"ref_id": null
},
{
"start": 922,
"end": 940,
"text": "Wang and Lan 2015)",
"ref_id": "BIBREF111"
},
{
"start": 1051,
"end": 1062,
"text": "(Dijk 1979;",
"ref_id": null
},
{
"start": 1063,
"end": 1074,
"text": "Marcu 2000;",
"ref_id": "BIBREF80"
},
{
"start": 1075,
"end": 1098,
"text": "Boguraev and Neff 2000;",
"ref_id": "BIBREF13"
},
{
"start": 1099,
"end": 1130,
"text": "Louis, Joshi, and Nenkova 2010;",
"ref_id": "BIBREF75"
},
{
"start": 1131,
"end": 1150,
"text": "Gerani et al. 2014)",
"ref_id": "BIBREF46"
},
{
"start": 1175,
"end": 1193,
"text": "(Wang et al. 2006;",
"ref_id": "BIBREF110"
},
{
"start": 1194,
"end": 1221,
"text": "Lioma, Larsen, and Lu 2012)",
"ref_id": "BIBREF71"
},
{
"start": 1247,
"end": 1280,
"text": "(Kitani, Eriguchi, and Hara 1994;",
"ref_id": "BIBREF59"
},
{
"start": 1281,
"end": 1301,
"text": "Conrath et al. 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1327,
"end": 1346,
"text": "(Chai and Jin 2004;",
"ref_id": "BIBREF17"
},
{
"start": 1347,
"end": 1365,
"text": "Sun and Chai 2007;",
"ref_id": "BIBREF106"
},
{
"start": 1366,
"end": 1395,
"text": "Narasimhan and Barzilay 2015;",
"ref_id": "BIBREF86"
},
{
"start": 1396,
"end": 1415,
"text": "Sachan et al. 2015)",
"ref_id": "BIBREF95"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Most linguistic theories of discourse consider written text without much formatting. However, in this multimedia age, text is often richly formatted. Be it newsprint, textbooks, brochures, or even scientific articles, text is usually appropriately formatted and stylized. For example, the text may have a heading. It may be divided into a number of sections with section subtitles. Parts of the text may be italicized or boldfaced to place appropriate emphasis wherever required. The text may contain itemized lists, footnotes, indentations, or quotations. It may refer to associated tables and figures. The tables and figures, too, usually have associated captions. All these text layout features ensure that the text is easy to read and understand. Even articles accepted for Computational Linguistics follow a due formatting scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "These text layout features are in addition to other linguistic devices such as syntactic arrangement or rhetorical forms. Relations between textual units that are not necessarily contiguous can thus be expressed thanks to typographical or dispositional markers. Such relations, which are out of reach of standard NLP tools, have only been studied within some specific layout contexts (Hovy 1998; Pascual 1996; Bateman et al. 2001a ,",
"cite_spans": [
{
"start": 384,
"end": 395,
"text": "(Hovy 1998;",
"ref_id": "BIBREF51"
},
{
"start": 396,
"end": 409,
"text": "Pascual 1996;",
"ref_id": "BIBREF88"
},
{
"start": 410,
"end": 430,
"text": "Bateman et al. 2001a",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "An excerpt of a textbook from our data set that introduces the Pythagorean theorem. The textbook has many typographical features that can be used to harvest this theorem: The textbook explicitly labels it as a \"theorem\"; there is a colored bounding box around it; an equation sets down the rule and there is a supporting figure. Our model leverages such rich contextual and typographical information (when available) to accurately harvest axioms and then parses them to horn-clause rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "inter alia) 1 and there are not many comprehensive studies on the various kinds of discourse features and how they can be leveraged to improve NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "In this article, we study some of these discourse features in multimedia text and what communicative function they fulfill in the context. As a case study, we study the problem of harvesting structured subject knowledge of geometry from textbooks and show that the formatting devices can indeed be used to improve a strong information extraction system in that domain. We show that the discourse and text layout features provide information that is complementary to lexical semantic information commonly used for information extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "With the intent of making the subject material easy to grasp and remember for students, textbooks often contain rich discourse and formatting features. Crucial material such as axioms or theorems are presented with stylistic highlighting or bounding boxes. Often, mathematical information such as equations are presented in a separate color and font size. Often, theorems are numbered or named (e.g., Theorem 8.4). For example, Figure 1 shows a snapshot of a math textbook that describes the Pythagorean theorem. The textbook explicitly labels it as a \"theorem\"; there is a colored bounding box around it; an equation sets down the rule and there is a supporting figure. In this article, we will try to answer the question: Can this rich contextual and typographical information (whenever available) be used to harvest these axioms in the form of structured rules? Our goal is to not only extract the axiom mentioned in Figure 1 but also map it to a rule corresponding to the Pythagorean theorem: isTriangle(ABC) \u2227 perpendicular(AC, BC) =\u21d2 BC 2 + AC 2 = AB 2",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 436,
"text": "Figure 1",
"ref_id": null
},
{
"start": 920,
"end": 928,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "We present an automatic approach that can (a) harvest such subject knowledge from textbooks, and (b) parse the extracted knowledge to structured rules. We propose novel models that perform sequence labeling and alignment to extract redundant axiom mentions across various textbooks, and then parse the redundant axioms to structured rules. These redundant structured rules are then resolved to achieve the best correct structured rule for each axiom. We conduct a comprehensive feature analysis of the usefulness of various discourse features: shallow discourse features based on discourse markers, a deep one based on Rhetorical Structure Theory (Mann and Thompson 1988) , and various text layout features in a multimedia document (Hovy 1998) for the various stages of information extraction. Our experiments show the usefulness of all the various typographical features over and above the various lexical semantic and discourse level features considered for the task. We use our model to extract and parse axiomatic knowledge from a novel data set of 20 publicly available math textbooks. We use this structured axiomatic knowledge to build a new axiomatic solver that performs logical inference to solve geometry problems. Our axiomatic solver outperforms GEOS on all existing test sets introduced in Seo et al. (2015) as well as a new test set of geometry questions collected from these textbooks. We also performed user studies on a number of school students studying geometry who found that our axiomatic solver is more interpretable and useful compared with GEOS.",
"cite_spans": [
{
"start": 647,
"end": 671,
"text": "(Mann and Thompson 1988)",
"ref_id": "BIBREF78"
},
{
"start": 732,
"end": 743,
"text": "(Hovy 1998)",
"ref_id": "BIBREF51"
},
{
"start": 1304,
"end": 1321,
"text": "Seo et al. (2015)",
"ref_id": "BIBREF98"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "Discourse Analysis: Discourse analysis is the analysis of semantics conveyed by a coherent sequence of sentences, propositions, or speech. Discourse analysis is taken up in a variety of disciplines in the humanities and social sciences and a number of discourse theories have been proposed (Mann and Thompson 1988; Kamp and Reyle 1993; Lascarides and Asher 2008, inter alia) . Their starting point lies in the idea that text is not just a collection of sentences, but also includes relations between all these sentences that ensure its coherence. It is often assumed that discourse analysis is a threestep process:",
"cite_spans": [
{
"start": 290,
"end": 314,
"text": "(Mann and Thompson 1988;",
"ref_id": "BIBREF78"
},
{
"start": 315,
"end": 335,
"text": "Kamp and Reyle 1993;",
"ref_id": "BIBREF55"
},
{
"start": 336,
"end": 374,
"text": "Lascarides and Asher 2008, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2."
},
{
"text": "1. splitting the text into discourse units (DUs), 2. ensuring the attachment between DUs, and then 3. labeling links between DUs with discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2."
},
{
"text": "Discourse relations may be divided into two categories: nucleus-satellite (or subordinate) relations, which link an important argument to an argument supporting background information, and multinuclear (or coordinate) relations, which link arguments of equal importance. Most discourse theories (DRT, RST, SDRT, etc.) acknowledge that a discourse is hierarchically structured thanks to discourse relations. A number of discourse relations have been proposed under various theories for discourse analysis. Discourse analysis has been shown to be useful for many NLP tasks, such as question answering (Chai and Jin 2004; Lioma, Larsen, and Lu 2012; Jansen, Surdeanu, and Clark 2014) , summarization (Louis, Joshi, and Nenkova 2010) , and information extraction (Kitani, Eriguchi, and Hara 1994) . However, to the best of our knowledge, we do not have a theory or a working model of discourse in a multimedia setting.",
"cite_spans": [
{
"start": 599,
"end": 618,
"text": "(Chai and Jin 2004;",
"ref_id": "BIBREF17"
},
{
"start": 619,
"end": 646,
"text": "Lioma, Larsen, and Lu 2012;",
"ref_id": "BIBREF71"
},
{
"start": 647,
"end": 680,
"text": "Jansen, Surdeanu, and Clark 2014)",
"ref_id": "BIBREF53"
},
{
"start": 697,
"end": 729,
"text": "(Louis, Joshi, and Nenkova 2010)",
"ref_id": "BIBREF75"
},
{
"start": 759,
"end": 792,
"text": "(Kitani, Eriguchi, and Hara 1994)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Psychologists and educationists have frequently studied multimedia issues such as the impact of illustrations (pictures, tables, etc.) in text, design principles of multimedia presentations, and so forth (Dwyer 1978; Fleming, Levie, and Levie 1978; Hartley 1985; Twyman 1985) . However, these discussions are usually too general and hard to build on from a computational perspective. Thus, most studies of multimedia text have only been theoretical in nature. Larkin and Simon (1987) , Mayer (1989) , and Petre and Green (1990) attempt to answer questions: whether a graphical notation is superior to text notation, what makes a diagram (sometimes) worth ten thousand words, how illustration effects thinking. Hovy (1998) , Arens and Hovy (1990) , Arens (1992) , and Arens, Hovy, and Van Mulken (1993) provide a theory of the communicative function fulfilled by various formatting devices and use it in text planning. In a similar vein, Dale (1991b, a) , White (1995) , Pascual and Virbel (1996) , Reed and Long (1997) , and Bateman et al. (2001b) discuss the textual function of punctuation marks and use it in the text generation process. Andr\u00e9 et al. (1991) and Andr\u00e9 (2000) build a system WIP that generates multimedia presentations via layered architecture (composed of the control layer, content layer, design layer, realization layer, and the presentation layer) and with the help of various content, design, user, and application experts. Mackinlay (1986) discuss the automatic generation of tables and charts. Luc, Mojahid, and Virbel (1999) study enumerations. Feiner (1988) , Arens et al. (1988) , Neal et al. (1990) , Feiner and McKeown (1991) , Wahlster et al. (1992) , Arens, Hovy, and Vossers (1992) , and Maybury (1998) discuss various aspects of processing and knowledge required for automatically generating multimedia. Finally, Stock (1993) discusses using hypermedia features for the task of information exploration.",
"cite_spans": [
{
"start": 204,
"end": 216,
"text": "(Dwyer 1978;",
"ref_id": "BIBREF34"
},
{
"start": 217,
"end": 248,
"text": "Fleming, Levie, and Levie 1978;",
"ref_id": "BIBREF44"
},
{
"start": 249,
"end": 262,
"text": "Hartley 1985;",
"ref_id": "BIBREF50"
},
{
"start": 263,
"end": 275,
"text": "Twyman 1985)",
"ref_id": "BIBREF107"
},
{
"start": 460,
"end": 483,
"text": "Larkin and Simon (1987)",
"ref_id": "BIBREF61"
},
{
"start": 486,
"end": 498,
"text": "Mayer (1989)",
"ref_id": "BIBREF83"
},
{
"start": 505,
"end": 527,
"text": "Petre and Green (1990)",
"ref_id": "BIBREF91"
},
{
"start": 710,
"end": 721,
"text": "Hovy (1998)",
"ref_id": "BIBREF51"
},
{
"start": 724,
"end": 745,
"text": "Arens and Hovy (1990)",
"ref_id": "BIBREF4"
},
{
"start": 748,
"end": 760,
"text": "Arens (1992)",
"ref_id": "BIBREF3"
},
{
"start": 763,
"end": 801,
"text": "and Arens, Hovy, and Van Mulken (1993)",
"ref_id": "BIBREF5"
},
{
"start": 937,
"end": 952,
"text": "Dale (1991b, a)",
"ref_id": "BIBREF28"
},
{
"start": 955,
"end": 967,
"text": "White (1995)",
"ref_id": "BIBREF114"
},
{
"start": 970,
"end": 995,
"text": "Pascual and Virbel (1996)",
"ref_id": "BIBREF89"
},
{
"start": 998,
"end": 1018,
"text": "Reed and Long (1997)",
"ref_id": "BIBREF94"
},
{
"start": 1021,
"end": 1047,
"text": "and Bateman et al. (2001b)",
"ref_id": "BIBREF11"
},
{
"start": 1141,
"end": 1160,
"text": "Andr\u00e9 et al. (1991)",
"ref_id": "BIBREF2"
},
{
"start": 1165,
"end": 1177,
"text": "Andr\u00e9 (2000)",
"ref_id": "BIBREF1"
},
{
"start": 1447,
"end": 1463,
"text": "Mackinlay (1986)",
"ref_id": "BIBREF77"
},
{
"start": 1519,
"end": 1550,
"text": "Luc, Mojahid, and Virbel (1999)",
"ref_id": "BIBREF76"
},
{
"start": 1571,
"end": 1584,
"text": "Feiner (1988)",
"ref_id": "BIBREF38"
},
{
"start": 1587,
"end": 1606,
"text": "Arens et al. (1988)",
"ref_id": "BIBREF7"
},
{
"start": 1609,
"end": 1627,
"text": "Neal et al. (1990)",
"ref_id": "BIBREF87"
},
{
"start": 1630,
"end": 1655,
"text": "Feiner and McKeown (1991)",
"ref_id": "BIBREF40"
},
{
"start": 1658,
"end": 1680,
"text": "Wahlster et al. (1992)",
"ref_id": "BIBREF109"
},
{
"start": 1683,
"end": 1714,
"text": "Arens, Hovy, and Vossers (1992)",
"ref_id": "BIBREF6"
},
{
"start": 1721,
"end": 1735,
"text": "Maybury (1998)",
"ref_id": "BIBREF82"
},
{
"start": 1847,
"end": 1859,
"text": "Stock (1993)",
"ref_id": "BIBREF102"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formatting in Discourse:",
"sec_num": null
},
{
"text": "However, all the aforementioned studies were merely theoretical. All the models were hand-coded and not trained from multimedia corpora. In this paper, we provide a corpus analysis of multimedia text and use it to show that the formatting devices can indeed be used to improve a strong information extraction system in the geometry domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formatting in Discourse:",
"sec_num": null
},
{
"text": "Although the problem of using computers to solve geometry questions is old (Feigenbaum and Feldman 1963; Schattschneider and King 1997; Davis 2006) , NLP and computer vision techniques were first used to solve geometry problems in Seo et al. (2015) . Seo et al. (2014) only aligned geometric shapes with their textual mentions, but Seo et al. (2015) also extracted geometric relations and built GEOS, the first automated system to solve SAT style geometry questions. GEOS used a coordinate geometry based solution by translating each predicate into a set of manually written constraints. A Boolean satisfiability problem posed with these constraints was used to solve the multiple-choice question. GEOS had two key issues: (a) It needed access to answer choices that may not always be available for such problems, and (b) It lacked the deductive geometric reasoning used by students to solve these problems. In this article, we build an axiomatic solver that mitigates these issues by performing deductive reasoning using axiomatic knowledge extracted from textbooks. Furthermore, we use ideas from discourse to automatically extract these axiom rules from textbooks.",
"cite_spans": [
{
"start": 75,
"end": 104,
"text": "(Feigenbaum and Feldman 1963;",
"ref_id": "BIBREF37"
},
{
"start": 105,
"end": 135,
"text": "Schattschneider and King 1997;",
"ref_id": "BIBREF96"
},
{
"start": 136,
"end": 147,
"text": "Davis 2006)",
"ref_id": "BIBREF30"
},
{
"start": 231,
"end": 248,
"text": "Seo et al. (2015)",
"ref_id": "BIBREF98"
},
{
"start": 251,
"end": 268,
"text": "Seo et al. (2014)",
"ref_id": "BIBREF97"
},
{
"start": 332,
"end": 349,
"text": "Seo et al. (2015)",
"ref_id": "BIBREF98"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Geometry Problems:",
"sec_num": null
},
{
"text": "Automatic approaches that use logical inference for geometry theorem proving, such as the Wus method (Wen-Tsun 1986), Grobner basis method (Kapur 1986) , and angle method (Chou, Gao, and Zhang 1994) , have been used in tutoring systems such as Geometry Expert (Gao and Lin 2002) and Geometry Explorer (Wilson and Fleuriot 2005) . There has also been research in synthesizing geometry constructions, given logical constraints (Gulwani, Korthikanti, and Tiwari 2011; Itzhaky et al. 2013) or generating geometric proof problems (Alvin et al. 2014) for applications in tutoring systems. Our approach can be used to provide the axiomatic information necessary for these works.",
"cite_spans": [
{
"start": 139,
"end": 151,
"text": "(Kapur 1986)",
"ref_id": "BIBREF56"
},
{
"start": 171,
"end": 198,
"text": "(Chou, Gao, and Zhang 1994)",
"ref_id": "BIBREF23"
},
{
"start": 260,
"end": 278,
"text": "(Gao and Lin 2002)",
"ref_id": "BIBREF45"
},
{
"start": 301,
"end": 327,
"text": "(Wilson and Fleuriot 2005)",
"ref_id": "BIBREF115"
},
{
"start": 425,
"end": 464,
"text": "(Gulwani, Korthikanti, and Tiwari 2011;",
"ref_id": "BIBREF49"
},
{
"start": 465,
"end": 485,
"text": "Itzhaky et al. 2013)",
"ref_id": "BIBREF52"
},
{
"start": 525,
"end": 544,
"text": "(Alvin et al. 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Geometry Problems:",
"sec_num": null
},
{
"text": "Other Related Tasks: Our work is also related to Textbook Question Answering (Kembhavi et al. 2017 ), which proposes the task of multimodal machine comprehension where the context needed to answer questions composes of both text and images. The TQA data set is built from middle school science textbooks and pairs a given question to a limited span of knowledge needed to answer it. Also related is the work on Diagram QA (Kembhavi et al. 2016 ), which proposes the task of understanding and answering questions based on diagrams from textbooks, and FigureSeer (Siegel et al. 2016) , which parses figures in research papers.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Kembhavi et al. 2017",
"ref_id": "BIBREF58"
},
{
"start": 422,
"end": 443,
"text": "(Kembhavi et al. 2016",
"ref_id": "BIBREF58"
},
{
"start": 561,
"end": 581,
"text": "(Siegel et al. 2016)",
"ref_id": "BIBREF101"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Geometry Problems:",
"sec_num": null
},
{
"text": "Information Extraction from Textbooks: Our model for extracting structured rules of geometry from textbooks builds upon ideas from information extraction (IE), which is the task of automatically extracting structured information from unstructured and/or semi-structured documents. Although there has been a lot of work in IE on domains such as Web documents (Chang, Hsu, and Lui 2003; Etzioni et al. 2004; Cafarella et al. 2005; Chang et al. 2006; Banko et al. 2007; Etzioni et al. 2008; Mitchell et al. 2015) and scientific publication data (Shah et al. 2003; Peng and McCallum 2006; Saleem and Latif 2012) , work on IE from educational material is much more sparse. Most of the research in IE from educational material deals with extracting simple educational concepts (Shah et al. 2003; Canisius and Sporleder 2007; Yang et al. 2015; Wang et al. 2015; Liang et al. 2015; Wu et al. 2015; Liu et al. 2016b; Wang et al. 2016) or binary relational tuples (Balasubramanian et al. 2002; Clark et al. 2012; Dalvi et al. 2016) using existing IE techniques. On the other hand, our approach extracts axioms and parses them to horn-clause rules. This is much more challenging. Raw application of rule mining or sequence labeling techniques used to extract information from Web documents and scientific publications to educational material usually leads to poor results as the amount of redundancy in educational material is lower and the amount of labeled data is sparse. Our approach tackles these issues by making judicious use of typographical information, the redundancy of information, and ordering constraints to improve the harvesting and parsing of axioms. This has not been attempted in previous work.",
"cite_spans": [
{
"start": 358,
"end": 384,
"text": "(Chang, Hsu, and Lui 2003;",
"ref_id": "BIBREF20"
},
{
"start": 385,
"end": 405,
"text": "Etzioni et al. 2004;",
"ref_id": "BIBREF36"
},
{
"start": 406,
"end": 428,
"text": "Cafarella et al. 2005;",
"ref_id": "BIBREF15"
},
{
"start": 429,
"end": 447,
"text": "Chang et al. 2006;",
"ref_id": "BIBREF21"
},
{
"start": 448,
"end": 466,
"text": "Banko et al. 2007;",
"ref_id": "BIBREF9"
},
{
"start": 467,
"end": 487,
"text": "Etzioni et al. 2008;",
"ref_id": "BIBREF35"
},
{
"start": 488,
"end": 509,
"text": "Mitchell et al. 2015)",
"ref_id": "BIBREF84"
},
{
"start": 542,
"end": 560,
"text": "(Shah et al. 2003;",
"ref_id": "BIBREF99"
},
{
"start": 561,
"end": 584,
"text": "Peng and McCallum 2006;",
"ref_id": "BIBREF90"
},
{
"start": 585,
"end": 607,
"text": "Saleem and Latif 2012)",
"ref_id": null
},
{
"start": 771,
"end": 789,
"text": "(Shah et al. 2003;",
"ref_id": "BIBREF99"
},
{
"start": 790,
"end": 818,
"text": "Canisius and Sporleder 2007;",
"ref_id": "BIBREF16"
},
{
"start": 819,
"end": 836,
"text": "Yang et al. 2015;",
"ref_id": "BIBREF118"
},
{
"start": 837,
"end": 854,
"text": "Wang et al. 2015;",
"ref_id": "BIBREF111"
},
{
"start": 855,
"end": 873,
"text": "Liang et al. 2015;",
"ref_id": "BIBREF65"
},
{
"start": 874,
"end": 889,
"text": "Wu et al. 2015;",
"ref_id": "BIBREF116"
},
{
"start": 890,
"end": 907,
"text": "Liu et al. 2016b;",
"ref_id": "BIBREF73"
},
{
"start": 908,
"end": 925,
"text": "Wang et al. 2016)",
"ref_id": "BIBREF112"
},
{
"start": 954,
"end": 983,
"text": "(Balasubramanian et al. 2002;",
"ref_id": "BIBREF8"
},
{
"start": 984,
"end": 1002,
"text": "Clark et al. 2012;",
"ref_id": "BIBREF24"
},
{
"start": 1003,
"end": 1021,
"text": "Dalvi et al. 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Geometry Problems:",
"sec_num": null
},
{
"text": "Language to Programs: After harvesting axioms from textbooks, we also parse the axiom mentions to horn-clause rules. This work is related to a large body of work on semantic parsing Mooney 1993, 1996; Kate et al. 2005; Zettlemoyer and Collins 2012, inter alia). Semantic parsers typically map natural language to formal programs such as database queries (Liang, Jordan, and Klein 2011; Berant et al. 2013; Yaghmazadeh et al. 2017, inter alia) , commands to robots (Shimizu and Haas 2009; Matuszek, Fox, and Koscher 2010; Chen and Mooney 2011, inter alia), or even general purpose programs (Lei et al. 2013; Ling et al. 2016; Yin and Neubig 2017; Ling et al. 2017) . More specifically, Liu et al. (2016a) and Quirk, Mooney, and Galley (2015) learn \"If-Then\" and \"If-This-Then-That\" rules, respectively. In theory, these works can be adapted to parse axiom mentions to horn-clause rules. However, this would require a large amount of supervision, which would be expensive to obtain. We mitigated this issue by using redundant axiom mention extractions from multiple textbooks and then combining the parses obtained from various textbooks to achieve a better final parse for each axiom.",
"cite_spans": [
{
"start": 182,
"end": 200,
"text": "Mooney 1993, 1996;",
"ref_id": null
},
{
"start": 201,
"end": 218,
"text": "Kate et al. 2005;",
"ref_id": "BIBREF57"
},
{
"start": 219,
"end": 234,
"text": "Zettlemoyer and",
"ref_id": "BIBREF123"
},
{
"start": 354,
"end": 385,
"text": "(Liang, Jordan, and Klein 2011;",
"ref_id": "BIBREF66"
},
{
"start": 386,
"end": 405,
"text": "Berant et al. 2013;",
"ref_id": "BIBREF12"
},
{
"start": 406,
"end": 442,
"text": "Yaghmazadeh et al. 2017, inter alia)",
"ref_id": null
},
{
"start": 464,
"end": 487,
"text": "(Shimizu and Haas 2009;",
"ref_id": "BIBREF100"
},
{
"start": 488,
"end": 488,
"text": "",
"ref_id": null
},
{
"start": 590,
"end": 607,
"text": "(Lei et al. 2013;",
"ref_id": "BIBREF63"
},
{
"start": 608,
"end": 625,
"text": "Ling et al. 2016;",
"ref_id": "BIBREF69"
},
{
"start": 626,
"end": 646,
"text": "Yin and Neubig 2017;",
"ref_id": "BIBREF120"
},
{
"start": 647,
"end": 664,
"text": "Ling et al. 2017)",
"ref_id": "BIBREF70"
},
{
"start": 686,
"end": 704,
"text": "Liu et al. (2016a)",
"ref_id": "BIBREF72"
},
{
"start": 709,
"end": 741,
"text": "Quirk, Mooney, and Galley (2015)",
"ref_id": "BIBREF93"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Geometry Problems:",
"sec_num": null
},
{
"text": "Large-scale corpus studies of multimedia text have been rare because of the difficulty in obtaining rich multimedia documents in analyzable data structures. A large proportion of text today is typeset using some typesetting software such as LaTeX, Word, HTML, and so on. These features can also serve as useful cues in downstream applications and a model for text formatting is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Format",
"sec_num": "3."
},
{
"text": "Some more excerpts of textbooks from our data set that describe (a) complementary angles, (b) exterior angles, and (c) parallelogram diagonal bisection axioms. Each excerpt contains rich typographical features that can be used to harvest the axioms. (a) For the complementary angles mention, the textbook explicitly labels the section name \"5.2.1 Complementary Angles\" with boldface and color; the axiom name \"complementary angles\" is in bold font, and there is a supporting figure. (b) For the exterior angles mention, the axiom statement is boldfaced, the axiom rule is mentioned via an equation (which is emphasized with the boldfaced string \"To show\"), and there is a supporting figure. (c) For the parallelogram diagonal bisection mention, the axiom statement is emphasized with the boldfaced string \"Property,\" the axiom statement itself is italicized, there is a supporting figure, and the axiom rule is written as an equation. Our model will leverage such rich contextual and typographical information (when available) to accurately harvest axioms and then parses them to horn-clause rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1",
"sec_num": null
},
{
"text": "Corresponding JSON file for the example textbook excerpts shown in Figure 1 . We mark the various typographical features that can be used to harvest the axioms in red: features such as the heading, the bounding box, a supporting figure, and the equation. Table 1 shows some excerpts of textbooks from our data set that describe complementary angles, exterior angles, and parallelogram diagonal bisection axioms. As described, each excerpt contains rich typographical features, such as the section headings, italicization, boldface, coloring, explicit axiom name, supporting figures, and equations that can be used to harvest the axioms. We wish to leverage such rich contextual and typographical information to accurately harvest axioms and then parse them to horn-clause rules. The textbooks are provided to us in rich JSON format, which retains the rich typesetting of these textbooks as shown in Tables 2 and 3. For demonstration, we have manually marked the various typographical features that can be used to harvest the axioms. We will show how we can use these features to harvest axioms of geometry from textbooks and then parse them to structured rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 1",
"ref_id": null
},
{
"start": 255,
"end": 262,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "In this section, we review various text formatting devices used in a typical multimedia system and identify what communicative function they serve. This will help us come up with a theory for text formatting in discourse and also motivate how these features can be used in a typical NLP application like information extraction. This theory is inspired from various style suggestions for English writing (Strunk 2007) . The goal of a text formatting device in a multimedia text is to delimit the portion of text for which certain exceptional conditions of interpretation hold. We categorize text formatting devices into Table 3 Corresponding JSON files for the example textbook excerpts shown in Table 1 . We mark the various typographical features that can be used to harvest the axioms in red: (a) For the complementary angles mention, we have features such as the subsubsection \"5.2.1 Complementary Angles\" with boldface and color; the axiom name \"complementary angles\" is in bold font, and there is a supporting figure. (b) For the exterior angles mention, the axiom statement is boldfaced, the axiom rule is mentioned via an equation (which is emphasized with the boldfaced string \"To show\"), and there is a supporting figure. (c) For the parallelogram diagonal bisection mention, the axiom statement is emphasized with the boldfaced string \"Property,\" the axiom statement itself is italicized, there is a supporting figure, and the axiom rule is written as an equation. four broad categories: depiction, position, composition, and substantiation, and describe the various text formatting devices here:",
"cite_spans": [
{
"start": 403,
"end": 416,
"text": "(Strunk 2007)",
"ref_id": "BIBREF104"
}
],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 3",
"ref_id": null
},
{
"start": 695,
"end": 702,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text Formatting Elements in Discourse",
"sec_num": "4."
},
{
"text": "\u2022 Depiction: Depiction features concern with how a string of text is presented in the multimedia. These include features such as capitalization, font size/color, boldface, italicization, underline, strikethrough, parenthesis, quotation marks, use of bounding boxes, and so forth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Formatting Elements in Discourse",
"sec_num": "4."
},
{
"text": "\u2022 Position: Position features concern with the positioning of a piece of text relative to the remaining material in the document. These features include in-lining, text offset, footnotes, headers and footers, text separation or isolation (a block of text separated from the rest to create a special effect).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Formatting Elements in Discourse",
"sec_num": "4."
},
{
"text": "\u2022 Composition: Composition features are concerned with the internal structuring of a piece of text. Examples include graphical markers such as paragraph breaks, sections (having sections, chapters, etc., in the document), lists (itemization, enumeration), concept definition using a parenthesis or colon, and so on. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Formatting Elements in Discourse",
"sec_num": "4."
},
{
"text": "A key question for research is: Are these text formatting features useful for NLP tasks? In particular, in this article, we will try to identify whether these text formatting features are useful for information extraction. In a typical multimedia document, authors use various text formatting devices to better communicate the content to their readers. This helps the readers digest the material quickly and much more easily. Thus, can these text formatting features be useful in an information extraction system too? We experimentally validate our hypothesis in the application of harvesting axioms of geometry from richly formatted textbooks. Then, we show that these harvested axioms can improve an existing solver for answering SAT style geometry problems. SAT geometry tests the student's knowledge of Euclidean geometry in its classical sense, including the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, circles, and analytical geometry. A typical geometry problem is provided in Figure 2 . Geometry questions include a textual description accompanied by a diagram. Various levels of understanding are required to solve geometry problems. An important challenge is understanding both the diagram (which consists of identifying visual elements in the diagram, their locations, their geometric properties, etc.) and the text simultaneously, and then reasoning about the geometrical concepts using well-known axioms of Euclidean geometry.",
"cite_spans": [],
"ref_spans": [
{
"start": 1030,
"end": 1038,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text Formatting Features for Information Extraction?",
"sec_num": "5."
},
{
"text": "We first recap GEOS, a completely automatic solver for geometry problems. We will then use the rich contextual and typographical information in textbooks to extract structured knowledge of geometry. This structured knowledge of geometry will then be used to improve GEOS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Formatting Features for Information Extraction?",
"sec_num": "5."
},
{
"text": "An example SAT style geometry problem. The problem consists of a diagram as well as the question text. In order to solve such a question, the system is required to understand both the diagram as well as the question text, and also reason about geometrical concepts using well-known axioms of Euclidean geometry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure",
"sec_num": null
},
{
"text": "Our work reuses GEOS (Seo et al. 2015) to parse the question text and diagram into its formal problem description as shown in Figure 3 . GEOS uses a logical formula, a first-order logic expression that includes known numbers or geometrical entities (e.g., 4 cm) as constants, unknown numbers or geometrical entities (e.g., O) as variables, geometric or arithmetic relations (e.g., isLine, isTriangle) as predicates, and properties of geometrical entities (e.g., measure, liesOn) as functions. This is done by learning a set of relations that potentially correspond to the question text (or the diagram) along with a confidence score. For diagram parsing, GEOS uses a publicly available diagram parser for geometry problems (Seo et al. 2014) to obtain the set of all visual elements, their coordinates, their relationships in the diagram, and their",
"cite_spans": [
{
"start": 21,
"end": 38,
"text": "(Seo et al. 2015)",
"ref_id": "BIBREF98"
},
{
"start": 723,
"end": 740,
"text": "(Seo et al. 2014)",
"ref_id": "BIBREF97"
}
],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background: GEOS",
"sec_num": "6."
},
{
"text": "A logical expression that represents the meaning of the text description and the diagram in the geometry problem in Figure 2 . GEOS derives a weighted logical expression where each predicate also carries a weighted score, but we do not show them here for clarity. alignment with entity references in the question text. The diagram parser also provides confidence scores for each literal to be true in the diagram. For text parsing, GEOS takes a multistage approach, which maps words or phrases in the text to their corresponding concepts, and then identifies relations between identified concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Given this formal problem description, GEOS uses a numerical method to check the satisfiablity of literals by defining a relaxed indicator function for each literal. These indicator functions are manually engineered for every predicate. Each predicate is mapped into a set of constraints over point coordinates. 2 These constraints can be nontrivial to write, requiring significant manual engineering. As a result, GEOS's constraint set is incomplete and it cannot solve a number of SAT style geometry questions. Furthermore, this solver is not interpretable. As our user studies show, it is not natural for a student to understand the solution of these geometry questions in terms of satisfiability of constraints over coordinates. A more natural way for students to understand and reason about these questions is through deductive reasoning using axioms of geometry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "To tackle the aforementioned issues with the numerical solver in GEOS, we replace the numerical solver with an axiomatic solver. We extract axiomatic knowledge from textbooks and parse them into horn-clause rules. Then we build an axiomatic solver that performs logical inference with these horn-clause rules and the formal problem description. A sample logical program (in prolog notation) that solves the problem in Figure 2 is given in Figure 4 . The logical program has a set of declarations from the GEOS text and diagram parsers that describe the problem specification; and the parsed horn-clause rules describe the underlying theory. Normalized confidence scores from question text and diagram and axiom parsing models are used as probabilities in the program. Figure 5 shows a block diagram of the overall system that solves geometry problems. Also, Figure 6 pictorially shows the two step procedure for obtaining structured axiomatic knowledge from textbooks:",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 426,
"text": "Figure 2",
"ref_id": null
},
{
"start": 439,
"end": 447,
"text": "Figure 4",
"ref_id": null
},
{
"start": 768,
"end": 776,
"text": "Figure 5",
"ref_id": null
},
{
"start": 858,
"end": 866,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Set-up for the Axiomatic Solver",
"sec_num": "7."
},
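To make the deductive step concrete, the following is a minimal illustrative sketch (in Python, not the prolog-style probabilistic reasoner used in the article) of forward chaining over ground horn-clause rules of the kind harvested from textbooks; the facts and predicate strings are hypothetical stand-ins for the declarations produced by the text and diagram parsers.

# Each rule is (premises, conclusion); facts and conclusions are ground atoms encoded as strings.
rules = [
    ({"isTriangle(ABC)", "perpendicular(AC, BC)"}, "pythagorean(ABC)"),
    ({"pythagorean(ABC)", "AC=3", "BC=4"}, "AB=5"),
]
facts = {"isTriangle(ABC)", "perpendicular(AC, BC)", "AC=3", "BC=4"}

changed = True
while changed:  # apply rules until no new fact can be derived (a fixed point)
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("AB=5" in facts)  # True: the goal follows from the declarations and the axioms

The probabilistic solver described in the article additionally weights declarations and rules with the normalized confidence scores from the text, diagram, and axiom parsing models.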
{
"text": "Axiom Identification and Alignment: In this stage, we identify axiom mentions in all textbooks and align the mentions of the same axiom across different textbooks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "Axiom Parsing: In this stage, we parse each of these axiom mentions into implication rules and then resolve the implication rules for various axiom mentions referring to the same axiom mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Next, we describe how we harvest structured axiomatic knowledge from textbooks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "We present a structured prediction model that identifies axioms in textbooks and then parses them. Because harvesting axioms from a single textbook is a very hard problem, we use multiple textbooks and leverage the redundancy of information to accurately 2 For example, the predicate isPerpendicular(AB, CD) is mapped to the constraint ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harvesting Axiomatic Knowledge",
"sec_num": "8."
},
{
"text": "y B \u2212y A x B \u2212x A \u00d7 y D \u2212y C x D \u2212x C = \u22121.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harvesting Axiomatic Knowledge",
"sec_num": "8."
},
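As an illustration of the kind of manually engineered constraint mentioned in the footnote above, here is a small sketch (Python; a hypothetical helper, not GEOS code) that evaluates the isPerpendicular(AB, CD) condition as a relaxed indicator over point coordinates, tolerating small numerical error.

from typing import Tuple

Point = Tuple[float, float]  # (x, y) coordinates, e.g. from the diagram parser

def is_perpendicular(a: Point, b: Point, c: Point, d: Point, tol: float = 1e-2) -> bool:
    """Relaxed check that the product of the slopes of AB and CD is -1."""
    (xa, ya), (xb, yb), (xc, yc), (xd, yd) = a, b, c, d
    # Handle (near-)vertical segments separately, since their slope is undefined:
    # a vertical segment is perpendicular to a horizontal one.
    if abs(xb - xa) < 1e-9 or abs(xd - xc) < 1e-9:
        return (abs(xb - xa) < 1e-9 and abs(yd - yc) < 1e-9) or \
               (abs(xd - xc) < 1e-9 and abs(yb - ya) < 1e-9)
    slope_ab = (yb - ya) / (xb - xa)
    slope_cd = (yd - yc) / (xd - xc)
    return abs(slope_ab * slope_cd + 1.0) < tol

print(is_perpendicular((0, 0), (0, 2), (0, 0), (3, 0)))  # True for a right angle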
{
"text": "A sample logical program (in prolog style) that solves the problem in Figure 2 . The program consists of a set of data structure declarations that correspond to types in the prolog program, a set of declarations from the diagram and text parse, and a subset of the geometry axioms written as horn-clause rules. The axioms are used as the underlying theory with the aforementioned declarations to yield the solution upon logical inference. Normalized confidence weights from the diagram, text, and axiom parses are used as probabilities. For reader understanding, we list the axioms in the order (1 to 7) they are used to solve the problem. However, this ordering is not required. Other (less probable) declarations and axiom rules are not shown here for clarity but they can be assumed to be present.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Block diagram of our overall system that solves geometry problems. We use GEOS (Seo et al. 2015) -previous work that parses geometry questions into a formal problem description. In this article, we describe an approach to harvest geometry axioms from textbooks and then parse them to rules. Then, we use an off-the-shelf prolog style probabilistic reasoner (solver) to perform logical inference with these horn-clause rules and the formal problem description to obtain the answer. Our focus in this article is on the task of harvesting knowledge of geometry from textbooks. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "Pictorial representation of our two step procedure for obtaining structured axiomatic knowledge from textbooks. Left: In the first step, we identify axiom mentions in all the textbooks (shown in blue) and align the mentions of the same axiom across different textbooks (shown in red). Right:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "In the second step, we parse each of these identified axiom mentions into implication rules and then resolve the implication rules for various axiom mentions referring to the same axiom mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "extract and parse axioms. We first define a joint model that identifies axiom mentions in each textbook and aligns repeated mentions of the same axiom across textbooks. Then, given a set of axioms (with possibly multiple mentions of each axiom), we define a parsing model that maps each axiom to a horn-clause rule by utilizing the various mentions of the axiom. Given a set of textbooks B in machine readable form (JSON in our experiments), we extract chapters relevant for geometry in each of them to obtain a sequence of discourse elements (with associated typographical information) from each textbook. We assume that the textbook comprises an ordered set 3 of discourse elements where a discourse element could be a natural language sentence, heading, title, figure, table, or caption. The discourse element (e.g., a sentence) could have additional typographical features. For example, the sentence could be written in boldface, underline, and so forth. These properties of discourse elements will be useful features that can be leveraged for the task of harvesting axioms. Let S b = {s (b) 0 , s",
"cite_spans": [
{
"start": 1092,
"end": 1095,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "(b) 1 , . . . s (b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "|S b | } denote the sequence of discourse elements in textbook b. |S b | denotes the number of discourse elements in textbook b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
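As a concrete (purely hypothetical) illustration of what one element of $S_b$ might look like after extraction from the JSON textbooks, consider the sketch below; the field names are illustrative and are not the actual schema of the data set.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscourseElement:
    """One element of S_b: a sentence, heading, title, figure, table, or caption."""
    text: str
    kind: str                         # e.g. "sentence", "heading", "caption"
    bold: bool = False
    italic: bool = False
    in_bounding_box: bool = False
    has_equation: bool = False
    figure_ref: Optional[str] = None  # e.g. "Figure 2.1" if the element points to a diagram

s_b = [
    DiscourseElement("Theorem 8.4 ...", "heading", bold=True, in_bounding_box=True),
    DiscourseElement("In a right triangle, ...", "sentence", has_equation=True,
                     in_bounding_box=True, figure_ref="Figure 2.1"),
]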
{
"text": "We decompose the problem of extracting axioms from textbooks into two tractable sub-problems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "identification of axiom mentions in each textbook using sequence labeling 2. alignment of repeated mentions of the same axiom across textbooks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "Then, we combine the learned models for these sub-problems into a joint optimization framework that simultaneously learns to identify and align axiom mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "8.1.1 Axiom Identification. Linear-chain conditional random field formulation (Lafferty, McCallum, and Pereira 2001) can be used for the subproblem of axiom identification. Given {S b |b \u2208 B}, a sequence of discourse elements (with associated typographical information) from each textbook, the model labels each discourse element s i and Y b be the tag sequence assigned to S b . The conditional random field defines:",
"cite_spans": [
{
"start": 78,
"end": 116,
"text": "(Lafferty, McCallum, and Pereira 2001)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "p(Y b |S b ; \u03b8 \u03b8 \u03b8) \u221d |S b | k=1 exp \uf8eb \uf8ed i,j\u2208T \u03b8 \u03b8 \u03b8 T ij f ij (y (b) k\u22121 , y (b) k , S b ) \uf8f6 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "We find the parameters \u03b8 \u03b8 \u03b8 using maximum-likelihood estimation with L2 regularization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "\u03b8 \u03b8 \u03b8 * = arg max \u03b8 \u03b8 \u03b8 b\u2208B log p(Y b |S b ; \u03b8 \u03b8 \u03b8) \u2212 \u03bb||\u03b8 \u03b8 \u03b8|| 2 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "We use limited memory BFGS (L-BFGS) to optimize the objective and Viterbi decoding for inference. \u03bb is tuned on the dev set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
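A minimal sketch of this identification step using the off-the-shelf sklearn-crfsuite package (an assumption for illustration; the article does not specify an implementation), with L-BFGS training and an L2 penalty c2 playing the role of the regularizer lambda; each discourse element is assumed to have already been converted into a dictionary of content and typography features like those in Table 4.

import sklearn_crfsuite  # pip install sklearn-crfsuite

# X: one feature-dict sequence per textbook; y: one tag per discourse element.
X_train = [[
    {"bold": True, "has_equation": True, "keyword_hence": False, "unigram_overlap": 0.4},
    {"bold": False, "has_equation": False, "keyword_hence": True, "unigram_overlap": 0.1},
]]
y_train = [["AXIOM", "O"]]  # hypothetical tag set: part of an axiom mention vs. not

crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",    # L-BFGS optimization of the regularized log-likelihood
    c2=0.1,               # L2 regularization strength, tuned on the dev set
    max_iterations=200,
)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # Viterbi decoding of the most likely tag sequence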
{
"text": "Features: Features f look at a pair of adjacent tags y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(b) k\u22121 , y",
"eq_num": "(b)"
}
],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "k , the input sequence S b , and where we are in the sequence. The features (listed in Table 4 ) include various content-based features encoding various notions of similarity between pairs of discourse elements (in terms of semantic overlap, more refined match of geometry entities, and certain keywords) as well as various typographical features such as whether the discourse elements are annotated as an axiom (or theorem or corollary) in the textbook; contain equations, diagrams, or text that is bold or italicized; are in the same node of the JSON hierarchy; are contained in a bounding box, and so forth. We also use features directly from an existing RST parser (Feng and Hirst 2014) ; discourse structure can be useful to understand if two consecutive discourse elements are together part of an axiom (or not).",
"cite_spans": [
{
"start": 669,
"end": 690,
"text": "(Feng and Hirst 2014)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "Some extracted axiom mentions contain pointers to a diagram (e.g., \" Figure 2.1\") . In all these cases, we consider the diagram to be a part of the axiom mention. We will discuss the impact of the various content-and typography-based features later in Section 11. 8.1.2 Axiom Alignment. Next, we leverage the redundancy of information and the relatively fixed ordering of axioms in various textbooks. Most textbooks typically present all axioms of geometry in approximately the same order, moving from easier concepts to more advanced concepts. For example, all textbooks will introduce the definition of a right-angled triangle before introducing the Pythagorean theorem. We leverage this structure by aligning various mentions of the same axiom across textbooks and introducing structural constraints on the alignment.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 81,
"text": "Figure 2.1\")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Axiom Identification and Alignment",
"sec_num": "8.1"
},
{
"text": "Feature set for our axiom identification model. The features are based on content and typography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 4",
"sec_num": null
},
{
"text": "Semantic textual similarity between the current and next discourse element. We include features that compute the proportion of common unigrams and bigrams across the two discourse elements. This feature is conjoined with the tag assigned to the current and next sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence overlap",
"sec_num": null
},
{
"text": "Number of geometry entities (constants, predicates, and functions)-normalized by the number of tokens in this discourse element. This feature is conjoined with the tag assigned to the current discourse element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometry entities",
"sec_num": null
},
{
"text": "Indicator that the current discourse element contains any one of the following words: hence, if, equal, twice, proportion, ratio, product. This feature is conjoined with the tag assigned to the current discourse element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords",
"sec_num": null
},
{
"text": "Indicator for the RST relation between the current and next discourse element. This feature is conjoined with the tag assigned to the current and next sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RST edge",
"sec_num": null
},
{
"text": "(a) The current (or previous) discourse element is mentioned as an Axiom, Theorem, or Corollary (e.g., Similar Triangle Theorem or Corollary 2.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom, Theorem, Corollary Mention",
"sec_num": null
},
{
"text": "(b) The section or subsection in the textbook containing the current (or previous) discourse element mentions an Axiom, Theorem, or Corollary. This feature is conjoined with the tag assigned to the current (and previous) discourse element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom, Theorem, Corollary Mention",
"sec_num": null
},
{
"text": "The current (or next) discourse element contains an equation (e.g., PA \u00d7 PB = PT 2 ). This feature is conjoined with the tag assigned to the current (and next) sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equation",
"sec_num": null
},
{
"text": "The current discourse element contains a pointer to a figure (e.g., \" Figure 2 .1\"). This feature is conjoined with the tag assigned to the current discourse element.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Associated diagram",
"sec_num": null
},
{
"text": "The discourse element (or previous discourse element) contains text that is in bold font or underlined. Conjoined with the tag assigned to the current (and previous) discourse element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bold/ Underline",
"sec_num": null
},
{
"text": "Indicator that the current and previous discourse elements are bounded by a bounding box in the textbook. Conjoined with the tag assigned to the current (and previous) discourse element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bounding box",
"sec_num": null
},
{
"text": "Indicator that the current and previous discourse element are in the same node of the JSON hierarchy. Conjoined with the tag assigned to the current (and previous) discourse element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "Let $\\mathcal{A}_b = A^{(b)}_1, A^{(b)}_2, \\ldots, A^{(b)}_{|\\mathcal{A}_b|}$ be the axiom mentions extracted from textbook $b$, and let $\\mathcal{A}$ denote the collection of axiom mentions extracted from all textbooks. We assume a global ordering of axioms $A^* = A^*_1, A^*_2, \\ldots, A^*_U$, where $U$ is a predefined upper bound on the total number of axioms in geometry, and we require that the axiom mentions extracted from each textbook (roughly) follow this ordering. Let $Z^{(b)}_{ij}$ be a random variable that denotes whether axiom mention $A^{(b)}_i$ extracted from book $b$ refers to the global axiom $A^*_j$. We introduce a log-linear model that factorizes over alignment pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "P(Z|A; \u03c6 \u03c6 \u03c6) = 1 Z(A; \u03c6 \u03c6 \u03c6) \u00d7 exp \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed b 1 ,b 2 \u2208B b 1 =b 2 1\u2264k\u2264U 1\u2264i\u2264|A b 1 | 1\u2264j\u2264|A b 2 | Z (b 1 ) ik Z (b 2 ) jk \u03c6 \u03c6 \u03c6 T g(A (b 1 ) i , A (b 2 ) j ) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
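The following sketch shows how the unnormalized log-score of one alignment configuration could be computed under the pairwise factorization above; the data layout (dicts of binary matrices) and the toy similarity feature g are illustrative assumptions, not the article's code.

```python
import numpy as np

def alignment_log_score(Z, mentions, phi, g):
    """
    Unnormalized log-score of an alignment configuration: the sum over pairs of
    books (b1 != b2) and mention pairs (i, j) aligned to the same global axiom k
    of Z[b1][i][k] * Z[b2][j][k] * phi^T g(A_i^(b1), A_j^(b2)).

    Z        : dict book -> binary array of shape (num_mentions_in_book, U)
    mentions : dict book -> list of mention strings (or richer objects)
    phi      : weight vector; g : feature function over two mentions -> vector
    """
    score = 0.0
    books = list(mentions)
    for b1 in books:
        for b2 in books:
            if b1 == b2:
                continue
            for i, a_i in enumerate(mentions[b1]):
                for j, a_j in enumerate(mentions[b2]):
                    shared = np.dot(Z[b1][i], Z[b2][j])  # 1 iff same global axiom
                    if shared:
                        score += shared * float(phi @ g(a_i, a_j))
    return score

# Toy usage with a trivial similarity feature (word overlap only).
g = lambda a, b: np.array([len(set(a.split()) & set(b.split())) /
                           max(1, len(set(a.split()) | set(b.split())))])
phi = np.array([2.0])
mentions = {"book1": ["pythagoras theorem right triangle"],
            "book2": ["in a right triangle pythagoras holds"]}
U = 3
Z = {"book1": np.zeros((1, U)), "book2": np.zeros((1, U))}
Z["book1"][0, 0] = Z["book2"][0, 0] = 1   # both aligned to global axiom 0
print(alignment_log_score(Z, mentions, phi, g))
```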
{
"text": "Here, Z(A; \u03c6 \u03c6 \u03c6) is the partition function of the log-linear model. g denotes a feature function that measures the similarity of two axiom mentions (described in detail later). We introduce the following constraints on the alignment structure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "C1: An axiom appears in a book at most once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "An axiom refers to exactly one theorem in the global ordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C2:",
"sec_num": null
},
{
"text": "Ordering Constraint: If i th axiom in a book refers to the j th axiom in the global ordering then no axiom succeeding the i th axiom can refer to a global axiom preceding j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C3:",
"sec_num": null
},
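A minimal feasibility check for one book under a simplified reading of constraints C1-C3 (each mention aligned to exactly one global axiom, each global axiom used at most once per book, and non-decreasing global indices down the rows) might look as follows; the matrix representation is an assumption for illustration.

```python
def feasible(Z_b):
    """
    Check constraints C1-C3 for one book's alignment matrix Z_b
    (rows = axiom mentions in the book, columns = global axioms).
      C1: each global axiom is used at most once in the book (column sums <= 1)
      C2: each mention aligns to exactly one global axiom (row sums == 1)
      C3: ordering -- aligned global indices are non-decreasing down the rows
    """
    n_rows = len(Z_b)
    n_cols = len(Z_b[0]) if n_rows else 0
    for k in range(n_cols):                               # C1
        if sum(Z_b[i][k] for i in range(n_rows)) > 1:
            return False
    cols = []
    for row in Z_b:                                        # C2
        if sum(row) != 1:
            return False
        cols.append(row.index(1))
    return all(cols[i] <= cols[i + 1] for i in range(len(cols) - 1))  # C3

# Example: two mentions aligned to global axioms 0 and 2 (order preserved).
print(feasible([[1, 0, 0, 0],
                [0, 0, 1, 0]]))   # True
print(feasible([[0, 0, 1, 0],
                [1, 0, 0, 0]]))   # False: violates the ordering constraint C3
```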
{
"text": "We find the optimal parameters \u03c6 \u03c6 \u03c6 using maximumlikelihood estimation with L2 regularization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "\u03c6 \u03c6 \u03c6 * = arg max \u03c6 \u03c6 \u03c6 log P(Z|A; \u03c6 \u03c6 \u03c6) \u2212 \u00b5||\u03c6 \u03c6 \u03c6|| 2 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "We use L-BFGS to optimize the objective. To compute feature expectations appearing in the gradient of the objective, we use a Gibbs sampler. The sampling equations for Z b ik are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "P(Z (b) ik |rest) \u221d exp (T b (i, k)) (1) T b (i, k) = Z (b) ik b \u2208B b =b 1\u2264j\u2264|A b | Z (b ) jk \u03c6 \u03c6 \u03c6 T g(A (b) i , A (b ) j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
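One Gibbs update for the alignment of a single mention, restricted to the feasible columns, could be sketched as below; pair_score and feasible_cols are hypothetical stand-ins for the quantity T_b(i, k) in Equation (1) and for the constraint logic described above.

```python
import math, random

def gibbs_step(Z, b, i, U, pair_score, feasible_cols):
    """
    Resample the alignment of mention i in book b.
    pair_score(b, i, k) should return T_b(i, k) with Z[b][i][k] set to 1, and
    feasible_cols(Z, b, i) should return the global-axiom indices allowed by
    constraints C1-C3.
    """
    cols = feasible_cols(Z, b, i)
    weights = [math.exp(pair_score(b, i, k)) for k in cols]
    total = sum(weights)
    r, acc, chosen = random.random() * total, 0.0, cols[-1]
    for k, w in zip(cols, weights):
        acc += w
        if r <= acc:
            chosen = k
            break
    for k in range(U):                       # write the one-hot choice back
        Z[b][i][k] = 1 if k == chosen else 0
    return chosen

# Toy usage: two feasible columns, the second strongly preferred.
Z = {"book1": [[0, 0, 0]]}
print(gibbs_step(Z, "book1", 0, 3,
                 pair_score=lambda b, i, k: [0.0, 2.0, -1.0][k],
                 feasible_cols=lambda Z, b, i: [0, 1]))
```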
{
"text": "Note that the constraints C1 . . . 3 define the feasible space of alignments. Our sampler always samples the next Z Learning with Soft Constraints: We might want to treat some constraints, in particular, the ordering constraints C3 as soft constraints. We can write down the constraint C3 using the alignment variables:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "Z (b) ij \u2264 1 \u2212 Z (b) kl \u2200 1 \u2264 i < k \u2264 |A b |, 1 \u2264 l < j \u2264 U \u2200 b \u2208 B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "To model these constraints as soft constraints, we penalize the model for violating these constraints. Let the penalty for violating this constraints be the exp \u03bd max 0, 1 \u2212 Z (b) ij \u2212 Z (b) kl . Thus, we introduce a new regularization term:",
"cite_spans": [
{
"start": 176,
"end": 179,
"text": "(b)",
"ref_id": null
},
{
"start": 187,
"end": 190,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "R(Z) = 1\u2264i<k\u2264|A b | 1\u2264l<j\u2264U b\u2208B exp \u03bd max 0, 1 \u2212 Z (b) ij \u2212 Z (b) kl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
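The ordering regularizer R(Z) can be transcribed directly for one book as in the sketch below; this is a literal implementation of the expression above for illustration only, with a toy binary alignment matrix.

```python
import math

def ordering_penalty(Z_b, U, nu):
    """
    Direct transcription of the regularizer above, restricted to one book:
        exp(nu * max(0, 1 - Z[i][j] - Z[k][l]))
    summed over mention pairs i < k and global-axiom index pairs l < j.
    Z_b is a binary matrix: rows = mentions in the book, columns = global axioms.
    """
    n = len(Z_b)
    total = 0.0
    for i in range(n):
        for k in range(i + 1, n):            # i < k
            for j in range(U):
                for l in range(j):           # l < j
                    total += math.exp(nu * max(0.0, 1 - Z_b[i][j] - Z_b[k][l]))
    return total

# Toy usage: 2 mentions, 3 global axioms, alignment respecting the ordering.
Z_b = [[1, 0, 0],
       [0, 0, 1]]
print(ordering_penalty(Z_b, U=3, nu=0.1))
```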
{
"text": "Here \u03bd is a hyper-parameter to tune the cost of violating a constraint. We write down the following regularized objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "\u03c6 \u03c6 \u03c6 * = arg max \u03c6 \u03c6 \u03c6 log P(Z|A; \u03c6 \u03c6 \u03c6) \u2212 R(Z) \u2212 \u00b5||\u03c6 \u03c6 \u03c6|| 2 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "We use L-BFGS to find the optimal parameters \u03c6 \u03c6 \u03c6 * . We perform Gibbs sampling to compute feature expectations. The sampling equation for Z (b) ik is similar (Equation (1)), but:",
"cite_spans": [
{
"start": 142,
"end": 145,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "T b (i, k) = b \u2208B b =b 1\u2264j\u2264|A b | Z (b) ik Z (b ) jk \u03c6 \u03c6 \u03c6 T g(A (b) i , A (b ) j ) + \u03bd b \u2208B b =b i<j\u2264|A b | 1\u2264l<k 1 \u2212 Z (b) ik \u2212 Z (b ) jl + \u03bd b \u2208B b =b 1\u2264j<i| k<l\u2264U 1 \u2212 Z (b) ik \u2212 Z (b ) jl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "Features: Now, we describe the features g. These too include content-based features encoding various notions of similarity between pairs of axiom mentions (such as unigram, bigram, dependency and entity overlap, longest common subsequence [LCS], alignment, MT, and summarization scores) as well as various typographical features, such as matching of the current (and parent) node of axiom mentions in respective JSON hierarchies, equation template matching, and image caption matching. The features are listed in Table 5 . We will further discuss the impact of the various content-and typography-based features later in Section 11.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "8.1.3 Joint Identification and Alignment. Joint modeling of axiom identification and alignment components is useful as both problems potentially help each other. Correct axiom identification can help predict correct alignments and axiom alignments can help predict correct axiom mention boundaries. Hence, we combine the respective models for identification and alignment into a joint model. Let Y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "ij denote that the discourse element s (b) i from book b has tag j. We reuse the definitions of the alignment variables Z (b) ij as before. We further define Z (b) i0 such that it denotes that the i th axiom in textbook b Table 5 Feature set for our axiom alignment model. The features are based on content and typography.",
"cite_spans": [
{
"start": 39,
"end": 42,
"text": "(b)",
"ref_id": null
},
{
"start": 160,
"end": 163,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning with Hard Constraints:",
"sec_num": null
},
{
"text": "Real valued features that compute the proportion of common unigrams, bigrams, dependencies, and geometry entities (constants, predicates, and functions) across the two axioms. When comparing geometric entities, we include geometric entities derived from the associated diagrams when available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigram, Bigram, Dependency and Entity Overlap",
"sec_num": null
},
{
"text": "Real valued feature that computes the length of longest common subsequence of words between two axiom mentions normalized by the total number of words in the two mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Longest Common Subsequence",
"sec_num": null
},
{
"text": "Real valued feature that computes the absolute difference in the number of discourse elements in the two mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of discourse elements",
"sec_num": null
},
{
"text": "We use an off-the-shelf monolingual word aligner-JACANA (Yao et al. 2013 ) pretrained on PPDB-and compute alignment score between axiom mentions as the feature.",
"cite_spans": [
{
"start": 56,
"end": 72,
"text": "(Yao et al. 2013",
"ref_id": "BIBREF119"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Scores",
"sec_num": null
},
{
"text": "We use two common MT evaluation metrics METEOR (Denkowski and Lavie 2010) and MAXSIM (Chan and Ng 2008) , and use the evaluation scores as features. While METEOR computes n-gram overlaps controlling on precision and recall, MAXSIM performs bipartite graph matching and maps each word in one axiom to at most one word in the other.",
"cite_spans": [
{
"start": 85,
"end": 103,
"text": "(Chan and Ng 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MT Metrics",
"sec_num": null
},
{
"text": "We also use Rouge-S (Lin 2004) , a text summarization metric, and use the evaluation score as a feature. Rouge-S is based on skip-grams. is not aligned with any global axiom. We again define a log-linear model with factors that score axiom identification and axiom alignments.",
"cite_spans": [
{
"start": 20,
"end": 30,
"text": "(Lin 2004)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
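Two of the content features in Table 5 (unigram overlap and the normalized longest common subsequence) can be sketched as follows; this is a simplified stand-in, not the feature extractor used in the article.

```python
def unigram_overlap(a: str, b: str) -> float:
    """Proportion of shared word types between two axiom mentions."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(1, len(sa | sb))

def lcs_feature(a: str, b: str) -> float:
    """Length of the longest common subsequence of words, normalized by the
    total number of words in the two mentions (cf. Table 5)."""
    x, y = a.lower().split(), b.lower().split()
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[-1][-1] / max(1, len(x) + len(y))

m1 = "the square of the hypotenuse equals the sum of the squares of the other two sides"
m2 = "in a right triangle the square on the hypotenuse is the sum of the squares on the legs"
print(unigram_overlap(m1, m2), lcs_feature(m1, m2))
```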
{
"text": "p(Y, Z|{S b }; \u03b8 \u03b8 \u03b8, \u03c6 \u03c6 \u03c6) \u221d f AI (Y|{S b }; \u03b8 \u03b8 \u03b8) \u00d7 f AA (Z|Y, {S b }; \u03c6 \u03c6 \u03c6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "Here, the factors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "f AI = exp( b\u2208B |S b | k=1 i,j\u2208T Y (b) k\u22121i Y (b) kj \u03b8 \u03b8 \u03b8 T ij f ij (i, j, S b )) f AA = exp( b 1 ,b 2 \u2208B b 1 =b 2 1\u2264k\u2264U 1\u2264i\u2264|A b 1 | 1\u2264j\u2264|A b 2 | Z (b 1 ) ik Z (b 2 ) jk \u03c6 \u03c6 \u03c6 T g(A (b 1 ) i , A (b 2 ) j ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
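Because both factors are exponentials of linear scores, the unnormalized joint log-probability is just the sum of the identification and alignment scores. The sketch below illustrates this with a toy scalar feature and made-up weights; the tag set {B, I, O} and the helper names are assumptions for illustration.

```python
import math

def f_AI_log(tags, feat, theta):
    """
    Log of the identification factor: sum over positions k of
    theta[(t_{k-1}, t_k)] * feat(k, t_{k-1}, t_k), using scalar features for
    brevity. `tags` is a sequence over {B, I, O}.
    """
    return sum(theta[(tags[k - 1], tags[k])] * feat(k, tags[k - 1], tags[k])
               for k in range(1, len(tags)))

def joint_log_score(tags, Z_logscore, feat, theta):
    """Unnormalized log p(Y, Z): the two factors multiply, so their logs add."""
    return f_AI_log(tags, feat, theta) + Z_logscore

# Toy usage with made-up weights and a constant feature value.
theta = {("O", "B"): 1.0, ("B", "I"): 0.8, ("I", "O"): 0.2, ("O", "O"): 0.1}
tags = ["O", "B", "I", "O"]
print(math.exp(joint_log_score(tags, Z_logscore=1.7,
                               feat=lambda k, p, c: 1.0, theta=theta)))
```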
{
"text": "Note that an error in axiom identification would result in a change in the axiom alignment feature function g and hence would worsen the quality of axiom alignments. This motivates our joint modeling of axiom identification and alignment. We again have the following model constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "C1 : Every discourse element has a unique label C2 Tag O cannot be followed by tag I C3 Consistency between Ys and Zs, i.e., axiom boundaries defined by Ys and Zs must agree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "C4 = C3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "We use L-BFGS for learning. To compute feature expectations, we use a Metropolis Hastings sampler that samples Ys and Zs alternatively. Sampling for Zs reduces to Gibbs sampling and the sampling equations are the same as before (Section 8.1.2). For better mixing, we sample Y in blocks. Consider blocks of Ys which denote axiom boundaries at time stamp t; we define three operations to sample axiom blocks at the next time stamp. The operations (shown in Figure 7) are:",
"cite_spans": [],
"ref_spans": [
{
"start": 455,
"end": 464,
"text": "Figure 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "Update axiom: The axiom boundary can be shrunk, expanded, or moved. The new axiom, however, cannot overlap with other axioms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Metrics",
"sec_num": null
},
{
"text": "The axiom can be deleted by labeling all its discourse elements as O.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
{
"text": "Introduce axiom: Given a contiguous sequence of discourse elements labeled O, a new axiom can be introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
{
"text": "Note that these three operations define an ergodic Markov chain. We use the axiom identification part of the model as the proposal: An illustration of the three operations to sample axiom blocks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
{
"text": "Q(\u0232|Y) \u221d exp \uf8eb \uf8ed b\u2208B |S b | k=1 i,j\u2208T\u0232 (b) k\u22121i\u0232 (b) kj \u03b8 \u03b8 \u03b8 T ij f ij (i, j, S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
{
"text": "Hence, the acceptance ratio only depends on the alignment part of the model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
{
"text": "R(\u0232|Y) = min 1, U(\u0232) U(Y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
{
"text": "where U(Y) = f AA . We again have two variants, where we model the ordering constraints (C4 ) as soft or hard constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Delete axiom:",
"sec_num": null
},
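A minimal sketch of the Metropolis-Hastings step over axiom blocks is given below: the proposal applies one of the three block operations, and, because the proposal matches the identification factor, acceptance depends only on the alignment factor U(Y) = f_AA. The toy proposal and scoring functions are illustrative assumptions, not the article's implementation.

```python
import math, random

def mh_step(Y, propose, U_log):
    """
    One Metropolis-Hastings step over axiom blocks Y.
    `propose(Y)` applies one block operation (update / delete / introduce an
    axiom) sampled from the identification part of the model; the acceptance
    ratio then only involves the alignment factor, compared here in log space.
    """
    Y_new = propose(Y)
    accept = min(1.0, math.exp(U_log(Y_new) - U_log(Y)))
    return Y_new if random.random() < accept else Y

# Toy usage: Y is a tag sequence; the proposal toggles one position,
# and U_log prefers sequences that contain an axiom (a "B" tag).
def toy_propose(Y):
    Y = list(Y)
    i = random.randrange(len(Y))
    Y[i] = "B" if Y[i] == "O" else "O"
    return Y

toy_U_log = lambda Y: 1.0 if "B" in Y else 0.0
print(mh_step(["O", "O", "O"], toy_propose, toy_U_log))
```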
{
"text": "After harvesting axioms, we build a parser for these axioms that maps raw axioms to horn-clause rules. The axiom harvesting step provides us a multiset of axiom extractions. Let A = {A 1 , A 2 , . . . , A |A| } represent the multiset where each axiom A i is mentioned at least once. Each axiom mention, in turn, comprises a contiguous sequence of discourse elements and optionally an accompanying diagram. Semantic parsers map natural language to formal programs such as database queries (Liang, Jordan, and Klein 2011, inter alia) , commands to robots (Shimizu and Haas 2009, inter alia) , or even general purpose programs (Yin and Neubig 2017) . More specifically, Liu et al. (2016a) learn \"If-Then\" program statements and Quirk, Mooney, and Galley (2015) learn \"If-This-Then-That\" rules. In theory, these works can be used to parse axioms to horn-clause rules. However, semantic parsing is a hard task and would require a large amount of supervision. In our setting, we can only afford a modest amount of supervision. We mitigate this issue by using the redundant axiom mention extractions from multiple sources (textbooks) and combining the parses obtained from various textbooks to achieve a better final parse for each axiom.",
"cite_spans": [
{
"start": 488,
"end": 531,
"text": "(Liang, Jordan, and Klein 2011, inter alia)",
"ref_id": null
},
{
"start": 553,
"end": 588,
"text": "(Shimizu and Haas 2009, inter alia)",
"ref_id": null
},
{
"start": 624,
"end": 645,
"text": "(Yin and Neubig 2017)",
"ref_id": "BIBREF120"
},
{
"start": 667,
"end": 685,
"text": "Liu et al. (2016a)",
"ref_id": "BIBREF72"
},
{
"start": 725,
"end": 757,
"text": "Quirk, Mooney, and Galley (2015)",
"ref_id": "BIBREF93"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Parsing",
"sec_num": "8.2"
},
{
"text": "First, we describe a base parser that parses axiom mentions to horn-clause rules. Then, we utilize the redundancy of axiom extractions from various sources (textbooks) to improve our parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Parsing",
"sec_num": "8.2"
},
{
"text": "8.2.1 Base Axiomatic Parser. Our base parser identifies the premise and conclusion portions of each axiom and then uses GEOS's text parser to parse the two portions into a logical formula. Then, the two logical formulas are put together to form horn-clause rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Parsing",
"sec_num": "8.2"
},
{
"text": "Axiom mentions (for example, the Pythagorean theorem mention in Figure 1 ) are often accompanied by equations or diagrams. When the mention has an equation, we simply treat the equation as the conclusion and the rest of the mention as the premise. When the axiom has an associated diagram, we always include the diagram in the premise. We learn a model to predict the split of the axiom text into two parts, forming the premise and the conclusion spans. Then, the GEOS parser maps the premise and conclusion spans to premise and conclusion logical formulas, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Axiom Parsing",
"sec_num": "8.2"
},
{
"text": "Let Z s represent the split that demarcates the premise and conclusion spans. We score the axiom split as a log-linear model: p(Z s |a; w) \u221d exp w T h(a, Z s ) . Here, h are feature functions described later. We found that in most cases (>95%), the premise and conclusion are contiguous spans in the axiom mention where the left span corresponds to the premise and the right span corresponds to the conclusion. Hence, we search over the space of contiguous spans to infer Z s . Joint search over the latent variables Z s , Z p , and Z c is exponential. Hence, we use a greedy procedure, beam search, with a fixed beam size (10) for inference. That is, in each step, we only expand the ten most promising candidates so far given by the current score. We first infer Z s to decide the split of the axiom and then infer Z p and Z c to obtain the parse of the premise and the conclusion, using the two-part approach described before. We use L-BGFGS for learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Axiom Parsing",
"sec_num": "8.2"
},
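The split model can be illustrated with a small beam search over contiguous split points, scoring each candidate premise/conclusion pair with w^T h; the feature function h below is a toy stand-in for the features of Table 6, not the article's feature set.

```python
import numpy as np

def split_beam(tokens, h, w, beam_size=10):
    """
    Score every contiguous split of an axiom mention into a premise (left span)
    and a conclusion (right span) with w^T h(premise, conclusion), keeping the
    top `beam_size` candidates, mirroring the log-linear split model above.
    """
    candidates = []
    for s in range(1, len(tokens)):               # split point Z_s
        premise, conclusion = tokens[:s], tokens[s:]
        candidates.append((float(w @ h(premise, conclusion)), s))
    return sorted(candidates, reverse=True)[:beam_size]

# Toy features: span-length ratio and whether the conclusion starts with "then".
h = lambda p, c: np.array([min(len(p), len(c)) / max(len(p), len(c)),
                           1.0 if c and c[0] == "then" else 0.0])
w = np.array([0.5, 2.0])
tokens = "if two sides of a triangle are equal then the base angles are equal".split()
for score, s in split_beam(tokens, h, w)[:3]:
    print(round(score, 3), "|", " ".join(tokens[:s]), "||", " ".join(tokens[s:]))
```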
{
"text": "We list the features h defined over candidate spans forming the text split in Table 6 . The features are similar to those used in previous work on discourse analysis, Table 6 Feature set for our axiom parsing model.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 6",
"ref_id": null
},
{
"start": 167,
"end": 174,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features:",
"sec_num": null
},
{
"text": "Proportion of (a) words, (b) geometry relations, and (c) relation-arguments shared by the two spans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Similarity",
"sec_num": null
},
{
"text": "Number of geometry relations represented in the two spans. We use the Lexicon Map from GEOS to compute the number of expressed geometry relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Relations",
"sec_num": null
},
{
"text": "The distribution of the two text spans is typically dependent on their lengths. We use the ratio of the length of the two spans as an additional feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Lengths",
"sec_num": null
},
{
"text": "Relative position of the two lexical heads and the text split in the discourse element sentence. We use the difference between the lexical head position and the text split position as the feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative Position",
"sec_num": null
},
{
"text": "Discourse markers (connectives, cue-words, or cue-phrases, etc.) have been shown to give good indications on discourse structure (Marcu 2000) . We build a list of discourse markers using the training set, considering the first and last tokens of each span, culled to top 100 by frequency. We use these 100 discourse markers as features. We repeat the same procedure by using part-of-speech (POS) instead of words and use them as features.",
"cite_spans": [
{
"start": 129,
"end": 141,
"text": "(Marcu 2000)",
"ref_id": "BIBREF80"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse (Typography) Discourse Markers",
"sec_num": null
},
{
"text": "Punctuation at the segment border is another excellent cue for the segmentation. We include indicator features to show whether there is punctuation at the segment border.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation",
"sec_num": null
},
{
"text": "Indicator that the two text spans are part of the same (a) sentence, (b) paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Organization",
"sec_num": null
},
{
"text": "We use an off-the-shelf RST parser (Feng and Hirst 2014) and include an indicator feature that shows that the segmentation matches the parse segmentation. We also include the RST label as a feature.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "(Feng and Hirst 2014)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RST Parse",
"sec_num": null
},
{
"text": "Soricut and Marcu (2003) (section 3.1) presented a statistical model for deciding elementary discourse unit boundaries. We use the probability given by this model retrained on our training set as a feature. This feature uses both lexical and syntactic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soricut and Marcu Segmenter",
"sec_num": null
},
{
"text": "Head/ Common Ancestor/ Attachment Node Head node is defined as the word with the highest occurrence as a lexical head in the lexicalized tree among all the words in the text span. The attachment node is the parent of the head node. We use features for the head words of the left and right spans, the common ancestor (if any), the attachment node, and the conjunction of the two head node words. We repeat these features with part-of-speech (POS) instead of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soricut and Marcu Segmenter",
"sec_num": null
},
{
"text": "Distance to (a) root, and (b) common ancestor for the nodes spanning the respective spans. We use these distances and the difference in the distances as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax",
"sec_num": null
},
{
"text": "Dominance (Soricut and Marcu 2003) is a key idea in discourse that looks at syntax trees and studies sub-trees for each span to infer a logical nesting order between the two. We use the dominance relationship as a feature. See Soricut and Marcu (2003) for details.",
"cite_spans": [
{
"start": 227,
"end": 251,
"text": "Soricut and Marcu (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dominance",
"sec_num": null
},
{
"text": "Indicator that the two spans are in the same node in the JSON hierarchy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "Conjoined with the indicator feature that shows that the two spans are part of the same paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "in particular on the automatic detection of elementary discourse units (EDUs) in rhetorical structure theory (Mann and Thompson 1988) and discourse parsing (Marcu 2000; Soricut and Marcu 2003) . These include ideas such as the use of a list of discourse markers, punctuation, and natural text and JSON organization as an indicator of discourse boundaries. We also use an off-the-shelf discourse parser and an EDU segmenter from Soricut and Marcu (2003) . Then we also used syntax-based cues, such as span lengths, head node attachment, distance to common ancestor/root, relative position of the two lexical heads and the text split; and dominance, which have been found to be useful in discourse parsing (Marcu 2000; Soricut and Marcu 2003) . Finally, we also used some semantic features, such as the similarity of the two spans (in terms of common words, geometry relations and relation-arguments), and number of geometry relations in the respective span parses. We will discuss the impact of the various features later in Section 11. Given a beam of premise and conclusion splits, we use the GEOS parser to obtain premise and conclusion logical formulas for each split in the beam and obtain a beam of axiom parses for each axiom in each textbook.",
"cite_spans": [
{
"start": 109,
"end": 133,
"text": "(Mann and Thompson 1988)",
"ref_id": "BIBREF78"
},
{
"start": 156,
"end": 168,
"text": "(Marcu 2000;",
"ref_id": "BIBREF80"
},
{
"start": 169,
"end": 192,
"text": "Soricut and Marcu 2003)",
"ref_id": null
},
{
"start": 440,
"end": 452,
"text": "Marcu (2003)",
"ref_id": null
},
{
"start": 704,
"end": 716,
"text": "(Marcu 2000;",
"ref_id": "BIBREF80"
},
{
"start": 717,
"end": 740,
"text": "Soricut and Marcu 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "JSON structure",
"sec_num": null
},
{
"text": "Parser. Now, we describe a multisource parser that utilizes the redundancy of axiom extractions from various sources (textbooks). Given a beam of 10-best parses for each axiom from each source, we use a number of heuristics to determine the best parse for the axiom:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multisource Axiomatic",
"sec_num": "8.2.2"
},
{
"text": "1. Majority Voting: For each axiom, pick the parse that occurs most frequently across beams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multisource Axiomatic",
"sec_num": "8.2.2"
},
{
"text": "Average Score: Pick the parse that has the highest average parse score (only counting top 5 parses for each source) for each axiom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Learn a set of weights {\u00b5 1 , \u00b5 2 , . . . , \u00b5 S }, one for each source, and then pick the parse that has the highest average weighted parse score for each axiom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learn Source Confidence:",
"sec_num": "3."
},
{
"text": "Predicate Score: Instead of selecting from one of the top parses across various sources, treat each axiom parse as a bag of premise predicates and a bag of conclusion predicates. Then, pick a subset of premise and conclusion predicates for the final parse, using average scoring with thresholding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
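Two of the multisource selection heuristics (majority voting and average score over the top-k parses per source) might be sketched as follows; the beam layout and the schematic parse strings are illustrative assumptions, not the article's data format.

```python
from collections import Counter, defaultdict

def majority_vote(beams):
    """beams: {source: [(parse_string, score), ...]} -> most frequent parse."""
    counts = Counter(p for beam in beams.values() for p, _ in beam)
    return counts.most_common(1)[0][0]

def best_average_score(beams, top_k=5):
    """Pick the parse with the highest average score over the top-k parses of
    each source (a parse absent from a source simply contributes nothing)."""
    totals, freq = defaultdict(float), defaultdict(int)
    for beam in beams.values():
        for p, score in sorted(beam, key=lambda x: -x[1])[:top_k]:
            totals[p] += score
            freq[p] += 1
    return max(totals, key=lambda p: totals[p] / freq[p])

# Toy beams from three textbooks for one axiom (parse strings are schematic).
beams = {
    "ncert":    [("isRight(T) ^ hyp(c) -> a^2+b^2=c^2", 0.9),
                 ("hyp(c) -> a^2+b^2=c^2", 0.6)],
    "sharma":   [("isRight(T) ^ hyp(c) -> a^2+b^2=c^2", 0.8)],
    "aggarwal": [("hyp(c) -> a^2+b^2=c^2", 0.7),
                 ("isRight(T) ^ hyp(c) -> a^2+b^2=c^2", 0.65)],
}
print(majority_vote(beams))
print(best_average_score(beams))
```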
{
"text": "We use a collection of grade 6-10 Indian high school math textbooks by four publishers/authors (NCERT, R S Aggarwal, R D Sharma, and M L Aggarwal)-a total of 5 \u00d7 4 = 20 textbooks to validate our model. Millions of students in India study geometry from these books every year and these books are readily available online. We manually marked chapters relevant for geometry in these books and then parsed them using Adobe Acrobat's pdf2xml parser and AllenAI's Science Parse project. 4 Then, we annotated geometry axioms, alignments, and parses for grade 6, 7, and 8 textbooks by the four publishers/authors. We use grade 6, 7, and 8 textbook annotations for development, training, and testing, respectively. Grade 9 and 10 data are used as unlabeled data. Thus our method is semi-supervised. During training our axiom identification, alignment, and joint axiom identification and alignment models, the latent variables Z are fixed for the training set and are not sampled. For the remaining data, these variables are sampled using our Gibbs sampler. All the hyper-parameters in all the models are tuned on the development set using grid search. Then, these hyperparameter values are fixed and the entire training + development set is used for training (along with the unlabeled data) and all the models are evaluated on the test set. GEOS used 13 types of entities and 94 functions and predicates. We add some more entities, functions, and predicates to cover other more complex concepts in geometry not covered in GEOS. Thus, we obtain a final set of 19 entity types and 115 functions and predicates for our parsing model. We use Stanford CoreNLP (Manning et al. 2014) for feature generation. We use two data sets for evaluating our system: (a) practice and official SAT style geometry questions used in GEOS, and (b) an additional data set of geometry questions collected from the aforementioned textbooks. This data set consists of a total of 1,406 SAT style questions across grades 6-10, and is approximately 7.5 times the size of the data set used in GEOS. We split the data set into training (350 questions), development (150 questions), and test (906 questions), with equal proportion of grade 6-10 questions. We annotated the 500 training and development questions with groundtruth logical forms. We use the training set to train another version of GEOS with the expanded set of entity types, functions, and predicates. We call this system GEOS++, which will be used as a baseline for our method.",
"cite_spans": [
{
"start": 481,
"end": 482,
"text": "4",
"ref_id": null
},
{
"start": 1646,
"end": 1667,
"text": "(Manning et al. 2014)",
"ref_id": "BIBREF79"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets and Baselines:",
"sec_num": null
},
{
"text": "We first evaluate the axiom identification, alignment, and parsing models individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results:",
"sec_num": null
},
{
"text": "For axiom identification, we compare the results of automatic identification with gold axiom identifications and compute the precision, recall, and F-measure on the test set. We use strict as well as relaxed comparison. In strict comparison mode the automatically identified mentions and gold mentions must match exactly to get credit, whereas in the relaxed comparison mode only a majority (>50%) of sentences in the automatically identified mentions and gold mentions must match to get credit. Table 7 shows the results of axiom identification, where we clearly see improvements in performance when we jointly model axiom identification and alignment. This is due to the fact that both components reinforce each other. We also observe that modeling the ordering constraints as soft constraints leads to better performance than modeling them as hard constraints. This is because the ordering of presentation of axioms is generally (yet not always) consistent across textbooks.",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 503,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results:",
"sec_num": null
},
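The strict and relaxed span-matching metrics can be sketched as below, where a mention is represented by the set of discourse-element indices it covers; the relaxed criterion here requires a majority overlap in both directions, which is one plausible reading of the description above.

```python
def prf(pred, gold, relaxed=False):
    """
    Precision / recall / F1 for axiom mention spans, each span being a set of
    sentence (discourse-element) indices. Strict mode requires exact equality;
    relaxed mode requires a majority (>50%) overlap with some gold mention.
    """
    def match(p, others):
        if relaxed:
            return any(len(p & g) > 0.5 * len(p) and len(p & g) > 0.5 * len(g)
                       for g in others)
        return any(p == g for g in others)

    precision = sum(match(p, gold) for p in pred) / max(1, len(pred))
    recall = sum(match(g, pred) for g in gold) / max(1, len(gold))
    f1 = 2 * precision * recall / max(1e-9, precision + recall)
    return precision, recall, f1

pred = [frozenset({3, 4}), frozenset({10})]
gold = [frozenset({3, 4, 5}), frozenset({10})]
print(prf(pred, gold, relaxed=False))   # strict: only the exact match counts
print(prf(pred, gold, relaxed=True))    # relaxed: {3,4} majority-overlaps {3,4,5}
```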
{
"text": "To evaluate axiom alignment, we first view it as a series of decisions, one for each pair of axiom mentions, and compute precision, recall, and F-score by comparing automatic decisions with gold decisions. Then, we also use a standard clustering metric, Normalized Mutual Information (NMI) (Strehl and Ghosh 2002) to measure the quality Table 7 Test set Precision, Recall, and F-measure scores for axiom identification when performed alone and when performed jointly with axiom alignment. We show results for both strict as well as relaxed comparison modes. For the joint model, we show results when we model ordering constraints as hard or soft constraints. Table 8 Test set Precision, Recall, F-measure, and NMI scores for axiom alignment when performed alone and when performed jointly with axiom identification. For the joint model, we show results when we model ordering constraints as hard or soft constraints. of axiom mention clustering. Table 8 shows the results on the test set when gold axiom identifications are used. We observe improvements in axiom alignment performance too when we jointly model axiom identification and alignment jointly both in terms of F-score as well as NMI. Modeling ordering constraints as soft constraints again leads to better performance than modeling them as hard constraints in terms of both metrics.",
"cite_spans": [
{
"start": 290,
"end": 313,
"text": "(Strehl and Ghosh 2002)",
"ref_id": "BIBREF103"
}
],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 7",
"ref_id": null
},
{
"start": 659,
"end": 666,
"text": "Table 8",
"ref_id": null
},
{
"start": 946,
"end": 953,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results:",
"sec_num": null
},
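For the clustering view of axiom alignment, NMI can be computed with an off-the-shelf implementation, e.g., scikit-learn's normalized_mutual_info_score; the cluster labelings below are toy values, not results from the article.

```python
from sklearn.metrics import normalized_mutual_info_score

# Each axiom mention gets a cluster id: gold = the global axiom it truly refers
# to, pred = the global axiom chosen by the alignment model.
gold_clusters = [0, 0, 1, 1, 2, 2]
pred_clusters = [0, 0, 1, 2, 2, 2]
print(normalized_mutual_info_score(gold_clusters, pred_clusters))
```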
{
"text": "To evaluate axiom parsing, we compute precision, recall, and F-score in (a) deriving literals in axiom parses, as well as for (b) the final axiom parses on our test set. Table 9 shows the results of axiom parsing for GEOS (trained on the training set) as well as various versions of our best performing system (GEOS++ with our axiomatic solver) with various heuristics for multisource parsing. The results show that our system (single source) performs better than GEOS, as it is trained with the expanded set of entity types, functions, and predicates. The results also show that the choice of heuristic is important for the multisource parser-though all the heuristics lead to improvements over the single source parser. The average score heuristic that chooses the parse with the highest average score across sources performs better than majority voting, which chooses the best parse based on a voting heuristic. Learning the confidence of every source and using a weighted average is an even better heuristic. Finally, predicate scoring, which chooses the parse by scoring predicates on the premise and conclusion sides, performs the best leading to 87.5 F1 score (when computed over parse literals) and 73.2 F1 score (when computed on the full parse). The high F1 score for axiom parsing on the test set shows that our approach works well and we can accurately harvest axiomatic knowledge from textbooks.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Strict",
"sec_num": null
},
{
"text": "Test set Precision, Recall, and F-measure scores for axiom parsing. These scores are computed over literals derived in axiom parses or full axiom parses. We show results for the old GEOS system; for the improved GEOS++ system with expanded entity types, functions, and predicates; and for the multisource parsers presented in this paper. Table Scores for solving geometry questions on the SAT practice and official data sets and a data set of questions from the 20 textbooks. We use SAT's grading scheme that rewards a correct answer with a score of 1.0 and penalizes a wrong answer with a negative score of 0.25. Oracle uses gold axioms but automatic text and diagram interpretation in our logical solver. All differences between GEOS and our system are significant (p < 0.05 using the two-tailed paired t-test). Finally, we use the extracted horn-clause rules in our axiomatic solver for solving geometry problems. For this, we over-generate a set of horn-clause rules by generating three horn-clause parses for each axiom and use them as the underlying theory in prolog programs such as the one shown in Figure 4 . We use weighted logical expressions for the question description and the diagram derived from GEOS++ as declarations, and the (normalized) score of the parsing model multiplied by the score of the joint axiom identification and alignment model as weights for the rules. Table 10 shows the results for our best end-to-end system and compares it to GEOS on the practice and official SAT data set from Seo et al. (2015) as well as questions from the 20 textbooks. On all the three data sets, our system outperforms GEOS. Especially on the data set from the 20 textbooks (which is indeed a harder data set and includes more problems that require complex reasoning based on geometry), GEOS does not perform very well, whereas our system still achieves a good score. Oracle shows the performance of our system when gold axioms (written down by an expert) are used along with automatic text and diagram interpretations in GEOS++. This shows that there is scope for further improvement in our approach.",
"cite_spans": [
{
"start": 1518,
"end": 1535,
"text": "Seo et al. (2015)",
"ref_id": "BIBREF98"
}
],
"ref_spans": [
{
"start": 338,
"end": 351,
"text": "Table Scores",
"ref_id": null
},
{
"start": 1108,
"end": 1116,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1389,
"end": 1397,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 9",
"sec_num": null
},
{
"text": "Students around the world solve geometry problems through rigorous deduction, whereas the numerical solver in GEOS does not provide such explainability. One of the key benefits of our axiomatic solver is that it provides an easy-to-understand studentfriendly deductive solution to geometry problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability",
"sec_num": "10."
},
{
"text": "To test the explainability of our axiomatic solver, we asked 50 grade 6-10 students (10 students in each grade) to use GEOS and our system (GEOS++ with our axiomatic solver) as a Web-based assistive tool while learning geometry. The tool uses the probabilistic prolog solver (Fierens et al. 2015) to derive the most probable explanation (MPE) for a solution. Then, it lists, one by one, the various axioms used and the conclusion drawn from the axiom application, as shown in Figure 8 . The students were each asked to rate how 'explainable' and 'useful' the two systems were on a scale of 1-5. Table 11 shows the mean rating by students in each grade on the two facets. We can observe that students of each grade found our system to be more interpretable as well as more useful to them than GEOS. This study lends support to our claims about the need for an interpretable deductive solver for geometry problems. ",
"cite_spans": [
{
"start": 275,
"end": 296,
"text": "(Fierens et al. 2015)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 476,
"end": 484,
"text": "Figure 8",
"ref_id": null
},
{
"start": 595,
"end": 603,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explainability",
"sec_num": "10."
},
{
"text": "An example demonstration on how to solve the problem in Figure 1: (1) Use the theorem that the sum of interior angles of a triangle is 180 \u2022 and additionally the fact that \u2220AMO is 90 \u2022 to conclude that \u2220MOA is 60 \u2022 . (2) Conclude that MOA \u223c MOB (using a similar triangle theorem) and then conclude that \u2220MOB = \u2220MOA = 60 \u2022 (using the theorem that corresponding angles of similar triangles are equal). (3) Use angle sum rule to conclude that \u2220AOB = \u2220MOB + \u2220MOA = 120 \u2022 . (4) Use the theorem that the angle subtended by an arc of a circle at the center is double the angle subtended by it at any point on the circle to conclude that \u2220ADB = 0.5 \u00d7 \u2220AOB = 60 \u2022 .",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 65,
"text": "Figure 1:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 8",
"sec_num": null
},
{
"text": "In this section, we will measure the value of the various features in our axiom harvesting and parsing pipeline. Note that we have described three sets of features f, g, and h-corresponding to the various steps in our pipeline: axiom identification, axiom alignment, and axiom parsing in Tables 4, 5, and 6. We will ablate each of the three features one by one via backward selection (i.e., we will remove features and observe how that affects performance).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Ablation",
"sec_num": "11."
},
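Backward feature ablation can be sketched as a simple loop that drops one feature group at a time and records the resulting drop in score; train_and_eval is a hypothetical stand-in for retraining and evaluating the full pipeline.

```python
def backward_ablation(feature_groups, train_and_eval):
    """
    Backward selection as described above: drop one feature group at a time,
    retrain/evaluate, and report the drop relative to the full feature set.
    `train_and_eval(features)` should return an evaluation score (e.g., F1).
    """
    full = train_and_eval(set(feature_groups))
    report = {}
    for f in feature_groups:
        score = train_and_eval(set(feature_groups) - {f})
        report[f] = full - score          # larger drop => more valuable feature
    return full, report

# Toy usage with a fake evaluator that values 'bounding_box' most.
value = {"sentence_overlap": 0.03, "keywords": 0.02, "bounding_box": 0.06}
evaluator = lambda feats: 0.80 + sum(value[f] for f in feats)
print(backward_ablation(list(value), evaluator))
```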
{
"text": "User study ratings for GEOS and our system (O.S.) by students in grades 6-10. Ten students in each grade were asked to rate the two systems on a scale of 1-5 on two facets: 'explainability' and 'usefulness'. Each cell shows the mean rating computed over ten students in that grade for that facet. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 11",
"sec_num": null
},
{
"text": "Explainability (GEOS / O.S.) and usefulness (GEOS / O.S.) ratings by grade. Grade 6: 2.7 / 2.9 and 2.9 / 3.2. Grade 7: 3.0 / 3.7 and 3.3 / 3.6. Grade 8: 2.7 / 3.5 and 3.1 / 3.5. Grade 9: 2.4 / 3.3 and 3.0 / 3.7. Grade 10: 2.8 / 3.1 and 3.2 / 3.8. Overall: 2.7 / 3.3 and 3.1 / 3.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 11",
"sec_num": null
},
{
"text": "Ablation study results for the axiom identification component. We remove features of the axiom identification component one by one as listed in Table 4 and observe the fall in performance in terms of the axiom identification performance as well as the overall performance to gauge the value of the various features. Table 12 shows the fall in performance in terms of the axiom identification performance, as well as the overall performance as we ablate various axiom identification features listed in Table 4 . We can observe that removal of any of the features results in a loss of performance. Thus, all the content as well as typographical features are important for performance. We observe that the content features such as sentence overlap, geometry entity sharing, and keyword usage are clearly important. At the same time, the various discourse features such as the RST relation, axiom, theorem, corollary annotation, use of equations and diagrams, bold/underline, bounding box, and XML structure are all important. Most of these features depend on typographical information that is vital in performance of the axiom identification component as well as the overall model. In particular, we can observe that the axiom, theorem, corollary annotation, and bounding box features contribute most to the performance of the model as they are direct indicators of the presence of an axiom mention. Table 13 shows the fall in performance in terms of the axiom alignment performance as well as the overall performance as we ablate various axiom alignment features listed in Table 5 . We again observe that removal of any of the features results in a loss of performance. Thus, the various content as well as typographical features are important for performance. We observe that the content features such as unigram, bigram and entity overlap, length of the longest common subsequence, number of sentences and various aligner, MT, and summarization scores are clearly important. At the same time, the various discourse features such as the XML structure, equation template, and image Table 13 Ablation study results for the axiom alignment component. We remove features of the axiom alignment component one by one as listed in Table 5 and observe the fall in performance in terms of the axiom alignment performance, as well as the overall performance to gauge the value of the various features. caption match are all important. Note that these features depend on typographical information that is again vital in performance. In particular, we can observe that the overlap and the XML structure features contribute most to the performance of the model. Table 14 shows the fall in performance in terms of the axiom parsing performance as well as the overall performance as we ablate various axiom parsing features listed in Table 6 . We again observe that removal of any of the features results in a loss of performance. The axiom parsing component uses a few content-based features, such as span similarity and number of relations, span lengths, and relative position; and various discourse features, such as discourse markers, punctuations, text organization, RST parse, an existing discourse segmentor from Soricut and Marcu (Soricut and Marcu 2003) , node attachment, syntax, dominance, and XML structure; and all are clearly important. In particular, we can observe that span similarity and punctuation features contribute most to the performance of the model.",
"cite_spans": [
{
"start": 3216,
"end": 3246,
"text": "Marcu (Soricut and Marcu 2003)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 4",
"ref_id": null
},
{
"start": 316,
"end": 324,
"text": "Table 12",
"ref_id": null
},
{
"start": 501,
"end": 508,
"text": "Table 4",
"ref_id": null
},
{
"start": 1397,
"end": 1405,
"text": "Table 13",
"ref_id": null
},
{
"start": 1571,
"end": 1578,
"text": "Table 5",
"ref_id": null
},
{
"start": 2080,
"end": 2088,
"text": "Table 13",
"ref_id": null
},
{
"start": 2223,
"end": 2230,
"text": "Table 5",
"ref_id": null
},
{
"start": 2648,
"end": 2656,
"text": "Table 14",
"ref_id": null
},
{
"start": 2818,
"end": 2825,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 12",
"sec_num": null
},
{
"text": "We qualitatively analyze the structured axioms harvested by our method. We show the few most probable horn-clause rules for some popular named theorems in geometry in Figure 9 , along with the confidence of our method on the rules being correct. Note that some horn-clause parsed rules can be incorrect. For example, the second most probable horn-clause rule for the Pythagorean theorem is partially incorrect (does not state which angle is 90 \u2022 ). Similarly, the second and third most probable horn-clause for the circle secant tangent theorem are also incorrect. Our problog solver can use these redundant but weighted horn-clause rules for solving geometry problems.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Axioms Harvested",
"sec_num": "12."
},
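The harvested output can be thought of as a set of weighted horn-clause candidates per axiom, which a probabilistic-logic backend (the article uses a ProbLog-style solver) then consumes; the predicates and confidences below are illustrative placeholders, not the article's actual extractions.

```python
# Each harvested axiom keeps several candidate horn-clause parses together with
# the model's confidence; all of them can be handed to the solver as weighted rules.
harvested_rules = {
    "pythagorean_theorem": [
        (0.92, "isTriangle(ABC) ^ isRightAngle(B) -> AB^2 + BC^2 = AC^2"),
        (0.41, "isTriangle(ABC) -> AB^2 + BC^2 = AC^2"),   # partially incorrect parse
    ],
    "angle_sum_rule": [
        (0.88, "isTriangle(ABC) -> angle(A) + angle(B) + angle(C) = 180"),
    ],
}

for axiom, parses in harvested_rules.items():
    best_conf, best_rule = max(parses)          # highest-confidence candidate
    print(f"{axiom}: ({best_conf}) {best_rule}")
```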
{
"text": "Ablation study results for the axiom parsing component. We remove features of the axiom parsing component one by one as listed in Table 6 and observe the fall in performance in terms of the axiom parsing performance as well as the overall performance to gauge the value of the various features. ",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 14",
"sec_num": null
},
{
"text": "Horn-clause rules for some popular named theorems in geometry harvested by our approach. We also show the confidence our method has on the rule being correct (which is used in reasoning via the problog solver).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "Next, we qualitatively describe some example solutions of geometry problems as well as perform a qualitative error analysis. We first show some sample questions that our solver can answer correctly in Table 15 . We also show the explanations generated by our Table 15 Some correctly answered questions along with explanations generated by our deductive solver for these problems.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Table 15",
"ref_id": null
},
{
"start": 259,
"end": 267,
"text": "Table 15",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example Solutions and Error Analysis",
"sec_num": "13."
},
{
"text": "Some example failure cases of our approach for solving SAT style geometry problems. In (i) the axiom set contains an axiom that the internal angle of a regular hexagon is 120 \u2022 and that each side of a regular polygon is equal. But there is no way to deduce that the angle CBO is half of the internal angle ABC (by symmetry). On the other hand, the coordinate geometry solver can exploit these three facts as maximizing the satisfiability of the various constraints to answer the question. (ii) The solver does not contain any knowledge about construction. The question cannot be correctly interpreted and the coordinate geometry solver also gets it wrong. (iii) The solver does not contain any knowledge about construction or prisms. The question cannot be correctly interpreted and the coordinate geometry solver also gets it wrong. (iv) The question as well as the answer candidates cannot be correctly interpreted (as the concept of perpendicular to plane is not in the vocabulary). Both solvers get it wrong. (v) The parser cannot interpret that angle AC is indeed angle AEC. This needs to be understood by context as it defies the standard type definition of an angle. Both solvers get it wrong. (vi) Both diagram and text parsers fail here. Both solvers answer incorrectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 16",
"sec_num": null
},
{
"text": "Note that these problems are diverse in terms of question types, as well as the reasoning required to answer them, and our solver can handle them. We also show some failure cases of our approach in Table 16 . There are a number of reasons that could lead to a failure of our approach to correctly answer a question. These include an error in parsing the diagram, the text, or an incorrect or incomplete knowledge in the form of geometry rules. As can be observed in the failure examples, and also evaluated by us in a small error analysis of 100 textbook questions, our approach answered 52 questions correctly. Among the 48 incorrectly answered questions, our diagram parse was incorrect for 12 questions, and the text parse was incorrect for 15 questions. Our formal language was insufficiently defined to handle 6 questions (i.e., the semantics of the question could not be adequately captured by the formal language). Twenty-one questions were incorrectly answered due to missing knowledge of geometry in the form of rules. Note that several questions were incorrectly answered due to a failure of multiple system components (for example, failure of both the text and the diagram parser).",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 206,
"text": "Table 16",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 16",
"sec_num": null
},
{
"text": "We presented an approach to harvest structured axiomatic knowledge from math textbooks. Our approach uses rich features based on context and typography, the redundancy of axiomatic knowledge, and shared ordering constraints across multiple textbooks to accurately extract and parse axiomatic knowledge to horn-clause rules. We used the parsed axiomatic knowledge to improve the best previously published automatic approach to solve geometry problems. A user-study conducted on a number of school students studying geometry found our approach to be more interpretable and useful than its predecessor. While this article focused on harvesting geometry axioms from textbooks as a case study, we would like to extend it to obtain valuable structured knowledge from textbooks in areas such as science, engineering, and finance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "14."
},
{
"text": "Please see related work (Section 2) for a complete list of references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Given a textbook in JSON format, we can construct this ordered set by preorder traversal of the JSON tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/allenai/science-parse",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synthesis of geometry proof problems",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alvin",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Gulwani",
"suffix": ""
},
{
"first": "Rupak",
"middle": [],
"last": "Majumdar",
"suffix": ""
},
{
"first": "Supratik",
"middle": [],
"last": "Mukhopadhyay",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "245--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvin, Chris, Sumit Gulwani, Rupak Majumdar, and Supratik Mukhopadhyay. 2014. Synthesis of geometry proof problems. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 245-252, Quebec.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The generation of multimedia presentations. Handbook of Natural Language Processing",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Andr\u00e9",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "305--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9, Elisabeth. 2000. The generation of multimedia presentations. Handbook of Natural Language Processing. pages 305-327, Marcel Dekker Inc.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WIP: The automatic synthesis of multimodal presentations",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Finkler",
"suffix": ""
},
{
"first": "Winfried",
"middle": [],
"last": "Graf",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Rist",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Schauder",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9, Elisabeth, Wolfgang Finkler, Winfried Graf, Thomas Rist, Anne Schauder, and Wolfgang Wahlster. 1991. WIP: The automatic synthesis of multimodal presentations. Technical report, University of Saarland.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multimedia presentation planning as an extension of text planning",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Arens",
"suffix": ""
}
],
"year": 1992,
"venue": "Aspects of Automated Natural Language Generation",
"volume": "",
"issue": "",
"pages": "277--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arens, Yigal. 1992. Multimedia presentation planning as an extension of text planning. In Dale, R., E. Hovy, D. R\u00f6sner, and O. Stock, editors. Aspects of Automated Natural Language Generation, pages 277-280, Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "How to describe what? Towards a theory of modality utilization",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Arens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 12th Annual Conference of the Cognitive Science Society",
"volume": "487",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arens, Yigal and Eduard Hovy. 1990. How to describe what? Towards a theory of modality utilization. In Proceedings of the 12th Annual Conference of the Cognitive Science Society, volume 487, Cambridge, MA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Structure and rules in automated multimedia presentation planning",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Arens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Susanne",
"middle": [],
"last": "Van Mulken",
"suffix": ""
}
],
"year": 1993,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "1253--1259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arens, Yigal, Eduard Hovy, and Susanne Van Mulken. 1993. Structure and rules in automated multimedia presentation planning. In IJCAI, pages 1253-1259, Chambery.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the knowledge underlying multimedia presentations",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Arens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mira",
"middle": [],
"last": "Vossers",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arens, Yigal, Eduard H. Hovy, and Mira Vossers. 1992. On the knowledge underlying multimedia presentations. Technical report, University of Southern California Marina Del Rey Information Sciences Inst.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic construction of user-interface displays",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Arens",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"C"
],
"last": "Shapiro",
"suffix": ""
},
{
"first": "Norman",
"middle": [
"K"
],
"last": "Sondheimer",
"suffix": ""
}
],
"year": 1988,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "808--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arens, Yigal, Lawrence Miller, Stuart C. Shapiro, and Norman K. Sondheimer. 1988. Automatic construction of user-interface displays. In AAAI, pages 808-813, St. Paul.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Out of the box information extraction: A case study using bio-medical texts",
"authors": [
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [
"Etzioni"
],
"last": "Mausam",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bart",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balasubramanian, Niranjan, Stephen Soderland, Oren Etzioni Mausam, and Robert Bart. 2002. Out of the box information extraction: A case study using bio-medical texts. Technical report, University of Washington.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2670--2676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Banko, Michele, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2670-2676, Hyderabad.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards constructive text, diagram, and layout generation for information presentation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Bateman",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Kleinz",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Reichenberger",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "3",
"pages": "409--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bateman, John, J\u00f6rg Kleinz, Thomas Kamps, and Klaus Reichenberger. 2001a. Towards constructive text, diagram, and layout generation for information presentation. Computational Linguistics, 27(3):409-449.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards constructive text, diagram, and layout generation for information presentation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Bateman",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Kleinz",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Reichenberger",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "3",
"pages": "409--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bateman, John, J\u00f6rg Kleinz, Thomas Kamps, and Klaus Reichenberger. 2001b. Towards constructive text, diagram, and layout generation for information presentation. Computational Linguistics, 27(3):409-449.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semantic parsing on freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berant, Jonathan, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, pages 1533-1544, Seattle, WA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Discourse segmentation in aid of document summarization",
"authors": [
{
"first": "Branimir",
"middle": [
"K"
],
"last": "Boguraev",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"S"
],
"last": "Neff",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 33rd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boguraev, Branimir K. and Mary S. Neff. 2000. Discourse segmentation in aid of document summarization. In System Sciences, 2000. Proceedings of the 33rd",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Annual International Conference",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual International Conference, pages 1-10, Washington, DC.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Knowitnow: Fast, scalable information extraction from the web",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "563--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cafarella, Michael J., Doug Downey, Stephen Soderland, and Oren Etzioni. 2005. Knowitnow: Fast, scalable information extraction from the web. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 563-570, Vancouver.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bootstrapping information extraction from field books",
"authors": [
{
"first": "Sander",
"middle": [],
"last": "Canisius",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "827--836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canisius, Sander and Caroline Sporleder. 2007. Bootstrapping information extraction from field books. In EMNLP-CoNLL, pages 827-836, Prague.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Discourse structure for context question answering",
"authors": [
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Pragmatics of Question Answering at HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "23--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chai, Joyce Y. and Rong Jin. 2004. Discourse structure for context question answering. In Proceedings of the Workshop on Pragmatics of Question Answering at HLT-NAACL 2004, pages 23-30, Boston, MA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Maxsim: A maximum similarity metric for machine translation evaluation",
"authors": [
{
"first": "Yee Seng",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chan, Yee Seng and Hwee Tou Ng. 2008. Maxsim: A maximum similarity metric for machine translation evaluation. In 2008",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Annual Conference of the Association for Computational Linguistics (ACL)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the Association for Computational Linguistics (ACL), pages 55-62, Columbus, OH.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic information extraction from semi-structured web pages by pattern discovery",
"authors": [
{
"first": "Chia Hui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chun-Nan",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Shao-Cheng",
"middle": [],
"last": "Lui",
"suffix": ""
}
],
"year": 2003,
"venue": "Decision Support Systems",
"volume": "35",
"issue": "1",
"pages": "129--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, Chia Hui, Chun-Nan Hsu, and Shao-Cheng Lui. 2003. Automatic information extraction from semi-structured web pages by pattern discovery. Decision Support Systems, 35(1):129-147.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A survey of web information extraction systems",
"authors": [
{
"first": "Chia Hui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Kayed",
"suffix": ""
},
{
"first": "Moheb",
"middle": [
"R"
],
"last": "Girgis",
"suffix": ""
},
{
"first": "Khaled",
"middle": [
"F"
],
"last": "Shaalan",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "18",
"issue": "10",
"pages": "1411--1428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, Chia Hui, Mohammed Kayed, Moheb R. Girgis, and Khaled F. Shaalan. 2006. A survey of web information extraction systems. IEEE Transactions on Knowledge and Data Engineering, 18(10):1411-1428.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to interpret natural language navigation instructions from observations",
"authors": [
{
"first": "David",
"middle": [
"L"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011)",
"volume": "",
"issue": "",
"pages": "859--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, David L. and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011), pages 859-865. San Francisco, CA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Machine Proofs in Geometry: Automated Production of Readable Proofs for Geometry Theorems",
"authors": [
{
"first": "Shang Ching",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Xiao-Shan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jing-Zhong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chou, Shang Ching, Xiao-Shan Gao, and Jing-Zhong Zhang. 1994. Machine Proofs in Geometry: Automated Production of Readable Proofs for Geometry Theorems, volume 6. World Scientific.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Constructing a textual KB from a biology textbook",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction",
"volume": "",
"issue": "",
"pages": "74--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, Peter, Phil Harrison, Niranjan Balasubramanian, and Oren Etzioni. 2012. Constructing a textual KB from a biology textbook. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 74-78, Montreal.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Analyzing the structure of argumentative discourse",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1987,
"venue": "Computational Linguistics",
"volume": "13",
"issue": "1-2",
"pages": "11--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, Robin. 1987. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1-2):11-24.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised extraction of semantic relations using discourse cues",
"authors": [
{
"first": "Juliette",
"middle": [],
"last": "Conrath",
"suffix": ""
},
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "2184--2194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conrath, Juliette, Stergos Afantenos, Nicholas Asher, and Philippe Muller. 2014. Unsupervised extraction of semantic relations using discourse cues. In Association for Computational Linguistics (ACL), pages 2184-2194, Dublin.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploring the role of punctuation in the signalling of discourse structure",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of a Workshop on Text Representation and Domain Modelling: Ideas from Linguistics and AI",
"volume": "",
"issue": "",
"pages": "110--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dale, Robert. 1991a. Exploring the role of punctuation in the signalling of discourse structure. In Proceedings of a Workshop on Text Representation and Domain Modelling: Ideas from Linguistics and AI, pages 110-120, Berlin.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The role of punctuation in discourse structure",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1991,
"venue": "Working Notes for the AAAI Fall Symposium on Discourse Structure in Natural Language Understanding and Generation",
"volume": "",
"issue": "",
"pages": "13--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dale, Robert. 1991b. The role of punctuation in discourse structure. In Working Notes for the AAAI Fall Symposium on Discourse Structure in Natural Language Understanding and Generation, pages 13-14, Asilomar, CA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "IKE-an interactive tool for knowledge extraction",
"authors": [
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Bhakthavatsalam",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Groeneveld",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 5th Workshop on Automated Knowledge Base Construction, AKBC@NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "12--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dalvi, Bhavana, Sumithra Bhakthavatsalam, Chris Clark, Peter Clark, Oren Etzioni, Anthony Fader, and Dirk Groeneveld. 2016. IKE-an interactive tool for knowledge extraction. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction, AKBC@NAACL-HLT 2016, pages 12-17, San Diego, CA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Geometry with computers",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davis, Tom. 2006. Geometry with computers. Technical report.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Extending the meteor machine translation evaluation metric to the phrase level",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denkowski, Michael and Alon Lavie. 2010. Extending the meteor machine translation evaluation metric to the phrase level. In Human Language Technologies: The 2010",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "49--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 250-253, Los Angeles, CA. Dijk, Teun A. Van. 1979. Recalling and summarizing complex discourse. Text Processing, pages 49-93.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A novel discourse parser based on support vector machine classification",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Duverle",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "665--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duverle, David A., and Helmut Prendinger. 2009. A novel discourse parser based on support vector machine classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 665-673, Suntec.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Strategies for improving visual learning",
"authors": [
{
"first": "F",
"middle": [
"M"
],
"last": "Dwyer",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dwyer, F. M. 1978. Strategies for improving visual learning. Learning Services.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2008,
"venue": "Communications of the ACM",
"volume": "51",
"issue": "12",
"pages": "68--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Etzioni, Oren, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68-74.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Methods for domain-independent information extraction from the web: An experimental comparison",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "391--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Etzioni, Oren, Michael J. Cafarella, Doug Downey, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2004. Methods for domain-independent information extraction from the web: An experimental comparison. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence, pages 391-398, San Jose, CA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Computers and Thought",
"authors": [
{
"first": "Edward",
"middle": [
"A"
],
"last": "Feigenbaum",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Feldman",
"suffix": ""
}
],
"year": 1963,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feigenbaum, Edward A. and Julian Feldman. 1963. Computers and Thought. The AAAI Press.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An architecture for knowledge-based graphical interfaces",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Feiner",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feiner, Steven. 1988. An architecture for knowledge-based graphical interfaces.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Automating the generation of coordinated multimedia explanations",
"authors": [
{
"first": "Steven",
"middle": [
"K"
],
"last": "Feiner",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1991,
"venue": "Computer",
"volume": "24",
"issue": "10",
"pages": "33--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feiner, Steven K. and Kathleen R. McKeown. 1991. Automating the generation of coordinated multimedia explanations. Computer, 24(10):33-41.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Text-level discourse parsing with rich linguistic features",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "60--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Vanessa Wei and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 60-68, Jeju.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A linear-time bottom-up discourse parser with constraints and post-editing",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "511--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Vanessa Wei and Graeme Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511-521, Baltimore, MD.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Inference and learning in probabilistic logic programs using weighted Boolean formulas",
"authors": [
{
"first": "Daan",
"middle": [],
"last": "Fierens",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Van Den Broeck",
"suffix": ""
},
{
"first": "Joris",
"middle": [],
"last": "Renkens",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Gutmann",
"suffix": ""
},
{
"first": "Ingo",
"middle": [],
"last": "Thon",
"suffix": ""
},
{
"first": "Gerda",
"middle": [],
"last": "Janssens",
"suffix": ""
},
{
"first": "Luc De",
"middle": [],
"last": "Raedt",
"suffix": ""
}
],
"year": 2015,
"venue": "Theory and Practice of Logic Programming",
"volume": "15",
"issue": "3",
"pages": "358--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fierens, Daan, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, and Luc De Raedt. 2015. Inference and learning in probabilistic logic programs using weighted Boolean formulas. Theory and Practice of Logic Programming, 15(3):358-401.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Instructional Message Design: Principles from the Behavioral Sciences",
"authors": [
{
"first": "M",
"middle": [
"L"
],
"last": "Fleming",
"suffix": ""
},
{
"first": "W",
"middle": [
"H"
],
"last": "Levie",
"suffix": ""
},
{
"first": "W",
"middle": [
"H"
],
"last": "Levie",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fleming, M. L., W. H. Levie, and W. H. Levie. 1978. Instructional Message Design: Principles from the Behavioral Sciences. Educational Technology Publications.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "MMP/geometer -A software package for automated geometric reasoning",
"authors": [
{
"first": "Xiao-Shan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "International Workshop on Automated Deduction in Geometry",
"volume": "",
"issue": "",
"pages": "44--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, Xiao-Shan and Qiang Lin. 2002. MMP/geometer -A software package for automated geometric reasoning. In International Workshop on Automated Deduction in Geometry, pages 44-66, Hagenberg Castle.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Abstractive summarization of product reviews using discourse structure",
"authors": [
{
"first": "Shima",
"middle": [],
"last": "Gerani",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Bita",
"middle": [],
"last": "Nejat",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1602--1613",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerani, Shima, Yashar Mehdad, Giuseppe Carenini, Raymond T. Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1602-1613, Doha.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Global features for shallow discourse parsing",
"authors": [
{
"first": "Sucheta",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "150--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ghosh, Sucheta, Giuseppe Riccardi, and Richard Johansson. 2012. Global features for shallow discourse parsing. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 150-159, Seoul.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Attention, intentions, and the structure of discourse",
"authors": [
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "Candace",
"middle": [
"L"
],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "Computational Linguistics",
"volume": "12",
"issue": "3",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grosz, Barbara J. and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Synthesizing geometry constructions",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Gulwani",
"suffix": ""
},
{
"first": "Vijay",
"middle": [
"Anand"
],
"last": "Korthikanti",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Tiwari",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM SIGPLAN Notices",
"volume": "46",
"issue": "",
"pages": "50--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gulwani, Sumit, Vijay Anand Korthikanti, and Ashish Tiwari. 2011. Synthesizing geometry constructions. In ACM SIGPLAN Notices, 46, pages 50-61.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Designing Instructional Text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hartley",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hartley, J. 1985. Designing Instructional Text. Kogan Page.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Automatic generation of formatted text. Readings in Intelligent User Interfaces",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "262",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, Eduard H. 1998. Automatic generation of formatted text. Readings in Intelligent User Interfaces. page 262, Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Solving geometry problems using a combination of symbolic and numerical reasoning",
"authors": [
{
"first": "Shachar",
"middle": [],
"last": "Itzhaky",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Gulwani",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Immerman",
"suffix": ""
},
{
"first": "Mooly",
"middle": [],
"last": "Sagiv",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Logic for Programming Artificial Intelligence and Reasoning",
"volume": "",
"issue": "",
"pages": "457--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itzhaky, Shachar, Sumit Gulwani, Neil Immerman, and Mooly Sagiv. 2013. Solving geometry problems using a combination of symbolic and numerical reasoning. In International Conference on Logic for Programming Artificial Intelligence and Reasoning, pages 457-472, Stellenbosch.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Discourse complements lexical semantics for non-factoid answer reranking",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "977--986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jansen, Peter, Mihai Surdeanu, and Peter Clark. 2014. Discourse complements lexical semantics for non-factoid answer reranking. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 977-986, Baltimore, MD.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Representation learning for text-level discourse parsing",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, Yangfeng and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 13-24, Baltimore, MD.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "From discourse to logic: Introduction to model theoretic semantics of natural language, formal logic and discourse representation",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Reyle",
"suffix": ""
}
],
"year": 1993,
"venue": "Studies in Linguistics and Philosophy",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamp, Hans and Uwe Reyle. 1993. From discourse to logic: Introduction to model theoretic semantics of natural language, formal logic and discourse representation. Studies in Linguistics and Philosophy.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Using gr\u00f6bner bases to reason about geometry problems",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Kapur",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of Symbolic Computation",
"volume": "2",
"issue": "4",
"pages": "399--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kapur, Deepak. 1986. Using gr\u00f6bner bases to reason about geometry problems. Journal of Symbolic Computation, 2(4):399-408.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Learning to transform natural to formal languages",
"authors": [
{
"first": "Rohit",
"middle": [
"J"
],
"last": "Kate",
"suffix": ""
},
{
"first": "Yuk Wah",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of AAAI-05",
"volume": "",
"issue": "",
"pages": "1062--1068",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate, Rohit J., Yuk Wah, Wong Raymond, and J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of AAAI-05, pages 1062-1068, Pittsburgh, PA.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Salvato",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kolve",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Amsterdam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Aniruddha",
"suffix": ""
},
{
"first": "Dustin",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Jonghyun",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "5376--5384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kembhavi, Aniruddha, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. 2016. A diagram is worth a dozen images. In European Conference on Computer Vision, pages 235-251, Amsterdam. Kembhavi, Aniruddha, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376-5384, Honolulu, HI.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Pattern matching and discourse processing in information extraction from Japanese text",
"authors": [
{
"first": "Tsuyoshi",
"middle": [],
"last": "Kitani",
"suffix": ""
},
{
"first": "Yoshio",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Masami",
"middle": [],
"last": "Hara",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of Artificial Intelligence Research",
"volume": "2",
"issue": "",
"pages": "89--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kitani, Tsuyoshi, Yoshio Eriguchi, and Masami Hara. 1994. Pattern matching and discourse processing in information extraction from Japanese text. Journal of Artificial Intelligence Research, 2:89-110.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML",
"volume": "1",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML, volume 1, pages 282-289, Williamstown, MA.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Why a diagram is (sometimes) worth ten thousand words",
"authors": [
{
"first": "Jill",
"middle": [
"H"
],
"last": "Larkin",
"suffix": ""
},
{
"first": "Herbert",
"middle": [
"A"
],
"last": "Simon",
"suffix": ""
}
],
"year": 1987,
"venue": "Cognitive Science",
"volume": "11",
"issue": "1",
"pages": "65--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Larkin, Jill H. and Herbert A. Simon. 1987. Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11(1):65-100.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Segmented discourse representation theory: Dynamic semantics with discourse structure",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2008,
"venue": "Computing Meaning",
"volume": "",
"issue": "",
"pages": "87--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lascarides, Alex and Nicholas Asher. 2008. Segmented discourse representation theory: Dynamic semantics with discourse structure. In H. Bunt and R. Muskens, editors, Computing Meaning. Springer, pages 87-124.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "From natural language specifications to program input parsers",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"C"
],
"last": "Rinard",
"suffix": ""
}
],
"year": 2013,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1294--1303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei, Tao, Fan Long, Regina Barzilay, and Martin C. Rinard. 2013. From natural language specifications to program input parsers. In Association for Computational Linguistics (ACL), pages 1294-1303, Sofia.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Text-level discourse dependency parsing",
"authors": [
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Sujian, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 25-35, Baltimore, MD.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Measuring prerequisite relations among concepts",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Zhaohui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wenyi",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1668--1674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang, Chen, Zhaohui Wu, Wenyi Huang, and C. Lee Giles. 2015. Measuring prerequisite relations among concepts. In EMNLP, pages 1668-1674, Lisbon.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "590--599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang, Percy, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 590-599, Portland.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop",
"volume": "8",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Chin-Yew. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8, pages 74-81, Barcelona.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "A PDTB-styled end-to-end discourse parser",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "2",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Ziheng, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20(2):151-184.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Latent predictor networks for code generation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Fumin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06744"
]
},
"num": null,
"urls": [],
"raw_text": "Ling, Wang, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. arXiv preprint arXiv:1603.06744.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Program induction for rationale generation: Learning to solve and explain algebraic word problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2017,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "158--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling, Wang, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction for rationale generation: Learning to solve and explain algebraic word problems. In Association for Computational Linguistics (ACL), pages 158-167, Vancouver.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Rhetorical relations for information retrieval",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Birger",
"middle": [],
"last": "Larsen",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "931--940",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lioma, Christina, Birger Larsen, and Wei Lu. 2012. Rhetorical relations for information retrieval. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 931-940, Portland.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Latent attention for if-then program synthesis",
"authors": [
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xinyun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eui Chul",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Mingcheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4574--4582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Chang, Xinyun Chen, Eui Chul Shin, Mingcheng Chen, and Dawn Song. 2016a, Latent attention for if-then program synthesis. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29. Curran Associates, Inc., pages 4574-4582.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Learning concept graphs from online educational data",
"authors": [
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wanli",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "1059--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Hanxiao, Wanli Ma, Yiming Yang, and Jaime Carbonell. 2016b. Learning concept graphs from online educational data. Journal of Artificial Intelligence Research, 55:1059-1090.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Some Aspects of Text Grammars",
"authors": [
{
"first": "Robert",
"middle": [
"E"
],
"last": "Longacre",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longacre, Robert E. 1983. Some Aspects of Text Grammars. Springer.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Discourse indicators for content selection in summarization",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "147--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis, Annie, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 147-156, Uppsala.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "A linguistic approach to some parameters of layout: A study of enumerations",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Luc",
"suffix": ""
},
{
"first": "Mustapha",
"middle": [],
"last": "Mojahid",
"suffix": ""
},
{
"first": "Jacques",
"middle": [],
"last": "Virbel",
"suffix": ""
}
],
"year": 1999,
"venue": "Understanding or Retrieval of Documents, AAAI Fall Symposium",
"volume": "",
"issue": "",
"pages": "35--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luc, Christophe, Mustapha Mojahid, and Jacques Virbel. 1999. A linguistic approach to some parameters of layout: A study of enumerations. In Understanding or Retrieval of Documents, AAAI Fall Symposium., pages 35-44, Orlando.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Automating the design of graphical presentations of relational information",
"authors": [
{
"first": "Jock",
"middle": [],
"last": "Mackinlay",
"suffix": ""
}
],
"year": 1986,
"venue": "ACM Transactions On Graphics (Tog)",
"volume": "5",
"issue": "2",
"pages": "110--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mackinlay, Jock. 1986. Automating the design of graphical presentations of relational information. ACM Transactions On Graphics (Tog), 5(2):110-141.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Rhetorical Structure Theory: Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "3",
"issue": "8",
"pages": "234--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, William C. and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 3(8):234-281.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, Christopher D., Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60, Baltimore, MD.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "The Theory and Practice of Discourse Parsing and Summarization",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcu, Daniel. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Following directions using statistical machine translation",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Matuszek",
"suffix": ""
},
{
"first": "Dieter",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Koscher",
"suffix": ""
}
],
"year": 2010,
"venue": "5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)",
"volume": "",
"issue": "",
"pages": "251--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matuszek, Cynthia, Dieter Fox, and Karl Koscher. 2010. Following directions using statistical machine translation. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 251-258, Osaka.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "Planning Multimedia Explanations Using Communicative Acts",
"authors": [
{
"first": "M",
"middle": [],
"last": "Maybury",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maybury, M. 1998. Planning Multimedia Explanations Using Communicative Acts. San Francisco: Morgan Kaufman.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Systematic thinking fostered by illustrations in scientific text",
"authors": [
{
"first": "Richard",
"middle": [
"E"
],
"last": "Mayer",
"suffix": ""
}
],
"year": 1989,
"venue": "Journal of Educational Psychology",
"volume": "81",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mayer, Richard E. 1989. Systematic thinking fostered by illustrations in scientific text. Journal of Educational Psychology, 81(2):240.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "Never-ending learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hruschka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Samadi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wijaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saparov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Greaves",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15)",
"volume": "",
"issue": "",
"pages": "2302--2310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, T., W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15), pages 2302-2310, Austin, TX.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "Toward a synthesis of two accounts of discourse structure",
"authors": [
{
"first": "Megan",
"middle": [],
"last": "Moser",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "3",
"pages": "409--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moser, Megan and Johanna D. Moore. 1996. Toward a synthesis of two accounts of discourse structure. Computational Linguistics, 22(3):409-419.",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "Machine comprehension with discourse relations",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1253--1262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Narasimhan, Karthik and Regina Barzilay. 2015. Machine comprehension with discourse relations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Volume 1: Long papers, pages 1253-1262, Beijing.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "Intelligent multi-media integrated interface project",
"authors": [
{
"first": "J",
"middle": [
"G"
],
"last": "Neal",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Shapiro",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Thielman",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Gucwa",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Lammens",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neal, J. G., S. C. Shapiro, C. Y. Thielman, J. R. Gucwa, and J. M. Lammens. 1990. Intelligent multi-media integrated interface project. Technical report RADC-TR-90-128, Calspan UB Research Center, Buffalo, NY.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "Integrating text formatting and text generation",
"authors": [
{
"first": "Elsa",
"middle": [],
"last": "Pascual",
"suffix": ""
}
],
"year": 1996,
"venue": "Trends in Natural Language Generation An Artificial Intelligence Perspective",
"volume": "",
"issue": "",
"pages": "205--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascual, Elsa. 1996. Integrating text formatting and text generation. In Trends in Natural Language Generation An Artificial Intelligence Perspective, Springer, pages 205-221.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "Semantic and layout properties of text punctuation",
"authors": [
{
"first": "Elsa",
"middle": [],
"last": "Pascual",
"suffix": ""
},
{
"first": "Jacques",
"middle": [],
"last": "Virbel",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Association for Computational Linguistics Workshop on Punctuation",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascual, Elsa and Jacques Virbel. 1996. Semantic and layout properties of text punctuation. In Proceedings of the Association for Computational Linguistics Workshop on Punctuation, pages 41-48, Santa Cruz, CA.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "Information extraction from research papers using conditional random fields",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2006,
"venue": "Information Processing & Management",
"volume": "42",
"issue": "4",
"pages": "963--979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng, Fuchun and Andrew McCallum. 2006. Information extraction from research papers using conditional random fields. Information Processing & Management, 42(4):963-979.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "Is graphical notation really superior to text, or just different? Some claims by logic designers about graphics in notation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Petre",
"suffix": ""
},
{
"first": "T",
"middle": [
"R G"
],
"last": "Green",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of ECCE-5",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petre, M. and T. R. G. Green. 1990. Is graphical notation really superior to text, or just different? Some claims by logic designers about graphics in notation. In Proceedings of ECCE-5, Urbino.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "A formal model of the structure of discourse",
"authors": [
{
"first": "Livia",
"middle": [],
"last": "Polanyi",
"suffix": ""
}
],
"year": 1988,
"venue": "Journal of Pragmatics",
"volume": "12",
"issue": "5-6",
"pages": "601--638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Polanyi, Livia. 1988. A formal model of the structure of discourse. Journal of Pragmatics, 12(5-6):601-638.",
"links": null
},
"BIBREF93": {
"ref_id": "b93",
"title": "Language to code: Learning semantic parsers for if-this-then-that recipes",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "878--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quirk, Chris, Raymond J. Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Volume 1: Long Papers, pages 878-888, Beijing.",
"links": null
},
"BIBREF94": {
"ref_id": "b94",
"title": "Generating punctuation in written arguments",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Long",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reed, Chris and Derek Long. 1997. Generating punctuation in written arguments. Technical report 2694743, Department of Computer Science, University College, London.",
"links": null
},
"BIBREF95": {
"ref_id": "b95",
"title": "Information extraction from research papers by data integration and data validation from multiple header extraction sources",
"authors": [
{
"first": "Mrinmaya",
"middle": [],
"last": "Sachan",
"suffix": ""
},
{
"first": "Avinava",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachan, Mrinmaya, Avinava Dubey, Eric P. Xing, and Matthew Richardson. 2015. Learning answer-entailing structures for machine comprehension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 239-249, Beijing. Saleem, Ozair and Seemab Latif. 2012. Information extraction from research papers by data integration and data validation from multiple header extraction sources. In Proceedings of the World Congress on Engineering and Computer Science, volume 1, pages 177-180, San Francisco, CA.",
"links": null
},
"BIBREF96": {
"ref_id": "b96",
"title": "Geometry Turned On: Dynamic Software in Learning, Teaching, and Research",
"authors": [
{
"first": "Doris",
"middle": [],
"last": "Schattschneider",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schattschneider, Doris and James King. 1997. Geometry Turned On: Dynamic Software in Learning, Teaching, and Research. Mathematical Association of America Notes.",
"links": null
},
"BIBREF97": {
"ref_id": "b97",
"title": "Diagram understanding in geometry questions",
"authors": [
{
"first": "Min",
"middle": [
"Joon"
],
"last": "Seo",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2831--2838",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seo, Min Joon, Hannaneh Hajishirzi, Ali Farhadi, and Oren Etzioni. 2014. Diagram understanding in geometry questions. In Proceedings of AAAI, pages 2831-2838, Quebec.",
"links": null
},
"BIBREF98": {
"ref_id": "b98",
"title": "Solving geometry problems: Combining text and diagram interpretation",
"authors": [
{
"first": "Min",
"middle": [
"Joon"
],
"last": "Seo",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Clint",
"middle": [],
"last": "Malcolm",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1466--1476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seo, Min Joon, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of EMNLP, pages 1466-1476, Lisbon.",
"links": null
},
"BIBREF99": {
"ref_id": "b99",
"title": "Information extraction from full text scientific articles: Where are the keywords?",
"authors": [
{
"first": "Parantu",
"middle": [
"K"
],
"last": "Shah",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Perez-Iratxeta",
"suffix": ""
},
{
"first": "Peer",
"middle": [],
"last": "Bork",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"A"
],
"last": "Andrade",
"suffix": ""
}
],
"year": 2003,
"venue": "BMC Bioinformatics",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shah, Parantu K., Carolina Perez-Iratxeta, Peer Bork, and Miguel A. Andrade. 2003. Information extraction from full text scientific articles: Where are the keywords? BMC Bioinformatics, 4(1):20.",
"links": null
},
"BIBREF100": {
"ref_id": "b100",
"title": "Learning to follow navigational route instructions",
"authors": [
{
"first": "Nobuyuki",
"middle": [],
"last": "Shimizu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"R"
],
"last": "Haas",
"suffix": ""
}
],
"year": 2009,
"venue": "IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1488--1493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shimizu, Nobuyuki and Andrew R. Haas. 2009. Learning to follow navigational route instructions. In IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 1488-1493, Pasadena, CA.",
"links": null
},
"BIBREF101": {
"ref_id": "b101",
"title": "Sentence level discourse parsing using syntactic and lexical information",
"authors": [
{
"first": "Noah",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Horvitz",
"suffix": ""
},
{
"first": "Roie",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Santosh",
"middle": [],
"last": "Divvala",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter",
"volume": "1",
"issue": "",
"pages": "149--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siegel, Noah, Zachary Horvitz, Roie Levin, Santosh Divvala, and Ali Farhadi. 2016. Figureseer: Parsing result-figures in research papers. In European Conference on Computer Vision, pages 664-680, Amsterdam. Soricut, Radu and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 149-156, Edmonton.",
"links": null
},
"BIBREF102": {
"ref_id": "b102",
"title": "Alfresco: Enjoying the combination of NLP and hypermedia for information exploration",
"authors": [
{
"first": "Oliviero",
"middle": [],
"last": "Stock",
"suffix": ""
}
],
"year": 1993,
"venue": "AAAI Workshop on Intelligent Multimedia Interfaces",
"volume": "",
"issue": "",
"pages": "197--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stock, Oliviero. 1993. Alfresco: Enjoying the combination of NLP and hypermedia for information exploration. In AAAI Workshop on Intelligent Multimedia Interfaces, pages 197-224, Anaheim.",
"links": null
},
"BIBREF103": {
"ref_id": "b103",
"title": "Cluster ensembles -A knowledge reuse framework for combining multiple partitions",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Strehl",
"suffix": ""
},
{
"first": "Joydeep",
"middle": [],
"last": "Ghosh",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "583--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strehl, Alexander and Joydeep Ghosh. 2002. Cluster ensembles -A knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3(Dec):583-617.",
"links": null
},
"BIBREF104": {
"ref_id": "b104",
"title": "The Elements of Style",
"authors": [
{
"first": "William",
"middle": [],
"last": "Strunk",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strunk, William. 2007. The Elements of Style. Penguin.",
"links": null
},
"BIBREF105": {
"ref_id": "b105",
"title": "An effective discourse parser that uses rich linguistic information",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
},
{
"first": "Barbara",
"middle": [
"Di"
],
"last": "Eugenio",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subba, Rajen and Barbara Di Eugenio. 2009. An effective discourse parser that uses rich linguistic information. In Proceedings of Human Language Technologies: The 2009",
"links": null
},
"BIBREF106": {
"ref_id": "b106",
"title": "Discourse processing for context question answering based on linguistic knowledge. Knowledge-Based Systems",
"authors": [
{
"first": "Mingyu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "511--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566-574, Boulder, CO. Sun, Mingyu and Joyce Y. Chai. 2007. Discourse processing for context question answering based on linguistic knowledge. Knowledge-Based Systems, 20(6):511-526.",
"links": null
},
"BIBREF107": {
"ref_id": "b107",
"title": "Using pictorial language: A discussion of the dimensions of the problem",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Twyman",
"suffix": ""
}
],
"year": 1985,
"venue": "Designing Usable Texts",
"volume": "",
"issue": "",
"pages": "245--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Twyman, Michael. 1985. Using pictorial language: A discussion of the dimensions of the problem. In Designing Usable Texts, Elsevier, pages 245-312.",
"links": null
},
"BIBREF108": {
"ref_id": "b108",
"title": "Some Aspects of Text Grammars",
"authors": [
{
"first": "Teun",
"middle": [
"A"
],
"last": "Van Dijk",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Dijk, Teun A. 1972. Some Aspects of Text Grammars. Mouton & Co. N.V.",
"links": null
},
"BIBREF109": {
"ref_id": "b109",
"title": "Wip: The coordinated generation of multimodal presentations from a common representation",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Wahlster",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Som",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "Winfried",
"middle": [],
"last": "Graf",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Rist",
"suffix": ""
}
],
"year": 1992,
"venue": "Artificial Intelligence Perspective",
"volume": "",
"issue": "",
"pages": "121--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wahlster, Wolfgang, Elisabeth Andr\u00e9, Som Bandyopadhyay, Winfried Graf, and Thomas Rist. 1992. Wip: The coordinated generation of multimodal presentations from a common representation. In A. Ortony, editor, Communication from an Artificial Intelligence Perspective, Springer, pages 121-143.",
"links": null
},
"BIBREF110": {
"ref_id": "b110",
"title": "An information retrieval approach based on discourse type",
"authors": [
{
"first": "D",
"middle": [
"Y"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"Wing Pong"
],
"last": "Luk",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "K",
"middle": [
"L"
],
"last": "Kwok",
"suffix": ""
}
],
"year": 2006,
"venue": "International Conference on Application of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "197--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, D. Y., Robert Wing Pong Luk, Kam-Fai Wong, and K. L. Kwok. 2006. An information retrieval approach based on discourse type. In International Conference on Application of Natural Language to Information Systems, pages 197-202, Klagenfurt.",
"links": null
},
"BIBREF111": {
"ref_id": "b111",
"title": "Concept hierarchy extraction from textbooks",
"authors": [
{
"first": "Shuting",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Zhaohui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Pursel",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Brautigam",
"suffix": ""
},
{
"first": "Sherwyn",
"middle": [],
"last": "Saul",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Bowen",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 ACM Symposium on Document Engineering",
"volume": "",
"issue": "",
"pages": "147--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Jianxiang and Man Lan. 2015. A refined end-to-end discourse parser. In CoNLL Shared Task, pages 17-24, Beijing. Wang, Shuting, Chen Liang, Zhaohui Wu, Kyle Williams, Bart Pursel, Benjamin Brautigam, Sherwyn Saul, Hannah Williams, Kyle Bowen, and C. Lee Giles. 2015. Concept hierarchy extraction from textbooks. In Proceedings of the 2015 ACM Symposium on Document Engineering, pages 147-156.",
"links": null
},
"BIBREF112": {
"ref_id": "b112",
"title": "Using prerequisites to extract concept maps from textbooks",
"authors": [
{
"first": "Shuting",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ororbia",
"suffix": ""
},
{
"first": "Zhaohui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Pursel",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "317--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Shuting, Alexander Ororbia, Zhaohui Wu, Kyle Williams, Chen Liang, Bart Pursel, and C. Lee Giles. 2016. Using prerequisites to extract concept maps from textbooks. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, pages 317-326, Indianapolis.",
"links": null
},
"BIBREF113": {
"ref_id": "b113",
"title": "Basic principles of mechanical theorem proving in elementary geometries",
"authors": [
{
"first": "Wen-Tsun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of Automated Reasoning",
"volume": "2",
"issue": "3",
"pages": "221--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-Tsun, Wu. 1986. Basic principles of mechanical theorem proving in elementary geometries. Journal of Automated Reasoning, 2(3):221-252.",
"links": null
},
"BIBREF114": {
"ref_id": "b114",
"title": "Presenting punctuation. CoRR, abs/cmp-lg/9506012",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "White, Michael. 1995. Presenting punctuation. CoRR, abs/cmp-lg/9506012.",
"links": null
},
"BIBREF115": {
"ref_id": "b115",
"title": "Combining dynamic geometry, automated geometry theorem proving and diagrammatic proofs",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Jacques",
"middle": [
"D"
],
"last": "Fleuriot",
"suffix": ""
}
],
"year": 2005,
"venue": "Workshop on User Interfaces for Theorem Proving (UITP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, Sean and Jacques D. Fleuriot. 2005. Combining dynamic geometry, automated geometry theorem proving and diagrammatic proofs. In Workshop on User Interfaces for Theorem Proving (UITP).",
"links": null
},
"BIBREF116": {
"ref_id": "b116",
"title": "PDFMEF: A multi-entity knowledge extraction framework for scholarly documents and semantic search",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Killian",
"suffix": ""
},
{
"first": "Huaiyu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Sagnik",
"middle": [
"Ray"
],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Suppawong",
"middle": [],
"last": "Tuarob",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 8th International Conference on Knowledge Capture",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Jian, Jason Killian, Huaiyu Yang, Kyle Williams, Sagnik Ray Choudhury, Suppawong Tuarob, Cornelia Caragea, and C. Lee Giles. 2015. PDFMEF: A multi-entity knowledge extraction framework for scholarly documents and semantic search. In Proceedings of the 8th International Conference on Knowledge Capture, Palisades.",
"links": null
},
"BIBREF117": {
"ref_id": "b117",
"title": "Typeand content-driven synthesis of SQL queries from natural language",
"authors": [
{
"first": "Navid",
"middle": [],
"last": "Yaghmazadeh",
"suffix": ""
},
{
"first": "Yuepeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Isil",
"middle": [],
"last": "Dillig",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Dillig",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaghmazadeh, Navid, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. Type- and content-driven synthesis of SQL queries from natural language. CoRR, abs/1702.01168.",
"links": null
},
"BIBREF118": {
"ref_id": "b118",
"title": "Concept graph learning from educational data",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Wanli",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, Yiming, Hanxiao Liu, Jaime G. Carbonell, and Wanli Ma. 2015. Concept graph learning from educational data. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 159-168, Shanghai.",
"links": null
},
"BIBREF119": {
"ref_id": "b119",
"title": "A lightweight and high performance monolingual word aligner",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceeding of ACL",
"volume": "2",
"issue": "",
"pages": "702--707",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao, Xuchen, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013. A lightweight and high performance monolingual word aligner. In Proceeding of ACL, Volume 2, pages 702-707, Sofia.",
"links": null
},
"BIBREF120": {
"ref_id": "b120",
"title": "A syntactic neural model for generalpurpose code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "the 55th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "440--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin, Pengcheng and Graham Neubig. 2017. A syntactic neural model for general- purpose code generation. In the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440-450 Vancouver.",
"links": null
},
"BIBREF121": {
"ref_id": "b121",
"title": "Learning semantic grammars with constructive inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 11th National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "817--822",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zelle, John M. and Raymond J. Mooney. 1993. Learning semantic grammars with constructive inductive logic programming. In Proceedings of the 11th National Conference on Artificial Intelligence, pages 817-822, Washington, DC.",
"links": null
},
"BIBREF122": {
"ref_id": "b122",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zelle, John M. and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 1050-1055, Portland.",
"links": null
},
"BIBREF123": {
"ref_id": "b123",
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.1420"
]
},
"num": null,
"urls": [],
"raw_text": "Zettlemoyer, Luke S. and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "SubstantiationSubstantiation features are used to further substantiate the discourse argument. Examples include associated figures or tables, references to tables, figures (e.g.,Figure 1.2), or external links that are very important in understanding a complex multimedia document."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "sort point = {A, B, C, D, O, M} sort line = {AB, BC, CA, BD, DA, OA, OM} //Symmetrically define BA, CB, \u2026 sort angle = {ABC, BCA, CAB, ABD, BDA, DAB, AMO, MOA, OAM, BMO} //Symmetrically define CBA, ACB, \u2026 sort triangle = {ABC, ABD, AMO} //Symmetrically define CBA, (length(AX), length(XB)) :-liesOn(A, O), liesOn(B, O), perpendicular(OX, AB), liesOn(X, AB) 0.7 similar(ABC, DEF) :-equals(length(BC), length(EF)), equals(measure(ABC), measure(DEF)), equals(measure(BCA), measure(EFD)) // ASA rule. Similar rules for SAS, SSS, RHS rules of similarity 0.7 equals(measure(CAB), measure(FED)) :-similar(ABC, DEF) // Similar rules for other corresponding angles 0.7 equals(measure(ABC), u+v)) :-equals(measure(ABD), u)), equals(measure(DBC), v)), liesInInterior(D, ABC) 0.6 equals(measure(ADB), t/2) :-equals(measure(AOB), t), liesOn(A, O), liesOn(B, O)"
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Outside an axiom. Hereon, a contiguous block of discourse elements labeled B or I will be considered as an axiom mention. Let T = {B, I, O} denote the tag set. Let y(b) i be the tag assigned to s (b)"
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "ik in this feasible space. \u00b5 is tuned on the development set."
},
"TABREF0": {
"type_str": "table",
"text": "The template matcher is designed such that it identifies various rewritings of the same axiom equation, e.g., PA \u00d7 PB = PT 2 and PA \u00d7 PB = PC 2 could refer to the same axiom with point T in one axiom mention being point C in another mention.",
"num": null,
"html": null,
"content": "<table><tr><td>Discourse (Typography)</td><td>JSON structure Equation Template</td><td>Indicator matching the current (and parent) node of axiom mentions in respective JSON hierarchies; i.e., are both nodes mentioned as axioms, di-agrams or bounding boxes? Indicator feature that matches templates of equations detected in the axiom mentions.</td></tr></table>"
}
}
}
}