| { |
| "paper_id": "J18-1005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:20:02.510808Z" |
| }, |
| "title": "Weighted DAG Automata for Semantic Graphs", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Notre Dame", |
| "location": { |
| "postCode": "46656", |
| "region": "IN", |
| "country": "United States" |
| } |
| }, |
| "email": "dchiang@nd.edu" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Drewes", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Notre Dame", |
| "location": { |
| "postCode": "46656", |
| "region": "IN", |
| "country": "United States" |
| } |
| }, |
| "email": "drewes@cs.umu.se." |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Notre Dame", |
| "location": { |
| "postCode": "46656", |
| "region": "IN", |
| "country": "United States" |
| } |
| }, |
| "email": "gildea@cs.rochester.edu" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lopez", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "alopez@inf.ed.ac.uk" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "satta@dei.unipd.it.submission" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Graphs have a variety of uses in natural language processing, particularly as representations of linguistic meaning. A deficit in this area of research is a formal framework for creating, combining, and using models involving graphs that parallels the frameworks of finite automata for strings and finite tree automata for trees. A possible starting point for such a framework is the formalism of directed acyclic graph (DAG) automata, defined by Kamimura and Slutzki and extended by Quernheim and Knight. In this article, we study the latter in depth, demonstrating several new results, including a practical recognition algorithm that can be used for inference and learning with models defined on DAG automata. We also propose an extension to graphs with unbounded node degree and show that our results carry over to the extended formalism.", |
| "pdf_parse": { |
| "paper_id": "J18-1005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Graphs have a variety of uses in natural language processing, particularly as representations of linguistic meaning. A deficit in this area of research is a formal framework for creating, combining, and using models involving graphs that parallels the frameworks of finite automata for strings and finite tree automata for trees. A possible starting point for such a framework is the formalism of directed acyclic graph (DAG) automata, defined by Kamimura and Slutzki and extended by Quernheim and Knight. In this article, we study the latter in depth, demonstrating several new results, including a practical recognition algorithm that can be used for inference and learning with models defined on DAG automata. We also propose an extension to graphs with unbounded node degree and show that our results carry over to the extended formalism.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Statistical models of natural language semantics are making rapid progress. At the risk of oversimplifying, work in this area can be divided into two streams. One stream, semantic parsing (Mooney 2007) , aims to map from sentences to logical forms that can be executed (for example, to query a knowledge base); work in this stream tends to be on small, narrow-domain data sets like GeoQuery. The other stream aims for broader coverage, and historically tackled shallower, piecemeal tasks, like semantic role labeling (Gildea and Jurafsky 2000) , word sense disambiguation (Brown et al. 1991) , coreference resolution (Soon, Ng, and Lim 2001) , and so on. Correspondingly, resources like OntoNotes (Hovy et al. 2006) provided separate resources for each of these tasks.", |
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 201, |
| "text": "(Mooney 2007)", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 517, |
| "end": 543, |
| "text": "(Gildea and Jurafsky 2000)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 572, |
| "end": 591, |
| "text": "(Brown et al. 1991)", |
| "ref_id": null |
| }, |
| { |
| "start": 617, |
| "end": 641, |
| "text": "(Soon, Ng, and Lim 2001)", |
| "ref_id": "BIBREF81" |
| }, |
| { |
| "start": 697, |
| "end": 715, |
| "text": "(Hovy et al. 2006)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "This piecemeal situation parallels that of early work on syntactic parsing, which focused on subtasks like part-of-speech tagging (Ratnaparkhi 1996) , noun-phrase chunking (Ramshaw and Marcus 1995) , prepositional phrase attachment (Collins and Brooks 1995) , and so on. As the field matured, these tasks were increasingly synthesized into a single process. This was made possible because of a single representation (phrase structure or dependency trees) that captures all of these phenomena; because of corpora annotated with these representations, like the Penn Treebank (Marcus, Marcinkiewicz, and Santorini 1993) ; and because of formalisms, like context-free grammars, which can model these representations practically (Charniak 1997; Collins 1997; Petrov et al. 2006) .", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 148, |
| "text": "(Ratnaparkhi 1996)", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 172, |
| "end": 197, |
| "text": "(Ramshaw and Marcus 1995)", |
| "ref_id": "BIBREF74" |
| }, |
| { |
| "start": 232, |
| "end": 257, |
| "text": "(Collins and Brooks 1995)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 573, |
| "end": 616, |
| "text": "(Marcus, Marcinkiewicz, and Santorini 1993)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 724, |
| "end": 739, |
| "text": "(Charniak 1997;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 740, |
| "end": 753, |
| "text": "Collins 1997;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 754, |
| "end": 773, |
| "text": "Petrov et al. 2006)", |
| "ref_id": "BIBREF69" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In a similar way, more recent work in semantic processing consolidates various semantics-related tasks into one. For example, the Abstract Meaning Representation (AMR) Bank (Banarescu et al. 2013) began as an effort to unify the various annotation layers of OntoNotes. It has driven the development of many systems, chiefly string-to-AMR parsers like JAMR (Flanigan et al. 2014) and CAMR (Wang, Xue, and Pradhan 2015a,b) , as well as many other systems submitted to the AMR Parsing task at SemEval 2016 (May 2016) . AMRs have also been used for generation (Flanigan et al. 2016) , summarization (Liu et al. 2015) , and entity detection and linking (Li et al. 2015; Pan et al. 2015) .", |
| "cite_spans": [ |
| { |
| "start": 173, |
| "end": 196, |
| "text": "(Banarescu et al. 2013)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 356, |
| "end": 378, |
| "text": "(Flanigan et al. 2014)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 388, |
| "end": 420, |
| "text": "(Wang, Xue, and Pradhan 2015a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 503, |
| "end": 513, |
| "text": "(May 2016)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 556, |
| "end": 578, |
| "text": "(Flanigan et al. 2016)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 595, |
| "end": 612, |
| "text": "(Liu et al. 2015)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 648, |
| "end": 664, |
| "text": "(Li et al. 2015;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 665, |
| "end": 681, |
| "text": "Pan et al. 2015)", |
| "ref_id": "BIBREF67" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "But the AMR Bank is by no means the only resource of its kind. Others include the Prague Dependency Treebank (B\u00f6hmov\u00e1 et al. 2003) , DeepBank (Oepen and L\u00f8nning 2006) , and Universal Conceptual Cognitive Annotation (Abend and Rappoport 2013) . By and large, these resources are based on, or equivalent to, graphs, in which vertices stand for entities and edges stand for semantic relations among them. The Semantic Dependency Parsing task at SemEval 2014 and 2015 (Oepen et al. 2014 converted several such resources into a unified graph format and invited participants to map from sentences to these semantic graphs.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 130, |
| "text": "(B\u00f6hmov\u00e1 et al. 2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 142, |
| "end": 166, |
| "text": "(Oepen and L\u00f8nning 2006)", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 215, |
| "end": 241, |
| "text": "(Abend and Rappoport 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 464, |
| "end": 482, |
| "text": "(Oepen et al. 2014", |
| "ref_id": "BIBREF66" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The unification of various kinds of semantic annotation into a single representation, semantic graphs, and the creation of large, broad-coverage collections of these representations are very positive developments for research in semantic processing. What is still missing-in our view-is a formal framework for creating, combining, and using models involving graphs that parallels those for strings and trees. Finite string automata and transducers served as a framework for investigation of speech recognition and computational phonology/morphology. Similarly, context-free grammars (and pushdown automata) served as a framework for investigation of computational syntax and syntactic parsing. But we lack a similar framework for learning and inferring semantic representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Two such formalisms have recently been proposed for NLP: one is hyperedge replacement graph grammars, or HRGs (Bauderon and Courcelle 1987; Habel and Kreowski 1987; Habel 1992; Drewes, Kreowski, and Habel 1997) , applied to AMR parsing by various authors (Chiang et al. 2013; Peng, Song, and Gildea 2015; Bj\u00f6rklund, Drewes, and Ericson 2016) . The other formalism is directed acyclic graph (DAG) automata, defined by Kamimura and Slutzki (1981) and extended by Quernheim and Knight (2012) . In this article, we study DAG automata in depth, with the goal of enabling efficient algorithms for natural language processing applications.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 139, |
| "text": "(Bauderon and Courcelle 1987;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 140, |
| "end": 164, |
| "text": "Habel and Kreowski 1987;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 165, |
| "end": 176, |
| "text": "Habel 1992;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 177, |
| "end": 210, |
| "text": "Drewes, Kreowski, and Habel 1997)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 255, |
| "end": 275, |
| "text": "(Chiang et al. 2013;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 276, |
| "end": 304, |
| "text": "Peng, Song, and Gildea 2015;", |
| "ref_id": "BIBREF68" |
| }, |
| { |
| "start": 305, |
| "end": 341, |
| "text": "Bj\u00f6rklund, Drewes, and Ericson 2016)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 417, |
| "end": 444, |
| "text": "Kamimura and Slutzki (1981)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 461, |
| "end": 488, |
| "text": "Quernheim and Knight (2012)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "After some background on the use of graph-based representations in natural language processing in Section 2, we define our variant of DAG automata in Section 3. We then show the following properties of our formalism:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r Path languages are regular, as is desirable for a formal model of AMRs (Section 4.1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r The class of hyperedge-replacement languages is closed under intersection with languages recognized by DAG automata (Section 4.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r Emptiness is decidable in polynomial time (Section 4.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We then turn to the recognition problem for our formalism, and show the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r The recognition problem is NP-complete even for fixed automata (Section 5.1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r For input graphs of bounded treewidth, there is an efficient algorithm for recognition or summing over computations of an automaton for an input graph (Section 5.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r The recognition/summation algorithm can be asymptotically improved using specialized binarization techniques (Section 6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We expect that nodes of potentially unbounded degree will be important in natural language processing to handle phenomena such as coreference and optional modifiers. We show how to extend our formalism to handle nodes of unbounded degree (Section 7.2), and demonstrate the following additional results:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r All closure and decidability properties mentioned above continue to hold for the extended model, and the path languages stay regular (Section 7.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "r We provide a practical recognition/summation algorithm for the novel model (Section 7.4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Graphs, or representations equivalent to graphs, have been used by many linguistic formalisms and natural-language processing systems to model semantic dependencies. For example, unification-based grammar formalisms use feature structures, like lexical functional grammar f-structures (Kaplan and Bresnan 1982) and headdriven phrase structure grammar synsem objects (Pollard and Sag 1994) , that can be drawn as rooted, directed, (usually) acyclic graph structures (Shieber 1986 ). The Prague Dependency Treebank's tectogrammatical trees (B\u00f6hmov\u00e1 et al. 2003) can be turned into graphs using coreference and argument-sharing annotations, and DeepBank's annotations using Minimal Recursion Semantics can be stripped down to Elementary Dependency Structures, which are graphs (Oepen and L\u00f8nning 2006) . Universal Conceptual Cognitive Annotation (Abend and Rappoport 2013) uses several annotation layers, which are graphs. AMRs, whose format is derived from the PENMAN generation system, are equivalent to graphs (Banarescu et al. 2013) . Several of these graph representations have been the target of the Semantic Dependency Parsing task at SemEval 2014 and 2015 (Oepen et al. 2014 .", |
| "cite_spans": [ |
| { |
| "start": 285, |
| "end": 310, |
| "text": "(Kaplan and Bresnan 1982)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 366, |
| "end": 388, |
| "text": "(Pollard and Sag 1994)", |
| "ref_id": "BIBREF70" |
| }, |
| { |
| "start": 465, |
| "end": 478, |
| "text": "(Shieber 1986", |
| "ref_id": "BIBREF79" |
| }, |
| { |
| "start": 538, |
| "end": 559, |
| "text": "(B\u00f6hmov\u00e1 et al. 2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 774, |
| "end": 798, |
| "text": "(Oepen and L\u00f8nning 2006)", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 843, |
| "end": 869, |
| "text": "(Abend and Rappoport 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1010, |
| "end": 1033, |
| "text": "(Banarescu et al. 2013)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1161, |
| "end": 1179, |
| "text": "(Oepen et al. 2014", |
| "ref_id": "BIBREF66" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graphs for Natural Language", |
| "sec_num": "2." |
| }, |
| { |
| "text": "In this section, we focus on AMRs, but the formalisms we work with in the remainder of the article could, in principle, be used on any of the other graph representations listed above. Although the standard AMR format somewhat resembles the Penn Treebank's parenthesized representation of trees, AMRs can be thought of as directed graphs. Examples of these two representations, from the AMR Bank (LDC2014T12), are reported in Figures 1 and 2 . Nodes are labeled, in order to convey lexical information. Edges are labeled to convey information about semantic roles. Labels at the edges need not be unique, meaning that edges impinging on the same node might have the same label. Furthermore, our DAGs are not ordered, meaning that there is no order relation for the edges impinging at a given node, as is usually the case in standard graph structures. A node can appear in more than one place (for example, in Figure 1 , node s2 (a / and :op1 (a2 / ask-01 :ARG0 (i / i) :ARG1 (t / thing :ARG1-of (t2 / think-01 :ARG0 (s2 / she) :ARG2 (l / location :location-of (w / we)))) :ARG2 s2) :op2 (s / say-01 :ARG0 s2 :ARG1 (a3 / and :op1 (w2 / want-01 :polarity -:ARG0 s2 :ARG1 (t3 / think-01 :ARG0 s2 :ARG1 l)) :op2 (r / recommend-01 :ARG0 s2 :ARG1 (c / content-01 :ARG1 i :ARG2 (e / experience-01 :ARG0 w)) :ARG2 i)) :ARG2 i) :op3 c)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 425, |
| "end": 440, |
| "text": "Figures 1 and 2", |
| "ref_id": "FIGREF9" |
| }, |
| { |
| "start": 908, |
| "end": 916, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Graphs for Natural Language", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Example AMR in its standard format, number DF-200-192403-625 0111.7 from the AMR Bank. The sentence is: \"I asked her what she thought about where we'd be and she said she doesn't want to think about that, and that I should be happy about the experiences we've had (which I am).\" appears six times); we call this a reentrancy, analogous to a reentrant feature structure in unification-based grammar formalisms.", |
| "cite_spans": [ |
| { |
| "start": 43, |
| "end": 60, |
| "text": "DF-200-192403-625", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Cycles and multiple roots. Although the AMR guidelines 1 describe AMRs as acyclic graphs, the AMR Bank in fact contains some graphs with cycles. The majority of these cyclic graphs involve an edge labeled with an inverse role such as ARG0-of, which means that the parent node is the ARG0 of the child node. The purpose of these inverse roles is to make the graph singly-rooted. If we reverse such edges, most cyclic graphs become acyclic (but multiply-rooted). Most remaining cycles are caused by a relatively small number of roles. By \"reifying\" these, that is, changing them into nodes (see Figure 3 ), these cycles can be eliminated. Table 1 shows some statistics on the December 2014 internal release of the AMR Bank. 2 The small percentage of graphs that are cyclic is reduced by reversing *-of edges, and all but eliminated by reification. The three cyclic graphs that remain (out of 20,628) were clearly annotation mistakes and were subsequently corrected. Table 1 Statistics on AMR graphs, out of 20,628 total. original = as provided in the corpus; reversed = with all edge labels of the form *-of reversed; reified = with certain roles reified as needed to break cycles. A graph with no edges is counted as having zero treewidth. The table also shows that the average number of roots more than doubles as a result of these transformations. (The original corpus had a small number of instances that contained more than one sentence, and were annotated as multiple graphs under a multi-sentence node; we counted these as multiple roots.)", |
| "cite_spans": [ |
| { |
| "start": 722, |
| "end": 723, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 593, |
| "end": 601, |
| "text": "Figure 3", |
| "ref_id": "FIGREF32" |
| }, |
| { |
| "start": 637, |
| "end": 644, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 964, |
| "end": 971, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "In summary, we can think of AMRs as singly-rooted, possibly cyclic directed graphs, or as multiply-rooted directed acyclic graphs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Node degree. The in-degree (out-degree) of a node in a DAG is the number of incoming (outgoing, respectively) edges at that node. AMRs have unbounded in-degree and outdegree. Unbounded in-degree is needed for instance in the semantic representation of sentences with coreference relations, in which some concept is shared among several predicates. Unbounded out-degree allows the attachment to a given predicate a number of optional modifiers that can grow with the length of the sentence. We studied the degree distribution of nodes in the AMR Bank. 3 The maximum degree (in-degree plus out-degree) is 17, and the average is 2.12. The full degree distribution is shown in Figure 4 . In practice, AMRs strongly favor nodes of low degree. Nonetheless, the presence of nodes with large degree indicates that practical applications are likely to benefit from algorithms capable of handling potentially unbounded degree, which we develop in Section 7.", |
| "cite_spans": [ |
| { |
| "start": 551, |
| "end": 552, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 673, |
| "end": 681, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Multiple edges. In the standard definition for graphs, also called simple graphs, there can be at most one edge between two nodes. As opposed to simple graphs, multigraphs allow more than one edge between two nodes, called multiple edges. In semantic representations this is very useful. For instance, in the AMR for the sentence \"John likes himself,\" the node for the predicate \"like\" has its ARG0 and ARG1 semantic roles filled by the same argument \"John.\" Accordingly, we use multigraphs to represent AMR. This also simplifies the definition of a recognition model for AMRs, since a check to avoid multiple-edges would in some sense add an external condition, making the theory more difficult to develop. Treewidth. Several of the algorithms presented in this article depend on the graphtheoretical notion of treewidth. The treewidth of a graph G, written tw(G), is a natural number that formalizes the degree to which G is \"tree-like,\" with trees having treewidth of 1. We will postpone the mathematical definition of tw(G) to the next section. For a graph G and a value k given as input, it is NP-complete to determine whether G has treewidth at most k. However, for the semantic graphs we are dealing with, the worst case might not be realized. Using a reimplementation of the QuickBB algorithm (Gogate and Dechter 2004) , with only the \"simplicial\" and \"almost-simplicial\" heuristics, we found that we could compute the exact treewidth of all the graphs in the AMR Bank in a few seconds. The results (deleting multi-sentence nodes) are shown in Table 1 : The average treewidth is only about 1.5, and the maximum treewidth is only 4. An example of a graph with treewidth 4 is shown in Figure 2 . As we will see, this means that algorithms with an exponential dependence on treewidth can be practical for real world AMRs.", |
| "cite_spans": [ |
| { |
| "start": 1301, |
| "end": 1326, |
| "text": "(Gogate and Dechter 2004)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1552, |
| "end": 1559, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1691, |
| "end": 1699, |
| "text": "Figure 2", |
| "ref_id": "FIGREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section we formally specify the type of DAGs that we use in this article. We then define a family of automata that process languages of these DAGs, under the restriction that nodes have bounded degree. We also briefly discuss the existing literature on DAG automata. The restriction on node degree will later be dropped, in Section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Automata", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We make frequent use of finite multisets. Formally, given a set Q, a multiset over Q is a mapping \u00b5 : Q \u2192 N. Intuitively, \u00b5(q) = n means that q occurs n times in \u00b5. The collection of all finite multisets over Q is denoted by M(Q). We usually specify a multiset \u00b5 \u2208 M(Q) by listing its elements using a set-like notation such as {q 1 , . . . , q n }. Note, however, that q 1 , . . . , q n may contain repeated elements, in contrast to ordinary sets. We also use the latter, but the context will always disambiguate the two different meanings. The union of multisets is denoted by the operator and is defined by pointwise addition: (\u00b5 \u00b5 )(q) = \u00b5(q) + \u00b5 (q) for all q \u2208 Q. Thus, if \u00b5 = {q 1 , . . . , q m } and \u00b5 = {q 1 , . . . , q n } then \u00b5 \u00b5 = {q 1 , . . . , q m , q 1 , . . . , q n }. If f : Q \u2192 P is a function, we extend it to a function from M(Q) to M(P) in the canonical way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "f ({q 1 , . . . , q n }) = {f (q 1 ), . . . , f (q n )}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "An alphabet is a finite set \u03a3 that we are going to use as node labels for our graphs. We consider graphs that are directed and unordered, have nodes labeled by symbols from \u03a3, and have multiple edges. We do not use edge labels, despite the fact that the AMR structures we want to model have labels at their edges. Our choice is motivated by our goal to simplify the notation. Graphs with labels only at their nodes can easily encode graphs with edge labels by splitting every edge into two, and putting an extra node in the middle, whose label is the label of the edge. We will come back to the discussion of this encoding at the end of this section, after our definition of DAG automata.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A (node-labeled, directed and unordered) graph is a tuple D = (V, E, lab, src, tar) , where V and E are finite sets of nodes and edges, respectively, lab : V \u2192 \u03a3 is a labeling function, and src, tar : E \u2192 V are functions that assign to each edge e \u2208 E its source node src(e) and its target node tar(e), respectively.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 83, |
| "text": "(V, E, lab, src, tar)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that our definition does not identify an edge with the pair of nodes that the edge is incident upon. In the terminology of standard graph theory, this means that our graphs are not simple graphs. This allows us to use multiple edges incident upon the same pair of nodes, a feature that is not only natural for AMRs (see the previous section) but will also be used in several of our algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1", |
| "sec_num": null |
| }, |
| { |
| "text": "A graph D as above is a directed acyclic graph if it is acyclic. More precisely, there do not exist e 0 , . . . , e k\u22121 \u2208 E with k > 0 such that tar(e i\u22121 ) = src(e i mod k ) for 1 \u2264 i \u2264 k. In this article, we will only consider directed acyclic graphs that are nonempty and connected. We call them DAGs, for short, and denote the set of all DAGs over \u03a3 by D \u03a3 . Note that a DAG can have multiple roots, that is, there may be more than one node v \u2208 V such that tar(e) = v for all e \u2208 E. (By acyclicity, there is always at least one root.) For a node v \u2208 V we define the sets of incoming and outgoing edges of v in the obvious way: in(v) = {e \u2208 E | tar(e) = v} and out(v) = {e \u2208 E | src(e) = v}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1", |
| "sec_num": null |
| }, |
| { |
| "text": "As usual, the graph D is a tree if there is a node r \u2208 V, the root of D, such that every node v \u2208 V \\ {r} is reachable from r on exactly one directed path, i.e., there is exactly one sequence of edges e 1 , . . . , e k with k > 0 such that r = src(e 1 ), tar(e i ) = src(e i+1 ) for all 1 \u2264 i < k, and tar(e k ) = v. We use standard terminology regarding trees. In particular, a node v is a child of a node u if out(u) \u2229 in(v) = \u2205.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1", |
| "sec_num": null |
| }, |
| { |
| "text": "As mentioned in the previous section, the treewidth of DAGs plays an important role for the algorithms proposed in this article. We now recall the notions of tree decompositions and treewidth, at the same time introducing the specific notation that will be used later in the article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1", |
| "sec_num": null |
| }, |
| { |
| "text": "A tree decomposition of a graph D = (V, E, lab, src, tar) is a tree T whose nodes and edges we call bags and arcs, respectively, and whose node labels are subsets of V. For the sake of clarity, the label of bag b is denoted by cont(b) rather than by lab(b) and is called the content of b. T is required to satisfy the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 2", |
| "sec_num": null |
| }, |
| { |
| "text": "For every node v \u2208 V, there is a bag b such that v \u2208 cont(b).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "For every edge e \u2208 E, there is a bag b such that {src(e), tar(e)} \u2286 cont(b).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "For every node v \u2208 V, the subgraph of T induced by the bags b containing v is connected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "The width of T is the maximum of quantity |cont(b)| \u2212 1 computed over all bags b of T, and the treewidth of D is the minimum of the widths of its tree decompositions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
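The three conditions of Definition 2 and the width computation can be checked mechanically. The following is a minimal sketch (not from the paper); `check_decomposition` is a hypothetical helper that takes the graph's nodes and edge pairs, the bag contents `cont`, and the arcs of the tree T, asserts the three conditions, and returns the width of T.

```python
from collections import defaultdict

def check_decomposition(graph_nodes, graph_edges, cont, arcs):
    bags = set(cont)
    # 1. every graph node occurs in the content of some bag
    assert all(any(v in cont[b] for b in bags) for v in graph_nodes)
    # 2. every edge has both endpoints together in some bag
    assert all(any({s, t} <= cont[b] for b in bags) for (s, t) in graph_edges)
    # 3. for each node v, the bags containing v induce a connected subtree of T
    und = defaultdict(set)                      # undirected adjacency of T
    for b, c in arcs:
        und[b].add(c)
        und[c].add(b)
    for v in graph_nodes:
        holding = {b for b in bags if v in cont[b]}
        seen, frontier = set(), [next(iter(holding))]
        while frontier:                         # traverse within 'holding' only
            b = frontier.pop()
            if b not in seen:
                seen.add(b)
                frontier.extend(c for c in und[b] if c in holding)
        assert seen == holding
    # width of T = (maximum bag size) - 1
    return max(len(cont[b]) for b in bags) - 1
```

For instance, a path graph 1-2-3 admits the decomposition with bags {1,2} and {2,3}, of width 1, while the trivial one-bag decomposition always has width |V| - 1.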
| { |
| "text": "We note here that, in most definitions in the literature, the edges of a tree decomposition are undirected. In the context of this article, however, it is more convenient to define tree decompositions to be directed trees, because later on we will define algorithms that process our DAGs in an order that is guided by the arc directions in the associated tree decompositions. In order to turn an undirected tree decomposition into a directed one, just choose an arbitrary bag as the root, and establish edge directions accordingly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Consider the DAG D shown in Figure 5 (a). A possible tree decomposition T of D is displayed in Figure 5 (b), consisting of five bags, each containing a maximum of four nodes from D. It is easy to check that T satisfies the first two conditions in the definition of tree decomposition. Consider now node 5 of D. The bags of T containing this node are the three topmost ones. The subgraph of T induced by these bags is connected (and thus a tree in itself). The same holds true for any other node of D, and this shows that the third condition in the definition of tree decomposition is satisfied as well.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 36, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 95, |
| "end": 103, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Several other tree decompositions can be constructed for D. For instance, a trivial tree decomposition of D is the tree containing a single bag with all the nodes of D. However, it is not difficult to argue that every tree decomposition of D must have a bag that contains at least 4 nodes from D. Thus, the treewidth of D is 3. Informally, the size of the largest bags in a tree decomposition increases with the number of reentrancies that can be found along a path in the DAG.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 1", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "{1, 2, 3, 5} {2, 4, 5, 8} {3, 5, 6, 9} {4, 7} {6, 10}", |
| "eq_num": "(b)" |
| } |
| ], |
| "section": "Example 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us now embark on the definition of DAG automata. Informally, a DAG automaton consists of a set of nondeterministic transitions that read DAG nodes and associate states with their incoming and outgoing edges. Because we do not only want to recognize DAG languages but, more generally, want to be able to use DAG automata to associate a weight with each DAG, we define a more general version in which the transitions have weights taken from some semiring K. Throughout the entire paper, all semirings are assumed to be commutative-that is, not only the additive but also the multiplicative operator is commutative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "A weighted DAG automaton is a tuple M = (\u03a3, Q, \u03b4, K), where \u2022 \u03a3 is an alphabet of node labels \u2022 Q is a finite set of states \u2022 (K, \u2295, \u2297, 0, 1) is a semiring of weights (which we identify with its domain K if there is no danger of confusion)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 \u03b4 : \u0398 \u2192 K \\ {0}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "is a transition function that assigns nonzero weights to a finite set \u0398 of transitions of the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "t = \u27e8{q 1 , . . . , q m }, \u03c3, {r 1 , . . . , r n }\u27e9 \u2208 M(Q) \u00d7 \u03a3 \u00d7 M(Q)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": ", where m, n \u2265 0. If \u03b4(t) = w we also write this transition in the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "{q 1 , . . . , q m } \u03c3/w \u2212\u2212\u2192 {r 1 , . . . , r n }", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "As already mentioned, a DAG automaton processes an input DAG by assigning states to edges. A transition of the form of Equation (1) gets m states on the incoming edges of a node and puts n states on the outgoing edges. Alternatively, we may read the transition bottom-up, that is, it gets n states on the outgoing edges and puts m states on the incoming edges. As two special cases, note that when m = 0 in Equation (1) then the transition processes a root node, and when n = 0 it processes a leaf node.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the transition function \u03b4 : \u0398 \u2192 K \\ {0} assigns nonzero weights to the transitions of a DAG automaton. Intuitively, the weight of all transitions not in \u0398 is 0. Reflecting this intuition, we extend \u03b4 to the set of all possible transitions t \u2208 M(Q) \u00d7 \u03a3 \u00d7 M(Q) by defining \u03b4(t) = 0 for every t / \u2208 \u0398. In this way, \u03b4 is turned into a total function, which is sometimes convenient.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "The use of multisets of states in Equation (1) is needed because, when processing a node v, the same state might be assigned to several of the edges in in(v) or in out(v), and we have to specify the collection of all these state occurrences. As an example, assume |in(v)| = 3. Then we should distinguish between the scenario where the assigned states are {q, q, q\u2032} and the scenario where the assigned states are {q, q\u2032, q\u2032}.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us now formally define the semantics [[M] ] of a DAG automaton M as in Definition 3. As may be expected, [[M] ] maps every DAG over \u03a3 to its weight. A run of M on a DAG D with node set V and edge set E is a mapping \u03c1 : E \u2192 Q. The transition function \u03b4 extends to runs by taking the product of all local transition weights:", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 45, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 109, |
| "end": 113, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b4(\u03c1) = \u2297 v\u2208V \u03b4(\u27e8\u03c1(in(v)), lab(v), \u03c1(out(v))\u27e9) Now, [[M]] (D)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "is the sum of the weights of all runs on D:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "[[M]] (D) = \u2295 run \u03c1 on D \u03b4(\u03c1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
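The semantics [[M]](D) just defined can be computed by brute force for small examples: enumerate all runs ρ : E → Q, multiply the local transition weights at every node, and sum over runs. The sketch below is our illustration, not code from the paper; it instantiates the abstract semiring K with the real numbers under (+, ×), and the names `ms` and `weight` are hypothetical. Multisets of states are encoded as sorted (state, count) tuples so they can serve as dictionary keys, and transitions absent from Θ weigh 0.

```python
from itertools import product
from collections import Counter

def ms(states):
    # canonical, hashable encoding of a multiset of states
    return tuple(sorted(Counter(states).items()))

def weight(nodes, lab, src, tar, edges, delta, Q):
    """Sum over all runs rho: E -> Q of the product of local weights."""
    total = 0.0
    for assignment in product(Q, repeat=len(edges)):
        rho = dict(zip(edges, assignment))
        w = 1.0
        for v in nodes:
            t = (ms(rho[e] for e in edges if tar[e] == v),   # rho(in(v))
                 lab[v],
                 ms(rho[e] for e in edges if src[e] == v))   # rho(out(v))
            w *= delta.get(t, 0.0)   # transitions outside Theta weigh 0
        total += w
    return total
```

On a two-node DAG a → b with one edge and transitions ∅ →a/0.5→ {q} and {q} →b/2.0→ ∅, the unique run has weight 0.5 × 2.0 = 1.0. This enumeration is exponential in |E|; it only illustrates the definition, in contrast to the practical recognition algorithm of Section 5.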
| { |
| "text": "An unweighted DAG automaton is the special case of a DAG automaton in which K is the Boolean semiring. In this case, [[M] ] : D \u03a3 \u2192 {true, false} is the characteristic function of a subset of D \u03a3 . We generally identify such a characteristic function with the corresponding set, that is, [[M] ", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 121, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 288, |
| "end": 292, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "] = {D \u2208 D \u03a3 | [[M]] (D) = true},", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "and we simply write unweighted transitions in the form {q 1 , . . . , q m } \u03c3 \u2212 \u2192 {r 1 , . . . , r n }.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "An accepting run of M is a run whose weight is true, i.e., which uses only transitions of M.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us illustrate unweighted DAG automata with a small example, where the label alphabet \u03a3 is given by \u03a3 = {a, b}. In our example, a's have two children and can be roots whereas b's have two parents and can be leaves. We want the automaton to accept all DAGs such that no path contains more than two consecutive a's. To accomplish this, viewing a run as a top-down process, we need to use the states in order to keep track of whether we have recently seen zero, one, or two a's. Consequently, we let M = (\u03a3, Q, \u03b4, K), where Q = {0, 1, 2} and \u03b4 is given by the transitions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 2", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2205 a \u2212 \u2192 {1, 1} {i} a \u2212 \u2192 {i + 1, i + 1} for 0 \u2264 i \u2264 1 {i, j} b \u2212 \u2192 {0} | \u2205 for 0 \u2264 i, j \u2264 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 2", |
| "sec_num": null |
| }, |
| { |
| "text": "The notation in the last line, which will also be used later on, abbreviates two transitions, namely {i, j} b \u2212 \u2192 {0} and {i, j} b \u2212 \u2192 \u2205. An accepting run on a DAG in [[M] ] is shown in Figure 6 .", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 171, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 186, |
| "end": 194, |
| "text": "Figure 6", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 2", |
| "sec_num": null |
| }, |
| { |
| "text": "It may be instructive to note that the construction of a run of the automaton can be understood as a top-down or a bottom-up process. Under the top-down view, this particular automaton is deterministic: for each node the states on the incoming edges uniquely determine those on the outgoing edges. In contrast, under a bottomup view, thus essentially reading transitions backwards, the transitions for b create a nondeterministic behavior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 2", |
| "sec_num": null |
| }, |
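The automaton of Example 2 over the Boolean semiring can be exercised directly: a DAG is accepted iff some run uses only listed transitions. The brute-force sketch below is our illustration (the names `theta`, `ms`, and `accepts` are hypothetical); it encodes the transitions ∅ →a→ {1,1}, {i} →a→ {i+1,i+1} for 0 ≤ i ≤ 1, and {i,j} →b→ {0} | ∅ for 0 ≤ i,j ≤ 2, and searches over all state assignments.

```python
from itertools import product
from collections import Counter

def ms(states):
    # canonical, hashable encoding of a multiset of states
    return tuple(sorted(Counter(states).items()))

# The transition set Theta of Example 2, with Q = {0, 1, 2}.
theta = {(ms([]), "a", ms([1, 1]))}
theta |= {(ms([i]), "a", ms([i + 1, i + 1])) for i in (0, 1)}
theta |= {(ms([i, j]), "b", out)
          for i in (0, 1, 2) for j in (0, 1, 2)
          for out in (ms([0]), ms([]))}

def accepts(nodes, lab, src, tar, edges):
    """True iff some run rho: E -> {0,1,2} uses only transitions in theta."""
    for assignment in product((0, 1, 2), repeat=len(edges)):
        rho = dict(zip(edges, assignment))
        if all((ms(rho[e] for e in edges if tar[e] == v), lab[v],
                ms(rho[e] for e in edges if src[e] == v)) in theta
               for v in nodes):
            return True
    return False
```

An a-labeled root whose two outgoing edges both enter one b-labeled leaf is accepted (the run puts state 1 on both edges), while an isolated b node is rejected, since every b transition requires two incoming states.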
| { |
| "text": "A finite automaton for strings, as traditionally defined (Hopcroft and Ullman 1979) , is a special case of our DAG automata, where each transition has at most one incoming state and at most one outgoing state. Each DAG in the language recognized by such an automaton consists of one long path, and the vertex labels can be interpreted as tokens in a string. For example, the finite automaton with states 1, 2, 3 that reads an a, then any number of b's, then a c (accepting the string language ab*c) can be represented by a DAG automaton with the transitions:",
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 83, |
| "text": "(Hopcroft and Ullman 1979)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 3", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2205 a \u2212 \u2192 {q} {q} b \u2212 \u2192 {q} {q} c \u2212 \u2192 \u2205", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 3", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that empty state sets take the place of initial and final states in traditional finite automata. Similarly, our DAG automata generalize tree automata (Comon et al. 2002) , because a DAG automaton with transitions having at most one incoming state and any number of outgoing states will recognize a tree.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 174, |
| "text": "(Comon et al. 2002)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 3", |
| "sec_num": null |
| }, |
| { |
| "text": "We now present a linguistic example based on the sentence \"John wants Mary to believe him\" and its AMR representation D. In Figure 7 we display a fragment of the transitions of a DAG automaton M, along with an accepting run of M on D.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 124, |
| "end": 132, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 4", |
| "sec_num": null |
| }, |
| { |
| "text": "As already mentioned, although the standard AMR representation has labels on both edges and nodes, for simplicity we only consider DAGs with labels on nodes. We represent the edge labels of AMR, such as ARG0 and ARG1, as nodes with one incoming and one outgoing edge.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 4", |
| "sec_num": null |
| }, |
| { |
| "text": "We observe that our DAG automata could, without any change in the definitions, also be applied to directed acyclic graphs that may be disconnected, or even to graphs over \u03a3 containing cycles if this turns out to be of interest for some application. Transitions:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 4", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2205 want \u2212\u2212\u2192 {q want-arg0 , q want-arg1 } {q want-arg0 } ARG0 \u2212 \u2212\u2212 \u2192 {q person } {q want-arg1 } ARG1 \u2212 \u2212\u2212 \u2192 {q pred } {q pred } believe \u2212\u2212\u2212\u2192 {q believe-arg0 , q believe-arg1 } {q believe-arg0 } ARG0 \u2212 \u2212\u2212 \u2192 {q person } {q believe-arg1 } ARG1 \u2212 \u2212\u2212 \u2192 {q person } {q person , q person } John \u2212 \u2212 \u2192 \u2205 {q person } Mary \u2212 \u2212\u2212 \u2192 \u2205 Example run: want ARG1 believe ARG1 John q person ARG0 Mary q person q believe-arg0 q believe-arg1 q pred ARG0 q want-arg0 q want-arg1 q person", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 4", |
| "sec_num": null |
| }, |
| { |
| "text": "A DAG automaton (top) and an example run (bottom) on the AMR for the sentence \"John wants Mary to believe him.\" Of course, the algorithmic results presented in the following could not necessarily be assumed to hold in such a generalized case anymore.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "To conclude the present section, we discuss the theoretical implications of our choice to exclude edge labels for our DAGs. Assume for the moment that we did include edge labels, taken from an alphabet \u039b. A generalized transition applying to a node labeled with \u03c3 that has incoming edges labeled by \u03bb 1 , . . . , \u03bb m \u2208 \u039b and outgoing edges labeled by \u03bb 1 , . . . , \u03bb n \u2208 \u039b would then look like this:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "{q 1 :\u03bb 1 , . . . , q m :\u03bb m } \u03c3/w \u2212\u2212\u2192 {r 1 :\u03bb 1 , . . . , r n :\u03bb n }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "with the obvious semantics. We now show that there is no power added in using edge labels. To this end we generalize the approach of Example 4, where ARG0 and ARG1 are used as node labels rather than edge labels. Each edge e with label \u03bb can be encoded by splitting e into two new unlabeled edges e 1 and e 2 , and by adding a fresh node v with label \u03bb in between e 1 and e 2 . Using this special encoding, transitions of the form above can be implemented by an automaton as in Definition 3. For this, we enlarge our node labeling alphabet by adding the labels in \u039b to it. Further, the state set Q is replaced by (\u039b \u00d7 Q) \u222a (Q \u00d7 \u039b). The automaton contains all transitions {(q, \u03bb)} \u03bb/1 \u2212 \u2212 \u2192 {(\u03bb, q)}, for q \u2208 Q and \u03bb \u2208 \u039b. Thus, for each of the fresh nodes, the label is carried to the states on its incident edges, and the same state q is assigned to both edges, effectively simulating a single edge labeled with \u03bb and carrying the state q. Now, every generalized transition as above can be turned into the ordinary transition", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "{(q 1 , \u03bb 1 ), . . . , (q m , \u03bb m )} \u03c3/w \u2212\u2212\u2192 {(\u03bb 1 , r 1 ), . . . , (\u03bb n , r n )}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "It should be clear that this DAG automaton simulates the processing of the edge labels of the generalized DAG automaton in a faithful way.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
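The edge-label elimination just described is a purely mechanical rewriting of the transition table. The sketch below is our illustration of the construction (the helper name `eliminate_edge_labels` is hypothetical): generalized transitions over state:label pairs are mapped onto ordinary transitions over the enlarged state set (Λ × Q) ∪ (Q × Λ), and each pair (q, λ) gets the bridging transition {(q, λ)} →λ/1→ {(λ, q)} for the fresh λ-labeled node. State multisets are represented as sorted tuples of pairs.

```python
def eliminate_edge_labels(gen_transitions, Q, Lambda, one=1.0):
    """Turn generalized (edge-labeled) transitions into ordinary ones.

    gen_transitions maps (ins, sigma, outs) -> weight, where ins and outs
    are tuples of (state, label) pairs, i.e., the q_i:lambda_i of the text.
    """
    delta = {}
    for (ins, sigma, outs), w in gen_transitions.items():
        new_in = tuple(sorted((q, l) for q, l in ins))     # pairs (q, lambda)
        new_out = tuple(sorted((l, r) for r, l in outs))   # flipped: (lambda, r)
        delta[(new_in, sigma, new_out)] = w
    # bridging transitions {(q, l)} --l/1--> {(l, q)} for the fresh label nodes
    for q in Q:
        for l in Lambda:
            delta[(((q, l),), l, ((l, q),))] = one
    return delta
```

Because the bridging transitions carry weight 1, the weight of every run on the encoded DAG equals the weight of the corresponding run of the generalized automaton, which is the faithfulness claimed above.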
| { |
| "text": "Beyond the perspective of natural language processing, DAG automata have been investigated in several different domains, for instance to represent derivations in Chomsky type-0 phrase structure grammars (Kamimura and Slutzki 1981), to solve systems of set constraints (Charatonik 1999), or to process series-parallel graphs in pattern-matching applications (Fujiyoshi 2010). Kamimura and Slutzki (1981) define automata for two classes of DAGs. They primarily consider so-called d-DAGs, a recursively defined type of ordered planar DAG, where ordered means that there is a global total order on the set of nodes of the graph that implicitly orders the incoming and outgoing edges of each node. These d-DAGs are intended to model the derivations of type-0 grammars (equivalent to Turing machines). Accordingly, d-DAGs have bounded node degree and cannot have subgraphs matching certain Z-like patterns that would correspond to the same node being rewritten by two different rules. These restrictions are unsuitable when modeling natural language semantic structures. The authors also briefly consider DAGs without the planarity restriction, but still ordered in the sense mentioned above.",
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 237, |
| "text": "(Kamimura and Slutzki 1981)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 276, |
| "end": 293, |
| "text": "(Charatonik 1999)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 384, |
| "end": 411, |
| "text": "Kamimura and Slutzki (1981)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Formalisms", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Our definition of DAG automata is based on that of Quernheim and Knight (2012). Also motivated by modeling semantic representations of natural languages, Quernheim and Knight extend the automata of Kamimura and Slutzki (1981) by adding weights and by dropping the planarity restriction as well as the bound on the in-degree. In order to process nodes with unbounded in-degree, Quernheim and Knight exploit some ordering on the incoming edges at each node, and introduce so-called implicit rules that process these edges in several steps. In Section 7 we take a different, simpler approach for processing DAGs with unbounded node degree that can also handle unbounded out-degree. Overall, this article can be viewed as an in-depth exploration of the theoretical properties of a somewhat simplified version of the formalism of Quernheim and Knight.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 225, |
| "text": "Kamimura and Slutzki (1981)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Formalisms", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "There are also major notational differences with respect to our proposal: Quernheim and Knight (2012) essentially view computations as top-down rewriting processes, and the rewriting relation is defined via the introduction of specialized DAGs, called incomplete DAGs. In contrast, in our definition of run in Sections 3.1 and 7.2, there is no commitment to a specific rewriting process, which makes the notation somewhat simpler. Quernheim and Knight also show how to obtain weighted DAG-to-tree transducers, which could form the basis of a natural language generation system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Formalisms", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "With the goal of modeling ground terms in logical languages, Charatonik (1999) proposes devices that are mainly bottom-up tree automata running on DAGs, and states the external restriction, not implemented through the defined automata, that for these DAGs common substructures should be maximally shared. This maximal sharing condition is quite common in the literature on unification, but is unsuitable when modeling natural language semantic structures: Two copies of the same semantic substructure should be shared only when they refer to the same concept or action. A consequence of the maximal sharing is that, even in a nondeterministic automaton, isomorphic sub-DAGs are assigned the same state (because they are actually identical). This is exploited in the main result of Charatonik, the NP-completeness of the emptiness problem. This is in contrast with the polynomial time result for the same problem for our DAG automata, presented in Section 5.1. Anantharaman, Narendran, and Rusinowitch (2005) also work under the maximal sharing assumption, and solve in the negative the problem of closure under complementation that had been left as an open question by Charatonik (1999) . The authors consider the uniform membership problem for their automata, showing NP-hardness. Here, uniform means that the automaton is considered as part of the input. In our article and relative to our family of automata, we consider the easier problem of deciding membership for a fixed automaton, given only the DAG as input. Despite the more restricted question, we can show NP-hardness. Anantharaman, Narendran, and Rusinowitch also show that universality is undecidable for their automata. Finally, with the motivation of representing sets of terms by means of a single DAG, they also consider DAGs where each node has an additional Boolean label. This representation does not seem to be relevant for modeling of natural language semantic structures.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 78, |
| "text": "Charatonik (1999)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 960, |
| "end": 1007, |
| "text": "Anantharaman, Narendran, and Rusinowitch (2005)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1169, |
| "end": 1186, |
| "text": "Charatonik (1999)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "132", |
| "sec_num": null |
| }, |
| { |
| "text": "Fujiyoshi (2010) considers DAG automata that are essentially top-down tree automata. Such an automaton is said to accept a DAG if there exists a spanning tree of the DAG that is accepted by the automaton (viewed as a tree automaton). In particular, whenever a DAG is accepted, every other DAG obtained by adding edges is also accepted. This property does not seem to be desirable for modeling semantic structures. Similarly to our result, Fujiyoshi proves that the non-uniform membership problem is NP-complete, but although he also uses a reduction from SAT, the reduction itself is very different from ours (as expected, due to the differences in the automata models).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "132", |
| "sec_num": null |
| }, |
| { |
| "text": "Among the types of DAG automata studied in theoretical computer science, the model by Priese (2007) is the one that comes closest to the extended DAG automaton introduced in Section 7, even though Priese uses an algebraic setting to describe it. The major difference is that the DAG automata of Priese are able to check that the multiset of states assigned to the roots and leaves of the input DAG belongs to a given regular set, in the sense of Section 7.1. For example, it is possible to express the condition that recognized DAGs shall have a unique root. At first sight, this may appear to be a minor point, but this is not so. Section 4.1 shows that the path languages of our model are regular, whereas they are not even context-free once it becomes possible to express that a DAG has a unique root (which is also observed by Priese [2007] ). We consider this to be an indication that our DAG automata are better suited for studying semantic structures because we expect those to have regular path languages, and in the interest of algorithmic results one should not use unnecessarily powerful models. In the more general setting of Priese, our recognition algorithm does not apply, and our proof of the polynomial decidability of the emptiness problem, and the corresponding result for finiteness of Blum and Drewes (2016) , break down. Apart from the mentioned study of path languages, the questions studied by Priese are essentially disjoint with those studied in this article.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 99, |
| "text": "Priese (2007)", |
| "ref_id": null |
| }, |
| { |
| "start": 831, |
| "end": 844, |
| "text": "Priese [2007]", |
| "ref_id": null |
| }, |
| { |
| "start": 1306, |
| "end": 1328, |
| "text": "Blum and Drewes (2016)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "132", |
| "sec_num": null |
| }, |
| { |
| "text": "Another automaton model for graph processing is the graph acceptor by Thomas (1991, 1996). A graph acceptor consists mainly of a finite set of pairwise non-isomorphic r-tiles that play the role of the rules. Each tile is an r-sphere (i.e., a graph with a center node whose distance to all other nodes is at most r). Each node of such a tile carries a state. A run on an input graph G is then a mapping of states to the nodes of G such that each node is the center of one of the tiles. The definition of the graph acceptor includes an occurrence constraint, a Boolean combination of conditions that restrict the number of occurrences of each tile. A given run is accepting if the occurrence constraint is satisfied. The expressiveness of the model can be characterized by existential monadic second-order logic (Thomas 1996), and it can be extended by weights (Droste and D\u00fcck 2015). Similar to our basic (non-extended) model, graph acceptors of this type recognize graph languages of bounded degree. However, because of the overlapping of tiles in runs and the occurrence constraint, they are considerably more powerful than our DAG automata (and thus too powerful for our purposes) unless the tiles are required to have radius 0 (i.e., they are single nodes). The latter restriction results in too weak a model, because it cannot say anything about the edges in the graph if each tile is just a single node.",
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 82, |
| "text": "Thomas (1991", |
| "ref_id": "BIBREF82" |
| }, |
| { |
| "start": 83, |
| "end": 98, |
| "text": "Thomas ( , 1996", |
| "ref_id": "BIBREF83" |
| }, |
| { |
| "start": 820, |
| "end": 833, |
| "text": "(Thomas 1996)", |
| "ref_id": "BIBREF83" |
| }, |
| { |
| "start": 870, |
| "end": 892, |
| "text": "(Droste and D\u00fcck 2015)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "132", |
| "sec_num": null |
| }, |
| { |
| "text": "More results on the (non-extended) DAG automata invented in this article were proved by Blum (2015) and Blum and Drewes (2016) . In particular, an alternative proof of the regularity of path languages was given in Blum (2015) (which is simpler and more constructive, but was conceived after the proof in Section 4.1), and the polynomial decidability of the finiteness problem was proved.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 99, |
| "text": "Blum (2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 104, |
| "end": 126, |
| "text": "Blum and Drewes (2016)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "132", |
| "sec_num": null |
| }, |
| { |
| "text": "Without going into further detail, we mention here some additional publications by diverse authors on automata recognizing DAGs or graphs: Bossut, Dauchet, and Warin (1988) ; Kaminski and Pinter (1992) ; Potthoff, Seibert, and Thomas (1994) ; Bossut, Dauchet, and Warin (1995) . Furthermore, there exists considerable work within the XML community on evaluating tree automata and logical queries on compressed representations of trees, which are DAGs (see, e.g., Frick, Grohe, and Koch 2003; Lohrey and Maneth 2006) . This work seems to be only tangentially related to the present article because it is not interested in the DAG as a structure in its own right (and automata that define DAG languages), as we are.", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 172, |
| "text": "Bossut, Dauchet, and Warin (1988)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 175, |
| "end": 201, |
| "text": "Kaminski and Pinter (1992)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 204, |
| "end": 240, |
| "text": "Potthoff, Seibert, and Thomas (1994)", |
| "ref_id": "BIBREF71" |
| }, |
| { |
| "start": 243, |
| "end": 276, |
| "text": "Bossut, Dauchet, and Warin (1995)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 463, |
| "end": 491, |
| "text": "Frick, Grohe, and Koch 2003;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 492, |
| "end": 515, |
| "text": "Lohrey and Maneth 2006)", |
| "ref_id": "BIBREF58" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "132", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section we consider only unweighted DAG automata. We explore three properties of such DAG automata and of the (unweighted) DAG languages recognized by them:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties", |
| "sec_num": "4." |
| }, |
| { |
| "text": "r With multiple roots, the path languages of DAG automata are regular; but not under the constraint of a single root (Section 4.1). r Testing for emptiness of DAG automata is decidable under our definition, but not under the original definition by Kamimura and Slutzki (Section 4.3) .", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 282, |
| "text": "Kamimura and Slutzki (Section 4.3)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The results in this section are not required for understanding Sections 5-7 on recognition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Reading the labels of nodes on the paths in a DAG D from a root to a leaf yields the path language of the DAG, denoted by paths(D). (In the following, all paths are assumed to start at a root; their rootedness will thus no longer be mentioned.) The path language of a set L of DAGs is the union paths(L) = D\u2208L paths(D) of the path languages of its individual DAGs. We now show that the path language of a recognizable DAG language is always a regular string language. Thus, in this respect our DAG automata are similar to those by Kamimura and Slutzki (1981) , whose path languages are trivially regular. However, if we restrict recognizable DAG languages to DAGs with only one root, then this no longer holds. In fact, in this case even non-context-free path languages are obtained, as in the case of Priese (2007) .", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 558, |
| "text": "Kamimura and Slutzki (1981)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 802, |
| "end": 815, |
| "text": "Priese (2007)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Let us first show that path languages of recognizable DAG languages (without the restriction to unique roots) are regular. To this end, recall that we have defined DAGs as connected directed acyclic graphs. Let us now drop the connectedness assumption, and consider arbitrary directed acyclic graphs, which we call nc-DAGs. Then any nc-DAG can be written as the finite disjoint union", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "D 1 + \u2022 \u2022 \u2022 + D k of (connected) DAGs D 1 , . . . , D k , for k \u2265 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Here D + D is used to denote the disjoint union of DAGs D and D .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We define as [[M] ] + the language of nc-DAGs recognized by M: In words, each nc-DAG in [[M] ] + is the disjoint union of one or more DAGs from [[M] ]. We extend our definition of path language of a DAG to nc-DAGs and to languages of nc-DAGs.", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 17, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 88, |
| "end": 92, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 144, |
| "end": 148, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Let D 1 , . . . , D k \u2208 [[M]]. We have paths(D 1 + \u2022 \u2022 \u2022 + D k ) = paths(D 1 ) \u222a \u2022 \u2022 \u2022 \u222a paths(D k ). It di- rectly follows that paths([[M]]) and paths([[M]] + )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "coincide. This observation will be used later to simplify our proof.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Another useful observation is the following. Consider a DAG D = (V, E, lab, src, tar) and two edges e 1 , e 2 \u2208 E. Let D[e 1 \u2194 e 2 ] denote the graph that is obtained from D by interchanging the targets of e 1 and e 2 . More precisely, if v i (i = 1, 2) is the node such that e i \u2208 in(v i ), then D[e 1 \u2194 e 2 ] has e 1 \u2208 in(v 2 ) and e 2 \u2208 in(v 1 ) but is otherwise identical to D. It is not difficult to see that the edge interchange operator we have just defined might introduce cycles, that is, D[e 1 \u2194 e 2 ] may no longer be a DAG. However, in what follows we will use this operator in a restricted way, such that the resulting graph is still a DAG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Suppose that D \u2208 [[M] ] + and let \u03c1 be an accepting run of M on D.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 21, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "If D = D 1 + D 2 and, for i = 1, 2, e i is an edge of D i such that \u03c1(e 1 ) = \u03c1(e 2 ), then D[e 1 \u2194 e 2 ] \u2208 [[M]] + . This is true because D[e 1 \u2194 e 2 ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "is still acyclic (because e 1 and e 2 belong to distinct connected components of D) and \u03c1 is an accepting run on D[e 1 \u2194 e 2 ] as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
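The interchange operator and its acyclicity caveat can be sketched as follows (an illustrative encoding of ours, with edges given as (source, target) pairs indexed by position):

```python
from collections import defaultdict, deque

def interchange(edges, i, j):
    """D[e_i <-> e_j]: swap the targets of the edges at positions i and j."""
    es = list(edges)
    (s1, t1), (s2, t2) = es[i], es[j]
    es[i], es[j] = (s1, t2), (s2, t1)
    return es

def is_acyclic(edges):
    """Kahn's algorithm: the graph is acyclic iff every node can be scheduled."""
    indeg, out, nodes = defaultdict(int), defaultdict(list), set()
    for s, t in edges:
        out[s].append(t)
        indeg[t] += 1
        nodes.update((s, t))
    queue = deque(v for v in nodes if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(nodes)

# Swapping targets across two disjoint components preserves acyclicity...
print(is_acyclic(interchange([(1, 2), (3, 4)], 0, 1)))  # → True
# ...but within one component it may create a cycle (here, a self-loop 2 -> 2):
print(is_acyclic(interchange([(1, 2), (2, 3)], 0, 1)))  # → False
```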
| { |
| "text": "Now, let us turn M into M by adding a unique leaf symbol , adding a new state f and the transition {f } \u2212 \u2192 \u2205, and turning all original transitions of the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "{q 1 , . . . , q k } a \u2212 \u2192 \u2205 into {q 1 , . . . , q k } a \u2212 \u2192 {f }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Thus, the DAGs recognized by M are those originally recognized by M, but with additional leaves labeled added as leaves below the original leaves, and the accepting runs of M are those of M, extended by labeling the (unique) outgoing edges of the original leaves with f .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For a string w \u2208 \u03a3 + , let \u2206(w) denote the set of all states q for which there exists an accepting run \u03c1 of M on a DAG D such that some path labeled w leads to an edge e with \u03c1(e) = q. Hence, paths(", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "[[M]]) = {w | f \u2208 \u2206(w)}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "By the Myhill-Nerode theorem, it therefore suffices to show that the equivalence relation \u223c, given by w 1 \u223c w 2 if and only if \u2206(w 1 ) = \u2206(w 2 ), is a right congruence. In other words, if \u2206(w 1 ) = \u2206(w 2 ) and w is any string, then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "w 1 w \u2208 paths([[M]]) if and only if w 2 w \u2208 paths([[M]]).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "So, assume that \u2206(w 1 ) = \u2206(w 2 ) and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "w 1 w \u2208 paths([[M]]). Then there is some D 1 \u2208 [[M ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "] containing a path p whose node labels are w 1 w . Let \u03c1 1 be a run on D 1 and consider the |w 1 |-th edge e 1 on p, i.e., the edge between w 1 and w. Then \u03c1 1 (e 1 ) \u2208 \u2206(w 1 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "D 2 \u2208 [[M ]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": ", edge e 2 , and accepting run \u03c1 2 such that some path to e 2 in D 2 is labeled by w 2 and \u03c1 (e 2 ) = \u03c1 1 (e 1 ). Note that D 2 , e 2 , and \u03c1 2 exist because \u2206(w 1 ) = \u2206(w 2 ). Now, let D = D 1 + D 2 . By the observation above the graph D[e 1 \u2194 e 2 ] is in [[M ] ] + . Furthermore, it obviously contains the path w 2 w , which means that f \u2208 \u2206(w 2 w) and thus w 2 w \u2208 paths([[M]]), as required.", |
| "cite_spans": [ |
| { |
| "start": 257, |
| "end": 262, |
| "text": "[[M ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "We have thus shown that the path language of every recognizable DAG language is a regular string language. The proof of this statement relies crucially on the fact that DAGs in [[M] ] may have several roots: We considered the disconnected graph", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 181, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "D = D 1 + D 2 \u2208 [[M ]] + and turned it into D[e 1 \u2194 e 2 ] \u2208 [[M ]] + .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "However, the latter may be connected and may thus, in fact, be an element of [[M ] ], while containing the roots of both D 1 and D 2 .", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 82, |
| "text": "[[M ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "To end this section, we discuss two examples showing that, indeed, the regularity of path languages (and even its context-freeness) is lost if single-rootedness is imposed on the DAGs (see Priese [2007] for similar arguments). More precisely, let", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 202, |
| "text": "Priese [2007]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "[[M]] s = {D \u2208 [[M]] | D has only one root}. Then paths([[M]] s )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "is not necessarily contextfree, as the following two examples show.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choose any", |
| "sec_num": null |
| }, |
| { |
| "text": "Let \u03a3 = {a, b, \u2022} and consider the DAG automaton with states q, r, r , s and transitions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2205 a \u2212 \u2192 {q, r}, {q} a \u2212 \u2192 {q, r} {r} \u2022 \u2212 \u2192 {r } {q, r } b \u2212 \u2192 {s}, {s, r } b \u2212 \u2192 {s} | \u2205", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 5", |
| "sec_num": null |
| }, |
| { |
| "text": "In an accepting run on a DAG having a single root (labeled by a) every a is related to a uniquely determined b, and vice versa, by paths of the form a \u2192 \u2022 \u2192 b (where \u03c1(e) = r and \u03c1(e ) = r for the incoming and outgoing edge, respectively). Hence, the intersection of the path language of [[M] ] s with a * b * is {a n b n | n \u2265 1}, a strictly context-free language. This means that the path language of [[M] ] s cannot be regular. Note that the construction breaks down if arbitrarily many roots are allowed (i.e., if [[M] ] is considered); in this case, no \"counting\" is possible and we simply get a + b + .", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 292, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 403, |
| "end": 407, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 518, |
| "end": 522, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 5", |
| "sec_num": null |
| }, |
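As an illustration of Example 5 (our own sketch, encoding each multiset of states as a sorted tuple and writing r1 for r' and "." for \u2022), the following checks the local acceptance condition of a run, and verifies it on a single-rooted DAG accepted by this automaton whose longest path reads aabb:

```python
from collections import defaultdict

def key(states):
    # A multiset of states, encoded canonically as a sorted tuple.
    return tuple(sorted(states))

def is_accepting_run(labels, edges, rho, transitions):
    """rho assigns a state to each edge (by index). The run is accepting iff,
    at every node, the multiset of states on its incoming edges, its label,
    and the multiset of states on its outgoing edges form a transition."""
    allowed = {(key(l), a, key(r)) for l, a, r in transitions}
    ins, outs = defaultdict(list), defaultdict(list)
    for idx, (s, t) in enumerate(edges):
        outs[s].append(rho[idx])
        ins[t].append(rho[idx])
    return all((key(ins[v]), labels[v], key(outs[v])) in allowed for v in labels)

# The transitions of Example 5:
T = [([], "a", ["q", "r"]), (["q"], "a", ["q", "r"]), (["r"], ".", ["r1"]),
     (["q", "r1"], "b", ["s"]), (["s", "r1"], "b", ["s"]), (["s", "r1"], "b", [])]
# A single-rooted accepted DAG for n = 2: nodes a a . . b b.
labels = {1: "a", 2: "a", 3: ".", 4: ".", 5: "b", 6: "b"}
edges = [(1, 2), (1, 3), (2, 5), (2, 4), (3, 5), (4, 6), (5, 6)]
rho = ["q", "r", "q", "r", "r1", "r1", "s"]
print(is_accepting_run(labels, edges, rho, T))  # → True
```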
| { |
| "text": "In a similar way, one may build M such that paths(", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
| { |
| "text": "[[M]] s ), intersected with {a 1 , . . . , a k } * , k \u2265 2, is equal to MIX(k)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
| { |
| "text": ", the language of all strings over the alphabet {a 1 , . . . , a k } that contain the same number of occurrences of each symbol in this alphabet. To simplify the construction, we show how to obtain all strings of the form \u2022w such that w \u2208 MIX(k).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Let \u03a3 = {\u2022, a 1 , . . . , a k }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
| { |
| "text": "We use states q, r, r 1 , . . . , r k and the following transitions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2205 \u2022 \u2212 \u2192 {q, r} {r} \u2022 \u2212 \u2192 {r, r 1 , . . . , r k } | \u2205 {q, r i } a i \u2212 \u2192 {q} | \u2205 for 1 \u2264 i \u2264 k", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
| { |
| "text": "An example run of this automaton (for k = 3) is illustrated in Figure 8 . Similarly to the previous example, taking the intersection of paths([[M]] s ) with the regular language {\u2022w | w \u2208 {a 1 , . . . , a k } * } yields the intended language. The reader should easily be able to adapt the automaton in such a way that the initial \u2022 is dropped.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 63, |
| "end": 71, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 6", |
| "sec_num": null |
| }, |
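Membership in MIX(k) itself is a simple counting condition; the check below is an illustrative sketch of ours, taking a string as a sequence of symbols:

```python
from collections import Counter

def in_mix(w, alphabet):
    """w is in MIX(k) over the given alphabet iff every symbol of the
    alphabet occurs in w equally often (and no other symbol occurs)."""
    counts = Counter(w)
    if set(counts) - set(alphabet):
        return False
    return len({counts[a] for a in alphabet}) == 1

print(in_mix(list("abccba"), "abc"))  # → True
print(in_mix(list("aabbc"), "abc"))   # → False
```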
| { |
| "text": "Example run of the DAG automaton that recognizes paths of the form \u2022w where w \u2208 MIX(k).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 8", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that, whereas MIX(2) is well known to be context-free, MIX(k) is not contextfree for any k > 2. It has been recently shown (Salvati 2014 ) that MIX(3) can be generated by a Linear Context-Free Rewriting System (Vijay-Shanker, Weir, and Joshi 1987), but it is unknown whether MIX(k), k > 3, can be generated by this class.", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 141, |
| "text": "(Salvati 2014", |
| "ref_id": "BIBREF77" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 8", |
| "sec_num": null |
| }, |
| { |
| "text": "A hyperedge replacement grammar (HRG, see Drewes, Kreowski, and Habel [1997] for an overview) is a context-free type of graph grammar. It can in particular be used to generate DAG languages. Recognition algorithms for HRGs (Lautemann 1990; Chiang et al. 2013 ) can be thought of as constructions that intersect the graph language [[G] ] generated by an HRG G with a single graph. But just as the Cocke-Kasami-Younger algorithm for context-free grammars can be thought of as a special case of intersecting a context-free language with a regular language (Bar-Hillel, Perles, and Shamir 1961), we would more generally like to be able to intersect [[G] ] with any recognizable DAG language. In other words, given an unweighted DAG automaton M, we would like to construct an HRG G such that [[G ] ", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 76, |
| "text": "Drewes, Kreowski, and Habel [1997]", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 223, |
| "end": 239, |
| "text": "(Lautemann 1990;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 240, |
| "end": 258, |
| "text": "Chiang et al. 2013", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 330, |
| "end": 334, |
| "text": "[[G]", |
| "ref_id": null |
| }, |
| { |
| "start": 645, |
| "end": 649, |
| "text": "[[G]", |
| "ref_id": null |
| }, |
| { |
| "start": 787, |
| "end": 792, |
| "text": "[[G ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "] = [[G]] \u2229 [[M]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To discuss briefly how this can be done, we need to give a rough introduction to HRGs (adapted to the setting and terminology of the current article). An HRG comes with a ranked alphabet N of nonterminal hyperedge labels, in addition to the alphabet \u03a3 of node labels. Here, saying that N is ranked means that N is specified as a disjoint union N = k N k , where the elements of N k are said to be the symbols of rank k. A hypergraph H is a graph that may, in addition to the usual elements, contain a finite set of hyperedges. Each hyperedge h has a label lab(h) \u2208 N k for some k \u2208 N and a sequence att(h) \u2208 V k of attached nodes. We also view h as having k tentacles that connect it to its k attached nodes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "An HR rule r = (L ::= R) consists of a left-hand side L and a right-hand side R. L is a hypergraph that consists of a single hyperedge h labeled by some X \u2208 N k , together with the attached nodes of h, say u 1 , . . . , u k . These nodes should be thought of as being unlabeled as their label is irrelevant. The right-hand side is a hypergraph whose set of nodes also contains u 1 , . . . , u k (among other nodes). Suppose that a host hypergraph H contains a hyperedge h labeled with X and attached to nodes v 1 , . . . , v k . Then the rule r can be applied to it, which yields the hypergraph obtained by removing h from H and inserting the right-hand side of r in its place by identifying each u i with the corresponding v i . Figure 9 shows an example of a rule and its application. An HRG G consists of an alphabet N of nonterminals as before, an initial nonterminal of rank 0, and a finite set of HR rules. The generated graph language [[G] ], called an HR language, consists of all graphs that can be derived from the initial nonterminal by repeated rule application.", |
| "cite_spans": [ |
| { |
| "start": 942, |
| "end": 946, |
| "text": "[[G]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 730, |
| "end": 738, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Suppose now that we are given an HRG G that generates graphs over \u03a3, and an unweighted DAG automaton M that recognizes a DAG language over \u03a3. We want to construct another HRG G such that [[G ] ", |
| "cite_spans": [ |
| { |
| "start": 187, |
| "end": 192, |
| "text": "[[G ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "] = [[G]] \u2229 [[M]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ". That this is possible follows from several known results, but most easily using monadic second-order (MSO) logic. Courcelle (1990, Corollary 4.8) shows that the restriction of an HR language by a property expressible in MSO logic yields an HR language (for which a suitable HRG can effectively be constructed). Thus, it suffices to argue that every recognizable DAG language is definable by an MSO formula. Suppose we want to express in MSO logic that a given DAG automaton with state set Q = {q 1 , . . . , q n } accepts a DAG D = (V, E, lab, src, tar). We can do this by constructing an MSO formula that \"guesses\" an accepting run \u03c1. The formula states that there exists a partition of E into subsets E 1 , . . . , E n such that the following holds:", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 147, |
| "text": "Courcelle (1990, Corollary 4.8)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Intersection with Hyperedge Replacement Languages", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "An HR rule (top) and its application to a hyperedge (bottom). For better visibility the replaced hyperedge as well as the portion of the resulting hypergraph that is added by the rule are drawn using thick lines. E 1 , . . . , E n such that the following holds:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "For every node v \u2208 V with in(v) = {e 1 , . . . , e m } and out(v) = {e 1 , . . . , e n }, there exist i 1 , . . . , i m and j 1 , . . . , j n such that 1. {q i 1 , . . . , q i m } lab(v) \u2212 \u2212\u2212 \u2192 {q j 1 , . . . , q j n } is a transition of M and 2. e r \u2208 E i r for 1 \u2264 r \u2264 m and e s \u2208 E j s for 1 \u2264 s \u2264 n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Intuitively, E i corresponds to the set of all edges e for which \u03c1(e) = q i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us now sketch a direct construction of G . Without loss of generality, we may assume that [[G] ] is a set of DAGs, because it is well known that the class of HR languages is closed under intersection with the set of all connected acyclic graphs. (This is, in fact, another application of the closedness under intersection with MSO properties.) The idea behind the construction of G is to use a guess-and-verify strategy to guarantee that only those graphs are generated that have accepting runs in M. To implement this strategy, we augment the nonterminal labels of hyperedges with the guessed information. To understand this, note that every tentacle of a hyperedge intuitively controls a node to which the derivation of this hyperedge will eventually attach a number of incoming and outgoing edges. We have to guess beforehand the multiset of states that an accepting run will assign to these edges. To keep track of this guess, we have to remember two multisets of states for each tentacle, one referring to outgoing edges and one referring to incoming edges that will be generated. Consequently, the new sets", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 98, |
| "text": "[[G]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "N k of nonterminal labels of rank k consist of all (X, \u00b5 1 \u2022 \u2022 \u2022 \u00b5 k , \u00b5 1 \u2022 \u2022 \u2022 \u00b5 k ) such that X \u2208 N k and \u00b5 1 , . . . , \u00b5 k , \u00b5 1 , .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": ". . , \u00b5 k are multisets of states in Q. The size of these multisets is bounded by the maximum size of multisets in the transitions of M. This makes sure that the set of nonterminals is finite. The initial nonterminal is (X 0 , \u03bb, \u03bb), where X 0 is the initial nonterminal of G and \u03bb is the empty sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "To define the rules of G , we need a few preparations. Consider a hypergraph H over \u03a3 and N and a function \u03c1 that maps every (ordinary) edge e of H to a state \u03c1(e) \u2208 Q. In the following, we call \u03c1 a state assignment for H. Given such a state assignment and a node v of H, we let in \u03c1 (v) denote the multiset of states obtained by taking the union of, first, all {\u03c1(e)} with e \u2208 in(v) and, second, all \u00b5 i such that there is a hyperedge h", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "labeled (X, \u00b5 1 \u2022 \u2022 \u2022 \u00b5 k , \u00b5 1 \u2022 \u2022 \u2022 \u00b5 k )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "whose ith tentacle is attached to v. Similarly, out \u03c1 (v) is the union of all {\u03c1(e)} with e \u2208 out(v) and all \u00b5 i such that there is a hyperedge h labeled", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "(X, \u00b5 1 \u2022 \u2022 \u2022 \u00b5 k , \u00b5 1 \u2022 \u2022 \u2022 \u00b5 k ) whose ith tentacle is attached to v.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Now, consider all HR rules L ::= R that can be obtained from rules of G by augmenting each nonterminal label in all possible ways. A rule L ::= R obtained in this way becomes a rule of G if there exists a state assignment \u03c1 for R such that 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "(v) = in \u03c1 (v) and out (v) = out \u03c1 (v) for all nodes v of L,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "where is the unique (empty) state assignment for L, and ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "2. in \u03c1 (v) lab(v) \u2212 \u2212\u2212 \u2192 out \u03c1 (v) is a transition of M for every node v of R which is not in L.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "The emptiness problem for DAG automata asks, for an unweighted DAG automaton M as input, whether [[M] ] = \u2205. As mentioned earlier, the DAG automata of Kamimura and Slutzki (1981) can encode computations of Turing machines. In particular, this means that their emptiness problem is undecidable. As we shall see next, this sharply distinguishes their DAG automata from ours, whose emptiness problem can be decided in polynomial time as it can be reduced to a particular case of the reachability problem for Petri nets. A similar idea was used by Kaminski and Pinter (1992) to prove the decidability of the emptiness problem for their graph automata. However, in their case it required the use of the general Petri net reachability problem, which leads to an algorithm whose running time is non-elementary. In contrast, we obtain a polynomial algorithm.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 101, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 151, |
| "end": 178, |
| "text": "Kamimura and Slutzki (1981)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 544, |
| "end": 570, |
| "text": "Kaminski and Pinter (1992)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Let us first briefly recall Petri nets. A Petri net is an unlabeled directed graph N = (V, E, src, tar) such that V consists of disjoint sets T and P of transitions and places. Edges only point from places to transitions and from transitions to places (i.e., N is a bipartite graph). A marking is a mapping \u00b5 : P \u2192 N that assigns to every place p a number \u00b5(p) of tokens. Intuitively, the idea is that a transition t consumes tokens via edges leading from places to t and it produces tokens via edges leading from t to some places. We make this more precise, as follows. For a place p and a transition t let input t (p) = |in(t) \u2229 out(p)| be the number of times p occurs as an input place of t. Similarly, let output t (p) = |out(t) \u2229 in(p)| be the number of times p occurs as an output place of t. For a given marking \u00b5, a transition t can fire if \u00b5(p) \u2265 input t (p) for each place p, that is, if there are enough tokens on the input places of t. In this case, the firing of t yields the marking \u00b5 given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u00b5 (p) = \u00b5(p) \u2212 input t (p) + output t (p) for all p \u2208 P.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Note that a place p can be an input and output place of t at the same time-that is, we may have input t (p) > 0 and output t (p) > 0. A simple example of a Petri net consisting of one transition together with its input and output places is shown in Figure 10 , where the bar represents the transition and the circles represent places. The transition can fire if the topmost place contains at least one token. If it does fire, the token on the topmost place is immediately reproduced. At the same time, four additional tokens are placed on the places at the bottom, namely, two on the place in the middle and one on each of the other two places.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 258, |
| "text": "Figure 10", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Naturally, a firing sequence is a sequence of admissible firings. It transforms an initial marking into a final marking. The Petri net reachability problem is the following problem: Given a Petri net and two markings \u00b5, \u00b5 , is \u00b5 reachable from \u00b5 via some firing sequence? This problem is known to be decidable, but no solution with a primitive recursive running time is known (Reutenauer 1990; Esparza and Nielsen 1994). Fortunately, for our purpose it suffices to consider the case where both \u00b5 and \u00b5 are equal to the zero marking 0, that is, \u00b5(p) = \u00b5 (p) = 0 for all places p. If 0 is reachable from itself in a Petri net N via a nonempty firing sequence, then we say that N is structurally cyclic, because then it holds for all markings \u00b5 that \u00b5 is reachable from itself. Drewes and Leroux (2015) show that it is decidable in polynomial time whether a Petri net is structurally cyclic.", |
| "cite_spans": [ |
| { |
| "start": 775, |
| "end": 799, |
| "text": "Drewes and Leroux (2015)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We can reduce the emptiness problem for DAG automata M to the question of whether a Petri net is structurally cyclic, as follows. Every state of the DAG automaton becomes a place of the Petri net N and every transition t of M becomes a transition of N in an obvious way: for 1 \u2264 i \u2264 m, there is an edge pointing from q i to t, and for 1 \u2264 j \u2264 n, there is one pointing from t to r j .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "= ({q 1 , . . . , q m } \u03c3 \u2212 \u2192 {r 1 , . . . , r n })", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "To argue for the correctness of the construction, let us consider DAGs that are partial in the sense that, for some edges e, there is no node v with e \u2208 in(v). Such edges are \"downward dangling.\" Now, given a firing sequence starting with the empty marking, we can inductively construct a run on a corresponding partial DAG. The initial empty marking of N corresponds to the empty DAG (with no nodes and zero dangling edges). After some firings the Petri net has reached a marking \u00b5 and we have inductively constructed a partial DAG D and a run \u03c1 on D such that for each state q, there are exactly \u00b5(q) dangling edges e with \u03c1(e) = q. Now suppose that, in N, a transition fires, which was obtained from transition {q 1 , . . . , q m } \u03c3 \u2212 \u2192 {r 1 , . . . , r n } of the DAG automaton M. To reflect the firing of t, we add a node v labeled by \u03c3 to D and choose previously dangling edges e 1 , . . . , e m with \u03c1(e i ) = q i as incoming edges of v; n new outgoing dangling edges e 1 , . . . , e n are attached to v, and \u03c1 is extended by defining \u03c1(e i ) = r i for 1 \u2264 i \u2264 n. Clearly, D is a DAG without dangling edges if \u00b5 = 0. Thus, [[M] ] = \u2205 if 0 is reachable from itself in N. In a similar way, if M accepts a DAG D, a run of M on D can easily be turned into a nonempty firing sequence of N (under the top-down interpretation of runs) that turns 0 into itself. Thus, we have reduced the emptiness problem for DAG automata to the problem of deciding whether a Petri net is structurally cyclic. Clearly, the reduction can be computed in polynomial time (and, in fact, in logarithmic space). Using the main result of Drewes and Leroux (2015) mentioned earlier, we conclude that the emptiness problem for DAG automata is decidable in polynomial time.", |
| "cite_spans": [ |
| { |
| "start": 1131, |
| "end": 1135, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 1615, |
| "end": 1639, |
| "text": "Drewes and Leroux (2015)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emptiness", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We consider the recognition problem for unweighted DAG automata: For a DAG automaton M and a DAG D, does M accept D? This problem turns out to be NP-complete even in case M is fixed (i.e., instead of both M and D, only D is the input). (In theoretical computer science, the variant where M is part of the input is called the uniform membership problem; accordingly, the one where M is fixed is the potentially easier non-uniform one.) The situation is similar to that of the recognition problem based on the hyperedge replacement grammar introduced in Section 4.2, which is NP-complete even for a fixed grammar (Aalbersberg, Rozenberg, and Ehrenfeucht 1986; Lange and Welzl 1987) . On the positive side, as we shall see in Section 6, recognition by a (fixed) DAG automaton can be done in polynomial time for input graphs of bounded treewidth, which is encouraging in view of Table 1.", |
| "cite_spans": [ |
| { |
| "start": 611, |
| "end": 657, |
| "text": "(Aalbersberg, Rozenberg, and Ehrenfeucht 1986;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 658, |
| "end": 679, |
| "text": "Lange and Welzl 1987)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "5." |
| }, |
| { |
| "text": "It is easy to see that recognition is in NP even if the automaton is part of the input: We can nondeterministically \"guess\" an assignment of states to the edges of D and check in linear time whether it constitutes an accepting run of M. Next, we show that recognition is NP-complete. Like Fujiyoshi (2010), we do this by reduction from SAT, but the reduction is different (because our DAG automata differ essentially from his).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Because we want to prove NP-completeness of the non-uniform membership problem, that is, for a fixed DAG automaton, we construct a single DAG automaton M and a reduction that turns any propositional formula \u03c6 into a DAG D \u03c6 that is accepted by M if and only if \u03c6 is satisfiable. We first define D \u03c6 . Thus, assume that we are given a propositional formula \u03c6 (which we do not require to be in conjunctive normal form). We use the alphabet \u03a3 = {true, \u2227, \u2228, \u00ac, x}. First, we construct in the obvious way the tree T \u03c6 corresponding to \u03c6 (where every occurrence of a variable x i is represented by a node labeled x). We then add a special root node labeled true on top of the tree. Intuitively, the root node represents the claim that \u03c6 evaluates to true under an appropriate assignment. Finally, for every variable x i , if there are n + 1 nodes u 0 , . . . , u n in T \u03c6 that represent the occurrences of x i in \u03c6 from left to right, we add edges from u j\u22121 to u j for j = 1, . . . , n. Thus, all nodes representing the same variable are linked together in a chain. Figure 11, where we have added indices to the x-labeled nodes in order to illustrate the correspondence with the formula \u03c6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1062, |
| "end": 1068, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "NP-completeness", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For \u03c6 = ((x 1 \u2228 x 2 ) \u2228 \u00acx 3 ) \u2227 (\u00acx 2 \u2228 (x 4 \u2228 x 1 )), the resulting DAG D \u03c6 is shown in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "We can easily construct a DAG automaton M that, for every formula \u03c6, accepts DAG D \u03c6 if and only if \u03c6 is satisfiable. The automaton has just two states, t and f, to compute a truth value for each node in a consistent way by means of a guess-and-verify technique. The only transition for true is \u2205 true \u2212 \u2212 \u2192 {t}. The transitions for processing conjunctions, disjunctions, and negations are the expected ones:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "{t} \u2227 \u2212 \u2192 {t, t} {f } \u2227 \u2212 \u2192 {t, f } {f } \u2227 \u2212 \u2192 {f, f } {t} \u2228 \u2212 \u2192 {t, t} {t} \u2228 \u2212 \u2192 {t, f } {f } \u2228 \u2212 \u2192 {f, f } {t} \u00ac \u2212 \u2192 {f } {f } \u00ac \u2212 \u2192 {t}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Then for the nodes corresponding to the variables, we need the following transitions, for b \u2208 {t, f }:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "{b, b} x \u2212 \u2192 {b}, {b} x \u2212 \u2192 {b}, {b, b} x \u2212 \u2192 \u2205, and {b} x \u2212 \u2192 \u2205", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "These ensure that multiple occurrences of the same variable (i.e., occurrences of x that are \"chained together\") are assigned the same truth value. It should be clear that D \u03c6 is accepted by this DAG automaton if and only if \u03c6 is satisfiable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that, no matter whether we construct runs top-down or bottom-up, there is always nondeterminism involved. Under the top-down view, the transitions for \u2227 and \u2228 are nondeterministic (reflecting the fact that \u2227 and \u2228 are not injective) whereas", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "true \u2227 \u2228 \u2228 x 1 x 4 \u00ac x 2 \u2228 \u00ac x 3 \u2228 x 2 x 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Example instance in the reduction of 3-SAT to DAG automata recognition. The 3-SAT instance is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c6 = ((x 1 \u2228 x 2 ) \u2228 \u00acx 3 ) \u2227 (\u00acx 2 \u2228 (x 4 \u2228 x 1 )).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "We have added indices to the x-labeled nodes merely to illustrate the correspondence with \u03c6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "those for x are deterministic. Conversely, under the bottom-up view, the transitions for x become nondeterministic whereas those for \u2227 and \u2228 become deterministic (because \u2227 and \u2228 are functions). Intuitively, the top-down process corresponds to guessing the values of subtrees and verifying consistency. In contrast, the bottom-up process guesses an assignment of truth values and computes the resulting truth value of \u03c6 deterministically in order to check that it results in true. In both cases, the outlined computational difficulty is preserved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "We provide an algorithm for a more general problem than the recognition problem for unweighted DAG automata: Given a weighted DAG automaton M and a DAG D, what is the total weight (in the semiring K) of all runs of M for D? This includes in particular the recognition problem, because unweighted DAG automata are a special case of general DAG automata, as explained at the end of Section 3.1. We also obtain an analogue of the Viterbi algorithm if we define \u2297 and \u2295 to be multiplication and maximum. In Section 5.3 we will also discuss how to use this algorithm for learning transition weights from data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We have already discussed in Example 3 how our DAG automata generalize finite automata for strings. In order to introduce our algorithm for DAG automata, we therefore consider the analogous problem for finite automata: Given an input string w, find the total weight of all runs of a nondeterministic weighted finite automaton M on w. Let Q be the state set of M. A na\u00efve algorithm for this problem would consider all possible assignments of states in Q to the |w| + 1 inter-symbol positions of w, under the restriction that the first position is assigned the unique starting state for M. For each such assignment, we then check against M's transitions that it corresponds to a run of M and, if this is the case, we add in the weight of that run. The number of possible assignments is |Q| |w| and each assignment can clearly be checked in time O (|w|). If we assume that the semiring operations can be computed in constant time, the algorithm runs in time O |Q| |w| |w| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "A better algorithm, the forward algorithm (Baum 1972) , uses dynamic programming to run in polynomial time in the size of both w and M. This is reported in Algorithm 1. We view w as a sequence of tokens w i from the alphabet of M. Symbols s and F denote the initial state and the final state set, respectively, of M. Symbol \u03b4 denotes the transition function, mapping a pair of states and an input symbol from M to a weight. For instance, \u03b4(q, w i , r) is the weight of the transition that takes M from state q to state r upon reading token w i .", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 53, |
| "text": "(Baum 1972)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Algorithm 1 (Forward algorithm) Sum the weights of all computations of a finite automaton on a single string.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "n = |w| \u03b1[0, s] \u2190 1 for i \u2190 1, . . . n do for r \u2208 Q do \u03b1[i, r] = q\u2208Q \u03b1[i \u2212 1, q] \u2297 \u03b4(q, w i , r) return f \u2208F \u03b1[n, f ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The algorithm processes w from left to right, computing the weights of larger and larger prefixes of w. More precisely, for each prefix w 1 w 2 \u2022 \u2022 \u2022 w i of w and for each state r \u2208 Q, we compute the sum of the weights of all runs of M that start in s, read w 1 w 2 \u2022 \u2022 \u2022 w i , and end up in r. This quantity is then stored in a chart entry \u03b1[i, r], for future reuse. In fact, the basic idea underlying Algorithm 1 is that \u03b1[i, r] can be computed as a function of all quantities \u03b1[i \u2212 1, q], q \u2208 Q, combined with all possible transitions of M over token w i , using a recursive relation. We call each chart entry \u03b1[i, r] a partial analysis of w. Observe that each partial analysis of w is uniquely identified by the inter-symbol position i we have reached on w, and by the state r we have reached on M.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The complexity analysis of Algorithm 1 is rather straightforward. Considering the two embedded for-loops and the summation performed at the inner loop, we get a running time of O |Q| 2 |w| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We are now in a position to discuss the same problem for DAG automata. Let D be an input DAG and let M be our DAG automaton with state set Q. In order to strengthen the similarity with the string case, we view the nodes of D like the tokens of w and the edges of D like the inter-symbol positions of w. A na\u00efve algorithm, similar to the one for finite automata, can be developed for computing the total weight for all runs of M on D. We iterate over all possible assignments of states from Q to edges in E, that is, over all runs, and sum up their weights. The total number of runs is |Q| |E| , and the weight of each run can be checked in time O (|E|). We thus conclude that the algorithm runs in time O |Q| |E| |E| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Once again, we can do much better by using dynamic programming. The main difference with respect to the string case is that the tokens of D are now organized in some partial order, so we can no longer parse the input from left to right. To deal with this, our algorithm assumes a total ordering of the edges of D, which is provided along with D, and parses D accordingly, as we explain now.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Informally, our parsing algorithm consists of the following two phases. r Second, we merge partial analyses into larger and larger partial analyses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For each edge e (following the total ordering of edges provided as input), we contract it, replacing its source node src(e) and target node tar(e) with a new node z. We then retrieve partial analyses associated with src(e) and tar(e) and merge them into new partial analyses associated with z. This process is repeated, ending when all of D has been contracted to a single node with a single analysis. The weight of this analysis is the weight of all the runs on D.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The merging of partial analyses in the second phase requires some additional discussion. If p 1 and p 2 are partial analyses associated with src(e) and tar(e), respectively, the partial analysis p, associated with z, inherits its state assignments from p 1 and p 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Because the edge e is shared between p 1 and p 2 , the merging of p 1 and p 2 can be carried out only if they assign the same state to e. Moreover, if several merges result in several analyses for z with the same state assignments, their weights are summed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In order to gain a better understanding of these ideas, we discuss a simple example, before providing a precise specification of the algorithm itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The evolution of the structure of a DAG over a run of our DAG parsing algorithm is shown in Figure 12 . We start with DAG D in (a) with node set", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 92, |
| "end": 101, |
| "text": "Figure 12", |
| "ref_id": "FIGREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "{v 1 , v 2 , v 3 , v 4 , v 5 }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "To keep the example simple, we only display one possible assignment of a hypothetical automaton at each node; for instance, at node v 2 we display the partial analysis representing the transition in which q 1 is assigned to the incoming edge, and q 2 , q 3 are assigned to the outgoing edges. We then contract the edge from v 2 to v 3 , resulting in the new DAG displayed in (b), where node (v 2 , v 3 ) represents the merge of nodes v 2 and v 3 . Observe that, after edge contraction, the remaining incoming and outgoing edges at v 2 and v 3 are inherited at (v 2 , v 3 ). All possible partial analyses at v 2 and v 3 are pairwise merged at (v 2 , v 3 ) (again, only one such analysis is displayed). We proceed by contracting the edge from v 1 to (v 2 , v 3 ), the edge from v 4 to v 5 , and finally the multiple edges from", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "(v 1 , v 2 , v 3 ) to (v 4 , v 5 ), ending up with the final DAG in (d) consisting of a single node (v 1 , v 2 , v 3 , v 4 , v 5 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "In general, whenever we contract an edge e we also contract all parallel edges along with it to avoid creating loops.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "Just as DAG automata generalize traditional finite automata defined on strings, our DAG parsing algorithm generalizes Algorithm 1. To see this, imagine applying our DAG parsing algorithm to a DAG consisting of a single long chain of edges. If the edges are contracted in order from left to right, our DAG parsing algorithm performs the same computation as Algorithm 1, building partial analyses for longer and longer prefixes of the chain. Of course, under some other ordering of the edges, a partial analysis may correspond to a sub-chain of D that is not a prefix. As we will see later, the choice of ordering does affect the overall computational complexity of the algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "v 1 v 2 q 1 q 1 v 3 v 4 q 2 q 2 q 3 q 3 v 5 q 4 q 4 q 5 q 5 (a) v 1 (v 2 , v 3 ) q 1 q 1 v 4 q 3 q 3 v 5 q 4 q 4 q 5 q 5 (b) (v 1 , v 2 , v 3 ) v 4 q 3 q 3 v 5 q 4 q 4 q 5 q 5 (c) (v 1 , v 2 , v 3 ) q 3 q 3 (v 4 , v 5 ) q 4 q 4 (d) (v 1 , v 2 , v 3 , v 4 , v 5 ) (e)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "We note that the problem of summing over state assignments is an instance of the general problem of weighted constraint satisfaction, where each edge in our input DAG is a variable whose values are states of M, and each node in our DAG is a weighted constraint, with weights specified by the transitions in the automaton. We can solve this problem using general techniques for graphical models (Jensen, Lauritzen, and Olesen 1990; Shafer and Shenoy 1990) ; the algorithm here is an adaptation of the variable elimination algorithm to our setting.", |
| "cite_spans": [ |
| { |
| "start": 394, |
| "end": 430, |
| "text": "(Jensen, Lauritzen, and Olesen 1990;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 431, |
| "end": 454, |
| "text": "Shafer and Shenoy 1990)", |
| "ref_id": "BIBREF78" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "The pseudocode of our recognition algorithm for DAG automata is reported in Algorithm 2. It uses some additional notation, which we define here. For a node", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "v of D, let star(v) = in(v) \u222a out(v).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "In words, star(v) is the set of edges connecting v to its neighbor nodes. In order to assign states to these edges, we use functions f :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "star(v) \u2192 Q.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "For an edge set I \u2286 star(v), we also write f | I to denote f restricted to I, and f [I] to denote the multiset of all f (e) such that e \u2208 I, i.e., if I = {e 1 , . . . , e n } then f [I] = {f (e 1 ), . . . , f (e n )}. for each node v do for all f :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "star(v) \u2192 Q do \u03b1[v, f ] \u2190 \u03b4( f [in(v)], lab(v), f [out(v)] )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "for each edge e in order, s.t. e has not been deleted do (u, v) \u2190 (src(e), tar(e))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "I \u2190 star(u) \u2229 star(v) create new node z in(z) \u2190 in(u) \u222a in(v) \\ I out(z) \u2190 out(u) \u222a out(v) \\ I for all h : star(z) \u2192 Q do \u03b1[z, h] \u2190 0 for all f : star(u) \u2192 Q do for all g : star(v) \u2192 Q s.t. f | I = g| I do h = f \u222a g \\ f | I \u03b1[z, h] \u2190 \u03b1[z, h] \u2295 \u03b1[u, f ] \u2297 \u03b1[v, g] delete u, v, and all edges in I one node v remains return \u03b1[v, \u2205]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "The complexity of Algorithm 2 depends both on the structure of the input DAG and the order in which we contract its edges. More precisely, the complexity of the optimal edge ordering is determined by the treewidth (see Definition 2 in Section 3.1) of the line graph of D.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 8", |
| "sec_num": null |
| }, |
| { |
| "text": "The becomes a hyperedge of LG(D) attached to e 1 , . . . , e n (in any order, as the order will not affect any of the following).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 4", |
| "sec_num": null |
| }, |
| { |
| "text": "A simple example of a graph with four nodes and its corresponding line graph is shown in Figure 13 . Note that labels, edge directions, order of attached nodes of hyperedges, and labels are irrelevant and therefore not shown.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 98, |
| "text": "Figure 13", |
| "ref_id": "FIGREF32" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Because we want to make use of the treewidth of a line graph, and line graphs are hypergraphs (see Section 4.2), we extend the notion of tree decompositions to hypergraphs in the obvious way: For every hyperedge e, there must be a bag of the tree decomposition that contains all of the attached nodes of e. Note that the bags of the tree decomposition of LG(D) contain nodes of LG(D), which correspond to edges of D.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "To obtain an optimal edge ordering, first find an optimal tree decomposition, that is, a tree decomposition with minimal width, which we call k. This takes time O(|E| k+2 ) using the algorithm of Arnborg, Corneil, and Proskurowski (1987) . We can also take advantage of the various heuristics and approximation algorithms that are available for treewidth (Gogate and Dechter 2004; Feige, Hajiaghayi, and Lee 2005) ; as mentioned in Section 2, these heuristics work extremely well on AMR.", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 237, |
| "text": "Arnborg, Corneil, and Proskurowski (1987)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 355, |
| "end": 380, |
| "text": "(Gogate and Dechter 2004;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 381, |
| "end": 413, |
| "text": "Feige, Hajiaghayi, and Lee 2005)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Second, visit the bags bottom-up. For each bag b, contract the edges that are in b but not in the parent of b. It can be shown (Rose 1970; Arnborg, Corneil, and Proskurowski 1987) that the maximum degree of any node created by an edge contraction is k. This means that there are at most (k + 1) edges in star(u) \u222a star(v), and at most |Q| k+1 possible state assignments to those edges in the innermost loop of the algorithm.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 138, |
| "text": "(Rose 1970;", |
| "ref_id": "BIBREF76" |
| }, |
| { |
| "start": 139, |
| "end": 179, |
| "text": "Arnborg, Corneil, and Proskurowski 1987)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Then, because there are |E D | edges to contract, the overall running time of the algorithm is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "O |E D | \u2022 |Q| tw(LG(D))+1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "(2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, recognition is polynomial in the number of states but exponential in the treewidth of the line graph of the input graph. Holding these factors constant, recognition is linear in the size of the input graph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 9", |
| "sec_num": null |
| }, |
| { |
| "text": "We briefly discuss here the problem of learning the weights of our DAG automata, though this in itself is a broad topic worthy of further research. Throughout this section, we assume that our semiring of weights K is the semiring of real numbers, with the usual addition and multiplication operations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We define a log-linear model on runs of M on some input DAG D as follows. Let \u03a6 : \u0398 \u2192 R d be a mapping from transitions to feature vectors. This extends naturally to runs by summing over the transitions in the run:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "\u03a6(\u03c1) def = v\u2208V D \u03a6( \u03c1(in(v)), lab(v), \u03c1(out(v)) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Let w \u2208 R d be a vector of feature weights, which are the parameters to be estimated. Then we can parameterize \u03b4 in terms of the features and feature weights:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "\u03b4(t) = exp(w \u2022 \u03a6(t)), so that \u03b4(\u03c1) = exp(w \u2022 \u03a6(\u03c1)) and [[M]](D) = \u2211_{runs \u03c1 on D} exp(w \u2022 \u03a6(\u03c1))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
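As an illustrative sketch (not the paper's implementation; feature vectors are plain Python dicts and all names are invented), the run weight \u03b4(\u03c1) = exp(w \u2022 \u03a6(\u03c1)) can be computed by summing the per-transition feature vectors:

```python
import math

def phi_run(run_transitions, phi):
    """Phi(rho): sum the feature vectors Phi(t) of the transitions in the run."""
    total = {}
    for t in run_transitions:
        for feat, val in phi[t].items():
            total[feat] = total.get(feat, 0.0) + val
    return total

def run_weight(run_transitions, phi, w):
    """delta(rho) = exp(w . Phi(rho))."""
    feats = phi_run(run_transitions, phi)
    return math.exp(sum(w.get(f, 0.0) * v for f, v in feats.items()))

# Toy feature map and weight vector (invented for illustration):
phi = {"t1": {"f_a": 1.0}, "t2": {"f_a": 1.0, "f_b": 2.0}}
w = {"f_a": 0.5, "f_b": -0.25}
# A run using t1 once and t2 once has Phi(rho) = {f_a: 2, f_b: 2},
# so w . Phi(rho) = 1.0 - 0.5 = 0.5.
print(run_weight(["t1", "t2"], phi, w))  # exp(0.5) ~ 1.6487
```

Because \u03a6 only ever enters through sums and dot products, any sparse-vector representation works equally well here.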
| { |
| "text": "To obtain a probability model of runs of M on D, we simply renormalize the run weights:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "p M (\u03c1 | D) = \u03b4(\u03c1) / [[M]](D). Assume a set of training examples {(D i , \u03c1 i ) | 1 \u2264 i \u2264 N},", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "where each example consists of a DAG D i and an associated run \u03c1 i . We can train the model by analogy with conditional random fields (CRFs), which are log-linear models on finite automata (Johnson et al. 1999; Lafferty, McCallum, and Pereira 2001) . The training procedure is essentially gradient ascent on the log-likelihood, which is", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 210, |
| "text": "(Johnson et al. 1999;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 211, |
| "end": 248, |
| "text": "Lafferty, McCallum, and Pereira 2001)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "LL = \u2211_{i=1}^{N} log p M (\u03c1 i | D i ) = \u2211_{i=1}^{N} ( log \u03b4(\u03c1 i ) \u2212 log [[M]](D i ) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The gradient of LL is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2202LL/\u2202w = \u2211_{i=1}^{N} ( \u2202/\u2202w log \u03b4(\u03c1 i ) \u2212 \u2202/\u2202w log [[M]](D i ) ) = \u2211_{i=1}^{N} ( (1/\u03b4(\u03c1 i )) \u2202\u03b4(\u03c1 i )/\u2202w \u2212 (1/[[M]](D i )) \u2202[[M]](D i )/\u2202w ) = \u2211_{i=1}^{N} ( (1/\u03b4(\u03c1 i )) \u2202\u03b4(\u03c1 i )/\u2202w \u2212 (1/[[M]](D i )) \u2211_{\u03c1 on D i } \u2202\u03b4(\u03c1)/\u2202w ) = \u2211_{i=1}^{N} ( \u03a6(\u03c1 i ) \u2212 \u2211_{\u03c1 on D i } (\u03b4(\u03c1)/[[M]](D i )) \u03a6(\u03c1) ) [since \u2202\u03b4(\u03c1)/\u2202w = \u03b4(\u03c1) \u03a6(\u03c1)] = \u2211_{i=1}^{N} ( \u03a6(\u03c1 i ) \u2212 E \u03c1|D i [\u03a6(\u03c1)] )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
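For a toy case in which all runs on D i can be enumerated explicitly, the final line of Equation (3), \u03a6(\u03c1 i ) \u2212 E \u03c1|D i [\u03a6(\u03c1)], can be computed by brute force (a hedged sketch with invented names; in practice the expectation is computed with dynamic programming rather than enumeration):

```python
import math

def phi_run(run, phi):
    """Phi(rho): sum of per-transition feature vectors (dicts)."""
    total = {}
    for t in run:
        for f, v in phi[t].items():
            total[f] = total.get(f, 0.0) + v
    return total

def crf_gradient(gold_run, all_runs, phi, w):
    """Phi(rho_i) - E_{rho|D_i}[Phi(rho)], with the expectation computed
    by brute-force enumeration of all runs on D_i."""
    feats = [phi_run(r, phi) for r in all_runs]
    wts = [math.exp(sum(w.get(f, 0.0) * v for f, v in fv.items()))
           for fv in feats]
    z = sum(wts)  # [[M]](D_i)
    grad = dict(phi_run(gold_run, phi))
    for fv, wt in zip(feats, wts):
        for f, v in fv.items():
            grad[f] = grad.get(f, 0.0) - (wt / z) * v
    return grad

# Two runs under zero weights: the expectation averages the feature vectors.
phi = {"t1": {"f": 1.0}, "t2": {"f": 3.0}}
grad = crf_gradient(["t1"], [["t1"], ["t2"]], phi, {})
print(grad)  # {'f': -1.0}
```

A gradient-ascent step then simply adds a multiple of this gradient to w.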
| { |
| "text": "Unfortunately, we cannot derive a closed-form solution for the zeros of Equation (3), so we use gradient ascent. In CRF training for finite automata, the expectation in Equation (3) is computed efficiently using the forward-backward algorithm; for DAG automata, the expectation can be computed analogously. Algorithm 2 provides the bottom-up procedure for computing a chart of inside weights. If we compute weights in the derivation forest semiring (Goodman 1999), in which \u2297 creates an \"and\" node and \u2295 creates an \"or\" node, the resulting and/or graph has the same structure as a CFG parse forest generated by CKY, so we can simply run the inside-outside algorithm (Lari and Young 1990) on it to obtain the desired expectations. Alternatively, we could compute weights in the expectation semiring (Eisner 2002; Chiang 2012). Because the log-likelihood LL is concave (Boyd and Vandenberghe 2004), gradient ascent is guaranteed to converge to the unique global maximum.", |
| "cite_spans": [ |
| { |
| "start": 453, |
| "end": 467, |
| "text": "(Goodman 1999)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 671, |
| "end": 692, |
| "text": "(Lari and Young 1990)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 803, |
| "end": 816, |
| "text": "(Eisner 2002;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 817, |
| "end": 828, |
| "text": "Chiang 2012", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 873, |
| "end": 901, |
| "text": "(Boyd and Vandenberghe 2004)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We may also wish to learn a distribution over the DAGs themselves-for example, in order to provide a prior over semantic structures. A natural choice would be to adopt a similar log-linear framework:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "p M (D, \u03c1) = \u03b4(\u03c1) / \u2211_{D'} [[M]](D')", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "where \u03b4(\u03c1) is a log-linear combination of weights and per-transition features as before. Here, the normalization ranges over all possible DAGs. For some values of the weight vector, this sum may diverge, as in weighted context-free grammars (CFGs; Chi 1999), meaning that the corresponding probability distribution is not defined. More importantly, estimating the normalization constant is computationally difficult, whereas in the case of weighted CFGs it can be estimated relatively easily with an iterative numerical algorithm (Abney, McAllester, and Pereira 1999; Smith and Johnson 2007) .", |
| "cite_spans": [ |
| { |
| "start": 530, |
| "end": 567, |
| "text": "(Abney, McAllester, and Pereira 1999;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 568, |
| "end": 591, |
| "text": "Smith and Johnson 2007)", |
| "ref_id": "BIBREF79" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "A similar problem arises in Exponential Random Graph Models (Frank and Strauss 1986); the most common solution is to use Markov chain Monte Carlo (MCMC) methods (Snijders 2002) . To train a model over DAGs, we can perform gradient ascent on the log likelihood:", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 176, |
| "text": "(Snijders 2002)", |
| "ref_id": "BIBREF80" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "LL = \u2211_i log p M (\u03c1 i , D i ), \u2202LL/\u2202w = \u2211_i ( \u03a6(\u03c1 i ) \u2212 E D',\u03c1 [\u03a6(\u03c1)] )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "by using MCMC to estimate the second expectation. Finally, we may wish to learn a distribution over DAGs by learning the states in an unsupervised manner, either because it is not practical to annotate states by hand, or because we wish to automatically find the set of states that best predicts the observed DAGs. This corresponds to a latent variable CRF model (Quattoni, Collins, and Darrell 2004) with states as the hidden variables:", |
| "cite_spans": [ |
| { |
| "start": 374, |
| "end": 400, |
| "text": "Collins, and Darrell 2004)", |
| "ref_id": "BIBREF73" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "p M (D) = \u2211_{runs \u03c1 on D} \u03b4(\u03c1) / \u2211_{D'} [[M]](D'), LL = \u2211_i log p M (D i ), \u2202LL/\u2202w = \u2211_i ( E \u03c1|D i [\u03a6(\u03c1)] \u2212 E D',\u03c1 [\u03a6(\u03c1)] )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Here, the second expectation is again over all possible DAGs. We can use the derivation forest semiring to compute the first expectation as with Equation 3, and we can use MCMC methods to estimate the second expectation. Although gradient ascent methods are often used with latent variable CRF models, it is important to note that the log likelihood is not concave-meaning that local maxima are possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Let M be a DAG automaton with set of states Q and let D be an input DAG with set of edges E D . As we have seen in Section 5.2, the time complexity of Algorithm 2 is O(|E D | \u2022 |Q|^{tw(LG(D))+1}), where tw(LG(D)) is the treewidth of the line graph LG(D). By definition, tw(LG(D)) is at least the maximum node degree of D minus one, because every node of degree k is turned into a hyperedge of size k that must be covered by some bag. The treewidth of LG(D) can therefore be quite large. We can improve Algorithm 2 by binarizing both the input DAG and, accordingly, the transitions of our DAG automaton. In this section we develop specialized techniques for the binarization of DAGs and of the transitions of DAG automata, and prove some relevant properties. These techniques will be developed further in Section 7 to process DAG languages with unbounded node degree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Binarization", |
| "sec_num": "6." |
| }, |
| { |
| "text": "A binary DAG is one in which each node has at most two incoming edges and one outgoing edge, or else one incoming edge and two outgoing edges. In order to produce a binary DAG D' from a source DAG D, we introduce a construction that replaces every node of D with a treelet consisting of fresh nodes, and connects the edges of D to these fresh nodes in such a way that the resulting DAG D' is binary. Furthermore, D' preserves all of the information in D, in a way that will be specified later. Our technique is a generalization of what is known from the theory of tree automata, in particular unranked tree automata, where nodes of any rank are encoded by subtrees consisting entirely of binary nodes; see Comon et al. (2002, Section 8.3) for details. We introduce the idea underlying our DAG binarization technique by discussing a simple example.", |
| "cite_spans": [ |
| { |
| "start": 703, |
| "end": 732, |
| "text": "Comon et al. (2002, Section 8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Idea", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Consider the DAG D shown in Figure 14(a). From D we construct a new binary DAG D', shown in Figure 14(b), using the following procedure. Let v be a node of D with label \u03c3 and with node degree n. Node v is replaced in D' by a binary treelet T v with exactly n leaf nodes that are labeled by \u03c3. All of the remaining nodes of T v are labeled by \u03c3': these are internal nodes with one or two children. For instance, if v is the root node of D labeled a, then T v is the treelet at the top of D' consisting of two binary nodes with label a' and three leaves with label a.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 40, |
| "text": "Figure 14(a)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 93, |
| "end": 105, |
| "text": "Figure 14(b)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 10", |
| "sec_num": null |
| }, |
| { |
| "text": "Because the leaves of T v correspond one-to-one to the edges of D incident on v, they can be used as \"docking places\" for the original edges. More precisely, each edge e in D such that src(e) = u and tar(e) = v is used in D' to connect some leaf of T u to some leaf of T v . Note that, according to this construction, the edge set of D' can be partitioned into the set of fresh edges coming from the treelets and the set of edges coming from the source DAG D; the latter are exactly those edges whose source nodes carry a label \u03c3 \u2208 \u03a3, and are drawn with thick lines in Figure 14(b). In the following, the specific topology of each treelet T v will be obtained from a tree decomposition of D. Because the leaves of T v have only one parent and no child, the construction yields a binary DAG.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 567, |
| "end": 576, |
| "text": "Figure 14", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 10", |
| "sec_num": null |
| }, |
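A minimal sketch of the treelet idea (illustrative only: it builds a balanced binary treelet with exactly two children per internal node, whereas the construction below derives the treelet topology from a tree decomposition and allows internal nodes with one or two children):

```python
def make_treelet(sigma, n):
    """Build a balanced binary treelet with n leaves (labeled sigma) and
    internal nodes labeled sigma' for a node of degree n."""
    assert n >= 1
    counter = [0]
    nodes, edges = {}, []
    def build(k):
        counter[0] += 1
        name = f"n{counter[0]}"
        if k == 1:
            nodes[name] = sigma           # leaf: carries the original label
            return name, [name]
        nodes[name] = sigma + "'"         # internal node: primed copy
        left, left_leaves = build(k // 2)
        right, right_leaves = build(k - k // 2)
        edges.extend([(name, left), (name, right)])
        return name, left_leaves + right_leaves
    root, leaves = build(n)
    return nodes, edges, leaves

# A degree-5 node labeled "a": 5 leaves as docking places, 4 internal nodes.
nodes, edges, leaves = make_treelet("a", 5)
print(len(leaves), len(edges))  # 5 8
```

The leaves returned here play the role of the "docking places" for the original edges of D.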
| { |
| "text": "Along with DAG binarization, we must also replace each transition t of the DAG automaton with a set of \"binary\" transitions that process the nodes of the binarized graph. The binary transitions have at most two states in the left-hand side and one state in the right-hand side, or else at most one state in the left-hand side and two states in the right-hand side. Again, we demonstrate the intuitive idea underlying the construction by means of a simple example.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 10", |
| "sec_num": null |
| }, |
| { |
| "text": "Consider an unweighted transition t : {p, q} \u2212a\u2192 {r, s} applied to a node v with label a in a DAG D, as shown in the snapshot in Figure 15(a). Consider also the snapshot of the binary DAG D' in Figure 15(b), representing the treelet T v obtained from v. We discuss how to binarize t such that the resulting transitions can process T v .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 143, |
| "text": "Figure 15(a)", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 196, |
| "end": 208, |
| "text": "Figure 15(b)", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "For the binary transitions we use the states p, q, r, s appearing in t, along with some new states of the form (I, O), where I is a subset of the multiset in the left-hand side of t and O is a subset of the multiset in the right-hand side of t. States p, q, r, s will be assigned to the edges of D' that were also present in D, drawn with thick lines in Figure 15(b). Consider an edge e of T v . Let T be the subtree of T v whose root node is the target node of e, and let S T be the set of edges of D that are connected to the leaves of T, not including edges internal to T. When viewed as edges from D, the edges in S T are a subset of the edges incident on v. Informally, the meaning of a state (I, O) being assigned to edge e is that we expect to find the states in I on the edges within S T that are incoming at v, and likewise we expect to find the states in O on the edges within S T that are outgoing at v.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 353, |
| "end": 363, |
| "text": "Figure 15(", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us discuss three of the several binary transitions obtained from t. Consider the run represented in Figure 15(a). To support intuition, we view this run as a top-down process. The transition", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 107, |
| "end": 119, |
| "text": "Figure 15(a)", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "t 1 : \u2205 \u2212a'\u2192 {(\u2205, {r}), ({p, q}, {s})}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "is one of those that apply at the (binary) roots of treelets labeled with a', implementing the \"guess\" that the left subtree will provide the required outgoing edge that is assigned the state r, and the right subtree will provide the required incoming edges that are assigned the states p and q, and the required outgoing edge that is assigned the state s.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "A second example is the transition", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "t 2 : {({p, q}, {s})} \u2212a'\u2192 {({p, q}, \u2205), (\u2205, {s})}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "which processes a node with an incoming edge that has been assigned the state ({p, q}, {s}). Transition t 2 makes the guess that the expected incoming states {p, q} are both realized at the left subtree, and that the expected outgoing state in {s} is realized at the right subtree. Finally, our third example is the transition", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "t 3 : {({q}, \u2205), q} \u2212a\u2192 \u2205", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "which processes two incoming edges and zero outgoing edges. This transition matches the expectation, recorded in the state ({q}, \u2205), that there is an incoming edge with state q, against the state q actually encountered on the other incoming edge. Now that we have seen an intuitive description of the procedures for binarizing a DAG and for binarizing a DAG automaton, we can outline the improved version of Algorithm 2:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
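The pool of new states of the form (I, O) can be enumerated mechanically; a sketch under the assumption that states are strings and multisets are represented as lists (for t : {p, q} \u2212a\u2192 {r, s} this yields 4 \u00d7 4 = 16 pair states, of which an actual binarization uses only the consistent ones):

```python
from itertools import combinations

def sub_multisets(ms):
    """All sub-multisets of a multiset, each as a sorted tuple."""
    ms = sorted(ms)
    subs = set()
    for k in range(len(ms) + 1):
        subs.update(combinations(ms, k))  # duplicates collapse in the set
    return sorted(subs)

def pair_states(lhs, rhs):
    """Candidate states (I, O): I a sub-multiset of the LHS of t, O of the RHS."""
    return [(i, o) for i in sub_multisets(lhs) for o in sub_multisets(rhs)]

states = pair_states(["p", "q"], ["r", "s"])
print(len(states))  # 16
```

Because duplicated states like {q, q} are legal in transitions, sub-multisets (not subsets) are the right notion: sub_multisets(["q", "q"]) yields (), (q,), and (q, q).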
| { |
| "text": "1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "For each transition in the input DAG automaton M, construct the corresponding set of binary transitions to form a new automaton M'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Binarize the input DAG D into a binary DAG D'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Run Algorithm 2 on the binarized DAG D' with automaton M'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Step 1 is independent of the remaining steps, and can therefore be carried out offline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "In the remainder of this section, we discuss at length the process of binarizing a DAG and that of binarizing a DAG automaton, and we present a computational analysis of the improved algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Let D be some input DAG and let D' be a binarized DAG derived from D. We have already discussed in Section 2 how AMR structures have very small treewidth. For this reason, in the following discussion we use the treewidth tw(D) of D as a term of comparison. When processing D', the running time of Algorithm 2 depends on tw(LG(D')), the treewidth of the line graph of D', which in turn depends on the choice of the binarization of D. There are several ways in which we can binarize D, resulting in different values of tw(LG(D')), and a bad choice of binarization may make tw(LG(D')) much larger than tw(D). Our objective should therefore be to binarize D in such a way that tw(LG(D')) is not much larger than tw(D). We provide an algorithm for constructing D' from a tree decomposition of D, and we show that tw(D') \u2264 tw(D) + 1 and tw(LG(D')) \u2264 2(tw(D) + 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In what follows, we exclude from our DAG automata transitions of the form \u2205 \u2212a\u2192 \u2205, which only accept DAGs consisting of a single isolated node. This is a harmless assumption because D \u03a3 contains only |\u03a3| such DAGs. It is similar to the exclusion of the empty string from context-free languages when parsing with the CKY algorithm, which uses context-free grammars in Chomsky normal form (Aho and Ullman 1972).", |
| "cite_spans": [ |
| { |
| "start": 416, |
| "end": 437, |
| "text": "(Aho and Ullman 1972)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We will have to refer to the components of different graphs and tree decompositions. To disambiguate the notation, we will index the components of such an object by the object in question. For example, the edge set of a DAG D will be referred to as E D , the source of an edge e \u2208 E D by src D (e), and the set of bags of a tree decomposition T by V T .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In this section we consider tree decompositions of DAGs that are in a special form that we call binary. This has the advantage of greatly simplifying the binarization construction. A tree decomposition T of a DAG D is binary if both of the following conditions are met:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2022 every bag of T has at most two children;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2022 each edge e \u2208 E D is explicitly assigned to a unique leaf b of T.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "More precisely, every leaf b of T is assigned an edge edg T (b) \u2208 E D such that the content of b consists of the two nodes this edge is incident upon. Formally, we have cont T (b) = {src D (edg T (b)), tar D (edg T (b))}. Furthermore, we require that the mapping edg T is a bijection between the leaves of T and the edges of D. In other words, every edge of D is introduced by a unique leaf of T: for every edge e \u2208 E D , edg T \u207b\u00b9(e) is the unique leaf b of T such that edg T (b) = e. In Appendix A, Theorem 8, we show that if a graph has a tree decomposition of width k, then there exists a binary tree decomposition of the same graph also having width k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
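The two conditions above are easy to check mechanically; a sketch in which the data structures (a children map for the tree shape, cont for bag contents, edg from leaves to edge ids, and src/tar maps) are all assumptions made for illustration:

```python
def is_binary_decomposition(children, cont, edg, edges, src, tar):
    """Check the two conditions: at most two children per bag, and edg a
    bijection from leaves to edges with cont(b) = {src(e), tar(e)}."""
    leaves = [b for b in children if not children[b]]
    # (1) every bag has at most two children
    if any(len(cs) > 2 for cs in children.values()):
        return False
    # (2) edg maps exactly the leaves, bijectively, onto the edge set ...
    if sorted(edg) != sorted(leaves) or sorted(edg.values()) != sorted(edges):
        return False
    # ... and each leaf's content is exactly the endpoints of its edge
    return all(cont[b] == {src[edg[b]], tar[edg[b]]} for b in leaves)

# Toy decomposition of the path x -> y -> u with edges e1 = (x, y), e2 = (y, u):
children = {"root": ["l1", "l2"], "l1": [], "l2": []}
cont = {"root": {"x", "y", "u"}, "l1": {"x", "y"}, "l2": {"y", "u"}}
edg = {"l1": "e1", "l2": "e2"}
print(is_binary_decomposition(children, cont, edg, ["e1", "e2"],
                              {"e1": "x", "e2": "y"}, {"e1": "y", "e2": "u"}))  # True
```

Swapping the leaf-to-edge assignment makes the endpoint check fail, so the checker also catches a non-bijective or mismatched edg.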
| { |
| "text": "Our method of binarization is illustrated in Figure 16 and explained in the following. 4 Let D \u2208 D \u03a3 . For every symbol \u03c3 \u2208 \u03a3, we let \u03c3' be a fresh copy of \u03c3. In the binarized DAG, every node v \u2208 V D will be represented by a treelet, each of whose nodes is labeled by \u03c3 or \u03c3'. To define the binarized version of D, consider a binary tree decomposition T of D. By the definition of tree decomposition, the subtree of T induced by {b \u2208 V T | v \u2208 cont T (b)} is itself a (binary) tree. Let us denote this treelet by T v . To distinguish between the copies of b \u2208 V T appearing in the different treelets T v with v \u2208 cont T (b), we let [v, b] denote the copy of b in T v . In the DAG D of Figure 16, the nodes x, y, u, v are shown instead of their node labels. In the tree decomposition T, the bags b are identified with their Gorn addresses and the boxes show their contents.", |
| "cite_spans": [ |
| { |
| "start": 639, |
| "end": 645, |
| "text": "[v, b]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 45, |
| "end": 54, |
| "text": "Figure 16", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 688, |
| "end": 697, |
| "text": "Figure 16", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Binarization replaces each node v \u2208 V D by T v . Formally, D T is the DAG obtained from the union of all T v , for v \u2208 V D , by labeling the nodes and inserting the edges of D as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2022 For every node [v, b] of T v , lab D T ([v, b]) = lab D (v) if b is a leaf of T, and lab D T ([v, b]) = lab D (v)' otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2022 Let e \u2208 E D with (src D (e), tar D (e)) = (x, y) and b = edg T \u207b\u00b9(e). Then src D T (e) = [x, b] and tar D T (e) = [y, b]. To avoid confusion, we note that, according to the second item above, the edges of D are \"reused\" in D T (though of course with other source and target nodes) rather than taking copies, as one would probably do in an implementation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The construction is illustrated in the middle of Figure 16. Clearly, D T can be turned into D by contracting each sub-DAG T v into a single node (with the appropriate label). D T is binary because the T v are treelets and each leaf [v, b] is attached to exactly one of the original edges e \u2208 E D , namely, to edg T (b). In particular, D T does not have cycles. 5 Note that, because D T depends on T, one of the effects of binarization is that DAGs representing the same semantic information are no longer necessarily isomorphic. However, this is not a severe disadvantage because binarization is only a technical tool that allows us to derive efficient algorithms and, in Section 7, transfer results from the ranked case to the unranked one.", |
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 227, |
| "text": "[v, b]", |
| "ref_id": null |
| }, |
| { |
| "start": 350, |
| "end": 351, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "[Figure 16: the example DAG D with nodes x, y, u, v; a binary tree decomposition T of D, with bags identified by their Gorn addresses and boxes showing their contents; the treelets T x , T y , T v , T u ; and the resulting binarized DAG D T .]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG Binarization", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For every DAG D and every binary tree decomposition T of D of width k \u2265 1, tw(D T ) \u2264 k + 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. As a first step, consider the tree T' that is identical to T, but with the content of bag b", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2208 V T' = V T being given by cont T' (b) = {[v, b] | v \u2208 cont T (b)}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Intuitively, (the content of the bags of) T' is obtained by overlaying the different copies T v of T; see again Figure 16. With this definition, T' is not a tree decomposition of D T yet, but we note that |cont T' (b)| = |cont T (b)| for all b \u2208 V T and that every edge e \u2208 E D can be assigned to the bag b = edg T \u207b\u00b9(e)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 107, |
| "end": 116, |
| "text": "Figure 16", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "because, in D T , e connects the two nodes in cont T' (b). Furthermore, every node [v, b] of D T occurs in precisely one bag: [v, b] \u2208 cont T' (b). Hence, T' is a tree decomposition of width k except for the fact that those edges of D T which are arcs of the treelets T v are not covered by any bags. To fix this, we shall add intermediate bags to T' in order to construct a valid tree decomposition T'' of D T .", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 94, |
| "text": "[v, b]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Consider an arc e \u2208 E T with src T (e) = b and tar T (e) = c. A treelet T v contains a copy e v of e if v \u2208 cont T (b) \u2229 cont T (c), and in this case we have src T", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "v (e v ) = [v, b] and tar T v (e v ) = [v, c]. Thus, {v 1 , . . . , v \u2113 } = cont T (b) \u2229 cont T (c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "is the set of nodes v such that the edges e v exist. To make sure that each e v i is covered by a bag of T'' we insert, between c and b in T', a rising chain of bags b 1 , . . . , b \u2113 . In other words, b 0 = c becomes the child of b 1 , which becomes the child of b 2 , and so on up to b \u2113 , which becomes the child of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "b \u2113+1 = b. For i \u2208 {1, . . . , \u2113} define cont T'' (b i ) = ( cont T' (c) \\ {[v 1 , c], . . . , [v i\u22121 , c]} ) \u222a {[v 1 , b], . . . , [v i , b]}. Intuitively, viewing b 1 , . . . , b \u2113 bottom up, the nodes [v i , b] are introduced while the [v i , c]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "are forgotten, but with a delay of one. In Figure 16, this is illustrated for the right subtree of T'. Now, b i covers e v i , and", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 52, |
| "text": "Figure 16", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "|cont T'' (b i )| = |cont T' (c)| + 1 = |cont T (c)| + 1 \u2264 k + 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "For b \u2208 V T' , we let cont T'' (b) = cont T' (b). By construction, for a given bag b \u2208 V T , the bags of T'' that contain nodes of the form [v, b] form a connected subgraph of T''. This completes the proof.", |
| "cite_spans": [ |
| { |
| "start": 134, |
| "end": 140, |
| "text": "[v, b]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 1", |
| "sec_num": null |
| }, |
| { |
| "text": "For every DAG D and every binary tree decomposition T of D of width k \u2265 1, tw(LG(D T )) \u2264 2(k + 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. For a node [u, b] of D T (where u \u2208 V D and b \u2208 V T ) that is not the root of the treelet T u , let e(u, b) denote the arc of T u (which is an edge of D T ) whose target is [u, b]. As an illustration, Figure 17 (top) shows the line graph of the binarized", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 24, |
| "text": "[u, b]", |
| "ref_id": null |
| }, |
| { |
| "start": 177, |
| "end": 183, |
| "text": "[u, b]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 206, |
| "end": 221, |
| "text": "Figure 17 (top)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "DAG D T from [x, ] [x, 1] [x, 1.1] \u2022 \u2022 [x, 2] [x, 2.2] \u2022 \u2022 [y, ] [y, 1] [y, 1.1] \u2022 [y, 1.2] \u2022 \u2022 [y, 2] [y, 2.1] \u2022 \u2022 [v, 2.1] [u, ] [u, 1] [u, 1.2] \u2022 \u2022 [u, 2] [u, 2.2] \u2022 \u2022 [x, ] [x, 1] [x, 1.1] \u2022 \u2022 [x, 2] [x, 2.2] \u2022 \u2022 [y, ] [y, 1] [y, 1.1] \u2022 [y, 1.2] \u2022 \u2022 [y, 2] [y, 2.1] \u2022 \u2022 [v, 2.1] [u, ] [u, 1] [u, 1.2] \u2022 \u2022 [u, 2] [u, 2.2] \u2022 \u2022", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "The line graph of the binarized DAG D T of Figure 16 . Figure 16 , e(v i , b) does not exist and is omitted from the bags.) Clearly, T b is as claimed because k \u2265 1. Now suppose that b is not a leaf and assume that it has two children c 1 , c 2 , because this is the interesting case. Combine the inductively computed tree decompositions T c 1 and T c 2 into a single tree T 0 by adding a root bag b 0 whose contents are the union of the contents of the root bags of T c 1 and T c 2 . Then T 0 is a tree decomposition of the union G of LG(D T , c 1 ) and LG(D T , c 2 ) of width 2(k + 1) whose root b 0 contains the edges e(v, c i ) for all v \u2208 cont T (c i ) and i = 1, 2. If b is the root of T, this completes the construction since then T 0 is also a tree decomposition of LG(D T ). This is because G is LG(D T ) without the hyperedges [v, b] , which are covered by b 0 as [v, b] is only connected to [v, c 1 ] and [v, c 2 ] in T v (provided that the latter exist). For example, in Figure 17 , these are the hyperedges [x, ], [y, ], and [u, ] , which are covered by {e(x, 1), e(x, 2), e(y, 1), e(y, 2), e(u, 1), e(u, 2)}.", |
| "cite_spans": [ |
| { |
| "start": 838, |
| "end": 844, |
| "text": "[v, b]", |
| "ref_id": null |
| }, |
| { |
| "start": 875, |
| "end": 881, |
| "text": "[v, b]", |
| "ref_id": null |
| }, |
| { |
| "start": 1021, |
| "end": 1044, |
| "text": "[x, ], [y, ], and [u, ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 52, |
| "text": "Figure 16", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 55, |
| "end": 64, |
| "text": "Figure 16", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 984, |
| "end": 993, |
| "text": "Figure 17", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "v 1 , b), e(v 2 , b)}. (If [v i , b] is the root of T v i , such as [v, 2.1] in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, assume finally that b is not the root of T. If cont T (b) = {v 1 , . . . , v } then b 0 contains the (at most) two outgoing edges e 1 i = e(v i , c 1 ) and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "e 2 i = e(v i , c 2 ) of [v i , b], for i = 1, . . . , . (Again, if v i / \u2208 cont T (c j ) then e j i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "does not exist and can be disregarded.) The edges e 1 i and e 2 i are connected to e i = e(v i , b) by the ternary hyperedge", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "[v i , b] in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "LG(D T , b) (or by a binary hyperedge if only one of e 1 i , e 2 i exists). This hyperedge must be covered by a bag. As an example, consider [y, 1] in Figure 17 (bottom) . Its outgoing edges are e(y, 1.1) and e(y, 1.2), and [y, 1] is the hyperedge that connects them to e(y, 1). The situation for [x, 1] and [u, 1] is similar, even though these have only one outgoing edge each, and are thus binary hyperedges in LG(D T ).", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 147, |
| "text": "[y, 1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 151, |
| "end": 169, |
| "text": "Figure 17 (bottom)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "The bag b 0 contains all of e 1 1 , e 2 1 , . . . , e 1 , e 2 that exist. Hence, to cover [v 1 , b], . . . , [v , b] , we can proceed in a way similar to the completion of the tree decomposition T in the preceding proof by adding, on top of b 0 , a chain of bags as follows: e 1 . . . e e 1 . . . e \u22121 e e 1 e 2 e 1 e 2 e 1 2 e 2 2 e 1 3 e 2 3 . . . e 1 e 2 e 1 e 1 1 e 2 1 e 1 2 e 2 2 . . . e 1 e 2 e 1 1 e 2 1 . . . e 1 e 2 This completes the construction of T b . Since \u2264 k + 1, the largest bag added contains 2 + 1 \u2264 2(k + 1) + 1 edges of LG(D T , b) (i.e., the width of T b is at most 2(k + 1)) and the root contains e 1 , . . . , e , as claimed.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 116, |
| "text": "[v , b]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "We now describe how to construct the binarized DAG automaton M . Let M be the source DAG automaton, with state set Q. The set of states of M is defined as Q = Q \u222a Q io , where each state in Q io is an ordered pair (I, O) of multisets over Q. These states will be assigned to the edges which, in a binarized DAG, D T , stem from the treelets T v , as opposed to the original edges of D. The interpretation of assigning (I, O) to an edge e belonging to T v (i.e., an edge whose source node carries a label of the form \u03c3 ) is that we are in the process of simulating some transition M applied to v, and that the subtree of T v rooted at tar D T (e) collects those incoming and outgoing edges of the original node v that need to be assigned the states in I and O, respectively. 2. \u2205 \u03c3 \u2212 \u2192 {(I 1 , O 1 ), (I 2 , O 2 )} for all I 1 , I 2 , O 1 , O 2 such that I 1 I 2 = I and O 1 O 2 = O (these transitions process binary roots of treelets).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Note that the unions I 1 I 2 and O 1 O 2 in item 2 (and similarly in item 4) are multiset unions. Thus, no states are \"lost\" if I 1 and I 2 or O 1 and O 2 overlap. The transitions for binary roots can be omitted if we change the treelets by adding a unary root above every binary root.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Second, for nodes labeled \u03c3 that are not roots, and all I \u2286 I and O \u2286 O, we use the following transitions: As a slight optimization, the reader may have noticed that the state (\u2205, \u2205) is actually useless and can therefore be discarded, together with all transitions in which it appears.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "3. {(I , O )} \u03c3 \u2212 \u2192 {(I , O )} (these transitions simply skip unary nodes). 4. {(I , O )} \u03c3 \u2212 \u2192 {(I 1 , O 1 ), (I 2 , O 2 )} for all I 1 , I 2 , O 1 , O 2 such that I 1 I 2 = I and O 1 O 2 = O (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "To see that M works as intended, consider a DAG D \u2208 D \u03a3 , a binary tree decomposition T of D, and the binarized DAG D T . Given an accepting run \u03c1 of M on D, we can build an accepting run \u03c1 of M on D T , as follows. For all e \u2208 E D , we let \u03c1 (e) = \u03c1(e). It remains to assign appropriate states to the edges of the treelets", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "T v for v \u2208 V D . Consider such a node v and let {p 1 , . . . , p m } \u03c3 \u2212 \u2192 {q 1 , . . . , q n } be the transition of M applied to v, i.e., ({p 1 , . . . , p m }, {q 1 , . . . , q n }) = (\u03c1(in D (v)), \u03c1(out D (v))).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "For each edge e of T v such that tar T v (e ) is a leaf u of T v , consider the unique edge e \u2208 E D which is incident on u in D T . Let \u03c1 (e ) = ({\u03c1(e)}, \u2205) if tar D T (e) = u (which means that e \u2208 in D (v)) and \u03c1 (e ) = (\u2205, {\u03c1(e)}) otherwise. It follows that \u03c1 assigns ({p 1 }, \u2205), . . . , ({p m }, \u2205), (\u2205, {q 1 }), . . . , (\u2205, {q n }) to the m + n edges of T v that target the leaves of T v , and that the corresponding transitions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "{p i , ({p i }, \u2205)} \u03c3 \u2212 \u2192 \u2205 and {(\u2205, {q j })} \u03c3 \u2212 \u2192 {q j } exist in M .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Every other edge e of T v points to a \u03c3 -labeled unary or binary node of T v . If out T v (tar D T (e )) = {e 1 }, let \u03c1 (e ) = \u03c1 (e 1 ). If out T v (tar D T (e )) = {e 1 , e 2 } with \u03c1 (e i ) = (I i , O i ) for i = 1, 2, we set \u03c1 (e ) = (I 1 I 2 , O 1 O 2 ). By items 1-4 in the construction of M , the corresponding transitions are in M , which means that \u03c1 is accepting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Conversely, suppose that \u03c1 is a run of M on D T and consider one of its treelets T v whose root is labeled \u03c3 . By the transitions in items 1-4, together with the fact that only transitions of the form {p, ({p}, \u2205)} \u03c3 \u2212 \u2192 \u2205 or {(\u2205, {q})} \u03c3 \u2212 \u2192 {q} apply to the leaves of T v , it follows that there is a transition {p 1 , . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": ". , p m } \u03c3 \u2212 \u2192 {q 1 , . . . , q n } in M such that \u03c1 assigns the states ({p 1 }, \u2205), . . . , ({p m }, \u2205), (\u2205, {q 1 }), . . . , (\u2205, {q n }) to the m + n edges of T v that target the leaves of T v . In turn, this means that \u03c1 (in D (v)) = {p 1 , . . . , p m } and \u03c1 (out D (v)) = {q 1 , .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": ". . , q n }, because the edges in in D (v) and out D (v) are those whose targets and sources, respectively, are the leaves of T v in D T . Consequently, the restriction of \u03c1 to E D is a run of M on D.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "This argument yields the following theorem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Binarization", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "For every DAG D \u2208 D \u03a3 and every binary tree decomposition of D, M accepts D T if and only if M accepts D.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 3", |
| "sec_num": null |
| }, |
| { |
| "text": "We derive here an upper bound on the running time of the improved version of Algorithm 2. Recall that the binarized automaton M has state set Q = Q \u222a Q io , where Q is the state set of the source automaton M, and Q io is the set of new states of the form (I, O) added by the binarization construction. We start by deriving an upper bound on |Q io |.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "As already discussed, each state (I, O) \u2208 Q io refers to some transition t of M, with multisets I and O representing instances of states from t that still need to be assigned. Let us focus for now on I, and let us assume that m is the maximum size of the lefthand side of a transition of M. We can represent I by providing a count, for each state q \u2208 Q, of the occurrences of q in I. In this way, a number between 0 and m needs to be stored for each q. Because the left-hand side of t contains at most |Q| distinguishable states, the total number of possible values for I is the total number of possible choices with repetition of m elements from set Q. This number is usually written as (( |Q| m )) and amounts to |Q|+m\u22121 m . To simplify our formulas, we bound |Q|+m\u22121 m from above by (m + 1) |Q| . We observe that this is not a tight bound, because the worst case of m and |Q| cannot be both realized at the same time. We will discuss a tighter bound later.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Similarly to the case of multiset I, if n is the maximum size of of the right-hand side of a transition of M, we can derive an upper bound of (n + 1) |Q| for the total number of possible values for O. Putting everything together we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "|Q io | \u2264 (m + 1) |Q| (n + 1) |Q|", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Let D be some source DAG, and let D be the binarized DAG obtained from D through our construction in Section 6.2. Recall that Algorithm 2, when run on D, has a time complexity of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "O |E D | \u2022 |Q| tw(LG(D))+1", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "where tw(LG(D)) is the treewidth of the line graph LG(D). From Theorem 2 we know that tw(LG(D )) \u2264 2(tw(D) + 1). Combining these facts and our upper bound on |Q io | we have that, when given as input the DAG D and the binarized automaton M , Algorithm 2 runs in time", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "O |E D |(|Q| + (m + 1) |Q| (n + 1) |Q| ) 2tw(D)+3 (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "In what follows, we compare the two running times in (5) and (6). We start our analysis by looking into the input DAG automaton M. We have already remarked that tw(LG(D)) + 1 is at least the degree of nodes of D because every node of degree k in D is turned into a hyperedge of size k that must be covered by some bag. Thus we have tw(LG(D)) + 1 \u2265 m + n Whereas the original algorithm was exponential in the number of edges participating in M's transitions, the binarized algorithm is only polynomial in this number.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "We now hold the automaton M fixed, and analyze the running time of the two algorithms in terms of the properties of the input DAG D. In this way, the running time for the original algorithm is O |E D |c tw(LG(D))+1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Analysis", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": ", and the running time for the binarized algorithm is O |E D |c 2tw(D)+3 2 , for some constants c 1 and c 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "We start by comparing quantities |E D | and |E D |. The binarized DAG D can be preprocessed to remove any unary nodes in a treelet T v in linear time. This leaves 2(n \u2212 1) internal edges in each treelet ) , two edges internal to T u , and two edges internal to T v . Thus,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 203, |
| "end": 204, |
| "text": ")", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "T v derived from a vertex v in D having degree n. This implies that |E D | < 5|E D |, because for each edge (u, v) \u2208 E D , E D contains a copy of (u, v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "|E D | = O (|E D |)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "We are now left with the comparison of the two terms c . The binarized algorithm depends on tw(D) rather than tw(LG(D)). In general, tw(LG(D)) may be larger than tw(D) by an arbitrary amount. To see this, consider a star graph with one central node attached to n leaves. Whereas the treewidth is 1, the treewidth of the line graph is n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "In the other direction, we can derive a lower bound on tw(LG(D)) in terms of tw(D) as follows. A tree decomposition of D can be produced from a tree decomposition T of LG(D) by replacing each node of LG(D) in T with the corresponding two nodes of D. This leads to the relation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "tw(LG(D)) + 1 \u2265 1 2 (tw(D) + 1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, although the exponent in the running time of the binarized algorithm may be larger than the exponent in the original algorithm, it must be within a constant factor. As already observed, our upper bound on |Q io | is not very tight, because our worst case hypotheses cannot be realized all at the same time. Consider then the maximum number of distinguishable states appearing in the left-hand side or in the right-hand side of a transition of M, which we denote as m Q . Let also \u00b5 be the maximum number of occurrences of an individual state appearing in the left-hand side or in the righthand side of a transition of M. While we have m Q \u2264 |Q| and \u00b5 \u2264 max{m, n}, we cannot have m Q = |Q| and \u00b5 = max{m, n} both at the same time. Using these quantities, we can rewrite our upper bound on Q io as (\u00b5 + 1) 2m Q .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "To summarize, the binarized algorithm is particularly beneficial for automata having transitions with large degree and small numbers of states, or else for automata with small values of m Q and \u00b5. In these cases, the time savings over the unbinarized algorithm can be exponentially large.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1", |
| "sec_num": null |
| }, |
| { |
| "text": "In a natural language setting, we may want to model DAGs in which the incoming degree of a node is not bounded by any constant. This is useful in the semantic representation of sentences with coreference relations, in which some concept is shared among several predicates. Similarly, the outgoing degree of our DAGs should not be bounded by a constant, allowing us to add to a given predicate a number of optional modifiers that can grow with the length of the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended DAG Automata", |
| "sec_num": "7." |
| }, |
| { |
| "text": "As already discussed in Section 3.3, Quernheim and Knight (2012) exploit some ordering on the incoming edges at each node and introduce implicit rules that process these edges in several steps, making it possible to accept nodes of unbounded in-degree. This approach allows the incoming edges of a node to have states that form any semilinear set-for example, an equal number of edges in state q and in state r. This does not seem to be motivated in the perspective of natural language semantics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended DAG Automata", |
| "sec_num": "7." |
| }, |
| { |
| "text": "As an alternative, we propose a milder extension of the DAG automata in Definition 3 that is analogous to the definition of extended CFGs, also called regular right part grammars (LaLonde 1977) . In extended CFGs, the right-hand side of each production is a regular expression denoting a set of strings of nonterminal and terminal symbols. Similarly, in our extended DAG automata the left-hand side and the right-hand side of a transition can be a regular expression of a restricted type.", |
| "cite_spans": [ |
| { |
| "start": 179, |
| "end": 193, |
| "text": "(LaLonde 1977)", |
| "ref_id": "BIBREF52" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended DAG Automata", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Let Q be the state set used by some DAG automaton that we do not further specify yet. Subsequently, we view Q as the input alphabet of a device that is used to recognize the collections of incoming or outgoing states of a transition of our DAG automaton. Because these collections are multisets rather than strings, we must first introduce some machinery for the denotation or recognition of regular languages of multisets. 7.1.1 Multisets and Languages of Multisets. Recall from Section 3.1 that M(Q) denotes the collection of all (finite) multisets over Q. If \u00b5 1 , \u00b5 2 \u2208 M(Q), we write \u00b5 1 \u00b5 2 for the multiset union of \u00b5 1 and \u00b5 2 , which just adds the counts from \u00b5 1 and the counts from \u00b5 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "A language L of multisets is a subset of M(Q). If K, \u2295, \u2297, 0, 1 is a (commutative) semiring, a K-weighted (or simply weighted) language of multisets additionally assigns a weight from K to each multiset in the language. More formally, in this case L is a function L : M(Q) \u2192 K that maps every multiset \u00b5 \u2208 M(Q) to its weight L(\u00b5) \u2208 K. The weight L(\u00b5) = 0 indicates that \u00b5 is not in the language at all.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "The union of two languages L 1 \u222a L 2 is defined as usual; in the weighted case, if \u00b5 is in both languages, its weights are added. The concatenation of two languages is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "L 1 L 2 = {\u00b5 1 \u00b5 2 | \u00b5 1 \u2208 L 1 , \u00b5 2 \u2208 L 2 }; in the weighted case, L = L 1 L 2 is given by L(\u00b5) = \u00b5=\u00b5 1 \u00b5 2 L 1 (\u00b5 1 ) \u2297 L 2 (\u00b5 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "For a (weighted) language L of multisets and an integer n, we let L n = {\u2205} if n = 0 and L n = LL n\u22121 if n > 0. Finally, the Kleene star is defined as L * = i\u22650 L i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Define a unary language to be a language that only uses one symbol; that is, a language L \u2286 M({q}) for some symbol q \u2208 Q. Next, we give two equivalent characterizations of a class of regular (or recognizable) languages of multisets, analogous to regular expressions and finite automata for languages of strings. 7.1.2 C-regular Expressions. The set of c-regular expressions \u03b1 for multisets over alphabet Q (cf. Ochma\u0144ski 1985) is defined inductively as follows, together with the semantics [[\u03b1] ] of these expressions:", |
| "cite_spans": [ |
| { |
| "start": 406, |
| "end": 426, |
| "text": "(cf. Ochma\u0144ski 1985)", |
| "ref_id": null |
| }, |
| { |
| "start": 490, |
| "end": 494, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "is a c-regular expression, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "[[ ]] = {\u2205}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "If q \u2208 Q, then q is a c-regular expression, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "[[q]] = {{q}}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "If \u03b1 1 and \u03b1 2 are c-regular expressions, then \u03b1 1 \u222a \u03b1 2 is a c-regular expression, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "[[\u03b1 1 \u222a \u03b1 2 ]] = [[\u03b1 1 ]] \u222a [[\u03b1 2 ]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regular and Recognizable Languages of Multisets", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "If \u03b1 1 and \u03b1 2 are c-regular expressions, then \u03b1 1 \u03b1 2 is a c-regular expression, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "[[\u03b1 1 \u03b1 2 ]] = [[\u03b1 1 ]] [[\u03b1 2 ]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "If \u03b1 is a c-regular expression such that [[\u03b1] ] is unary, then \u03b1 * is a c-regular expression, and", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 45, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "[[\u03b1 * ]] = [[\u03b1]] * .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "No expressions but those which can be constructed according to the previous items are c-regular expressions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "6.", |
| "sec_num": null |
| }, |
| { |
| "text": "Sometimes we write q n in place of the c-regular expression q \u2022 \u2022 \u2022 q, where q is repeated n times for some integer n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "6.", |
| "sec_num": null |
| }, |
| { |
| "text": "To mention some examples, let q, r \u2208 Q. number of q's and r's. We emphasize that (qr) * is not a valid c-regular expression for this language, because the starred subexpression involves mentions of more than one state. It should be clear that the language cannot be expressed by means of a c-regular expression, because such an expression would have to contain at least two starred subexpressions, one containing only qs and one containing only rs, which necessarily allows for multisets containing different numbers of qs and rs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "6.", |
| "sec_num": null |
| }, |
| { |
| "text": "The definition of c-regular expressions can be extended to weighted c-regular expressions (Droste and Gastin 1999; Allauzen and Mohri 2006) . The semantics of a weighted c-regular expression \u03b1 over Q with weights in K is then a function [[\u03b1] ] : M(Q) \u2192 K. We have already specified how to combine the weights for the union Table 2 Terminology and notational conventions used for multiset automata and DAG automata.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 114, |
| "text": "(Droste and Gastin 1999;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 115, |
| "end": 139, |
| "text": "Allauzen and Mohri 2006)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 237, |
| "end": 241, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 323, |
| "end": 330, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "6.", |
| "sec_num": null |
| }, |
| { |
| "text": "state m-state transition m-transition M A (automaton) Q \u039e (state set) \u03a3 Q (input alphabet) \u2206 \u03c4 (transitions)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "and the concatenation operators. Then it suffices to add the following conditions, which newly introduce weights into a c-regular expression: 6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "1. 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "If \u03b1 is a (weighted) c-regular expression and k \u2208 K, then k\u03b1 is a weighted c-regular expression. The weighted language [[k\u03b1] ] is just [[\u03b1] ] with all of its weights multiplied by k:", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 124, |
| "text": "[[k\u03b1]", |
| "ref_id": null |
| }, |
| { |
| "start": 135, |
| "end": 139, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "[[k\u03b1]] (\u00b5) = k \u2297 [[\u03b1]] (\u00b5) for all \u00b5 \u2208 M(Q).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "(When writing regular expressions that are not fully parenthesized, this operation has the same precedence as concatenation.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "We note that expressions of the form k\u03b1 are not the only ones that create weights other than 1 (depending, of course, on the definition of the operations of the semiring K). For example, [[q \u222a q]] assigns the weight 1 \u2295 1 to {q} (and 0 to every other multiset). Multiset languages generated by c-regular expressions will be called regular multiset languages, and similarly for the weighted case. 7.1.3 Multiset Finite Automata. In this section we introduce weighted finite automata that recognize multisets. Later on, the use of finite automata for multisets might create several clashes with our notion of (extended) DAG automata. To avoid this, we introduce in Table 2 our naming and notational conventions used to distinguish between multiset automata and DAG automata. When referring to multiset automata, for instance, we always use the terms m-state and m-transition, whereas the terms state and transition are used for DAG automata. Note also that we are overloading symbol Q, which is used to denote the state set of a DAG automaton as well as the input alphabet of a multiset automaton. As already explained, this is because we use multiset automata to recognize the collections of incoming or outgoing states of a transition of the DAG automaton.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 663, |
| "end": 670, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
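The union and concatenation semantics just discussed (union as pointwise \u2295, concatenation as a convolution over multiset splits, and k\u03b1 as scaling by k \u2297) can be made concrete. A minimal sketch over the real semiring; the representation (a dict from a sorted-tuple encoding of a multiset to its weight) and all helper names are our own, not the paper's:

```python
def atom(q, k=1.0):
    """[[q]]: weight k on the singleton multiset {q}, 0 elsewhere."""
    return {(q,): k}

def union(f, g):
    """[[f U g]](mu) = f(mu) + g(mu): pointwise semiring addition."""
    out = dict(f)
    for mu, w in g.items():
        out[mu] = out.get(mu, 0.0) + w
    return out

def concat(f, g):
    """[[f g]](mu) = sum over splits mu = mu1 + mu2 of f(mu1) * g(mu2)."""
    out = {}
    for m1, w1 in f.items():
        for m2, w2 in g.items():
            mu = tuple(sorted(m1 + m2))  # multiset union, canonical order
            out[mu] = out.get(mu, 0.0) + w1 * w2
    return out

def scale(k, f):
    """[[k f]](mu) = k * f(mu)."""
    return {mu: k * w for mu, w in f.items()}

# [[q U q]] assigns weight 1 + 1 = 2 to {q}, as noted in the text,
# and scaling then multiplies every weight by k.
assert union(atom("q"), atom("q"))[("q",)] == 2.0
assert scale(3.0, union(atom("q"), atom("q")))[("q",)] == 6.0
```

The Kleene star is deliberately omitted here: its support is infinite, so this finite-dict representation cannot express it directly.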
| { |
| "text": "In the following, we assume that a multiset \u00b5 \u2208 M(Q) is represented as a sequence of all the elements of \u00b5, with repetitions, in any possible order. In this way we can use standard string automata to process multisets. A weighted finite automaton has the same form as a weighted finite automaton for strings (F\u00fcl\u00f6p and Kuich 2009) .", |
| "cite_spans": [ |
| { |
| "start": 308, |
| "end": 330, |
| "text": "(F\u00fcl\u00f6p and Kuich 2009)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "The difference is that the order in which the elements of the multiset are read by the automaton must not affect the computed weight.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DAG automaton Multiset automaton", |
| "sec_num": null |
| }, |
| { |
| "text": "A weighted finite automaton for multisets, or m-automaton for short, is defined as a tuple A = (\u039e, Q, \u03c4, s, \u03c1) , where", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 95, |
| "end": 110, |
| "text": "(\u039e, Q, \u03c4, s, \u03c1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Definition 5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u039e is a set of m-states,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "Q is a finite input alphabet, 3. \u03c4 : \u039e \u00d7 Q \u00d7 \u039e \u2192 K is the m-transition function, satisfying the following condition: for all m-states i, k \u2208 \u039e and for all alphabet symbols q, r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2208 Q j\u2208\u039e \u03c4(i, q, j)\u03c4(j, r, k) = j\u2208\u039e \u03c4(i, r, j)\u03c4(j, q, k)", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "4. s \u2208 \u039e is the initial m-state, and 5. \u03c1 : \u039e \u2192 K maps m-states to final weights (where a state j \u2208 \u039e is called final if \u03c1(j) = 0).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "As a notational convention, in what follows we always assume \u039e = {1, . . . , d}, for some d \u2265 1. Condition (7) in Definition 5 states that, when we move from i to j by processing symbols q and r, the resulting total weight does not depend on the order in which q and r are read. It should be clear that Condition (7) is a sufficient condition for the desired property that an m-automaton assigns the same weight to all possible permutations of a given sequence (see below for a precise definition of the weight of a sequence). However, Condition (7) is not a necessary condition for this property; it is not difficult to provide a counter-example to show this; however, we do not further pursue this issue here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
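Condition (7) says exactly that the per-symbol transition matrices commute pairwise. A brute-force check over the real semiring makes this easy to test; the helper name and the two example automata below are our own, not the paper's:

```python
from itertools import combinations

def satisfies_condition_7(tau, states, alphabet):
    """Condition (7): for every pair of symbols q, r and all m-states i, k,
    sum_j tau(i,q,j)*tau(j,r,k) == sum_j tau(i,r,j)*tau(j,q,k),
    i.e. the per-symbol transition matrices commute pairwise.
    `tau` maps (i, symbol, j) to a weight; missing entries count as 0."""
    w = lambda i, q, j: tau.get((i, q, j), 0.0)
    for q, r in combinations(alphabet, 2):
        for i in states:
            for k in states:
                lhs = sum(w(i, q, j) * w(j, r, k) for j in states)
                rhs = sum(w(i, r, j) * w(j, q, k) for j in states)
                if lhs != rhs:
                    return False
    return True

# q and r both swap the two m-states: the matrices coincide, so they commute
tau_ok = {(1, "q", 2): 1.0, (2, "q", 1): 1.0, (1, "r", 2): 1.0, (2, "r", 1): 1.0}
# q loops on m-state 1 while r funnels everything into m-state 2: no commuting
tau_bad = {(1, "q", 1): 1.0, (1, "r", 2): 1.0, (2, "r", 2): 1.0}
assert satisfies_condition_7(tau_ok, [1, 2], ["q", "r"])
assert not satisfies_condition_7(tau_bad, [1, 2], ["q", "r"])
```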
| { |
| "text": "Let us now formally define the semantics [[A] ] of an m-automaton as a mapping from multisets to weights in K. Let \u00b5 be a multiset over Q with n elements, n \u2265 0. We arrange the elements of \u00b5 into a string c = q 1 q 2 \u2022 \u2022 \u2022 q n , choosing the order of symbols arbitrarily. Now, we define [[A]] (\u00b5) to be [[A] ] (c), where [[A] ] (c) is given as usual for weighted finite automata on strings. To make this explicit, let a run of A on c be any sequence \u03c1 c = i 0 i 1 \u2022 \u2022 \u2022 i n of m-states in \u039e such that i 0 = s. We extend the m-transition function \u03c4 to runs by viewing \u03c1 c as a sequence of n transitions of A on c, and by taking the product of weights of these transitions and the final weight for i n : ", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 45, |
| "text": "[[A]", |
| "ref_id": null |
| }, |
| { |
| "start": 303, |
| "end": 307, |
| "text": "[[A]", |
| "ref_id": null |
| }, |
| { |
| "start": 321, |
| "end": 325, |
| "text": "[[A]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "[[A]] (\u03c1 c ) = n h=1 \u03c4(i h\u22121 , q h , i h ) \u2297 \u03c1(i n ) Accordingly,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "As an example, consider the language represented by the weighted c-regular expression (qq) * (rr) * . This is the language of all multisets containing an even number of q's and an even number of r's, where each multiset has the weight 1. The corresponding m-automaton A is shown below. Here, an edge from i to j labeled by q/w means that \u03c4(i, q, j) = w, and a missing edge indicates that \u03c4(i, q, j) = 0. The final weight of all the states of A is 0 except for state 1, whose final weight is 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 12", |
| "sec_num": null |
| }, |
| { |
| "text": "1 2 3 4 q/1 q/1 r/1 r/1 q/1 q/1 r/1 r/1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 12", |
| "sec_num": null |
| }, |
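The Example 12 automaton can be simulated directly. A sketch over the real semiring, assuming the natural labeling in which q-edges connect states 1 and 2 and states 3 and 4, and r-edges connect states 1 and 3 and states 2 and 4 (any consistent labeling behaves the same); the forward-algorithm helper is ours, not the paper's:

```python
from itertools import permutations

# States 1-4 track the parities of the numbers of q's and r's read so far.
states = [1, 2, 3, 4]
tau = {}
for (i, j) in [(1, 2), (2, 1), (3, 4), (4, 3)]:
    tau[(i, "q", j)] = 1.0   # reading q flips the q-parity
for (i, j) in [(1, 3), (3, 1), (2, 4), (4, 2)]:
    tau[(i, "r", j)] = 1.0   # reading r flips the r-parity
rho = {1: 1.0, 2: 0.0, 3: 0.0, 4: 0.0}   # only state 1 has final weight 1

def weight(seq, start=1):
    """[[A]](c): sum over runs of the product of transition weights and the
    final weight, computed with the forward algorithm (real semiring)."""
    fwd = {i: (1.0 if i == start else 0.0) for i in states}
    for sym in seq:
        fwd = {j: sum(fwd[i] * tau.get((i, sym, j), 0.0) for i in states)
               for j in states}
    return sum(fwd[i] * rho[i] for i in states)

# every ordering of the multiset {q, q, r, r} receives weight 1 ...
assert all(weight(p) == 1.0 for p in set(permutations(("q", "q", "r", "r"))))
# ... and an odd number of q's receives weight 0
assert weight(("q", "r", "r")) == 0.0
```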
| { |
| "text": "It is easy to verify that Condition (7) holds for this m-automaton. Consider for instance the multiset \u00b5 = {q, q, r, r}. We have that, for any ordering c of the elements of \u00b5, one run relative to c has weight 1 and all remaining runs have weight of 0. We thus have [[A] ] (\u00b5) = 1. Multiset (weighted) languages generated by (weighted) m-automata will be called recognizable multiset (weighted) languages. 7.1.4 Equivalence of C-regular Expressions and M-automata. The relationship between (restrictions of) regular expressions and finite automata on trace monoids was investigated by Ochma\u0144ski (1985) and extended by Droste and Gastin (1999) to weighted regular expressions and finite automata. These results, applied to multisets (the simplest example of trace monoids), imply that weighted c-regular expressions and weighted m-automata are equivalent. Because the equivalence proof is much easier for this case, we include it here for completeness. For this, let us make a short detour to recall the technique for turning an ordinary regular expression (i.e., on strings) into a finite automaton originally proposed by McNaughton and Yamada (1960) . (For regular expressions and finite automata on strings, we use the same notation as introduced above for the multiset case, only changing their semantics in the obvious way.)", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 269, |
| "text": "[[A]", |
| "ref_id": null |
| }, |
| { |
| "start": 584, |
| "end": 600, |
| "text": "Ochma\u0144ski (1985)", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 617, |
| "end": 641, |
| "text": "Droste and Gastin (1999)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1121, |
| "end": 1149, |
| "text": "McNaughton and Yamada (1960)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 12", |
| "sec_num": null |
| }, |
| { |
| "text": "Every string-based regular expression \u03b1 can be converted into an equivalent finite automaton A such that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 4 (McNaughton-Yamada)", |
| "sec_num": null |
| }, |
| { |
| "text": "A has no transitions, and 2. the initial state of A has no incoming transitions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. The construction proceeds by induction on the structure of the regular expression:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "(a) If \u03b1 = then A consists of a single initial and final state.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "If \u03b1 = q, q \u2208 Q, then A consists of the following two states (initial and final, respectively):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(b)", |
| "sec_num": null |
| }, |
| { |
| "text": "q (c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(b)", |
| "sec_num": null |
| }, |
| { |
| "text": "If \u03b1 = \u03b2 \u222a \u03b3, then convert \u03b2 and \u03b3 to automata A 1 and A 2 , respectively. Let s 1 and s 2 be the initial states of A 1 and A 2 , respectively. Then merge s 1 and s 2 into a single new initial state, which is a final state if either s 1 or s 2 was.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(b)", |
| "sec_num": null |
| }, |
| { |
| "text": "If \u03b1 = \u03b2\u03b3, then convert \u03b2 and \u03b3 to automata A 1 and A 2 , respectively. Then for each final state f of A 1 and for each transition s 2 q \u2212 \u2192 j, where s 2 is the initial state of A 2 , add a transition f q \u2212 \u2192 j. State f continues to be a final state if and only if s 2 was. Then remove s 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(d)", |
| "sec_num": null |
| }, |
| { |
| "text": "If \u03b1 = \u03b2 * , then convert \u03b2 to an automaton A 1 . Then for each final state f of A 1 and for each transition s 1 q \u2212 \u2192 j, where s 1 is the initial state of A 1 , add a transition f q \u2212 \u2192 j. Finally, add s 1 to the set of final states.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(e)", |
| "sec_num": null |
| }, |
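The cases above translate into code almost verbatim. A sketch of the unweighted McNaughton-Yamada construction (the dict-based automaton representation and all names are ours); by construction the result has no \u03b5-transitions and a non-reentrant initial state:

```python
import itertools

_fresh = itertools.count()   # global supply of fresh state ids

def atom(sym=None):
    """Cases (a)/(b): the automaton for epsilon (sym=None) or one symbol."""
    s = next(_fresh)
    if sym is None:
        return {"start": s, "finals": {s}, "trans": set()}
    f = next(_fresh)
    return {"start": s, "finals": {f}, "trans": {(s, sym, f)}}

def union(a1, a2):
    """Case (c): merge the two initial states (safe: no incoming edges)."""
    trans = set(a1["trans"])
    for (i, x, j) in a2["trans"]:
        trans.add((a1["start"] if i == a2["start"] else i, x, j))
    finals = set(a1["finals"]) | (a2["finals"] - {a2["start"]})
    if a2["start"] in a2["finals"]:
        finals.add(a1["start"])
    return {"start": a1["start"], "finals": finals, "trans": trans}

def concat(a1, a2):
    """Case (d): splice a2 onto every final state of a1, then drop a2's start."""
    trans = set(a1["trans"])
    for (i, x, j) in a2["trans"]:
        if i == a2["start"]:
            for f in a1["finals"]:
                trans.add((f, x, j))
        else:
            trans.add((i, x, j))
    finals = set(a2["finals"]) - {a2["start"]}
    if a2["start"] in a2["finals"]:
        finals |= a1["finals"]
    return {"start": a1["start"], "finals": finals, "trans": trans}

def star(a1):
    """Case (e): copy the start's outgoing edges to every final state,
    then make the start state final."""
    trans = set(a1["trans"])
    start_out = [(x, j) for (i, x, j) in a1["trans"] if i == a1["start"]]
    for f in a1["finals"]:
        for (x, j) in start_out:
            trans.add((f, x, j))
    return {"start": a1["start"], "finals": set(a1["finals"]) | {a1["start"]},
            "trans": trans}

def accepts(a, s):
    """Subset simulation; the automata have no epsilon transitions."""
    cur = {a["start"]}
    for ch in s:
        cur = {j for (i, x, j) in a["trans"] if i in cur and x == ch}
    return bool(cur & a["finals"])

# (ab | c)*: epsilon-free, and the start state never acquires incoming edges
nfa = star(union(concat(atom("a"), atom("b")), atom("c")))
assert accepts(nfa, "") and accepts(nfa, "abc") and accepts(nfa, "cab")
assert not accepts(nfa, "a") and not accepts(nfa, "ba")
```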
| { |
| "text": "Still discussing the string case, the preceding theorem can easily be extended to weighted regular expressions and weighted finite automata by augmenting the cases of the construction above as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(e)", |
| "sec_num": null |
| }, |
| { |
| "text": "(a)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(e)", |
| "sec_num": null |
| }, |
| { |
| "text": "The unique state of A gets the final weight 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(e)", |
| "sec_num": null |
| }, |
| { |
| "text": "The transition of A gets the weight 1 and so does the final state, while the initial state gets the weight 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(b)", |
| "sec_num": null |
| }, |
| { |
| "text": "(c) When merging s 1 and s 2 into a single state in the construction of A for \u03b2 \u222a \u03b3, their final weights are summed up. This is correct because these states have no incoming transitions. 7 ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(b)", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, to convert an expression \u03b1 = k\u03b2 (k \u2208 K), just take the automaton obtained for \u03b2 and multiply all final weights by k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(f)", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us now return to the case of c-regular expressions and m-automata. We will need the notion of size for (weighted) c-regular expressions and m-automata. The size |\u03b1| of a c-regular expression \u03b1 is the number of occurrences of nullary symbols ( s and alphabet symbols) in it. The size |A| of an m-automaton A is its number of states.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(f)", |
| "sec_num": null |
| }, |
| { |
| "text": "For the equivalence of weighted c-regular expressions and weighted m-automata, note first that if we restrict attention to unary languages, then there is no relevant difference between weighted c-regular expressions and weighted regular expressions on strings, or between weighted m-automata and weighted finite automata on strings. This is because commuting symbols in a string over a unary alphabet does not change anything. Hence weighted c-regular expressions and weighted m-automata are clearly equivalent in the unary case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(f)", |
| "sec_num": null |
| }, |
| { |
| "text": "In particular, it is possible to convert a unary weighted c-regular expression \u03b1 to an equivalent weighted m-automaton A(\u03b1), using the McNaughton-Yamada construction recalled earlier. By an easy induction following the construction in Theorem 4, the size of A(\u03b1) will then be at most |\u03b1| + 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(f)", |
| "sec_num": null |
| }, |
| { |
| "text": "For treating the general case, the property in Theorem 4 that the initial state has no incoming transitions is quite useful. We will call weighted m-automata satisfying this requirement non-reentrant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(f)", |
| "sec_num": null |
| }, |
| { |
| "text": "We will make use of the following lemma.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(f)", |
| "sec_num": null |
| }, |
| { |
| "text": "Let L 1 and L 2 be multiset languages recognizable by non-reentrant weighted m-automata.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "1. L 1 \u222a L 2 is recognizable by a non-reentrant weighted m-automaton.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "If L 1 , L 2 use disjoint sets of symbols, then L 1 L 2 is recognizable by a non-reentrant weighted m-automaton.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. For the first statement, if A 1 and A 2 recognize L 1 and L 2 , respectively, just use the McNaughton-Yamada construction for the union operator. The resulting weighted mautomaton satisfies the commutativity requirement (Condition (7)) because each of the individual automata does. For the second statement, now let A 1 = (S 1 , Q 1 , \u03c4 1 , s 1 , \u03c1 1 ) and A 2 = (S 2 , Q 2 , \u03c4 2 , s 2 , \u03c1 2 ) recognize L 1 and L 2 , respectively, where Q 1 \u2229 Q 2 = \u2205. Then the shuffle product (Hopcroft and Ullman 1979, p. 142) of A 1 and A 2 is the automaton that simulates A 1 and A 2 together, feeding each input symbol to either machine but not both. More formally,", |
| "cite_spans": [ |
| { |
| "start": 485, |
| "end": 519, |
| "text": "(Hopcroft and Ullman 1979, p. 142)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "A = (S 1 \u00d7 S 2 , Q 1 \u222a Q 2 , \u03c4, (s 1 , s 2 ), \u03c1), where: \u03c4((i 1 , i 2 ), q, (j 1 , i 2 )) = \u03c4 1 (i 1 , q, j 1 ) q \u2208 Q 1 \u03c4((i 1 , i 2 ), q, (i 1 , j 2 )) = \u03c4 2 (i 2 , q, j 2 ) q \u2208 Q 2 and \u03c1(i 1 , i 2 ) = \u03c1 1 (i 1 )\u03c1 2 (i 2 ) for all (i 1 , i 2 ) \u2208 S 1 \u00d7 S 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Clearly, A recognizes the multisets of L 1 L 2 in any order, and it is non-reentrant if both A 1 and A 2 are.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
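The shuffle-product construction in the proof can be written down directly. A sketch over the real semiring, with our own dict-based encoding (tau maps (state, symbol, state) to a weight, rho maps states to final weights); the alphabets must be disjoint, as in Lemma 1:

```python
def shuffle(tau1, rho1, s1, tau2, rho2, s2):
    """Shuffle product of two m-automata with disjoint alphabets: the pair
    automaton feeds each symbol to whichever machine owns it, but not both."""
    S1, S2 = set(rho1), set(rho2)
    tau = {}
    for i1 in S1:
        for i2 in S2:
            for (a, q, b), w in tau1.items():
                if a == i1:                      # A1 moves, A2 stays put
                    tau[((i1, i2), q, (b, i2))] = w
            for (a, q, b), w in tau2.items():
                if a == i2:                      # A2 moves, A1 stays put
                    tau[((i1, i2), q, (i1, b))] = w
    rho = {(i1, i2): rho1[i1] * rho2[i2] for i1 in S1 for i2 in S2}
    return tau, rho, (s1, s2)

def weight(tau, rho, start, seq):
    """Forward algorithm over the real semiring."""
    states = list(rho)
    fwd = {i: (1.0 if i == start else 0.0) for i in states}
    for sym in seq:
        fwd = {j: sum(fwd[i] * tau.get((i, sym, j), 0.0) for i in states)
               for j in states}
    return sum(fwd[i] * rho[i] for i in states)

# A1 accepts an even number of q's (weight 1); A2 accepts exactly one r (weight 2)
tau1 = {(1, "q", 2): 1.0, (2, "q", 1): 1.0}
rho1 = {1: 1.0, 2: 0.0}
tau2 = {(1, "r", 2): 2.0}
rho2 = {1: 0.0, 2: 1.0}
tau, rho, start = shuffle(tau1, rho1, 1, tau2, rho2, 1)

# {q, q, r} gets weight 1 * 2 = 2 in every reading order
assert weight(tau, rho, start, ("q", "r", "q")) == 2.0
assert weight(tau, rho, start, ("r", "q", "q")) == 2.0
assert weight(tau, rho, start, ("q", "q")) == 0.0   # A2 never reaches a final state
```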
| { |
| "text": "Every weighted c-regular expression \u03b1 with |\u03b1| \u2264 n can be converted into an equivalent non-reentrant weighted m-automaton A(\u03b1) with |A(\u03b1)| \u2264 2 n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. First, we show that any weighted c-regular expression \u03b1 can be rewritten in the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b1 = i\u2208I q\u2208Q \u03b1 iq", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "for a suitable index set I, where each \u03b1 iq only uses the alphabet {q}. We show this by induction on the structure of \u03b1. Throughout the proof, we write \u03b1 \u2261 \u03b1 if [[\u03b1] ] = [[\u03b1 ] ]. If \u03b1 = , \u03b1 = q or \u03b1 = \u03b2 * , then \u03b1 is trivially in the desired form. Now, assume as induction hypothesis that \u03b2 and \u03b3 are in the desired form.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 165, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| }, |
| { |
| "start": 170, |
| "end": 175, |
| "text": "[[\u03b1 ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "r If \u03b1 = \u03b2 \u222a \u03b3, then \u03b1 = i\u2208I q\u2208Q \u03b2 iq \u222a j\u2208J q\u2208Q \u03b3 jq", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "which is in the desired form. r If \u03b1 = k\u03b2 for a k \u2208 K, choose a state q 0 \u2208 Q. Then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b1 = k \uf8eb \uf8ed i\u2208I q\u2208Q \u03b2 iq \uf8f6 \uf8f8 \u2261 i\u2208I k \uf8eb \uf8ed q\u2208Q \u03b2 iq \uf8f6 \uf8f8 (by distributivity of multiplication over addition) \u2261 i\u2208I (k\u03b2 iq 0 ) q\u2208Q\\{q 0 } \u03b2 iq", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Second, we convert an expression in this form to a non-reentrant weighted m-automaton as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "r For each \u03b1 iq , convert it into an automaton A(\u03b1 iq ) using the ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "r If \u03b1 = \u03b2 \u222a \u03b3, then |A(\u03b1)| = |A(\u03b2)| + |A(\u03b3)| \u2212 1 < 2 |\u03b2| + 2 |\u03b3| \u2264 2 |\u03b2| \u2022 2 |\u03b3| = 2 |\u03b1|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "r If \u03b1 = \u03b2\u03b3, and \u03b2 \u2261 i q \u03b2 iq , and \u03b3 \u2261 j q \u03b3 jq , then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "|A(\u03b1)| = i j q (|A(\u03b2 iq )| + |A(\u03b3 jq )|) \u2264 i j q |A(\u03b2 iq )| \u2022 |A(\u03b3 jq )| = \uf8eb \uf8ed i q |A(\u03b2 iq )| \uf8f6 \uf8f8 \uf8eb \uf8ed j q |A(\u03b3 jq )| \uf8f6 \uf8f8 = |A(\u03b2)| \u2022 |A(\u03b3)| \u2264 2 |\u03b2| 2 |\u03b3| = 2 |\u03b1|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "It is also possible to convert a weighted m-automaton to an equivalent weighted c-regular expression. Even though we do not use this result here, we mention it briefly for completeness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Any weighted m-automaton can be converted into an equivalent weighted c-regular expression.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. Given a weighted m-automaton A, view it as a weighted string automaton and intersect it with an ordinary string automaton recognizing the language", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 6", |
| "sec_num": null |
| }, |
| { |
| "text": "q * 1 \u2022 \u2022 \u2022 q * |Q| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 6", |
| "sec_num": null |
| }, |
| { |
| "text": "The resulting automaton A is a weighted string automaton such that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 6", |
| "sec_num": null |
| }, |
| { |
| "text": "[[A ]](u) = [[A]] (u) if u \u2208 q * 1 \u2022 \u2022 \u2022 q * |Q| 0 otherwise", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Now use the standard state elimination algorithm (for the weighted case) to convert A into a regular expression (which will then necessarily be a weighted c-regular expression).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 6", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section we introduce a definition that extends the DAG automata of Section 3. We start with an overview of the basic idea. In our extended DAG automata, the left-hand side and the right-hand side of a transition are weighted c-regular expressions \u03b1 and \u03b2 defining (the weights of) acceptable combinations of states at the incoming and at the outgoing edges, respectively, of the node to be processed. These c-regular expressions are therefore defined over the alphabet Q, that is, the state set of the extended DAG automaton.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Weighted DAG Automata", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Note that a transition in an extended DAG automaton has a potentially infinite set of instantiations {q 1 , . . . , q m } \u03c3 \u2212 \u2192 {r 1 , . . . , r n }, where a transition instantiation is defined as a transition of the DAG automata of Section 3. The weight of such a transition instantiation is defined as the product of the weights assigned by \u03b1 and \u03b2 to {q 1 , . . . , q m } and {r 1 , . . . , r n }, respectively. Using the definition of run for a DAG D that we introduced in Section 3.1, the weight of a run on D is the product of the weights of all instantiated transitions of the run. In the unweighted case, this means that D is accepted by the extended DAG automaton if and only if there exists an assignment of states to the edges of D such that, for each node v of D with label \u03c3, there is some extended transition \u03b1 \u03c3 \u2212 \u2192 \u03b2 such that the multiset of states at the incoming edges of v matches \u03b1 and, similarly, the multiset of states at the outgoing edges matches \u03b2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Weighted DAG Automata", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "An extended weighted DAG automaton is a tuple M = (\u03a3, Q, \u2206, K), where 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03a3, Q, and K are defined as in the case of weighted DAG automata 2. \u2206 is a transition relation consisting of a finite set of triples of the form t = \u03b1, \u03c3, \u03b2 , where \u03c3 \u2208 \u03a3 and \u03b1, \u03b2 are K-weighted c-regular expressions over Q. We also write t in the form \u03b1 \u03c3 \u2212 \u2192 \u03b2 and call it an extended transition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "For a precise definition of the semantics of extended DAG automata, recall that a run has been defined in Section 3.1 as an assignment of states of the automaton to the edges of a DAG. Consider a run \u03c1 on a DAG D = (V, E, lab, src, tar). Let v \u2208 V with lab(v) = \u03c3, and let l and r be the multisets of states on its incoming and outgoing edges, respectively, under \u03c1. This gives rise to an unweighted transition instance", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "t = (l \u03c3 \u2212 \u2192 r). Every extended transition \u03b1 \u03c3 \u2212 \u2192 \u03b2 in \u2206 contributes [[\u03b1]] (l) \u2297 [[\u03b2]] (r) to the weight of t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "We denote the resulting weight of t by w \u03c1 (v), namely,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "w \u03c1 (v) = (\u03b1 \u03c3 \u2212 \u2192\u03b2)\u2208\u2206 [[\u03b1]] (l) \u2297 [[\u03b2]] (r)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Now, as mentioned before, the weight of \u03c1 is obtained by taking the product of all the w \u03c1 (v), over all v \u2208 V, and the total weight of D is the sum of the weights of all runs on D:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "[[M]] (D) = run \u03c1 on D v\u2208V w \u03c1 (v)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
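To make this sum-of-products semantics concrete, [[M]](D) can be evaluated by brute force on a tiny example. The automaton, DAG, and all names below are invented for illustration (real semiring); [[\u03b1]] and [[\u03b2]] are given directly as Python functions on multisets standing in for c-regular expressions:

```python
from itertools import product
from collections import Counter

STATES = ["q", "r"]

def star_q(mu):    # [[q*]]: weight 1 iff every element of mu is q
    return 1.0 if all(s == "q" for s in mu.elements()) else 0.0
def empty(mu):     # [[eps]]: weight 1 iff mu is the empty multiset
    return 1.0 if sum(mu.values()) == 0 else 0.0
def single_q(mu):  # [[q]]: weight 1 iff mu is exactly {q}
    return 1.0 if mu == Counter({"q": 1}) else 0.0

# extended transitions, per node label: a list of ([[alpha]], [[beta]]) pairs
delta = {"f": [(empty, star_q)], "a": [(single_q, empty)]}

# DAG D: a root labeled "f" with edges e1, e2 to two leaves labeled "a"
edges = ["e1", "e2"]
incoming = {"f": [], "a1": ["e1"], "a2": ["e2"]}
outgoing = {"f": ["e1", "e2"], "a1": [], "a2": []}
label = {"f": "f", "a1": "a", "a2": "a"}

# [[M]](D): sum over all runs (assignments of states to edges) of the
# product over nodes of the summed extended-transition weights
total = 0.0
for assignment in product(STATES, repeat=len(edges)):
    run = dict(zip(edges, assignment))
    w = 1.0
    for v in incoming:
        l = Counter(run[e] for e in incoming[v])
        r = Counter(run[e] for e in outgoing[v])
        w *= sum(a(l) * b(r) for (a, b) in delta[label[v]])
    total += w

# only the run assigning q to both edges has nonzero weight
assert total == 1.0
```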
| { |
| "text": "We now argue that, if the support of [[\u03b1] ] and [[\u03b2] ] is finite for all transitions \u03b1, \u03c3, \u03b2 of an extended DAG automaton, then the automaton is equivalent to an ordinary DAG automaton. (The support of a weighted c-regular expression \u03b1 over Q is the set of all \u00b5 \u2208 M(Q) such that [[\u03b1] ] (\u00b5) = 0.) To turn a transition l \u03c3/w \u2212\u2212\u2192 r of an ordinary DAG automaton into an equivalent extended transition \u03b1 \u03c3 \u2212 \u2192 \u03b2, let \u03b1 be a sequence with all the elements of multiset l, in any order, and define \u03b1 = w\u03b1 . Furthermore, let \u03b2 be defined as a sequence with all the elements of multiset r, in any order. In this way we have that, for all multisets m \u2208 M(Q), ", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 41, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| }, |
| { |
| "start": 48, |
| "end": 52, |
| "text": "[[\u03b2]", |
| "ref_id": null |
| }, |
| { |
| "start": 280, |
| "end": 284, |
| "text": "[[\u03b1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b4(l, \u03c3, r) = (\u03b1 \u03c3 \u2212 \u2192\u03b2)\u2208\u2206 [[\u03b1]] (l) \u2297 [[\u03b2]] (r)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Because any c-regular expression appearing in an extended transition of \u2206 has finite support, function \u03b4 is finite, as desired.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6", |
| "sec_num": null |
| }, |
| { |
| "text": "An extended DAG automaton that models AMR structures with unbounded node degree is specified in Figure 18 . The extended transitions are based on the (non-extended) transitions in Figure 7 , and make use of the Kleene star operator of c-regular expressions. More specifically, each predicate can take zero or more modifiers, labeled mod, allowing sentences such as \"John wants Mary to believe Sue,\" \"John wants Mary to believe Sue today,\" and so forth. Similarly, entities including John, Mary, and Sue can be generated from one or more states labeled q person , allowing an arbitrary number of instances of coreference.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 96, |
| "end": 105, |
| "text": "Figure 18", |
| "ref_id": null |
| }, |
| { |
| "start": 180, |
| "end": 188, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 13", |
| "sec_num": null |
| }, |
| { |
| "text": "We now extend the properties studied for unweighted non-extended DAG automata to the (also unweighted) extended case. Thus, from this point onwards up until the start of Section 7.4, all DAG automata and extended DAG automata are assumed to be unweighted. Consequently, the m-automata A = (\u039e, Q, \u03c4, s, \u03c1) appearing in this section are also unweighted. We therefore view the transition function \u03c4 as a set of transitions \u03be, q, \u03be rather than as a mapping from all possible such transitions to the domain {true, false} of the Boolean semiring. As a basis for most of the results of this section, we use a binarization approach similar to Section 6, though somewhat simpler because we do not want to optimize parsing and thus do not need to use tree decompositions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Let M = (\u03a3, Q, \u2206, K) be an extended DAG automaton. We binarize (unranked) DAGs over \u03a3 as follows. Let \u03a3 = {\u03c3 m,n | \u03c3 \u2208 \u03a3} for m, n \u2208 {0, 1, 2}. The idea ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Transitions: \u2205 \u2212want\u2192 q_want-arg0 q_want-arg1 q*_want-mod; q_want-arg0 \u2212ARG0\u2192 q_person; q_want-arg1 \u2212ARG1\u2192 q_pred; q_pred \u2212believe\u2192 q_believe-arg0 q_believe-arg1 q*_believe-mod; q_believe-arg0 \u2212ARG0\u2192 q_person; q_believe-arg1 \u2212ARG1\u2192 q_person; q_person q*_person \u2212John\u2192 \u2205; q_person q*_person \u2212Mary\u2192 \u2205; q_person q*_person \u2212Sue\u2192 \u2205", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Extended rules for AMRs. The Kleene star in the expressions for input and output state multisets means that a state can occur zero or more times.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "underlying the binarization of a DAG D is to replace every \u03c3-labeled node v of in-degree m and out-degree n by a \"vertical\" chain of m + n + 3 copies of \u03c3 of the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c3_{0,1} \u03c3_{2,1} \u22ef \u03c3_{2,1} (m times) \u03c3_{1,1} \u03c3_{1,2} \u22ef \u03c3_{1,2} (n times) \u03c3_{1,0},", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "where each \u03c3_{2,1} is assigned one of the incoming edges of v, and likewise each \u03c3_{1,2} one of the outgoing edges of v. Figure 19 shows an example where (m, n) = (3, 2). Note that nodes of in-degree 0 are turned into chains that start with \u03c3_{0,1} \u03c3_{1,1} at the top. Similarly, nodes of out-degree 0 are turned into chains that end in \u03c3_{1,1} \u03c3_{1,0} at the bottom. The chains illustrated in Figure 19 contain m + n + 2 additional edges (those on the vertical spine) and m + n edges stemming from the original DAG. In a run of M', the latter are assigned states", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 127, |
| "end": 136, |
| "text": "Figure 19", |
| "ref_id": null |
| }, |
| { |
| "start": 366, |
| "end": 375, |
| "text": "Figure 19", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c3 \u21d2 \u03c3_{0,1} \u03c3_{2,1} \u03c3_{2,1} \u03c3_{2,1} \u03c3_{1,1} \u03c3_{1,2} \u03c3_{1,2} \u03c3_{1,0}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "A node of in-degree 3 and out-degree 2, and the corresponding sub-DAG created by binarization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "in Q, whereas the former are assigned states of the m-automata that implement the transitions of M.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "Consider such a transition \u27e8\u03b1, \u03c3, \u03b2\u27e9 and let A = (\u039e, Q, \u03c4, s, \u03c1) and A' = (\u039e', Q, \u03c4', s', \u03c1') with \u039e \u2229 \u039e' = \u2205 be m-automata equivalent to \u03b1 and \u03b2, respectively. Then M' contains the transitions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 \u2205 \u2212\u03c3_{0,1}\u2192", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "{s} (this assigns the initial state of A to the top-most edge of the spine),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 {\u03be, q} \u2212\u03c3_{2,1}\u2192", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "{\u03be'} for all \u03be, \u03be' \u2208 \u039e and q \u2208 Q with \u27e8\u03be, q, \u03be'\u27e9 \u2208 \u03c4 (this \"reads\" q on an incoming edge in state \u03be, assigning the resulting state \u03be' to the next edge on the spine),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 {\u03be} \u2212\u03c3_{1,1}\u2192", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "{s'} for every final state \u03be of A (this lets the computation \"cross\" the middle of the spine, handing over from A to A', which continues to work on the outgoing edges) and, similarly to the preceding items,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 {\u03be} \u2212\u03c3_{1,2}\u2192", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "{\u03be', q} for all \u03be, \u03be' \u2208 \u039e' and q \u2208 Q with \u27e8\u03be, q, \u03be'\u27e9 \u2208 \u03c4' (similarly to the second item, but \"reading\" q on an outgoing edge), and \u2022 {\u03be} \u2212\u03c3_{1,0}\u2192 \u2205 for every final state \u03be of A'. As a consequence, we can extend three results from non-extended DAG automata to extended ones: the emptiness and finiteness problems are decidable, and the path languages are regular. (The decidability of the finiteness problem was shown in Blum and Drewes [2016] for the non-extended case.)", |
| "cite_spans": [ |
| { |
| "start": 367, |
| "end": 389, |
| "text": "Blum and Drewes [2016]", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "These results are summarized in the next theorem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 19", |
| "sec_num": null |
| }, |
| { |
| "text": "For unweighted extended DAG automata 1. the emptiness problem is decidable, 2. the finiteness problem is decidable, and 3. the path language is regular.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "If the transitions of the input automata of the emptiness and finiteness problems are specified by means of m-automata (rather than c-regular expressions), then the decision algorithms run in polynomial time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. The first two statements follow directly from the fact that a DAG language is empty (finite) if and only if its binarized counterpart is empty (finite, respectively). Furthermore, if the m-automata that specify the transitions of M are given, then the DAG automaton M' discussed earlier can be obtained from M in polynomial time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "To see that the path languages are regular, consider some D \u2208 [[M]] and D' \u2208 B(D). Intuitively, every path in D is represented by one in D' such that, whenever the original path passes a node, the corresponding path in D' enters the chain in Figure 19 through one of the edges coming from the right and leaves it through one of the outgoing edges going to the right. However, in D' a path can start (and end) at any such chain, since each contains a root (and a leaf), even if the node it represents is an internal node of D. Fortunately, the desired paths can easily be singled out, because a chain represents a root if it starts with \u03c3_{0,1} \u03c3_{1,1} and a leaf if it ends with \u03c3_{1,1} \u03c3_{1,0}.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 66, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 240, |
| "end": 249, |
| "text": "Figure 19", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "This amounts to saying that a string w \u2208 \u03a3* is a path in D if and only if there is a path w' in D' that satisfies the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "(a)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "w' has a prefix of the form \u03c3_{0,1} \u03c3_{1,1}, (b) w' has a suffix of the form \u03c3_{1,1} \u03c3_{1,0}, and (c) w is obtained from w' by applying the homomorphism that replaces each \u03c3_{1,1} by \u03c3 and erases all other symbols.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, because the path language of [[M']] is regular, so is that of [[M]]: the latter is obtained from the former by intersecting with two regular languages and applying a homomorphism.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 40, |
| "text": "[[M ]", |
| "ref_id": null |
| }, |
| { |
| "start": 69, |
| "end": 73, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Let us now consider the intersection of a hyperedge replacement language with [[M]]. Assume for simplicity that the given HRG G is \"normalized\" in the sense that every right-hand side either contains no terminal edges at all, or consists only of the nodes in the left-hand side and terminal edges. We sketch briefly how G can be turned into an HRG G' that generates [[G]] \u2229 [[M]], hoping that the interested reader will be able to work out the details by herself.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 82, |
| "text": "[[M]", |
| "ref_id": null |
| }, |
| { |
| "start": 365, |
| "end": 369, |
| "text": "[[G]", |
| "ref_id": null |
| }, |
| { |
| "start": 374, |
| "end": 378, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly to Section 4.2, the idea is to use a Bar-Hillel-like construction. However, the construction of Section 4.2 has to be generalized slightly because now the degree of nodes in the graphs in [[M] ] is not a priori bounded anymore.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 202, |
| "text": "[[M]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Recall that in the previous case we annotated each tentacle of a nonterminal hyperedge with two multisets of states. Intuitively, if v is the node the tentacle points to, then this annotation \"guesses\" the states on incoming and outgoing edges that this nonterminal will attach to v. In the extended version, suppose the label of v is \u03c3 and there is a transition \u27e8\u03b1, \u03c3, \u03b2\u27e9, where A = (\u039e, Q, \u03c4, s, \u03c1) and A' = (\u039e', Q, \u03c4', s', \u03c1') are m-automata that implement \u03b1 and \u03b2, respectively. Then the annotation of the tentacle will consist of two pairs of states, ((\u03be_1, \u03be_2), (\u03be'_1, \u03be'_2)) \u2208 \u039e\u00b2 \u00d7 \u039e'\u00b2, representing the \"guess\" that the derivation of this nonterminal hyperedge will eventually attach incoming and outgoing edges to the node that can be assigned states from Q which take A from \u03be_1 to \u03be_2 and A' from \u03be'_1 to \u03be'_2, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "To see how this can be done, consider first a nonterminal rule of the original HRG, and assume that it replaces the nonterminal hyperedge in such a way that, in the right-hand side of the rule, two new nonterminal hyperedges have tentacles to the corresponding node. Then the resulting HRG will contain a version of the rule in which these tentacles carry annotations ((\u03be_1, \u03be), (\u03be'_1, \u03be')) and ((\u03be, \u03be_2), (\u03be', \u03be'_2)), for all possible choices of \u03be \u2208 \u039e and \u03be' \u2208 \u039e'. Similarly, if the right-hand side contains a node that is not in the left-hand side, and that node is attached to, say, a single tentacle, then this tentacle would be annotated with some ((s, \u03be), (s', \u03be')) such that \u03be and \u03be' are final states of A and A', respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, the terminal rules verify the consistency of the nondeterministic guesses. Suppose the original HRG contains a terminal rule L ::= R for the nonterminal in question. For each annotated version L' of L, the modified HRG contains the rule L' ::= R if there exists an assignment of states in Q to the edges in R that is consistent with the annotation of (tentacles in) L'. For example, if the annotation of one of the tentacles is ((\u03be_1, \u03be_2), (\u03be'_1, \u03be'_2)) and the incoming and outgoing edges of the corresponding node in R are assigned the multisets of states Q_in and Q_out, then it must be the case that Q_in takes A from \u03be_1 to \u03be_2 and Q_out takes A' from \u03be'_1 to \u03be'_2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "As mentioned, we leave the details of the construction to the reader. The resulting HRG generates the language [[G] ", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 115, |
| "text": "[[G]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "] \u2229 [[M]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": ", thus showing that the class of hyperedge replacement languages is closed under intersection with extended DAG automata.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 7", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section we present two parsing algorithms for extended DAG automata. The first algorithm is a reduction to the recognition problem of Section 5 for (non-extended) DAG automata, and demonstrates the close relationship between these two problems. The reduction is based on the binarization method presented in Section 6, and involves constructing a binarized DAG D' on the basis of a tree decomposition of the input graph D. The second algorithm operates directly on this tree decomposition, and can be implemented without using Algorithm 2 as a subroutine. The binarization of D into D' uses the techniques presented in Section 6. The automaton M' is also constructed using techniques similar to those of Section 6, as described in detail here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "Let Q be the state set of M and let Q_A = Q \u222a Q', where Q' = {q' | q \u2208 Q} is a set of fresh copies of the states in Q. We compile all the transitions on an input symbol \u03c3 \u2208 \u03a3 into a single m-automaton A_\u03c3 over Q_A, using the states in Q and Q' to distinguish between incoming and outgoing edges. Suppose that \u27e8\u03b1_1, \u03c3, \u03b2_1\u27e9, . . . , \u27e8\u03b1_n, \u03c3, \u03b2_n\u27e9 are the transitions of M on \u03c3. Let \u03b2'_\u2113, 1 \u2264 \u2113 \u2264 n, be obtained from \u03b2_\u2113 by replacing each q \u2208 Q with its copy q' \u2208 Q'. Now, let A_\u03c3 = (\u039e_\u03c3, Q_A, \u03c4_\u03c3, s_\u03c3, \u03c1_\u03c3) be an m-automaton such that [[A_\u03c3]]", |
| "cite_spans": [ |
| { |
| "start": 530, |
| "end": 538, |
| "text": "[[A \u03c3 ]]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "= [[\u22c3_{\u2113=1}^{n} (\u03b1_\u2113 \u03b2'_\u2113)]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": ". Recall from Section 6 that all edges of D are copied into D' and that these copied edges are all attached to the leaf nodes of the treelets in D'. The remaining edges of D', that is, those edges that are newly added in the binarization of D, are called D'-auxiliary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "States in M' are symbols in Q or else pairs of states from the m-automata A_\u03c3. States q \u2208 Q are used by M' at edges copied from D. Pairs of states from A_\u03c3 are used by M' at the D'-auxiliary edges. More specifically, consider a node v of D with lab(v) = \u03c3, and the corresponding treelet T_v in D'. Let e be some D'-auxiliary edge in T_v with target node u, and let E be the set of all edges copied from D that are attached to the leaves of the sub-treelet of T_v rooted at node u; see Figure 20. A pair (i, j) with i, j \u2208 \u039e_\u03c3 is used by M' at e to indicate that A_\u03c3 can process the multiset of symbols from Q_A assigned to the edges from E by starting in state i and ending in state j.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 483, |
| "end": 492, |
| "text": "Figure 20", |
| "ref_id": "FIGREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "We now specify in detail the transitions of M', which correspond one-to-one to the rules defined in Section 6.3. Let \u03c3 \u2208 \u03a3 be an input symbol of M. The first two rules here apply at the root of a treelet T_v derived from a vertex v of D labeled with \u03c3, and stipulate initial and final states of some computation in A_\u03c3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "\u2200 i \u2208 \u039e_\u03c3 : \u2205 \u2212\u03c3/\u03c1_\u03c3(i)\u2192 {(s_\u03c3, i)} (8) ; \u2200 i, j \u2208 \u039e_\u03c3 : \u2205 \u2212\u03c3/\u03c1_\u03c3(j)\u2192 {(s_\u03c3, i), (i, j)} (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognition", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "Set of edges E copied from D and attached to the leaves of the sub-tree of treelet T_v rooted at node u.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "The next two rules apply at nodes that are internal to a treelet T v . The unary rule does nothing, simply skipping the node, and the binary rule concatenates two sub-computations in A \u03c3 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200 i, j \u2208 \u039e_\u03c3 : {(i, j)} \u2212\u03c3/1\u2192 {(i, j)} (10) ; \u2200 i, j, k \u2208 \u039e_\u03c3 : {(i, k)} \u2212\u03c3/1\u2192 {(i, j), (j, k)}", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, we need transitions to process leaf nodes of the treelets in D', where we need to simulate transitions of A_\u03c3 that process edges copied from the original DAG D. Recall that these copied edges are labeled with states in Q, whereas in A_\u03c3 states from Q' are indicative of outgoing edges. Thus, we use the following transitions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200 q \u2208 Q, i, j \u2208 \u039e_\u03c3 : {(i, j), q} \u2212\u03c3/\u03c4_\u03c3(i, q, j)\u2192 \u2205 (12) ; \u2200 q \u2208 Q, i, j \u2208 \u039e_\u03c3 : {(i, j)} \u2212\u03c3/\u03c4_\u03c3(i, q', j)\u2192 {q}", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "The following transitions handle the special case of a treelet consisting of a single node:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200 q \u2208 Q, i \u2208 \u039e_\u03c3 : {q} \u2212\u03c3/\u03c4_\u03c3(s_\u03c3, q, i) \u2297 \u03c1_\u03c3(i)\u2192 \u2205 (14) ; \u2200 q \u2208 Q, i \u2208 \u039e_\u03c3 : \u2205 \u2212\u03c3/\u03c4_\u03c3(s_\u03c3, q', i) \u2297 \u03c1_\u03c3(i)\u2192 {q}", |
| "eq_num": "(15)" |
| } |
| ], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "Once M' has been constructed from M, we can process each input DAG D by converting it into a binary DAG D' and then running M' on D' using Algorithm 2. Thus, as with the algorithm of Section 6 for non-extended DAGs, the running time is exponential in the treewidth of the input DAG, linear in the total size of the input DAG, and, for a fixed treewidth, polynomial in the number of states of the extended automaton and in the number of states of the largest m-automaton A_\u03c3 for the transitions on a single input symbol \u03c3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "7.4.2 Direct Recognition. We now present an alternative algorithm for processing D according to the extended automaton M. The algorithm uses the same ideas as before, but works directly on D and M, without any preprocessing (binarization). Thus the alternative algorithm avoids the overhead of compiling (or computing on the fly) the large number of (non-extended) rules defined in the previous subsection. Let T be a tree decomposition of DAG D. Recall from Definition 2 that a node b of T is called a bag, and cont(b), the label of b, is a set of nodes of D. In what follows, we assume our tree decompositions are in a canonical form introduced by Cygan et al. (2011) . A tree decomposition T is nice if every bag b of T is of one of the following types: a leaf bag, whose contents are empty; an introduce-vertex bag, which adds one node to the contents of its only child; an introduce-edge bag, which has the same contents as its only child and introduces one edge of D whose endpoints it contains, every edge being introduced exactly once; a forget bag, which removes one node from the contents of its only child; or a join bag, which has two children whose contents equal its own. Similarly to Section 7.4.1, we compile all transitions of M on an input symbol \u03c3 \u2208 \u03a3 into a single m-automaton A_\u03c3 = (\u039e_\u03c3, Q_A, \u03c4_\u03c3, s_\u03c3, \u03c1_\u03c3). Let \u039e = \u22c3_{\u03c3\u2208\u03a3} \u039e_\u03c3. In what follows, given a bag b of T, we denote by \u03a6(b) the set of all functions \u03c6 : cont(b) \u2192 \u039e \u00d7 \u039e such that, if v \u2208 cont(b) with lab(v) = \u03c3, then \u03c6(v) \u2208 \u039e_\u03c3 \u00d7 \u039e_\u03c3. In words, function \u03c6 assigns a pair of states from A_\u03c3 to each node v of D in a bag b, where \u03c3 is the label of v. Similarly to Section 7.4.1, the intended meaning of a pair \u03c6(v) = (i, j) is as follows. Let v be some node in cont(b) and let T' be the subtree of T rooted at b. Let also E be the set of all edges of D that are introduced by bags from T'. When processing the edges in E that are attached to v, A_\u03c3 can begin in state i and end in state j. For compactness, we use the notation v \u2192 ij for the map \u03c6 : {v} \u2192 {(i, j)} such that \u03c6(v) = (i, j). Let \u03c6 \u2208 \u03a6(b) and consider a node v of D (which may or may not be in cont(b)).
We then write \u27e8v \u2192 ij, \u03c6\u27e9 to denote the function \u03c6' : cont(b) \u222a {v} \u2192 \u039e \u00d7 \u039e such that \u03c6'(u) = \u03c6(u) for every u \u2208 cont(b) \\ {v}, and \u03c6'(v) = (i, j).", |
| "cite_spans": [ |
| { |
| "start": 664, |
| "end": 683, |
| "text": "Cygan et al. (2011)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "The recognition algorithm (Algorithm 3) processes the input DAG D by visiting its edges in the order in which they appear in a bottom-up walk through the tree decomposition T, computing a partial analysis of M for D. It uses the functions \u03c6 to group into equivalence classes all partial analyses that share the same assignment of pairs of states of the appropriate A_\u03c3 to the nodes in cont(b), and it uses dynamic programming to compute the overall weight of the computations in the same equivalence class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "The algorithm maintains a chart with entries chart_b[\u03c6] \u2208 K, for each bag b and each \u03c6 \u2208 \u03a6(b). Thus, if m is again the size of the largest \u039e_\u03c3, chart_b has at most m^{2|cont(b)|} entries and can be thought of as an order-2|cont(b)| tensor. Each entry chart_b[\u03c6] is the total weight of derivations of the processed part of the graph where, if v \u2208 cont(b) and \u03c6(v) = (i, j), the m-automaton processing the incident edges of v starts in state i and stops in state j. If b introduces an edge with source v (labeled \u03c3) and target v' (labeled \u03c3'), and has child bag b_1, then: for i, k \u2208 \u039e_\u03c3, i', k' \u2208 \u039e_{\u03c3'}, and \u03c6 \u2208 \u03a6(b) do", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "chart_b[\u27e8v \u2192 ik, v' \u2192 i'k', \u03c6\u27e9] = \u2295_{q \u2208 Q} \u2295_{j \u2208 \u039e_\u03c3, j' \u2208 \u039e_{\u03c3'}} chart_{b_1}[\u27e8v \u2192 ij, v' \u2192 i'j', \u03c6\u27e9] \u2297 \u03c4_\u03c3(j, q, k) \u2297 \u03c4_{\u03c3'}(j', q', k')", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "10:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "else if b forgets node v with lab(v) = \u03c3 then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "for \u03c6 \u2208 \u03a6(b) do", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "chart_b[\u03c6] = \u2295_{j \u2208 \u039e_\u03c3} chart_{b_1}[\u27e8v \u2192 s_\u03c3 j, \u03c6\u27e9] \u2297 \u03c1_\u03c3(j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "else if b is a join bag with cont(b) = {v_1, . . . , v_\u2113} and lab(v_p) = \u03c3_p (1 \u2264 p \u2264 \u2113) then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "for i_1, k_1 \u2208 \u039e_{\u03c3_1}, . . . , i_\u2113, k_\u2113 \u2208 \u039e_{\u03c3_\u2113} do: chart_b[v_1 \u2192 i_1 k_1, . . . , v_\u2113 \u2192 i_\u2113 k_\u2113] = \u2295_{j_p \u2208 \u039e_{\u03c3_p}} chart_{b_1}[v_1 \u2192 i_1 j_1, . . . , v_\u2113 \u2192 i_\u2113 j_\u2113] \u2297 chart_{b_2}[v_1 \u2192 j_1 k_1, . . . , v_\u2113 \u2192 j_\u2113 k_\u2113]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "Computational Analysis. The processing of a join bag in the algorithm takes time m^{3(tw(D)+1)} because it iterates over triples of states i_h, j_h, k_h for each of the w nodes in the join bag, where w can be as large as tw(D) + 1. The processing of a bag that introduces an edge involves iterating over m^{2(tw(D)\u22121)} values of \u03c6, m^6 values of i, i', j, j', k, and k', and |Q| values of q, for a total time of |Q| m^{2(tw(D)+2)}. The other types of bags result in strictly lower complexities, giving a total running time of:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "O(|E_D| (|Q| m^{2(tw(D)+2)} + m^{3(tw(D)+1)}))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "This bound is slightly tighter than Equation 16, though similar qualitatively: The running time is exponential in the treewidth of the input DAG, linear in the total size of the input DAG, and polynomial in the number of states of the extended automaton and the transition automata.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "We have aimed to develop a formalism for DAG automata that lends itself to efficient algorithms for processing semantic graphs such as Abstract Meaning Representations. In particular, motivated by the success of finite-state methods in natural language processing, we have tried to develop a graph analog of standard finite-state automata for strings. The resulting formalism, despite having a straightforward and intuitive definition, differs from previously developed formalisms including those of Kamimura and Slutzki (1981) , Charatonik (1999) , Priese (2007) , and Quernheim and Knight (2012).", |
| "cite_spans": [ |
| { |
| "start": 500, |
| "end": 527, |
| "text": "Kamimura and Slutzki (1981)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 530, |
| "end": 547, |
| "text": "Charatonik (1999)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 550, |
| "end": 563, |
| "text": "Priese (2007)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "We have shown that our choice of definitions allows a number of desirable properties to carry over from finite-state automata for strings, including the regularity of path languages, the polynomial decidability of emptiness and finiteness, and the ability to intersect with hyperedge replacement grammars, which can be viewed as a graph analog of context-free grammars. However, recognition in general for our formalism remains an NP-complete problem, a major difference from finite-state automata for strings. Motivated by the need for practical algorithms, we study the complexity of this problem in detail. Whereas most previous theoretical work on graph automata deals with general complexity classes such as decidability or NP-completeness, we develop more specific asymptotic complexity results with respect to a number of parameters of the input problem. Our binarization technique allows recognition in time exponential in the treewidth of the input graph. This is a major improvement over the na\u00efve strategy, which is exponential in the treewidth of the line graph of the input graph, which itself is at least the degree of the input graph. For semantic representation from the AMR Bank, the maximum treewidth is 4, and the maximum degree is 17. This indicates that the binarization technique is essential to making recognition practical.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "Finally, we show how to extend our formalism to DAGs of unbounded degree, which is necessary for handling natural language phenomena such as coreference and optional modifiers. We show that our algorithms and complexity results apply essentially unchanged in this extended setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "Real-world systems based on our formalism will have to address a number of problems not touched upon in this article, including determining the appropriate set of states and node labels for a particular application. Another avenue for future work is the possibility of rules that process a larger fragment of the input DAG in one transition, as with \"extended\" rules for tree automata (Maletti et al. 2009) . Finally, while we have studied recognition with DAG automata, the development of formalisms for transducers between DAGs and either strings, trees, DAGs, or even general graphs, remains an important area for future work. this repeatedly yields a tree decomposition which is a binary tree such that every leaf is of the form b(e) for one or more edges e (but not all b(e) need to be leaves).", |
| "cite_spans": [ |
| { |
| "start": 385, |
| "end": 406, |
| "text": "(Maletti et al. 2009)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "Finally, for every bag b such that there are (pairwise distinct) edges e 1 , . . . , e with b(e 1 ) = \u2022 \u2022 \u2022 = b(e ) = b, add a comb whose spine consists of \u2212 1 bags with the same contents as b, and whose leaves are bags b 1 , . . . , b with cont(b i ) = {src(e i ), tar(e i )}. Now define edg(b i ) = e i for i = 1, . . . , . Obviously, the width of the tree decomposition stays the same, and now the mapping b \u2192 edg(b) is a bijection between the leaves of the tree decomposition and the edges of G. We also have that cont(b) = {src(edg(b)), tar(edg(b))} for every bag b which is a leaf, as required. To see that the size of the resulting tree decomposition is linear in the number of edges of G, it suffices to notice that the first step doubles the size of T in the worst case, the second step reduces its size, and the third step adds at most two bags for each edge of G. This completes the proof.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
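The comb construction can be sketched in code. This is a hedged illustration, not the authors' implementation: the dict-based bag representation and the name make_comb are our assumptions.

```python
# Sketch of the comb construction: for a bag b responsible for edges
# e_1, ..., e_l, build a spine of l - 1 copies of b whose leaves
# b_1, ..., b_l each contain exactly the endpoints of one edge.

def make_comb(content, edges):
    """content: the graph nodes in bag b; edges: list of (src, tar) pairs,
    each a subset of content. Returns the comb subtree to hang below b."""
    leaves = [{"content": frozenset(e), "children": []} for e in edges]
    node = leaves[-1]  # with a single edge, the comb is just the leaf b_1
    for leaf in reversed(leaves[:-1]):
        # each spine bag is a copy of b, so the width does not increase
        node = {"content": frozenset(content), "children": [leaf, node]}
    return node
```

For three edges this yields two spine bags and three leaves, matching the l - 1 spine bags and l leaves of the text, and every new bag's content is either a copy of b or a two-element edge set, so the width is unchanged.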
| { |
| "text": "http://www.isi.edu/\u223culf/amr/help/amr-guidelines.pdf. 2 The first release is LDC catalog number LDC2014T12; we are grateful to ISI for providing us with an internal release that is somewhat larger than the first release.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "LDC catalog number LDC2014T12.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The DAG D used inFigure 16already happens to be binary, but this does not affect the construction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "D T would not even have cycles if D did, because every cycle would have to enter some T v through one leaf and exit it through another, which is impossible because T v is a directed tree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Droste and Gastin (1999) talk about weighted mc-rational languages, where the m stands for an additional constraint needed in their more general case. In our case, c-regular and mc-regular are equivalent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Both here and in the remaining items all weights and final weights not explicitly mentioned carry over from A 1 and A 2 , respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We are grateful to the anonymous reviewers for their useful suggestions and to Sorcha Gilroy and Parker Riley for comments on drafts of this article. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "We provide an explicit proof for the fact, mentioned in Section 6.2, that tree decompositions can efficiently be transformed into binary tree decompositions of the same width.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix A. Binary Tree Decomposition", |
| "sec_num": null |
| }, |
| { |
| "text": "Every tree decomposition of a graph G without isolated nodes can in linear time be transformed into a binary tree decomposition of G of the same width and of size linear in the number of edges of G.Proof. Let T be a tree decomposition of G of width k. As shown by Kloks (1994, Lemma 2.2.5) , it may be assumed without loss of generality that the size of T is at most the number of nodes of G. Clearly, the resulting tree T is still a tree decomposition of width k. Repeating this step will eventually result in a tree decomposition in which every bag has at most two children.Next, assign to every edge e of G a unique bag b(e) such that {src(r), tar(e)} \u2286 cont(b(e)). Any leaf b such that b = b(e) for all edges e can be removed from the tree, because the nodes in cont(b) are not isolated and are thus contained in other bags. Doing", |
| "cite_spans": [ |
| { |
| "start": 264, |
| "end": 289, |
| "text": "Kloks (1994, Lemma 2.2.5)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 8", |
| "sec_num": null |
| } |
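The chain-splitting step of the proof above can be sketched as follows. This is a hedged, minimal illustration under an assumed Bag representation, not code from the paper.

```python
# Minimal sketch of the binarization step from the proof: any bag with more
# than two children is replaced by a chain of copies of itself, each with at
# most two children, leaving the width of the decomposition unchanged.

class Bag:
    def __init__(self, content, children=None):
        self.content = frozenset(content)  # graph nodes covered by this bag
        self.children = list(children or [])

def binarize(bag):
    """Return a tree decomposition in which every bag has at most two
    children, with the same width as the input."""
    bag.children = [binarize(c) for c in bag.children]
    cur = bag
    while len(cur.children) > 2:
        # keep one child here; push the rest under a fresh copy of this bag
        rest = Bag(cur.content, cur.children[1:])
        cur.children = [cur.children[0], rest]
        cur = rest
    return bag

def width(bag):
    return max([len(bag.content) - 1] + [width(c) for c in bag.children])
```

A root with four leaf children becomes a chain of two fresh copies of the root, each bag ending up with at most two children; the width is untouched because every new bag duplicates the content of an existing one.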
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "On the membership problem for regular DNLC grammars", |
| "authors": [ |
| { |
| "first": "Ijsbrand", |
| "middle": [ |
| "J" |
| ], |
| "last": "Aalbersberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrzej", |
| "middle": [], |
| "last": "Grzegorz Rozenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ehrenfeucht", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Discrete Applied Mathematics", |
| "volume": "13", |
| "issue": "", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aalbersberg, IJsbrand J., Grzegorz Rozenberg, and Andrzej Ehrenfeucht. 1986. On the membership problem for regular DNLC grammars. Discrete Applied Mathematics, 13:79-85.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Universal conceptual cognitive annotation (UCCA)", |
| "authors": [ |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "228--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abend, Omri and Ari Rappoport. 2013. Universal conceptual cognitive annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228-238, Sofia.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Relating probabilistic grammars and automata", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Abney", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcallester", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Conference of the Association for Computational Linguistics (ACL-99)", |
| "volume": "", |
| "issue": "", |
| "pages": "542--549", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abney, Steven, David McAllester, and Fernando Pereira. 1999. Relating probabilistic grammars and automata. In Proceedings of the 37th Annual Conference of the Association for Computational Linguistics (ACL-99), pages 542-549, College Park, MD.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Theory of Parsing, Translation and Compiling", |
| "authors": [ |
| { |
| "first": "Alfred", |
| "middle": [ |
| "V" |
| ], |
| "last": "Aho", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "D" |
| ], |
| "last": "Ullman", |
| "suffix": "" |
| } |
| ], |
| "year": 1972, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aho, Alfred V. and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, NJ.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A unified construction of the Glushkov, follow, and Antimirov automata", |
| "authors": [ |
| { |
| "first": "Cyril", |
| "middle": [], |
| "last": "Allauzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehryar", |
| "middle": [], |
| "last": "Mohri", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of Mathematical Foundations of Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "110--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Allauzen, Cyril and Mehryar Mohri. 2006. A unified construction of the Glushkov, follow, and Antimirov automata. In Proceedings of Mathematical Foundations of Computer Science 2006, pages 110-124, Star\u00e1 Lesn\u00e1.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Closure properties and decision problems of DAG automata", |
| "authors": [ |
| { |
| "first": "Siva", |
| "middle": [], |
| "last": "Anantharaman", |
| "suffix": "" |
| }, |
| { |
| "first": "Paliath", |
| "middle": [], |
| "last": "Narendran", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Rusinowitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Information Processing Letters", |
| "volume": "94", |
| "issue": "5", |
| "pages": "231--240", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anantharaman, Siva, Paliath Narendran, and Michael Rusinowitch. 2005. Closure properties and decision problems of DAG automata. Information Processing Letters, 94(5):231-240.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Complexity of finding embeddings in a k-tree", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Arnborg", |
| "suffix": "" |
| }, |
| { |
| "first": "Derek", |
| "middle": [ |
| "G" |
| ], |
| "last": "Corneil", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrzej", |
| "middle": [], |
| "last": "Proskurowski", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "SIAM Journal of Algebraic and Discrete Methods", |
| "volume": "8", |
| "issue": "", |
| "pages": "277--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arnborg, Stefen, Derek G. Corneil, and Andrzej Proskurowski. 1987. Complexity of finding embeddings in a k-tree. SIAM Journal of Algebraic and Discrete Methods, 8:277-284.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "On formal properties of simple phrase structure grammars", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Banarescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Madalina", |
| "middle": [], |
| "last": "Georgescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 1961, |
| "venue": "Proceedings of the Linguistic Annotation Workshop", |
| "volume": "14", |
| "issue": "", |
| "pages": "143--172", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Banarescu, Laura, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the Linguistic Annotation Workshop, pages 178-186, Sofia. Bar-Hillel, Yehoshua, Micha Perles, and Eli Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift f\u00fcr Phonetik, Sprachwissenschaft und Kommunikationsforschung, 14(2):143-172.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Graph expressions and graph rewriting", |
| "authors": [ |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Bauderon", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Courcelle", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Mathematical Systems Theory", |
| "volume": "20", |
| "issue": "", |
| "pages": "83--127", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bauderon, Michel and Bruno Courcelle. 1987. Graph expressions and graph rewriting. Mathematical Systems Theory, 20:83-127.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes", |
| "authors": [ |
| { |
| "first": "Leonard", |
| "middle": [ |
| "E" |
| ], |
| "last": "Baum", |
| "suffix": "" |
| } |
| ], |
| "year": 1972, |
| "venue": "Inequalities III: Proceedings of the Third Symposium on Inequalities", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baum, Leonard E. 1972. An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes. In Inequalities III: Proceedings of the Third Symposium on Inequalities, pages 1-8, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Between a rock and a hard place-uniform parsing for hyperedge replacement DAG grammars", |
| "authors": [ |
| { |
| "first": "Henrik", |
| "middle": [], |
| "last": "Bj\u00f6rklund", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Drewes", |
| "suffix": "" |
| }, |
| { |
| "first": "Petter", |
| "middle": [], |
| "last": "Ericson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 10th International Conference on Language and Automata Theory and Applications", |
| "volume": "9618", |
| "issue": "", |
| "pages": "521--532", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bj\u00f6rklund, Henrik, Frank Drewes, and Petter Ericson. 2016. Between a rock and a hard place-uniform parsing for hyperedge replacement DAG grammars. In Proceedings of the 10th International Conference on Language and Automata Theory and Applications, volume 9618 of Lecture Notes in Computer Science, pages 521-532, Prague. Blum, Johannes. 2015. DAG automata-variants, languages and properties. Master's thesis, Ume\u00e5 University.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Properties of regular DAG languages", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Blum", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Drewes", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Conference on Language and Automata Theory and Applications", |
| "volume": "9618", |
| "issue": "", |
| "pages": "427--438", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blum, Johannes and Frank Drewes. 2016. Properties of regular DAG languages. In Proceedings of the 10th International Conference on Language and Automata Theory and Applications, volume 9618 of Lecture Notes in Computer Science, pages 427-438, Prague.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The Prague Dependency Treebank: A three-level annotation scenario", |
| "authors": [ |
| { |
| "first": "Alena", |
| "middle": [], |
| "last": "B\u00f6hmov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Haji\u010dov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbora", |
| "middle": [], |
| "last": "Hladk\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Treebanks: Building and Using Parsed Corpora. Kluwer", |
| "volume": "", |
| "issue": "", |
| "pages": "103--127", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B\u00f6hmov\u00e1, Alena, Jan Haji\u010d, Eva Haji\u010dov\u00e1, and Barbora Hladk\u00e1. 2003. The Prague Dependency Treebank: A three-level annotation scenario. In A. Abeill\u00e9, editor, Treebanks: Building and Using Parsed Corpora. Kluwer, pages 103-127.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A Kleene theorem for a class of planar acyclic graphs", |
| "authors": [ |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Bossut", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Dauchet", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Warin", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Mathematical Foundations of Computer Science", |
| "volume": "117", |
| "issue": "", |
| "pages": "251--265", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bossut, Francis, Max Dauchet, and Bruno Warin. 1988. Automata and rational expressions on planar graphs. In Mathematical Foundations of Computer Science 1988, pages 190-200, Carlsbad. Bossut, Francis, Max Dauchet, and Bruno Warin. 1995. A Kleene theorem for a class of planar acyclic graphs. Information and Computation, 117(2):251-265.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Convex Optimization", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Boyd", |
| "suffix": "" |
| }, |
| { |
| "first": "Lieven", |
| "middle": [], |
| "last": "Vandenberghe", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boyd, Stephen and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Word-sense disambiguation using statistical methods", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL-91)", |
| "volume": "", |
| "issue": "", |
| "pages": "264--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mercer. 1991. Word-sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL-91), pages 264-270, Berkeley, CA.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Automata on DAG representations of finite trees. Technical report MPI-I-1999-2-001", |
| "authors": [ |
| { |
| "first": "Witold", |
| "middle": [], |
| "last": "Charatonik", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charatonik, Witold. 1999. Automata on DAG representations of finite trees. Technical report MPI-I-1999-2-001, Max Planck Institute for Informatics, Saarbr\u00fccken, Germany.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Statistical parsing with a context-free grammar and word statistics", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelligence Conference, AAAI 97, IAAI 97", |
| "volume": "", |
| "issue": "", |
| "pages": "598--603", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charniak, Eugene. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelligence Conference, AAAI 97, IAAI 97, pages 598-603, Providence, RI.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Statistical properties of probabilistic context-free grammars", |
| "authors": [ |
| { |
| "first": "Zhiyi", |
| "middle": [], |
| "last": "Chi", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Computational Linguistics", |
| "volume": "25", |
| "issue": "", |
| "pages": "131--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chi, Zhiyi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25:131-160.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Hope and fear for discriminative training of statistical translation models", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "13", |
| "issue": "1", |
| "pages": "1159--1187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chiang, David. 2012. Hope and fear for discriminative training of statistical translation models. Journal of Machine Learning Research, 13(1):1159-1187.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Parsing graphs with hyperedge replacement grammars", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Andreas", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Karl", |
| "middle": [ |
| "Moritz" |
| ], |
| "last": "Hermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Bevan", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-13)", |
| "volume": "", |
| "issue": "", |
| "pages": "924--932", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chiang, David, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-13), pages 924-932, Sofia.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Three generative, lexicalised models for statistical parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "16--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, Michael. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16-23, Madrid.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Prepositional phrase attachment through a backed-off model", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Brooks", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Third Workshop on Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "27--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, Michael and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Proceedings of the Third Workshop on Very Large Corpora, pages 27-38, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Tree Automata Techniques and Applications", |
| "authors": [ |
| { |
| "first": "Hubert", |
| "middle": [], |
| "last": "Comon", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Dauchet", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Gilleron", |
| "suffix": "" |
| }, |
| { |
| "first": "Florent", |
| "middle": [], |
| "last": "Jacquemard", |
| "suffix": "" |
| }, |
| { |
| "first": "Denis", |
| "middle": [], |
| "last": "Lugiez", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Tison", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Tommasi", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Comon, Hubert, Max Dauchet, R\u00e9mi Gilleron, Florent Jacquemard, Denis Lugiez, Sophie Tison, and Marc Tommasi. 2002. Tree Automata Techniques and Applications. Internet publication available at http: //www.grappa.univ-lille3.fr/tata.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation", |
| "authors": [ |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Courcelle", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "85", |
| "issue": "", |
| "pages": "12--75", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Courcelle, Bruno. 1990. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation, 85:12-75.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Solving connectivity problems parameterized by treewidth in single exponential time", |
| "authors": [ |
| { |
| "first": "Marek", |
| "middle": [], |
| "last": "Cygan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jesper", |
| "middle": [], |
| "last": "Nederlof", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Pilipczuk", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha\u0142", |
| "middle": [], |
| "last": "Pilipczuk", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [ |
| "M M" |
| ], |
| "last": "Van Rooij", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub Onufry", |
| "middle": [], |
| "last": "Wojtaszczyk", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cygan, Marek, Jesper Nederlof, Marcin Pilipczuk, Micha\u0142 Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. 2011. Solving connectivity problems parameterized by treewidth in single exponential time. ArXiv:1103.0534.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Handbook of Graph Grammars and Computing by Graph Transformation", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Drewes", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans-J\u00f6rg", |
| "middle": [], |
| "last": "Kreowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Annegret", |
| "middle": [], |
| "last": "Habel", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "95--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Drewes, Frank, Hans-J\u00f6rg Kreowski, and Annegret Habel. 1997. Hyperedge replacement graph grammars. In Grzegorz Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transformation, World Scientific, pages 95-162.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Structurally cyclic Petri nets", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Drewes", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00e9r\u00f4me", |
| "middle": [], |
| "last": "Leroux", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Logical Methods in Computer Science", |
| "volume": "11", |
| "issue": "", |
| "pages": "1--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Drewes, Frank and J\u00e9r\u00f4me Leroux. 2015. Structurally cyclic Petri nets. Logical Methods in Computer Science, 11:1-9.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Weighted automata and logics on graphs", |
| "authors": [ |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Droste", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "D\u00fcck", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 40th International Symposium on Mathematical Foundations of Computer Science (MFCS 2015), Part I", |
| "volume": "9234", |
| "issue": "", |
| "pages": "192--204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Droste, Manfred and Stefan D\u00fcck. 2015. Weighted automata and logics on graphs. In Proceedings of the 40th International Symposium on Mathematical Foundations of Computer Science (MFCS 2015), Part I, volume 9234 of Lecture Notes in Computer Science, pages 192-204, Milan.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "The Kleene-Sch\u00fctzenberger theorem for formal power series in partially commuting variables", |
| "authors": [ |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Droste", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Gastin", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Information and Computation", |
| "volume": "153", |
| "issue": "", |
| "pages": "47--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Droste, Manfred and Paul Gastin. 1999. The Kleene-Sch\u00fctzenberger theorem for formal power series in partially commuting variables. Information and Computation, 153:47-80.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Parameter estimation for probabilistic finite-state transducers", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eisner, Jason. 2002. Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02), pages 1-8, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Decidability issues for Petri nets-a survey", |
| "authors": [ |
| { |
| "first": "Javier", |
| "middle": [], |
| "last": "Esparza", |
| "suffix": "" |
| }, |
| { |
| "first": "Mogens", |
| "middle": [], |
| "last": "Nielsen", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Elektronische Informationsverarbeitung und Kybernetik", |
| "volume": "30", |
| "issue": "", |
| "pages": "143--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Esparza, Javier and Mogens Nielsen. 1994. Decidability issues for Petri nets-a survey. Elektronische Informationsverarbeitung und Kybernetik, 30:143-160.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Improved approximation algorithms for minimum-weight vertex separators", |
| "authors": [ |
| { |
| "first": "Uriel", |
| "middle": [], |
| "last": "Feige", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammadtaghi", |
| "middle": [], |
| "last": "Hajiaghayi", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "STOC '05: Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing", |
| "volume": "", |
| "issue": "", |
| "pages": "563--572", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Feige, Uriel, MohammadTaghi Hajiaghayi, and James R. Lee. 2005. Improved approximation algorithms for minimum-weight vertex separators. In STOC '05: Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, pages 563-572, Baltimore, MD.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A discriminative graph-based parser for the abstract meaning representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Flanigan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "Flanigan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "81", |
| "issue": "", |
| "pages": "832--842", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Flanigan, Jeffrey, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 731-739, San Diego, CA. Flanigan, Jeffrey, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL-14), pages 1426-1436, Baltimore, MD. Frank, Ove and David Strauss. 1986. Markov graphs. Journal of the American Statistical Association, 81(395):832-842.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Query evaluation on compressed trees", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Frick", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Grohe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Koch", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frick, Markus, Martin Grohe, and Christoph Koch. 2003. Query evaluation on compressed trees (extended abstract).", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Recognition of directed acyclic graphs by spanning tree automata", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ottawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Akio", |
| "middle": [], |
| "last": "Fujiyoshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 18th Annual IEEE Symposium on Logic in Computer Science (LICS 2003)", |
| "volume": "411", |
| "issue": "", |
| "pages": "3493--3506", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "In Proceedings of the 18th Annual IEEE Symposium on Logic in Computer Science (LICS 2003), pages 188-197, Ottawa. Fujiyoshi, Akio. 2010. Recognition of directed acyclic graphs by spanning tree automata. Theoretical Computer Science, 411(38-39):3493-3506.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Weighted tree automata and tree transducers", |
| "authors": [ |
| { |
| "first": "Zolt\u00e1n", |
| "middle": [], |
| "last": "F\u00fcl\u00f6p", |
| "suffix": "" |
| }, |
| { |
| "first": "Werner", |
| "middle": [], |
| "last": "Kuich", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "3", |
| "issue": "", |
| "pages": "69--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F\u00fcl\u00f6p, Zolt\u00e1n and Werner Kuich. 2009. Weighted tree automata and tree transducers. In Werner Kuich, Manfred Droste, and Heiko Vogler, editors, Handbook of Weighted Automata. Springer, chapter 3, pages 69-104.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Automatic labeling of semantic roles", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 38th Annual Conference of the Association for Computational Linguistics (ACL-00)", |
| "volume": "", |
| "issue": "", |
| "pages": "512--520", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gildea, Daniel and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In Proceedings of the 38th Annual Conference of the Association for Computational Linguistics (ACL-00), pages 512-520, Hong Kong.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "A complete anytime algorithm for treewidth", |
| "authors": [ |
| { |
| "first": "Vibhav", |
| "middle": [], |
| "last": "Gogate", |
| "suffix": "" |
| }, |
| { |
| "first": "Rina", |
| "middle": [], |
| "last": "Dechter", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Uncertainty in Artificial Intelligence (UAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "201--208", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gogate, Vibhav and Rina Dechter. 2004. A complete anytime algorithm for treewidth. In Uncertainty in Artificial Intelligence (UAI), pages 201-208, Banff.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Semiring parsing. Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "25", |
| "issue": "", |
| "pages": "573--605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Goodman, Joshua. 1999. Semiring parsing. Computational Linguistics, 25(4):573-605.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Hyperedge Replacement: Grammars and Languages", |
| "authors": [ |
| { |
| "first": "Annegret", |
| "middle": [], |
| "last": "Habel", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Lecture Notes in Computer Science", |
| "volume": "643", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Habel, Annegret. 1992. Hyperedge Replacement: Grammars and Languages, volume 643 of Lecture Notes in Computer Science. Springer.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "May we introduce to you: Hyperedge replacement", |
| "authors": [ |
| { |
| "first": "Annegret", |
| "middle": [], |
| "last": "Habel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans-J\u00f6rg", |
| "middle": [], |
| "last": "Kreowski", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Proceedings of the Third International Workshop on Graph Grammars and Their Application to Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "15--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Habel, Annegret and Hans-J\u00f6rg Kreowski. 1987. May we introduce to you: Hyperedge replacement. In Proceedings of the Third International Workshop on Graph Grammars and Their Application to Computer Science, volume of 291 Lecture Notes in Computer Science. pages 15-26, Warrenton, VA", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Introduction to Automata Theory, Languages, and Computation", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hopcroft", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "D" |
| ], |
| "last": "Ullman", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hopcroft, John E. and Jeffrey D. Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Ontonotes: The 90% solution", |
| "authors": [ |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "57--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hovy, Eduard, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57-60, New York, NY.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Bayesian updating in causal probabilistic networks by local computations", |
| "authors": [ |
| { |
| "first": "Finn", |
| "middle": [ |
| "V" |
| ], |
| "last": "Jensen", |
| "suffix": "" |
| }, |
| { |
| "first": "Steffen", |
| "middle": [ |
| "L" |
| ], |
| "last": "Lauritzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristian", |
| "middle": [ |
| "G" |
| ], |
| "last": "Olesen", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Statistics Quarterly", |
| "volume": "4", |
| "issue": "", |
| "pages": "269--282", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jensen, Finn V., Steffen L. Lauritzen, and Kristian G. Olesen. 1990. Bayesian updating in causal probabilistic networks by local computations. Computational Statistics Quarterly, 4:269-282.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Estimators for stochastic \"unification-based\" grammars", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Stuart", |
| "middle": [], |
| "last": "Geman", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Canon", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyi", |
| "middle": [], |
| "last": "Chi", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Conference of the Association for Computational Linguistics (ACL-99)", |
| "volume": "", |
| "issue": "", |
| "pages": "535--541", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johnson, Mark, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification-based\" grammars. In Proceedings of the 37th Annual Conference of the Association for Computational Linguistics (ACL-99), pages 535-541, College Park, MD.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Parallel and two-way automata on directed ordered acyclic graphs", |
| "authors": [ |
| { |
| "first": "Tsutomu", |
| "middle": [], |
| "last": "Kamimura", |
| "suffix": "" |
| }, |
| { |
| "first": "Giora", |
| "middle": [], |
| "last": "Slutzki", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "Information and Control", |
| "volume": "49", |
| "issue": "", |
| "pages": "10--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kamimura, Tsutomu and Giora Slutzki. 1981. Parallel and two-way automata on directed ordered acyclic graphs. Information and Control, 49:10-51.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Finite automata on directed graphs", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kaminski", |
| "suffix": "" |
| }, |
| { |
| "first": "Shlomit", |
| "middle": [ |
| "S" |
| ], |
| "last": "Pinter", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Journal of Computer and System Sciences", |
| "volume": "44", |
| "issue": "", |
| "pages": "425--446", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaminski, Michael and Shlomit S. Pinter. 1992. Finite automata on directed graphs. Journal of Computer and System Sciences, 44:425-446.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Lexical-Functional Grammar: A formal system for grammatical representation", |
| "authors": [ |
| { |
| "first": "Ronald", |
| "middle": [ |
| "M" |
| ], |
| "last": "Kaplan", |
| "suffix": "" |
| }, |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Bresnan", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "173--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaplan, Ronald M. and Joan Bresnan. 1982. Lexical-Functional Grammar: A formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA, pages 173-281.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Treewidth. Computations and Approximations", |
| "authors": [ |
| { |
| "first": "Ton", |
| "middle": [], |
| "last": "Kloks", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "842", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kloks, Ton. 1994. Treewidth. Computations and Approximations, volume 842 of Lecture Notes in Computer Science. Springer.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Machine Learning: Proceedings of the Eighteenth International Conference (ICML 2001)", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Machine Learning: Proceedings of the Eighteenth International Conference (ICML 2001), pages 282-289, Stanford, CA.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Regular right part grammars and their parsers", |
| "authors": [ |
| { |
| "first": "Wilf", |
| "middle": [ |
| "R" |
| ], |
| "last": "Lalonde", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Communications of the Association for Computing Machinery", |
| "volume": "20", |
| "issue": "10", |
| "pages": "731--741", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "LaLonde, Wilf R. 1977. Regular right part grammars and their parsers. Communications of the Association for Computing Machinery, 20(10): 731-741.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "String grammars with disconnecting or a basic root of the difficulty in graph grammar parsing", |
| "authors": [ |
| { |
| "first": "Klaus-J\u00f6rn", |
| "middle": [], |
| "last": "Lange", |
| "suffix": "" |
| }, |
| { |
| "first": "Emo", |
| "middle": [], |
| "last": "Welzl", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Discrete Applied Mathematics", |
| "volume": "16", |
| "issue": "", |
| "pages": "17--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lange, Klaus J\u00f6rn and Emo Welzl. 1987. String grammars with disconnecting or a basic root of the difficulty in graph grammar parsing. Discrete Applied Mathematics, 16:17-30.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "The estimation of stochastic context-free grammars using the Inside-Outside algorithm", |
| "authors": [ |
| { |
| "first": "Kamran", |
| "middle": [], |
| "last": "Lari", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [ |
| "J" |
| ], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computer Speech and Language", |
| "volume": "4", |
| "issue": "", |
| "pages": "35--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lari, Kamran and Steve J. Young. 1990. The estimation of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language, 4:35-56.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "The complexity of graph languages generated by hyperedge replacement", |
| "authors": [ |
| { |
| "first": "Clemens", |
| "middle": [], |
| "last": "Lautemann", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Acta Informatica", |
| "volume": "27", |
| "issue": "", |
| "pages": "399--421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lautemann, Clemens. 1990. The complexity of graph languages generated by hyperedge replacement. Acta Informatica, 27:399-421.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Improving event detection with abstract meaning representation", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Thien", |
| "middle": [], |
| "last": "Huu Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the First Workshop on Computing News Storylines", |
| "volume": "", |
| "issue": "", |
| "pages": "11--15", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Xiang, Thien Huu Nguyen, Kai Cao, and Ralph Grishman. 2015. Improving event detection with abstract meaning representation. In Proceedings of the First Workshop on Computing News Storylines, pages 11-15, Beijing.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Toward abstractive summarization using semantic representations", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Flanigan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Norman", |
| "middle": [], |
| "last": "Sadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1077--1086", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liu, Fei, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1077-1086, Denver, CO.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "The complexity of tree automata and XPath on grammar-compressed trees", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Lohrey", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Maneth", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Theoretical Computer Science", |
| "volume": "363", |
| "issue": "", |
| "pages": "196--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lohrey, Markus and Sebastian Maneth. 2006. The complexity of tree automata and XPath on grammar-compressed trees. Theoretical Computer Science, 363:196-210.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "The power of extended top-down tree transducers", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Maletti", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Graehl", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hopkins", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "SIAM Journal on Computing", |
| "volume": "39", |
| "issue": "2", |
| "pages": "410--430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maletti, Andreas, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended top-down tree transducers. SIAM Journal on Computing, 39(2):410-430.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcus, Mitchell P., Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Semeval-2016 task 8: Meaning representation parsing", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1063--1073", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "May, Jonathan. 2016. Semeval-2016 task 8: Meaning representation parsing. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1063-1073, San Diego, CA.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Regular expressions and state graphs for automata", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Mcnaughton", |
| "suffix": "" |
| }, |
| { |
| "first": "Hisao", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| } |
| ], |
| "year": 1960, |
| "venue": "IRE Transactions on Electronic Computers, EC", |
| "volume": "9", |
| "issue": "", |
| "pages": "39--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McNaughton, Robert and Hisao Yamada. 1960. Regular expressions and state graphs for automata. IRE Transactions on Electronic Computers, EC-9:39-47.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Learning for semantic parsing", |
| "authors": [ |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics and Intelligent Text Processing: Proceedings of the 8th International Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "311--324", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mooney, Raymond J. 2007. Learning for semantic parsing. In Computational Linguistics and Intelligent Text Processing: Proceedings of the 8th International Conference (CICLing 2007), pages 311-324, Mexico City.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Regular behaviour of concurrent systems", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Ochma\u0144ski", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Bulletin of the European Association for Theoretical Computer Science", |
| "volume": "27", |
| "issue": "", |
| "pages": "56--67", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ochma\u0144ski, Edward. 1985. Regular behaviour of concurrent systems. Bulletin of the European Association for Theoretical Computer Science, 27:56-67.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Broad-coverage semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvie", |
| "middle": [], |
| "last": "Cinkova", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Hajic", |
| "suffix": "" |
| }, |
| { |
| "first": "Zdenka", |
| "middle": [], |
| "last": "Uresova", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)", |
| "volume": "", |
| "issue": "", |
| "pages": "915--926", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oepen, Stephan, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 Task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915-926, Denver, CO.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Semeval 2014 task 8: Broad-coverage semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Hajic", |
| "suffix": "" |
| }, |
| { |
| "first": "Angelina", |
| "middle": [], |
| "last": "Ivanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "63--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oepen, Stephan, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63-72, Dublin.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "Unsupervised entity linking with abstract meaning representation", |
| "authors": [ |
| { |
| "first": "Xiaoman", |
| "middle": [], |
| "last": "Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Cassidy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1130--1139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oepen, Stephan and Jan Tore L\u00f8nning. 2006. Discriminant-based MRS banking. In International Conference on Language Resources and Evaluation (LREC), pages 1250-1255, Genoa. Pan, Xiaoman, Taylor Cassidy, Ulf Hermjakob, Heng Ji, and Kevin Knight. 2015. Unsupervised entity linking with abstract meaning representation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1130-1139, Denver, CO.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "A synchronous hyperedge replacement grammar based approach for AMR parsing", |
| "authors": [ |
| { |
| "first": "Xiaochang", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Linfeng", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "32--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peng, Xiaochang, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 32-41, Beijing.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "Learning accurate, compact, and interpretable tree annotation", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Leon", |
| "middle": [], |
| "last": "Barrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Romain", |
| "middle": [], |
| "last": "Thibaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "433--440", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Petrov, Slav, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440, Sydney.", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "Head-Driven Phrase Structure Grammar", |
| "authors": [ |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pollard, Carl and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago.", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "Nondeterminism versus determinism of finite automata over directed acyclic graphs", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Potthoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Seibert", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Bulletin of the Belgian Mathematical Society -Simon Stevin", |
| "volume": "1", |
| "issue": "", |
| "pages": "285--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Potthoff, Andreas, Sebastian Seibert, and Wolfgang Thomas. 1994. Nondeterminism versus determinism of finite automata over directed acyclic graphs. Bulletin of the Belgian Mathematical Society -Simon Stevin, 1:285-298.", |
| "links": null |
| }, |
| "BIBREF73": { |
| "ref_id": "b73", |
| "title": "Towards probabilistic acceptors and transducers for feature structures", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Quernheim", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST)", |
| "volume": "", |
| "issue": "", |
| "pages": "76--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "In Proceedings of the 11th International Conference on Developments in Language Theory (DLT 2007), volume 4588 of Lecture Notes in Computer Science. pages 346-360, Turku. Quattoni, Ariadna, Michael Collins, and Trevor Darrell. 2004. Conditional random fields for object recognition. In Advances in Neural Information Processing Systems (NIPS-17), pages 1097-1104, Vancouver. Quernheim, Daniel and Kevin Knight. 2012. Towards probabilistic acceptors and transducers for feature structures. In Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST), pages 76-85, Jeju Island.", |
| "links": null |
| }, |
| "BIBREF74": { |
| "ref_id": "b74", |
| "title": "Text chunking using transformation-based learning", |
| "authors": [ |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitch", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Third Workshop on Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "82--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramshaw, Lance and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the Third Workshop on Very Large Corpora, pages 82-94, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF75": { |
| "ref_id": "b75", |
| "title": "A maximum entropy model for part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "133--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ratnaparkhi, Adwait. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 133-142, Philadelphia, PA. Reutenauer, Christophe. 1990. The Mathematics of Petri Nets. Prentice-Hall.", |
| "links": null |
| }, |
| "BIBREF76": { |
| "ref_id": "b76", |
| "title": "Triangulated graphs and the elimination process", |
| "authors": [ |
| { |
| "first": "Donald", |
| "middle": [ |
| "J" |
| ], |
| "last": "Rose", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "Journal of Mathematical Analysis and Applications", |
| "volume": "32", |
| "issue": "3", |
| "pages": "597--609", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rose, Donald J. 1970. Triangulated graphs and the elimination process. Journal of Mathematical Analysis and Applications, 32(3):597-609.", |
| "links": null |
| }, |
| "BIBREF77": { |
| "ref_id": "b77", |
| "title": "MIX is a 2-MCFL and the word problem in Z 2 is solved by a third-order collapsible pushdown automaton", |
| "authors": [ |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Salvati", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Computer and System Sciences", |
| "volume": "81", |
| "issue": "", |
| "pages": "1252--1277", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salvati, Sylvain. 2014. MIX is a 2-MCFL and the word problem in Z 2 is solved by a third-order collapsible pushdown automaton. Journal of Computer and System Sciences, 81:1252-1277.", |
| "links": null |
| }, |
| "BIBREF78": { |
| "ref_id": "b78", |
| "title": "Probability propagation", |
| "authors": [ |
| { |
| "first": "Glenn", |
| "middle": [ |
| "R" |
| ], |
| "last": "Shafer", |
| "suffix": "" |
| }, |
| { |
| "first": "Prakash", |
| "middle": [ |
| "P" |
| ], |
| "last": "Shenoy", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Annals of Mathematics and Artificial Intelligence", |
| "volume": "2", |
| "issue": "", |
| "pages": "327--353", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shafer, Glenn R. and Prakash P. Shenoy. 1990. Probability propagation. Annals of Mathematics and Artificial Intelligence, 2:327-353.", |
| "links": null |
| }, |
| "BIBREF79": { |
| "ref_id": "b79", |
| "title": "Weighted and probabilistic context-free grammars are equally expressive", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "4", |
| "pages": "477--491", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shieber, Stuart M. 1986. An Introduction to Unification-Based Approaches to Grammar. CSLI Publications, Stanford, CA. Smith, Noah A. and Mark Johnson. 2007. Weighted and probabilistic context-free grammars are equally expressive. Computational Linguistics, 33(4):477-491.", |
| "links": null |
| }, |
| "BIBREF80": { |
| "ref_id": "b80", |
| "title": "Markov chain Monte Carlo estimation of exponential random graph models", |
| "authors": [ |
| { |
| "first": "Tom", |
| "middle": [ |
| "A B" |
| ], |
| "last": "Snijders", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Journal of Social Structure", |
| "volume": "3", |
| "issue": "2", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Snijders, Tom A. B. 2002. Markov chain Monte Carlo estimation of exponential random graph models. Journal of Social Structure, 3(2):1-40.", |
| "links": null |
| }, |
| "BIBREF81": { |
| "ref_id": "b81", |
| "title": "A machine learning approach to coreference resolution of noun phrases", |
| "authors": [ |
| { |
| "first": "Wee Meng", |
| "middle": [], |
| "last": "Soon", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel Chung Long", |
| "middle": [], |
| "last": "Lim", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "4", |
| "pages": "521--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Soon, Wee Meng, Hwee Tou Ng, and Daniel Chung Long Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.", |
| "links": null |
| }, |
| "BIBREF82": { |
| "ref_id": "b82", |
| "title": "On logics, tilings, and automata", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 18th International Colloquium on Automata, Languages and Programming", |
| "volume": "", |
| "issue": "", |
| "pages": "441--454", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas, Wolfgang. 1991. On logics, tilings, and automata. In Proceedings of the 18th International Colloquium on Automata, Languages and Programming, pages 441-454, Madrid.", |
| "links": null |
| }, |
| "BIBREF83": { |
| "ref_id": "b83", |
| "title": "Boosting transition-based AMR parsing with refined actions and auxiliary analyzers", |
| "authors": [ |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "857--862", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas, Wolfgang. 1996. Elements of an automata theory over partial orders. In Proceedings of the DIMACS Workshop on Partial Order Methods in Verification, volume 29 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science. pages 25-40, Princeton, NJ. Vijay-Shanker, K., David J. Weir, and Aravind K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Annual Conference of the Association for Computational Linguistics (ACL-87), pages 104-111, Stanford, CA. Wang, Chuan, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 857-862, Beijing.", |
| "links": null |
| }, |
| "BIBREF84": { |
| "ref_id": "b84", |
| "title": "A transition-based algorithm for AMR parsing", |
| "authors": [ |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "366--375", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang, Chuan, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for AMR parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 366-375, Denver, CO.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "The AMR of Figure 1, presented as a directed graph. Reification of a role (edge label) can break cycles.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "Degree distribution of nodes in the AMR Bank (reversed and reified).", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "text": "DAG D (a) and a tree decomposition T (b).", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "num": null, |
| "text": "Example run of a DAG automaton.", |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "num": null, |
| "text": "Hyperedge replacement graph languages are closed under intersection with languages recognized by DAG automata (Section 4.2).", |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "num": null, |
| "text": "By induction on the length of derivations it can be shown that D \u2208 [[G ]] if and only if D \u2208 [[G]] and there exists an accepting run of M on D. In other words, [[G ]] = [[G]] \u2229 [[M]], as required.", |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "num": null, |
| "text": "Petri net with four places and one transition.", |
| "type_str": "figure" |
| }, |
| "FIGREF8": { |
| "uris": null, |
| "num": null, |
| "text": "Example run of the DAG parsing algorithm: (a) the starting DAG D; (b), (c), (d) intermediate DAGs obtained after individual edge contraction; (e) final DAG consisting of a single node.", |
| "type_str": "figure" |
| }, |
| "FIGREF9": { |
| "uris": null, |
| "num": null, |
| "text": "Compute [[M]] (D) by summing up the weights of all runs of M on D.", |
| "type_str": "figure" |
| }, |
| "FIGREF10": { |
| "uris": null, |
| "num": null, |
| "text": "The line graph of a graph D is the hypergraph LG(D) obtained as follows: each edge of D becomes a node of LG(D); conversely, each node of D with incident edges e 1 , . . . , e n becomes a hyperedge attached to e 1 , . . . , e n . A graph D (left) and its line graph LG(D) (right); as in Section 4.2, hyperedges of LG(D) are drawn as squares connected by lines to their attached nodes (which are the edges of D).", |
| "type_str": "figure" |
| }, |
| "FIGREF11": { |
| "uris": null, |
| "num": null, |
| "text": "(a) Source DAG D and (b) binarized DAG D . The edges in D and their counterparts in D are drawn using thick lines.", |
| "type_str": "figure" |
| }, |
| "FIGREF12": { |
| "uris": null, |
| "num": null, |
| "text": "(a) Snapshot of a node labeled a in some DAG D, with two incoming edges assigned states p and q and two outgoing edges assigned states r and s. (b) Snapshot of the binary DAG D obtained from D, showing the treelet associated with the node in (a). As in Figure 14, we use thick lines for edges of D and for their counterparts in D , and we use thin lines for fresh edges in D .", |
| "type_str": "figure" |
| }, |
| "FIGREF13": { |
| "uris": null, |
| "num": null, |
| "text": "T x and T y contain the leaves [x, b] and [y, b], respectively, and we set src D T (e) = [x, b] and tar D T (e) = [y, b].", |
| "type_str": "figure" |
| }, |
| "FIGREF14": { |
| "uris": null, |
| "num": null, |
| "text": "Figure 16. (The figure does not show node labels, however. For example, in the sub-DAG resulting from T x , if lab D (x) = \u03c3 then the label of [x, ], [x, 1], and [x, 2] is \u03c3 and the label of [x, 1.2] and [x, 2.2] is \u03c3.)", |
| "type_str": "figure" |
| }, |
| "FIGREF15": { |
| "uris": null, |
| "num": null, |
| "text": "Binarization of a DAG D along a tree decomposition T (showing nodes instead of node labels), yielding D T . The bottom part illustrates the construction of a tree decomposition of D T in the proof of Theorem 1.", |
| "type_str": "figure" |
| }, |
| "FIGREF16": { |
| "uris": null, |
| "num": null, |
| "text": "The nodes of D T have become hyperedges drawn as boxes and the edges have become nodes drawn as bullets. The node e(x, 1.1), for instance, is the one on top of the hyperedge [x, 1.1].Consider a bag b of T and let D T (b) be the sub-DAG of D T induced by the treelet nodes[v, c] such that c is a descendant of b (or b itself) and v \u2208 cont T (c). Let, furthermore, LG(D T , b) be the subgraph of LG(D T ) having as nodes the edges of D T (b) as well as all edges e(v, b) with v \u2208 cont T (b), and as hyperedges the nodes of D T (b). As an example, Figure 17 (middle) indicates LG(D T , 1) by means of thick lines. In a bottom-up manner, starting at the leaves b of T, we construct a tree decomposition T b of LG(D T , b) of width at most 2(k + 1) such that the root bag of T b contains {e(v, b) | v \u2208 cont T (b)}. (Recall that the bags of T b should contain the edges of D T , because they are the nodes of LG(D T ).) If b is a leaf and cont T (b) = {v 1 , v 2 }, then LG(D T , b) contains the edge e of D covered by b (i.e., the one incident on v 1 and v 2 ), as well as e(v 1 , b) and e(v 2 , b). Thus, we let T b consist of a leaf containing {e, e(v 1 , b), e(v 2 , b)} and a root containing {e(", |
| "type_str": "figure" |
| }, |
| "FIGREF17": { |
| "uris": null, |
| "num": null, |
| "text": "Following this intuition, the transitions of M are specified as follows. Consider a transition I \u03c3 \u2212 \u2192 O of the original DAG automaton M. The roots of D T with label \u03c3 are handled by the following transitions of M : 1. \u2205 \u03c3 \u2212 \u2192 {(I, O)} (these transitions process unary roots of treelets).", |
| "type_str": "figure" |
| }, |
| "FIGREF18": { |
| "uris": null, |
| "num": null, |
| "text": "(these transitions split I and O at binary nodes). Finally, we let M contain the transitions: 5. {p, ({p}, \u2205)} \u03c3 \u2212 \u2192 \u2205 and {(\u2205, {q})} \u03c3 \u2212 \u2192 {q} for all p \u2208 I and all q \u2208 O (these transitions process leaf bags of a treelet, matching the individual state at the edge of D incoming or outgoing at the leaf bag). 6. {p} \u03c3 \u2212 \u2192 \u2205 if I = {p} and O = \u2205, and \u2205 \u03c3 \u2212 \u2192 {q} if I = \u2205 and O = {q} (these transitions handle the special case of treelets consisting of a single node).", |
| "type_str": "figure" |
| }, |
| "FIGREF20": { |
| "uris": null, |
| "num": null, |
| "text": "the weight of c under A is defined as [[A]] (c) = \u2211 \u03c1 c [[A]] (\u03c1 c ), where \u03c1 c ranges over all runs of A on c. Finally, as mentioned, set [[A]] (\u00b5) = [[A]] (c). Using Condition (7) in Definition 5, it is not difficult to show that [[A]] (\u00b5) is unique, that is, this quantity does not depend on the specific order of the symbols in \u00b5 used to create c.", |
| "type_str": "figure" |
| }, |
| "FIGREF21": { |
| "uris": null, |
| "num": null, |
| "text": "created in A gets the product of the final weight of f in A 1 and the weight of s 2 q \u2212 \u2192 j in A 2 . The final weight of f is the product of the final weights of f and s 2 . (e) Similarly to the previous case, the weight of f q \u2212 \u2192 j is the product of the final weight of f and the weight of s 1 q \u2212 \u2192 j in A 1 . The final weight of s 1 becomes 1.", |
| "type_str": "figure" |
| }, |
| "FIGREF22": { |
| "uris": null, |
| "num": null, |
| "text": "If \u03b1 = \u03b2\u03b3, then we can rewrite \u03b1 to the desired form because iq \u03b3 jq (by commutativity of concatenation) which is in the desired form.", |
| "type_str": "figure" |
| }, |
| "FIGREF23": { |
| "uris": null, |
| "num": null, |
| "text": "", |
| "type_str": "figure" |
| }, |
| "FIGREF24": { |
| "uris": null, |
| "num": null, |
| "text": "For each i, form the shuffle product of the A(\u03b1 iq ), according to the second statement of Lemma 1. \u2022 Finally, use the first statement of Lemma 1 to obtain an automaton A(\u03b1). The size bound |A(\u03b1)| \u2264 2 |\u03b1| can be shown by induction on |\u03b1|.", |
| "type_str": "figure" |
| }, |
| "FIGREF25": { |
| "uris": null, |
| "num": null, |
| "text": "If \u03b1 = \u03b5, because we use the McNaughton-Yamada construction, we have |A(\u03b1)| = 1 < 2 |\u03b1| . \u2022 If \u03b1 = q, because we use the McNaughton-Yamada construction, we have |A(\u03b1)| = 2 \u2264 2 |\u03b1| . \u2022 If \u03b1 = \u03b2 * , then |A(\u03b1)| = |A(\u03b2)| \u2264 2 |\u03b2| = 2 |\u03b1| , again by the McNaughton-Yamada construction. \u2022 If \u03b1 = k\u03b2 for k \u2208 K, then |A(\u03b1)| = |A(\u03b2)| \u2264 2 |\u03b2| = 2 |\u03b1| , by our extension of the McNaughton-Yamada construction to the weighted case.", |
| "type_str": "figure" |
| }, |
| "FIGREF26": { |
| "uris": null, |
| "num": null, |
| "text": "every extended DAG automaton M there exists a non-extended DAG automaton M over \u03a3 such that [[M ]] (D) = [[M]] (D) for every DAG D. For this, just define the transition function \u03b4 of M similarly to w \u03c1 : for every symbol \u03c3 \u2208 \u03a3 and all multisets l, r \u2208 M(Q),", |
| "type_str": "figure" |
| }, |
| "FIGREF28": { |
| "uris": null, |
| "num": null, |
| "text": "\u2205 for all final states \u03be of A (similarly to the first item). It should be clear that, indeed, [[M ]] = \u222a D\u2208[[M]] B(D)", |
| "type_str": "figure" |
| }, |
| "FIGREF29": { |
| "uris": null, |
| "num": null, |
| "text": "7.4.1 Reduction to Non-Extended Recognition. Let M be an extended DAG automaton, and let D be an input DAG. Informally, our reduction consists of the following steps: \u2022 encode D into a binary DAG D ; \u2022 transform M into a binary, non-extended DAG automaton M ; \u2022 run M on D using Algorithm 2.", |
| "type_str": "figure" |
| }, |
| "FIGREF30": { |
| "uris": null, |
| "num": null, |
| "text": "Computational Analysis. Recall from Section 5 that Algorithm 2, when run on a non-extended DAG automaton M and a ranked DAG D, has a time complexity of O (|E D | \u2022 |Q| tw(LG(D))+1 ), where Q is the state set of M and tw(LG(D)) is the treewidth of the line graph of D. An upper bound on the number of states of the binarized non-extended automaton M just defined is |Q| + m 2 |\u03a3|, where m = max \u03c3\u2208\u03a3 |\u039e \u03c3 |. This is because states are either states of M or otherwise pairs of states of one of the m-automata A \u03c3 . Furthermore, by Theorem 2, tw(LG(D )) \u2264 2(tw(D) + 1). The binarization construction ensures that |E D | is O (|E D |). Combining these facts, we have that the running time of Algorithm 2 on the binarized DAG D and the binarized automaton M is O (|E D |(|Q| + m 2 |\u03a3|) 2tw(D)+3 ) (16)", |
| "type_str": "figure" |
| }, |
| "FIGREF31": { |
| "uris": null, |
| "num": null, |
| "text": "b has no children and cont(b) = \u2205. \u2022 b has one child b 1 , and b = b 1 \u222a {v} for some node v \u2208 cont(b 1 ). In this case, b is said to introduce v. \u2022 b has one child b 1 , and b \u222a {v} = b 1 for some node v \u2208 cont(b). In this case, b is said to forget v. \u2022 b has one child b 1 with b = b 1 , and b is additionally labeled with an edge e such that src(e), tar(e) \u2208 cont(b). In this case b is said to introduce e. For every edge e, exactly one bag introduces e. \u2022 b has two children b 1 and b 2 , and cont(b) = cont(b 1 ) = cont(b 2 ). In this case b is called a join bag. It can be shown from the procedure of Cygan et al. (2011) for constructing nice tree decompositions that the number of bags in a nice tree decomposition of D is O (|E D |).", |
| "type_str": "figure" |
| }, |
| "FIGREF32": { |
| "uris": null, |
| "num": null, |
| "text": "Direct DAG recognition based on extended automata. 1: for each bag b, bottom-up do introduces node v with lab(v) = \u03c3 then 5:for i \u2208 \u039e \u03c3 and \u03c6 \u2208 \u03a6(b 1 ) do 6:chart b [v \u2192 ii, \u03c6] = chart b 1 [\u03c6] 7:else if b introduces edge e pointing from v to v , lab(v) = \u03c3 and lab(v ) = \u03c3 then 8:", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "and call it the DAG language recognized by M. DAG languages that can be recognized by unweighted DAG automata are recognizable DAG languages. Note that because false is the zero element of the Boolean semiring, all transitions appearing in an unweighted DAG automaton are of the form {q 1 , . . . , q m } \u2192 {r 1 , . . . , r n }. So we can simplify the notation of such a transition by writing", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>\u03c3/true \u2212\u2212\u2212</td></tr></table>" |
| }, |
| "TABREF3": { |
| "text": "b). States of the form (I, O) will be assigned to the fresh edges of the treelet T v .", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td/><td>. . .</td><td/><td/><td>. . .</td></tr><tr><td/><td/><td>p</td><td/><td>q</td></tr><tr><td/><td/><td>r</td><td/><td>s</td></tr><tr><td/><td>. . .</td><td/><td/><td>. . .</td></tr><tr><td/><td/><td/><td>(a)</td></tr><tr><td>. . .</td><td>(\u2205, {r})</td><td>a</td><td colspan=\"2\">({p, q}, {s})</td></tr><tr><td/><td>a</td><td colspan=\"2\">({p, q}, \u2205)</td><td>a</td><td>(\u2205, {s})</td></tr><tr><td/><td>({p}, \u2205)</td><td>a</td><td colspan=\"2\">({q}, \u2205)</td><td>a</td></tr><tr><td/><td>a</td><td/><td/><td>a</td></tr><tr><td>. . .</td><td/><td/><td/></tr></table>" |
| }, |
| "TABREF6": { |
| "text": "Different orderings of the incoming and outgoing edges yield potentially different binarizations. Thus, every DAG D over \u03a3 gives rise to a finite set B(D) of binarized DAGs", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |