{
"paper_id": "P97-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:15:42.779012Z"
},
"title": "The Complexity of Recognition of Linguistically Adequate Dependency Grammars",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Neuhaus",
"suffix": "",
"affiliation": {
"laboratory": "Computational Linguistics Research Group",
"institution": "Freiburg University",
"location": {
"addrLine": "Friedrichstrage 50",
"postCode": "D-79098",
"settlement": "Freiburg",
"country": "Germany"
}
},
"email": "neuhaus@coling.uni-freiburg.de"
},
{
"first": "Norbert",
"middle": [],
"last": "Briiker",
"suffix": "",
"affiliation": {
"laboratory": "Computational Linguistics Research Group",
"institution": "Freiburg University",
"location": {
"addrLine": "Friedrichstrage 50",
"postCode": "D-79098",
"settlement": "Freiburg",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Results of computational complexity exist for a wide range of phrase structure-based grammar formalisms, while there is an apparent lack of such results for dependency-based formalisms. We here adapt a result on the complexity of ID/LP-grammars to the dependency framework. Contrary to previous studies on heavily restricted dependency grammars, we prove that recognition (and thus, parsing) of linguistically adequate dependency grammars is~A/T'-complete.",
"pdf_parse": {
"paper_id": "P97-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "Results of computational complexity exist for a wide range of phrase structure-based grammar formalisms, while there is an apparent lack of such results for dependency-based formalisms. We here adapt a result on the complexity of ID/LP-grammars to the dependency framework. Contrary to previous studies on heavily restricted dependency grammars, we prove that recognition (and thus, parsing) of linguistically adequate dependency grammars is~A/T'-complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The introduction of dependency grammar (DG) into modern linguistics is marked by Tesni~re (1959) . His conception addressed didactic goals and, thus, did not aim at formal precision, but rather at an intuitive understanding of semantically motivated dependency relations. An early formalization was given by Gaifman (1965) , who showed the generative capacity of DG to be (weakly) equivalent to standard context-free grammars. Given this equivalence, interest in DG as a linguistic framework diminished considerably, although many dependency grammarians view Gaifman's conception as an unfortunate one (cf. Section 2). To our knowledge, there has been no other formal study of DG.This is reflected by a recent study (Lombardo & Lesmo, 1996) , which applies the Earley parsing technique (Earley, 1970) to DG, and thereby achieves cubic time complexity for the analysis of DG. In their discussion, Lombardo & Lesmo express their hope that slight increases in generative capacity will correspond to equally slight increases in computational complexity. It is this claim that we challenge here.",
"cite_spans": [
{
"start": 81,
"end": 96,
"text": "Tesni~re (1959)",
"ref_id": "BIBREF21"
},
{
"start": 308,
"end": 322,
"text": "Gaifman (1965)",
"ref_id": "BIBREF4"
},
{
"start": 716,
"end": 740,
"text": "(Lombardo & Lesmo, 1996)",
"ref_id": "BIBREF11"
},
{
"start": 786,
"end": 800,
"text": "(Earley, 1970)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After motivating non-projective analyses for DG, we investigate various variants of DG and identify the separation of dominance and precedence as a major part of current DG theorizing. Thus, no current variant of DG (not even Tesni~re's original formulation) is compatible with Gaifman' s conception, which seems to be motivated by formal considerations only (viz., the proof of equivalence). Section 3 advances our proposal, which cleanly separates dominance and precedence relations. This is illustrated in the fourth section, where we give a simple encoding of an A/P-complete problem in a discontinuous DG. Our proof of A/79-completeness, however, does not rely on discontinuity, but only requires unordered trees. It is adapted from a similar proof for unordered contextfree grammars (UCFGs) by Barton (1985) .",
"cite_spans": [
{
"start": 800,
"end": 813,
"text": "Barton (1985)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The growing interest in the dependency concept (which roughly corresponds to the O-roles of GB, subcategorization in HPSG, and the so-called domain of locality of TAG) again raises the issue whether non-lexical categories are necessary for linguistic analysis. After reviewing several proposals in this section, we argue in the next section that word order --the description of which is the most prominent difference between PSGs and DGs -can adequately be described without reference to nonlexical categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Versions of Dependency Grammar",
"sec_num": "2"
},
{
"text": "Standard PSG trees are projective, i.e., no branches cross when the terminal nodes are projected onto the input string. In contrast to PSG approaches, DG requires non-projective analyses. As DGs are restricted to lexical nodes, one cannot, e.g., describe the so-called unbounded dependencies without giving up projectivity. First, the categorial approach employing partial constituents (Huck, 1988; Hepple, 1990) is not available, since there are no phrasal categories. Second, the coindexing (Haegeman, 1994) or structure-sharing (Pollard & Sag, 1994) approaches are not available, since there are no empty categories.",
"cite_spans": [
{
"start": 386,
"end": 398,
"text": "(Huck, 1988;",
"ref_id": "BIBREF9"
},
{
"start": 399,
"end": 412,
"text": "Hepple, 1990)",
"ref_id": "BIBREF8"
},
{
"start": 493,
"end": 509,
"text": "(Haegeman, 1994)",
"ref_id": "BIBREF6"
},
{
"start": 531,
"end": 552,
"text": "(Pollard & Sag, 1994)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Versions of Dependency Grammar",
"sec_num": "2"
},
{
"text": "Consider the extracted NP in \"Beans, I know John likes\" (cf. also to Fig.1 in Section 3). A projective tree would require \"Beans\" to be connected to either \"I\" or \"know\" -none of which is conceptually directly related to \"Beans\". It is \"likes\" that determines syntactic fea-tures of \"Beans\" and which provides a semantic role for it. The only connection between \"know\" and \"Beans\" is that the finite verb allows the extraction of \"Beans\", thus defining order restrictions for the NP. This has led some DG variants to adopt a general graph structure with multiple heads instead of trees. We will refer to DGs allowing non-projective analyses as discontinuous DGs. Tesni~re (1959) devised a bipartite grammar theory which consists of a dependency component and a translation component (' translation' used in a technical sense denoting a change of category and grammatical function). The dependency component defines four main categories and possible dependencies between them. What is of interest here is that there is no mentioning of order in TesniSre's work. Some practitioneers of DG have allowed word order as a marker for translation, but they do not prohibit non-projective trees. Gaifman (1965) designed his DG entirely analogous to context-free phrase structure grammars. Each word is associated with a category, which functions like the non-terminals in CFG. He then defines the following rule format for dependency grammars:",
"cite_spans": [
{
"start": 663,
"end": 678,
"text": "Tesni~re (1959)",
"ref_id": "BIBREF21"
},
{
"start": 1187,
"end": 1201,
"text": "Gaifman (1965)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 69,
"end": 74,
"text": "Fig.1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Versions of Dependency Grammar",
"sec_num": "2"
},
{
"text": "(1) X (Y,,... , Y~, ,, Y~+I,..., Y,,) This rule states that a word of category X governs words of category Y1,... , Yn which occur in the given order. The head (the word of category X) must occur between the i-th and the (i + 1)-th modifier. The rule can be viewed as an ordered tree of depth one with node labels. Trees are combined through the identification of the root of one tree with a leaf of identical category of another tree. This formalization is restricted to projective trees with a completely specified order of sister nodes. As we have argued above, such a formulation cannot capture semantically motivated dependencies.",
"cite_spans": [
{
"start": 6,
"end": 37,
"text": "(Y,,... , Y~, ,, Y~+I,..., Y,,)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Versions of Dependency Grammar",
"sec_num": "2"
},
{
"text": "Today's DGs differ considerably from Gaifman's conception, and we will very briefly sketch various order descriptions, showing that DGs generally dissociate dominance and precedence by some mechanism. All variants share, however, the rejection of phrasal nodes (although phrasal features are sometimes allowed) and the introduction of edge labels (to distinguish different dependency relations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "Meaning-Text Theory (Mer 5uk, 1988) assumes seven strata of representation. The rules mapping from the unordered dependency trees of surface-syntactic representations onto the annotated lexeme sequences of deepmorphological representations include global ordering rules which allow discontinuities. These rules have not yet been formally specified (Mel' 5uk & Pertsov, 1987, p. 187f) , but see the proposal by Rambow & Joshi (1994) .",
"cite_spans": [
{
"start": 348,
"end": 383,
"text": "(Mel' 5uk & Pertsov, 1987, p. 187f)",
"ref_id": null
},
{
"start": 410,
"end": 431,
"text": "Rambow & Joshi (1994)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "Word Grammar (Hudson, 1990 ) is based on general graphs. The ordering of two linked words is specified together with their dependency relation, as in the proposition \"object of verb succeeds it\". Extraction is analyzed by establishing another dependency, visitor, between the verb and the extractee, which is required to precede the verb, as in \"visitor of verb precedes it\". Resulting inconsistencies, e.g. in case of an extracted object, are not resolved, however.",
"cite_spans": [
{
"start": 13,
"end": 26,
"text": "(Hudson, 1990",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "Lexicase (Starosta, 1988; 1992) employs complex feature structures to represent lexical and syntactic entities. Its word order description is much like that of Word Grammar (at least at some level of abstraction), and shares the above inconsistency.",
"cite_spans": [
{
"start": 9,
"end": 25,
"text": "(Starosta, 1988;",
"ref_id": "BIBREF19"
},
{
"start": 26,
"end": 31,
"text": "1992)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "Dependency Unification Grammar (Hellwig, 1988 ) defines a tree-like data structure for the representation of syntactic analyses. Using morphosyntactic features with special interpretations, a word defines abstract positions into which modifiers are mapped. Partial orderings and even discontinuities can thus be described by allowing a modifier to occupy a position defined by some transitive head. The approach cannot restrict discontinuities properly, however.",
"cite_spans": [
{
"start": 31,
"end": 45,
"text": "(Hellwig, 1988",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "Slot Grammar (McCord, 1990 ) employs a number of rule types, some of which are exclusively concerned with precedence. So-called head/slot and slot/slot ordering rules describe the precedence in projective trees, referring to arbitrary predicates over head and modifiers. Extractions (i.e., discontinuities) are merely handled by a mechanism built into the parser.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "Grammar (McCord, 1990",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "This brief overview of current DG flavors shows that various mechanisms (global rules, general graphs, procedural means) are generally employed to lift the limitation to projective trees. Our own approach presented below improves on these proposals because it allows the lexicalized and declarative formulation of precedence constraints. The necessity of non-projective analyses in DG results from examples like \"Beans, 1 know John likes\" and the restriction to lexical nodes which prohibits gapthreading and other mechanisms tied to phrasal categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Dependency Grammars",
"sec_num": "2.1"
},
{
"text": "We now sketch a minimal DG that incorporates only word classes and word order as descriptional dimensions. The separation of dominance and precedence presented here grew out of our work on German, and retains the local flavor of dependency specification, while at the same time covering arbitrary discontinuities. It is based on a (modal) logic with model-theoretic interpretation, which is presented in more detail in (Br~ker, 1997) .",
"cite_spans": [
{
"start": 419,
"end": 433,
"text": "(Br~ker, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Dependency Grammar with Word Order Domains",
"sec_num": "3"
},
{
"text": "Figure 1: Word order domains in \"Beans, I know John likes\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dependency Grammar with Word Order Domains",
"sec_num": "3"
},
{
"text": "Our initial observation is that DG cannot use binary precedence constraints as PSG does. Since DG analyses are hierarchically flatter, binary precedence constraints result in inconsistencies, as the analyses of Word Grammar and Lexicase illustrate. In PSG, on the other hand, the phrasal hierarchy separates the scope of precedence restrictions. This effect is achieved in our approach by defining word order domains as sets of words, where precedence restrictions apply only to words within the same domain. Each word defines a sequence of order domains, into which the word and its modifiers are placed. Several restrictions are placed on domains. First, the domain sequence must mirror the precedence of the words included, i.e., words in a prior domain must precede all words in a subsequent domain. Second, the order domains must be hierarchically ordered by set inclusion, i.e., be projective. Third, a domain (e.g., dl in Fig.l ) can be constrained to contain at most one partial dependency tree. l We will write singleton domains as \"_\", while other domains are represented by \"-\". The precedence of words within domains is described by binary precedence restrictions, which must be locally satisfied in the domain with which they are associated. Considering Fig. 1 again, a precedence restriction for \"likes\" to precede its object has no effect, since the two are in different domains. The precedence constraints are formulated as a binary relation \"~\" over dependency labels, including the special symbol \"self\" denoting the head. Discontinuities can easily be characterized, since a word may be contained in any domain of (nearly) any of its transitive heads. If a domain of its direct head contains the modifier, a continuous dependency results. If, however, a modifier is placed in a domain of some transitive head (as \"Beans\" in Fig. 1 ), discontinuities occur. 
Bounding effects on discontinuities are described by specifying that certain dependencies may not be crossed. 2 For the tFor details, cf. (Br6ker, 1997) . 2German data exist that cannot be captured by the (more common) bounding of discontinuities by nodes of a certain purpose of this paper, we need not formally introduce the bounding condition, though.",
"cite_spans": [
{
"start": 2014,
"end": 2028,
"text": "(Br6ker, 1997)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 929,
"end": 934,
"text": "Fig.l",
"ref_id": null
},
{
"start": 1267,
"end": 1273,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 1843,
"end": 1849,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Order Specification",
"sec_num": "3.1"
},
{
"text": "A sample domain structure is given in Fig.l , with two domains dl and d2 associated with the governing verb \"know\" (solid) and one with the embedded verb \"likes\" (dashed). dl may contain only one partial dependency tree, the extracted phrase, d2 contains the rest of the sentence. Both domains are described by 2, where the domain sequence is represented as \"<<\". d2 contains two precedence restrictions which require that \"know\" (represented by self) must follow the subject (first precedence constraint) and precede the object (second precedence constraint).",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 43,
"text": "Fig.l",
"ref_id": null
}
],
"eq_spans": [],
"section": "Order Specification",
"sec_num": "3.1"
},
{
"text": "(2) __ { } << ----. { (subject -.< self), (self --< object)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Order Specification",
"sec_num": "3.1"
},
{
"text": "The following notation is used in the proof. Lombardo & Lesmo (1996, p.728) convey their hope that increasing the flexibility of their conception of DG will \" ... imply the restructuring of some parts of the recognizer, with a plausible increment of the complexity\". We will show that adding a little (linguistically required) flexibility might well render recognition A/P-complete. To prove this, we will encode the vertex cover problem, which is known to be A/P-complete, in a DG.",
"cite_spans": [
{
"start": 45,
"end": 75,
"text": "Lombardo & Lesmo (1996, p.728)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Description",
"sec_num": "3.2"
},
{
"text": "A vertex cover of a finite graph is a subset of its vertices such that (at least) one end point of every edge is a member of that set. The vertex cover problem is to decide whether for a given graph there exists a vertex cover with at most k elements. The problem is known to be A/7~-complete (Garey & Johnson, 1983, pp.53-56) . Fig. 2 gives a simple example where {c, d} is a vertex cover. A straightforward encoding of a solution in the DG formalism introduced in Section 3 defines a root word s of class S with k valencies for words of class O. O has IWl subclasses denoting the nodes of the graph. An edge is represented by two linked words (one for each end point) with the governing word corresponding to the node included in the vertex cover. The subordinated word is assigned the class R, while the governing word is assigned the subclass of O denoting the node it represents. The latter word classes define a valency for words of class R (for the other end point) and a possibly discontinuous valency for another word of the identical class (representing the end point of another edge which is included in the vertex cover). This encoding is summarized in Table 1 .",
"cite_spans": [
{
"start": 293,
"end": 326,
"text": "(Garey & Johnson, 1983, pp.53-56)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 329,
"end": 335,
"text": "Fig. 2",
"ref_id": "FIGREF1"
},
{
"start": 1165,
"end": 1172,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Encoding the Vertex Cover Problem in Discontinuous DG",
"sec_num": "4.1"
},
{
"text": "The input string contains an initial s and for each edge the words representing its end points, e.g. \"saccdadbdcb\" for our example. If the grammar allows the construction of a complete dependency tree (cf. Fig. 3 for one solution), this encodes a solution of the vertex cover problem. ",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 212,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Encoding the Vertex Cover Problem in Discontinuous DG",
"sec_num": "4.1"
},
{
"text": "The encoding outlined above uses non-projective trees, i.e., crossing dependencies. In anticipation of counter arguments such as that the presented dependency grammar was just too powerful, we will present the proof using only one feature supplied by most DG formalisms, namely the free order of modifiers with respect to their head. Thus, modifiers must be inserted into an order domain of their head (i.e., no + mark in valencies). This version of the proof uses a slightly more complicated encoding of the vertex cover problem and resembles the proof by Barton (1985) .",
"cite_spans": [
{
"start": 557,
"end": 570,
"text": "Barton (1985)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Proof using Continuous DG",
"sec_num": "4.2"
},
{
"text": "Let II \u2022 II be a measure for the encoded input length of a computational problem. We require that if S is a set or string and k E N then ISl > k implies IlSll ___ Ilkll and that for any tuple I1(\"\" ,z,.. \")11 -Ilzll holds. <",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1 (Measure)",
"sec_num": null
},
{
"text": "A possible instance of the vertex cover problem is a triple ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 2 (Vertex Cover Problem)",
"sec_num": null
},
{
"text": "(V, E, k) where (V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 2 (Vertex Cover Problem)",
"sec_num": null
},
{
"text": "For an algorithm to decide the VC problem consider a data structure representing the vertices of the graph (e.g., a set). We separate the elements of this data structure classes valencies order domain S {(-, markl,O), (-, mark2,0)} --{(self-~ mark1), (mark1 -.< mark2)} A isac 0 {(-, unmrk, R), (+, same, A)} ={(unmrk -K same), (self -4 same)} B isac O {(-, unmrk, R), (+, same, B)} ={(unmrk --< same), (self -.< same)} (7 isac O {(-, unmrk, R), (+, same, C)} ~{(unmrk --4 same), (self -4 same)} D isac O { (-, unmrk, R) Fig. 2 into the (maximal) vertex cover set and its complement set. Hence, one end point of every edge is assigned to the vertex cover (i.e., it is marked). Since (at most) all IEI edges might share a common vertex, the data structure has to be a multiset which contains IEI copies of each vertex. Thus, marking the IVI -k complement vertices actually requires marking IVI -k times IE[ identical vertices. This will leave (k -1) * IEI unmarked vertices in the input structure. To achieve this algorithm through recognition of a dependency grammar, the marking process will be encoded as the filling of appropriate valencies of a word s by words representing the vertices. Before we prove that this encoding can be generated in polynomial time we show that:",
"cite_spans": [
{
"start": 507,
"end": 520,
"text": "(-, unmrk, R)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 521,
"end": 527,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "Lemma 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "The DG recognition problem is in the complexity class Alp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "[] Let G = (Lex, C, isac, Z) and a E ]E +. We give a nondeterministic algorithm for deciding whether a = (Sl-.-sn) is in L(G). Let H be an empty set initially:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "1. Repeat until IHI = Iol (a) i. For every Si E O r choose a lexicon entry ci E Lex(si).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "ii. From the ci choose one word as the head h0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "iii The algorithm obviously is (nondeterministically) polynomial in the length of the input. Given that (G, g ) E DGR, a dependency tree covering the whole input exists and the algorithm will be able to guess the dependents of every head correctly. If, conversely, the algorithm halts for some input (G, or) , then there necessarily must be a dependency tree rooted in ho completely ",
"cite_spans": [
{
"start": 104,
"end": 109,
"text": "(G, g",
"ref_id": null
},
{
"start": 300,
"end": 307,
"text": "(G, or)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "v (G(V, E, k), a(V, E, k)) E DGR []",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "For the proof, we first define the encoding and show that it can be constructed in polynomial time. Then we proceed showing that the equivalence claim holds. The lexicon Lex associates words with classes as given in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "We set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "G(V, E, k) =clef ( Lex, C, isac, ~)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "and a(V, E, k) =def s Vl''\" Vl\"'\" yIV[ \" \" \" VlV ~",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(G). <1",
"sec_num": null
},
{
"text": "For an example, cf. Fig. 4 which shows a dependency tree for the instance of the vertex cover problem from Fig. 2 . The two dependencies Ul and u2 represent the complement of the vertex cover.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 26,
"text": "Fig. 4",
"ref_id": null
},
{
"start": 107,
"end": 113,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "It is easily seen 3 that [[(G(V,E,k),a(V,E,k)) [[ is polynomial in [[V[[, [[E[[ and k. From [El _> k and Def- inition 1 it follows that H (V,E,k) u,, u),..., (-, u,v,_,, v) , (-, hi, Hi) ,-'-, (-, hie I, HIEI), (-, n, R) {U.~}U{Hjl3vm, v. \u2022 v: ej = (vm, v, , ) ^ s {s} Figure 4 : Encoding a solution to the vertex cover problem from Fig. 2 .",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "[[(G(V,E,k),a(V,E,k))",
"ref_id": null
},
{
"start": 47,
"end": 109,
"text": "[[ is polynomial in [[V[[, [[E[[ and k. From [El _> k and Def-",
"ref_id": null
},
{
"start": 138,
"end": 145,
"text": "(V,E,k)",
"ref_id": null
},
{
"start": 175,
"end": 186,
"text": "(-, hi, Hi)",
"ref_id": null
},
{
"start": 211,
"end": 220,
"text": "(-, n, R)",
"ref_id": null
},
{
"start": 221,
"end": 235,
"text": "{U.~}U{Hjl3vm,",
"ref_id": null
},
{
"start": 236,
"end": 253,
"text": "v. \u2022 v: ej = (vm,",
"ref_id": null
},
{
"start": 254,
"end": 256,
"text": "v,",
"ref_id": null
},
{
"start": 257,
"end": 258,
"text": ",",
"ref_id": null
},
{
"start": 259,
"end": 260,
"text": ")",
"ref_id": null
}
],
"ref_spans": [
{
"start": 146,
"end": 172,
"text": "u,, u),..., (-, u,v,_,, v)",
"ref_id": null
},
{
"start": 269,
"end": 277,
"text": "Figure 4",
"ref_id": null
},
{
"start": 333,
"end": 339,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "Vvi \u2022 V Vi isac R { } Vvi \u2022 V Ui isac U {(-, rz, V/),--. , (-, rlEl_l, V/)} Vei E E Hi {} S {(-,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": ", \u2022 \u2022 \u2022 , (-, r(k-,)l~l, R)} I order I ={ } word ] ={ } \"i -{} -{} word classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "Hence, the construction of (G(V, E, k), a(V, E, k)) can be done in worst-case time polynomial in II(V,E,k)ll.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "We next show the equivalence of the two problems. , E, k ) ) and, thus, , E, k) (2) V (v,,,,v,d e E: f((v,.,,,v,4 ,) e {v,,,,v,.,}",
"cite_spans": [
{
"start": 86,
"end": 113,
"text": "(v,,,,v,d e E: f((v,.,,,v,4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": ", E, k )",
"ref_id": null
},
{
"start": 72,
"end": 79,
"text": ", E, k)",
"ref_id": null
}
],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "Assume (V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "V, E, k) \u2022 L ( G ( V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "(G(V, E, k), a(V, E, k)) \u2022 DGR. Conversely assume (G(V, E, k), a(V, E, k)) \u2022 DGR: Then a(V, E, k) \u2022 L(G(V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "We define a subset V' C V by V' =def {f(e)le \u2022 E}. []",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "The Af:P-completeness of the DG recognition problem follows directly from lemmata 1 and 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IEI IEI",
"sec_num": null
},
{
"text": "We have shown that current DG theorizing exhibits a feature not contained in previous formal studies of DG, namely the independent specification of dominance and precedence constraints. This feature leads to a A/'7% complete recognition problem. The necessity of this extension approved by most current DGs relates to the fact that DG must directly characterize dependencies which in PSG are captured by a projective structure and additional processes such as coindexing or structure sharing (most easily seen in treatments of so-called unbounded dependencies). The dissociation of tree structure and linear order, as we have done in Section 3, nevertheless seems to be a promising approach for PSG as well; see a very similar proposal for HPSG (Reape, 1989) . The .N'79-completeness result also holds for the discontinuous DG presented in Section 3. This DG can characterize at least some context-sensitive languages such as anbnc n, i.e., the increase in complexity corresponds to an increase of generative capacity. We conjecture that, provided a proper formalization of the other DG versions presented in Section 2, their .A/P-completeness can be similarly shown. With respect to parser design, this result implies that the well known polynomial time complexity of chart-or tabular-based parsing techniques cannot be achieved for these DG formalisms in general. This is the reason why the PARSETALK text understanding system (Neuhaus & Hahn, 1996) utilizes special heuristics in a heterogeneous chart-and backtrackingbased parsing approach.",
"cite_spans": [
{
"start": 745,
"end": 758,
"text": "(Reape, 1989)",
"ref_id": "BIBREF18"
},
{
"start": 1429,
"end": 1451,
"text": "(Neuhaus & Hahn, 1996)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the complexity of ID/LP parsing",
"authors": [
{
"first": "Jr",
"middle": [],
"last": "Barton",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "",
"suffix": ""
}
],
"year": 1985,
"venue": "Computational Linguistics",
"volume": "11",
"issue": "4",
"pages": "205--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barton, Jr., G. E. (1985). On the complexity of ID/LP parsing. Computational Linguistics, 11(4):205- 218.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Eine Dependenzgrammatik zur Kopplung heterogener Wissenssysteme auf modaUogischer Basis, (Dissertation)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eine Dependenzgrammatik zur Kopplung heterogener Wissenssysteme auf modaUogischer Basis, (Dissertation). Freiburg, DE: Philosophische Fakult~it, Albert-Ludwigs- Universit~it.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An efficient context-free parsing algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the ACM",
"volume": "13",
"issue": "2",
"pages": "94--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Earley, J. (1970). An efficient context-free parsing algo- rithm. Communications of the ACM, 13(2):94-102.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dependency systems and phrasestructure systems",
"authors": [
{
"first": "H",
"middle": [],
"last": "Gaifman",
"suffix": ""
}
],
"year": 1965,
"venue": "Information & Control",
"volume": "8",
"issue": "",
"pages": "304--337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaifman, H. (1965). Dependency systems and phrase- structure systems. Information & Control, 8:304-- 337.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Computers and Intractability: A Guide to the Theory of NPcompleteness",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Garey",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Johnson",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garey, M. R. & D. S. Johnson (1983). Computers and Intractability: A Guide to the Theory of NP- completeness (2. ed.). New York, NY: Freeman.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Introduction to Government and Binding",
"authors": [
{
"first": "L",
"middle": [],
"last": "Haegeman",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haegeman, L. (1994). Introduction to Government and Binding. Oxford, UK: Basil Blackwell.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chart parsing according to the slot and filler principle",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hellwig",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. of the 12 th Int. Conf. on Computational Linguistics. Budapest, HU",
"volume": "1",
"issue": "",
"pages": "242--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hellwig, E (1988). Chart parsing according to the slot and filler principle. In Proc. of the 12 th Int. Conf. on Computational Linguistics. Budapest, HU, 22- 27Aug 1988, Vol. 1, pp. 242-244.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Word order and obliqueness in categorial grammar",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hepple",
"suffix": ""
}
],
"year": 1990,
"venue": "Studies in categorial grammar",
"volume": "",
"issue": "",
"pages": "47--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hepple, M. (1990). Word order and obliqueness in cat- egorial grammar. In G. Barry & G. Morill (Eds.), Studies in categorial grammar, pp. 47--64. Edin- burgh, UK: Edinburgh University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Categorial Grammars and Natural Language Structures",
"authors": [
{
"first": "G",
"middle": [],
"last": "Huck",
"suffix": ""
}
],
"year": 1988,
"venue": "Studies in Linguistics and Philosophy 32",
"volume": "",
"issue": "",
"pages": "249--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huck, G. (1988). Phrasal verbs and the categories of postponement. In R. Oehrle, E. Bach & D. Wheeler (Eds.), Categorial Grammars and Natural Lan- guage Structures, pp. 249-263. Studies in Linguis- tics and Philosophy 32. Dordrecht, NL: D. Reidel.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "English Word Grammar",
"authors": [
{
"first": "R",
"middle": [],
"last": "Hudson",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hudson, R. (1990). English Word Grammar. Oxford, UK: Basil Blackwell.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An earley-type recognizer for dependency grammar",
"authors": [
{
"first": "V",
"middle": [],
"last": "Lombardo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lesmo",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 16 th Int. Conf. on Computational Linguistics. Copenhagen, DK",
"volume": "2",
"issue": "",
"pages": "723--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lombardo, V. & L. Lesmo (1996). An earley-type recog- nizer for dependency grammar. In Proc. of the 16 th Int. Conf. on Computational Linguistics. Copen- hagen, DK, 5-9 Aug 1996, Vol. 2, pp. 723-728.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Slot grammar: A system for simpler construction of practical natural language grammars",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 1990,
"venue": "Natural Language and Logic",
"volume": "",
"issue": "",
"pages": "118--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCord, M. (1990). Slot grammar: A system for simpler construction of practical natural language gram- mars. In R. Studer (Ed.), Natural Language and Logic, pp. 118-145. Berlin, Heidelberg: Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dependency Syntax: Theory and Practice",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mer ~uk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mer ~uk, I. (1988). Dependency Syntax: Theory and Practice. New York, NY: SUNY State University Press of New York.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Surface Syntax of English: A Formal Model within the MTT Framework",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mel'čuk",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Pertsov",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mel' 6uk, I. & N. Pertsov (1987). Surface Syntax of En- glish: A Formal Model within the MTT Framework. Amsterdam, NL: John Benjamins.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Restricted parallelism in object-oriented lexical parsing",
"authors": [
{
"first": "P",
"middle": [],
"last": "Neuhaus",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 16 th Int. Conf. on Computational Linguistics. Copenhagen, DK",
"volume": "",
"issue": "",
"pages": "502--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neuhaus, R & U. Hahn (1996). Restricted parallelism in object-oriented lexical parsing. In Proc. of the 16 th Int. Conf. on Computational Linguistics. Copen- hagen, DK, 5-9 Aug 1996, pp. 502-507.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Head-Driven Phrase Structure Grammar",
"authors": [
{
"first": "C",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pollard, C. & I. Sag (1994). Head-Driven Phrase Struc- ture Grammar. Chicago, IL: University of Chicago Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A formal look at DGs and PSGs, with consideration of word-order phenomena",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1994,
"venue": "Current Issues in Meaning-Text-Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rambow, O. & A. Joshi (1994). A formal look at DGs and PSGs, with consideration of word-order phe- nomena. In L. Wanner (Ed.), Current Issues in Meaning-Text-Theory. London: Pinter.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A logical treatment of semi-free word order and discontinuous constituents",
"authors": [
{
"first": "M",
"middle": [],
"last": "Reape",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc. of the 27 th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "103--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reape, M. (I 989). A logical treatment of semi-free word order and discontinuous constituents. In Proc. of the 27 th Annual Meeting of the Association for Compu- tational Linguistics. Vancouver, BC, 1989, pp. 103- 110.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Case for Lexicase",
"authors": [
{
"first": "S",
"middle": [],
"last": "Starosta",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starosta, S. (1988). The Case for Lexicase. London: Pinter.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Lexicase revisited",
"authors": [
{
"first": "S",
"middle": [],
"last": "Starosta",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starosta, S. (1992). Lexicase revisited. Department of Linguistics, University of Hawaii.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Elements de Syntaxe Structurale",
"authors": [
{
"first": "L",
"middle": [],
"last": "Tesni~re",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tesni~re, L. ((1969) 1959). Elements de Syntaxe Struc- turale (2. ed.). Paris, FR: Klincksieck.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A lexicon Lez maps words from an alphabet E to word classes, which in turn are associated with valencies and domain sequences. The set C of word classes is hierarchically ordered by a subclass relation (3) isaccCxC A word w of class c inherits the valencies (and domain sequence) from c, which are accessed by (4) w.valencies A valency (b, d, c) describes a possible dependency relation by specifying a flag b indicating whether the dependency may be discontinuous, the dependency name d (a symbol), and the word class c E C of the modifier. A word h may govern a word m in dependency d if h defines a valency (b, d, c) such that (m isao c) and m can consistently be inserted into a domain of h (for b = -) or a domain of a transitive head of h (for b = +). This condition is written as (5) governs(h,d,m) A DG is thus characterized by (6) G = (Lex, C, isac, E) The language L(G) includes any sequence of words for which a dependency tree can be constructed such that for each word h governing a word m in dependency d, governs(h, d, m) holds. The modifier of h in dependency d is accessed by (7) h.mod(d) category.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Simple graph with vertex cover {c, d}.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Encoding a solution to the vertex cover problem fromFig. 2.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": ", E) is a finite graph and IvI > k N. The vertex cover problem is the set VC of all instances (V, E, k) for which there exists a subset V' C_ V and a function f : E ---> V I such that IV'l <_ k and V(Vm,Vn) E E: f((vm,Vn)) E {Vm,Vn}. <1 Definition 3 (DG recognition problem) A possible instance of the DG recognition problem is a tuple (G, a) where G = (Lex, C, isac, ~) is a dependency grammar as defined in Section 3 and a E E +. The DG recognition problem DGR consists of all instances (G, a) such that a E L",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": ". Let H := {ho} and M := {cili E [1, IOrl]} \\ H. (b) Repeat until M = 0: i. Choose a head h E H and a valency (b, d, c) E h.valencies and a modifier m E M. ii. If governs(h, d, m) holds then establish the dependency relation between h and the m, and add m to the set H. iii. Remove m from M.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "E, k) be a possible instance of the vertex cover problem. Then a grammar G(V, E, k) and an input a(V, E, k) can be constructed in time polynomial in II (v, E, k)II such that (V, E, k) E VC \u00a2:::::",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF6": {
"text": "The dependents of s in valencies hl are from the set V'Vo. We define a function f : E --+ V \\ Vo by f(ei) =def s.mod(hi) for all ei E E. By construction f(ei) is an end point of edge ei, i.e.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF7": {
"text": "and (4) we induce (V, E, k) \u2022 VC. \u2022 Theorem 3 The DG recognition problem is in the complexity class Af l)C.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Word classes and lexicon for vertex cover problem from",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "IVI]}. In the isac hierarchy the classes Ui share the superclass U, the classes V~ the superclass R. Valencies are defined for the classes according toTable 2.",
"html": null,
"content": "<table><tr><td>The</td></tr><tr><td>set of classes is G =aef {S, R, U} U {Hdi e [1, IEI]} U</td></tr><tr><td>{U~, \u00bc1i e [1, Furthermore, we define E =dee {S} U {vii/ E [1, IVl]}.</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF4": {
"text": "Word classes and lexicon to encode vertex cover problem",
"html": null,
"content": "<table><tr><td>$</td><td/></tr><tr><td>aaaa</td><td>bbbb</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}