| { |
| "paper_id": "C16-1042", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:02:49.069707Z" |
| }, |
| "title": "Promoting multiword expressions in A* TAG parsing", |
| "authors": [ |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Waszczuk", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universit\u00e9 Fran\u00e7ois-Rabelais Tours", |
| "location": { |
| "addrLine": "3 place Jean-Jaur\u00e8s", |
| "postCode": "41000", |
| "settlement": "Blois", |
| "country": "France" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universit\u00e9 Fran\u00e7ois-Rabelais Tours", |
| "location": { |
| "addrLine": "3 place Jean-Jaur\u00e8s", |
| "postCode": "41000", |
| "settlement": "Blois", |
| "country": "France" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Parmentier", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "LIFO -Universit\u00e9 d'Orl\u00e9ans", |
| "location": { |
| "addrLine": "6, rue L\u00e9onard de Vinci", |
| "postCode": "45067", |
| "settlement": "Orl\u00e9ans", |
| "country": "France" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Multiword expressions (MWEs) are pervasive in natural languages and often have both idiomatic and compositional readings, which leads to high syntactic ambiguity. We show that for some MWE types idiomatic readings are usually the correct ones. We propose a heuristic for an A* parser for Tree Adjoining Grammars which benefits from this knowledge by promoting MWE-oriented analyses. This strategy leads to a substantial reduction in the parsing search space in case of true positive MWE occurrences, while avoiding parsing failures in case of false positives. 1 Introduction Multiword expressions (MWEs), e.g. by and large, red tape, and to pull one's socks up 'to correct one's work or behavior', are linguistic objects containing two or more words and showing idiosyncratic behavior at different levels. Notably, their meaning is often not deducible from the meanings of their components and from their syntactic structure in a fully compositional way. Thus, interpretation-oriented NLP tasks, such as semantic calculus or translation, call for MWE-dedicated procedures. Syntactic parsing often underlies such tasks, and the crucial issue is at which point the MWE identification should take place: before (", |
| "pdf_parse": { |
| "paper_id": "C16-1042", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Multiword expressions (MWEs) are pervasive in natural languages and often have both idiomatic and compositional readings, which leads to high syntactic ambiguity. We show that for some MWE types idiomatic readings are usually the correct ones. We propose a heuristic for an A* parser for Tree Adjoining Grammars which benefits from this knowledge by promoting MWE-oriented analyses. This strategy leads to a substantial reduction in the parsing search space in case of true positive MWE occurrences, while avoiding parsing failures in case of false positives. 1 Introduction Multiword expressions (MWEs), e.g. by and large, red tape, and to pull one's socks up 'to correct one's work or behavior', are linguistic objects containing two or more words and showing idiosyncratic behavior at different levels. Notably, their meaning is often not deducible from the meanings of their components and from their syntactic structure in a fully compositional way. Thus, interpretation-oriented NLP tasks, such as semantic calculus or translation, call for MWE-dedicated procedures. Syntactic parsing often underlies such tasks, and the crucial issue is at which point the MWE identification should take place: before (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Our objective is to propose a parsing strategy which would promote analysis (b) and reading (ii). More precisely, the parser should only provide grammar-compliant MWE-oriented analyses each time they are feasible. Thus, we wish to both avoid the parsing failure for (1) and rapidly achieve the correct syntactic parses of (2)-(4), by imposing their idiomatic interpretations. In this way, the parser's search space is reduced, with virtually no loss of correct parses, and with rare errors at the level of MWE identification, as in (3). The rate of such errors is the complement of the idiomaticity rate of the text to be parsed (here: 0.05).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that promoting the most probably correct analysis, whether containing MWEs or not, is the goal of probabilistic parsers in general. Thus, instead of designing a custom parsing architecture for promoting MWEs, it would be more adequate to simply train a general-purpose parser on a treebank containing MWE annotations. This solution is however hindered by data insufficiency. Firstly, many languages still lack large-size treebanks. Secondly, very few treebanks contain a full-fledged range of MWE annotations, even for English (Ros\u00e9n et al., 2015) . Thirdly, MWEs are subject to sparseness problems even more than single words: most existing MWEs occur never or rarely in MWE-annotated corpora (Czerepowicka and Savary, 2015) , let alone treebanks. Here, we partly cope with these problems by an Earley-style A* parser using an MWE-oriented heuristic, which takes advantage of a potential occurrence of MWEs in a sentence. While it is designed to systematically promote MWEs regardless of their probabilities, the parser could very well be used with a weighted TAG, and the weights assigned to individual elementary trees could be estimated on the basis of training data.", |
| "cite_spans": [ |
| { |
| "start": 532, |
| "end": 552, |
| "text": "(Ros\u00e9n et al., 2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 699, |
| "end": 730, |
| "text": "(Czerepowicka and Savary, 2015)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In Sec. 2 we recall basic facts about TAGs. In Sec. 3 we explain the MWE-promoting strategy in TAG parsing. In Sec. 4 we describe the parsing algorithm on a running example and we formalize its heuristic in Sec. 5. In Sec. 6 we show experimental results on a Polish TAG grammar extracted from a treebank. The choice of Polish is due to the fact that high-quality MWE resources compatible with the treebank are available for this language. In Sec. 7 we compare our approach with related work. Finally, we conclude and comment on future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A TAG (Joshi et al., 1975 ) is a tree-rewriting system defined as a tuple \u27e8\u03a3, N, I, A, S\u27e9, where \u03a3 (resp. N ) is a set of terminal (resp. non-terminal) symbols, I and A are sets of elementary trees (ETs), and S \u2208 N is the axiom. Trees in I are called initial trees (ITs); their internal and leaf nodes are labeled with symbols in N and in \u03a3 \u222a N , respectively. Their non-terminal leaf nodes are called substitution nodes and marked with \u2193. Trees in A are called auxiliary trees (ATs) and are similar to trees in I except that they contain a leaf node (called a foot and marked with *) whose label is the same as that of the root. Consider the toy TAG in Fig. 2 covering three competing interpretations for acid rains in example (4). Notably, tree t 5 represents its idiomatic reading. We have I = {t 1 , t 3 , t 4 , t 5 , t 6 } and A = {t 2 }.", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 25, |
| "text": "(Joshi et al., 1975", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 655, |
| "end": 661, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Adjoining Grammars", |
| "sec_num": "2" |
| }, |
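The TAG tuple above can be mirrored in a few lines of Python, assuming a plain node representation with labels, children, and markers for substitution and foot nodes; the names are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node of an elementary tree; terminal leaves are plain labels."""
    label: str
    children: List["Node"] = field(default_factory=list)
    mark: Optional[str] = None  # "subst" for a substitution leaf, "foot" for a foot

def iter_nodes(tree: Node):
    yield tree
    for c in tree.children:
        yield from iter_nodes(c)

def is_auxiliary(tree: Node) -> bool:
    """An auxiliary tree has exactly one foot leaf labeled like its root."""
    feet = [n for n in iter_nodes(tree) if n.mark == "foot"]
    return len(feet) == 1 and feet[0].label == tree.label

# t5: the idiomatic initial tree for "acid rains": NP over N(acid) and N(rains)
t5 = Node("NP", [Node("N", [Node("acid")]), Node("N", [Node("rains")])])
# t2: the auxiliary tree N over N(acid) and a foot sharing the root label N
t2 = Node("N", [Node("N", [Node("acid")]), Node("N", mark="foot")])
```

The foot-label check encodes the constraint stated above: an AT's foot must carry the same label as its root.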
| { |
| "text": "ETs are combined to derive new trees using substitution and adjunction. Substitution consists in replacing a substitution leaf with an ET whose root is labeled with the same non-terminal (cf. the dotted arrow in Fig. 1 ). Adjunction consists in inserting an AT t\u2032 inside any tree t provided that the root/foot label of t\u2032 is the same as the label of the insertion point in t (cf. the dashed arrows in Fig. 1 ). The result of a TAG derivation is twofold: a derived tree and a derivation tree. The former represents the syntactic tree resulting from tree rewriting. The latter shows which ETs have been combined and how, as shown in Fig. 1(b) . The derived tree of a sentence containing a syntactically regular MWE is identical to the one with its compositional reading, but their derivation trees differ. Thus, in the context of joint syntactic parsing and MWE identification (cf. Sec. 1), the derived and the derivation trees can be seen as the results of the former and of the latter task, respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 199, |
| "end": 205, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 386, |
| "end": 392, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 617, |
| "end": 626, |
| "text": "Fig. 1(b)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Adjoining Grammars", |
| "sec_num": "2" |
| }, |
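The two combination operations can be sketched as follows, assuming trees encoded as nested lists `[label, child, ...]`, where a string `"X↓"` is a substitution leaf and `"X*"` a foot node; this is a toy reconstruction of the Fig. 1 derivation, not the authors' code.

```python
import copy

def substitute(tree, et):
    """Replace the first substitution leaf matching et's root label."""
    for i, child in enumerate(tree[1:], start=1):
        if isinstance(child, list):
            if substitute(child, et):
                return True
        elif child == et[0] + "\u2193":
            tree[i] = copy.deepcopy(et)
            return True
    return False

def _replace_foot(tree, root_label, subtree):
    for i, child in enumerate(tree[1:], start=1):
        if child == root_label + "*":
            tree[i] = subtree
            return True
        if isinstance(child, list) and _replace_foot(child, root_label, subtree):
            return True
    return False

def adjoin(tree, aux):
    """Adjoin aux at the first internal node labeled like aux's root."""
    for i, child in enumerate(tree[1:], start=1):
        if isinstance(child, list):
            if child[0] == aux[0]:
                new = copy.deepcopy(aux)
                _replace_foot(new, aux[0], child)  # foot takes the old subtree
                tree[i] = new
                return True
            if adjoin(child, aux):
                return True
    return False

def leaves(tree):
    for child in tree[1:]:
        if isinstance(child, list):
            yield from leaves(child)
        else:
            yield child

# Derive "acid rains happen" compositionally, as in Fig. 1:
t6 = ["S", "NP\u2193", ["VP", ["V", "happen"]]]  # initial tree for "happen"
t3 = ["NP", ["N", "rains"]]                     # initial tree for "rains"
t2 = ["N", ["N", "acid"], "N*"]                 # auxiliary tree for "acid"
substitute(t6, t3)                              # NP substitution (dotted arrow)
adjoin(t6, t2)                                  # adjunction at N (dashed arrow)
```

After both steps the derived tree yields the sentence's words in order, matching the derived tree of Fig. 1(a).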
| { |
| "text": "A TAG whose every ET contains at least one terminal leaf is called an LTAG (lexicalized TAG). The reason why we are particularly interested in LTAGs is that we consider MWEs a central challenge in NLP, and LTAGs show several advantages with respect to them (Abeill\u00e9 and Schabes, 1989) . Firstly, each MWE, together with the lexical and morphosyntactic constraints that it imposes, can be represented as a unique ET. Unification constraints on feature structures attached to tree nodes allow one to naturally express dependencies between arguments at different depths in the ETs (e.g. the subject-possessive agreement in to pull one's socks up). This is not the case for most other grammatical formalisms, which handle long-distance dependencies by feature percolation. Secondly, the so-called extended domain of locality offers a natural framework for representing two different kinds of discontinuities. Namely, discontinuities coming from the internal structure of a MWE (e.g. required but non-lexicalized arguments) are directly visible in elementary trees and are handled in parsing mostly by substitution. Discontinuities coming from insertion of adjuncts (e.g. a bunch of NP, a whole bunch of NP) are invisible in elementary trees but are handled by adjunction.", |
| "cite_spans": [ |
| { |
| "start": 257, |
| "end": 284, |
| "text": "(Abeill\u00e9 and Schabes, 1989)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Adjoining Grammars", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "V happen NP N rains N N acid N S NP N N acid N rains VP V happen \u21d2 t6 t3(1) t2(1) (a)", |
| "eq_num": "(b)" |
| } |
| ], |
| "section": "S NP\u2193 VP", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 1: Tree rewriting in TAG resulting in a derived tree (a), and a derivation tree (b). ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S NP\u2193 VP", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 2: A toy TAG for acid rains (happen), with ETs t1, . . . , t6 and their flat production rules: NP \u2192 N0; N \u2192 N1 N; NP \u2192 N2; S \u2192 NP VP3; NP \u2192 N5 N6; S \u2192 NP VP7; N0 \u2192 acid; N1 \u2192 acid; N2 \u2192 rains; VP3 \u2192 V4; N5 \u2192 acid; VP7 \u2192 V8; V4 \u2192 rains; N6 \u2192 rains; V8 \u2192 happen.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S NP\u2193 VP", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 3: Chart items for parsing acid rains (happen), each item (r, k, l) decorated with \u27e8\u03b2(I), h(I)\u27e9: (N5 \u2192 \u2022acid, 0, 0) \u27e80, 1\u27e9; (N0 \u2192 \u2022acid, 0, 0) \u27e80, 1.5\u27e9; (N5 \u2192 acid\u2022, 0, 1) \u27e80, 1\u27e9; (NP \u2192 \u2022N5 N6, 0, 0) \u27e80, 1\u27e9; (N6 \u2192 \u2022rains, 1, 1) \u27e80, 1\u27e9; (N0 \u2192 acid\u2022, 0, 1) \u27e80, 1.5\u27e9; (NP \u2192 \u2022N0, 0, 0) \u27e80, 1.5\u27e9; (V4 \u2192 \u2022rains, 1, 1) \u27e80, 1.5\u27e9; (NP \u2192 N5 \u2022 N6, 0, 1) \u27e80, 1\u27e9; (N6 \u2192 rains\u2022, 1, 2) \u27e80, 1\u27e9; (S \u2192 \u2022NP VP3, 0, 0) \u27e80, 1.5\u27e9; (NP \u2192 N0 \u2022, 0, 1) \u27e81, 0.5\u27e9; (V4 \u2192 rains\u2022, 1, 2) \u27e80, 1.5\u27e9; (VP3 \u2192 \u2022V4, 1, 1) \u27e80, 1.5\u27e9; (NP \u2192 N5 N6 \u2022, 0, 2) \u27e81, 0\u27e9; (S \u2192 NP \u2022 VP3, 0, 1) \u27e81, 1\u27e9; (VP3 \u2192 V4 \u2022, 1, 2) \u27e80, 1.5\u27e9; (S \u2192 NP \u2022 VP3, 0, 2) \u27e81, 0\u27e9; (S \u2192 NP VP3 \u2022, 0, 2) \u27e82, 0\u27e9.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "S NP\u2193 VP", |
| "sec_num": null |
| }, |
| { |
| "text": "The fact that MWEs are represented in LTAGs as ETs allows us to propose a very simple and yet powerful strategy of promoting them in parsing. As seen in Sec. 2, parsing with an LTAG consists in combining ETs via substitution or adjunction. We define the weight of a full parse as the sum of the weights of the participating ETs. Note that the more sentence words belong to MWEs, and the longer those MWEs are, the fewer ETs are needed to cover the sentence. Suppose, for instance, that the sequence acid rains in Fig. 1 is covered by its idiomatic interpretation represented by tree t 5 from Fig. 2 , instead of being handled by adjunction. In this case, parsing acid rains happen produces the same derived tree as before but the derivation tree is smaller: it involves 2 ETs instead of 3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 512, |
| "end": 518, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 591, |
| "end": 597, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Promoting MWEs in weighted TAG parsing", |
| "sec_num": "3" |
| }, |
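The counting argument above can be made concrete with a tiny sketch: with every ET weight equal to 1, the weight of a full parse is simply the number of ETs it combines, so the idiomatic analysis of acid rains happen (t5 + t6) is cheaper than the compositional one (t2 + t3 + t6). ET names follow the toy grammar of Fig. 2.

```python
def parse_weight(ets, weights=None):
    """Sum of the weights of the participating elementary trees
    (default weight 1 per ET, i.e. the trivial weighting)."""
    weights = weights or {}
    return sum(weights.get(t, 1) for t in ets)

compositional = ["t3", "t2", "t6"]  # rains (NP) + adjoined acid + happen
idiomatic = ["t5", "t6"]            # acid rains (MWE) + happen
```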
| { |
| "text": "This simple observation underlies our idea of promoting MWE-oriented analyses. Namely, suppose the input LTAG is trivially weighted, i.e., each ET has weight 1. Then, finding analyses containing the maximum number of MWEs boils down to achieving the lowest-weight parses. Our objective is to find them more rapidly than other parses, which can be achieved by an A* algorithm using an MWE-driven heuristic, as described in the following sections. See also Sec. 8 for considerations on how this solution might generalize to non-trivially weighted grammars, notably with weights estimated on the basis of treebanks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Promoting MWEs in weighted TAG parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In (Waszczuk et al., 2016) we presented a TAG parsing architecture based notably on grammar flattening, subtree sharing and finite-state-based compression. Here, we sketch a simplified version of this architecture, and explain how it implements parsing as an A* graph traversal algorithm. Then, in Sec. 5, we define the heuristic implementing the MWE-promoting strategy, which, to the best of our knowledge, is totally novel.", |
| "cite_spans": [ |
| { |
| "start": 3, |
| "end": 26, |
| "text": "(Waszczuk et al., 2016)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Consider again the LTAG in Fig. 2 . For the sake of presentation and compression (cf. Sec. 6), we represent TAG ETs as sets of flat production rules (Alonso et al., 1999) with indexed non-terminals. 1 For instance, the two N non-terminals in t 5 receive different indexes so as to avoid spurious analyses like", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 170, |
| "text": "(Alonso et al., 1999)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 33, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "[[rains]N [acid]N]NP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A rule headed by the root of an ET (e.g., S \u2192 NP VP 3 ) is called a top rule. The other rules are called inside rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
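Flattening can be sketched as follows: each ET becomes a set of flat production rules with indexed non-terminals, so that e.g. the two N nodes of the idiomatic tree receive distinct heads, and the rule headed by the bare root label is the top rule. The indexing scheme below is illustrative (the paper's own indices differ).

```python
from itertools import count

def flatten(tree):
    """tree is (label, children); a terminal leaf is a plain string.
    Returns flat rules as (head, body) pairs; the rule whose head is
    the bare root label is the top rule, the others are inside rules."""
    rules, ctr = [], count()

    def walk(node, is_root):
        label, children = node
        head = label if is_root else f"{label}{next(ctr)}"
        body = tuple(c if isinstance(c, str) else walk(c, False)
                     for c in children)
        rules.append((head, body))
        return head

    walk(tree, True)
    return rules

# Flattening t5, the idiomatic ET for "acid rains":
t5_rules = flatten(("NP", [("N", ["acid"]), ("N", ["rains"])]))
```

The distinct heads N0 and N1 rule out spurious recombinations such as rains preceding acid.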
| { |
| "text": "Suppose that only the first two words of sentence (4) are to be parsed with a grammar subset limited to t 1 , t 4 and t 5 . With a flattened grammar representation, TAG parsing comes close to CFG parsing (even if dedicated inference rules are needed for adjunction, which is neglected in this paper). Like for CFG, an Earley-style parsing process for TAGs defined within a deductive framework (Shieber et al., 1995) , involving an agenda (queue of weighted items) and a chart, can be represented as a hypergraph (Klein and Manning, 2001 ), more precisely a B-graph (Gallo et al., 1993) , whose nodes are items of the chart and of the agenda, and whose hyperarcs represent applications of inference rules, as shown in Fig. 3 . Each item I = (r, k, l) contains a dotted rule r and the span (k, l) over which the symbols to the left of the dot have been parsed. 2 For instance, the hyperarc leading from (N 5 \u2192 \u2022acid, 0, 0) to (N 5 \u2192 acid\u2022, 0, 1) means that the terminal acid has been scanned from position 0 to 1. The latter item can then be combined with", |
| "cite_spans": [ |
| { |
| "start": 393, |
| "end": 415, |
| "text": "(Shieber et al., 1995)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 512, |
| "end": 536, |
| "text": "(Klein and Manning, 2001", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 565, |
| "end": 585, |
| "text": "(Gallo et al., 1993)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 717, |
| "end": 723, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(NP \u2192 \u2022N 5 N 6 , 0, 0) to yield (NP \u2192 N 5 \u2022 N 6 , 0, 1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": ", etc. I and r are called passive if the dot occurs at the end of r, and active otherwise. A sentence s has been parsed if a target item has been reached (spanning over the whole sentence, with a passive top rule headed by S).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
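The deduction steps over flat rules can be sketched with items of the form (head, body, dot, k, l): scan advances the dot over a terminal, and complete advances an active item over an adjacent passive one. This sketch leaves out the adjunction-specific inference rules, as does the running example; the names are illustrative.

```python
def scan(item, word, pos):
    """Advance the dot over the terminal `word` read at position `pos`."""
    head, body, dot, k, l = item
    if dot < len(body) and body[dot] == word and l == pos:
        return (head, body, dot + 1, k, l + 1)
    return None

def complete(active, passive):
    """Advance the active item's dot over a fully parsed non-terminal."""
    head, body, dot, k, l = active
    p_head, p_body, p_dot, p_k, p_l = passive
    done = p_dot == len(p_body)              # passive: dot at the end
    if dot < len(body) and body[dot] == p_head and done and l == p_k:
        return (head, body, dot + 1, k, p_l)
    return None

# (N5 -> .acid, 0, 0) scans "acid" into (N5 -> acid., 0, 1), which then
# completes (NP -> .N5 N6, 0, 0) into (NP -> N5 . N6, 0, 1):
n5 = scan(("N5", ("acid",), 0, 0, 0), "acid", 0)
np = complete(("NP", ("N5", "N6"), 0, 0, 0), n5)
```

Each such step corresponds to one hyperarc of the B-graph: the conclusion item is its head, the premise items its tail.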
| { |
| "text": "The specificity of such a hypergraph lies in the fact that it is dynamically generated as the parsing process goes on. The main objective is to generate the smallest possible portion of this hypergraph while still including all the requested parses. In our case, those are all optimal parses 3 , in the sense of the MWE-promoting strategy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Each derivation traversing I = (r, k, l) and resulting in a full parse tree T can be divided into two parts: (i) I's inside derivation, i.e., the part of the derivation corresponding to a (possibly partial) subtree of T rooted at r's head and spanning over (k, l); (ii) I's outside derivation, i.e., the part of the derivation corresponding to a partial tree obtained from T by excluding I's inside derivation. The weights of I's best inside and outside derivations are denoted by \u03b2(I) and \u03b1(I), respectively. They are calculated according to the strategy described in Sec. 3, i.e. as numbers of ETs involved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In symbolic CFG parsing, and in deductive parsing in general, the sentence parsability problem boils down to target node B-reachability in the (gradually constructed) hypergraph, and can be solved e.g. by a depth-first search generalized to hypegraphs. In probabilistic CFG parsing, parse trees and hypergraph B-paths are scored, and discovering the best parse is equivalent to finding the shortest B-path, which can be done by Dijkstra's algorithm generalized to hypergraphs (Gallo et al., 1993) . The search space of this basic algorithm can be reduced in the A algorithm (Klein and Manning, 2003) , by introducing a heuristic which estimates the distance of each node to a target node. Namely, each I is assigned two values: \u03b2(I) and h(I), the latter being an estimation of \u03b1(I). The parsing items are popped from agenda in increasing order of \u03b2(I)+h(I). The heuristic used to calculate h(I) should be admissible, i.e. should never overestimate (h(I) \u2264 \u03b1(I)). Additionally, if the heuristic is monotonic (i.e. \u03b2(I) + h(I) never increases), then an item is never re-introduced into the agenda once is has been popped, and the algorithm runs faster.", |
| "cite_spans": [ |
| { |
| "start": 476, |
| "end": 496, |
| "text": "(Gallo et al., 1993)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 574, |
| "end": 599, |
| "text": "(Klein and Manning, 2003)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
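The agenda discipline described above can be sketched with a priority queue: items leave the agenda in increasing order of beta(I) + h(I), so the MWE item (beta + h = 1 + 0) is popped before the compositional item (0 + 1.5). The weights are the toy values of the running example; the item strings are illustrative labels only.

```python
import heapq

def pop_order(beta, h):
    """Order in which a priority-queue agenda pops the given items."""
    agenda = [(beta[i] + h[i], i) for i in beta]
    heapq.heapify(agenda)
    return [heapq.heappop(agenda)[1] for _ in range(len(agenda))]

beta = {"(NP -> N5 N6 ., 0, 2)": 1, "(N0 -> acid ., 0, 1)": 0}
h = {"(NP -> N5 N6 ., 0, 2)": 0, "(N0 -> acid ., 0, 1)": 1.5}
order = pop_order(beta, h)
```

A real agenda also re-queues an item whenever a new hyperarc lowers its beta value, which this sketch omits.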
| { |
| "text": "We apply the A* algorithm in a slightly adapted version in that we do not search for one but for all optimal parses, i.e. those containing grammar-compliant idiomatic interpretations. Thus, we do not quit when the first target item has been reached, but only when we are sure that no more optimal derivations can be found. As long as I stays on the agenda, \u03b2(I) has to be recalculated each time a new hyperarc with head node I is added. Once I moves to the chart, \u03b2(I) remains constant. In Fig. 3 , the pair \u27e8\u03b2(I), h(I)\u27e9 decorates each node. Note that in case of parsing with a flattened TAG, only an ET t, not its individual flat rules, is assigned a weight. Therefore, t's weight contributes to \u03b2(I) only when t has been fully parsed, and it contributes to h(I) otherwise. For instance, going from items (NP \u2192 N 5 \u2022 N 6 , 0, 1) and (N 6 \u2192 rains\u2022, 1, 2) to (NP \u2192 N 5 N 6 \u2022, 0, 2) we have completed parsing the top rule of t 5 , thus the weight of this ET (1) is added to \u03b2(I). However, item (N 6 \u2192 rains\u2022, 1, 2) is decorated with \u27e80, 1\u27e9, since no ET has been fully parsed so far but we are parsing tree t 5 (with weight 1), whose terminals fully cover the intended span (0, 2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 489, |
| "end": 495, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Weighted parsing with a flattened TAG", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The proper choice of the heuristic is crucial for the performance of the A* algorithm. We propose a heuristic h(I) specifically designed to handle MWEs and, more generally, ETs with multiple anchors, which allows one to use the A* parsing algorithm with MWE-aware weighted TAG grammars. In case weight 1 is assigned to all ETs, the heuristic closely models the strategy of promoting MWEs described in Sec. 3. Namely, it assumes that if a given MWE has a chance to occur in the part of the sentence that remains to be parsed (i.e., in its outside derivation), then this MWE probably occurs. More precisely, the yet unparsed portion of the sentence can be divided into two parts: (i) the terminals yet to be covered by the tree that we are currently parsing, (ii) the remaining terminals. The heuristic consists in considering each terminal s i from (ii) separately and assuming that it will be parsed with the ET containing s i within the longest possible MWE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Formally, let S = s 1 s 2 . . . s |S| be the input sentence and Pos(S) the set of positions between its words, ranging from 0 to |S|. Since the same word can occur more than once in a sentence or a tree, we manipulate multisets of words. For a set X, a multiset over X is a set of pairs {(x, m(x)) : x \u2208 X}, where m(x) \u2208 N + is called the multiplicity of x. We extend set notations and operators to multisets. For instance, {(a, 2), (b, 1)} is noted as {a, a, b} ms , and we have {a, b} ms \u222a {a} ms = {a, a, b} ms , {a, a, b} ms \\ {a, b} ms = {a} ms , {a, b} ms \u2286 {a, a, b} ms , {a, a, b} ms \u2288 {a, b} ms , |{a, a, b} ms | = 3, etc. For any set X, let M(X) be the set of all multisets over X. Let Rest(I) denote the multiset of words in the input sentence S outside of I's span, i.e., Rest(I) = {s 1 , . . . , s k , s l+1 , . . . , s |S| } ms . Let tree(r) be the ET from which r stems, and W (t) \u2208 [0, \u221e) the weight of the ET t. For instance, in Fig. 2 and 3 , for r = N 5 \u2192 acid\u2022 we have tree(r) = t 5 and W (t i ) = 1 for i = 1, . . . , 6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 894, |
| "end": 906, |
| "text": "Fig. 2 and 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Rest(I) = {s 1 , . . . , s k , s l+1 , . . . , s |S| } ms .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
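The multiset notation maps neatly onto `collections.Counter`: the additive union used above corresponds to Counter addition, multiset difference to Counter subtraction, and Rest(I) collects the words outside the span (k, l). A sketch, not the authors' data structures.

```python
from collections import Counter

def ms_included(a, b):
    """Multiset inclusion: every multiplicity in a is covered by b."""
    return all(m <= b[x] for x, m in a.items())

def rest(sentence, k, l):
    """Rest(I) for an item spanning (k, l): words outside the span."""
    return Counter(sentence[:k]) + Counter(sentence[l:])

aab = Counter({"a": 2, "b": 1})   # {a, a, b}ms
ab = Counter({"a": 1, "b": 1})    # {a, b}ms
union = ab + Counter({"a": 1})    # {a, b}ms union {a}ms = {a, a, b}ms
diff = aab - ab                   # {a, a, b}ms \ {a, b}ms = {a}ms
```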
| { |
| "text": "Let sub(t) \u2208 M(\u03a3) be the multiset of terminals in tree t. For instance, sub(t 5 ) = {acid, rains} ms . For each word w, let minw(w) denote the minimal weight of scanning w by an ET, i.e., the minimum proportion of w among all terminals of a single ET. More precisely,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "minw(w) = min_{t : (w,i) \u2208 sub(t)} W(t) / |sub(t)|.", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For instance, the proportion of acid in the terminals of t 1 , t 2 and t 5 is 1, 1 and 0.5, respectively, so minw(acid) = 0.5. Similarly minw(rains) = 0.5. 5 Thus, with all ET weights equal to 1, the longer a MWE, the lower are the minw values of its components. Let sub(r), super(r) \u2208 M(\u03a3) be the multisets of terminals occurring in tree(r) inside and outside of the subtree rooted at r's head, respectively. For instance, sub(N 5 \u2192 acid\u2022) = {acid} ms and super(N 5 \u2192 acid\u2022) = {rains} ms . Note that for any top rule r, super(r) = \u2205 ms .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
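Eq. 5 can be checked directly: the cheapest way to scan a word is via the ET that dilutes its weight over the most terminals, so with unit weights the words of the two-word MWE t5 cost 0.5 each. The grammar data below follow the toy example; the dictionary layout is illustrative.

```python
from collections import Counter

SUB = {                            # sub(t): multiset of terminals of each ET
    "t1": Counter(["acid"]),
    "t2": Counter(["acid"]),
    "t3": Counter(["rains"]),
    "t4": Counter(["rains"]),
    "t5": Counter(["acid", "rains"]),
    "t6": Counter(["happen"]),
}
W = {t: 1 for t in SUB}            # the trivial weighting

def minw(word):
    """Eq. 5: min over ETs containing word of W(t) / |sub(t)|."""
    return min(W[t] / sum(sub.values())
               for t, sub in SUB.items() if word in sub)
```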
| { |
| "text": "Let suff (r) be the set of passive non-top rules headed by the symbols in r' body after the dot. For instance, suff", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(NP \u2192 N 5 \u2022 N 6 ) = {N 6 \u2192 rains\u2022} and suff (S \u2192 \u2022NP VP 3 ) = {VP 3 \u2192 V 4 \u2022}. Note that if r is passive, suff (r) = \u2205.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Finally, let Req(I) be the multiset of words required by the yet unparsed part of the current tree, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Req(I) = super(r) \u222a \u22c3_{p \u2208 suff(r)} sub(p).", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For instance in Fig.3 , for item I = (NP \u2192 N 5 \u2022 N 6 , 0, 1) we have super(NP \u2192 N 5 \u2022 N 6 ) = \u2205 ms , sub(N 6 \u2192 rains\u2022) = {rains} ms , and Req(I) = {rains} ms .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 21, |
| "text": "Fig.3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For any item I = (r, k, l) we define a primary heuristic h 0 (I) as in equation 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h 0 (I) = \u221e, if Req(I) \u2288 Rest(I); \u2211_{(s,i) \u2208 Rest(I) \\ Req(I)} minw(s) \u00d7 i, otherwise", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Then the estimation for the weight of I's best outside derivation, i.e. \u03b1(I), is given by equation 8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "h(I) = h 0 (I), if I is a top-rule passive item; W (tree(r)) + h 0 (I), otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For instance, in the top-rule passive item (NP \u2192 N 0 \u2022, 0, 1) we have finished parsing t 1 (\u03b2(I) = 1) and we still have to consume rains, which implies a weight at least equal to h(I) = minw(rains) = 0.5. With this heuristic, and weight 1 assigned to individual ETs, the derivations containing MWEs are often reached before the paths towards compositional ones are even followed. For instance, the item (N 0 \u2192 acid\u2022, 0, 1) has the estimated cost 1.5, and it will be created later than (S \u2192 NP VP 3 \u2022, 0, 2). Thus, the hyperpath (highlighted in bold) assuming the idiomatic reading of acid rains will be followed before the path assuming that rains is a verb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
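The heuristic of Eqs. 7 and 8 can be sketched as follows: h0 is infinite when the words the current tree still requires are not all available in the unparsed rest; otherwise every other remaining word is charged its optimistic cost minw(w), and the current ET's weight is added unless its top rule is already passive. The minw values are the toy ones; function names are illustrative.

```python
from collections import Counter
import math

MINW = {"acid": 0.5, "rains": 0.5, "happen": 1.0}

def h0(rest, req):
    """Eq. 7: infinite if Req(I) is not included in Rest(I); otherwise
    the optimistic cost of the remaining words outside Req(I)."""
    if not all(m <= rest[w] for w, m in req.items()):
        return math.inf
    return sum(MINW[w] * m for w, m in (rest - req).items())

def h(rest, req, tree_weight, top_passive):
    """Eq. 8: add the current ET's weight unless it is fully parsed."""
    return h0(rest, req) if top_passive else tree_weight + h0(rest, req)

# Item (NP -> N5 . N6, 0, 1) while parsing t5: rest = req = {rains},
# so h = W(t5) + 0 = 1. The top-rule passive item (NP -> N0 ., 0, 1)
# still has to consume rains at cost minw(rains) = 0.5.
h_inside = h(Counter({"rains": 1}), Counter({"rains": 1}), 1, False)
h_passive = h(Counter({"rains": 1}), Counter(), 1, True)
```

Both values match the decorations of the corresponding items in Fig. 3.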
| { |
| "text": "For a given item the heuristic assumes that each remaining word w from the input sentence (with the exception of the words required by the rule underlying the item) will be scanned with the lowest possible cost, i.e. minw(w) (see Eq. 5). The heuristic never over-estimates the cost of parsing the remaining part of the sentence and is thus admissible. All but one of the parser's inference rules are also monotonic, in the sense that the estimation, stemming from the application of an inference rule, of the total weight \u03b2(I) + h(I) of an item I is greater than or equal to the total weight, \u03b2(I\u2032) + h(I\u2032), of any premise item I\u2032 of this rule. The sole exception concerns the inference rule called foot adjoin (FA), cf. (Waszczuk et al., 2016) , responsible for recognizing the so-called gaps over which adjoining could be performed. This is related to the fact that the weight of the item inferred with FA does not depend on the \u03b2(I\u2032) weight of its premise item I\u2032 = (r, k, l), where item I\u2032 provides evidence that adjunction could possibly take place over span (k, l) . Nonetheless, the algorithm guarantees that when item I is popped from the agenda, one of the hyperarcs representing an optimal derivation of I is already inferred, and thus the \u03b2(I) value is correctly calculated.", |
| "cite_spans": [ |
| { |
| "start": 713, |
| "end": 736, |
| "text": "(Waszczuk et al., 2016)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1056, |
| "end": 1062, |
| "text": "(k, l)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MWE-driven heuristic", |
| "sec_num": "5" |
| }, |
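The agenda discipline described above can be illustrated with a minimal sketch (not the authors' code): `min_w` gives each word the weight of the cheapest ET scanning it (here 0.5 each, from the MWE tree of weight 1 spread uniformly over its two terminals, cf. footnote 5), `h` sums these minima over the words an item still has to consume, and items are popped in order of estimated total weight \u03b2 + h. The item names and figures mirror the acid rains example and are illustrative only.

```python
import heapq

# Hypothetical minimal weights per word: the MWE ET "acid rains" (weight 1)
# distributes its weight uniformly over its two terminals.
min_w = {"acid": 0.5, "rains": 0.5}

def h(remaining):
    """Admissible heuristic: assume every remaining word is scanned at its
    cheapest possible cost, so the true remaining cost is never overestimated."""
    return sum(min_w[w] for w in remaining)

# item -> (beta = weight of the partial derivation, words still to consume)
items = {
    "S -> NP VP .": (1.0, []),         # idiomatic path: MWE ET fully parsed
    "N -> acid .": (1.0, ["rains"]),   # compositional path: "rains" remains
}

# Priority = beta + h; the idiomatic item (1.0) beats the compositional
# one (1.5) and is popped first.
agenda = [(beta + h(rest), name) for name, (beta, rest) in items.items()]
heapq.heapify(agenda)
best = heapq.heappop(agenda)
```

Under these assumed weights the idiomatic analysis surfaces before the path treating rains as a verb is even explored.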
| { |
| "text": "We evaluated our parsing strategy with Sk\u0142adnica, a Polish treebank with over 9,000 manually disambiguated constituency trees (\u015awidzi\u0144ski and Woli\u0144ski, 2010). As it contains no MWE annotations, we produced them automatically by projecting three existing MWE resources: (i) the named entity (NE) layer of the National Corpus of Polish (NCP) (Savary et al., 2010) (only the multiword NEs were taken into account), (ii) SEJF, an extensional lexicon of Polish nominal, adjectival and adverbial MWEs (Czerepowicka and Savary, 2015), (iii) Walenty, a Polish valence dictionary (Przepi\u00f3rkowski et al., 2014) with over 8,000 verbal MWEs. The mapping for (i) was straightforward and did not require manual validation, since Sk\u0142adnica is a subcorpus of the NCP, whose NE annotation and adjudication were performed manually. The mapping for (ii) and (iii), followed by a manual validation, consisted in searching for syntactic nodes satisfying all lexical constraints and part of the syntactic constraints of a MWE entry. The required lexical nodes were to be contiguous for (ii) but not for (iii). As a result, 2,026 idiomatic occurrences (1,303 from NCP-NE, 368 from SEJF and 355 from Walenty) and 40 compositional ones (22 for SEJF and 18 for Walenty) were identified, which implies an idiomaticity rate of about 0.95 (0.95 for Walenty and 0.94 for SEJF). A TAG grammar with 28,652 lexicalized elementary trees was then extracted from the MWE-marked treebank, similarly to (Krasnowska, 2013) or (Chen and Shanker, 2004). Each treebank subtree marked for a MWE yielded: (i) a MWE-dedicated ET containing all paths leading to the lexical (co-)anchors, (ii) ETs covering the compositional interpretations. Various compression techniques can be applied to a flattened TAG (Waszczuk et al., 2016). We used a representation in which common subtrees and prefixes of flat rules are shared.",
| "cite_spans": [ |
| { |
| "start": 337, |
| "end": 357, |
| "text": "(Savary et al., 2010", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 569, |
| "end": 598, |
| "text": "(Przepi\u00f3rkowski et al., 2014)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1454, |
| "end": 1472, |
| "text": "(Krasnowska, 2013)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1476, |
| "end": 1500, |
| "text": "(Chen and Shanker, 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1750, |
| "end": 1773, |
| "text": "(Waszczuk et al., 2016)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We assess our parser's efficiency in terms of the size of its parsing hypergraph. We believe this to be a more objective measure for comparing different parsing strategies than absolute parsing time, since each hypergraph edge corresponds to an application of an inference rule, i.e. to a basic parsing step (as in theoretical complexity considerations). 6 Conversely, parsing time is highly dependent on low-level implementation details. 7 The baseline hypergraph is the one generated with the full grammar, when no MWE-promoting strategy is used and all grammar-compliant parses are generated for each sentence. The MWE-promoting (PM) hypergraph, compared to this baseline, includes mainly the optimal parses, i.e. those in which the maximum number of words belongs to potential MWEs (the algorithm ensures that, in PM, all optimal parses are reached, but some sub-optimal parses may also be reached, since the heuristic h is an imperfect estimation of \u03b1).",
| "cite_spans": [ |
| { |
| "start": 446, |
| "end": 447, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The experiment was carried out on the same dataset from which the grammar was extracted. Therefore, for each sentence, the baseline hypergraph contained both its gold (i.e., conforming to Sk\u0142adnica) parse (derived tree) and its gold MWE identification (derivation tree). The PM hypergraphs, in turn, contained the correct parses for virtually 100% of the sentences, 8 and the correct MWE identification for around 95% of them (due to the idiomaticity rate of 0.95). Thus, the parsing efficiency gained with the PM strategy comes with no loss of accuracy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The PM strategy is comparable to supertagging (ST), i.e. pre-selecting, for each sentence, a subset of ETs that are likely to be used in the derivation, in order to reduce the parsing search space. We experimented with a simple form of ST, which restricts the grammar to ETs whose terminals all occur in the given sentence. Namely, we examined the ST hypergraph containing all parses for each sentence, as well as the hypergraph obtained when ST was combined with PM (where mainly optimal parses were reached). Fig. 4a shows the absolute sizes of the hypergraphs for these 4 strategies as a function of sentence length. The PM strategy brings improvements regardless of whether supertagging is used. Supertagging alone outperforms, on average, the baseline MWE-promoting strategy. Since the combination of the ST and PM strategies proves the most efficient, we restrict further experiments to this version. Note that Fig. 4a does not fully reflect the potential advantages of the PM strategy, whose behavior does not directly depend on the length of the parsed sentence, but rather on the number and the size of the MWEs potentially occurring in it. These two values can be jointly represented as the ratio of the size of the MWE-based derivation tree to the size of the corresponding compositional derivation tree (i.e. the one assuming no MWE occurrence). Expectedly, as shown in Fig. 4b, the lower this ratio (i.e. the more words in the sentence belong to MWEs, and the longer these MWEs are), the more significant the hypergraph size reductions. Moreover, the resulting graph suggests that the hypergraph size reductions are linear with respect to this ratio. Note that the vertical axis now shows the proportional gain in hypergraph size due to the ST+PM strategy with respect to the ST strategy alone.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 493, |
| "end": 500, |
| "text": "Fig. 4a", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 908, |
| "end": 915, |
| "text": "Fig. 4a", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 1372, |
| "end": 1379, |
| "text": "Fig. 4b", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "6" |
| }, |
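The simple symbolic supertagging used above (keep only ETs all of whose terminals occur in the sentence) can be sketched as follows; ETs are modeled as (name, terminal-set) pairs, and all names are hypothetical illustrations rather than grammar entries from the paper.

```python
def supertag(grammar, sentence):
    """Symbolic ST: keep an ET only if every one of its terminals
    occurs somewhere in the input sentence."""
    words = set(sentence)
    return [name for name, terminals in grammar if terminals <= words]

# Hypothetical ETs: an MWE-dedicated tree anchors several terminals at once.
grammar = [
    ("acid_rains_mwe", {"acid", "rains"}),
    ("acid_noun", {"acid"}),
    ("rains_verb", {"rains"}),
    ("red_tape_mwe", {"red", "tape"}),  # filtered out: terminals absent
]

selected = supertag(grammar, ["acid", "rains"])
```

This pre-filter shrinks the grammar per sentence without committing to any single analysis, which is why it composes naturally with the PM strategy.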
| { |
| "text": "Finally, we investigated the behavior of the PM+ST strategy for two types of MWEs independently: verbal MWEs from Walenty and compounds from NCP and SEJF. As shown in Fig. 4c, verbal MWEs, while less frequent, prove better at reducing ambiguity for sentences with a low number of potential MWEs. It is hard to ascertain this claim for sentences with a lower gold derivation size ratio. While compounds seem to outperform verbal MWEs in this case, the sentences with verbal MWEs for which this ratio is low are also very short in our dataset (of length 5, on average, for the 20 sentences with the lowest ratio), and thus exhibit low syntactic ambiguity.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 174, |
| "text": "Fig. 4c", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "While A* algorithms have been widely used for AI inference problems where a lightest derivation is to be found (Felzenszwalb and McAllester, 2007), this is to our knowledge the first attempt at using them in the context of MWE parsing with TAG. This work was inspired by Lewis and Steedman (2014), who applied A* to parsing with another strongly lexicalized grammar formalism, namely CCG. Unlike in that work, our grammar rules are not constrained to have a single lexical item, hence they can explicitly represent MWEs. This calls for a more elaborate heuristic, since a not-yet-parsed terminal may or may not be consumable by the currently parsed tree, as is the case with rains in item (NP \u2192 N 5 \u2022 N 6 , 0, 1) as opposed to (NP \u2192 N 0 \u2022, 0, 1) in Fig. 3. Distinguishing these two cases leads to a more precise weight estimation. Angelov and Ljungl\u00f6f (2014) proposed to apply A* top-down parsing to parallel multiple context-free grammars, a formalism strictly more expressive than TAGs. In their approach, weights are assigned to production rules and the grammar is not assumed to be strongly lexicalized, which complicates the design of an efficient heuristic. Their evaluation showed that a non-admissible heuristic can be orders of magnitude faster than the admissible version, at the expense of parsing quality.",
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 145, |
| "text": "(Felzenszwalb and McAllester, 2007)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 275, |
| "end": 300, |
| "text": "Lewis and Steedman (2014)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 837, |
| "end": 864, |
| "text": "Angelov and Ljungl\u00f6f (2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 754, |
| "end": 760, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Other ways of dealing with MWEs in the context of TAG would involve pre- or post-processing. A post-processing step would consist in identifying MWE interpretations in derivation structures (potentially with an additional processing cost). Regarding pre-processing, current state-of-the-art techniques are related to probabilistic supertagging (Bangalore and Joshi, 1999), as opposed to the simple symbolic supertagging applied in Sec. 6. While labeling the words of a sentence with candidate ETs, one may either keep, for each word, the most probable ET, or all ETs whose probabilities are above a given threshold. Large MWE-annotated resources are needed to train such supertaggers. Probabilistic treatment of contiguous MWEs has been applied to Tree-Substitution Grammar with encouraging results (Green et al., 2013). The main drawback of such probabilistic pre-processing is that it can prevent the parser from finding the right derivations when the supertagging is wrong. This situation is avoided in A* parsing which, while requiring that candidate ETs be annotated with the corresponding probabilities, filters out unlikely ET candidates on the fly.",
| "cite_spans": [ |
| { |
| "start": 343, |
| "end": 370, |
| "text": "(Bangalore and Joshi, 1999)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 788, |
| "end": 808, |
| "text": "(Green et al., 2013)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "7" |
| }, |
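The two pruning policies mentioned above (keep the single most probable ET per word, or keep all ETs above a threshold) can be sketched in a few lines; the candidate names and probabilities are hypothetical, not taken from any trained supertagger.

```python
def best_et(candidates):
    """Hard policy: keep only the single most probable ET for a word."""
    return max(candidates, key=lambda c: c[1])[0]

def above_threshold(candidates, theta):
    """Soft policy: keep every ET whose probability reaches the threshold."""
    return [et for et, p in candidates if p >= theta]

# Hypothetical ET candidates for the word "rains" with illustrative scores.
cands = [("rains_verb", 0.6), ("acid_rains_mwe", 0.3), ("rains_noun", 0.1)]
```

Note that with the hard policy the MWE-dedicated ET would be discarded here, which is exactly the failure mode that A* parsing avoids by keeping low-weight candidates on the agenda instead of pruning them up front.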
| { |
| "text": "An alternative to probabilistic supertagging has been proposed by Boullier (2003). There, an approximating CFG is computed from an input TAG and used to parse the input sentence, so as to decide which ETs should be selected for TAG parsing. This approach has been enhanced by Gardent et al. (2014) to take word order into account. We consider such a supertagging technique an interesting candidate for future work. One could indeed not only select ETs that are compatible with the sentence to parse, but also distinguish ETs for literal interpretations from ETs for MWEs. Like non-statistical supertagging, using an A* algorithm has the advantage of processing MWEs while keeping ambiguity as long as possible, so as to avoid dismissing valid interpretations.",
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 81, |
| "text": "Boullier (2003)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 284, |
| "end": 305, |
| "text": "Gardent et al. (2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Relatively few works have explicitly addressed the idiomaticity rate of MWEs. (Savary et al., 2012) perform a straightforward matching of a Polish economic MWE lexicon, containing extensional descriptions of morpho-syntactic variants, against a corpus and obtain only 0.12%-0.21% false positives. (El Maarouf and Oakes, 2015) examine 10 verbal MWEs in the British National Corpus and find that the idiomaticity measure exceeds 0.95 for half of them, and is above 0.676 for the 9 most frequent ones.",
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 98, |
| "text": "(Savary et al., 2012", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We have presented a novel LTAG parsing architecture in which parses potentially containing MWEs are given higher priority so as to be reached faster than the competing compositional analyses. The underlying A* algorithm uses a distance estimation heuristic based on the number of terminal nodes in elementary trees. The results obtained with a Polish TAG grammar show that this strategy can considerably reduce the number of parsing items to be explored in order to generate a subset of parses very likely to contain the correct one. The tests used a grammar extracted from a MWE-annotated treebank, but the method also applies to hand-crafted grammars.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Future work includes possible enhancements of the A* heuristic. It currently does not require that, if an ET is used to scan an input terminal, all the other terminals of this ET also be present in the sentence. Nor does it require that the terminals be scanned in the appropriate order. Taking such constraints into account might enhance both the parsing quality and speed. Note also that the heuristic ignores ETs which contain no lexical anchors, so it is mainly adapted to strongly lexicalized TAGs. Relieving this constraint, while preserving MWE promotion, would be worth considering.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Another perspective is to evaluate the computational overhead of the MWE-based heuristic, as opposed to identifying MWEs in a post-parsing step. Also, a fine-grained estimation of the idiomaticity rate of different types of MWEs might give us hints as to which of them should best be identified before, during or after parsing. With such data at hand, it should be possible to construct a multi-stage MWE-aware parsing architecture, tunable for an optimal trade-off between accuracy and speed.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Even with MWE lexicons mapped onto a treebank, as shown in Sec. 6, sufficiently large MWE-annotated treebanks are hard to obtain, and even when they do exist, they still suffer from MWE sparseness. In the long run, we aim at a hybrid parsing architecture in which a MWE-driven parser is fed with a probabilistic TAG grammar combined with MWE lexicons. We believe that such an extension of our solution to a hybrid setting is possible due to two factors. Firstly, the heuristic described in Sec. 5 generalizes to any weighted TAG with non-negative weights assigned to individual ETs. Secondly, systematically promoting MWE-oriented analyses in probabilistic parsing can be achieved even if MWEs are underrepresented in the training corpus. Namely, MWE-oriented ETs could stem from a syntactic MWE lexicon, such as Walenty (Przepi\u00f3rkowski et al., 2014), while their weights could be calculated from the weights of the ETs corresponding to their compositional analyses. Alternatively, the weights could be represented as lexicographically ordered pairs, consisting of (i) the number of ETs participating in the underlying derivations, and (ii) the actual weights stemming from the weighted grammar.",
| "cite_spans": [ |
| { |
| "start": 817, |
| "end": 846, |
| "text": "(Przepi\u00f3rkowski et al., 2014)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
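The lexicographically ordered weight pairs suggested above have a very direct encoding: score each derivation by (number of participating ETs, weight from the weighted grammar) and compare left-to-right, so an analysis built from fewer ETs (i.e. a MWE-oriented one) always wins, with grammar weights only breaking ties. The figures below are illustrative, not taken from the experiments.

```python
# Hypothetical derivation scores: (ET count, weight from the weighted grammar).
idiomatic = (1, 0.9)      # one MWE-dedicated ET, even with a higher raw weight
compositional = (2, 0.4)  # two ETs

# Python tuples compare lexicographically, which is exactly the ordering
# needed: the first component (ET count) dominates, the second breaks ties.
best = min(idiomatic, compositional)
```

A weighted-TAG parser using such pairs promotes MWEs systematically while still ranking competing MWE-based analyses by their grammar weights.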
| { |
| "text": "Finally, integrating feature structures and unification within this parsing framework might lead to faster pruning of spurious analyses, and enable a more precise MWE identification, especially for inflectionally rich languages like Polish.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Our proposal applies, however, to other LTAG representations as well. 2 For simplicity, we ignore the fact that an item's span can include a gap accounting for adjunction. 3 In probabilistic CFG parsing, the 1-best parse (Klein and Manning, 2003) or the k-best parses (Pauls and Klein, 2009) are usually considered.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In the case of adjunction, I's span includes two additional indices denoting the gap, and the words within the gap also belong to Rest(I). 5 Variants of the minw(w) definition include distributing the weights of individual terminals in an ET proportionally to their frequencies in the corpus. Our experiments did not show any advantage of such a distribution over the uniform one.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The overhead related to computing the values of the heuristic is at most linear in the size of the sentence, and may be much lower with efficient low-level optimizations. 7 In an optimized implementation, TAG parsing time is proportional to the number of hyperarcs, as reported by (Waszczuk et al., 2016). 8 A sanity check showed that for 54 sentences the gold parse was not found, mainly due to some abbreviation- and lettercase-related specificities, as well as to missing MWE annotations in Sk\u0142adnica.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "www.parseme.eu 10 http://parsemefr.lif.univ-mrs.fr", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been supported by the European Framework Programme Horizon 2020 via the PARSEME 9 European COST Action (IC1207), as well as by the French National Research Agency (ANR) via the PARSEME-FR 10 project (ANR-14-CERA-0001).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Parsing idioms in lexicalized tags", |
| "authors": [ |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Abeill\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Schabes", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings of the 4th Conference of the European Chapter of the ACL, EACL'89", |
| "volume": "", |
| "issue": "", |
| "pages": "1--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anne Abeill\u00e9 and Yves Schabes. 1989. Parsing idioms in lexicalized tags. In Harold L. Somers and Mary McGee Wood, editors, Proceedings of the 4th Conference of the European Chapter of the ACL, EACL'89, Manchester, pages 1-9. The Association for Computer Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Tabular algorithms for TAG parsing",
| "authors": [
| {
| "first": "Miguel",
| "middle": [],
| "last": "Alonso",
| "suffix": ""
| },
| {
| "first": "David",
| "middle": [],
| "last": "Cabrero",
| "suffix": ""
| },
| {
| "first": "Eric",
| "middle": ["Villemonte"],
| "last": "De La Clergerie",
| "suffix": ""
| },
| {
| "first": "Manuel",
| "middle": ["Vilares"],
| "last": "Ferro",
| "suffix": ""
| }
| ],
| "year": null, |
| "venue": "EACL 1999", |
| "volume": "", |
| "issue": "", |
| "pages": "150--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miguel Alonso, David Cabrero, Eric Villemonte de la Clergerie, and Manuel Vilares Ferro. 1999. Tabular algo- rithms for TAG parsing. In EACL 1999, pages 150-157.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Fast statistical parsing with parallel multiple context-free grammars", |
| "authors": [ |
| { |
| "first": "Krasimir", |
| "middle": [], |
| "last": "Angelov", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Ljungl\u00f6f", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EACL", |
| "volume": "14", |
| "issue": "", |
| "pages": "368--376", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Krasimir Angelov and Peter Ljungl\u00f6f. 2014. Fast statistical parsing with parallel multiple context-free grammars. In EACL, volume 14, pages 368-376.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Supertagging: An Approach to Almost Parsing", |
| "authors": [ |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [ |
| "K" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Comput. Linguist", |
| "volume": "25", |
| "issue": "2", |
| "pages": "237--265", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An Approach to Almost Parsing. Comput. Lin- guist., 25(2):237-265, June.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Supertagging: a Non-Statistical Parsing-Based Approach", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Boullier", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 8th International Workshop on Parsing Technologies (IWPT 03)", |
| "volume": "", |
| "issue": "", |
| "pages": "55--65", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre Boullier. 2003. Supertagging: a Non-Statistical Parsing-Based Approach. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT 03), pages 55-65, Nancy, France.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Candito", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "743--753", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie Candito and Matthieu Constant. 2014. Strategies for Contiguous Multiword Expression Analysis and De- pendency Parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 743-753.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "New developments in parsing technology. chapter Automated Extraction of Tags from the Penn Treebank", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [ |
| "K" |
| ], |
| "last": "Shanker", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "73--89", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Chen and Vijay K. Shanker. 2004. New developments in parsing technology. chapter Automated Extraction of Tags from the Penn Treebank, pages 73-89. Kluwer Academic Publishers, Norwell, MA, USA.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A transition-based system for joint lexical and syntactic analysis", |
| "authors": [ |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "161--171", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthieu Constant and Joakim Nivre. 2016. A transition-based system for joint lexical and syntactic analysis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 161-171, Berlin, Germany, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Discriminative strategies to integrate multiword expression recognition and parsing", |
| "authors": [ |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Sigogne", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Watrin", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
| "volume": "1", |
| "issue": "", |
| "pages": "204--212", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthieu Constant, Anthony Sigogne, and Patrick Watrin. 2012. Discriminative strategies to integrate multiword expression recognition and parsing. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics: Long Papers -Volume 1, ACL '12, pages 204-212, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "SEJF -a Grammatical Lexicon of Polish Multi-Word Expression", |
| "authors": [ |
| { |
| "first": "Monika", |
| "middle": [], |
| "last": "Czerepowicka", |
| "suffix": "" |
| }, |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Language and Technology Conference (LTC'15)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Monika Czerepowicka and Agata Savary. 2015. SEJF -a Grammatical Lexicon of Polish Multi-Word Expression. In Proceedings of Language and Technology Conference (LTC'15), Pozna\u0144, Poland. Wydawnictwo Pozna\u0144skie.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Statistical Measures for Characterising MWEs", |
| "authors": [ |
| { |
| "first": "Ismail", |
| "middle": [ |
| "El" |
| ], |
| "last": "Maarouf", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Oakes", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "IC1207 COST PARSEME 5th general meeting", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ismail El Maarouf and Michael Oakes. 2015. Statistical Measures for Characterising MWEs. In IC1207 COST PARSEME 5th general meeting.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The generalized A* architecture",
| "authors": [ |
| { |
| "first": "Pedro", |
| "middle": [], |
| "last": "Felzenszwalb", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcallester", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "29", |
| "issue": "", |
| "pages": "153--190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pedro Felzenszwalb and David McAllester. 2007. The generalized A* architecture. Journal of Artificial Intelligence Research, 29:153-190.",
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Directed hypergraphs and applications", |
| "authors": [ |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Gallo", |
| "suffix": "" |
| }, |
| { |
| "first": "Giustino", |
| "middle": [], |
| "last": "Longo", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Pallottino", |
| "suffix": "" |
| }, |
| { |
| "first": "Sang", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Discrete Appl. Math", |
| "volume": "42", |
| "issue": "2-3", |
| "pages": "177--201", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Giorgio Gallo, Giustino Longo, Stefano Pallottino, and Sang Nguyen. 1993. Directed hypergraphs and applica- tions. Discrete Appl. Math., 42(2-3):177-201, April.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Lexical Disambiguation in LTAG using Left Context", |
| "authors": [ |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Gardent", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Parmentier", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Perrier", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Schmitz", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Human Language Technology. Challenges for Computer Science and Linguistics. 5th Language and Technology Conference", |
| "volume": "8387", |
| "issue": "", |
| "pages": "67--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claire Gardent, Yannick Parmentier, Guy Perrier, and Sylvain Schmitz. 2014. Lexical Disambiguation in LTAG using Left Context. In Zygmunt Vetulani and Joseph Mariani, editors, Human Language Technology. Chal- lenges for Computer Science and Linguistics. 5th Language and Technology Conference, LTC 2011, Poznan, Poland, November 25-27, 2011, Revised Selected Papers, volume 8387, pages 67-79. Springer.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Parsing Models for Identifying Multiword Expressions", |
| "authors": [ |
| { |
| "first": "Spence", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Linguistics", |
| "volume": "39", |
| "issue": "1", |
| "pages": "195--227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Spence Green, Marie-Catherine de Marneffe, and Christopher D. Manning. 2013. Parsing Models for Identifying Multiword Expressions. Computational Linguistics, 39(1):195-227.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Tree adjunct grammars", |
| "authors": [ |
| { |
| "first": "Aravind", |
| "middle": [ |
| "K" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Leon", |
| "middle": [ |
| "S" |
| ], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Masako", |
| "middle": [], |
| "last": "Takahashi", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Journal of the Computer and System Sciences", |
| "volume": "10", |
| "issue": "", |
| "pages": "136--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aravind K. Joshi, Leon S. Levy, and Masako Takahashi. 1975. Tree adjunct grammars. Journal of the Computer and System Sciences, 10:136-163.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Parsing and hypergraphs", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Seventh International Workshop on Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "17--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher D. Manning. 2001. Parsing and hypergraphs. In Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT-2001), 17-19 October 2001, Beijing, China. Tsinghua University Press.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A* Parsing: Fast Exact Viterbi Parse Selection", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher D. Manning. 2003. A* Parsing: Fast Exact Viterbi Parse Selection. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Towards a Polish LTAG grammar", |
| "authors": [ |
| { |
| "first": "Katarzyna", |
| "middle": [], |
| "last": "Krasnowska", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Language Processing and Intelligent Information Systems -20th International Conference, IIS 2013", |
| "volume": "7912", |
| "issue": "", |
| "pages": "16--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katarzyna Krasnowska. 2013. Towards a Polish LTAG grammar. In Mieczyslaw A. Klopotek, Jacek Koronacki, Malgorzata Marciniak, Agnieszka Mykowiecka, and Slawomir T. Wierzchon, editors, Language Processing and Intelligent Information Systems -20th International Conference, IIS 2013, Warsaw, Poland, June 17-18, 2013. Proceedings, volume 7912 of Lecture Notes in Computer Science, pages 16-21. Springer.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A* CCG Parsing with a Supertag-factored Model", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "990--1000", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis and Mark Steedman. 2014. A* CCG Parsing with a Supertag-factored Model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990-1000. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Joint Dependency Parsing and Multiword Expression Tokenisation", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [], |
| "last": "Deulofeu", |
| "suffix": "" |
| }, |
| { |
| "first": "Andr\u00e9", |
| "middle": [], |
| "last": "Valli", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'15)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Nasr, Carlos Ramisch, Jos\u00e9 Deulofeu, and Andr\u00e9 Valli. 2015. Joint Dependency Parsing and Multiword Expression Tokenisation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'15).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Multiword units in syntactic parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of MEMURA 2004 -Methodologies and Evaluation of Multiword Units in Real-World Applications, Workshop at LREC 2004", |
| "volume": "", |
| "issue": "", |
| "pages": "39--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre and Jens Nilsson. 2004. Multiword units in syntactic parsing. In Proceedings of MEMURA 2004 - Methodologies and Evaluation of Multiword Units in Real-World Applications, Workshop at LREC 2004, May 25, 2004, Lisbon, Portugal, pages 39-46, Lisbon, Portugal, May.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "K-best A* parsing", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Pauls", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "958--966", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, and Janyce Wiebe, editors, ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 958-966. The Association for Computer Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Extended phraseological information in a valence dictionary for NLP applications", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Przepi\u00f3rkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "El\u017cbieta", |
| "middle": [], |
| "last": "Hajnicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Agnieszka", |
| "middle": [], |
| "last": "Patejuk", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Woli\u0144ski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Workshop on Lexical and Grammatical Resources for Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "83--91", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Przepi\u00f3rkowski, El\u017cbieta Hajnicz, Agnieszka Patejuk, and Marcin Woli\u0144ski. 2014. Extended phraseolog- ical information in a valence dictionary for NLP applications. In Proceedings of the Workshop on Lexical and Grammatical Resources for Language Processing (LG-LP 2014), pages 83-91, Dublin, Ireland. Association for Computational Linguistics and Dublin City University.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A survey of multiword expressions in treebanks", |
| "authors": [ |
| { |
| "first": "Victoria", |
| "middle": [], |
| "last": "Ros\u00e9n", |
| "suffix": "" |
| }, |
| { |
| "first": "Gyri", |
| "middle": [], |
| "last": "Sm\u00f8rdal Losnegaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Koenraad", |
| "middle": [ |
| "De" |
| ], |
| "last": "Smedt", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Bej\u010dek", |
| "suffix": "" |
| }, |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Przepi\u00f3rkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Petya", |
| "middle": [], |
| "last": "Osenova", |
| "suffix": "" |
| }, |
| { |
| "first": "Verginica Barbu", |
| "middle": [], |
| "last": "Mititelu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 14th International Workshop on Treebanks & Linguistic Theories conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victoria Ros\u00e9n, Gyri Sm\u00f8rdal Losnegaard, Koenraad De Smedt, Eduard Bej\u010dek, Agata Savary, Adam Przepi\u00f3rkowski, Petya Osenova, and Verginica Barbu Mititelu. 2015. A survey of multiword expressions in treebanks. In Proceedings of the 14th International Workshop on Treebanks & Linguistic Theories conference, Warsaw, Poland, December.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Towards the annotation of named entities in the National Corpus of Polish", |
| "authors": [ |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Waszczuk", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Przepi\u00f3rkowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agata Savary, Jakub Waszczuk, and Adam Przepi\u00f3rkowski. 2010. Towards the annotation of named entities in the National Corpus of Polish. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "SEJFEK -a Lexicon and a Shallow Grammar of Polish Economic Multi-Word Units", |
| "authors": [ |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| }, |
| { |
| "first": "Bartosz", |
| "middle": [], |
| "last": "Zaborowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandra", |
| "middle": [], |
| "last": "Krawczyk-Wieczorek", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Makowiecki", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon", |
| "volume": "", |
| "issue": "", |
| "pages": "195--214", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agata Savary, Bartosz Zaborowski, Aleksandra Krawczyk-Wieczorek, and Filip Makowiecki. 2012. SEJFEK -a Lexicon and a Shallow Grammar of Polish Economic Multi-Word Units. In Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon, pages 195-214, Mumbai, India, December. The COLING 2012 Organizing Committee.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Principles and implementation of deductive parsing", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [ |
| "M" |
| ], |
| "last": "Shieber", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Schabes", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C", |
| "N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "The Journal of logic programming", |
| "volume": "24", |
| "issue": "1", |
| "pages": "3--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. The Journal of Logic Programming, 24(1):3-36.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Enhancing practical TAG parsing efficiency by capturing redundancy", |
| "authors": [ |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Waszczuk", |
| "suffix": "" |
| }, |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Parmentier", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 21st International Conference on Implementation and Application of Automata (CIAA 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jakub Waszczuk, Agata Savary, and Yannick Parmentier. 2016. Enhancing practical TAG parsing efficiency by capturing redundancy. In Proceedings of the 21st International Conference on Implementation and Application of Automata (CIAA 2016), Seoul, South Korea, July.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Sentence analysis and collocation identification", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Wehrli", |
| "suffix": "" |
| }, |
| { |
| "first": "Violeta", |
| "middle": [], |
| "last": "Seretan", |
| "suffix": "" |
| }, |
| { |
| "first": "Luka", |
| "middle": [], |
| "last": "Nerima", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Workshop on Multiword Expressions: from Theory to Applications (MWE 2010)", |
| "volume": "", |
| "issue": "", |
| "pages": "27--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Wehrli, Violeta Seretan, and Luka Nerima. 2010. Sentence analysis and collocation identification. In Proceedings of the Workshop on Multiword Expressions: from Theory to Applications (MWE 2010), pages 27-35, Beijing, China, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "The relevance of collocations for parsing", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Wehrli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 10th Workshop on Multiword Expressions (MWE)", |
| "volume": "", |
| "issue": "", |
| "pages": "26--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Wehrli. 2014. The relevance of collocations for parsing. In Proceedings of the 10th Workshop on Multiword Expressions (MWE), pages 26-32, Gothenburg, Sweden, April. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Towards a bank of constituent parse trees for Polish", |
| "authors": [ |
| { |
| "first": "Marek", |
| "middle": [], |
| "last": "\u015awidzi\u0144ski", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Woli\u0144ski", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Text, Speech and Dialogue: 13th International Conference, TSD 2010", |
| "volume": "6231", |
| "issue": "", |
| "pages": "197--204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marek \u015awidzi\u0144ski and Marcin Woli\u0144ski. 2010. Towards a bank of constituent parse trees for Polish. In Petr Sojka, Ale\u0161 Hor\u00e1k, Ivan Kope\u010dek, and Karel Pala, editors, Text, Speech and Dialogue: 13th International Conference, TSD 2010, Brno, Czech Republic, volume 6231 of Lecture Notes in Artificial Intelligence, pages 197-204, Heidelberg. Springer-Verlag.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "A toy TAG grammar converted into flat rules" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Hypergraph representing the chart parsing of the substring acid rains with ETs t 1 , t 4 and t 5 fromFig. 2. The lowest-cost path representing the idiomatic interpretation is highlighted in bold." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "In the inside-rule passive item I = (N 5 \u2192 acid\u2022, 0, 1) we have Rest(I) = {rains} ms , Req(N 5 \u2192 acid\u2022) = {rains} ms , thus h(I) = W (t 5 ) = 1. Finally, in the active item I = (NP \u2192 N 5 \u2022 N 6 , 0, 1) we have Rest(I) = {rains} ms , super(NP \u2192 N 5 \u2022 N 6 ) = \u2205 mt , and Req(I) = {rains} ms , thus h(I) = W (t 5 ) = 1." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "(a) Average number of hyperarcs explored depending on the parsing strategy (for clarity, using only sentences of length < 20), (b) Average % of hyperarcs explored with the PM+ST strategy, using the ST strategy as a reference, and (c) Average % of hyperarcs explored depending on the type of MWEs." |
| } |
| } |
| } |
| } |