| { |
| "paper_id": "N13-1024", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:41:05.529433Z" |
| }, |
| "title": "Enforcing Subcategorization Constraints in a Parser Using Sub-parses Recombining", |
| "authors": [ |
| { |
| "first": "Seyed", |
| "middle": [ |
| "Abolghasem" |
| ], |
| "last": "Mirroshandel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Laboratoire d'Informatique Fondamentale de Marseille-CNRS -UMR 7279", |
| "institution": "Universit\u00e9 Aix-Marseille", |
| "location": { |
| "settlement": "Marseille, Alpage", |
| "country": "France" |
| } |
| }, |
| "email": "ghasem.mirroshandel@lif.univ-mrs.fr" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Laboratoire d'Informatique Fondamentale de Marseille-CNRS -UMR 7279", |
| "institution": "Universit\u00e9 Aix-Marseille", |
| "location": { |
| "settlement": "Marseille, Alpage", |
| "country": "France" |
| } |
| }, |
| "email": "alexis.nasr@lif.univ-mrs.fr" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "benoit.sagot@inria.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Treebanks are not large enough to adequately model the subcategorization frames of predicative lexemes, which are an important source of lexico-syntactic constraints for parsing. As a consequence, parsers trained on such treebanks usually make mistakes when selecting the arguments of predicative lexemes. In this paper, we propose an original way to correct subcategorization errors by combining sub-parses of a sentence S that appear in the list of the n-best parses of S. The subcategorization information comes from three different resources: the first is extracted from a treebank, the second is computed from a large corpus, and the third is an existing syntactic lexicon. Experiments on the French Treebank showed a 15.24% reduction of erroneous subcategorization frame (SF) selections for verbs, as well as a 4% relative decrease of the Labeled Accuracy Score error rate of the state-of-the-art parser on this treebank.", |
| "pdf_parse": { |
| "paper_id": "N13-1024", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Treebanks are not large enough to adequately model the subcategorization frames of predicative lexemes, which are an important source of lexico-syntactic constraints for parsing. As a consequence, parsers trained on such treebanks usually make mistakes when selecting the arguments of predicative lexemes. In this paper, we propose an original way to correct subcategorization errors by combining sub-parses of a sentence S that appear in the list of the n-best parses of S. The subcategorization information comes from three different resources: the first is extracted from a treebank, the second is computed from a large corpus, and the third is an existing syntactic lexicon. Experiments on the French Treebank showed a 15.24% reduction of erroneous subcategorization frame (SF) selections for verbs, as well as a 4% relative decrease of the Labeled Accuracy Score error rate of the state-of-the-art parser on this treebank.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Automatic syntactic parsing of natural languages has witnessed many important changes in the last fifteen years. Among these changes, two have modified the nature of the task itself. The first one is the availability of treebanks such as the Penn Treebank (Marcus et al., 1993) or the French Treebank (Abeill\u00e9 et al., 2003) , which have been used in the parsing community to train stochastic parsers, such as (Collins, 1997; Petrov and Klein, 2008) . Such work remained rooted in the classical language theoretic tradition of parsing, generally based on variants of generative context free grammars. The second change occurred with the use of discriminative machine learning techniques, first to rerank the output of a stochastic parser (Collins, 2000; Charniak and Johnson, 2005) and then in the parser itself (Ratnaparkhi, 1999; Nivre et al., 2007; McDonald et al., 2005a) . Such parsers clearly depart from classical parsers in the sense that they no longer rely on a generative grammar: given a sentence S, all possible parses for S 1 are considered as possible parses of S. A parse tree is seen as a set of lexico-syntactic features which are associated with weights. The score of a parse is computed as the sum of the weights of its features.", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 277, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 301, |
| "end": 323, |
| "text": "(Abeill\u00e9 et al., 2003)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 409, |
| "end": 424, |
| "text": "(Collins, 1997;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 425, |
| "end": 448, |
| "text": "Petrov and Klein, 2008)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 738, |
| "end": 753, |
| "text": "(Collins, 2000;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 754, |
| "end": 781, |
| "text": "Charniak and Johnson, 2005)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 812, |
| "end": 831, |
| "text": "(Ratnaparkhi, 1999;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 832, |
| "end": 851, |
| "text": "Nivre et al., 2007;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 852, |
| "end": 875, |
| "text": "McDonald et al., 2005a)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This new generation of parsers reaches high accuracy but has its own limitations. We will focus in this paper on one weakness of such parsers: their inability to properly take into account subcategorization frames (SF) of predicative lexemes 2 , an important source of lexico-syntactic constraints. The proper treatment of SF actually faces two kinds of problems: (1) the acquisition of correct SF for verbs and (2) the integration of such constraints into the parser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The first problem is a consequence of the use of treebanks for training parsers. Such treebanks are composed of a few thousand sentences, and only a small subset of the acceptable SF for a verb actually 1 Another important aspect of the new parsing paradigm is the use of dependency trees as a means to represent syntactic structure. In dependency syntax, the number of possible syntactic trees associated with a sentence is bounded, and only depends on the length of the sentence, which is not the case with syntagmatic derivation trees. 2 We will concentrate in this paper on verbal SF.", |
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 201, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 534, |
| "end": 535, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "occur in the treebank.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The second problem is a consequence of the parsing models. For algorithmic complexity as well as data sparseness reasons, the parser only considers lexico-syntactic configurations of a limited domain of locality (in the parser used in the current work, this domain of locality is limited to configurations made of one or two dependencies). As described in more detail in section 2, SF often exceed in scope such domains of locality and are therefore not easy to integrate in the parser. A popular method for introducing higher-order constraints in a parser consists in reranking the n-best outputs of a parser, as in (Collins, 2000; Charniak and Johnson, 2005) . The reranker's search space is restricted by the output of the parser, and higher-order features can be used. One drawback of the reranking approach is that the correct SF for the predicates of a sentence can actually appear in different parse trees. Selecting complete trees can therefore lead to sub-optimal solutions. The method proposed in this paper merges parts of different trees that appear in an n-best list in order to build a new parse.", |
| "cite_spans": [ |
| { |
| "start": 613, |
| "end": 628, |
| "text": "(Collins, 2000;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 629, |
| "end": 656, |
| "text": "Charniak and Johnson, 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Taking into account SF in a parser was a major issue in the design of syntactic formalisms in the eighties and nineties. Unification grammars, such as Lexical Functional Grammars (Bresnan, 1982) , Generalized Phrase Structure Grammars (Gazdar et al., 1985) and Head-driven Phrase Structure Grammars (Pollard and Sag, 1994) , made SF part of the grammar. Tree Adjoining Grammars (Joshi et al., 1975) proposed to extend the domain of locality of Context Free Grammars partly in order to be able to represent SF in a generative grammar. More recently, (Collins, 1997) proposed a way to introduce SF in a probabilistic context free grammar and (Arun and Keller, 2005) used the same technique for French. (Carroll et al., 1998) used subcategorization probabilities for ranking trees generated by a unification-based phrasal grammar, and (Zeman, 2002) showed that using frame frequency in a dependency parser can lead to a significant improvement of the performance of the parser.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 199, |
| "text": "(Bresnan, 1982)", |
| "ref_id": null |
| }, |
| { |
| "start": 240, |
| "end": 261, |
| "text": "(Gazdar et al., 1985)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 304, |
| "end": 327, |
| "text": "(Pollard and Sag, 1994)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 383, |
| "end": 403, |
| "text": "(Joshi et al., 1975)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 554, |
| "end": 569, |
| "text": "(Collins, 1997)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 645, |
| "end": 668, |
| "text": "(Arun and Keller, 2005)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 705, |
| "end": 727, |
| "text": "(Carroll et al., 1998)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 836, |
| "end": 849, |
| "text": "(Zeman, 2002)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The main novelties of the work presented here are (1) the way a new parse is built by combining sub-parses that appear in the n-best parse list and (2) the use of three very different resources that list the possible SF for verbs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper is organized as follows: in section 2, we briefly describe the parsing model used in this work and give accuracy results on a French corpus. Section 3 describes the three different resources that we have been using to correct SF errors made by the parser and gives coverage results for these resources on a development corpus. Section 4 proposes three different ways to integrate the resources described in section 3 into the parser and gives accuracy results. Section 5 concludes the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The parser used in this work is the second-order graph-based parser (McDonald et al., 2005b) implementation of (Bohnet, 2010) . The parser was trained on the French Treebank (Abeill\u00e9 et al., 2003) , which was transformed into dependency trees by (Candito et al., 2009) . The parser gave state-of-the-art results for the parsing of French, reported in table 2. Table 2 reports the standard Labeled Accuracy Score (LAS) and Unlabeled Accuracy Score (UAS), which are the ratios of correct labeled (for LAS) or unlabeled (for UAS) dependencies in a sentence. We also defined a more specific measure: the SF Accuracy Score (SAS), which is the ratio of verb occurrences that have been paired with the correct SF by the parser. We have introduced this quantity in order to measure more accurately the impact of the methods described in this paper on the selection of an SF for the verbs of a sentence. We have chosen a second-order graph-based parser in this work for two reasons. The first is that it is the parsing model that obtained the best results on the French Treebank. The second is that it allows us to impose structural constraints on the solution of the parser, as described in (Mirroshandel and Nasr, 2011) , a feature that will prove precious when enforcing SF in the parser output.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 92, |
| "text": "(McDonald et al., 2005b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 111, |
| "end": 125, |
| "text": "(Bohnet, 2010)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 174, |
| "end": 196, |
| "text": "(Abeill\u00e9 et al., 2003)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 244, |
| "end": 265, |
| "text": "(Candito et al., 2009", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1163, |
| "end": 1192, |
| "text": "(Mirroshandel and Nasr, 2011)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 351, |
| "end": 358, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Parser", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Three resources have been used in this work in order to correct SF errors. The first one has been extracted from a treebank, the second from an automatically parsed corpus that is several orders of magnitude bigger than the treebank, and the third from an existing lexico-syntactic resource. The three resources are respectively described in sections 3.2, 3.3 and 3.4. Before describing the resources, we describe in detail, in section 3.1, our definition of SF. In section 3.5, we evaluate the coverage of these resources on the DEV corpus. Coverage is an important characteristic of a resource: if the parser makes an SF error and the correct SF for the verb in that sentence does not appear in the resource, it will be impossible to correct the error.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Resources", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this work, an SF is defined as a couple (G, L) where G is the part of speech tag of the element that licenses the SF. This part of speech tag can either be a verb in the infinitive form (VINF), a past participle (VPP), a finite tense verb (V) or a present participle (VPR). L is a set of couples (f, c) where f is a syntactic function tag chosen from a set F and c is a part of speech tag chosen from a set C. A couple (f, c) indicates that function f can be realized as part of speech tag c. Sets F and C are respectively displayed in the top and bottom tables of figure 1. An anchored SF (ASF) is a couple (v, S) where v is a verb lemma and S is an SF, as described above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A resource is defined as a collection of ASF (v, S), each associated with a count c, representing the fact that verb v has been seen with SF S c times. In the case of the resource extracted from an existing lexicon (section 3.4), the notion of count is not applicable and we will consider that it is always equal to one.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Below is an example of three ASF for the French verb donner (to give). The first one is a transitive SF where both the subject and the object are realized as nouns, as in Jean donne un livre (Jean gives a book). The second one is ditransitive: it has both a direct object and an indirect one introduced by the preposition \u00e0, as in Jean donne un livre \u00e0 Marie (Jean gives a book to Marie). The third one corresponds to a passive form, as in le livre est donn\u00e9 \u00e0 Marie par Jean (The book is given to Marie by Jean).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(donner,(V,(suj,N),(obj,N))) (donner,(V,(suj,N),(obj,N),(a_obj,N))) (donner,(VPP,(suj,N),(aux_pass,V), (a_obj,N),(p_obj,N)))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "One can note that when an argument corresponds to an indirect dependent of the verb (introduced either by a preposition or a subordinating conjunction), we do not represent in the SF the category of the element that introduces the argument, but the category of the argument itself (a noun or a verb).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Two important choices have to be made when defining SF. The first one concerns the dependents of the predicative element that are in the SF (argument/adjunct distinction) and the second is the level of abstraction at which SF are defined.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In our case, the first choice is constrained by the treebank annotation guidelines. The FTB distinguishes seven syntactic functions which can be considered as arguments of a verb. They are listed in the top table of figure 1. Most of them are straightforward and need no explanation. Something has to be said, though, about the syntactic function P OBJ, which is used to model arguments of the verb introduced by a preposition that is neither \u00e0 nor de, such as the agent in the passive form, which is introduced by the preposition par.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We have added in the SF two elements that do not correspond to arguments of the verb: the reflexive pronoun, and the passive auxiliary. The reason for adding these elements to the SF is that their presence influences the presence or absence of some arguments of the verb, and therefore the SF.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The second important choice that must be made when defining SF is the level of abstraction, or, in other words, how much the SF abstracts away from its realization in the sentence. In our case, we have used two ways to abstract away from the surface realization of the SF. The first one is the factoring of several part of speech tags. We have factored pronouns, common nouns and proper nouns into a single category N. We have not gathered verbs in different modes into one category, since the mode of the verb influences its syntactic behavior and hence its SF. The second means of abstraction we have used is the absence of linear order between the arguments. Taking into account argument order increases the number of SF and, hence, data sparseness, without adding much information for selecting the correct SF; this is why we have decided to ignore it. In our second example above, each of the three arguments can be realized as one out of eight parts of speech that correspond to the part of speech tag N, and the 24 possible orderings are represented as one canonical ordering. This SF therefore corresponds to 12,288 possible realizations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subcat Frames Description", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This resource has been extracted from the TRAIN corpus. At first glance, it may seem strange to extract data from the corpus that has been used for training our parser. The reason is that, as seen in section 1, SF are not directly modeled by the parser, which only takes into account subtrees made of, at most, two dependencies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Treebank Extracted Subcat Frames", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The extraction procedure of SF from the treebank is straightforward: the tree of every sentence is visited and, for every verb of the sentence, its daughters are visited and, depending on whether they are considered as arguments of the verb (with respect to the conventions of section 3.1), they are added to the SF. The number of different verbs extracted, as well as the number of different SF and the average number of SF per verb, are displayed in table 3. Column T (for Train) is the one that we are interested in here. The extracted resource can directly be compared with the TREELEX resource (Kupsc and Abeill\u00e9, 2008) , which has been extracted from the same treebank. The result that we obtain is different, due to the fact that (Kupsc and Abeill\u00e9, 2008) have a more abstract definition of SF. As a consequence, they define a smaller number of SF: 58 instead of 666 in our case. The smaller number of SF yields a smaller average number of SF per verb: 1.72 instead of 4.83 in our case.", |
| "cite_spans": [ |
| { |
| "start": 598, |
| "end": 623, |
| "text": "(Kupsc and Abeill\u00e9, 2008)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 736, |
| "end": 761, |
| "text": "(Kupsc and Abeill\u00e9, 2008)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Treebank Extracted Subcat Frames", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The extraction procedure described above has been used to extract ASF from an automatically parsed corpus. The corpus is actually a collection of three corpora of slightly different genres. The first one is a collection of news reports of the French press agency Agence France Presse, the second is a collection of newspaper articles from a local French newspaper, l'Est R\u00e9publicain, and the third is a collection of articles from the French Wikipedia. The sizes of the different corpora are detailed in table 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatically computed Subcat Frames", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The corpus was first POS tagged with the MELT tagger (Denis and Sagot, 2010) , lemmatized with the MACAON tool suite and parsed in order to get the best parse for every sentence. Then the ASF have been extracted.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 76, |
| "text": "(Denis and Sagot, 2010)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatically computed Subcat Frames", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The number of verbs, the number of SF and the average number of SF per verb are reported in table 3, in column A 0 (A stands for Automatic). As one can see, the numbers of verbs and SF are unrealistic. This is due to the fact that the data that we extract SF from are noisy: they consist of automatically produced syntactic trees which contain errors (recall that the LAS on the DEV corpus is 88.02%). There are two main sources of errors in the parsed data: the pre-processing chain (tokenization, part of speech tagging and lemmatization), which can consider as a verb a word that is not one, and, of course, parsing errors, which tend to create spurious SF. In order to fight against this noise, we have used a simple thresholding: we only collect ASF that occur more often than a threshold i. The result of the thresholding appears in columns A 5 and A 10 , where the subscript is the value of the threshold. As expected, both the number of verbs and the number of SF decrease sharply when increasing the value of the threshold. Extracting SF for verbs from raw data has been an active direction of research for a long time, dating back at least to the work of (Brent, 1991) and (Manning, 1993) . More recently, (Messiant et al., 2008) proposed such a system for French verbs. The method we use for extracting SF is not novel with respect to such work. Our aim was not to devise new extraction techniques but merely to evaluate the resource produced by such techniques for statistical parsing.", |
| "cite_spans": [ |
| { |
| "start": 1124, |
| "end": 1137, |
| "text": "(Brent, 1991)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1142, |
| "end": 1157, |
| "text": "(Manning, 1993)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1174, |
| "end": 1197, |
| "text": "(Messiant et al., 2008)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatically computed Subcat Frames", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The third resource that we have used is the Lefff (Lexique des formes fl\u00e9chies du fran\u00e7ais - Lexicon of French inflected forms), a large-coverage syntactic lexicon for French (Sagot, 2010) . The Lefff was developed in a semi-automatic way: automatic tools were used together with manual work. The latest version of the Lefff contains 10,618 verbal entries for 7,835 distinct verbal lemmas (the Lefff covers all categories, but only verbal entries are used in this work).", |
| "cite_spans": [ |
| { |
| "start": 173, |
| "end": 186, |
| "text": "(Sagot, 2010)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using an existing resource", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "A subcategorization frame consists of a list of syntactic functions, using an inventory slightly more fine-grained than that of the French Treebank, and, for each of them, a list of possible realizations (e.g., noun phrase, infinitive clause, or null realization if the syntactic function is optional).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using an existing resource", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "For each verbal lemma, we extracted all subcategorization frames for each of the four verbal part-of-speech tags (V, VINF, VPR, VPP), thus creating an inventory of SFs in the same sense and format as described in Section 3.1. Note that such SFs do not contain alternatives concerning the way each syntactic argument is realized or not: this extraction process includes a de-factorization step. Its output, hereafter L, contains 801,246 distinct (lemma, SF) pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using an existing resource", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In order to be able to correct SF errors, the three resources described above must possess two important characteristics: high coverage and high accuracy. Coverage measures the presence, in the resource, of the correct SF of a verb, in a given sentence. Accuracy measures the ability of a resource to select the correct SF for a verb in a given context when several ones are possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "We will give in this section coverage results, computed on the DEV corpus. Accuracy will be described and computed in section 4. The reason why the two measures are not described together is that coverage can be computed on a reference corpus, while accuracy must be computed on the output of a parser, since it is the parser that will propose different SF for a verb in a given context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "Given a reference corpus C and a resource R, two coverage measures have been computed: lexical coverage, which measures the ratio of verbs of C that appear in R, and syntactic coverage, which measures the ratio of ASF of C that appear in R. Two variants of each measure are computed: on types and on occurrences. The values of these measures computed on the DEV corpus are summarized in table 5. The lowest coverage is obtained by the T resource, which does not come as a surprise since it is computed on a rather small number of sentences. It is also interesting to note that the lexical coverage of A does not decrease much when increasing the threshold, while the size of the resource decreases dramatically (as shown in table 3). This validates the hypothesis that the resource is very noisy and that a simple threshold on the occurrences of ASF is a reasonable means to fight against noise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "Syntactic coverage is, as expected, lower than lexical coverage. The best results are obtained by A 0 : 95.78 on types and 97.13 on occurrences. Thresholding on the occurrences of anchored SF has a bigger impact on syntactic coverage than it had on lexical coverage. A threshold of 10 yields a coverage of 88.84 on types and 92.39 on occurrences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "As already mentioned in section 1, SF usually exceed the domain of locality of the structures that are directly modeled by the parser. It is therefore difficult to integrate SF directly into the model of the parser. In order to circumvent this problem, we have decided to work on the n-best output of the parser: we consider that a verb v, in a given sentence S, can be associated with any of the SF that v licenses in one of the n-best trees. The main weakness of this method is that an SF error can be corrected only if the right SF appears in at least one of the n-best parse trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integrating Subcat Frames in the Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In order to estimate an upper bound of the SAS that such methods can reach (how many SF errors can actually be corrected), we have computed the oracle SAS on the 100-best trees of the DEV corpus (for how many verbs the correct SF appears in at least one of the n-best parse trees). The oracle score is equal to 95.16, which means that for 95.16% of the verb occurrences of the DEV corpus, the correct SF appears somewhere in the 100-best trees. 95.16 is therefore the best SAS that we can reach. Recall that the baseline SAS is equal to 79.88%; the room for progress is therefore 15.28% absolute.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integrating Subcat Frames in the Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Three experiments are described below. In the first one, section 4.1, a simple technique, called Post Processing, is used. Section 4.2 describes a second technique, called Double Parsing, which is a refinement of Post Processing. Both sections 4.1 and 4.2 are based on single resources. Section 4.3 proposes a simple way to combine the different resources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integrating Subcat Frames in the Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The post processing method (PP) is the simplest one that we have tested. It takes as input the different ASF that occur in the n-best output of the parser as well as a resource R. Given a sentence, let's note T 1 . . . T n the trees that appear in the n-best output of the parser, in decreasing order of their score. For every verb v of the sentence, we note S(v) the set of all the SF associated to v that appear in the trees T 1 . . . T n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Given a verb v and a SF s, we define the following functions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "C(v, s) is the number of occurrences of the ASF", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(v, s) in the trees T 1 . . . T n . F(v) is the SF associated to v in T 1 C R (v, s) the number of occurrences of the ASF (v, s) in the resource R.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We define a selection function as a function that selects a SF for a given verb in a given sentence. A selection function has to take into account the information given by the resource (whether an SF is acceptable/frequent for a given verb) as well as the information given by the parser (whether the parser has a strong preference to associate a given SF to a given verb).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In our experiments, we have tested two simple selection functions. \u03d5 R which selects the first SF s \u2208 S(v), such that C R (v, s) > 0 when traversing the trees T 1 . . . T n in the decreasing order of score (best tree first).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The second function, \u03c8 R (v) compares the most frequent SF for v in the resource R with the SF of the first parse. If the ratio of the number of occurrences in the n-best of the former and the latter is above a threshold \u03b1, the former is selected. More formally:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u03c8 R (v) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3\u015d = arg max s\u2208S(v) C R (v, s) if C(v,\u015d) C(v,F (v)) > \u03b1 F(v) otherwise", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
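Under these definitions, the two selection functions can be sketched as below; this is a minimal illustration, and the count dictionaries C and C_R as well as the ordered SF list are hypothetical data structures, not the authors' implementation:

```python
def phi(sfs_by_tree_order, C_R):
    """phi_R: first SF, traversing T_1 ... T_n best tree first,
    that is attested (count > 0) in the resource R."""
    for s in sfs_by_tree_order:
        if C_R.get(s, 0) > 0:
            return s
    return sfs_by_tree_order[0]  # nothing attested: keep the first parse's SF

def psi(S_v, C, C_R, first_sf, alpha):
    """psi_R: resource's most frequent SF for v, kept only if its
    n-best count exceeds alpha times that of the first parse's SF."""
    s_hat = max(S_v, key=lambda s: C_R.get(s, 0))
    return s_hat if C[s_hat] / C[first_sf] > alpha else first_sf

# "obj" appears in 6 of the n-best trees, "a-obj" in 3; T_1 chose "obj".
C = {"obj": 6, "a-obj": 3}       # counts over the n-best trees
C_R = {"a-obj": 40, "obj": 10}   # counts in the resource
print(psi(C, C, C_R, "obj", alpha=0.4))  # prints a-obj (ratio 3/6 > 0.4)
print(psi(C, C, C_R, "obj", alpha=2.0))  # prints obj
```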
| { |
| "text": "The coefficient \u03b1 has been optimized on DEV corpus. Its value is equal to 2.5 for the Automatic resource, 2 for the Train resource and 1.5 for the Lefff.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The construction of the new solution proceeds as follows: for every verb v of the sentence, a SF is selected with the selection function. It is important to note, at this point, that the SF selected for different verbs of the sentence can pertain to different parse trees. The new solution is built based on tree T 1 . For every verb v, its arguments are potentially modified in agreement with the SF selected by the selection function. There is no guarantee at this point that the solution is well formed. We will return to this problem in section 4.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We have evaluated the PP method with different selection functions on the TEST corpus. The results of applying function \u03c8 R were more successful. As a result we just report the results of this function in table 6. Different levels of thresholding for resource A gave almost the same results, we therefore used A 10 which is the smallest one. The results of table 6 show two interesting facts. First, the SAS is improved, it jumps from 80.84 to 83.11. PP therefore corrects some SF errors made by the parser. It must be noted however that this improvement is much lower than the oracle score. The second interesting fact is the very moderate increase of both LAS and UAS. This is due to the fact that the number of dependencies modified is small with respect to the total number of dependencies. The impact on LAS and UAS is therefore weak.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The best results are obtained with resource T . Although the coverage of T is low, the resource is very close to the train data, this fact probably explains the good results obtained with this resource.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "It is interesting, at this point, to compare our method with a reranking approach. In order to do so, we have compared the upper bound of the number of SF errors that can be corrected when using reranking and our approach. The results of the comparison computed on a list of 100 best trees is reported in table 7 which shows the ratio of subcat frame errors that could be corrected with a reranking approach and the ratio of errors sub-parse recombining could reach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post Processing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "53.9% 58.5% sub-parse recombining 75.5% 76% Table 7 : Correction rate for subcat frames errors with different methods Table 7 shows that combining sub-parses can, in theory, correct a much larger number of wrong SF assignments than reranking.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "DEV TEST reranking", |
| "sec_num": null |
| }, |
| { |
| "text": "The post processing method shows some improvement over the baseline. But it has an important drawback: it can create inconsistent parses. Recall that the parser we are using is based on a second order model. In other words, the score of a dependency depends on some neighboring dependencies. When building a new solution, the post processing method modifies some dependencies independently of their context, which may give birth to very unlikely configurations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Double Parsing", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In order to compute a new optimal parse tree that preserves the modified dependencies, we have used a technique proposed in (Mirroshandel and Nasr, 2011) that modifies the scoring function of the parser in such a way that the dependencies that we want to keep in the parser output get better scores than all competing dependencies. The new solution is therefore the optimal solution that preserves the dependencies modified by the PP method.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 153, |
| "text": "(Mirroshandel and Nasr, 2011)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Double Parsing", |
| "sec_num": "4.2" |
| }, |
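The effect of such a score modification can be illustrated as follows; this is a simplified sketch of the general idea (boosting the kept dependencies above all competitors), not the actual method of Mirroshandel and Nasr (2011), and the dependency encoding is hypothetical:

```python
def enforce_dependencies(scores, keep, bonus=1000.0):
    """Raise the score of every dependency in `keep` so that the
    optimal tree of the second parsing pass necessarily prefers
    them over competing attachments."""
    return {dep: score + (bonus if dep in keep else 0.0)
            for dep, score in scores.items()}

# PP decided that "Mary" is the a-obj of "gave": boost that dependency.
scores = {("gave", "Mary", "a-obj"): 1.0,
          ("gave", "Mary", "obj"): 2.0}
boosted = enforce_dependencies(scores, {("gave", "Mary", "a-obj")})
print(max(boosted, key=boosted.get))  # the kept dependency now wins
```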
| { |
| "text": "The double parsing (DP) method is therefore a three stage method. First, sentence S is parsed, producing the n-best parses. Then, the post processing method is used, modifying the first best parse. Let's note D the set of dependencies that were changed in this process. In the last stage, a new parse is produced, that preserves D. The results of DP on TEST are reported in table 8. SAS did not change with respect to PP, because DP keeps the SF selected by PP. As expected DP does increase LAS and UAS. Recomputing an optimal solution therefore increases the quality of the parses. Table 8 also shows that the three resources get almost the same LAS and UAS although SAS is better for resource T.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 583, |
| "end": 590, |
| "text": "Table 8", |
| "ref_id": "TABREF14" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Double Parsing", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Due to the different generation techniques of our three resources, another direction of research is combining them. We did different experiments concerning all possible combination of resources: A and L (AL), T and L (TL), T and A (TA), and all tree (TAL) resources. The results of these combinations for PP and DP methods are shown in tables 9 and 10, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Resources", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The resource are combined in a back-off schema: we search for a candidate ASF in a first resource. If it is found, the search stops. Otherwise, the next resource(s) are probed. One question that arises is: which sequence is the optimal one for combining the resources. To answer this question, we did several experiments on DEV set. Our experiments have shown that it is better to search T resource, then A, and, eventually, L. The results of this combining method, using PP are reported in table 9. The best results are obtained for the TL combination. The SAS jumps from 83.11 to 83.76. As it was the case with single resources, the LAS and UAS increase is moderate. With DP (table 9), the order of resource combination is exactly the same as with PP. As was the case with single resources, DP has a positive, but moderate, impact on LAS and UAS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Resources", |
| "sec_num": "4.3" |
| }, |
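The back-off lookup can be sketched as follows; an illustrative sketch in which the three resources are modeled as hypothetical count dictionaries keyed by (verb, SF) pairs:

```python
def backoff_count(verb, sf, resources):
    """Probe the resources in order (T, then A, then L) and return
    the count from the first resource that attests the ASF."""
    for resource in resources:
        count = resource.get((verb, sf), 0)
        if count > 0:
            return count  # found: stop the search here
    return 0  # the ASF is unknown to every resource

# T does not know the frame, so the search backs off to A.
T, A, L = {}, {("manger", "obj"): 12}, {("manger", "obj"): 1}
print(backoff_count("manger", "obj", [T, A, L]))  # prints 12
```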
| { |
| "text": "The results of tables 9 and 10 do not show considerable improvement over single resources. This might be due to the large intersection between our resources. In other words, they do not have complementary information, and their combination will not Table 10 : LAS and UAS on TEST using DP with resource combination introduce much information. Another possible reason for this result is the combination technique used. More sophisticated techniques might yield better results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 257, |
| "text": "Table 10", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Combining Resources", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Subcategorization frames for verbs constitute a rich source of lexico-syntactic information which is hard to integrate in graph based parsers. In this paper, we have used three different resources for subcategorization frames. These resources are from different origins with various characteristics. We have proposed two different methods to introduce the useful information from these resources in a second order model parser. We have conducted different experiments on French Treebank that showed a 15.24% reduction of erroneous SF selections for verbs. Although encouraging, there is still plenty of room for better results since the oracle score for 100 best parses is equal to 95.16% SAS and we reached 83.76%. Future work will concentrate on more elaborate selection functions as well as more sophisticated ways to combine the different resources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been funded by the French Agence Nationale pour la Recherche, through the project EDYLEX (ANR-08-CORD-009).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Building a treebank for french", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Abeill\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Cl\u00e9ment", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Toussenel", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Abeill\u00e9, L. Cl\u00e9ment, and F. Toussenel. 2003. Building a treebank for french. In Anne Abeill\u00e9, editor, Tree- banks. Kluwer, Dordrecht.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Lexicalization in crosslinguistic probabilistic parsing: The case of french", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Arun", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "306--313", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Arun and F. Keller. 2005. Lexicalization in crosslin- guistic probabilistic parsing: The case of french. In Proceedings of the 43rd Annual Meeting on Associ- ation for Computational Linguistics, pages 306-313. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Very high accuracy and fast dependency parsing is not a contradiction", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "89--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Bohnet. 2010. Very high accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of ACL, pages 89-97.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Automatic acquisition of subcategorization frames from untagged text", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Brent", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Brent. 1991. Automatic acquisition of subcate- gorization frames from untagged text. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The Mental Representation of Grammatical Relations", |
| "authors": [], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joan Bresnan, editor. 1982. The Mental Representation of Grammatical Relations. MIT Press.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Analyse syntaxique du fran\u00e7ais : des constituants aux d\u00e9pendances", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Candito", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Crabb\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Gu\u00e9rin", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Traitement Automatique des Langues Naturelles", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Candito, B. Crabb\u00e9, P. Denis, and F. Gu\u00e9rin. 2009. Analyse syntaxique du fran\u00e7ais : des constituants aux d\u00e9pendances. In Proceedings of Traitement Automa- tique des Langues Naturelles.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Can subcategorisation probabilities help a statistical parser?", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Minnen", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Briscoe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Carroll, G. Minnen, and T. Briscoe. 1998. Can sub- categorisation probabilities help a statistical parser? Arxiv preprint cmp-lg/9806013.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Coarseto-Fine n-Best Parsing and MaxEnt Discriminative Reranking", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse- to-Fine n-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Three Generative, Lexicalised Models for Statistical Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 35th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1997. Three Generative, Lexicalised Models for Statistical Parsing. In Proceedings of the 35th Annual Meeting of the ACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Discriminative Reranking for Natural Language Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2000. Discriminative Reranking for Natural Language Parsing. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Exploitation d'une ressource lexicale pour la construction d'un\u00e9tiqueteur morphosyntaxique\u00e9tat-de-l'art du fran\u00e7ais", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of Traitement Automatique des Langues Naturelles", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Denis and B. Sagot. 2010. Exploitation d'une ressource lexicale pour la construction d'un\u00e9tiqueteur morphosyntaxique\u00e9tat-de-l'art du fran\u00e7ais. In Pro- ceedings of Traitement Automatique des Langues Na- turelles.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Generalized Phrase Structure Grammar", |
| "authors": [ |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Gazdar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ewan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "K" |
| ], |
| "last": "Pullum", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerald Gazdar, Ewan Klein, Geoffrey K. Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Gram- mar. Harvard University Press.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Tree adjunct grammars", |
| "authors": [ |
| { |
| "first": "Aravind", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Leon", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Takahashi", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Journal of Computer and System Sciences", |
| "volume": "10", |
| "issue": "", |
| "pages": "136--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aravind Joshi, Leon Levy, and M Takahashi. 1975. Tree adjunct grammars. Journal of Computer and System Sciences, 10:136-163.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Treelex: A subcategorisation lexicon for french verbs", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Kupsc", |
| "suffix": "" |
| }, |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Abeill\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the First International Conference on Global Interoperability for Language Resources", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anna Kupsc and Anne Abeill\u00e9. 2008. Treelex: A subcat- egorisation lexicon for french verbs. In Proceedings of the First International Conference on Global Interop- erability for Language Resources.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Automatic acquisition of a large subcategorization dictionary from corpora", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher Manning. 1993. Automatic acquisition of a large subcategorization dictionary from corpora. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Building a large annotated corpus of english: The penn treebank", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M.P. Marcus, M.A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of en- glish: The penn treebank. Computational linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Online large-margin training of dependency parsers", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "91--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005a. On- line large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Asso- ciation for Computational Linguistics, pages 91-98.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Non-projective dependency parsing using spanning tree algorithms", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Ribarov", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of HLT-EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "523--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. McDonald, F. Pereira, K. Ribarov, and J. Haji\u010d. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT-EMNLP, pages 523-530.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Lexschem: A large subcategorization lexicon for french verbs", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Messiant", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Poibeau", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Messiant, A. Korhonen, T. Poibeau, et al. 2008. Lexschem: A large subcategorization lexicon for french verbs. In Proceedings of the Language Re- sources and Evaluation Conference.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Active learning for dependency parsing using partially annotated sentences", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Mirroshandel", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of International Conference on Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S.A. Mirroshandel and A. Nasr. 2011. Active learning for dependency parsing using partially annotated sen- tences. In Proceedings of International Conference on Parsing Technologies.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "MACAON: An NLP tool suite for processing word lattices", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "B\u00e9chet", |
| "suffix": "" |
| }, |
| { |
| "first": "J-F", |
| "middle": [], |
| "last": "Rey", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Favre", |
| "suffix": "" |
| }, |
| { |
| "first": "Le", |
| "middle": [], |
| "last": "Roux", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Nasr, F. B\u00e9chet, J-F. Rey, B. Favre, and Le Roux J. 2011. MACAON: An NLP tool suite for processing word lattices. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Maltparser: A language-independent system for data-driven dependency parsing", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Chanev", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Eryigit", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kbler", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Marinov", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Marsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Natural Language Engineering", |
| "volume": "13", |
| "issue": "2", |
| "pages": "95--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. Kbler, S. Marinov, and E. Marsi. 2007. Maltparser: A language-independent system for data-driven de- pendency parsing. Natural Language Engineering, 13(2):95-135.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Discriminative Log-Linear Grammars with Latent Variables", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Advances in Neural Information Processing Systems 20 (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "1153--1160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov and Dan Klein. 2008. Discriminative Log- Linear Grammars with Latent Variables. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20 (NIPS), pages 1153-1160, Cambridge, MA. MIT Press.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Head-driven Phrase Structure Grammmar. CSLI Series", |
| "authors": [ |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carl Pollard and Ivan Sag. 1994. Head-driven Phrase Structure Grammmar. CSLI Series. University of Chicago Press.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Learning to parse natural language with maximum entropy models", |
| "authors": [ |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine learning", |
| "volume": "34", |
| "issue": "1", |
| "pages": "151--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine learning, 34(1):151-175.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "The Lefff, a freely available and large-coverage morphological and syntactic lexicon for french", |
| "authors": [ |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)", |
| "volume": "", |
| "issue": "", |
| "pages": "2744--2751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beno\u00eet Sagot. 2010. The Lefff, a freely available and large-coverage morphological and syntactic lexicon for french. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10), pages 2744-2751, Valletta, Malta.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Can subcategorization help a statistical dependency parser?", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 19th international conference on Computational linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Zeman. 2002. Can subcategorization help a statistical dependency parser? In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, pages 1-7. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Syntactic functions of the arguments of the SF", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "Size and decomposition of the French Treebank", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Subcategorization Frame Accuracy, Labeled and Unlabeled Accuracy Score on TEST and DEV.", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "text": "Resources statistics", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "text": "sizes of the corpora used to collect SF", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "text": "Lexical and syntactic coverage of the three resources on DEV. The figures of table 5 show that lexical coverage of the three resources is quite high, ranging from 89.56% to 99.52% when computed on types and from 96.98% to 99.85% when computed on occurrences.", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF12": { |
| "text": "LAS and UAS on TEST using PP", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF14": { |
| "text": "LAS and UAS on TEST using DP", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF15": { |
| "text": "B AL TL TA TAL SAS 80.84 82.12 83.76 83.50 83.50 LAS 88.88 89.03 89.22 89.19 89.19 UAS 90.71 90.79 90.98 90.95 90.95", |
| "content": "<table><tr><td></td><td>B</td><td>AL</td><td>TL</td><td>TA</td><td>TAL</td></tr><tr><td>SAS</td><td>80.84</td><td>82.12</td><td>83.76</td><td>83.50</td><td>83.50</td></tr><tr><td>LAS</td><td>88.88</td><td>89.03</td><td>89.22</td><td>89.19</td><td>89.19</td></tr><tr><td>UAS</td><td>90.71</td><td>90.79</td><td>90.98</td><td>90.95</td><td>90.95</td></tr></table>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF16": { |
| "text": "LAS and UAS on TEST using PP with resource combination", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF17": { |
| "text": "B AL TL TA TAL SAS 80.84 82.12 83.76 83.50 83.50 LAS 88.88 89.22 89.31 89.34 89.34 UAS 90.71 91.02 91.05 91.08 91.09", |
| "content": "<table><tr><td></td><td>B</td><td>AL</td><td>TL</td><td>TA</td><td>TAL</td></tr><tr><td>SAS</td><td>80.84</td><td>82.12</td><td>83.76</td><td>83.50</td><td>83.50</td></tr><tr><td>LAS</td><td>88.88</td><td>89.22</td><td>89.31</td><td>89.34</td><td>89.34</td></tr><tr><td>UAS</td><td>90.71</td><td>91.02</td><td>91.05</td><td>91.08</td><td>91.09</td></tr></table>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |