{
"paper_id": "P06-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:26:28.665629Z"
},
"title": "Error mining in parsing results",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": "",
"affiliation": {},
"email": "benoit.sagot@inria.fr"
},
{
"first": "\u00c9ric",
"middle": [],
"last": "De La Clergerie",
"suffix": "",
"affiliation": {},
"email": "eric.de_la_clergerie@inria.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce an error mining technique for automatically detecting errors in resources that are used in parsing systems. We applied this technique on parsing results produced on several million words by two distinct parsing systems, which share the syntactic lexicon and the pre-parsing processing chain. We were thus able to identify missing and erroneous information in these resources.",
"pdf_parse": {
"paper_id": "P06-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce an error mining technique for automatically detecting errors in resources that are used in parsing systems. We applied this technique on parsing results produced on several million words by two distinct parsing systems, which share the syntactic lexicon and the pre-parsing processing chain. We were thus able to identify missing and erroneous information in these resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language parsing is a hard task, partly because of the complexity and the volume of information that have to be taken into account about words and syntactic constructions. However, it is necessary to have access to such information, stored in resources such as lexica and grammars, and to try and minimize the amount of missing and erroneous information in these resources. To achieve this, the use of these resources at a largescale in parsers is a very promising approach (van Noord, 2004) , and in particular the analysis of situations that lead to a parsing failure: one can learn from one's own mistakes.",
"cite_spans": [
{
"start": 482,
"end": 499,
"text": "(van Noord, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce a probabilistic model that allows to identify forms and form bigrams that may be the source of errors, thanks to a corpus of parsed sentences. In order to facilitate the exploitation of forms and form bigrams detected by the model, and in particular to identify causes of errors, we have developed a visualization environment. The whole system has been tested on parsing results produced for several multi-million-word corpora and with two different parsers for French, namely SXLFG and FRMG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the error mining technique which is the topic of this paper is fully system-and language-independent. It could be applied without any change on parsing results produced by any system working on any language. The only information that is needed is a boolean value for each sentence which indicates if it has been successfully parsed or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea we implemented is inspired from (van Noord, 2004) . In order to identify missing and erroneous information in a parsing system, one can analyze a large corpus and study with statistical tools what differentiates sentences for which parsing succeeded from sentences for which it failed.",
"cite_spans": [
{
"start": 41,
"end": 58,
"text": "(van Noord, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General idea",
"sec_num": "2.1"
},
{
"text": "The simplest application of this idea is to look for forms, called suspicious forms, that are found more frequently in sentences that could not be parsed. This is what van Noord (2004) does, without trying to identify a suspicious form in any sentence whose parsing failed, and thus without taking into account the fact that there is (at least) one cause of error in each unparsable sentence. 1 On the contrary, we will look, in each sentence on which parsing failed, for the form that has the highest probability of being the cause of this failure: it is the main suspect of the sentence. This form may be incorrectly or only partially described in the lexicon, it may take part in constructions that are not described in the grammar, or it may exemplify imperfections of the pre-syntactic processing chain. This idea can be easily extended to sequences of forms, which is what we do by tak-ing form bigrams into account, but also to lemmas (or sequences of lemmas).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General idea",
"sec_num": "2.1"
},
{
"text": "We suppose that the corpus is split in sentences, sentences being segmented in forms. We denote by s i the i-th sentence. We denote by o i,j , (1 \u2264 j \u2264 |s i |) the occurrences of forms that constitute s i , and by F (o i,j ) the corresponding forms. Finally, we call error the function that associates to each sentence s i either 1, if s i 's parsing failed, and 0 if it succeeded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "Let O f be the set of the occurrences of a form f in the corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "O f = {o i,j |F (o i,j ) = f }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "The number of occurrences of f in the corpus is therefore |O f |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "Let us define at first the mean global suspicion rate S, that is the mean probability that a given occurrence of a form be the cause of a parsing failure. We make the assumption that the failure of the parsing of a sentence has a unique cause (here, a unique form. . . ). This assumption, which is not necessarily exactly verified, simplifies the model and leads to good results. If we call occ total the total amount of forms in the corpus, we have then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "S = \u03a3 i error(s i ) occ total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
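{
"text": "As a toy illustration (our own numbers, not from the paper): in a corpus of 4 sentences of 3 forms each, occ_total = 12; if 2 of the sentences are non-parsable, then S = 2/12 \u2248 0.17.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},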
{
"text": "Let f be a form, that occurs as the j-th form of sentence s i , which means that F (o i,j ) = f . Let us assume that s i 's parsing failed: error(s i ) = 1. We call suspicion rate of the j-th form o i,j of sentence s i the probability, denoted by S i,j , that the occurrence o i,j of form form f be the cause of the s i 's parsing failure. If, on the contrary, s i 's parsing succeeded, its occurrences have a suspicion rate that is equal to zero. We then define the mean suspicion rate S f of a form f as the mean of all suspicion rates of its occurrences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "S f = 1 |O f | \u2022 o i,j \u2208O f S i,j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "To compute these rates, we use a fix-point algorithm by iterating a certain amount of times the following computations. Let us assume that we just completed the n-th iteration: we know, for each sentence s i , and for each occurrence o i,j of this sentence, the estimation of its suspicion rate S i,j as computed by the n-th iteration, estimation that is denoted by S (n) i,j . From this estimation, we compute the n + 1-th estimation of the mean suspicion rate of each form f , denoted by S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "(n+1) f : S (n+1) f = 1 |O f | \u2022 o i,j \u2208O f S (n) i,j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "This rate 2 allows us to compute a new estimation of the suspicion rate of all occurrences, by giving each occurrence o_{i,j} of a sentence s_i a suspicion rate S^{(n+1)}_{i,j} that is exactly the estimation S^{(n+1)}_f of the mean suspicion rate of the corresponding form, and by then performing a sentence-level normalization. Thus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "S^{(n+1)}_{i,j} = error(s_i) \u2022 S^{(n+1)}_{F(o_{i,j})} / \u03a3_{1\u2264j'\u2264|s_i|} S^{(n+1)}_{F(o_{i,j'})}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "At this point, the (n+1)-th iteration is completed, and we repeat these computations until convergence to a fixed point. To begin the whole process, we simply set, for an occurrence o_{i,j} of sentence s_i: S^{(0)}_{i,j} = error(s_i)/|s_i|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "This means that for a non-parsable sentence, we start from a baseline where all of its occurrences have an equal probability of being the cause of the failure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "After a few dozens of iterations, we get stabilized estimations of the mean suspicion rate each form, which allows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "\u2022 to identify the forms that most probably cause errors,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "\u2022 for each form f, to identify non-parsable sentences s_i in which an occurrence o_{i,j} \u2208 O_f of f is a main suspect and in which o_{i,j} has a very high suspicion rate among all occurrences of form f.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
{
"text": "2 We also performed experiments in which S_f was estimated by another estimator, namely the smoothed mean suspicion rate, denoted by \u015c^{(n)}_f, which takes into account the number of occurrences of f. Indeed, the confidence we can have in the estimation S^{(n)}_f is lower when the number of occurrences of f is lower; hence the idea of smoothing S^{(n)}_f towards S with a coefficient \u03bb that depends on |O_f|: if |O_f| is high, \u015c^{(n)}_f will be close to S^{(n)}_f; if it is low, it will be closer to S: \u015c^{(n)}_f = \u03bb(|O_f|) \u2022 S^{(n)}_f + (1 \u2212 \u03bb(|O_f|)) \u2022 S. In these experiments, we used the smoothing function \u03bb(|O_f|) = 1 \u2212 e^{\u2212\u03b2|O_f|} with \u03b2 = 0.1. But this model, used with the ranking according to M_f = S_f \u2022 ln |O_f| (see below), leads to results that are very similar to those obtained without smoothing. Therefore, we describe the smoothing-less model, which has the advantage of not using an empirically chosen smoothing function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
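{
"text": "As an illustration, the smoothed estimator of footnote 2 can be sketched as follows (a minimal Python sketch; the function and argument names are our own assumptions, and beta follows the footnote):\n\nimport math\n\ndef smoothed_rate(S_f, occ_f, S_mean, beta=0.1):\n    # lambda(|O_f|) = 1 - e^(-beta * |O_f|): close to 1 for frequent forms,\n    # close to 0 for rare ones, which pulls rare forms towards the mean S.\n    lam = 1.0 - math.exp(-beta * occ_f)\n    return lam * S_f + (1.0 - lam) * S_mean",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},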
{
"text": "We implemented this algorithm as a perl script, with strong optimizations of data structures so as to reduce memory and time usage. In particular, form-level structures are shared between sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},
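{
"text": "To make the fixed-point computation concrete, here is a minimal, illustrative Python re-implementation (the actual implementation is a Perl script; the function and variable names and the default iteration count are our own assumptions):\n\nfrom collections import defaultdict\n\ndef error_mine(sentences, n_iter=50):\n    # sentences: list of (forms, failed) pairs; forms is a list of form\n    # identifiers, failed is 1 if the sentence could not be parsed, else 0.\n    occ = defaultdict(int)  # |O_f|\n    for forms, _ in sentences:\n        for f in forms:\n            occ[f] += 1\n    # Initialization: S^(0)_{i,j} = error(s_i) / |s_i|\n    susp = [[failed / len(forms) for _ in forms] if forms else []\n            for forms, failed in sentences]\n    S_f = {}\n    for _ in range(n_iter):\n        # S^(n+1)_f: mean suspicion rate of each form over its occurrences\n        S_f = defaultdict(float)\n        for (forms, _), rates in zip(sentences, susp):\n            for f, r in zip(forms, rates):\n                S_f[f] += r\n        for f in S_f:\n            S_f[f] /= occ[f]\n        # Sentence-level normalization; parsable sentences keep rate 0\n        for i, (forms, failed) in enumerate(sentences):\n            total = sum(S_f[f] for f in forms)\n            if failed and total > 0:\n                susp[i] = [S_f[f] / total for f in forms]\n    return S_f, susp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Form-level probabilistic model",
"sec_num": "2.2"
},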
{
"text": "This model gives already very good results, as we shall see in section 4. However, it can be extended in different ways, some of which we already implemented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensions of the model",
"sec_num": "2.3"
},
{
"text": "First of all, it is possible not to stick to forms. Indeed, we do not only work on forms, but on couples made out of a form (a lexical entry) and one or several token(s) that correspond to this form in the raw text (a token is a portion of text delimited by spaces or punctuation tokens).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensions of the model",
"sec_num": "2.3"
},
{
"text": "Moreover, one can look for the cause of the failure of the parsing of a sentence not only in the presence of a form in this sentence, but also in the presence of a bigram 3 of forms. To perform this, one just needs to extend the notions of form and occurrence, by saying that a (generalized) form is a unigram or a bigram of forms, and that a (generalized) occurrence is an occurrence of a generalized form, i.e., an occurrence of a unigram or a bigram of forms. The results we present in section 4 includes this extension, as well as the previous one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensions of the model",
"sec_num": "2.3"
},
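{
"text": "As an illustration, generalized occurrences can be produced as follows (a sketch under the definitions above; the helper name is ours, and the rest of the algorithm is unchanged):\n\ndef generalized_forms(forms):\n    # A (generalized) form is a unigram or a bigram of forms; feeding these\n    # generalized occurrences to the same fixed-point algorithm leaves the\n    # model itself unchanged.\n    return list(forms) + [(a, b) for a, b in zip(forms, forms[1:])]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensions of the model",
"sec_num": "2.3"
},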
{
"text": "Another possible generalization would be to take into account facts about the sentence that are not simultaneous (such as form unigrams and form bigrams) but mutually exclusive, and that must therefore be probabilized as well. We have not yet implemented such a mechanism, but it would be very interesting, because it would allow to go beyond forms or n-grams of forms, and to manipulate also lemmas (since a given form has usually several possible lemmas).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extensions of the model",
"sec_num": "2.3"
},
{
"text": "In order to validate our approach, we applied these principles to look for error causes in parsing results given by two deep parsing systems for French, FRMG and SXLFG, on large corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Both parsing systems we used are based on deep non-probabilistic parsers. They share:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "\u2022 the Lefff 2 syntactic lexicon for French , that contains 500,000 entries (representing 400,000 different forms) ; each lexical entry contains morphological information, sub-categorization frames (when relevant), and complementary syntactic information, in particular for verbal forms (controls, attributives, impersonals,. . . ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "\u2022 the SXPipe pre-syntactic processing chain , that converts a raw text in a sequence of DAGs of forms that are present in the Lefff ; SXPipe contains, among other modules, a sentence-level segmenter, a tokenization and spelling-error correction module, named-entities recognizers, and a non-deterministic multi-word identifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "But FRMG and SXLFG use completely different parsers, that rely on different formalisms, on different grammars and on different parser builder. Therefore, the comparison of error mining results on the output of these two systems makes it possible to distinguish errors coming from the Lefff or from SXPipe from those coming to one grammar or the other. Let us describe in more details the characteristics of these two parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "The FRMG parser (Thomasset and Villemonte de la Clergerie, 2005) is based on a compact TAG for French that is automatically generated from a meta-grammar. The compilation and execution of the parser is performed in the framework of the DYALOG system (Villemonte de la .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "The SXLFG parser (Boullier and Sagot, 2005b; Boullier and Sagot, 2005a) is an efficient and robust LFG parser. Parsing is performed in two steps. First, an Earley-like parser builds a shared forest that represents all constituent structures that satisfy the context-free skeleton of the grammar. Then functional structures are built, in one or more bottom-up passes. Parsing efficiency is achieved thanks to several techniques such as compact data representation, systematic use of structure and computation sharing, lazy evaluation and heuristic and almost non-destructive pruning during parsing.",
"cite_spans": [
{
"start": 17,
"end": 44,
"text": "(Boullier and Sagot, 2005b;",
"ref_id": "BIBREF2"
},
{
"start": 45,
"end": 71,
"text": "Boullier and Sagot, 2005a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "Both parsers implement also advanced error recovery and tolerance techniques, but they were Table 1 : General information on corpora and parsing results useless for the experiments described here, since we want only to distinguish sentences that receive a full parse (without any recovery technique) from those that do not.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsers",
"sec_num": "3.1"
},
{
"text": "We parsed with these two systems the following corpora:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "MD corpus : This corpus is made out of 14.5 million words (570,000 sentences) of general journalistic corpus that are articles from the Monde diplomatique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "EASy corpus : This is the 40,000-sentence corpus that has been built for the EASy parsing evaluation campaign for French (Paroubek et al., 2005) . We only used the raw corpus (without taking into account the fact that a manual parse is available for 10% of all sentences). The EASy corpus contains several sub-corpora of varied style: journalistic, literacy, legal, medical, transcription of oral, email, questions, etc.",
"cite_spans": [
{
"start": 121,
"end": 144,
"text": "(Paroubek et al., 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "Both corpora are raw in the sense that no cleaning whatsoever has been performed so as to eliminate some sequences of characters that can not really be considered as sentences. Table 1 gives some general information on these corpora as well as the results we got with both parsing systems. It shall be noticed that both parsers did not parse exactly the same set and the same number of sentences for the MD corpus, and that they do not define in the exactly same way the notion of sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "We developed a visualization tool for the results of the error mining, that allows to examine and annotate them. It has the form of an HTML page that uses dynamic generation methods, in particular javascript. An example is shown on Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results visualization environment",
"sec_num": "3.3"
},
{
"text": "To achieve this, suspicious forms are ranked according to a measure M f that models, for a given form f , the benefit there is to try and correct the (potential) corresponding error in the resources. A user who wants to concentrate on almost certain errors rather than on most frequent ones can visualize suspicious forms ranked according to M f = S f . On the contrary, a user who wants to concentrate on most frequent potential errors, rather than on the confidence that the algorithm has given to errors, can visualize suspicious forms ranked according to 4 M f = S f |O f |. The default choice, which is adopted to produce all tables shown in this paper, is a balance between these two possibilities, and ranks suspicious forms according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results visualization environment",
"sec_num": "3.3"
},
{
"text": "M f = S f \u2022 ln |O f |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results visualization environment",
"sec_num": "3.3"
},
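{
"text": "A minimal Python sketch of the three ranking options (illustrative only, not the tool's actual code; names are ours):\n\nimport math\n\ndef m_sure(s, n):\n    # Concentrate on almost certain errors: M_f = S_f\n    return s\n\ndef m_frequent(s, n):\n    # Concentrate on frequent potential errors: M_f = S_f * |O_f|\n    return s * n\n\ndef m_balanced(s, n):\n    # Default balance used for all tables in this paper: M_f = S_f * ln |O_f|\n    return s * math.log(n)\n\ndef rank(S, occ, measure=m_balanced):\n    # S: form -> mean suspicion rate S_f; occ: form -> |O_f|\n    return sorted(S, key=lambda f: measure(S[f], occ[f]), reverse=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results visualization environment",
"sec_num": "3.3"
},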
{
"text": "The visualization environment allows to browse through (ranked) suspicious forms in a scrolling list on the left part of the page (A). When the suspicious form is associated to a token that is the same as the form, only the form is shown. Otherwise, the token is separated from the form by the symbol \" / \". The right part of the page shows various pieces of information about the currently selected form. After having given its rank according to the ranking measure M f that has been chosen (B), a field is available to add or edit an annotation associated with the suspicious form (D). These annotations, aimed to ease the analysis of the error mining results by linguists and by the developers of parsers and resources (lexica, grammars), are saved in a database (SQLITE). Statistical information is also given about f (E), including its number of occurrences occ f , the number of occurrences of f in non-parsable sentences, the final estimation of its mean suspicion rate S f and the rate err(f ) of non-parsable sentences among those where f appears. This indications are complemented by a brief summary of the iterative process that shows the convergence of the successive estimations of f 's entries in the Lefff lexicon (G) as well as nonparsable sentences where f is the main suspect and where one of its occurrences has a particularly high suspicion rate 5 (H). The whole page (with annotations) can be sent by e-mail, for example to the developer of the lexicon or to the developer of one parser or the other (C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results visualization environment",
"sec_num": "3.3"
},
{
"text": "In this section, we mostly focus on the results of our error mining algorithm on the parsing results provided by SXLFG on the MD corpus. We first present results when only forms are taken into account, and then give an insight on results when both forms and form bigrams are considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "5 Such an information, which is extremely valuable for the developers of the resources, can not be obtained by global (form-level and not occurrence-level) approaches such as the err(f )-based approach of (van Noord, 2004) . Indeed, enumerating all sentences which include a given form f , and which did not receive a full parse, is not precise enough: it would show at the same time sentences wich fail because of f (e.g., because its lexical entry lacks a given subcategorization frame) and sentences which fail for an other independent reason.",
"cite_spans": [
{
"start": 205,
"end": 222,
"text": "(van Noord, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The execution of our error mining script on MD/SXLFG, with i max = 50 iterations and when only (isolated) forms are taken into account, takes less than one hour on a 3.2 GHz PC running Linux with a 1.5 Go RAM. It outputs 18,334 relevant suspicious forms (out of the 327,785 possible ones), where a relevant suspicious form is defined as a form f that satisfies the following arbitrary constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding suspicious forms",
"sec_num": "4.1"
},
{
"text": "6 S (imax) f > 1, 5 \u2022 S and |O f | > 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding suspicious forms",
"sec_num": "4.1"
},
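{
"text": "A sketch of this relevance filter (illustrative Python; S_mean stands for the mean global suspicion rate S, and names are ours):\n\ndef relevant_forms(S, occ, S_mean):\n    # Arbitrary constraints: S_f > 1.5 * S and |O_f| > 5; note that all forms\n    # are still taken into account during the iterations themselves.\n    return [f for f in S if S[f] > 1.5 * S_mean and occ[f] > 5]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding suspicious forms",
"sec_num": "4.1"
},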
{
"text": "We still can not prove theoretically the convergence of the algorithm. 7 But among the 1000 bestranked forms, the last iteration induces a mean variation of the suspicion rate that is less than 0.01%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding suspicious forms",
"sec_num": "4.1"
},
{
"text": "On a smaller corpus like the EASy corpus, 200 iterations take 260s. The algorithm outputs less than 3,000 relevant suspicious forms (out of the 61,125 possible ones). Convergence information is the same as what has been said above for the MD corpus. Table 2 gives an idea of the repartition of suspicious forms w.r.t. their frequency (for FRMG on MD), showing that rare forms have a greater probability to be suspicious. The most frequent suspicious form is the double-quote, with (only) S f = 9%, partly because of segmentation problems. Table 3 gives an insight on the output of our algorithm on parsing results obtained by SXLFG on the MD corpus. For each form f (in fact, for each couple of the form (token,form)), this table displays its suspicion rate and its number of occurrences, as well as the rate err(f ) of non-parsable sentences among those where f appears and a short manual analysis of the underlying error.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 539,
"end": 546,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Finding suspicious forms",
"sec_num": "4.1"
},
{
"text": "In fact, a more in-depth manual analysis of the results shows that they are very good: errors are correctly identified, that can be associated with four error sources: (1) the Lefff lexicon, (2) the SXPipe pre-syntactic processing chain, (3) imperfections of the grammar, but also (4) problems related to the corpus itself (and to the fact that it is a raw corpus, with meta-data and typographic noise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing results",
"sec_num": "4.2"
},
{
"text": "On the EASy corpus, results are also relevant, but sometimes more difficult to interpret, because of the relative small size of the corpus and because of its heterogeneity. In particular, it contains email and oral transcriptions sub-corpora that introduce a lot of noise. Segmentation problems (caused both by SXPipe and by the corpus itself, which is already segmented) play an especially important role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing results",
"sec_num": "4.2"
},
{
"text": "In order to validate our approach, we compared our results with results given by two other relevant algorithms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing results with results of other algorithms",
"sec_num": "4.3"
},
{
"text": "\u2022 van Noord's (van Noord, 2004) (form-level and non-iterative) evaluation of err(f ) (the rate of non-parsable sentences among sentences containing the form f ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing results with results of other algorithms",
"sec_num": "4.3"
},
{
"text": "\u2022 a standard (occurrence-level and iterative) maximum entropy evaluation of each form's contribution to the success or the failure of a sentence (we used the MEGAM package (Daum\u00e9 III, 2004) ).",
"cite_spans": [
{
"start": 172,
"end": 189,
"text": "(Daum\u00e9 III, 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing results with results of other algorithms",
"sec_num": "4.3"
},
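{
"text": "For reference, van Noord's global measure can be computed directly, as sketched below (illustrative Python, using the same sentence representation as our earlier sketches):\n\ndef err(form, sentences):\n    # err(f): rate of non-parsable sentences among sentences containing f.\n    flags = [failed for forms, failed in sentences if form in forms]\n    return sum(flags) / len(flags) if flags else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing results with results of other algorithms",
"sec_num": "4.3"
},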
{
"text": "As done for our algorithm, we do not rank forms directly according to the suspicion rate S f computed by these algorithms. Instead, we use the M f measure presented above (M f = S f \u2022ln |O f |). Using directly van Noord's measure selects as most suspicious words very rare words, which shows the importance of a good balance between suspicion rate and frequency (as noted by (van Noord, 2004) in the discussion of his results). This remark applies to the maximum entropy measure as well. Table 4 shows for all algorithms the 10 bestranked suspicious forms, complemented by a manual evaluation of their relevance. One clearly sees that our approach leads to the best results. Van Noord's technique has been initially designed to find errors in resources that already ensured a very high coverage. On our systems, whose development is less advanced, this technique ranks as most suspicious forms those which are simply the most frequent ones. It seems to be the case for the standard maximum entropy algorithm, thus showing the importance to take into account the fact that there is at least one cause of error in any sentence whose parsing failed, not only to identify a main suspicious form in each sentence, but also to get relevant global results.",
"cite_spans": [
{
"start": 375,
"end": 392,
"text": "(van Noord, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 488,
"end": 495,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Comparing results with results of other algorithms",
"sec_num": "4.3"
},
{
"text": "We complemented the separated study of error mining results on the output of both parsers by an analysis of merged results. We computed for each form the harmonic mean of both measures M f = S f \u2022 ln |O f | obtained for each parsing system. Results (not shown here) are very interesting, because they identify errors that come mostly from resources that are shared by both systems (the Lefff lexicon and the pre-syntactic processing chain SXPipe). Although some errors come from common lacks of coverage in both grammars, it is nevertheless a very efficient mean to get a first repartition between error sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing results for both parsers",
"sec_num": "4.4"
},
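{
"text": "A sketch of this merging step (illustrative Python; the dictionary names are our own assumptions):\n\ndef merged_measure(M_frmg, M_sxlfg):\n    # Harmonic mean of the two M_f measures: high only when a form is\n    # suspicious for both systems, which points to the shared resources.\n    merged = {}\n    for f in M_frmg.keys() & M_sxlfg.keys():\n        a, b = M_frmg[f], M_sxlfg[f]\n        if a + b > 0:\n            merged[f] = 2 * a * b / (a + b)\n    return merged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing results for both parsers",
"sec_num": "4.4"
},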
{
"text": "As said before, we also performed experiments where not only forms but also form bigrams are treated as potential causes of errors. This approach allows to identify situations where a form is not in itself a relevant cause of error, but leads often to a parse failure when immediately followed or preceded by an other form. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introducing form bigrams",
"sec_num": "4.5"
},
{
"text": "As we have shown, parsing large corpora allows to set up error mining techniques, so as to identify missing and erroneous information in the different resources that are used by full-featured parsing systems. The technique described in this paper and its implementation on forms and form bigrams has already allowed us to detect many errors and omissions in the Lefff lexicon, to point out inappropriate behaviors of the SXPipe pre-syntactic processing chain, and to reveal the lack of coverage of the grammars for certain phenomena. We intend to carry on and extend this work. First of all, the visualization environment can be enhanced, as is the case for the implementation of the algorithm itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and perspectives",
"sec_num": "5"
},
{
"text": "We would also like to integrate to the model the possibility that facts taken into account (today, forms and form bigrams) are not necessarily certain, because some of them could be the consequence of an ambiguity. For example, for a given form, several lemmas are often possible. The probabilization of these lemmas would thus allow to look for most suspicious lemmas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and perspectives",
"sec_num": "5"
},
{
"text": "We are already working on a module that will allow not only to detect errors, for example in the lexicon, but also to propose a correction. To achieve this, we want to parse anew all nonparsable sentences, after having replaced their main suspects by a special form that receives under-specified lexical information. These information can be either very general, or can be computed by appropriate generalization patterns applied on the information associated by the lexicon with the original form. A statistical study of the new parsing results will make it possible to propose corrections concerning the involved forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and perspectives",
"sec_num": "5"
},
{
"text": "Indeed, he defines the suspicion rate of a form f as the rate of unparsable sentences among sentences that contain f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One could generalize this to n-grams, but as n gets higher the number of occurrences of n-grams gets lower, hence leading to non-significant statistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Let f be a form. The suspicion rate S f can be considered as the probability for a particular occurrence of f to cause a parsing error. Therefore, S f |O f | models the number of occurrences of f that do cause a parsing error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These constraints filter results, but all forms are taken into account during all iterations of the algorithm.7 However, the algorithms shares many common points with iterative algorithm that are known to converge and that have been proposed to find maximum entropy probability distributions under a set of constraints(Berger et al., 1996). Such an algorithm is compared to ours later on in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximun entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximun entropy approach to natural language pro- cessing. Computational Linguistics, 22(1):pp. 39- 71.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analyse syntaxique profonde \u00e0 grande \u00e9chelle: SXLFG. Traitement Automatique des Langues",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Boullier and Beno\u00eet Sagot. 2005a. Analyse syn- taxique profonde \u00e0 grande \u00e9chelle: SXLFG. Traite- ment Automatique des Langues (T.A.L.), 46(2).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Efficient and robust LFG parsing: SxLfg",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IWPT'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Boullier and Beno\u00eet Sagot. 2005b. Efficient and robust LFG parsing: SxLfg. In Proceedings of IWPT'05, Vancouver, Canada, October.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Notes on CG and LM-BFGS optimization of logistic regression",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2004. Notes on CG and LM-BFGS optimization of logistic regression. Paper available at http://www.isi.edu/~hdaume/docs/ daume04cg-bfgs.ps, implementation avail- able at http://www.isi.edu/~hdaume/ megam/.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "EASy : campagne d'\u00e9valuation des analyseurs syntaxiques",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Paroubek",
"suffix": ""
},
{
"first": "Louis-Gabriel",
"middle": [],
"last": "Pouillot",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Robba",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Vilnat",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the EASy workshop of TALN 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Paroubek, Louis-Gabriel Pouillot, Isabelle Robba, and Anne Vilnat. 2005. EASy : cam- pagne d'\u00e9valuation des analyseurs syntaxiques. In Proceedings of the EASy workshop of TALN 2005, Dourdan, France.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "From raw corpus to word lattices: robust pre-parsing processing",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of L&TC 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beno\u00eet Sagot and Pierre Boullier. 2005. From raw cor- pus to word lattices: robust pre-parsing processing. In Proceedings of L&TC 2005, Pozna\u0144, Pologne.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Journ\u00e9e d'\u00e9tude de l'ATALA sur l'interface lexique-grammaire et les lexiques syntaxiques et s\u00e9mantiques",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Lionel",
"middle": [],
"last": "Cl\u00e9ment",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte De La Clergerie",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beno\u00eet Sagot, Lionel Cl\u00e9ment, \u00c9ric Villemonte de la Clergerie, and Pierre Boullier. 2005. Vers un m\u00e9ta-lexique pour le fran\u00e7ais : architecture, acqui- sition, utilisation. Journ\u00e9e d'\u00e9tude de l'ATALA sur l'interface lexique-grammaire et les lexiques syntax- iques et s\u00e9mantiques, March.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fran\u00e7ois Thomasset and \u00c9ric Villemonte de la Clergerie",
"authors": [],
"year": 2005,
"venue": "Proceedings of TALN'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Thomasset and \u00c9ric Villemonte de la Clerg- erie. 2005. Comment obtenir plus des m\u00e9ta- grammaires. In Proceedings of TALN'05, Dourdan, France, June. ATALA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Error mining for widecoverage grammar engineering",
"authors": [
{
"first": "",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gertjan van Noord. 2004. Error mining for wide- coverage grammar engineering. In Proc. of ACL 2004, Barcelona, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "DyALog: a tabular logic programming based environment for NLP",
"authors": [
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte De La Clergerie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 2nd International Workshop on Constraint Solving and Language Processing (CSLP'05)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c9ric Villemonte de la Clergerie. 2005. DyALog: a tabular logic programming based environment for NLP. In Proceedings of 2nd International Work- shop on Constraint Solving and Language Process- ing (CSLP'05), Barcelona, Spain, October.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "the number of occurrences of f is lower. Hence the idea to smooth S",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "S f . The lower part of the page gives a mean to identify the cause of f -related errors by showing Error mining results visualization environment (results are shown for MD/FRMG).",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>shows best-ranked form bigrams (forms</td></tr><tr><td>that are ranked in-between are not shown, to em-</td></tr></table>"
},
"TABREF2": {
"text": "Suspicious forms repartition for MD/FRMG",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Rank</td><td>Token(s)/form</td><td>S</td><td>(50) f</td><td colspan=\"3\">|O f | err(f ) M f</td><td>Error cause</td></tr><tr><td>1</td><td colspan=\"4\">_____/_UNDERSCORE 100% 6399</td><td>100%</td><td>8.76 corpus: typographic noise</td></tr><tr><td>2</td><td>(...)</td><td colspan=\"2\">46%</td><td>2168</td><td>67%</td><td>2.82 SXPipe: should be treated as skippable words</td></tr><tr><td>3</td><td>2_]/_NUMBER</td><td colspan=\"2\">76%</td><td>30</td><td>93%</td><td>2.58 SXPipe: bad treatment of list constructs</td></tr><tr><td>4</td><td>priv\u00e9es</td><td colspan=\"2\">39%</td><td>589</td><td>87%</td><td>2.53 Lefff : misses as an adjective</td></tr><tr><td>5</td><td>Haaretz/_Uw</td><td colspan=\"2\">51%</td><td>149</td><td>70%</td><td>2.53 SXPipe: needs local grammars for references</td></tr><tr><td>6</td><td>contest\u00e9</td><td colspan=\"2\">52%</td><td>122</td><td>90%</td><td>2.52 Lefff : misses as an adjective</td></tr><tr><td>7</td><td>occup\u00e9s</td><td colspan=\"2\">38%</td><td>601</td><td>86%</td><td>2.42 Lefff : misses as an adjective</td></tr><tr><td>8</td><td>priv\u00e9e</td><td colspan=\"2\">35%</td><td>834</td><td>82%</td><td>2.38 Lefff : misses as an adjective</td></tr><tr><td>9</td><td>[...]</td><td colspan=\"2\">44%</td><td>193</td><td>71%</td><td>2.33 SXPipe: should be treated as skippable words</td></tr><tr><td>10</td><td>faudrait</td><td colspan=\"2\">36%</td><td>603</td><td>85%</td><td>2.32 Lefff : can have a nominal object</td></tr></table>"
},
"TABREF3": {
"text": "Analysis of the 10 best-ranked forms (ranked according toM f = S f \u2022 ln |O f |)",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>this paper</td><td/><td>global</td><td/><td>maxent</td><td/></tr><tr><td>Rank</td><td>Token(s)/form</td><td>Eval</td><td colspan=\"2\">Token(s)/form Eval</td><td colspan=\"2\">Token(s)/form Eval</td></tr><tr><td>1</td><td>_____/_UNDERSCORE</td><td>++</td><td>*</td><td>+</td><td>pour</td><td>-</td></tr><tr><td>2</td><td>(...)</td><td>++</td><td>,</td><td>-</td><td>)</td><td>-</td></tr><tr><td>3</td><td>2_]/_NUMBER</td><td>++</td><td>livre</td><td>-</td><td>\u00e0</td><td>-</td></tr><tr><td>4</td><td>priv\u00e9es</td><td>++</td><td>.</td><td>-</td><td>qu'il/qu'</td><td>-</td></tr><tr><td>5</td><td>Haaretz/_Uw</td><td>++</td><td>de</td><td>-</td><td>sont</td><td>-</td></tr><tr><td>6</td><td>contest\u00e9</td><td>++</td><td>;</td><td>-</td><td>le</td><td>-</td></tr><tr><td>7</td><td>occup\u00e9s</td><td>++</td><td>:</td><td>-</td><td>qu'un/qu'</td><td>+</td></tr><tr><td>8</td><td>priv\u00e9e</td><td>++</td><td>la</td><td>-</td><td>qu'un/un</td><td>+</td></tr><tr><td>9</td><td>[...]</td><td colspan=\"2\">++\u00e9trang\u00e8res</td><td>-</td><td>que</td><td>-</td></tr><tr><td>10</td><td>faudrait</td><td>++</td><td>lecteurs</td><td>-</td><td>pourrait</td><td>-</td></tr></table>"
},
"TABREF4": {
"text": "The 10 best-ranked suspicious forms, according the the M f measure, as computed by different algorithms: ours (this paper), a standard maximum entropy algorithm (maxent) and van Noord's rate err(f ) (global).",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Rank Tokens and forms M f</td><td>Error cause</td></tr><tr><td>4</td><td>Toutes/toutes les</td><td colspan=\"2\">2.73 grammar: badly treated pre-determiner adjective</td></tr><tr><td>6</td><td>y en</td><td colspan=\"2\">2,34 grammar: problem with the construction il y en a. . .</td></tr><tr><td>7</td><td>in \"</td><td colspan=\"2\">1.81 Lefff : in misses as a preposition, which happends before book titles (hence the \")</td></tr><tr><td>10</td><td>donne \u00e0</td><td colspan=\"2\">1.44 Lefff : donner should sub-categorize \u00e0-vcomps (donner \u00e0 voir. . . )</td></tr><tr><td>11</td><td>de demain</td><td colspan=\"2\">1.19 Lefff : demain misses as common noun (standard adv are not preceded by prep)</td></tr><tr><td>16</td><td>( 22/_NUMBER</td><td colspan=\"2\">0.86 grammar: footnote references not treated</td></tr><tr><td>16</td><td>22/_NUMBER )</td><td colspan=\"2\">0.86 as above</td></tr></table>"
},
"TABREF5": {
"text": "Best ranked form bigrams (forms ranked inbetween are not shown; ranked according to M f = S f \u2022 ln |O f |). These results have been computed on a subset of the MD corpus (60,000 sentences). phasize bigram results), with the same data as in table 3.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}