| { |
| "paper_id": "E12-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:35:47.336738Z" |
| }, |
| "title": "Tree Representations in Probabilistic Models for Extended Named Entities Detection", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Dinarelli", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "LIMSI-CNRS", |
| "location": { |
| "settlement": "Orsay", |
| "country": "France" |
| } |
| }, |
| "email": "marcod@limsi.fr" |
| }, |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Rosset", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "LIMSI-CNRS", |
| "location": { |
| "settlement": "Orsay", |
| "country": "France" |
| } |
| }, |
| "email": "rosset@limsi.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we deal with Named Entity Recognition (NER) on transcriptions of French broadcast data. Two aspects make the task more difficult with respect to previous NER tasks: i) the named entities used in this work have a tree structure, so the task cannot be tackled as sequence labelling; ii) the data are noisier than those used in previous NER tasks. We approach the task in two steps, involving Conditional Random Fields and Probabilistic Context-Free Grammars, integrated in a single parsing algorithm. We analyse the effect of using several tree representations. Our system outperforms the best system of the evaluation campaign by a significant margin.", |
| "pdf_parse": { |
| "paper_id": "E12-1018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we deal with Named Entity Recognition (NER) on transcriptions of French broadcast data. Two aspects make the task more difficult with respect to previous NER tasks: i) the named entities used in this work have a tree structure, so the task cannot be tackled as sequence labelling; ii) the data are noisier than those used in previous NER tasks. We approach the task in two steps, involving Conditional Random Fields and Probabilistic Context-Free Grammars, integrated in a single parsing algorithm. We analyse the effect of using several tree representations. Our system outperforms the best system of the evaluation campaign by a significant margin.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Named Entity Recognition is a traditional task of the Natural Language Processing domain. The task aims at mapping words in a text into semantic classes, such as persons, organizations or locations. While at first the NER task was quite simple, involving a limited number of classes (Grishman and Sundheim, 1996) , over the years the task complexity increased as more complex class taxonomies were defined (Sekine and Nobata, 2004) . The interest in the task is related to its use in complex frameworks for (semantic) content extraction, such as Relation Extraction applications (Doddington et al., 2004) .", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 312, |
| "text": "(Grishman and Sundheim, 1996)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 406, |
| "end": 431, |
| "text": "(Sekine and Nobata, 2004)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 579, |
| "end": 604, |
| "text": "(Doddington et al., 2004)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This work presents research on a Named Entity Recognition task defined with a new set of named entities. The characteristic of this set is that named entities have a tree structure. As a consequence, the task cannot be tackled with a sequence labelling approach. Additionally, the use of noisy data like transcriptions of French broadcast data makes the task very challenging for traditional NLP solutions. To deal with such problems, we adopt a two-step approach, the first being realized with Conditional Random Fields (CRF) (Lafferty et al., 2001) , the second with a Probabilistic Context-Free Grammar (PCFG) (Johnson, 1998) . The motivations behind this choice are:", |
| "cite_spans": [ |
| { |
| "start": 527, |
| "end": 550, |
| "text": "(Lafferty et al., 2001)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 613, |
| "end": 628, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Since the named entities have a tree structure, it is reasonable to use a solution coming from syntactic parsing. However, preliminary experiments using such approaches gave poor results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Despite the tree structure of the entities, the trees are not as complex as syntactic trees. Thus, before designing an ad-hoc solution for the task, which would require a remarkable effort without guaranteeing better performance, we designed a solution that provides good results and required a limited development effort.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Conditional Random Fields are models robust to noisy data, like automatic transcriptions of ASR systems (Hahn et al., 2010) , thus they are the best choice to deal with transcriptions of broadcast data. Once words have been annotated with basic entity constituents, the tree structure of named entities is simple enough to be reconstructed with a relatively simple model like a PCFG (Johnson, 1998) .", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 125, |
| "text": "(Hahn et al., 2010)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 385, |
| "end": 400, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The two models are integrated in a single parsing algorithm. We analyse the effect of using several tree representations, which result in different parsing models with different performance. We provide a detailed evaluation of our models. Results can be compared with those obtained in the evaluation campaign where the same data were used. Our system outperforms the best system of the evaluation campaign by a significant margin.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is structured as follows: in the next section we introduce the extended named entities used in this work; in section 3 we describe our two-step algorithm for parsing entity trees; in section 4 we detail the second step of our approach, based on syntactic parsing approaches, and in particular we describe the different tree representations used to encode entity trees in parsing models. In section 6 we describe and comment on experiments, and finally, in section 7, we draw some conclusions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The most important aspect of the NER task we investigated is the tree structure of named entities. Examples of such entities are given in figures 1 and 2, where words have been removed for readability; they are: (\"90 persons are still present at Atambua. It is there that 3 employees of the United Nations High Commissioner for Refugees were killed yesterday morning\"):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Named Entities", |
| "sec_num": "2" |
| }, |
| { |
| "text": "90 personnes toujours pr\u00e9sentes \u00e0 Atambua c' est l\u00e0 qu' hier matin ont \u00e9t\u00e9 tu\u00e9s 3 employ\u00e9s du haut commissariat des Nations unies aux r\u00e9fugi\u00e9s , le HCR Words realizing entities in figure 2 are in bold, and they correspond to the tree leaves in the picture. As we see in the figures, entities can have complex structures. Beyond the use of subtypes, like individual in person (to give pers.ind), or administrative in organization (to give org.adm), entities with more specific content can be constituents of more general entities to form tree structures, like name.first and name.last for pers.ind or val (for value) and object for amount. These named entities have been annotated on transcriptions of French broadcast news coming from several radio channels. The transcriptions constitute a corpus that has been split into training, development and evaluation sets. The evaluation set, in particular, is composed of two sets of data, Broadcast News (BN in the table) and Broadcast Conversations (BC in the table). The evaluation of the models presented in this work is performed on the merge of the two data types. Some statistics of the corpus are reported in tables 1 and 2. This set of named entities has been defined in order to provide finer semantic information for entities found in the data, e.g. a person is better specified by first and last name, and is fully described in (Grouin, 2011) . In order to avoid confusion, entities that can be associated directly to words, like name.first, name.last, val and object, are called entity constituents, components or entity pre-terminals (as they are preterminal nodes in the trees). The other entities, like pers.ind or amount, are called entities or nonterminal entities, depending on the context.", |
| "cite_spans": [ |
| { |
| "start": 1384, |
| "end": 1398, |
| "text": "(Grouin, 2011)", |
| "ref_id": "BIBREF3" |
| }, |
| ], |
| "ref_spans": [ |
| { |
| "start": 949, |
| "end": 966, |
| "text": "(BN in the table)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extended Named Entities", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Since the task of Named Entity Recognition presented here cannot be modeled as sequence labelling and, as mentioned previously, an approach coming from syntactic parsing to perform named entity annotation in \"one-shot\" is not robust on the data used in this work, we adopt a two-step approach. The first step is designed to be robust to noisy data and is used to annotate entity components, while the second is used to parse complete entity trees and is based on a relatively simple model. Since we are dealing with noisy data, the hardest part of the task is indeed to annotate components on words. On the other hand, since entity trees are relatively simple, at least much simpler than syntactic trees, once entity components have been annotated in the first step, a complex model, which would also make the processing slower, is not required for the second step. Taking all these issues into account, the two steps of our system for tree-structured named entity recognition are performed as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models Cascade for Extended Named Entities", |
| "sec_num": "3" |
| }, |
| { |
| "text": "1. A CRF model (Lafferty et al., 2001 ) is used to annotate components on words.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 37, |
| "text": "(Lafferty et al., 2001", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models Cascade for Extended Named Entities", |
| "sec_num": "3" |
| }, |
| { |
| "text": "2. A PCFG model (Johnson, 1998 ) is used to parse complete entity trees on top of components, i.e. using the components annotated by the CRF as a starting point.", |
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 30, |
| "text": "(Johnson, 1998", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models Cascade for Extended Named Entities", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This processing schema is depicted in figure 3. Conditional Random Fields are briefly described in the next subsection. PCFG models, which constitute the main part of this work together with the analysis of tree representations, are described in more detail in the next sections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models Cascade for Extended Named Entities", |
| "sec_num": "3" |
| }, |
| { |
| "text": "CRFs are particularly suitable for sequence labelling tasks (Lafferty et al., 2001) . Beyond the possibility to include a huge number of features using the same framework as Maximum Entropy models (Berger et al., 1996) , CRF models encode global conditional probabilities normalized at sentence level. Given a sequence of N words W_1^N = w_1, ..., w_N and its corresponding component sequence E_1^N = e_1, ..., e_N, a CRF models the conditional probability", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 83, |
| "text": "(Lafferty et al., 2001)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 197, |
| "end": 218, |
| "text": "(Berger et al., 1996)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P(E_1^N | W_1^N) = (1/Z) \u220f_{n=1}^{N} exp( \u2211_{m=1}^{M} \u03bb_m \u2022 h_m(e_{n-1}, e_n, w_{n-2}^{n+2}) ) (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where \u03bb_m are the training parameters and h_m(e_{n-1}, e_n, w_{n-2}^{n+2}) are the feature functions capturing dependencies between entities and words. Z is the partition function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Z = \u2211_{\u1ebd_1^N} \u220f_{n=1}^{N} H(\u1ebd_{n-1}, \u1ebd_n, w_{n-2}^{n+2})", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "which ensures that probabilities sum to one. \u1ebd_{n-1} and \u1ebd_n are the components for the previous and current words,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "H(\u1ebd_{n-1}, \u1ebd_n, w_{n-2}^{n+2}) is an abbreviation for \u2211_{m=1}^{M} \u03bb_m \u2022 h_m(\u1ebd_{n-1}, \u1ebd_n, w_{n-2}^{n+2}), i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "e. the set of active feature functions at current position in the sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the last few years different CRF implementations have been realized. The implementation we refer to in this work is the one described in (Lavergne et al., 2010) , which optimizes the following objective function:", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 163, |
| "text": "(Lavergne et al., 2010)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2212log(P(E_1^N|W_1^N)) + \u03c1_1 \u03bb_1 + (\u03c1_2/2) \u03bb_2^2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(3) \u03bb_1 and \u03bb_2^2 are the l1 and l2 regularizers (Riezler and Vasserman, 2004) , and together in a linear combination implement the elastic net regularizer (Zou and Hastie, 2005) . As mentioned in (Lavergne et al., 2010) , this kind of regularizer is very effective for feature selection at training time, which is very valuable when dealing with noisy data and large feature sets.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 78, |
| "text": "(Riezler and Vasserman, 2004)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 156, |
| "end": 178, |
| "text": "(Zou and Hastie, 2005)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 197, |
| "end": 220, |
| "text": "(Lavergne et al., 2010)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Fields", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The models used in this work for parsing entity trees refer to the models described in (Johnson, 1998) , in (Charniak, 1997; Caraballo and Charniak, 1997) and (Charniak et al., 1998) , and which constitute the basis of the maximum entropy model for parsing described in (Charniak, 2000) . A similar lexicalized model has also been proposed by Collins (Collins, 1997) . All these models are based on a PCFG trained from data and used in a chart parsing algorithm to find the best parse for the given input. The PCFG model of (Johnson, 1998) is made of rules of the form:", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 102, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 108, |
| "end": 124, |
| "text": "(Charniak, 1997;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 125, |
| "end": 154, |
| "text": "Caraballo and Charniak, 1997)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 159, |
| "end": 182, |
| "text": "(Charniak et al., 1998)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 270, |
| "end": 286, |
| "text": "(Charniak, 2000)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 351, |
| "end": 366, |
| "text": "(Collins, 1997)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 524, |
| "end": 539, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 X_i \u21d2 X_j X_k \u2022 X_i \u21d2 w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where X are non-terminal entities and w are terminal symbols (words in our case). The probabilities associated with these rules are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p_{i\u2192j,k} = P(X_i \u21d2 X_j, X_k) / P(X_i) (4) p_{i\u2192w} = P(X_i \u21d2 w) / P(X_i)", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The models described in (Charniak, 1997; Caraballo and Charniak, 1997) encode probabilities involving more information, such as head words. In order to have a PCFG model made of rules with their associated probabilities, we extract rules from the entity trees of our corpus. This processing is straightforward; for example, from the tree depicted in figure 2, the following rules are extracted:", |
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 40, |
| "text": "(Charniak, 1997;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 41, |
| "end": 70, |
| "text": "Caraballo and Charniak, 1997)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "S \u21d2 amount loc.adm.town time.date.rel amount ; amount \u21d2 val object ; time.date.rel \u21d2 name time-modifier ; object \u21d2 func.coll ; func.coll \u21d2 kind org.adm ; org.adm \u21d2 name", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Using counts of these rules, we then compute maximum likelihood probabilities of the Right Hand Side (RHS) of a rule given its Left Hand Side (LHS). The binarization of rules, applied to have all rules in the form of equations 4 and 5, is also straightforward and can be done with simple algorithms not discussed here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Parsing Trees", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As discussed in (Johnson, 1998) , an important point for a parsing algorithm is the representation of the trees being parsed. Changing the tree representation can significantly change the performance of the parser. Since there is a large difference between the entity trees used in this work and syntactic trees, from both the meaning and the structure point of view, it is worth performing an analysis with the aim of finding the most suitable representation for our task. In order to perform this analysis, we start from a named entity annotated on the words de notre pr\u00e9sident , M. Nicolas Sarkozy (of our president, Mr. Nicolas Sarkozy). The corresponding named entity is shown in figure 4. As decided in the annotation guidelines, fillers can be part of a named entity. This can happen for complex named entities involving several words. The representation shown in figure 4 is the default representation and will be referred to as baseline. A problem created by this representation is the fact that fillers are also present outside entities. Fillers of named entities should, in principle, be distinguished from any other filler, since they may be informative for discriminating entities. Following this intuition, we designed two different representations where entity fillers are contextualized so as to be distinguished from the other fillers. In the first representation we give the filler the same label as the parent node, while in the second representation we use a concatenation of the filler and the label of the parent node. These two representations are shown in figures 5 and 6, respectively. The first one will be referred to as filler-parent, while the second will be referred to as parent-context. A problem that may be introduced by the first representation is that some entities that were originally used only for nonterminal entities will also appear as components, i.e. entities annotated on words. This may introduce some ambiguity.", |
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 31, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Representations for Extended Named Entities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Another possible contextualization is to annotate each node with the label of its parent node. This representation is shown in figure 7 and will be referred to as parent-node. Intuitively, this representation is effective since entities annotated directly on words also provide the entity of the parent node. However, this representation drastically increases the number of entities, in particular the number of components, which in our case are the set of labels to be learned by the CRF model. For the same reason this representation produces more rigid models: since label sequences vary widely, the model is not likely to match sequences not seen in the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Representations for Extended Named Entities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Finally, another interesting tree representation is a variation of the parent-node tree, where entity fillers are only distinguished from fillers not in an entity, using the label ne-filler, but are not contextualized with entity information. This representation is shown in figure 8 and will be referred to as parent-node-filler. It is a good trade-off between contextual information and rigidity: entities are still represented as concatenations of labels, while a common special label is used for entity fillers. This keeps the number of entities annotated on words, i.e. components, lower.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Representations for Extended Named Entities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Using different tree representations affects both the structure and the performance of the parsing model. The structure is described in the next section, the performance in the evaluation section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Representations for Extended Named Entities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Lexicalized models for syntactic parsing described in (Charniak, 2000; Charniak et al., 1998) and (Collins, 1997) integrate more information than is used in equations 4 and 5. Considering a particular node in the entity tree, not including terminals, the information used is:", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 70, |
| "text": "(Charniak, 2000;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 71, |
| "end": 93, |
| "text": "Charniak et al., 1998)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 98, |
| "end": 113, |
| "text": "(Collins, 1997)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 s: the head word of the node, i.e. the most important word of the chunk covered by the current node", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 h: the head word of the parent node", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 t: the entity tag of the current node", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 l: the entity tag of the parent node", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The head word of the parent node is defined by percolating head words from child nodes to parent nodes, giving priority to verbs. They can be found using automatic approaches based on words and entity tag co-occurrence or mutual information. Using this information, the model described in (Charniak et al., 1998) is P(s|h, t, l). Since this model is conditioned on several pieces of information, it can be affected by data sparsity problems. Thus, the model is actually approximated as an interpolation of probabilities:", |
| "cite_spans": [ |
| { |
| "start": 289, |
| "end": 312, |
| "text": "(Charniak et al., 1998)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(s|h, t, l) = \u03bb_1 P(s|h, t, l) + \u03bb_2 P(s|c_h, t, l) + \u03bb_3 P(s|t, l) + \u03bb_4 P(s|t)", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where \u03bb_i, i = 1, ..., 4, are parameters of the model to be tuned, and c_h is the cluster of head words for a given entity tag t. With such a model, when not all pieces of information are available to reliably estimate the probability with more conditioning, the model can still provide a probability with terms conditioned on less information. The use of head words and their percolation over the tree is called lexicalization. The goal of tree lexicalization is to add lexical information all over the tree. This way the probability of all rules can also be conditioned on lexical information, allowing us to define the probabilities P(s|h, t, l) and P(s|c_h, t, l). Tree lexicalization reflects the characteristics of syntactic parsing, for which the models described in (Charniak, 2000; Charniak et al., 1998) and (Collins, 1997) were defined. Head words are very informative since they constitute keywords instantiating labels, regardless of whether they are syntactic constituents or named entities. However, for named entity recognition it doesn't make sense to give priority to verbs when percolating head words over the tree, all the more because head words of named entities are most of the time nouns. Moreover, it doesn't make sense to give priority to the head word of a particular entity with respect to the others; all entities in a sentence have the same importance. Intuitively, lexicalization of entity trees is not as straightforward as lexicalization of syntactic trees. At the same time, using non-lexicalized trees doesn't make sense with models like 6, since all the terms involve lexical information. Instead, we can use the model of (Johnson, 1998) , which defines the probability of a tree \u03c4 as:", |
| "cite_spans": [ |
| { |
| "start": 774, |
| "end": 790, |
| "text": "(Charniak, 2000;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 791, |
| "end": 813, |
| "text": "Charniak et al., 1998)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 818, |
| "end": 833, |
| "text": "(Collins, 1997)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1658, |
| "end": 1673, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(\u03c4) = \u220f_{X\u2192\u03b1} P(X \u2192 \u03b1)^{C_\u03c4(X\u2192\u03b1)}", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "here the RHS of rules has been generalized with \u03b1, representing RHS of both unary and binary rules 4 and 5. C_\u03c4 (X \u2192 \u03b1) is the number of times the rule X \u2192 \u03b1 appears in the tree \u03c4 . The model 7 is instantiated when using tree representations shown in Fig. 4 , 5 and 6. When using representations given in Fig. 7 and 8, the model is:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 257, |
| "text": "Fig. 4", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 305, |
| "end": 311, |
| "text": "Fig. 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (\u03c4 |l)", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where l is the entity label of the parent node. Although non-lexicalized models like 7 and 8 have shown less effective for syntactic parsing than their lexicalized couter-parts, there are evidences showing that they can be effective in our task. With reference to figure 4, considering the entity pers.ind instantiated by Nicolas Sarkozy, our algorithm detects first name.first for Nicolas and name.last for Sarkozy using the CRF model. As mentioned earlier, once the CRF model has detected components, since entity trees have not a complex structure with respect to syntactic trees, even a simple model like the one in equation 7 or 8 is effective for entity tree parsing. For example, once name.first and name.last have been detected by CRF, pers.ind is the only entity having name.first and name.last as children. Ambiguities, like for example for kind or qualifier, which can appear in many entities, can affect the model 7, but they are overcome by the model 8, taking the entity tag of the parent node into account. Moreover, the use of CRF allows to include in the model much more features than the lexicalized model in equation 6. Using features like word prefixes (P), suffixes (S), capitalization (C), morpho-syntactic features (MS) and other features indicated as F 2 , the CRF model encodes the conditional probability:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (t|w, P, S, C, M S, F )", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where w is an input word and t is the corresponding component.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The probability of the CRF model, used in the first step to tag input words with components, is combined with the probability of the PCFG model, used to parse entity trees starting from components. Thus the structure of our model is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (t|w, P, S, C, M S, F ) \u2022 P (\u03c4 )", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "or", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (t|w, P, S, C, M S, F ) \u2022 P (\u03c4 |l)", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "depending on whether we are using the tree representations given in figures 4, 5 and 6 or in figures 7 and 8, respectively. A scale factor could be used to combine the two scores, but this is optional since CRFs provide normalized posterior probabilities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure of the Model", |
| "sec_num": "4.2" |
| }, |
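The two-step combination in equations 10 and 11 can be sketched as follows; this is an illustrative decoder fragment, not the paper's parsing algorithm, and all numbers and candidate analyses are invented for the example. Each candidate pairs the CRF posterior of a component sequence with the PCFG probability of the entity tree built on top of it, and the best candidate maximizes the combined (log-)score.

```python
import math

# Hypothetical candidate analyses for one input span: CRF posterior for the
# component sequence, and PCFG probability of the resulting entity tree.
candidates = [
    {"components": ["name.first", "name.last"],
     "crf_posterior": 0.80, "pcfg_tree_prob": 0.25},
    {"components": ["name.first", "O"],
     "crf_posterior": 0.15, "pcfg_tree_prob": 0.40},
]

def combined_log_score(cand, scale=1.0):
    # log P(t|w, P, S, C, MS, F) + scale * log P(tau); the scale factor is
    # optional since the CRF posterior is already normalized.
    return (math.log(cand["crf_posterior"])
            + scale * math.log(cand["pcfg_tree_prob"]))

best = max(candidates, key=combined_log_score)
print(best["components"])  # ['name.first', 'name.last']
```

For the representations of figures 7 and 8, `pcfg_tree_prob` would instead hold P(\u03c4|l), conditioned on the parent entity label, as in equation 11.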
| { |
| "text": "The models used for named entity detection and the sets of named entities defined over the years have been discussed in the introduction and in section 2. Since CRFs and parsing models constitute the core of our work, we discuss some important related models here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Beyond the parsing models discussed in section 4, together with the motivations for using them or not in our work, another important model for syntactic parsing was proposed in (Ratnaparkhi, 1999) . That model is made of four Maximum Entropy models used in cascade for parsing at different stages. This model also makes use of head words, like those described in section 4, thus the same considerations hold; moreover, it seems quite complex for real applications, as it involves the use of four different models together. The models described in (Johnson, 1998) , (Charniak, 1997; Caraballo and Charniak, 1997) , (Charniak et al., 1998) , (Charniak, 2000) , (Collins, 1997) and (Ratnaparkhi, 1999) constitute the main individual models proposed for constituent-based syntactic parsing. Later, other approaches based on model combination were proposed, e.g. the reranking approach described in (Collins and Koo, 2005) , among many others, as well as evolutions and improvements of these models.", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 196, |
| "text": "(Ratnaparkhi, 1999)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 546, |
| "end": 561, |
| "text": "(Johnson, 1998)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 564, |
| "end": 580, |
| "text": "(Charniak, 1997;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 581, |
| "end": 610, |
| "text": "Caraballo and Charniak, 1997)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 613, |
| "end": 636, |
| "text": "(Charniak et al., 1998)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 639, |
| "end": 655, |
| "text": "(Charniak, 2000)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 658, |
| "end": 673, |
| "text": "(Collins, 1997)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 678, |
| "end": 697, |
| "text": "(Ratnaparkhi, 1999)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 905, |
| "end": 928, |
| "text": "(Collins and Koo, 2005)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "More recently, approaches based on log-linear models, also called \"Tree CRF\", have been proposed for parsing (Clark and Curran, 2007; Finkel et al., 2008) , also using different training criteria (Auli and Lopez, 2011) . Using such models in our work poses two problems: the first is a scaling issue, since our data present a large number of labels, which makes CRF training problematic, even more so when using \"Tree CRF\"; the second is the difference between the syntactic parsing and named entity detection tasks, as mentioned in sub-section 4.2. Adapting \"Tree CRF\" to our task is thus quite complex and constitutes an entire work by itself; we leave it as future work.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 96, |
| "text": "(Clark and Curran, 2007;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 97, |
| "end": 117, |
| "text": "Finkel et al., 2008)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 194, |
| "end": 216, |
| "text": "(Auli and Lopez, 2011)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Concerning linear-chain CRF models, the one we use is a state-of-the-art implementation (Lavergne et al., 2010) , as it implements the most effective optimization algorithms as well as state-of-the-art regularizers (see sub-section 3.1). Some improvements of linear-chain CRFs have been proposed, trying to integrate higher-order target-side features (Tang et al., 2006) . An integration of the same kind of features was also tried in the model used in this work, without yielding significant improvements, while making model training much harder. Thus, this direction has not been investigated further.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 111, |
| "text": "(Lavergne et al., 2010)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 349, |
| "end": 368, |
| "text": "(Tang et al., 2006)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this section we describe the experiments performed to evaluate our models. We first describe the settings used for the two models involved in entity tree parsing, and then describe and comment on the results obtained on the test corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The CRF implementation used in this work, named wapiti, is described in (Lavergne et al., 2010) . 3 We did not optimize the parameters \u03c1 1 and \u03c1 2 of the elastic net (see section 3.1): although doing so significantly improves performance and leads to more compact models, the default values lead in most cases to very accurate models. We used a wide set of features in the CRF models, in a window of [\u22122, +2] around the target word:", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 80, |
| "text": "(Lavergne et al., 2010)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 97, |
| "end": 98, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "\u2022 A set of standard features like word prefixes and suffixes of length 1 to 6, plus some Yes/No features like Does the word start with a capital letter?, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "\u2022 Morpho-syntactic features extracted from the output of the tagger tool (Allauzen and Bonneau-Maynard, 2008) \u2022 Features extracted from the output of the semantic analyzer (Rosset et al., 2009) provided by the tool WMatch (Galibert, 2009) .", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 109, |
| "text": "(Allauzen and Bonneau-Maynard, 2008)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 172, |
| "end": 194, |
| "text": "(Rosset et al., (2009)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 225, |
| "end": 241, |
| "text": "(Galibert, 2009)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "This analyzer provides morpho-syntactic information as well as semantic information at the same level as named entities. Using two different sets of morpho-syntactic features results in more effective models, as they create a kind of agreement for a given word in case of a match. Concerning the PCFG model, the grammars, tree binarization and the different tree representations are created with our own scripts, while entity tree parsing is performed with the chart parsing algorithm described in (Johnson, 1998) . 4 Results are given in terms of the Slot Error Rate (SER) ( et al., 1999) , which has a similar definition to the word error rate used for ASR systems, with the difference that substitution errors are split into three types: i) correct entity type with wrong segmentation; ii) wrong entity type with correct segmentation; iii) wrong entity type with wrong segmentation. Errors of types i) and ii) are given half points, while type iii), as well as insertion and deletion errors, are given full points. Moreover, results are given using the well-known F 1 measure, defined as a function of precision and recall.", |
| "cite_spans": [ |
| { |
| "start": 483, |
| "end": 497, |
| "text": "(Johnson, 1998", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 498, |
| "end": 511, |
| "text": "et al., 1999)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "6.1" |
| }, |
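The SER scoring scheme just described can be sketched as follows; this is a minimal illustration assuming a pre-computed alignment between reference and hypothesis entities (the alignment itself, names, and spans are all hypothetical), not the official scoring tool.

```python
# Each entity is (type, span); None marks a missing side, i.e. an insertion
# (no reference entity) or a deletion (no hypothesis entity).
def slot_error_rate(alignment, num_ref):
    errors = 0.0
    for ref, hyp in alignment:
        if ref is None or hyp is None:
            errors += 1.0          # insertion or deletion: full point
        elif ref == hyp:
            continue               # correct type and segmentation: no error
        elif ref[0] == hyp[0] or ref[1] == hyp[1]:
            errors += 0.5          # type OR segmentation wrong: half point
        else:
            errors += 1.0          # both type and segmentation wrong
    return errors / num_ref

alignment = [
    (("pers.ind", (0, 2)), ("pers.ind", (0, 2))),  # correct
    (("loc.adm", (5, 6)), ("loc.adm", (5, 7))),    # right type, wrong span
    (("org.ent", (9, 10)), None),                  # deletion
]
print(slot_error_rate(alignment, num_ref=3))  # (0 + 0.5 + 1.0) / 3 = 0.5
```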
| { |
| "text": "In this section we provide evaluations of the models described in this work, based on the combination of CRF and PCFG and using the different representations of named entity trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "As a first evaluation, we report some statistics computed from the CRF and PCFG models using the different tree representations. Such statistics provide interesting clues about how difficult the task is to learn and what performance we can expect from the models. The statistics are presented in table 3. Rows correspond to the different tree representations described in this work, while columns show the number of features and labels of the CRF models (# features and # labels) and the number of rules of the PCFG models (# rules).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Statistics", |
| "sec_num": "6.3.1" |
| }, |
| { |
| "text": "As we can see from the table, the number of rules is the same for the tree representations baseline, filler-parent and parent-context, and for the representations parent-node and parent-node-filler. This is the consequence of the contextualization applied by the latter representations: parent-node and parent-node-filler create several different labels depending on the context, thus the corresponding grammar contains more rules 8 . In contrast, the other tree representations modify only fillers, thus the number of rules is not affected. Concerning the CRF models, as shown in table 3, the use of the different tree representations results in an increasing number of labels to be learned by the CRF. This aspect is quite critical in CRF learning, as training time is exponential in the number of labels. Indeed, the most complex models, obtained with the parent-node and parent-node-filler tree representations, took roughly 8 days to train. Additionally, increasing the number of labels can create data sparseness problems; however, this problem does not seem to arise in our case since, apart from the baseline model, which has far fewer features, all the other models have approximately the same number of features, meaning that there are actually enough data to learn the models, regardless of the number of labels.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 417, |
| "end": 418, |
| "text": "8", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Statistics", |
| "sec_num": "6.3.1" |
| }, |
| { |
| "text": "In this section we evaluate the models in terms of the evaluation metrics described in the previous section, the Slot Error Rate (SER) and the F1 measure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluations of Tree Representations", |
| "sec_num": "6.3.2" |
| }, |
| { |
| "text": "In order to evaluate the PCFG models alone, we performed entity tree parsing using reference transcriptions as input, i.e. manual transcriptions and reference component annotations taken from the development and test sets. This can be considered a kind of oracle evaluation and provides an upper bound on the performance of the PCFG models. Results for this evaluation are reported in table 4 . As can be intuitively expected, adding more contextualization to the trees results in more accurate models: the simplest model, baseline, has the worst oracle performance, while the filler-parent and parent-context models, which add similar contextualization information, have very similar oracle performances. The same line of reasoning applies to the parent-node and parent-node-filler models, which also add similar contextualization and have very similar oracle predictions. These last two models also have the best absolute oracle performances. However, adding more contextualization to the trees also results in more rigid models: the fact that models are robust on reference transcriptions and reference component annotations does not imply a proportional robustness on component sequences generated by the CRF models. This intuition is confirmed by the results reported in table 5, where a real evaluation of our models is given, this time using CRF output components as input to the PCFG models to parse entity trees. The results reported in table 5 show in particular that models using the baseline, filler-parent and parent-context tree representations have similar performances, especially on the test set. Models characterized by the parent-node and parent-node-filler tree representations do have the best performances, although the gain with respect to the other models is not as large as could be expected given the difference in oracle performances discussed above. In particular, the best absolute performance is obtained with the parent-node-filler model. As mentioned in subsection 4.1, this model represents the best trade-off between rigidity and accuracy: it uses the same label for all entity fillers, while still distinguishing between fillers found in entity structures and fillers found in words not instantiating any entity.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 381, |
| "end": 388, |
| "text": "table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluations of Tree Representations", |
| "sec_num": "6.3.2" |
| }, |
| { |
| "text": "As a final evaluation of our models, we provide a comparison with the official results obtained at the 2011 evaluation campaign on extended named entity recognition (Galibert et al., 2011) . Results are reported in table 6, where the other two participants in the campaign are indicated as P1 and P2. These two participants used a system based on CRF and a system based on rules for deep syntactic analysis, respectively. In particular, P2 obtained superior performances in previous evaluation campaigns on named entity recognition. The system we proposed at the evaluation campaign used the parent-context tree representation. The results obtained at the evaluation campaign are shown in the first three lines of Table 6 . We compare these results with those obtained with the parent-node and parent-node-filler tree representations, reported in the last two rows of the same table. As we can see, the new tree representations described in this work achieve the best absolute performances.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 182, |
| "text": "(Galibert et al., 2011;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 183, |
| "end": 183, |
| "text": "", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 698, |
| "end": 705, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with Official Results", |
| "sec_num": "6.3.3" |
| }, |
| { |
| "text": "In this paper we have presented a Named Entity Recognition system dealing with extended named entities with a tree structure. Given such a representation of named entities, the task cannot be modeled as a sequence labelling task. We thus proposed a two-step system based on CRF and PCFG: the CRF annotates entity components directly on words, while the PCFG applies parsing techniques to predict the whole entity tree. We motivated our choice by showing that it is not effective to apply techniques widely used for syntactic parsing, such as tree lexicalization. We presented an analysis of different tree representations for the PCFG, which significantly affect parsing performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We provided and discussed a detailed evaluation of all the models obtained by combining CRF and PCFG with the different tree representations proposed. Our combined models achieve better performances than the other systems proposed at the official evaluation campaign, as well as our own previous model used at that campaign.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "These rules are actually in Chomsky Normal Form, i.e. unary or binary rules only. A PCFG, in general, can have any rule; however, the algorithm we are discussing converts the PCFG rules into Chomsky Normal Form, thus for simplicity we provide this formulation directly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The set of features used in the CRF model will be described in more detail in the evaluation section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available at http://wapiti.limsi.fr 4 Available at http://web.science.mq.edu.au/~mjohnson/Software.htm", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been funded by the project Quaero, under the program Oseo, French State agency for innovation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Message Understanding Conference-6: a brief history", |
| "authors": [ |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Sundheim", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th conference on Computational linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "466--471", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralph Grishman and Beth Sundheim. 1996. Mes- sage Understanding Conference-6: a brief history. In Proceedings of the 16th conference on Com- putational linguistics -Volume 1, pages 466-471, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Definition, Dictionaries and Tagger for Extended Named Entity Hierarchy", |
| "authors": [ |
| { |
| "first": "Satoshi", |
| "middle": [], |
| "last": "Sekine", |
| "suffix": "" |
| }, |
| { |
| "first": "Chikashi", |
| "middle": [], |
| "last": "Nobata", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satoshi Sekine and Chikashi Nobata. 2004. Defini- tion, Dictionaries and Tagger for Extended Named Entity Hierarchy. In Proceedings of LREC.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The Automatic Content Extraction (ACE) Program-Tasks, Data, and Evaluation", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Doddington", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Przybocki", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Strassel", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of LREC 2004", |
| "volume": "", |
| "issue": "", |
| "pages": "837--840", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Doddington, A. Mitchell, M. Przybocki, L. Ramshaw, S. Strassel, and R. Weischedel. 2004. The Automatic Content Extraction (ACE) Program-Tasks, Data, and Evaluation. Proceedings of LREC 2004, pages 837-840.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Proposal for an extension or traditional named entities: From guidelines to evaluation, an overview", |
| "authors": [ |
| { |
| "first": "Cyril", |
| "middle": [], |
| "last": "Grouin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Rosset", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Zweigenbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Karn", |
| "middle": [], |
| "last": "Fort", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Galibert", |
| "suffix": "" |
| }, |
| { |
| "first": "Ludovic", |
| "middle": [], |
| "last": "Quintard", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cyril Grouin, Sophie Rosset, Pierre Zweigenbaum, Karn Fort, Olivier Galibert, Ludovic Quintard. 2011. Proposal for an extension or traditional named entities: From guidelines to evaluation, an overview. In Proceedings of the Linguistic Annota- tion Workshop (LAW).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Eighteenth International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for segmenting and labeling sequence data. In Pro- ceedings of the Eighteenth International Confer- ence on Machine Learning (ICML), pages 282-289, Williamstown, MA, USA, June.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Pcfg models of linguistic tree representations", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "", |
| "pages": "613--632", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson. 1998. Pcfg models of linguistic tree representations. Computational Linguistics, 24:613-632.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Comparing stochastic approaches to spoken language understanding in multiple languages", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Hahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Dinarelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Raymond", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrice", |
| "middle": [], |
| "last": "Lef\u00e8vre", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Lehen", |
| "suffix": "" |
| }, |
| { |
| "first": "Renato", |
| "middle": [ |
| "De" |
| ], |
| "last": "Mori", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "Giuseppe", |
| "middle": [ |
| "Riccardi" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "IEEE Transactions on Audio, Speech and Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefan Hahn, Marco Dinarelli, Christian Raymond, Fabrice Lef\u00e8vre, Patrick Lehen, Renato De Mori, Alessandro Moschitti, Hermann Ney, and Giuseppe Riccardi. 2010. Comparing stochastic approaches to spoken language understanding in multiple lan- guages. IEEE Transactions on Audio, Speech and Language Processing (TASLP), 99.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A maximum entropy approach to natural language processing", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [ |
| "L" |
| ], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "J Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "22", |
| "issue": "", |
| "pages": "39--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vin- cent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. COMPU- TATIONAL LINGUISTICS, 22:39-71.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Practical very large scale CRFs", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Lavergne", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Capp\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Yvon", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings the 48th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "504--513", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Lavergne, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2010. Practical very large scale CRFs. In Proceed- ings the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 504-513. Association for Computational Linguistics, July.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Incremental feature selection and l1 regularization for relaxed maximum-entropy modeling", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Vasserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the International Conference on Empirical Methods for Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefan Riezler and Alexander Vasserman. 2004. In- cremental feature selection and l1 regularization for relaxed maximum-entropy modeling. In Pro- ceedings of the International Conference on Em- pirical Methods for Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Regularization and variable selection via the Elastic Net", |
| "authors": [ |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Hastie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of the Royal Statistical Society B", |
| "volume": "67", |
| "issue": "", |
| "pages": "301--320", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hui Zou and Trevor Hastie. 2005. Regularization and variable selection via the Elastic Net. Journal of the Royal Statistical Society B, 67:301-320.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Statistical parsing with a context-free grammar and word statistics", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence, AAAI'97/IAAI'97", |
| "volume": "", |
| "issue": "", |
| "pages": "598--603", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence, AAAI'97/IAAI'97, pages 598-603. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A maximum-entropyinspired parser", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference", |
| "volume": "", |
| "issue": "", |
| "pages": "132--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 2000. A maximum-entropy- inspired parser. In Proceedings of the 1st North American chapter of the Association for Computa- tional Linguistics conference, pages 132-139, San Francisco, CA, USA. Morgan Kaufmann Publish- ers Inc.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "New figures of merit for best-first probabilistic chart parsing", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [ |
| "A" |
| ], |
| "last": "Caraballo", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "", |
| "pages": "275--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sharon A. Caraballo and Eugene Charniak. 1997. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24:275-298.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Three generative, lexicalised models for statistical parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics, ACL '98", |
| "volume": "", |
| "issue": "", |
| "pages": "16--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Com- putational Linguistics and Eighth Conference of the European Chapter of the Association for Computa- tional Linguistics, ACL '98, pages 16-23, Strouds- burg, PA, USA. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Edge-based best-first chart parsing", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Sixth Workshop on Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "127--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak, Sharon Goldwater, and Mark John- son. 1998. Edge-based best-first chart parsing. In In Proceedings of the Sixth Workshop on Very Large Corpora, pages 127-133. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Training and evaluation of pos taggers on the french multitag corpus", |
| "authors": [ |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Allauzen", |
| "suffix": "" |
| }, |
| { |
| "first": "H\u00e9l\u00e9ne", |
| "middle": [], |
| "last": "Bonneau-Maynard", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexandre Allauzen and H\u00e9l\u00e9ne Bonneau-Maynard. 2008. Training and evaluation of pos taggers on the french multitag corpus. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), Marrakech, Morocco, may.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Approches et m\u00e9thodologies pour la r\u00e9ponse automatique\u00e0 des questions adapt\u00e9es\u00e0 un cadre interactif en domaine ouvert", |
| "authors": [ |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Galibert", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olivier Galibert. 2009. Approches et m\u00e9thodologies pour la r\u00e9ponse automatique\u00e0 des questions adapt\u00e9es\u00e0 un cadre interactif en domaine ouvert. Ph.D. thesis, Universit\u00e9 Paris Sud, Orsay.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The LIMSI multilingual, multitask QAst system", |
| "authors": [ |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Rosset", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Galibert", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Bernard", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Bilinski", |
| "suffix": "" |
| }, |
| { |
| "first": "Gilles", |
| "middle": [], |
| "last": "Adda", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access, CLEF'08", |
| "volume": "", |
| "issue": "", |
| "pages": "480--487", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rosset Sophie, Galibert Olivier, Bernard Guillaume, Bilinski Eric, and Adda Gilles. The LIMSI mul- tilingual, multitask QAst system. In Proceed- ings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilin- gual and multimodal information access, CLEF'08, pages 480-487, Berlin, Heidelberg, 2009. Springer- Verlag.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Efficient combined approach for named entity recognition in spoken language", |
| "authors": [ |
| { |
| "first": "Azeddine", |
| "middle": [], |
| "last": "Zidouni", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Rosset", |
| "suffix": "" |
| }, |
| { |
| "first": "Herv\u00e9", |
| "middle": [], |
| "last": "Glotin", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| ";" |
| ], |
| "last": "Makuhari", |
| "suffix": "" |
| }, |
| { |
| "first": "Japan", |
| "middle": [ |
| "John" |
| ], |
| "last": "Makhoul", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Kubala", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the International Conference of the Speech Communication Assosiation (Interspeech)", |
| "volume": "", |
| "issue": "", |
| "pages": "249--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Azeddine Zidouni, Sophie Rosset, and Herv\u00e9 Glotin. 2010. Efficient combined approach for named en- tity recognition in spoken language. In Proceedings of the International Conference of the Speech Com- munication Assosiation (Interspeech), Makuhari, Japan John Makhoul, Francis Kubala, Richard Schwartz, and Ralph Weischedel. 1999. Performance mea- sures for information extraction. In Proceedings of DARPA Broadcast News Workshop, pages 249-252.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Learning to Parse Natural Language with Maximum Entropy Models", |
| "authors": [ |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of Machine Learning", |
| "volume": "34", |
| "issue": "", |
| "pages": "151--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adwait Ratnaparkhi. 1999. Learning to Parse Natural Language with Maximum Entropy Models. Journal of Machine Learning, vol. 34, issue 1-3, pages 151- 175.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Discriminative Re-ranking for Natural Language Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of Machine Learning", |
| "volume": "31", |
| "issue": "1", |
| "pages": "25--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Terry Koo. 2005. Discriminative Re-ranking for Natural Language Parsing. Journal of Machine Learning, vol. 31, issue 1, pages 25-70.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Computational Linguistics", |
| "volume": "33", |
| "issue": "", |
| "pages": "493--552", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clark, Stephen and Curran, James R. 2007. Wide- Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Journal of Computational Lin- guistics, vol. 33, issue 4, pages 493-552.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Efficient, Feature-based", |
| "authors": [ |
| { |
| "first": "Jenny", |
| "middle": [ |
| "R" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Kleeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Conditional Random Field Parsing. Proceedings of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "959--967", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Finkel, Jenny R. and Kleeman, Alex and Manning, Christopher D. 2008. Efficient, Feature-based, Conditional Random Field Parsing. Proceedings of the Association for Computational Linguistics, pages 959-967, Columbus, Ohio.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Training a Log-Linear Parser with Loss Functions via Softmax-Margin", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lopez", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of Empirical Methods for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "333--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Auli and Adam Lopez 2011. Training a Log- Linear Parser with Loss Functions via Softmax- Margin. Proceedings of Empirical Methods for Natural Language Processing, pages 333-343, Ed- inburgh, U.K.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Tree-Structured Conditional Random Fields for Semantic Annotation", |
| "authors": [ |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Tang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mingcai", |
| "middle": [], |
| "last": "Hong", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan-Zi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Bangyong", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedgins of the International Semantic Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "640--653", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tang, Jie and Hong, MingCai and Li, Juan-Zi and Liang, Bangyong. 2006. Tree-Structured Con- ditional Random Fields for Semantic Annotation. Proceedgins of the International Semantic Web Conference, pages 640-653, Edited by Springer.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Structured and Extended Named Entity Evaluation in Automatic Speech Transcriptions", |
| "authors": [ |
| { |
| "first": "Ludovic", |
| "middle": [], |
| "last": "Quintard", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ludovic Quintard. 2011. Struc- tured and Extended Named Entity Evaluation in Au- tomatic Speech Transcriptions. IJCNLP 2011.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Models Cascade for Tree-Structured Named Entity Detection IJCNLP", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Dinarelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Rosset", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Dinarelli, Sophie Rosset. Models Cascade for Tree-Structured Named Entity Detection IJCNLP 2011.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Examples of structured named entities annotated on the data used in this work", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "An example of named entity tree corresponding to entities of a whole sentence. Tree leaves, corresponding to sentence words have been removed to keep readability", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Processing schema of the two-steps approach proposed in this work: CRF plus PCFG", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "Baseline tree representations used in the PCFG parsing model Figure 5: Filler-parent tree representations used in the PCFG parsing model", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "Parent-context tree representations used in the PCFG parsing model Figure 7: Parent-node tree representations used in the PCFG parsing model", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "Parent-node-filler tree representations used in the PCFG parsing model", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>Broadcast News (BN) and Broadcast Conversations (BC)</td></tr></table>", |
| "type_str": "table", |
| "text": "Statistics on the test set of the Quaero corpus, divided in", |
| "html": null, |
| "num": null |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td>models used in this work</td></tr><tr><td>6.2 Evaluation Metrics</td></tr><tr><td>All results are expressed in terms of Slot Error</td></tr><tr><td>Rate (SER) (Makhoul</td></tr></table>", |
| "type_str": "table", |
| "text": "Statistics showing the characteristics of the different", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "content": "<table><tr><td/><td>DEV</td><td/><td>TEST</td><td/></tr><tr><td>Model</td><td>SER</td><td>F1</td><td>SER</td><td>F1</td></tr><tr><td>baseline</td><td colspan=\"2\">33.5% 72.5%</td><td colspan=\"2\">33.4% 72.8%</td></tr><tr><td>filler-parent</td><td colspan=\"2\">31.3% 74.4%</td><td colspan=\"2\">33.4% 72.7%</td></tr><tr><td>parent-context</td><td colspan=\"2\">30.9% 74.6%</td><td colspan=\"2\">33.3% 72.8%</td></tr><tr><td>parent-node</td><td colspan=\"2\">31.2% 77.8%</td><td colspan=\"2\">31.4% 79.5%</td></tr><tr><td>parent-node-filler</td><td colspan=\"2\">28.7% 78.9%</td><td colspan=\"2\">30.2% 80.3%</td></tr></table>", |
| "type_str": "table", |
| "text": "Results computed from oracle predictions obtained with the different models presented in this work", |
| "html": null, |
| "num": null |
| }, |
| "TABREF8": { |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Results obtained with our combined algorithm based on CRF and PCFG will have more rules. For example, the rule pers.ind \u21d2 name.first name.last can appear as it is or contextualized with func.ind, like in figure", |
| "html": null, |
| "num": null |
| }, |
| "TABREF10": { |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Results obtained with our combined algorithm based on CRF and PCFG", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |