{
"paper_id": "E09-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:49:10.485158Z"
},
"title": "Clique-Based Clustering for improving Named Entity Recognition systems",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Ah-Pine",
"suffix": "",
"affiliation": {},
"email": "julien.ah-pine@xrce.xerox.com"
},
{
"first": "Guillaume",
"middle": [],
"last": "Jacquet",
"suffix": "",
"affiliation": {
"laboratory": "Xerox Research Centre Europe 6, chemin de Maupertuis",
"institution": "",
"location": {
"postCode": "38240",
"settlement": "Meylan",
"country": "France"
}
},
"email": "guillaume.jacquet@xrce.xerox.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a system which builds, in a semi-supervised manner, a resource that aims at helping a NER system to annotate corpus-specific named entities. This system is based on a distributional approach which uses syntactic dependencies for measuring similarities between named entities. The specificity of the presented method, however, is to combine a clique-based approach and a clustering technique that amounts to a soft clustering method. Our experiments show that the resource constructed by using this clique-based clustering system makes it possible to improve different NER systems.",
"pdf_parse": {
"paper_id": "E09-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a system which builds, in a semi-supervised manner, a resource that aims at helping a NER system to annotate corpus-specific named entities. This system is based on a distributional approach which uses syntactic dependencies for measuring similarities between named entities. The specificity of the presented method, however, is to combine a clique-based approach and a clustering technique that amounts to a soft clustering method. Our experiments show that the resource constructed by using this clique-based clustering system makes it possible to improve different NER systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the Information Extraction domain, named entities (NEs) are among the most important textual units as they express an important part of the meaning of a document. Named entity recognition (NER) is not a new domain (see MUC 1 and ACE 2 conferences) but new needs have appeared concerning the processing of NEs. For instance, the NE Oxford illustrates the different ambiguity types that are interesting to address:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 intra-annotation ambiguity: Wikipedia lists more than 25 cities named Oxford in the world \u2022 systematic inter-annotation ambiguity: the name of a city can be used to refer to the university or the football club of that city. This is the case for Oxford or Newcastle \u2022 non-systematic inter-annotation ambiguity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Oxford is also a company, unlike Newcastle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main goal of our system is to act in a complementary way with an existing NER system, in order to enhance its results. We address two kinds of issues: first, we want to detect and correctly annotate corpus-specific NEs 3 that the NER system could have missed; second, we want to correct some wrong annotations provided by the existing NER system due to ambiguity. In section 3, we give some examples of such corrections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. We present, in section 2, the global architecture of our system and, from \u00a72.1 to \u00a72.6, we give details about each of its steps. In section 3, we present the evaluation of our approach when it is combined with other classic NER systems. We show that the resulting hybrid systems perform better with respect to F-measure; in the best case, the latter increased by 4.84 points. Furthermore, we give examples of successful corrections of NE annotations thanks to our approach. Then, in section 4, we discuss related work. Finally, we sum up the main points of this paper in section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a corpus, the main objectives of our system are: to detect potential NEs; to compute the possible annotations for each NE; and then to annotate each occurrence of these NEs with the right annotation by analyzing its local context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the system",
"sec_num": "2"
},
{
"text": "We assume that this corpus dependent approach allows an easier NE annotation. Indeed, even if a NE such as Oxford can have many annotation types, it will certainly have less annotation possibilities in a specific corpus. Figure 1 presents the global architecture of our system. The most important part concerns steps 3 ( \u00a72.3) and 4 ( \u00a72.4). The aim of these subprocesses is to group NEs which have the same annotation with respect to a given context. On the one hand, clique-based methods (see \u00a72.3 for Figure 1 : General description of our system details on cliques) are interesting as they allow the same NE to be in different cliques. In other words, cliques allow to represent the different possible annotations of a NE. The clique-based approach drawback however, is the over production of cliques which corresponds to an artificial over production of possible annotations for a NE. On the other hand, clustering methods aim at structuring a data set and such techniques can be seen as data compression processes. However, a simple NEs hard clustering doesn't allow a NE to be in several clusters and thus to express its different annotations. Then, our proposal is to combine both methods in a clique-based clustering framework. This combination leads to a soft-clustering approach that we denote CBC system. The following paragraphs, from 2.1 to 2.6, describe the respective steps mentioned in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 229,
"text": "Figure 1",
"ref_id": null
},
{
"start": 504,
"end": 512,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1402,
"end": 1410,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Description of the system",
"sec_num": "2"
},
{
"text": "Different methods exist for detecting potential NEs. In our system, we used some lexico-syntactic constraints to extract expressions from a corpus because it allows detecting some corpus-specific NEs. In our approach, a potential NE is a noun starting with an upper-case letter or a noun phrase which is (see (Ehrmann and Jacquet, 2007) for similar use):",
"cite_spans": [
{
"start": 309,
"end": 336,
"text": "(Ehrmann and Jacquet, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detection of potential Named Entities",
"sec_num": "2.1"
},
{
"text": "\u2022 a governor argument of an attribute syntactic relation with a noun as governee argument (e.g. president \u2212attribute\u2192 George Bush)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detection of potential Named Entities",
"sec_num": "2.1"
},
{
"text": "\u2022 a governee argument of a modifier syntactic relation with a noun as a governor argument (e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detection of potential Named Entities",
"sec_num": "2.1"
},
{
"text": "company \u2190modifier\u2212 Coca-Cola). The list of potential NEs extracted from the corpus will be denoted NE and the number of NEs |NE|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "company modifier",
"sec_num": null
},
{
"text": "The distributional approach aims at evaluating a distance between words based on their syntactic distribution. This method assumes that words which appear in the same contexts are semantically similar (Harris, 1951) .",
"cite_spans": [
{
"start": 201,
"end": 215,
"text": "(Harris, 1951)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "To construct the distributional space associated to a corpus, we use a robust parser (in our experiments, we used XIP parser (A\u00eft et al., 2002)) to extract chunks (i.e. nouns, noun phrases, . . . ) and syntactic dependencies between these chunks. Given this parser's output, we identify triple instances. Each triple has the form w 1 .R.w 2 where w 1 and w 2 are chunks and R is a syntactic relation (Lin, 1998) , (Kilgarriff et al., 2004) .",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "(A\u00eft et al., 2002))",
"ref_id": "BIBREF0"
},
{
"start": 400,
"end": 411,
"text": "(Lin, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 414,
"end": 439,
"text": "(Kilgarriff et al., 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "One triple gives two contexts (1.w_1.R and 2.w_2.R) and two chunks (w_1 and w_2). Then, we only select chunks w which belong to NE. Each point in the distributional space is a NE and each dimension is a syntactic context. CT denotes the set of all syntactic contexts and |CT| represents its cardinality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "We illustrate this construction on the sentence \"provide Albania with food aid\". We obtain the three following triples (note that aid and food aid are considered as two different chunks): According to the NEs detection method described previously, we only keep the chunks and contexts which are in bold in the above table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "We also use a heuristic in order to reduce the over production of chunks and contexts: in our experiments for example, each NE and each context should appear more than 10 times in the corpus to be considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "D is the resulting (|NE| \u00d7 |CT|) NE-Context matrix, where e_i : i = 1, ..., |NE| is a NE and c_j : j = 1, ..., |CT| is a syntactic context. Then we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "D(e_i, c_j) = number of occurrences of c_j associated with e_i (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional space of NEs",
"sec_num": "2.2"
},
{
"text": "A clique in a graph is a set of pairwise adjacent nodes which is equivalent to a complete subgraph. A maximal clique is a clique that is not a subset of any other clique. Maximal cliques computation was already employed for semantic space representation (Ploux and Victorri, 1998) . In this work, cliques of lexical units are used to represent a precise meaning. Similarly, we compute cliques of NEs in order to represent a precise annotation.",
"cite_spans": [
{
"start": 254,
"end": 280,
"text": "(Ploux and Victorri, 1998)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cliques of NEs computation",
"sec_num": "2.3"
},
{
"text": "For example, Oxford is an ambiguous NE but a clique such as <Cambridge, Oxford, Edinburgh University, Edinburgh, Oxford University> allows to focus on the specific annotation <organization> (see (Ehrmann and Jacquet, 2007) for similar use). Given the distributional space described in the previous paragraph, we use a probabilistic framework for computing similarities between NEs. The approach that we propose is inspired by the language modeling framework introduced in the information retrieval field (see for example (Lavrenko and Croft, 2003) ). Then, we construct cliques of NEs based on these similarities.",
"cite_spans": [
{
"start": 195,
"end": 222,
"text": "(Ehrmann and Jacquet, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 521,
"end": 547,
"text": "(Lavrenko and Croft, 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cliques of NEs computation",
"sec_num": "2.3"
},
{
"text": "We first compute the maximum likelihood estimation for a NE e_i to be associated with a context c_j:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "P_ml(c_j|e_i) = D(e_i, c_j) / |e_i|, where |e_i| = \u2211_{j=1}^{|CT|} D(e_i, c_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "is the total number of occurrences of the NE e_i in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "This leads to sparse data which is not suitable for measuring similarities. In order to counter this problem, we use the Jelinek-Mercer smoothing method:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "D'(e_i, c_j) = \u03bb P_ml(c_j|e_i) + (1 \u2212 \u03bb) P_ml(c_j|CORP)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "where CORP is the corpus and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "P_ml(c_j|CORP) = \u2211_i D(e_i, c_j) / \u2211_{i,j} D(e_i, c_j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "In our experiments, we took \u03bb = 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "Given D', we then use the cross-entropy as a similarity measure between NEs. Let us denote by s this similarity matrix; we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "s(e_i, e_{i'}) = \u2212 \u2211_{c_j \u2208 CT} D'(e_i, c_j) log(D'(e_{i'}, c_j)) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures between NEs",
"sec_num": "2.3.1"
},
{
"text": "Next, we convert s into an adjacency matrix denoted \u015d. In a first step, we binarize s as follows. Let us denote by {e_{i_1}, ..., e_{i_{|NE|}}} the list of NEs ranked in descending order of their similarity with e_i. Then, L(e_i) is the list of NEs which are considered as the nearest neighbors of e_i according to the following definition:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "L(e_i) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "{e_{i_1}, ..., e_{i_p} : \u2211_{i'=1}^{p} s(e_i, e_{i_{i'}}) / \u2211_{i'=1}^{|NE|} s(e_i, e_{i'}) \u2264 a; p \u2264 b} where a \u2208 [0, 1] and b \u2208 {1, ..., |NE|}. L(e_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "gathers the most significant nearest neighbors of e_i by choosing the ones which account for the fraction a of the most relevant similarities, provided that the neighborhood's size doesn't exceed b. This approach can be seen as a flexible k-nearest-neighbor method. In our experiments we chose a = 20% and b = 10. Finally, we symmetrize the similarity matrix as follows and we obtain \u015d:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "\u015d(e_i, e_{i'}) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "1 if e_{i'} \u2208 L(e_i) or e_i \u2208 L(e_{i'}); 0 otherwise (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From similarity matrix to adjacency matrix",
"sec_num": "2.3.2"
},
{
"text": "Given \u015d, the adjacency matrix between NEs, we compute the set of maximal cliques of NEs, denoted CLI. Then, we construct the matrix T with general term:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cliques computation",
"sec_num": "2.3.3"
},
{
"text": "T(cli_k, e_i) = 1 if e_i \u2208 cli_k; 0 otherwise (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cliques computation",
"sec_num": "2.3.3"
},
{
"text": "where cli_k is an element of CLI. T will be the input matrix for the clustering method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cliques computation",
"sec_num": "2.3.3"
},
{
"text": "In the following, we also use cli_k for denoting the vector (T(cli_k, e_1), ..., T(cli_k, e_{|NE|})). Figure 2 shows some cliques which contain Oxford that we can obtain with this method. This figure also illustrates the over production of cliques since at least cli8, cli10 and cli12 can be annotated as <organization>.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Cliques computation",
"sec_num": "2.3.3"
},
{
"text": "We use a clustering technique in order to group cliques of NEs which are mutually highly similar. The clusters of cliques which contain a NE allow finding the different possible annotations of this NE. This clustering technique must be able to construct \"pure\" clusters in order to have precise annotations. In that case, it is desirable to avoid fixing the number of clusters. That is why we propose to use the Relational Analysis approach described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cliques clustering",
"sec_num": "2.4"
},
{
"text": "We propose to apply the Relational Analysis approach (RA) which is a clustering model that doesn't require to fix the number of clusters (Michaud and Marcotorchino, 1980) , (B\u00e9d\u00e9carrax and Warnesson, 1989) . This approach takes as input a similarity matrix. In our context, since we want to cluster cliques of NEs, the corresponding similarity matrix S between cliques is given by the matrix of dot products between the rows of T: S = T \u2022 T^t. The general term of this similarity matrix is: S(cli_k, cli_{k'}) = S_{kk'} = \u27e8cli_k, cli_{k'}\u27e9. Then, we want to maximize the following clustering function:",
"cite_spans": [
{
"start": 137,
"end": 170,
"text": "(Michaud and Marcotorchino, 1980)",
"ref_id": "BIBREF17"
},
{
"start": 173,
"end": 205,
"text": "(B\u00e9d\u00e9carrax and Warnesson, 1989)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis approach",
"sec_num": "2.4.1"
},
{
"text": "\u2206(S, X) = \u2211_{k,k'=1}^{|CLI|} (S_{kk'} \u2212 \u2211_{(k'',k''')\u2208S+} S_{k''k'''} / |S+|) X_{kk'} (6), where S+ = {(cli_k, cli_{k'}) : S_{kk'} > 0} and the term in parentheses is the contribution cont_{kk'}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis approach",
"sec_num": "2.4.1"
},
{
"text": "In other words, cli_k and cli_{k'} are more likely to be in the same cluster provided that their similarity measure, S_{kk'}, is greater than or equal to the mean of the positive similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis approach",
"sec_num": "2.4.1"
},
{
"text": "X is the solution we are looking for. It is a binary relational matrix with general term: X_{kk'} = 1 if cli_k is in the same cluster as cli_{k'}, and X_{kk'} = 0 otherwise. X represents an equivalence relation. Thus, it must respect the following properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis approach",
"sec_num": "2.4.1"
},
{
"text": "\u2022 binarity: X_{kk'} \u2208 {0, 1}, \u2200k, k'; \u2022 reflexivity: X_{kk} = 1, \u2200k; \u2022 symmetry: X_{kk'} \u2212 X_{k'k} = 0, \u2200k, k'; \u2022 transitivity: X_{kk'} + X_{k'k''} \u2212 X_{kk''} \u2264 1, \u2200k, k', k''.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis approach",
"sec_num": "2.4.1"
},
{
"text": "As the objective function is linear with respect to X and as the constraints that X must respect are linear equations, we can solve the clustering problem using an integer linear programming solver. However, this problem is NP-hard. As a result, in practice, we use heuristics for dealing with large data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis approach",
"sec_num": "2.4.1"
},
{
"text": "The presented heuristic is quite similar to another algorithm described in (Hartigan, 1975 ) known as the \"leader\" algorithm. But unlike this last approach which is based upon Euclidean distances and inertial criteria, the RA heuristic aims at maximizing the criterion given in (6). A sketch of this heuristic is given in Algorithm 1, (see (Marcotorchino and Michaud, 1981) for further details).",
"cite_spans": [
{
"start": 75,
"end": 90,
"text": "(Hartigan, 1975",
"ref_id": "BIBREF11"
},
{
"start": 340,
"end": 373,
"text": "(Marcotorchino and Michaud, 1981)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Relational Analysis heuristic",
"sec_num": "2.4.2"
},
{
"text": "Require: nbitr = number of iterations; \u03bamax = maximal number of clusters; S the similarity matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "m \u2190 \u2211_{(k,k')\u2208S+} S_{kk'} / |S+|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "Take the first clique cli_k as the first element of the first cluster; \u03ba = 1, where \u03ba is the current number of clusters. for q = 1 to nbitr do: for k = 1 to |CLI| do: for l = 1 to \u03ba do: compute the contribution of clique cli_k with cluster clu_l: cont_l = \u2211_{cli_{k'} \u2208 clu_l} (S_{kk'} \u2212 m); end for. clu_{l*} is the cluster id which has the highest contribution with clique cli_k and cont_{l*} is the corresponding contribution value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "if (cont_{l*} < (S_{kk} \u2212 m)) \u2227 (\u03ba < \u03bamax) then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "Create a new cluster with clique cli_k as its first element and \u03ba \u2190 \u03ba + 1; else assign clique cli_k to cluster clu_{l*}; if the cluster from which cli_k was taken before its new assignment is empty, then \u03ba \u2190 \u03ba \u2212 1; end if; end if; end for; end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "We have to provide a number of iterations or/and a delta threshold in order to obtain an approximate solution in a reasonable processing time. Besides, a maximum number of clusters is also required, but since we don't want to fix this parameter, we set \u03bamax = |CLI| by default.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "Basically, this heuristic has a O(nbitr \u00d7 \u03ba max \u00d7 |CLI|) computation cost. In general terms, we can assume that nbitr << |CLI|, but not \u03ba max << |CLI|. Thus, in the worst case, the algorithm has a O(\u03ba max \u00d7 |CLI|) computation cost. Figure 3 gives some examples of clusters of cliques 5 obtained using the RA approach. Now, we want to exploit the clusters of cliques in order to annotate NE occurrences. Then, we need to construct a NE resource where for each pair (NE x syntactic context) we have an annotation. To this end, we need first, to assign a cluster to each pair (NE x syntactic context) ( \u00a72.5.1) and second, to assign each cluster an annotation ( \u00a72.5.2).",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Algorithm 1 RA heuristic",
"sec_num": null
},
{
"text": "For each cluster clu_l we provide a score F_c(c_j, clu_l) for each context c_j and a score F_e(e_i, clu_l) for each NE e_i. (Footnote 5: we only represent the NEs and their frequency in the cluster, which corresponds to the number of cliques which contain the NEs; furthermore, we represent the most relevant contexts for this cluster according to equation (7) introduced in the following.) These scores 6 are given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "F_c(c_j, clu_l) = (\u2211_{e_i \u2208 clu_l} D(e_i, c_j) / \u2211_{i=1}^{|NE|} D(e_i, c_j)) \u00d7 \u2211_{e_i \u2208 clu_l} 1_{D(e_i, c_j) \u2260 0} (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "where 1 {P } equals 1 if P is true and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F e (e i , clu l ) = #(clu l , e i )",
"eq_num": "(8)"
}
],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "Given a NE e_i and a syntactic context c_j, we now introduce the contextual cluster assignment matrix A_ctxt(e_i, c_j) as follows: A_ctxt(e_i, c_j) = clu* where: clu* = Argmax_{clu_l : clu_l \u220b e_i; F_e(e_i, clu_l) > 1} F_c(c_j, clu_l).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "In other words, clu* is the cluster which contains more than one occurrence of e_i and has the highest score related to the context c_j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "Furthermore, we compute a default cluster assignment matrix A_def, which does not depend on the local context: A_def(e_i) = clu\u2022 where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "clu\u2022 = Argmax_{clu_l : clu_l \u220b cli_k; cli_k \u220b e_i} |cli_k|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "In other words, clu\u2022 is the cluster containing the biggest clique cli_k containing e_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster assignment to each pair (NE x syntactic context)",
"sec_num": "2.5.1"
},
{
"text": "So far, the different steps that we have introduced were unsupervised. In this paragraph, our aim is to give a correct annotation to each cluster (hence, to all NEs in this cluster). To this end, we need some annotation seeds and we propose two different semi-supervised approaches (regarding the classification given in (Nadeau and Sekine, 2007) ). The first one is the manual annotation of some clusters. The second one proposes an automatic cluster annotation and assumes that we have some NEs that are already annotated.",
"cite_spans": [
{
"start": 321,
"end": 346,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clusters annotation",
"sec_num": "2.5.2"
},
{
"text": "Manual annotation of clusters. This method is tedious but it is the best way to match the corpus data with specific guidelines for annotating NEs. It also allows identifying new types of annotation. We used the ACE2007 guidelines for manually annotating each cluster. However, our CBC system leads to a high number of clusters of cliques and we can't annotate each of them. Fortunately, it also leads to a distribution of the clusters' size (number of cliques per cluster) which is similar to a Zipf distribution. Consequently, in our experiments, if we annotate the 100 biggest clusters, we annotate around eighty percent of the detected NEs (see \u00a73).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clusters annotation",
"sec_num": "2.5.2"
},
{
"text": "We suppose in this context that many NEs in NE are already annotated. Thus, under this assumption, each cluster provided by the CBC system contains both annotated and non-annotated NEs. Our goal is to exploit the available annotations for refining the annotation of a cluster by implicitly taking into account the syntactic contexts, and for propagating the available annotations to NEs which have none.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic annotation of clusters",
"sec_num": null
},
{
"text": "Given a cluster clu_l of cliques, #(clu_l, e_i) is the weight of the NE e_i in this cluster: it is the number of cliques in clu_l that contain e_i. For each annotation a_p in the set of all possible annotations AN, we compute its associated score in cluster clu_l: it is the sum of the weights of the NEs in clu_l that are annotated a_p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic annotation of clusters",
"sec_num": null
},
{
"text": "Then, if the maximal annotation score is greater than a simple majority (half) of the total votes 7 , we assign the corresponding annotation to the cluster. Note that the annotation <none> 8 is processed in the same way as any other annotation. Thus, a cluster can be globally annotated <none>. The limit of this automatic approach is that it doesn't allow annotating NE types other than the ones already available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic annotation of clusters",
"sec_num": null
},
{
"text": "In the following, we will denote by A clu (clu l ) the annotation of the cluster clu l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic annotation of clusters",
"sec_num": null
},
{
"text": "The cluster annotation matrix A clu associated to the contextual cluster assignment matrix A ctxt and the default cluster assignment matrix A def introduced previously will be called the CBC system's NE resource (or shortly the NE resource).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic annotation of clusters",
"sec_num": null
},
{
"text": "In this paragraph, we describe how, given the CBC system's NE resource, we annotate occurrences of NEs in the studied corpus with respect to their local context. Note that for an occurrence of a NE e_i, its associated local context is the set of syntactic dependencies c_j in which e_i is involved. (Footnote 7: the total number of votes is given by \u2211_{e_i \u2208 clu_l} #(clu_l, e_i).)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation processes using the NE resource",
"sec_num": "2.6"
},
{
"text": "8 The NEs which don't have any annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation processes using the NE resource",
"sec_num": "2.6"
},
{
"text": "2.6.1 NEs annotation process for the CBC system. Given a NE occurrence and its local context, we can use A_ctxt(e_i, c_j) and A_def(e_i) in order to get the default annotation A_clu(A_def(e_i)) and the list of contextual annotations {A_clu(A_ctxt(e_i, c_j))}_j. Then, for annotating this NE occurrence using our NE resource, we apply the following rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation processes using the NE resource",
"sec_num": "2.6"
},
{
"text": "\u2022 if the list of contextual annotations {A clu (A ctxt (e i , c j ))} j is conflictual, we annotate the NE occurrence as <none>, \u2022 if the list of contextual annotations is nonconflictual, then we use the corresponding annotation to annotate the NE occurrence \u2022 if the list of contextual annotations is empty, we use the default annotation A clu (A def (e i )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation processes using the NE resource",
"sec_num": "2.6"
},
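The three rules above can be sketched as follows (an illustrative rendering under our own naming; labels are plain strings and the function name is hypothetical):

```python
def annotate_occurrence(contextual_annotations, default_annotation):
    """Apply the CBC annotation rules to one NE occurrence.

    contextual_annotations: list of cluster annotations
        A_clu(A_ctxt(e_i, c_j)), one per syntactic dependency
        of the occurrence (may be empty).
    default_annotation: the default annotation A_clu(A_def(e_i)).
    """
    if not contextual_annotations:
        # Empty local context: fall back on the default annotation.
        return default_annotation
    if len(set(contextual_annotations)) > 1:
        # Conflicting contextual annotations: annotate as <none>.
        return "<none>"
    # Non-conflictual list: use the unique contextual annotation.
    return contextual_annotations[0]
```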
{
"text": "The NE resource plus the annotation process described in this paragraph lead to a NER system based on the CBC system. This NER system will be called CBC-NER system and it will be tested in our experiments both alone and as a complementary resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation processes using the NE resource",
"sec_num": "2.6"
},
{
"text": "We place ourselves into an hybrid situation where we have two NER systems (NER 1 + NER 2) which provide two different lists of annotated NEs. We want to combine these two systems when annotating NEs occurrences. Therefore, we resolve any conflicts by applying the following rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation process for an hybrid system",
"sec_num": "2.6.2"
},
{
"text": "\u2022 If the same NE occurrence has two different annotations from the two systems then there are two cases. If one of the two system is CBC-NER system then we take its annotation; otherwise we take the annotation provided by the NER system which gave the best precision. \u2022 If a NE occurrence is included in another one we only keep the biggest one and its annotation. For example, if Jacques Chirac is annotated <person> by one system and Chirac by <person> by the other system, then we only keep the first annotation. \u2022 If two NE occurrences are contiguous and have the same annotation, we merge the two NEs in one NE occurrence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEs annotation process for an hybrid system",
"sec_num": "2.6.2"
},
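The first two conflict-resolution rules above can be sketched as follows (function names, system names, and the span representation are our own illustrative assumptions, not the paper's implementation):

```python
def resolve_conflict(ann1, ann2, cbc_system, best_precision_system):
    """Pick one of two conflicting annotations for the same NE occurrence.
    Each annotation is a (system_name, label) pair."""
    for system, label in (ann1, ann2):
        if system == cbc_system:
            return label  # the CBC-NER system's annotation wins
    # Otherwise prefer the system with the best measured precision.
    return ann1[1] if ann1[0] == best_precision_system else ann2[1]

def keep_longest(span1, span2):
    """If one NE span is included in the other, keep the longer one.
    Spans are (start, end, label) character offsets, e.g. the span of
    'Jacques Chirac' includes the span of 'Chirac'."""
    s1, e1, _ = span1
    s2, e2, _ = span2
    if s1 <= s2 and e2 <= e1:
        return span1
    if s2 <= s1 and e1 <= e2:
        return span2
    return None  # no inclusion: keep both spans
```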
{
"text": "The system described in this paper rather target corpus-specific NE annotation. Therefore, our ex-periments will deal with a corpus of recent news articles (see (Shinyama and Sekine, 2004) for motivations regarding our corpus choice) rather than well-known annotated corpora. Our corpus is constituted of news in English published on the web during two weeks in June 2008. This corpus is constituted of around 300,000 words (10Mb) which doesn't represent a very large corpus. These texts were taken from various press sources and they involve different themes (sports, technology, . . . ). We extracted randomly a subset of articles and manually annotated 916 NEs (in our experiments, we deal with three types of annotation namely <person>, <organization> and <location>). This subset constitutes our test set.",
"cite_spans": [
{
"start": 161,
"end": 188,
"text": "(Shinyama and Sekine, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "In our experiments, first, we applied the XIP parser (A\u00eft et al., 2002) to the whole corpus in order to construct the frequency matrix D given by (1). Next, we computed the similarity matrix between NEs according to (2) in order to obtain\u015d defined by (4). Using the latter, we computed cliques of NEs that allow us to obtain the assignment matrix T given by (5). Then we applied the clustering heuristic described in Algorithm 1. At this stage, we want to build the NE resource using the clusters of cliques. Therefore, as described in \u00a72.5, we applied two kinds of clusters annotations: the manual and the automatic processes. For the first one, we manually annotated the 100 biggest clusters of cliques. For the second one, we exploited the annotations provided by XIP NER (Brun and Hag\u00e8ge, 2004) and we propagated these annotations to the different clusters (see \u00a72.5.2).",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(A\u00eft et al., 2002)",
"ref_id": "BIBREF0"
},
{
"start": 775,
"end": 798,
"text": "(Brun and Hag\u00e8ge, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The different materials that we obtained constitute the CBC system's NE resource. Our aim now is to exploit this resource and to show that it allows to improve the performances of different classic NER systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The different NER systems that we tested are the following ones:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "\u2022 CBC-NER system M (in short CBC M) based on the CBC system's NE resource using the manual cluster annotation (line 1 in Table 1 ), \u2022 CBC-NER system A (in short CBC A) based on the CBC system's NE resource using the automatic cluster annotation (line 1 in Table 1 ), \u2022 XIP NER or in short XIP (Brun and Hag\u00e8ge, 2004) (Finkel et al., 2005 ) (line 3 in Table 1 ), \u2022 GATE NER or in short GATE (Cunningham et al., 2002 ) (line 4 in Table 1 ), \u2022 and several hybrid systems which are given by the combination of pairs taken among the set of the three last-mentioned NER systems (lines 5 to 7 in Table 1 ). Notice that these baseline hybrid systems use the annotation combination process described in \u00a72.6.1.",
"cite_spans": [
{
"start": 293,
"end": 316,
"text": "(Brun and Hag\u00e8ge, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 317,
"end": 337,
"text": "(Finkel et al., 2005",
"ref_id": "BIBREF8"
},
{
"start": 390,
"end": 414,
"text": "(Cunningham et al., 2002",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 256,
"end": 263,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 351,
"end": 358,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 428,
"end": 435,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 589,
"end": 596,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "In Table 1 we first reported in each line, the results given by each system when they are applied alone (figures in italics). These performances represent our baselines. Second, we tested for each baseline system, an extended hybrid system that integrates the CBC-NER systems (with respect to the combination process detailed in \u00a72.6.2).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The first two lines of Table 1 show that the two CBC-NER systems alone lead to rather poor results. However, our aim is to show that the CBC-NER system is, despite its low performances alone, complementary to other basic NER systems. In other words, we want to show that the exploitation of the CBC system's NE resource is beneficial and non-redundant compared to other baseline NER systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "This is actually what we obtained in Table 1 as for each line from 2 to 7, the extended hybrid systems that integrate the CBC-NER systems (M or A) always perform better than the baseline either in terms of precision 9 or recall. For each line, we put in bold the best performance according to the F-measure.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "These results allow us to show that the NE resource built using the CBC system is complementary to any baseline NER systems and that it allows to improve the results of the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "In order to illustrate why the CBC-NER systems are beneficial, we give below some examples taken from the test corpus for which the CBC system A had allowed to improve the performances by respectively disambiguating or correcting a wrong annotation or detecting corpus-specific NEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "First, in the sentence \"From the start, his parents, Lourdes and Hemery, were with him.\", the baseline hybrid system Stanford + XIP annotated the ambiguous NE \"Lourdes\" as <location> whereas Stanford + XIP + CBC A gave the correct annotation <person>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Second, in the sentence \"Got 3 percent chance of survival, what ya gonna do?\" The back read, \"A) Fight Through, b) Stay Strong, c) Overcome Because I Am a Warrior.\", the baseline hybrid system Stanford + XIP annotated \"Warrior\" as <organization> whereas Stanford + XIP + CBC A corrected this annotation with <none>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Finally, in the sentence \"Matthew, also a favorite to win in his fifth and final appearance, was stunningly eliminated during the semifinal round Friday when he misspelled \"secernent\".\", the baseline hybrid system Stanford + XIP didn't give any annotation to \"Matthew\" whereas Stanford + XIP + CBC A allowed to give the annotation <person>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Many previous works exist in NEs recognition and classification. However, most of them do not build a NEs resource but exploit external gazetteers (Bunescu and Pasca, 2006) , (Cucerzan, 2007) .",
"cite_spans": [
{
"start": 147,
"end": 172,
"text": "(Bunescu and Pasca, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 175,
"end": 191,
"text": "(Cucerzan, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "4"
},
{
"text": "A recent overview of the field is given in (Nadeau and Sekine, 2007) . According to this paper, we can classify our method in the category of semi-supervised approaches. Our proposal is close to (Cucchiarelli and Velardi, 2001) as it uses syntactic relations ( \u00a72.2) and as it relies on existing NER systems ( \u00a72.6.2). However, the particularity of our method concerns the clustering of cliques of NEs that allows both to represent the different annotations of the NEs and to group the latter with respect to one precise annotation according to a local context.",
"cite_spans": [
{
"start": 43,
"end": 68,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 195,
"end": 227,
"text": "(Cucchiarelli and Velardi, 2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "4"
},
{
"text": "Regarding this aspect, (Lin and Pantel, 2001 ) and (Ngomo, 2008) also use a clique computation step and a clique merging method. However, they do not deal with ambiguity of lexical units nor with NEs. This means that, in their system, a lexical unit can be in only one merged clique.",
"cite_spans": [
{
"start": 23,
"end": 44,
"text": "(Lin and Pantel, 2001",
"ref_id": "BIBREF14"
},
{
"start": 51,
"end": 64,
"text": "(Ngomo, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "4"
},
{
"text": "From a methodological point of view, our proposal is also close to (Ehrmann and Jacquet, 2007) as the latter proposes a system for NEs finegrained annotation, which is also corpus dependent. However, in the present paper we use all syntactic relations for measuring the similarity between NEs whereas in the previous mentioned work, only specific syntactic relations were exploited. Moreover, we use clustering techniques for dealing with the issue related to over production of cliques.",
"cite_spans": [
{
"start": 67,
"end": 94,
"text": "(Ehrmann and Jacquet, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "4"
},
{
"text": "In this paper, we construct a NE resource from the corpus that we want to analyze. In that context, (Pasca, 2004 ) presents a lightly supervised method for acquiring NEs in arbitrary categories from unstructured text of Web documents. However, Pasca wants to improve web search whereas we aim at annotating specific NEs of an analyzed corpus. Besides, as we want to focus on corpus-specific NEs, our work is also related to (Shinyama and Sekine, 2004) . In this work, the authors found a significant correlation between the similarity of the time series distribution of a word and the likelihood of being a NE. This result motivated our choice to test our approach on recent news articles rather than on well-known annotated corpora.",
"cite_spans": [
{
"start": 100,
"end": 112,
"text": "(Pasca, 2004",
"ref_id": "BIBREF20"
},
{
"start": 424,
"end": 451,
"text": "(Shinyama and Sekine, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "4"
},
{
"text": "We propose a system that allows to improve NE recognition. The core of this system is a cliquebased clustering method based upon a distributional approach. It allows to extract, analyze and discover highly relevant information for corpusspecific NEs annotation. As we have shown in our experiments, this system combined with another one can lead to strong improvements. Other applications are currently addressed in our team using this approach. For example, we intend to use the concept of clique-based clustering as a soft clustering method for other issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www-nlpir.nist.gov/related projects/muc/ 2 http://www.nist.gov/speech/tests/ace",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our definition a corpus-specific NE is the one which does not appear in a classic NEs lexicon. Recent news articles for instance, are often constituted of NEs that are not in a classic NEs lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the context 1.VERB:provide.I-OBJ, the figure 1 means that the verb provide is the governor argument of the Indirect OBJect relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For data fusion tasks in information retrieval field, the scoring method in equation(7)is denoted CombMNZ(Fox and Shaw, 1994). Other scoring approaches can be used see for example(Cucchiarelli and Velardi, 2001).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
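The CombMNZ scheme mentioned in the footnote above can be sketched as follows (an illustrative rendering of the general Fox and Shaw formula; the paper's equation (7) itself is not reproduced here, and the scores are hypothetical):

```python
def comb_mnz(scores):
    """CombMNZ (Fox and Shaw, 1994): the sum of the scores an item
    receives from the individual systems, multiplied by the number
    of systems that assigned it a non-zero score."""
    nonzero = [s for s in scores if s != 0]
    return sum(nonzero) * len(nonzero)

# Hypothetical scores from three systems: (2 + 3) * 2 = 10
comb_mnz([2, 3, 0])
```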
{
"text": "XIP NER 77.77 56.55 65.48 XIP + CBC M 78.41 60.26 68.15 XIP + CBC A 76.31 60.48 67.48 3 Stanford NER 67.94 68.01 67.97 Stanford + CBC M 69.40 71.07 70.23 Stanford + CBC A 70.09 72.93 71.48 4 GATE NER 63.30 56.88 59.92 GATE + CBC M 66.43 61.79 64.03 GATE + CBC A 66.51 63.10 64.76 5 Stanford + XIP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Except for XIP+CBC A in line 2 where the precision is slightly lower than XIP's one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Robustness beyond shallowness: incremental dependency parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "A\u00eft",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Chanod",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Roux",
"suffix": ""
}
],
"year": 2002,
"venue": "NLE Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. A\u00eft, J.P. Chanod, and C. Roux. 2002. Robustness beyond shallowness: incremental dependency pars- ing. NLE Journal.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Relational analysis and dictionnaries",
"authors": [
{
"first": "C",
"middle": [],
"last": "B\u00e9d\u00e9carrax",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Warnesson",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of AS-MDA 1988",
"volume": "",
"issue": "",
"pages": "131--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. B\u00e9d\u00e9carrax and I. Warnesson. 1989. Relational analysis and dictionnaries. In Proceedings of AS- MDA 1988, pages 131-151. Wiley, London, New- York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Intertwining deep syntactic processing and named entity detection",
"authors": [
{
"first": "C",
"middle": [],
"last": "Brun",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hag\u00e8ge",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ESTAL 2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Brun and C. Hag\u00e8ge. 2004. Intertwining deep syntactic processing and named entity detection. In Proceedings of ESTAL 2004, Alicante, Spain.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using encyclopedic knowledge for named entity disambiguation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bunescu and M. Pasca. 2006. Using encyclope- dic knowledge for named entity disambiguation. In Proceedings of EACL 2006.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised Named Entity Recognition using syntactic and semantic contextual evidence",
"authors": [
{
"first": "A",
"middle": [],
"last": "Cucchiarelli",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Cucchiarelli and P. Velardi. 2001. Unsupervised Named Entity Recognition using syntactic and se- mantic contextual evidence. Computational Lin- guistics, 27(1).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Large-scale named entity disambiguation based on wikipedia data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cucerzan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP/CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Cucerzan. 2007. Large-scale named entity disam- biguation based on wikipedia data. In Proceedings of EMNLP/CoNLL 2007, Prague, Czech Republic.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "GATE: A framework and graphical development environment for robust NLP tools and applications",
"authors": [
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Tablan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL 2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Cunningham, D. Maynard, K. Bontcheva, and V. Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In Proceedings of ACL 2002, Philadel- phia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Vers une double annotation des entit\u00e9s nomm\u00e9es",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ehrmann",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Jacquet",
"suffix": ""
}
],
"year": 2007,
"venue": "Traitement Automatique des Langues",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Ehrmann and G. Jacquet. 2007. Vers une dou- ble annotation des entit\u00e9s nomm\u00e9es. Traitement Au- tomatique des Langues, 47(3).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.R. Finkel, T. Grenager, and C. Manning. 2005. In- corporating non-local information into information extraction systems by gibbs sampling. In Proceed- ings of ACL 2005.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combination of multiple searches",
"authors": [
{
"first": "E",
"middle": [
"A"
],
"last": "Fox",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Shaw",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 3rd NIST TREC Conference",
"volume": "",
"issue": "",
"pages": "105--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.A. Fox and J.A. Shaw. 1994. Combination of multi- ple searches. In Proceedings of the 3rd NIST TREC Conference, pages 105-109.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Structural Linguistics",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1951,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Harris. 1951. Structural Linguistics. University of Chicago Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Clustering Algorithms",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Hartigan",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.A. Hartigan. 1975. Clustering Algorithms. John Wi- ley and Sons.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The sketch engine",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rychly",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smr",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tugwell",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EURALEX",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kilgarriff, P. Rychly, P. Smr, and D. Tugwell. 2004. The sketch engine. In In Proceedings of EURALEX 2004.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Relevance models in information retrieval",
"authors": [
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2003,
"venue": "Language modeling in information retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Lavrenko and W.B. Croft. 2003. Relevance models in information retrieval. In W.B. Croft and J. Laf- ferty (Eds), editors, Language modeling in informa- tion retrieval. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Induction of semantic classes from natural language text",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ACM SIGKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lin and P. Pantel. 2001. Induction of semantic classes from natural language text. In Proceedings of ACM SIGKDD.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using collocation statistics in information extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of MUC-7",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lin. 1998. Using collocation statistics in informa- tion extraction. In Proceedings of MUC-7.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Heuristic approach of the similarity aggregation problem",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Marcotorchino",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Michaud",
"suffix": ""
}
],
"year": 1981,
"venue": "Methods of operation research",
"volume": "43",
"issue": "",
"pages": "395--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.F. Marcotorchino and P. Michaud. 1981. Heuris- tic approach of the similarity aggregation problem. Methods of operation research, 43:395-404.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Optimisation en analyse de donn\u00e9es relationnelles",
"authors": [
{
"first": "P",
"middle": [],
"last": "Michaud",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Marcotorchino",
"suffix": ""
}
],
"year": 1980,
"venue": "Data Analysis and informatics. North Holland Amsterdam",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Michaud and J.F. Marcotorchino. 1980. Optimisa- tion en analyse de donn\u00e9es relationnelles. In Data Analysis and informatics. North Holland Amster- dam.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A survey of Named Entity Recognition and Classification",
"authors": [
{
"first": "D",
"middle": [],
"last": "Nadeau",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2007,
"venue": "Lingvisticae Investigationes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Nadeau and S. Sekine. 2007. A survey of Named Entity Recognition and Classification. Lingvisticae Investigationes, 30(1).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Signum a graph algorithm for terminology extraction",
"authors": [
{
"first": "A",
"middle": [
"C"
],
"last": "Ngonga Ngomo",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CICLING 2008",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. C. Ngonga Ngomo. 2008. Signum a graph algo- rithm for terminology extraction. In Proceedings of CICLING 2008, Haifa, Israel.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Acquisition of categorized named entities for web search",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CIKM 2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pasca. 2004. Acquisition of categorized named entities for web search. In Proceedings of CIKM 2004, New York, NY, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Construction d'espaces s\u00e9mantiques\u00e0 l'aide de dictionnaires de synonymes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ploux",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Victorri",
"suffix": ""
}
],
"year": 1998,
"venue": "TAL",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ploux and B. Victorri. 1998. Construction d'espaces s\u00e9mantiques\u00e0 l'aide de dictionnaires de synonymes. TAL, 39(1).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Named Entity Discovery using comparable news articles",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Shinyama",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Shinyama and S. Sekine. 2004. Named Entity Dis- covery using comparable news articles. In Proceed- ings of COLING 2004, Geneva.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Examples of cliques containing Oxford",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Examples of clusters of cliques (only the NEs are represented) and their associated contexts 2.5 NE resource construction using the CBC system's outputs",
"type_str": "figure"
},
"TABREF2": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: Results given by different hybrid NER</td></tr><tr><td>systems and coupled with the CBC-NER system</td></tr><tr><td>corpora (CoNLL, MUC6, MUC7 and ACE):</td></tr><tr><td>ner-eng-ie.crf-3-all2008-distsim.ser.gz</td></tr></table>",
"type_str": "table"
}
}
}
}