{
"paper_id": "N04-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:44:10.002193Z"
},
"title": "A Statistical Model for Multilingual Entity Detection and Tracking",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "H",
"middle": [],
"last": "Hassan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": "hjing@us.ibm.com"
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "N",
"middle": [],
"last": "Nicolov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": "nicolas@us.ibm.com"
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": "roukos@us.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Entity detection and tracking is a relatively new addition to the repertoire of natural language tasks. In this paper, we present a statistical language-independent framework for identifying and tracking named, nominal and pronominal references to entities within unrestricted text documents, and chaining them into clusters corresponding to each logical entity present in the text. Both the mention detection model and the novel entity tracking model can use arbitrary feature types, being able to integrate a wide array of lexical, syntactic and semantic features. In addition, the mention detection model crucially uses feature streams derived from different named entity classifiers. The proposed framework is evaluated with several experiments run in Arabic, Chinese and English texts; a system based on the approach described here and submitted to the latest Automatic Content Extraction (ACE) evaluation achieved top-tier results in all three evaluation languages.",
"pdf_parse": {
"paper_id": "N04-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Entity detection and tracking is a relatively new addition to the repertoire of natural language tasks. In this paper, we present a statistical language-independent framework for identifying and tracking named, nominal and pronominal references to entities within unrestricted text documents, and chaining them into clusters corresponding to each logical entity present in the text. Both the mention detection model and the novel entity tracking model can use arbitrary feature types, being able to integrate a wide array of lexical, syntactic and semantic features. In addition, the mention detection model crucially uses feature streams derived from different named entity classifiers. The proposed framework is evaluated with several experiments run in Arabic, Chinese and English texts; a system based on the approach described here and submitted to the latest Automatic Content Extraction (ACE) evaluation achieved top-tier results in all three evaluation languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Detecting entities, whether named, nominal or pronominal, in unrestricted text is a crucial step toward understanding the text, as it identifies the important conceptual objects in a discourse. It is also a necessary step for identifying the relations present in the text and populating a knowledge database. This task has applications in information extraction and summarization, information retrieval (one can get all hits for Washington/person and not the ones for Washington/state or Washington/city), data mining and question answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Entity Detection and Tracking task (EDT henceforth) has close ties to the named entity recognition (NER) and coreference resolution tasks, which have been the focus of attention of much investigation in the recent past (Bikel et al., 1997; Borthwick et al., 1998; Mikheev et al., 1999; Miller et al., 1998; Aberdeen et al., 1995; Ng and Cardie, 2002; Soon et al., 2001) , and have been at the center of several evaluations: MUC-6, MUC-7, CoNLL'02 and CoNLL'03 shared tasks. Usually, in computational linguistic literature, a named entity represents an instance of a name, either a location, a person, an organization, and the NER task consists of identifying each individual occurrence of such an entity. We will instead adopt the nomenclature of the Automatic Content Extraction program 1 (NIST, 2003a): we will call the instances of textual references to objects or abstractions mentions, which can be either named (e.g. John Mayor), nominal (e.g. the president) or pronominal (e.g. she, it). An entity consists of all the mentions (of any level) which refer to one conceptual entity. For instance, in the sentence President John Smith said he has no comments. there are two mentions: John Smith and he (in the order of appearance, their levels are named and pronominal), but one entity, formed by the set {John Smith, he}.",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "(Bikel et al., 1997;",
"ref_id": "BIBREF3"
},
{
"start": 244,
"end": 267,
"text": "Borthwick et al., 1998;",
"ref_id": "BIBREF4"
},
{
"start": 268,
"end": 289,
"text": "Mikheev et al., 1999;",
"ref_id": "BIBREF8"
},
{
"start": 290,
"end": 310,
"text": "Miller et al., 1998;",
"ref_id": "BIBREF9"
},
{
"start": 311,
"end": 333,
"text": "Aberdeen et al., 1995;",
"ref_id": "BIBREF0"
},
{
"start": 334,
"end": 354,
"text": "Ng and Cardie, 2002;",
"ref_id": "BIBREF11"
},
{
"start": 355,
"end": 373,
"text": "Soon et al., 2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a general statistical framework for entity detection and tracking in unrestricted text. The framework is not language specific, as proved by applying it to three radically different languages: Arabic, Chinese and English. We separate the EDT task into a mention detection part -the task of finding all mentions in the text -and an entity tracking part -the task of combining the detected mentions into groups of references to the same object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The work presented here is motivated by the ACE evaluation framework, which has the more general goal of building multilingual systems which detect not only entities, but also relations among them and, more recently, events in which they participate. The EDT task is arguably harder than traditional named entity recognition, because of the additional complexity involved in extracting non-named mentions (nominals and pronouns) and the requirement of grouping mentions into entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present and evaluate empirically statistical models for both mention detection and entity tracking problems. For mention detection we use approaches based on Maximum Entropy (MaxEnt henceforth) (Berger et al., 1996) and Robust Risk Minimization (RRM henceforth) (Zhang et al., 2002) . The task is transformed into a sequence classification problem. We investigate a wide array of lexical, syntactic and semantic features to perform the mention detection and classification task including, for all three languages, features based on pre-existing statistical semantic taggers, even though these taggers have been trained on different corpora and use different semantic categories. Moreover, the presented approach implicitly learns the correlation between these different semantic types and the desired output types.",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
},
{
"start": 265,
"end": 285,
"text": "(Zhang et al., 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a novel MaxEnt-based model for predicting whether a mention should or should not be linked to an existing entity, and show how this model can be used to build entity chains. The effectiveness of the approach is tested by applying it on data from the above mentioned languages -Arabic, Chinese, English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The framework presented in this paper is languageuniversal -the classification method does not make any assumption about the type of input. Most of the feature types are shared across the languages, but there are a small number of useful feature types which are languagespecific, especially for the mention detection task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows: Section 2 describes the algorithms and feature types used for mention detection. Section 3 presents our approach to entity tracking. Section 4 describes the experimental framework and the systems' results for Arabic, Chinese and English on the data from the latest ACE evaluation (September 2003) , an investigation of the effect of using different feature types, as well as a discussion of the results.",
"cite_spans": [
{
"start": 315,
"end": 331,
"text": "(September 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The mention detection system identifies the named, nominal and pronominal mentions introduced in the previous section. Similarly to classical NLP tasks such as base noun phrase chunking (Ramshaw and Marcus, 1994 ), text chunking (Ramshaw and Marcus, 1995) or named entity recognition (Tjong Kim Sang, 2002) , we formulate the mention detection problem as a classification problem, by assigning to each token in the text a label, indicating whether it starts a specific mention, is inside a specific mention, or is outside any mentions.",
"cite_spans": [
{
"start": 186,
"end": 211,
"text": "(Ramshaw and Marcus, 1994",
"ref_id": "BIBREF13"
},
{
"start": 229,
"end": 255,
"text": "(Ramshaw and Marcus, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 284,
"end": 306,
"text": "(Tjong Kim Sang, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Detection",
"sec_num": "2"
},
{
"text": "Good performance in many natural language processing tasks, such as part-of-speech tagging, shallow parsing and named entity recognition, has been shown to depend heavily on integrating many sources of information (Zhang et al., 2002; Jing et al., 2003; . Given the stated focus of integrating many feature types, we are interested in algorithms that can easily integrate and make effective use of diverse input types. We selected two methods which satisfy these criteria: a linear classifier -the Robust Risk Minimization classifier -and a log-linear classifier -the Maximum Entropy classifier. Both methods can integrate arbitrary types of information and make a classification decision by aggregating all information available for a given classification. and labels the example with either the class corresponding to the classifier with the highest score, if above 0, or outside, otherwise. The full decoding algorithm is presented in Algorithm 1. This algorithm can also be used for sequence classification (Williams and Peng, 1990) , by converting the activation scores into probabilities (through the soft-max function, for instance) and using the standard dynamic programing search algorithm (also known as Viterbi search).",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "(Zhang et al., 2002;",
"ref_id": "BIBREF19"
},
{
"start": 235,
"end": 253,
"text": "Jing et al., 2003;",
"ref_id": "BIBREF6"
},
{
"start": 1011,
"end": 1036,
"text": "(Williams and Peng, 1990)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": "Algorithm 1 The RRM Decoding Algorithm foreach & ) ' g B foreach h \u00a1 # i E`H \" p& \" q \u00a1 s r \u00a2 % W 8 P \u00a7 V H T W d t 0W 1 3 & 4 \u00a5 u` w v t v 1 x & 5 4 f y 8 i $ H`Hp & a q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": "Somewhat similarly, the MaxEnt algorithm has an associated set of weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": "1 3 H T W4 H P \u00a7 S R T R T R W 8 P \u00a7 X R T R T R %",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": ", which are estimated during the training phase so as to maximize the likelihood of the data (Berger et al., 1996) . Given these weights, the model computes the probability distribution of a particular example",
"cite_spans": [
{
"start": 93,
"end": 114,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": "& as follows: 1 \u00a5 H & 4 \u00a1 # % W 8 P \u00a7 Q d e H T \u1e84 \u00a1 b H W d \u00a6 e H c W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": "where is a normalization factor. After computing the class probability distribution, the assigned class is the most probable one a posteriori. The sketch of applying MaxEnt to the test data is presented in Algorithm 2. Similarly to the RRM model, we use the model to perform sequence classification, through dynamic programing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Statistical Classifiers",
"sec_num": "2.1"
},
{
"text": "& ) ' g B y ! foreach h \u00a1 # i E \u00a1 Hp& \" q \u00a1 % \u00a2 W 8 P \u00a7 d \u00a6 e H c W Normalize (p) \u00a5 u` w v t v 1 7 & 4 i y $ i \u00a9 w H \u00a1 Hp & a q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 The MaxEnt Decoding Algorithm foreach",
"sec_num": null
},
{
"text": "Within this framework, any type of feature can be used, enabling the system designer to experiment with interesting feature types, rather than worry about specific feature interactions. In contrast, in a rule based system, the system designer would have to consider how, for instance, a WordNet (Miller, 1995) derived information for a particular example interacts with a part-of-speech-based information and chunking information. That is not to say, ultimately, that rule-based systems are in some way inferior to statistical models -they are built using valuable insight which is hard to obtain from a statistical-modelonly approach. Instead, we are just suggesting that the output of such a system can be easily integrated into the previously described framework, as one of the input features, most likely leading to improved performance.",
"cite_spans": [
{
"start": 295,
"end": 309,
"text": "(Miller, 1995)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 The MaxEnt Decoding Algorithm foreach",
"sec_num": null
},
{
"text": "In addition to using rich lexical, syntactic, and semantic features, we leveraged several pre-existing mention taggers. These pre-existing taggers were trained on datasets outside of ACE training data and they identify types of mentions different from the ACE types of mentions . For instance, a pre-existing tagger may identify dates or occupation mentions (not used in ACE), among other types. It could also have a class called PERSON, but the annotation guideline of what represents a PERSON may not match exactly to the notion of the PERSON type in ACE. Our hypothesis -the combination hypothesis -is that combining pre-existing classifiers from diverse sources will boost performance by injecting complementary information into the mention detection models. Hence, we used the output of these pre-existing taggers and used them as additional feature streams for the mention detection models. This approach allows the system to automatically correlate the (different) mention types to the desired output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Combination Hypothesis",
"sec_num": "2.2"
},
{
"text": "Even if the three languages (Arabic, Chinese and English) are radically different syntacticly, semantically, and even graphically, all models use a few universal types of features, while others are language-specific. Let us note again that, while some types of features only apply to one language, the models have the same basic structure, treating the problem as an abstract classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Independent Features",
"sec_num": "2.3"
},
{
"text": "The following is a list of the features that are shared across languages (V H i s considered by default the current token):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Independent Features",
"sec_num": "2.3"
},
{
"text": "\u00a3 tokens 4 in a window of \u00a4 : \u00a3 V U H \u00a6 \u00a5 \u00a7 V I H \u00a9 \u00a7 ; \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Independent Features",
"sec_num": "2.3"
},
{
"text": "the part-of-speech associated with token V H \u00a3 dictionary information (whether the current token is part of a large collection of dictionaries -one boolean value for each dictionary) \u00a3 the output of named mention detectors trained on different style of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Independent Features",
"sec_num": "2.3"
},
{
"text": "\u00a3 the previously assigned classification tags 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Independent Features",
"sec_num": "2.3"
},
{
"text": "The following sections describe in detail the languagespecific features, and Table 1 summarizes the feature types used in building the models in the three languages. Finally, the experiments in Section 4 detail the performance obtained by using selected combinations of feature subsets.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Language-Independent Features",
"sec_num": "2.3"
},
{
"text": "Arabic, a highly inflected language, has linguistic peculiarities that affect any mention detection system. An important aspect that needs to be addressed is segmentation: which style should be used, how to deal with the inherent segmentation ambiguity of mention names, especially persons and locations, and, finally, how to handle the attachment of pronouns to stems. Arabic blank-delimited words are composed of zero or more prefixes, followed by a stem and zero or more suffixes. Each prefix, stem or suffix will be called a token in this discussion; any contiguous sequence of tokens can represent a mention. For example, the word \"trwmAn\" (translation: \"Truman\") could be segmented in 3 tokens (for instance, if the word was not seen in the training data): trwmAn t rwm An which introduces ambiguity, as the three tokens form really just one mention, and, in the case of the word \"tm-nEh\", which has the segmentation tmnEh t mnE h the first and third tokens should both be labeled as pronominal mentions -but, to do this, they need to be separated from the stem mnE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "Pragmatically, we found segmenting Arabic text to be a necessary and beneficial process due mainly to two facts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "1. some prefixes/suffixes can receive a different mention type than the stem they are glued to (for instance, in the case of pronouns); 2. keeping words together results in significant data sparseness, because of the inflected nature of the language. Given these observations, we decided to \"condition\" the output of the system on the segmented data: the text is first segmented into tokens, and the classification is then performed on tokens. The segmentation model is similar to the one presented by Lee et al. (2003) , and obtains an accuracy of about 98%.",
"cite_spans": [
{
"start": 502,
"end": 519,
"text": "Lee et al. (2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "In addition, special attention is paid to prefixes and suffixes: in order to reduce the number of spurious tokens we re-merge the prefixes or suffixes to their corresponding stem if they are not essential to the classification process. For this purpose, we collect the following statistics for each prefix/suffix is below a threshold (estimated on the development data),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "\u00a1 5 v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "is re-merged with its corresponding stem. Only few prefixes and suffixes were merged using these criteria. This is appropriate for the ACE task, since a large percentage of prefixes and suffixes are annotated as pronoun mentions 6 .",
"cite_spans": [
{
"start": 229,
"end": 230,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "In addition to the language-general features described in Section 2.3, the Arabic system implements a feature specifying for each token its original stem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "For this system, the gazetteer features are computed on words, not on tokens; the gazetteers consist of 12000 person names and 3000 location and country names, all of which have been collected by few man-hours web browsing. The system also uses features based on the output of three additional mention detection classifiers: a RRM model predicting 48 mention categories, a RRM model and a HMM model predicting 32 mention categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arabic Mention Detection",
"sec_num": "2.4"
},
{
"text": "In Chinese text, unlike in Indo-European languages, words neither are white-space delimited nor do they have capitalization markers. Instead of a word-based model, we build a character-based one, since word segmentation 6 For some additional data, annotated with 32 named categories, mentioned later on, we use the same approach of collecting the \u00a4 and \u00a5 statistics, but, since named mentions are predominant and there are no pronominal mentions in that case, most suffixes and some prefixes are merged back to their original stem.",
"cite_spans": [
{
"start": 220,
"end": 221,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Mention Detection",
"sec_num": "2.5"
},
{
"text": "errors can lead to irrecoverable mention detection errors; Jing et al. (2003) also observe that character-based models are better performing than word-based ones for Chinese named entity recognition. Although the model is character-based, segmentation information is still useful and is integrated as an additional feature stream.",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "Jing et al. (2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Mention Detection",
"sec_num": "2.5"
},
{
"text": "Some more information about additional resources used in building the system: \u00a3 Gazetteers include dictionaries of 10k person names, 8k location and country names, and 3k organization names, compiled from annotated corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Mention Detection",
"sec_num": "2.5"
},
{
"text": "There are four additional classifiers whose output is used as features: a RRM model which outputs 32 named categories, a RRM model identifying 49 categories, a RRM model identifying 45 mention categories, and a RRM model that classifies whether a character is an English character, a numeral or other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "The English mention detection model is similar to the system described in A combination of gazetteer, POS and capitalization information, obtained as follows: if the word is a closed-class word -select its class, else if it's in a dictionary -select that class, otherwise back-off to its capitalization information; we call this feature gap; \u00a3 WordNet information (the synsets and hypernyms of the two most frequent senses of the word); \u00a3 The outputs of three systems (HMM, RRM and MaxEnt) trained on a 32-category named entity data, the output of an RRM system trained on the MUC-6 data, and the output of RRM model identifying 49 categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Mention Detection",
"sec_num": "2.6"
},
{
"text": "This section introduces a novel statistical approach to entity tracking. We choose to model the process of forming entities from mentions, one step at a time. The process works from left to right: it starts with an initial entity consisting of the first mention of a document, and the next mention is processed by either linking it with one of the existing entities, or starting a new entity. The process could have as output any one of the possible partitions of the mention set. 8 Two separate models are used to score the linking and starting actions, respectively. . At training time, the action is known to us, and at testing time, both hypotheses will be kept during search. Notice that a sequence of such actions corresponds uniquely to an entity outcome (or a partition of mentions). Therefore, the problem of coreference resolution is equivalent to ranking the action sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Tracking",
"sec_num": "3"
},
{
"text": "In this work, a binary model 1 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "\u00a1 ! \" 6 4 \u00a1 b % 8 7 @ 9 B A 1 2 1 \u00a1 \u00a2 ! \" C 4 \u00a1 ' \" 3 4 \u00a1 # b % 8 7 @ 9 B A 1 4 s \u00a1 ' \" 6 4 1 ) 1 \u00a1 # \" 6 3 4 \u00a1 ' 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "Therefore, the probability of starting an entity can be computed using the linking probabilities 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "2 1 \u00a1 # \" 3 C 4 \u00a1 ' 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": ", provided that the marginal 1 4 \u00a1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "D EF # if ' \u00a1 8 \u00a9 H 7 @ 9 A 1 2 1 \u00a1 # \" 6 C 4 \u00a1 h4 ! otherwise (2) 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "The number of all possible partitions of a set is given by the Bell number (Bell, 1934) . This number is very large even for a document with a moderate number of mentions: about G ( H & IP trillion for a 20-mention document. For practical reasons, the search space has to be reduced to a reasonably small set of hypotheses.",
"cite_spans": [
{
"start": 75,
"end": 87,
"text": "(Bell, 1934)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "That is, the starting probability is just one minus the maximum linking probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "Training directly the model 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "2 1 \u00a1 # \" 6 C 4 \u00a1 h4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": "is difficult since it depends on all partial entities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracking Algorithm",
"sec_num": "3.1"
},
{
"text": ". As a first attempt of modeling the process from mentions to entities, we make the following modeling assumptions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 ) 1 \u00a1 # \" 3 3 4 \u00a1 h 4 Q 1 2 1 \u00a1 # # H\u00a8 6 4 (3) Q $ % 7 @ R T S 1 2 1 \u00a1 # \u00a8 3 4",
"eq_num": "(4)"
}
],
"section": "\"",
"sec_num": null
},
{
"text": "Once the linking probability 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": ") 1 \u00a1 # \" 6 3 4 \u00a1 h4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "is available, the starting probability 1 2 1 \u00a1 ! \" 3 4 can be computed using (1) and (2). The strategy used to find the best set of entities is shown in Algorithm 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "Input: mentions in text \u00a3 mention count -how many times a mention string appears in the document. The count is quantized;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "\u00a1 \u00a1 \u00a3 & H \u00a2 # E Output: a partition \" of the set \u00a1 U y \u00a3\" W V \u00a1 \u00a3 $ \u00a3 \u00a1 \u00a7 $ Y X v \u00a5 \u00e0 1 \" b V 4 \u00a1 # foreach \u00a1 ! c \u1e85 E U d y foreach \" ' U \" d y \" \u00a5 e \u00a3 $ \u00a3 \u00a1 v \u00a5 f 1 \" d 4 i y v \u00a5 \u00e0 1 \" 4 d 1 1 \u00a1 ! \" 3 4 U d y U d e \u00a3\" d foreach h i ' \" d y 1 \" \u00a5 g \u00a3 \u00a1 # H 4 e \u00a3 \u00a1 # H e \u00a3 & $ v \u00a5 f 1 \" d 4 f y v \u00a5 f 1 \" 4 d 1 ) 1 \u00a1 # \" 6 3 4 \u00a1 h4 U d y U d e \u00a3\" d U y \u00a1 $ h E # 1 U d 4 return $ $ i 7 ( p v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
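{
"text": "As a concrete micro-illustration (hypothetical numbers, not the authors' reported scores): suppose two entities E_1 and E_2 have been formed and the next mention m has P(L = 1 | E_1, m) = 0.6, P(L = 1 | E_2, m) = 0.1 and starting probability 0.3. A partial hypothesis of score s then spawns three successors, scored 0.6s (link m to E_1), 0.1s (link m to E_2) and 0.3s (start a new entity), and pruning keeps only the best partial partitions. This pruning is what keeps decoding tractable, since the number of ways to partition the mentions grows as the Bell number of the mention count.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},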
{
"text": "\u00a3 distance - the distance between the two mentions, in words and in sentences; this number is also quantized;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "\u00a3 editing distance - the quantized editing distance between the two mentions;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "\u00a3 mention information - spellings of the two mentions and other information (such as POS tags) if available; if a mention is a pronoun, the feature also computes gender, plurality, possessiveness and reflexiveness;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "\u00a3 acronym - whether or not one mention is an acronym of the other mention;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "\u00a3 syntactic features - whether or not the two mentions appear in apposition. This information is extracted from a parse tree, and can be computed only when a parser is available. Another category of features is created by taking conjunctions of the atomic features. For example, the model can capture how far a pronoun mention is from a named mention when the distance feature is used in conjunction with the mention information feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "As is the case with the mention detection approach presented in Section 2, most features used here are language-independent and are instantiated from the training data; some are language-specific, mostly because the necessary resources were not available for that language. For example, syntactic features are not used in the Arabic system due to the lack of an Arabic parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "Simple as it seems, the mention-pair model has been shown to work well (Soon et al., 2001; Ng and Cardie, 2002) . As will be shown in Section 4, the relatively knowledge-lean feature sets work fairly well in our tasks.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF15"
},
{
"start": 91,
"end": 111,
"text": "Ng and Cardie, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "Although we also use a mention-pair model, our tracking algorithm differs from Soon et al. (2001) , Ng and Cardie (2002) in several aspects. First, the mention-pair model is used as an approximation to the entity-mention model (3), which itself is an approximation of",
"cite_spans": [
{
"start": 79,
"end": 97,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF15"
},
{
"start": 100,
"end": 120,
"text": "Ng and Cardie (2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": "P(L = 1 | E_1^t, m_{t+1}, A = i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
{
"text": ". Second, instead of doing a pick-first (Soon et al., 2001) or best-first (Ng and Cardie, 2002) selection, the mention-pair linking model is used to compute a starting probability. The starting probability enables us to score the action of creating a new entity without thresholding the link probabilities. Third, this probabilistic framework allows us to search the space of all possible entities, while Soon et al. (2001) , Ng and Cardie (2002) take the \"best\" local hypothesis.",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "(Soon et al., 2001)",
"ref_id": "BIBREF15"
},
{
"start": 74,
"end": 95,
"text": "(Ng and Cardie, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 405,
"end": 423,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF15"
},
{
"start": 426,
"end": 446,
"text": "Ng and Cardie (2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},
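{
"text": "To see the effect of the starting probability with hypothetical numbers: suppose a mention m has best link score max_i P(L = 1 | E_i, m) = 0.4. A pick-first or best-first system must compare 0.4 against a hand-tuned threshold to decide whether to link at all; here, the link action (scored 0.4) simply competes with the start action (scored by the starting probability), and the decision falls out of whichever complete partition accumulates the higher probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 Coreference Decoding Algorithm",
"sec_num": null
},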
{
"text": "The data used in all experiments presented in this section is provided by the Linguistic Data Consortium and is distributed by NIST to all participants in the ACE evaluation. In the comparative experiments for the mention detection and entity tracking tasks, the training data for the English system consists of the training data from both the 2002 evaluation and the 2003 evaluation, while for Arabic and Chinese, new additions to the ACE task in 2003, it consists of 80% of the provided training data. Table 2 shows the sizes of the training, development and evaluation test data for the 3 languages. The data is annotated with five types of entities: person, organization, geo-political entity, location, facility; each mention can be either named, nominal or pronominal, and can be either generic (not referring to a clearly described entity) or specific.",
"cite_spans": [],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "The models for all three languages are built as joint models, simultaneously predicting the type, level and genericity of a mention -basically each mention is labeled with a 3-pronged tag. To transform the problem into a classification task, we use the IOB2 classification scheme (Tjong Kim Sang and Veenstra, 1999) .",
"cite_spans": [
{
"start": 280,
"end": 315,
"text": "(Tjong Kim Sang and Veenstra, 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
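{
"text": "For illustration (tag spellings hypothetical): under IOB2, the first token of a mention is tagged B-X, subsequent tokens of the same mention I-X, and tokens outside any mention O, where X is the 3-pronged tag; a labeled fragment would look like \"John/B-PER-NAM-SPC Smith/I-PER-NAM-SPC visited/O the/O agency/B-ORG-NOM-GEN\", the tag inventory being the five entity types crossed with the three mention levels and the generic/specific distinction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},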
{
"text": "A gauge of the performance of an EDT system is the ACE value, a measure developed especially for this purpose. It estimates the normalized weighted cost of detection of specific-only entities in terms of misses, false alarms and substitution errors (entities marked generic are excluded from computation): any undetected entity is considered a miss, system-output entities with no corresponding reference entities are considered false alarms, and entities whose type was mis-assigned are substitution errors. The ACE value computes a weighted cost by applying different weights to each error, depending on the error type and target entity type (e.g. PERSON-NAMEs are weighted a lot more heavily than FACILITY-PRONOUNs) (NIST, 2003a) . The cumulative cost is normalized by the cost of a (hypothetical) system that outputs no entities at all - which would receive an ACE value of 0",
"cite_spans": [
{
"start": 719,
"end": 732,
"text": "(NIST, 2003a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The ACE Value",
"sec_num": "4.1"
},
{
"text": ". Finally, the normalized cost is subtracted from 100.0 to obtain the ACE value; a value of 100% corresponds to perfect entity detection. A system can obtain a negative score if it proposes too many incorrect entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "!",
"sec_num": null
},
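{
"text": "In symbols, the description above amounts to ACE = 100 x (1 - Cost(sys) / Cost(empty)), where Cost(sys) is the weighted sum of the system's miss, false-alarm and substitution costs and Cost(empty) is the cost incurred by the system that outputs nothing; outputting nothing thus scores 0, perfect output scores 100%, and Cost(sys) > Cost(empty) yields a negative value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "!",
"sec_num": null
},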
{
"text": "In addition, for the mention detection task, we will also present results by using the more established F-measure, computed as the harmonic mean of precision and recall -this measure gives equal importance to all entities, regardless of their type, level or genericity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "!",
"sec_num": null
},
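{
"text": "Concretely, F = 2PR / (P + R), where precision P is the fraction of system-proposed mentions that are correct and recall R is the fraction of reference mentions that are found; unlike the ACE value, every mention contributes equally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "!",
"sec_num": null
},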
{
"text": "As described in Section 2.6, the mention detection systems make use of a large set of features. To better assess the contribution of the different types of features to the final performance, we have grouped them into 4 categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EDT Results",
"sec_num": "4.2"
},
{
"text": "1. Surface features: lexical features that can be derived from investigating the words: words, morphs, prefix/suffix, capitalization/word-form flags 2. Features derived from processing the data with NLP techniques: POS tags, text chunks, word segmentation, etc. 3. Gazetteer/dictionary features 4. Features obtained by running other named-entity classifiers (with different tag sets): HMM, MaxEnt and RRM output on the 32-category, 49-category and MUC data sets. Table 3 presents the mention detection comparative results, F-measure and ACE value, on Arabic and Chinese data; the Arabic and Chinese models were built using the RRM model. There are some interesting observations: first, the F-measure performance does not correlate well with an improvement in ACE value - small improvements in F-measure are sometimes paired with large relative improvements in ACE value, a fact due to the different weighting of entity types. Second, the largest single improvement in ACE value is obtained by adding dictionary features, at least in this order of adding features. For English, we investigated in more detail the way features interact. Figure 1 presents a hierarchical direct comparison between the performance of the RRM model and the MaxEnt model. We can observe that the RRM model makes better use of gazetteers, and manages to close the initial performance gap to the MaxEnt model. Table 4 presents the results obtained by running the entity tracking algorithm on true mentions. It is interesting to compare the entity tracking results with inter-annotator agreement. LDC reported (NIST, 2003b ) that the inter-annotator agreements (computed as ACE-values) between annotators are \u00a2 \u00a1 \u00a4 %, \u00a3 \u00a4 \u00a1 % and \u00a3 \u00a4 \u00a3 % for Arabic, Chinese and English, respectively. The system performance is very close to human performance on this task; this small difference in performance highlights the difficulty of the entity tracking task.",
"cite_spans": [
{
"start": 1424,
"end": 1436,
"text": "(NIST, 2003b",
"ref_id": null
}
],
"ref_spans": [
{
"start": 974,
"end": 982,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1224,
"end": 1231,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "EDT Results",
"sec_num": "4.2"
},
{
"text": "Finally, Table 5 presents the results obtained by running both mention detection followed by entity tracking on the ACE'03 evaluation data. Our submission in the evaluation performed well relative to the other participating systems (contractual obligations prevent us from elaborating further).",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "EDT Results",
"sec_num": "4.2"
},
{
"text": "Figure 1: Performance of the English mention detection system on different sets of features (uniformly penalized F-measure), September '02 data. The lower part of each box describes the particular combination of feature types; the arrows show an inclusion relationship between the feature sets. The same basic model was used to perform EDT in three languages. Our approach is language-independent, in that",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "the fundamental classification algorithm can be applied to every language and the only changes involve finding appropriate and available feature streams for each language. The entity tracking system uses even fewer languagespecific features than the mention detection systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "One limitation apparent in our mention detection system is that it does not explicitly model the genericity of a mention. Deciding whether a mention refers to a specific entity or a generic entity requires knowledge of substantially wider context than the window of 5 tokens we currently use in our mention detection systems. One way we plan to improve performance for such cases is to separate the task into two parts: one in which the mention type and level are predicted, followed by a genericity-predicting model which uses long-range features, such as sentence- or document-level features. Our entity tracking system currently cannot resolve the coreference of pronouns very accurately. Although this is weighted lightly in the ACE evaluation, good anaphora resolution can be very useful in many applications and we will continue exploring this task in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "The Arabic and Chinese EDT tasks were included in the ACE evaluation for the first time in 2003. Unlike the English case, the systems had access to only a small amount of training data (60k words for Arabic and 90k characters for Chinese, in contrast with 340k words for English), which made it difficult to train statistical models with a large number of feature types. Future ACE evaluations will shed light on whether this lower performance, shown in Table 3 , is due to the lack of training data or to language-specific ambiguity.",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "The final observation we want to make is that the systems were not directly optimized for the ACE value, and there is no obvious way to do so. As Table 3 shows, the F-measure and ACE value do not correlate well: systems trained to optimize the former might not end up optimizing the latter. It is an open research question whether a system can be directly optimized for the ACE value.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "This paper presents a language-independent framework for the entity detection and tracking task, which is shown to obtain top-tier performance on three radically different languages: Arabic, Chinese and English. The task is separated into two sub-tasks: a mention detection part, which is modeled through a named entity-like approach, and an entity tracking part, for which a novel modeling approach is proposed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "This statistical framework is general and can incorporate heterogeneous feature types - the models were built using a wide array of lexical, syntactic and semantic features extracted from texts, and further enhanced by adding the output of pre-existing semantic classifiers as feature streams; the additional feature types help improve performance significantly, especially in terms of ACE value. The experimental results show that the systems perform remarkably well, both for well-investigated languages, such as English, and for the relatively new additions, Arabic and Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For a description of the ACE program see http://www.nist.gov/speech/tests/ace/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is not meant to be an in-depth introduction to the methods, but a brief overview to familiarize the reader with them. Actually, the optimizing function contains a regularization factor which considerably improves the robustness of the system - for full details, see Zhang et al. (2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each language may have a different notion of what represents a token. In the current implementation, the models use a history of 2 tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The main difference between their system and ours is that they build a MaxEnt model capable of building hierarchical structures - therefore treating the problem as a parsing task - while our system treats the problem as a classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the English MaxEnt system, which uses 295k features, the distribution among the four classes of features is: 1:72%, 2:24%, 3:1%, 4:3%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Dr. Tong Zhang for providing us with the RRM toolkit. This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. The views and findings contained in this material are those of the authors and do not necessarily reflect the position or policy of the U.S. government, and no official endorsement should be inferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mitre: Description of the Alembic system used for MUC-6",
"authors": [
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Vilain",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of MUC-6",
"volume": "",
"issue": "",
"pages": "141--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Aberdeen, D. Day, L. Hirschman, P. Robinson, and M. Vilain. 1995. Mitre: Description of the Alembic system used for MUC-6. In Proceedings of MUC-6, pages 141-155.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exponential numbers",
"authors": [
{
"first": "E",
"middle": [
"T"
],
"last": "Bell",
"suffix": ""
}
],
"year": 1934,
"venue": "American Math. Monthly",
"volume": "41",
"issue": "",
"pages": "411--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. T. Bell. 1934. Exponential numbers. American Math. Monthly, 41:411-419.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language pro- cessing. Computational Linguistics, 22(1):39-71.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nymble: a high-performance learning namefinder",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ANLP-97",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Bikel, S. Miller, R. Schwartz, and R. Weischedel. 1997. Nymble: a high-performance learning name- finder. In Proceedings of ANLP-97, pages 194-201.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Exploiting diverse knowledge sources via maximum entropy in named entity recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Borthwick",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sterling",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Borthwick, J. Sterling, E. Agichtein, and R. Grish- man. 1998. Exploiting diverse knowledge sources via maximum entropy in named entity recognition.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identifying and tracking entity mentions in a maximum entropy framework",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lita",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stys",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL 2003: Short Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ittycheriah, L. Lita, N. Kambhatla, N. Nicolov, S. Roukos, and M. Stys. 2003. Identifying and track- ing entity mentions in a maximum entropy framework. In HLT-NAACL 2003: Short Papers, May 27 -June 1.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "HowtogetaChineseName(Entity): Segmentation and combination issues",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EMNLP'03",
"volume": "",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Jing, R. Florian, X. Luo, T. Zhang, and A. Itty- cheriah. 2003. HowtogetaChineseName(Entity): Seg- mentation and combination issues. In Proceedings of EMNLP'03, pages 200-207.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language model based Arabic word segmentation",
"authors": [
{
"first": "Y.-S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Emam",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hassan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL'03",
"volume": "",
"issue": "",
"pages": "399--406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.-S. Lee, K. Papineni, S. Roukos, O. Emam, and H. Hassan. 2003. Language model based Arabic word segmentation. In Proceedings of the ACL'03, pages 399-406.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Named entity recognition without gazetteers",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EACL'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Mikheev, M. Moens, and C. Grover. 1999. Named entity recognition without gazetteers. In Proceedings of EACL'99.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bbn: Description of the SIFT system as used for MUC-7",
"authors": [
{
"first": "S",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Crystal",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Miller, M. Crystal, H. Fox, L. Ramshaw, R. Schwarz, R. Stone, and R. Weischedel. 1998. Bbn: Description of the SIFT system as used for MUC-7. In MUC-7.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "WordNet: A lexical database",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. A. Miller. 1995. WordNet: A lexical database. Com- munications of the ACM, 38(11).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL'02",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the ACL'02, pages 104-111. NIST. 2003a. The ACE evaluation plan. www.nist.gov/speech/tests/ace/index.htm.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Proceedings of ACE'03. Booklet",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2003b. Proceedings of ACE'03. Booklet, Alexan- dria, VA, September.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploring the statistical derivation of transformational rule sequences for part-of-speech tagging",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the ACL Workshop on Combining Symbolic and Statistical Approaches to Language",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Ramshaw and M. Marcus. 1994. Exploring the sta- tistical derivation of transformational rule sequences for part-of-speech tagging. In Proceedings of the ACL Workshop on Combining Symbolic and Statistical Ap- proaches to Language, pages 128-135.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of WVLC'95",
"volume": "",
"issue": "",
"pages": "82--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Ramshaw and M. Marcus. 1995. Text chunking us- ing transformation-based learning. In Proceedings of WVLC'95, pages 82-94.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "W",
"middle": [
"M"
],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. M. Soon, H. T. Ng, and C. Y. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Representing text chunks",
"authors": [
{
"first": "E",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Veenstra",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EACL'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. F. Tjong Kim Sang and J. Veenstra. 1999. Represent- ing text chunks. In Proceedings of EACL'99.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "E",
"middle": [
"F"
],
"last": "Tjong Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of CoNLL-2002",
"volume": "",
"issue": "",
"pages": "155--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. F. Tjong Kim Sang. 2002. Introduction to the CoNLL- 2002 shared task: Language-independent named en- tity recognition. In Proceedings of CoNLL-2002, pages 155-158.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An efficient gradient-based algorithm for on-line training of recurrent neural networks trajectories",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 1990,
"venue": "Neural Computation",
"volume": "2",
"issue": "4",
"pages": "490--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. J. Williams and J. Peng. 1990. An efficient gradient-based algorithm for on-line training of re- current neural networks trajectories. Neural Compu- tation, 2(4):490-501.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Text chunking based on a generalization of Winnow",
"authors": [
{
"first": "T",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Damerau",
"suffix": ""
},
{
"first": "D",
"middle": [
"E"
],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "615--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Zhang, F. Damerau, and D. E. Johnson. 2002. Text chunking based on a generalization of Winnow. Jour- nal of Machine Learning Research, 2:615-637.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"text": "The following is a list of additional features: \u00a3 Shallow parsing information associated with the tokens in a window of 3; \u00a3 Prefixes/suffixes of length up to 4; \u00a3 A capitalization/word-type flag (similar to the ones described by Bikel et al. (1997)); \u00a3 Gazetteer information: a handful of location (55k entries), person name (30k) and organization (5k) dictionaries;",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "Summary of features used by the 3 systems",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"text": "Data statistics (number of tokens) for Arabic, Chinese and English",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>presents the mention detection comparative re-</td></tr><tr><td>sults, F-measure and ACE value, on Arabic and Chinese</td></tr><tr><td>data. The Arabic and Chinese models were built using</td></tr></table>",
"text": "",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Arabic Chinese</td><td colspan=\"2\">English</td></tr><tr><td/><td/><td/><td colspan=\"2\">Feb02 Sept02</td></tr><tr><td>ACE value</td><td>83.2</td><td>89.4</td><td>90.9</td><td>88.0</td></tr></table>",
"text": "Mention detection results for the Arabic and Chinese",
"num": null,
"html": null
},
"TABREF8": {
"type_str": "table",
"content": "<table/>",
"text": "Entity tracking results on true mentions",
"num": null,
"html": null
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td/><td/><td>73.2</td><td>73.4</td><td/></tr><tr><td/><td/><td colspan=\"2\">1+2+3+4</td><td/></tr><tr><td>72.1</td><td>72.1</td><td colspan=\"2\">72.6 72.5</td><td colspan=\"2\">72.0 73.2</td></tr><tr><td colspan=\"2\">1+2+3</td><td colspan=\"2\">1+2+4</td><td colspan=\"2\">1+3+4</td></tr><tr><td colspan=\"2\">70.8 70.7</td><td>71.4</td><td>71.8</td><td>71.3</td><td>72.5</td></tr><tr><td colspan=\"2\">1+2</td><td colspan=\"2\">1+3</td><td colspan=\"2\">1+4</td></tr><tr><td/><td/><td colspan=\"2\">69.1 70.4</td><td/></tr><tr><td/><td>RRM</td><td/><td>1</td><td>MaxEnt</td></tr><tr><td/><td/><td colspan=\"2\">English</td><td/></tr></table>",
"text": "ACE value results for the three languages on ACE'03 evaluation data.",
"num": null,
"html": null
}
}
}
}