{
"paper_id": "D10-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:53:06.911688Z"
},
"title": "Improving Mention Detection Robustness to Noisy Input",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T.J. Watson Research Center Yorktown Heights",
"location": {
"region": "NY",
"country": "U.S.A"
}
},
"email": ""
},
{
"first": "John",
"middle": [
"F"
],
"last": "Pitrelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T.J. Watson Research Center Yorktown Heights",
"location": {
"region": "NY",
"country": "U.S.A"
}
},
"email": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T.J. Watson Research Center Yorktown Heights",
"location": {
"region": "NY",
"country": "U.S.A"
}
},
"email": ""
},
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T.J. Watson Research Center Yorktown Heights",
"location": {
"region": "NY",
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Information-extraction (IE) research typically focuses on clean-text inputs. However, an IE engine serving real applications yields many false alarms due to less-well-formed input. For example, IE in a multilingual broadcast processing system has to deal with inaccurate automatic transcription and translation. The resulting presence of non-target-language text in this case, and non-language material interspersed in data from other applications, raise the research problem of making IE robust to such noisy input text. We address one such IE task: entity-mention detection. We describe augmenting a statistical mention-detection system in order to reduce false alarms from spurious passages. The diverse nature of input noise leads us to pursue a multi-faceted approach to robustness. For our English-language system, at various miss rates we eliminate 97% of false alarms on inputs from other Latin-alphabet languages. In another experiment, representing scenarios in which genre-specific training is infeasible, we process real financial-transactions text containing mixed languages and data-set codes. On these data, because we do not train on data like it, we achieve a smaller but significant improvement. These gains come with virtually no loss in accuracy on clean English text.",
"pdf_parse": {
"paper_id": "D10-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Information-extraction (IE) research typically focuses on clean-text inputs. However, an IE engine serving real applications yields many false alarms due to less-well-formed input. For example, IE in a multilingual broadcast processing system has to deal with inaccurate automatic transcription and translation. The resulting presence of non-target-language text in this case, and non-language material interspersed in data from other applications, raise the research problem of making IE robust to such noisy input text. We address one such IE task: entity-mention detection. We describe augmenting a statistical mention-detection system in order to reduce false alarms from spurious passages. The diverse nature of input noise leads us to pursue a multi-faceted approach to robustness. For our English-language system, at various miss rates we eliminate 97% of false alarms on inputs from other Latin-alphabet languages. In another experiment, representing scenarios in which genre-specific training is infeasible, we process real financial-transactions text containing mixed languages and data-set codes. On these data, because we do not train on data like it, we achieve a smaller but significant improvement. These gains come with virtually no loss in accuracy on clean English text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information-extraction (IE) research is typically performed on clean text in a predetermined language. Lately, IE has improved to the point of being usable for some real-world tasks whose accuracy requirements are reachable with current technology. These uses include media monitoring, topic alerts, summarization, population of databases for advanced search, etc. These uses often combine IE with technologies such as speech recognition, machine translation, topic clustering, and information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The propagation of IE technology from isolated use to aggregates with such other technologies, from NLP experts to other types of computer scientists, and from researchers to users, feeds back to the IE research community the need for additional investigation which we loosely refer to as \"information-extraction robustness\" research. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Broadcast monitoring demands that IE handle as input not only clean text, but also the transcripts output by speech recognizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Multilingual applications, and the imperfection of translation technology, require IE to contend with non-target-language text input (Pitrelli et al., 2008) .",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "(Pitrelli et al., 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Naive users at times input to IE other material which deviates from clean text, such as a PDF file that \"looks\" like plain text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. Search applications require IE to deal with databases which not only possess clean text but at times exhibit other complications like markup codes particular to narrow, applicationspecific data-format standards, for example, the excerpt from a financial-transactions data set shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 296,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Legacy industry-specific standards, such as that illustrated in this example, are part of long-established processes which are cumbersome to convert to a more-modern database format. Transaction data sets typically build up over a period of years, and as seen here, can exhibit peculiar mark-up interspersed with meaningful text. They also suffer complications arising from limited-size entry fields and a diversity of data-entry personnel, leading to effects like haphazard abbreviation and improper spacing, as shown. These issues greatly complicate the IE problem, particularly considering that adapting IE to such formats is hampered by the existence of a multitude of such \"standards\" and by lack of sufficient annotated data in each one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A typical state-of-the-art statistical IE engine will happily process such \"noisy\" inputs, and will typically provide garbage-in/garbage-out performance, embarrassingly reporting spurious \"information\" no human would ever mistake. Yet it is also inappropriate to discard such documents wholesale: even poor-quality inputs may have relevant information interspersed. This information can include accurate speech-recognition output, names which are recognizable even in wrong-language material, and clean target-language passages interleaved with the markup. Thus, here we address methods to make IE robust to such varied-quality inputs. Specifically, our overall goals are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 to skip processing non-language material such as standard or database-specific mark-up,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 to process all non-target-language text cautiously, catching interspersed target-language text as well as text which is compatible with the target language, e.g. person names which are the same in the target and non-target languages, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 to degrade gracefully when processing anomalous target-language material, while minimizing any disruption of the processing of clean, target-language text, and avoiding any necessity for explicit pre-classification of the genre of material being input to the system. Such explicit classification would be impractical in the presence of the interleaving and the unconstrained data formats from unpredetermined sources. We begin our robustness work by addressing an important and basic IE task: mention detection (MD). MD is the task of identifying and classifying textual references to entities in open-domain texts. Mentions may be of type \"named\" (e.g. John, Las Vegas), \"nominal\" (e.g. engineer, dentist) or \"pronominal\" (e.g. they, he). A mention also has a specific class which describes the type of entity it refers to. For instance, consider the following sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Julia Gillard, prime minister of Australia, declared she will enhance the country's economy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we see three mentions of one person entity: Julia Gillard, prime minister, and she; these mentions are of type named, nominal, and pronominal, respectively. Australia and country are mentions of type named and nominal, respectively, of a single geopolitical entity. Thus, the MD task is a more general and complex task than named-entity recognition, which aims at identifying and classifying only named mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach to IE has been to use language-independent algorithms, in order to facilitate reuse across languages, but we train them with language-specific data, for the sake of accuracy. Therefore, input is expected to be predominantly in a target language. However, real-world data genres inevitably include some mixed-language/non-linguistic input. Genre-specific training is typically infeasible due to such application-specific data sets being unannotated, motivating this line of research. Therefore, the goal of this study is to investigate schemes to make a language-specific MD engine robust to the types of interspersed non-target material described above. In these initial experiments, we work with English as the target language, though we aim to make our approach to robustness as target-language-independent as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While our ultimate goal is a language-independent approach to robustness, in these initial experiments, English is the target language. However, we process mixed-language material including real-world data with its own peculiar mark-up, text conventions including abbreviations, and a mix of languages, with the goal of English MD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We approach robust MD using a multi-stage strategy. First, non-target-character-set passages (here, non-Latin-alphabet) are identified and marked for non-processing. Then, following word-tokenization, we apply a language classifier to a sliding variable-length set of windows in order to generate features for each word indicative of how much the text around that word resembles good English, primarily in comparison to other Latin-alphabet languages. These features are used in a separate maximum-entropy classifier whose output is a single feature to add to the MD classifier. Additional features, primarily to distinguish English from non-language input, are added to MD as well. An example is the minimum of the number of letters and the number of digits in the \"word\", which when greater than zero often indicates database detritus. Then we run the MD classifier enhanced with these new robustness-oriented features. We evaluate using a detection-error trade-off (DET) (Martin et al., 1997) analysis, in addition to traditional precision/recall/F-measure.",
"cite_spans": [
{
"start": 970,
"end": 991,
"text": "(Martin et al., 1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. Section 2 discusses previous work. Section 3 describes the baseline maximum-entropy-based MD system. Section 4 introduces enhancements to the system to achieve robustness. Section 5 describes databases used for experiments, which are discussed in Section 6, and Section 7 draws conclusions and plots future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The MD task has close ties to named-entity recognition, which has been the focus of much recent research (Bikel et al., 1997; Borthwick et al., 1998; Tjong Kim Sang, 2002; Florian et al., 2003; Benajiba et al., 2009) , and has been at the center of several evaluations: MUC-6, MUC-7, CoNLL'02 and CoNLL'03 shared tasks. Usually, in the computational-linguistics literature, a named entity represents an instance of a location, a person, or an organization, and the named-entity-recognition task consists of identifying each individual occurrence of the name of such an entity appearing in the text. As stated earlier, in this paper we are interested in identification and classification of textual references to object/abstraction mentions, which can be either named, nominal or pronominal. This task has been a focus of interest in ACE since 2003. The most recent ACE evaluation campaign was in 2008.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Bikel et al., 1997;",
"ref_id": "BIBREF1"
},
{
"start": 126,
"end": 149,
"text": "Borthwick et al., 1998;",
"ref_id": "BIBREF2"
},
{
"start": 150,
"end": 171,
"text": "Tjong Kim Sang, 2002;",
"ref_id": "BIBREF17"
},
{
"start": 172,
"end": 193,
"text": "Florian et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 194,
"end": 216,
"text": "Benajiba et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work on mention detection",
"sec_num": "2"
},
{
"text": "Efforts to handle noisy data are still limited, especially for scenarios in which the system at decoding time does not have prior knowledge of the input data source. Previous work dealing with unstructured data assumes knowledge of the input data source. As an example, E. Minkov et al. (Minkov et al., 2005) assume that the input data is text from e-mails, and define special features to enhance the detection of named entities. Miller et al. (Miller et al., 2000) assume that the input data is the output of a speech or optical character recognition system, and hence extract new features for better named-entity recognition. In a different research problem, L. Yi et al. eliminate the noisy text from the document before performing data mining (Yi et al., 2003) . Hence, they do not try to process noisy data; instead, they remove it. The approach we propose in this paper does not assume prior knowledge of the data source. Also, we do not want to eliminate the noisy data, but rather attempt to detect the appropriate mentions, if any, that appear in that portion of the data.",
"cite_spans": [
{
"start": 289,
"end": 310,
"text": "(Minkov et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 446,
"end": 467,
"text": "(Miller et al., 2000)",
"ref_id": "BIBREF11"
},
{
"start": 749,
"end": 766,
"text": "(Yi et al., 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work on mention detection",
"sec_num": "2"
},
{
"text": "Similarly to classical NLP tasks such as base phrase chunking (Ramshaw and Marcus, 1999) and named-entity recognition (Tjong Kim Sang, 2002), we formulate the MD task as a sequence-classification problem, by assigning to each word token in the text a label indicating whether it starts a specific mention, is inside a specific mention, or is outside any mentions. We also assign to every non-outside label a class to specify entity type, e.g. person, organization, location, etc. We are interested in a statistical approach that can easily be adapted for several languages and that has the ability to integrate easily and make effective use of diverse sources of information to achieve high system performance. This is because, similar to many NLP tasks, good performance has been shown to depend heavily on integrating many sources of information (Florian et al., 2004) . We choose a Maximum Entropy Markov Model (MEMM) as described previously (Florian et al., 2004; Zitouni and Florian, 2009) . The maximum-entropy model is trained using the sequential conditional generalized iterative scaling (SCGIS) technique (Goodman, 2002) , and it uses a Gaussian prior for regularization (Chen and Rosenfeld, 2000) 1 .",
"cite_spans": [
{
"start": 62,
"end": 88,
"text": "(Ramshaw and Marcus, 1999)",
"ref_id": "BIBREF16"
},
{
"start": 845,
"end": 867,
"text": "(Florian et al., 2004)",
"ref_id": "BIBREF6"
},
{
"start": 942,
"end": 964,
"text": "(Florian et al., 2004;",
"ref_id": "BIBREF6"
},
{
"start": 965,
"end": 991,
"text": "Zitouni and Florian, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 1112,
"end": 1127,
"text": "(Goodman, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 1178,
"end": 1204,
"text": "(Chen and Rosenfeld, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention-detection algorithm",
"sec_num": "3"
},
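The sequence-labeling formulation above assigns each token a start/inside/outside label plus an entity class. A minimal sketch of that labeling scheme (the tag names and helper function are our illustration, not the authors' code):

```python
# Sketch of the sequence-labeling formulation described above (tag names are
# our assumption, not the paper's exact label set): each token gets B-<class>
# if it starts a mention, I-<class> if it is inside one, and O otherwise.
def spans_to_labels(tokens, mentions):
    """mentions: list of (start, end_exclusive, entity_class) token spans."""
    labels = ["O"] * len(tokens)
    for start, end, cls in mentions:
        labels[start] = "B-" + cls
        for j in range(start + 1, end):
            labels[j] = "I-" + cls
    return labels

tokens = ["Julia", "Gillard", ",", "prime", "minister", "of", "Australia"]
labels = spans_to_labels(tokens, [(0, 2, "PER"), (3, 5, "PER"), (6, 7, "GPE")])
```

The MEMM then predicts exactly these per-token labels, conditioning on the previous label and the features of the surrounding tokens.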
{
"text": "The features used by our mention-detection systems can be divided into the following categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention detection: standard features",
"sec_num": "3.1"
},
{
"text": "1. Lexical Features Lexical features are implemented as token n-grams spanning the current token, both preceding and following it. For a token x_i, token n-gram features will contain the previous n-1 tokens (x_{i-n+1}, ..., x_{i-1}) and the following n-1 tokens (x_{i+1}, ..., x_{i+n-1}). Setting n equal to 3 turned out to be a good choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention detection: standard features",
"sec_num": "3.1"
},
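A hedged sketch of this lexical featurization with n=3 (boundary padding with a placeholder token is our assumption; the paper does not specify edge handling):

```python
# Lexical n-gram features for token i with n=3, as described above: the
# previous n-1 and following n-1 tokens. "<PAD>" at sentence edges is our
# assumption, not the paper's.
def lexical_features(tokens, i, n=3):
    feats = {"cur": tokens[i]}
    for k in range(1, n):
        feats["prev_%d" % k] = tokens[i - k] if i - k >= 0 else "<PAD>"
        feats["next_%d" % k] = tokens[i + k] if i + k < len(tokens) else "<PAD>"
    return feats

feats = lexical_features("Julia Gillard declared she will".split(), 2)
```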
{
"text": "The gazetteerbased features we use are computed on tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gazetteer-based Features",
"sec_num": "2."
},
{
"text": "The gazetteers consist of several classes of dictionaries, including person names, country names, company names, etc. Dictionaries contain single names such as John or Boston, and also phrases such as Barack Obama, New York City, or The United States. During both training and decoding, when we encounter in the text a token or a sequence of tokens that completely matches an entry in a dictionary, we fire its corresponding class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gazetteer-based Features",
"sec_num": "2."
},
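An illustrative sketch of this complete-match firing (the dictionary contents, the longest-match-first scan, and the 4-token phrase cap are our assumptions for the demo, not the paper's configuration):

```python
# Gazetteer feature firing as described above: when a token sequence completely
# matches a dictionary entry, that dictionary's class fires for the covered
# tokens. Dictionary contents here are toy examples.
GAZETTEERS = {
    "PERSON": {("john",), ("barack", "obama")},
    "GPE": {("boston",), ("new", "york", "city")},
}

def fire_gazetteer_features(tokens, gazetteers=GAZETTEERS, max_len=4):
    lowered = [t.lower() for t in tokens]
    fired = [set() for _ in tokens]
    for i in range(len(tokens)):
        for length in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = tuple(lowered[i:i + length])
            for cls, entries in gazetteers.items():
                if phrase in entries:
                    for j in range(i, i + length):
                        fired[j].add(cls)
    return fired

fired = fire_gazetteer_features(["I", "met", "Barack", "Obama", "in", "Boston"])
```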
{
"text": "The use of this framework to build MD systems for clean English text has given very competitive results at ACE evaluations (Florian et al., 2006) . Trying other classifiers is always a good experiment, which we didn't pursue here for two reasons: first, the MEMM system used here is state-of-the-art, as proven in evaluations and competitions -while it is entirely possible that another system might get better results, we don't think the difference would be large. Second, we are interested in ways of improving performance on noisy data, and we expect any system to observe similar degradation in performance when presented with unexpected input -showing results for multiple classifier types might very well dilute the message, so we stuck to one classifier type.",
"cite_spans": [
{
"start": 123,
"end": 145,
"text": "(Florian et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gazetteer-based Features",
"sec_num": "2."
},
{
"text": "As stated above, our goal is to skip spans of characters which do not lend themselves to target-language MD, while minimizing impact on MD for target-language text, with English as the initial target language for our experiments. More specifically, our task is to process data automatically in any unpredetermined format from any source, during which we strive to avoid outputting spurious mentions on:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 non-language material, such as mark-up tags and other data-set detritus, as well as non-text data such as code or binaries likely mistakenly submitted to the MD system,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 non-target-character-set material, here, non-Latin-alphabet material, such as Arabic and Chinese in their native character sets, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 target-character-set material not in the target language, here, Latin-alphabet languages other than English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "It is important to note that this is not merely a document-classification problem; this non-target data is often interspersed with valid input text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "Mark-up is the obvious example of interspersing; however, other categories of non-target data can also interleave tightly with valid input. A few examples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 English text is sometimes infixed right in a Chinese sentence, such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 some translation algorithms will leave unchanged an untranslatable word, or will transliterate it into the target language using a character convention which may not be a standard known to the MD engine, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 some target-alphabet-but-non-target-language material will be compatible with the target language, particularly people's names. An example with English as the target language is Barack Obama in the Spanish text ...presidente de Estados Unidos, Barack Obama, dijo el d\u00eda 24 que .... Therefore, to minimize needless loss of processable material, a robustness algorithm ideally does a sliding analysis, in which, character-by-character or word-by-word, material may be deemed to be suitable to process. Furthermore, a variety of strategies will be needed to contend with the diverse nature of non-target material and the patterns in which it will appear among valid input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "Accordingly, the following is a summary of algorithmic enhancements to MD:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "1. detection of standard file formats, such as SGML, and associated detagging, 2. segmentation of the file into target- vs. non-target-character-set passages, such that the latter not be processed further,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "3. tokenization to determine word and sentence units, and 4. MD, augmented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "\u2022 Sentence-level categorization of likelihood of good English. \u2022 If \"clean\" English was detected, run the same clean baseline model as described in Section 3. \u2022 If the text is determined to be a bad fit to English, run an alternate maximum-entropy model that is heavily based on gazetteers, using only contextindependent (e.g. primarily gazetteerbased) features, to catch isolated obvious English/English-compatible names embedded in otherwise-foreign text. \u2022 If in between \"clean\" and \"bad\", use a \"mixed\" maximum-entropy MD model whose training data and feature set are augmented to handle interleaving of English with mark-up and other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "These MD-algorithm enhancements will be described in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements for robustness",
"sec_num": "4"
},
{
"text": "Some types of mark-up are well-known standards, such as SGML (Warmer and van Egmond, 1989) . Clearly the optimal way of dealing with them is to apply detectors of these specific formats, and associated detaggers, as done previously (Yi et al., 2003) . For this reason, standard mark-up is not a subject of the current study; rather, our concern is with markup peculiar to specific data sets, as described above, and so while this step is part of our overall strategy, it is not employed in the present experiments.",
"cite_spans": [
{
"start": 61,
"end": 90,
"text": "(Warmer and van Egmond, 1989)",
"ref_id": "BIBREF18"
},
{
"start": 232,
"end": 249,
"text": "(Yi et al., 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detection and detagging for standard file formats",
"sec_num": "4.1"
},
{
"text": "Some entity mentions may be recognizable in a non-target language which shares the target language's character set, for example, a person's name recognizable by English speakers in an otherwise-not-understandable Spanish sentence. However, non-target character sets, such as Arabic and Chinese when processing English, represent pure noise for an IE system. Therefore, deterministic character-set segmentation is applied, to mark non-target-character-set passages for non-processing by the remainder of the system, or, in a multilingual system, to be diverted to a subsystem suited to process that character set. Characters which can be ambiguous with regard to character set, such as some punctuation marks, are attached to target-character-set passages when possible, but are not considered to break non-target-character-set passages surrounding them on both sides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-set segmentation",
"sec_num": "4.2"
},
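A rough sketch of this deterministic segmentation. The scheme shown here (testing for Latin letters via Unicode character names, and letting ambiguous characters such as digits, punctuation, and whitespace ride along with the passage in progress rather than break it) is our simplification of the behavior described, not the paper's implementation:

```python
import unicodedata

# Split text into Latin vs. non-Latin passages; non-letter characters are
# ambiguous and stay with the current passage instead of breaking it.
def char_kind(ch):
    if not ch.isalpha():
        return None  # ambiguous: does not break the current passage
    return "LATIN" in unicodedata.name(ch, "")

def segment_charsets(text):
    passages, buf, latin = [], [], True
    for ch in text:
        kind = char_kind(ch)
        if kind is None or kind == latin:
            buf.append(ch)
        else:
            if buf:
                passages.append(("latin" if latin else "non-latin", "".join(buf)))
            buf, latin = [ch], kind
    if buf:
        passages.append(("latin" if latin else "non-latin", "".join(buf)))
    return passages

segs = segment_charsets("Hello 世界 world")
```

Non-Latin passages would then be skipped by the English MD pipeline or, in a multilingual system, routed to the matching subsystem.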
{
"text": "Subsequent processing is based on determination of the language of target-alphabet text. The fundamental unit of such processing is the target-alphabet word, necessitating tokenization at this point into word-level units. This step includes punctuation separation as well as the detection of sentence boundaries (Zimmerman et al., 2006) .",
"cite_spans": [
{
"start": 305,
"end": 329,
"text": "(Zimmerman et al., 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization",
"sec_num": "4.3"
},
{
"text": "After the preprocessing steps presented earlier, we detect mentions using a cascaded approach that combines several MD classifiers. Our goal is to select among maximum-entropy MD classifiers trained separately to represent different degrees of \"noisiness\" occurring in many genres of data, including machine-translation output, informal communications, mixed-language material, varied forms of non-standard database mark-up, etc. We somewhat arbitrarily choose to employ three classifiers as described below. We select a classifier based on a sentence-level determination of the material's fit to the target language. First, we build an n-gram language model on clean target-language training text. This language model is used to compute the perplexity (PP) of each sentence during decoding. The PP indicates the quality of the text in the target language (i.e. English) (Brown et al., 1992) ; the lower the PP, the cleaner the text. A sentence with a PP lower than a threshold \u03b81 is considered \"clean\" and hence the \"clean\" baseline MD model described in Section 3 is used to detect mentions in this sentence. The clean MD model has access to standard features described in Section 3.1. In the case where a sentence looks particularly badly matched to the target language, defined as PP > \u03b82, we use a \"gazetteer-based\" model based on a dictionary look-up to detect mentions; we retreat to seeking known mentions in a context-independent manner reflecting that most of the context consists of out-of-vocabulary words. The gazetteer-based MD model has access only to gazetteer information and does not look at lexical context during decoding, reflecting the likelihood that in this poor material, words surrounding any recognizable mention are foreign and therefore unusable. In the case of an in-between determination, that is, a sentence with \u03b81 < PP < \u03b82, we use a \"mixed\" MD model, based on augmenting the training data set and the feature set as described in the next section. The values of \u03b81 and \u03b82 are estimated empirically on a separate development data set that is also used to tune the Gaussian prior (Chen and Rosenfeld, 2000) . This set contains a mix of clean English and Latin-alphabet-but-non-English text that is not used for training and evaluation.",
"cite_spans": [
{
"start": 868,
"end": 888,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF3"
},
{
"start": 2122,
"end": 2148,
"text": "(Chen and Rosenfeld, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robust mention detection",
"sec_num": "4.4"
},
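The cascade's routing step reduces to a pair of threshold comparisons on sentence perplexity. A minimal sketch, with stand-in components: the perplexity scorer and the threshold values θ1, θ2 are placeholders (in the paper, PP comes from an n-gram LM over clean target-language text and the thresholds are tuned on development data):

```python
# Perplexity-based model selection as described above. Threshold values are
# illustrative placeholders, not the paper's tuned settings.
def select_md_model(pp, theta1=100.0, theta2=1000.0):
    if pp < theta1:
        return "clean"            # baseline model of Section 3, full feature set
    if pp > theta2:
        return "gazetteer-based"  # context-independent dictionary look-up only
    return "mixed"                # augmented training data and feature set
```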
{
"text": "The advantage of this combination strategy is that we do not need pre-defined knowledge of the text source in order to apply an appropriate model. The selection of the appropriate model to use for decoding is done automatically based on the PP value of the sentence. We will show in the experiments section how this combination strategy is effective not only in maintaining good performance on clean English text but also in improving performance on non-English data when compared to other source-specific MD models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust mention detection",
"sec_num": "4.4"
},
{
"text": "The mixed MD model is designed to process \"sentences\" mixing English with non-English, whether foreign-language or non-language material. Our approach is to augment model training compared to the clean baseline by adding non-English, mixed-language, and non-language material, and to augment the model's feature set with language-identification features more localized than the sentence-level perplexity described above, as well as other features designed primarily to distinguish non-language material such as mark-up codes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixed mention detection model",
"sec_num": "4.5"
},
{
"text": "We apply an n-gram-based language classifier (Prager, 1999) to variable-length sliding windows as follows. For each word, we run 1-through 6-preceding-word windows through the classifier, and 1-through 6-word windows beginning with the word, for a total of 12 windows, yielding for each window a result like:",
"cite_spans": [
{
"start": 45,
"end": 59,
"text": "(Prager, 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language-identification features",
"sec_num": "4.5.1"
},
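The window construction just described can be sketched as follows (a minimal illustration under our own naming, not the authors' implementation); each window would then be scored by the n-gram language classifier:

```python
# Build the 12 variable-length windows around a word position: for each
# n in 1..6, the n preceding words and the n words beginning at the
# position. Illustrative sketch; not the authors' code.

def windows_for_position(words, i, max_len=6):
    """Return the 12 windows (as word lists) for position i."""
    wins = []
    for n in range(1, max_len + 1):
        wins.append(words[max(0, i - n):i])  # n preceding words
        wins.append(words[i:i + n])          # n words starting at i
    return wins

sentence = "the quick brown fox jumps over the lazy dog".split()
print(len(windows_for_position(sentence, 4)))  # 12 windows per word
```

Windows near a sentence boundary simply come out shorter, so no special-casing is needed for the first or last few words.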
{
"text": "0.235 Swedish 0.148 English 0.134 French ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-identification features",
"sec_num": "4.5.1"
},
{
"text": "For each of the 12 results, we extract three features: the identity of the top-scoring language, here, Swedish; the confidence score in the top-scoring language, here, 0.235; and the score difference between the target language (English for these experiments) and the top-scoring non-target language, here, 0.148 \u2212 0.235 = \u22120.087. Thus we have a 36-feature vector for each word. We bin these and use them as input to a maximum-entropy classifier (separate from the MD classifier) which outputs \"English\" or \"Non-English\", and a confidence score. These scores in turn are binned into six categories to serve as a \"how-English-is-it\" feature in the augmented MD model. The language-identification classifier and the maximum-entropy \"how-English\" classifier are each trained on text data separate from each other and from the training and test sets for MD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-identification features",
"sec_num": "4.5.1"
},
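The three per-window features, and their combination into a 36-dimensional vector per word, can be sketched like this; the scores are the example values from the text, and all names here are our own illustrative choices rather than the authors' code:

```python
# For one window's language-ID scores, extract the three features named
# above: the top-scoring language, its confidence, and the difference
# between the target-language score and the best non-target score. With
# 12 windows per word this yields the 36-feature vector that is binned
# and fed to the "how-English" maximum-entropy classifier.

def window_features(scores, target="English"):
    """Return (top_language, top_score, target_minus_best_other)."""
    top_lang = max(scores, key=scores.get)
    best_other = max(s for l, s in scores.items() if l != target)
    return (top_lang, scores[top_lang], scores.get(target, 0.0) - best_other)

# Example scores from the running example in the text:
lang, conf, diff = window_features(
    {"Swedish": 0.235, "English": 0.148, "French": 0.134})
print(lang, conf, round(diff, 3))  # Swedish 0.235 -0.087
```

The negative difference here signals that the window looks more Swedish than English, matching the worked example above.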
{
"text": "The following features are designed to capture evidence of whether a \"word\" is in fact linguistic material or not: number of alphabetic characters, number of characters, maximum consecutive repetitions of a character, numbers of non-alphabetic and non-alphanumeric characters, fraction of characters which are alphabetic, fraction alphanumeric, and number of vowels. These features are part of the augmentation of the mixed MD model relative to the clean MD model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "4.5.2"
},
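The character-level features in this list can be computed as in the following sketch (our illustrative implementation, not the authors' code; the vowel set is an assumption suited to English-oriented text):

```python
# Word-shape features gauging whether a token is linguistic material:
# alphabetic/character counts, longest repeated-character run,
# non-alphabetic and non-alphanumeric counts, alphabetic and
# alphanumeric fractions, and vowel count. Illustrative sketch.

def shape_features(token):
    n = len(token)
    alpha = sum(c.isalpha() for c in token)
    alnum = sum(c.isalnum() for c in token)
    vowels = sum(c.lower() in "aeiou" for c in token)
    # longest run of a single repeated character
    max_run, run = (1 if token else 0), 1
    for a, b in zip(token, token[1:]):
        run = run + 1 if a == b else 1
        max_run = max(max_run, run)
    return {
        "n_alpha": alpha,
        "n_chars": n,
        "max_char_run": max_run,
        "n_non_alpha": n - alpha,
        "n_non_alnum": n - alnum,
        "frac_alpha": alpha / n if n else 0.0,
        "frac_alnum": alnum / n if n else 0.0,
        "n_vowels": vowels,
    }

print(shape_features("HUF"))
print(shape_features("====/ACC/===="))
```

Tokens resembling mark-up or field-delimiter codes score low on the alphabetic fractions and high on the repeated-character run, giving the mixed model evidence that a \"word\" is non-language material.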
{
"text": "Four data sets are used for our initial experiments. One, \"English\", consists of 367 documents totaling 170,000 words, drawn from web news stories from various sources and detagged to be plain text. This set is divided into 340 documents as a training set and 27 for testing, annotated as described in more detail elsewhere (Han, 2010) . These data average approximately 21 annotated mentions per 100 words.",
"cite_spans": [
{
"start": 324,
"end": 335,
"text": "(Han, 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "5"
},
{
"text": "The second set, \"Latin\", consists of 23 detagged web news articles from 11 non-English Latinalphabet languages totaling 31,000 words. Of these articles, 12 articles containing 19,000 words are used as a training set, with the remaining used for testing, and each set containing all 11 languages. They are annotated using the same annotation conventions as \"English\", and from the perspective of English; that is, only mentions which would be clear to an English speaker are labeled, such as Barack Obama in the Spanish example in Section 4. For this reason, these data average only approximately 5 mentions per 100 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "5"
},
{
"text": "The third, \"Transactions\", consists of approximately 60,000 words drawn from a text data set logging real financial transactions. Figure 1 shows example passages from this database, anonymized while preserving the character of the content.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data sets",
"sec_num": "5"
},
{
"text": "This data set logs transactions by a staff of customer-service representatives. English is the primary language, but owing to international clientele, occasionally representatives communicate in other languages, such as the German here, or in English but mentioning institutions in other countries, here, a Czech bank. Interspersed among text are codes specific to this application which delineate and identify various information fields and punctuate long pas-sages. The application also places constraints on legal characters, leading to the unusual representation of underline and the \"at\" sign as shown, making for an e-mail address which is human-readable but likely not obvious to a machine. Abbreviations represent terms particularly common in this application area, though they may not be obvious without adapting to the application; these include standards like HUF, a currency code which stands for Hungarian forint, and financial-transaction peculiarities like BNF for \"beneficiary\" as seen in Figure 1 . In short, good English is interspersed with nonlanguage content, foreign-language text, and rough English like data-entry errors and haphazard abbreviations. These data average 4 mentions per 100 words.",
"cite_spans": [],
"ref_spans": [
{
"start": 1005,
"end": 1013,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data sets",
"sec_num": "5"
},
{
"text": "Data sets with peculiarities analogous to those in this Transactions set are commonplace in a variety of settings. Training specific to data sets like this is often infeasible due to lack of labeled data, insufficient data for training, and the multitude of such data formats. For this reason, we do not train on Transactions, letting our testing on this data set serve as an example of testing on such data formats unseen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "5"
},
{
"text": "MD systems were trained to recognize the 116 entity-mention types shown in Table 1 , annotated as described previously (Han, 2010) . The clean-data classifier was trained on the English training data using the feature set described in Section 3.1. The classifier for \"mixed\"-quality data and the \"gazetteer\" model were each trained on that set plus the \"Latin\" training set and the supplemental set. In addition, \"mixed\" training included the additional features described in Section 4.5. The framework used to build the baseline MD system is similar to the one we used in the ACE evaluation 2 . This system has achieved competitive results with an F -measure of 82.7 when trained on the seven main types of ACE data with access to wordnet and part-of-speech-tag information as well as output of other MD and named-entity recognizers (Zitouni and Florian, 2008) .",
"cite_spans": [
{
"start": 119,
"end": 130,
"text": "(Han, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 834,
"end": 861,
"text": "(Zitouni and Florian, 2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "It is instructive to evaluate on the individual component systems as well as the combination, despite the fact that the individual components are not wellsuited to all the data sets, for example, the mixed and gazetteer systems being a poorer fit to the English task than the baseline, and vice versa for the Table 2 : Performance of clean, mixed, and gazetteer-based mention detection systems as well as their combination. Performance is presented in terms of Precision (P), Recall (R), and F -measure (F).",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "non-target data sets. Precision/recall/F -measure results are shown in Table 2 . Not surprisingly, the baseline system, intended for clean data, performs poorly on noisy data. The mixed and gazetteer systems, having a variety of noisy data in their training set, perform much better on the noisy conditions, particularly on Latin-alphabet-non-English data because that is one of the conditions included in its training, while Transactions remains a condition not covered in the training set and so shows less improvement. However, because the mixed classifier, and moreso the gazetteer classifier, are oriented to noisy data, on clean data they suffer in performance by 2.5 and 5 F -measure points, respectively. But system combination serves us well: it recovers all but 0.5 F -measure point of this loss, while also actually performing better on the noisy data sets than the two classifiers specifically targeted toward them, as can be seen in Table 2 . It is important to note that the major advantage of using the combination model is the fact that we do not have to know the data source in order to select the appropriate MD model to use. We assume that the data source is unknown, which is our claim in this work, and we show that we obtain better performance than using source-specific MD models. This reflects the fact that a noisy data set will in fact have portions with varying degrees of \"noise\", so the combination outperforms any single model targeted to a single particular level of noise, enabling the system to contend with such variability without the need for presegregating sub-types of data for noise level. The obtained improvement from the system combination over all other models is statistically significant based on the stratified bootstrap re-sampling significance test (Noreen, 1989) . We consider results statistically significant when p < 0.05, which is the case in this paper. 
This approach was used in the named-entityrecognition shared task of CoNNL-2002 3 .",
"cite_spans": [
{
"start": 1797,
"end": 1811,
"text": "(Noreen, 1989)",
"ref_id": "BIBREF13"
},
{
"start": 1977,
"end": 1989,
"text": "CoNNL-2002 3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 2",
"ref_id": null
},
{
"start": 946,
"end": 953,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "It should be noted that some completely-nontarget types of data, such as non-target-character set data, have been omitted from analysis here. Including them would make our system look comparatively stronger, as they would have only spurious mentions and so generate false alarms but no correct mentions in the baseline system, while our system deterministically removes them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "As mentioned above, we view MD robustness primarily as an effort to eliminate, relative to a baseline system, large volumes of spurious \"mentions\" detected in non-target input content, while minimiz-(a) DET plot for clean (baseline), mixed, gazetteer, and combination MD systems on the Latin-alphabetnon-English text. The clean system (upper curve) performs far worse than the other three systems designed to provide robustness; these systems in turn perform nearly indistinguishably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "(b) DET plot for clean (baseline), mixed, gazetteer, and combination MD systems on the Transactions data set. The clean system (upper/longer curve) reaches far higher false-alarm rates, while never approaching the lower miss rates achievable by any of the other three systems, which in turn perform comparably to each other. ing disruption of detection in target input. A secondary goal is recall in the event of occasional valid mentions in such non-target material. Thus, as input material degrades, precision increases in importance relative to recall. As such, we view precision and recall asymmetrically on this task, and so rather than evaluating purely in terms of F -measure, we perform a detection-error-trade-off (DET) (Martin et al., 1997) analysis, in which we plot a curve of miss rate on valid mentions vs. false-alarm rate, with the curve traced by varying a confidence threshold across its range. We measure false-alarm and miss rates relative to the number of actual mentions annotated in the data set:",
"cite_spans": [
{
"start": 729,
"end": 750,
"text": "(Martin et al., 1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "FA rate = # false alarms # annotated mentions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Miss rate = # misses # annotated mentions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
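Equations (1) and (2) can be computed as in this sketch, where mentions are represented as (start, end, type) spans; this is an illustration under our own representation, not the paper's evaluation code:

```python
# Compute the false-alarm and miss rates of Equations (1) and (2)
# against a gold mention set. Both rates are normalized by the number
# of annotated mentions. Illustrative sketch only.

def det_point(system, gold):
    """Return (fa_rate, miss_rate) for one system output vs. annotation."""
    sys_set, gold_set = set(system), set(gold)
    false_alarms = len(sys_set - gold_set)  # output but not annotated
    misses = len(gold_set - sys_set)        # annotated but not output
    n = len(gold_set)
    return false_alarms / n, misses / n

gold = [(0, 2, "PERSON"), (5, 6, "ORG"), (9, 10, "GPE"), (12, 13, "ORG")]
system = [(0, 2, "PERSON"), (5, 6, "ORG"), (20, 21, "ORG")]
print(det_point(system, gold))  # -> (0.25, 0.5)
```

Note that because the false-alarm rate is normalized by the number of annotated mentions rather than by system outputs, it can exceed 1, as with the baseline rate of 2.08 reported below; sweeping a confidence threshold over the system outputs traces the DET curve.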
{
"text": "where false alarms are \"mentions\" output by the system but not appearing in annotation, while misses are mentions which are annotated but do not appear in the system output. Each mention is treated equally in this analysis, so frequently-recurring entity/mention types weigh on the results accordingly. Figure 2a shows a DET plot for the clean, mixed, gazetteer, and combination systems on the \"Latin\" data set, while Figure 2b shows the analogous plot for the \"Transactions\" data set. The drastic gains made over the baseline system by the three experimental systems are evident in the plots. For example, on Latin, choosing an operating point of a miss rate of 0.6 (nearly the best achievable by the clean system), we find that the robustness-oriented systems eliminate 97% of the false alarms of the clean baseline system, as the plot shows false-alarm rates near 0.07 compared to the baseline's of 2.08. Gains on Transaction data are more modest, owing to this case representing a data genre not included in training. It should be noted that the jaggedness of the Transaction curves traces to the repetitive nature of some of the terms in this data set.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 312,
"text": "Figure 2a",
"ref_id": "FIGREF1"
},
{
"start": 418,
"end": 427,
"text": "Figure 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "In making a system more oriented toward robustness in the face of non-target inputs, it is important to quantify the effect of these systems being lessoriented toward clean, target-language text. Figure 3 shows the analogous DET plot for the English test set, showing that achieving robustness through the combination system comes at a small cost to accuracy on the text the original system is trained to process.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "For information-extraction systems to be useful, their performance must degrade gracefully when confronted with inputs which deviate from ideal and/or derive from unknown sources in unknown formats. Imperfectly-translated, mixed-language, marked-up text and non-language material must not Figure 3 : DET plot for clean (baseline), mixed, gazetteer, and combination MD systems on clean English text, verifying that performance by the clean system (lowest curve) is very closely approximated by the combination system (second-lowest curve), while the mixed system performs somewhat worse and the gazetteer system (top curve), worse still, reflecting that these systems are increasingly oriented toward noisy inputs.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 297,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "be processed in a garbage-in-garbage-out fashion merely because the system was designed only to handle clean text in one language. Thus we have embarked on information-extraction-robustness work, to improve performance on imperfect inputs while minimizing disruption of processing of clean text. We have demonstrated that for one IE task, mention detection, a multi-faceted approach, motivated by the diversity of input data imperfections, can eliminate a large proportion of the spurious outputs compared to a system trained on the target input, at a relatively small cost of accuracy on that target input. This outcome is achieved by a system-combination approach in which a perplexity-based measure of how well the input matches the target language is used to select among models designed to deal with such varying levels of noise. Rather than relying on explicit recognition of genre of source data, the experimental system merely does its own assessment of how much each sentence-sized chunk matches the target language, an important feature in the case of unknown text sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Chief among directions for further work is to continue to improve performance on noisy data, and to strengthen our findings via larger data sets. Additionally, we look forward to expanding analysis to different types of imperfect input, such as machinetranslation output, different types of mark-up, and different genres of real data. Further work should also explore the degree to which the approach to achieving robustness must vary according to the tar-get language. Finally, robustness work should be expanded to other information-extraction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Note that the resulting model cannot really be called a maximum-entropy model, as it does not yield the model which has the maximum entropy (the second term in the product), but rather is a maximum-a-posteriori model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cnts.ua.ac.be/conll2002/ner/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Ben Han, Anuska Renta, Veronique Baloup-Kovalenko and Owais Akhtar for their help with annotation. This work was supported in part by DARPA under contract HR0011-08-C-0110.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Arabic named entity recognition: A feature-driven study",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Benajiba",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2009,
"venue": "the special issue on Processing Morphologically Rich Languages of the IEEE Transaction on Audio, Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Benajiba, M. Diab, and P. Rosso. 2009. Arabic named entity recognition: A feature-driven study. In the spe- cial issue on Processing Morphologically Rich Lan- guages of the IEEE Transaction on Audio, Speech and Language.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Nymble: a high-performance learning namefinder",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ANLP-97",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bikel, S. Miller, R. Schwartz, and R. Weischedel. 1997. Nymble: a high-performance learning name- finder. In Proceedings of ANLP-97, pages 194-201.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploiting diverse knowledge sources via maximum entropy in named entity recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Borthwick",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sterling",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Borthwick, J. Sterling, E. Agichtein, and R. Grishman. 1998. Exploiting diverse knowledge sources via max- imum entropy in named entity recognition.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An estimate of an upper bound for the entropy of English",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, J. C. Lai, and R. L. Mercer. 1992. An estimate of an up- per bound for the entropy of English. Computational Linguistics, 18(1), March.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A survey of smoothing techniques for ME models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transaction on Speech and Audio Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Chen and R. Rosenfeld. 2000. A survey of smooth- ing techniques for ME models. IEEE Transaction on Speech and Audio Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Named entity recognition through classifier combination",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2003,
"venue": "Conference on Computational Natural Language Learning -CoNLL-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Florian, A. Ittycheriah, H. Jing, and T. Zhang. 2003. Named entity recognition through classifier combina- tion. In Conference on Computational Natural Lan- guage Learning -CoNLL-2003, Edmonton, Canada, May.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A statistical model for multilingual entity detection and tracking",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kamb- hatla, X. Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proceedings of HLT-NAACL 2004, pages 1-8.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Factorizing complex models: A case study in mention detection",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Zitouni",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "473--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Florian, H. Jing, N. Kambhatla, and I. Zitouni. 2006. Factorizing complex models: A case study in men- tion detection. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computa- tional Linguistics, pages 473-480, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sequential conditional generalized iterative scaling",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL'02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goodman. 2002. Sequential conditional generalized iterative scaling. In Proceedings of ACL'02.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Klue annotation guidelines -version 2.0",
"authors": [
{
"first": "D",
"middle": [
"B"
],
"last": "Han",
"suffix": ""
}
],
"year": 2010,
"venue": "IBM Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. B. Han. 2010. Klue annotation guidelines -version 2.0. Technical Report RC25042, IBM Research, Au- gust.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The DET curve in assessment of detection task performance",
"authors": [
{
"first": "A",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kamm",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ordowski",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Przybocki",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the European Conference on Speech Communication and Technology (Eurospeech)",
"volume": "",
"issue": "",
"pages": "1895--1898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki. 1997. The DET curve in assessment of detection task performance. In Proceedings of the European Conference on Speech Communication and Technology (Eurospeech), pages 1895-1898. Rhodes, Greece.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Named entity extraction from noisy input: speech and OCR",
"authors": [
{
"first": "D",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Boisen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the sixth conference on Applied natural language processing",
"volume": "",
"issue": "",
"pages": "316--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Miller, S. Boisen, R. Schwartz, R. Stone, and R. Weischedel. 2000. Named entity extraction from noisy input: speech and OCR. In Proceedings of the sixth conference on Applied natural language process- ing, pages 316-324, Morristown, NJ, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extracting personal names from email: Applying named entity recognition to informal text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Minkov",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "W",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "443--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Minkov, R. C. Wang, and W. W. Cohen. 2005. Ex- tracting personal names from email: Applying named entity recognition to informal text. In Proceedings of Human Language Technology Conference and Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 443-450, Vancouver, British Columbia, Canada, October. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Computer-Intensive Methods for Testing Hypotheses",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses. John Wiley Sons.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Aggregating Distributed STT, MT, and Information Extraction Engines: The GALE Interoperability-Demo System",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Pitrelli",
"suffix": ""
},
{
"first": "B",
"middle": [
"L"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "E",
"middle": [
"A"
],
"last": "Epstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kiecza",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Quinn",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ramaswamy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Virga",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. F. Pitrelli, B. L. Lewis, E. A. Epstein, M. Franz, D. Kiecza, J. L. Quinn, G. Ramaswamy, A. Srivas- tava, and P. Virga. 2008. Aggregating Distributed STT, MT, and Information Extraction Engines: The GALE Interoperability-Demo System. In Interspeech. Brisbane, NSW, Australia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Linguini: Language identification for multilingual documents",
"authors": [
{
"first": "M",
"middle": [],
"last": "Prager",
"suffix": ""
}
],
"year": 1999,
"venue": "In Journal of Management Information Systems",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Prager. 1999. Linguini: Language identification for multilingual documents. In Journal of Management Information Systems, pages 1-11.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Processing Using Very Large Corpora",
"volume": "",
"issue": "",
"pages": "157--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Ramshaw and M. Marcus. 1999. Text chunking using transformation-based learning. In S. Armstrong, K.W. Church, P. Isabelle, S. Manzi, E. Tzoukermann, and D. Yarowsky, editors, Natural Language Processing Using Very Large Corpora, pages 157-176. Kluwer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction to the conll-2002 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "E",
"middle": [
"F"
],
"last": "Tjong Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of CoNLL-2002",
"volume": "",
"issue": "",
"pages": "155--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. F. Tjong Kim Sang. 2002. Introduction to the conll- 2002 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2002, pages 155-158. Taipei, Taiwan.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The implementation of the Amsterdam SGML parser",
"authors": [
{
"first": "J",
"middle": [],
"last": "Warmer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Van Egmond",
"suffix": ""
}
],
"year": 1989,
"venue": "Electron. Publ. Origin. Dissem. Des",
"volume": "2",
"issue": "2",
"pages": "65--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Warmer and S. van Egmond. 1989. The implementa- tion of the Amsterdam SGML parser. Electron. Publ. Origin. Dissem. Des., 2(2):65-90.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Eliminating noisy information in web pages for data mining",
"authors": [
{
"first": "L",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "KDD '03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "296--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Yi, B. Liu, and X. Li. 2003. Eliminating noisy in- formation in web pages for data mining. In KDD '03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 296-305, New York, NY, USA. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The ICSI+ multilingual sentence segmentation system",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zimmerman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mirghafori",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gottlieb",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "117--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Zimmerman, D. Hakkani-Tur, J. Fung, N. Mirghafori, L. Gottlieb, E. Shriberg, and Y. Liu. 2006. The ICSI+ multilingual sentence segmentation system. In Interspeech, pages 117-120, Pittsburgh, Pennsylvania, September.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Mention detection crossing the language barrier",
"authors": [
{
"first": "I",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP'08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Zitouni and R. Florian. 2008. Mention detection crossing the language barrier. In Proceedings of EMNLP'08, Honolulu, Hawaii, October.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Cross-language information propagation for Arabic mention detection",
"authors": [
{
"first": "I",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "8",
"issue": "4",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Zitouni and R. Florian. 2009. Cross-language informa- tion propagation for Arabic mention detection. ACM Transactions on Asian Language Information Process- ing (TALIP), 8(4):1-21.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Example application-specific text, in this case from financial transactions.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "DET plots for Latin-alphabet-non-English and Transactions data sets",
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "RESULT OF THE PURCHASE OFFER ENDED ON 23 MAR 2008 CALDRADE LTD. IS POSSESSING WITH MORE THEN 90 PER CENT VOTING RIGHT OF SLICE. THEREFOR CALDRADE LTD. IS EXERCISING PURCHASE RIGHTS FOR ALL SLICE SHARES WHICH ARE CURRENTLY NOT INHIS OWN. PURCHASE PRICE: HUF 1.940 PER SHARE. PLEASE :58E::ADTX//NOTE THAT THOSE SHARES WHICH WILL NOT BE PRESENTED TO THE OFFER WILL BE CANCELLED AND INVALID. RUECKGABE DES BETRAGES LT. ANZBA43 M ZWECKS RUECKGABE IN AUD. URSPR. ZU UNSEREM ZA MIT REF. 0170252313279065 UND IHRE RUECKG. :42:/BNF/UNSERE REF:",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>:54D://121000358</td></tr><tr><td>BANK OF BOSTON</td></tr><tr><td>:55D:/0148280005</td></tr><tr><td>NEVADA DEPT.OF VET.94C RECOV.FD</td></tr><tr><td>-5:MAC:E19DECA8CHK:641EB09B8968</td></tr><tr><td>USING OF FIELD 59: ONLY /INS/ WHEN</td></tr><tr><td>FOLLOWED BY BCC CODE IN CASE</td></tr><tr><td>OF QUESTIONS DONT HESITATE TO</td></tr><tr><td>CONTACT US QUOTING REFERENCE</td></tr><tr><td>NON-STC CHARGES OR VIA E-MAIL:</td></tr><tr><td>YOVANKA(UL)BRATASOVA(AT)BOA.CZ.</td></tr><tr><td>BEST REGARDS</td></tr><tr><td>BANKA OBCHODNIKA, A.S. PRAGUE, CZ</td></tr><tr><td>:58E::ADTX//++ ADDITIONAL</td></tr><tr><td>INFORMATION ++ PLEASE BE</td></tr><tr><td>INFORMED THAT AS A :58:SIE SELBST</td></tr><tr><td>TRN/REF:515220 035</td></tr><tr><td>:78:</td></tr></table>"
}
}
}
}