{
"paper_id": "W99-0209",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:09:47.620906Z"
},
"title": "Orthographic Co-Reference Resolution Between Proper Nouns Through the Calculation of the Relation of \"Replicancia\"",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Amo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Alcalá, Crta. Madrid-Barcelona km. 33",
"location": {
"addrLine": "600 Alcalá de Henares-Madrid",
"country": "SPAIN"
}
},
"email": "pedro.amo@alcala.es"
},
{
"first": "Francisco",
"middle": [
"L"
],
"last": "Ferreras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Alcalá, Crta. Madrid-Barcelona km. 33",
"location": {
"addrLine": "600 Alcalá de Henares-Madrid",
"country": "SPAIN"
}
},
"email": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Cruz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Alcalá, Crta. Madrid-Barcelona km. 33",
"location": {
"addrLine": "600 Alcalá de Henares-Madrid",
"country": "SPAIN"
}
},
"email": "fernando.cruz@alcala.es"
},
{
"first": "Saturnino",
"middle": [],
"last": "Maldonado",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Alcalá, Crta. Madrid-Barcelona km. 33",
"location": {
"addrLine": "600 Alcalá de Henares-Madrid",
"country": "SPAIN"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Nowadays there is growing research activity centred on the automated processing of texts in electronic form. One of the most common problems in this field is co-reference resolution, i.e. determining when, in one or more texts, different descriptions refer to the same entity. Full co-reference resolution is difficult to achieve and too computationally demanding; moreover, in many applications it is enough to group the majority of co-referential expressions (expressions with the same referent) by means of a partial analysis. Our research focuses on co-reference resolution restricted to proper names, and we start from the definition of a new relation called \"replicancia\". Although the replicancia relation coincides with identity neither extensionally nor intensionally, according to the tests we have carried out, the error produced when we take replicantes as co-referents is admissible in applications where very high processing speed takes priority.",
"pdf_parse": {
"paper_id": "W99-0209",
"_pdf_hash": "",
"abstract": [
{
"text": "Nowadays there is growing research activity centred on the automated processing of texts in electronic form. One of the most common problems in this field is co-reference resolution, i.e. determining when, in one or more texts, different descriptions refer to the same entity. Full co-reference resolution is difficult to achieve and too computationally demanding; moreover, in many applications it is enough to group the majority of co-referential expressions (expressions with the same referent) by means of a partial analysis. Our research focuses on co-reference resolution restricted to proper names, and we start from the definition of a new relation called \"replicancia\". Although the replicancia relation coincides with identity neither extensionally nor intensionally, according to the tests we have carried out, the error produced when we take replicantes as co-referents is admissible in applications where very high processing speed takes priority.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The proliferation of texts in electronic form during the last two decades has livened up the interest in Information Retrieval and given rise to new disciplines such as Information Extraction or automatic summarization. These three disciplines have in common the operation of Natural Language Processing techniques (Jacobs and Rau, 1993) , which thus can evolve synergically.",
"cite_spans": [
{
"start": 315,
"end": 337,
"text": "(Jacobs and Rau, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Identification and treatment of noun phrases is one of the fields of interest shared both by Information Retrieval and Information Extraction. Such interest must be understood within the trend to carry out only partial analysis of texts so as to process them in a reasonable time (Chinchor et al., 1993) .",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "(Chinchor et al., 1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The present work proposes a new instrument designed for the treatment of proper nouns and other simple noun phrases in texts written in Spanish. The tool can be used both in Information Retrieval and in Information Extraction systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Information Retrieval systems aim to discriminate between the documents which form the system's entry, according to the information need posed by the user. In the field of Information Retrieval we can find two different approaches: information filtering -or routing-and retrospective, also known as ad hoc. In the first modality, the information need remains fixed and the documents which form the entry are always new. The ad hoc modality retrieves relevant texts from a relatively static set of documents but, in contrast, admits changing information needs (Oard, 1996) .",
"cite_spans": [
{
"start": 559,
"end": 571,
"text": "(Oard, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval.",
"sec_num": "1.1"
},
{
"text": "Information Retrieval systems make up representations of the texts and compare them with the representation of the information need raised by the user. The representation form most commonly used is a vector whose coordinates depend on the terms' frequency of occurrence, where such \"terms\" are elements of the represented text which can coincide with words, stems or word associations (n-grams) (Evans and Zhai, 1996) .",
"cite_spans": [
{
"start": 395,
"end": 417,
"text": "(Evans and Zhai, 1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval.",
"sec_num": "1.1"
},
{
"text": "At the present time designers try to enrich the sets of terms used in the representations with noun phrases or parts of them as, for example, proper names. Thompson and Dozier conclude in (Thompson and Dozier, 1997) that: first, the recognition of proper nouns in the texts can be done with remarkable effectiveness; second, proper nouns occur frequently enough, in documents as well as in queries [1], to warrant their separate treatment in document retrieval, at least when working with case law documents or press news; third, the inclusion of proper nouns as terms in the documents' representations can improve information retrieval when proper nouns appear in the query.",
"cite_spans": [
{
"start": 188,
"end": 215,
"text": "(Thompson and Dozier, 1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval.",
"sec_num": "1.1"
},
{
"text": "As Gerald Salton has noted (Salton and Allan, 1995) , in the field of information retrieval determining the meaning of words is not as important as ascertaining whether the meaning of the terms within the detection need coincides with the meaning of the terms in each document. When the terms are proper nouns we shall talk not of meaning but of reference, so it will be interesting to know whether the referents of the proper nouns included in the query coincide with the referents of the proper nouns which appear in each document.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "(Salton and Allan, 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval.",
"sec_num": "1.1"
},
{
"text": "Referents' resolution can not be done without an additional context (Thompson and Dozier, 1997) . Nonetheless, in an Information Retrieval process we cannot count on the additional information of a data base register, such as a personal identification number, profession or age. Nor can we make a linguistic analysis as thorough as can be done in a language understanding system or in an Information Extraction system. For these reasons, the identification of proper nouns as co-referents is usually confined to verifying whether the superficial forms of the two nouns under examination are close enough to suppose that both of them refer to the same individual or entity.",
"cite_spans": [
{
"start": 68,
"end": 95,
"text": "(Thompson and Dozier, 1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval.",
"sec_num": "1.1"
},
{
"text": "Co-reference resolution is difficult because it demands the establishment of relationships between linguistic expressions -for example, proper names- and the entities denoted. To establish the co-reference between two expressions, we must first identify the referent of each one of them and then check whether both coincide.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aim of the Work",
"sec_num": "1.2"
},
{
"text": "[1] \"Query\" is the translation of the user's information need into a format appropriate for the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aim of the Work",
"sec_num": "1.2"
},
{
"text": "In this work we propose a tool to resolve proper-noun co-reference based only on the examination of those nouns' superficial form. To carry out this examination we have defined, in Section Two, the relation of replicancia. This relation links pairs of nouns -replicantes- which show a certain resemblance between their superficial forms and usually have the same referent. In Section Three we propose an algorithm for the calculation of replicancia. This algorithm has the ability to learn pairs of replicantes and also allows the manual introduction of pairs of proper nouns which have the same referent and a high frequency of occurrence, although their orthographic forms do not bear any similarity at all. Section Four contains the results of the evaluation of co-reference resolution between proper nouns using only the relation of replicancia. Finally, in Section Five we draw some conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aim of the Work",
"sec_num": "1.2"
},
{
"text": "The need for an algorithm capable of resolving proper-noun co-reference in a simple way has been pointed out by several authors. Dragomir Radev, in (Radev and McKeown, 1997) , includes, among the forthcoming extensions of the PROFILE system, an algorithm \"...that will match different instances of the same entity appearing in different syntactic forms -e.g. to establish that 'PLO' is an alias for 'Palestine Liberation Organization'...\"",
"cite_spans": [
{
"start": 148,
"end": 173,
"text": "(Radev and McKeown, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent",
"sec_num": "2.1"
},
{
"text": "The second antecedent is (Bikel et al., 1998) , where Bikel and his collaborators say about the future improvements of their own system: \"... We would like to incorporate the following into the current model: ...an aliasing algorithm, which dynamically updates the model (where e.g. IBM is an alias of International Business Machines)...\"",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Bikel et al., 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent",
"sec_num": "2.1"
},
{
"text": "We call replicancia the relation which links proper nouns that presumably refer to the same entity if we attend exclusively to the nouns themselves, without paying attention to their respective contexts or to what their actual referents may be. We shall call replicantes the nouns which maintain a relation of replicancia between them. We also assign the label ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": "2.2"
},
{
"text": "Once the replicancia relation is defined, we are in a position to present the algorithm devised to calculate it. Although the algorithm has been developed in Prolog, we present it here in a pseudocode, similar to that used in PASCAL, which uses the following notation: the symbols \",\", \"/\" and \"¬\" represent the conjunction, disjunction and negation of terms, respectively; the brackets \"(...)\" delimit optional units; braces \"{... }\" are used to establish the precedence of logical operators; \"::=\" is the symbol chosen for definition and \":=\" refers to the assignment of values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
{
"text": "The calculation of the replicancia relation can impose a heavy computational burden if it is not somehow limited. The variety of forms in which two nouns can be mutually replicantes forces us to cover a long distance before being able to decide that two candidates are linked by such a relation. That is why we have provided our algorithm with two instruments to reduce the computational burden: the first is its ability to learn, which permits the automatic creation of a data base of pairs of proper nouns which are mutually replicantes; the second is a filter which, based on a fast analysis of the initials of the nouns under comparison, rejects most of the pairs of nouns that are not mutually replicantes. The main predicate can be expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
{
"text": "replicante(N1,N2) ::= N1c = N2c / replicante_guardado(N1,N2) / filtro(N1,N2), resto_replicante(N1,N2), guarda_replicante(N1,N2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
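The control flow of the main predicate can be sketched in Python. This is a minimal illustration, not the authors' implementation: canonical() follows footnote 3 (small letters, accents stripped), while stored_pairs, initials_filter() and the rest callback are hypothetical stand-ins for replicante_guardado, filtro and resto_replicante.

```python
import unicodedata

def canonical(name: str) -> str:
    """Canonic form per footnote 3: lower case, accents removed."""
    nfkd = unicodedata.normalize("NFKD", name)
    return "".join(c for c in nfkd if not unicodedata.combining(c)).lower()

# Learned pairs of replicantes (replicante_guardado) -- starts empty here.
stored_pairs = set()

def initials_filter(n1: str, n2: str) -> bool:
    """Cheap rejection filter based on the initials of the lexemes.
    The exact test is not given in the text; sharing some initial is an
    assumption used here for illustration."""
    i1 = {w[0] for w in canonical(n1).split()}
    i2 = {w[0] for w in canonical(n2).split()}
    return bool(i1 & i2)

def replicante(n1: str, n2: str, rest=lambda a, b: False) -> bool:
    """Mirrors the main predicate: canonical equality, a stored pair,
    or filter + full check (resto_replicante is passed in as `rest`)."""
    c1, c2 = canonical(n1), canonical(n2)
    if c1 == c2:
        return True
    if (c1, c2) in stored_pairs or (c2, c1) in stored_pairs:
        return True
    if initials_filter(n1, n2) and rest(n1, n2):
        stored_pairs.add((c1, c2))  # guarda_replicante: learn the pair
        return True
    return False

print(replicante("José María Aznar", "jose maria aznar"))  # True
print(replicante("Madrid", "Barcelona"))                   # False
```

The three-way disjunction mirrors the "/" operators of the pseudocode, and the learning step only runs after the expensive check succeeds, matching the final conjunct of the predicate.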
{
"text": "where Nxc represents the canonic form of noun Nx and the predicate resto_replicante is defined by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
{
"text": "resto_replicante(N1,N2) ::=",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
{
"text": "siglas(N1,N2) / sin_prep(D12), {N2 = N1 - D12 / version(N1,N2)} / suprime_nexos(N1,N1s), {N2 = N1s - D12 / version(N1s,N2)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
{
"text": "where Nx-Ny represents the sequence of lexemes of noun Nx not present in noun Ny, and D12 is N1-N2; the predicate sin_prep(Nx) is true for nouns Nx which do not include prepositions; suprime_nexos eliminates from the noun the lexemes of N1 which do not begin with a capital letter; finally, the predicate version(Nx,Ny) is satisfied if every lexeme of Ny is a version_palabra of one of the lexemes of Nx, without any alteration in the relative order of occurrence of each homologous lexeme, as we have already indicated in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
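The order-preserving matching performed by version(Nx,Ny) amounts to a greedy subsequence scan over the lexemes. The sketch below assumes a placeholder version_palabra (a simple prefix test, as with abbreviations); the paper's actual version_palabra definition is not reproduced here.

```python
def version(nx: str, ny: str,
            version_palabra=lambda a, b: a.startswith(b)) -> bool:
    """True if every lexeme of Ny is a version of some lexeme of Nx,
    preserving the relative order of the homologous lexemes.
    version_palabra is a stand-in: here, a prefix match."""
    lex_x = nx.split()
    i = 0
    for w in ny.split():
        # advance through Nx until a lexeme that w is a version of
        while i < len(lex_x) and not version_palabra(lex_x[i], w):
            i += 1
        if i == len(lex_x):
            return False  # no ordered match for this lexeme of Ny
        i += 1  # the next lexeme of Ny must match further on in Nx
    return True

print(version("Jose Maria Aznar", "Jose Aznar"))  # True: order preserved
print(version("Jose Maria Aznar", "Aznar Jose"))  # False: order violated
```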
{
"text": "In order to evaluate this algorithm we have designed the experiment described in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicancia Calculation Algorithm",
"sec_num": "3"
},
{
"text": "We have defined the replicancia relation as an instrument for the resolution of co-reference between proper nouns, although we are perfectly aware of its limitations in identifying as co-referential nouns which are not linked by an orthographic relation, as occurs with nicknames and familiar names (José and Pepe, for example).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Results",
"sec_num": "4"
},
{
"text": "In order not to conceal these limitations we have designed an experiment to evaluate the co-reference between proper nouns instead of evaluating the algorithm for replicancia resolution, because this relation is only useful inasmuch as it is capable of resolving co-reference between proper nouns. In the evaluation of natural language processing systems it is very important to decide which collection of texts is going to be used. The current trend is not to use texts specifically prepared for the evaluation, but normal texts, that is to say, texts similar to those which the system will use in its normal operation. We have chosen documents available on the World Wide Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Results",
"sec_num": "4"
},
{
"text": "In our evaluation we will use a manually processed corpus composed of 100 documents with an extension scarcely over 1 MB, HTML",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Results",
"sec_num": "4"
},
{
"text": "Vanguardia Electrónica. The size of the documents varies between 4 kB and 25 kB, the average being around 10 kB. The use of this variety of sources to obtain the documents which form the corpus is very advisable, because people who work for the same newspaper tend to write similarly; choosing different document sources brings us closer to the stylistic diversity characteristic of the texts available on the World Wide Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "La",
"sec_num": null
},
{
"text": "The result of co-reference resolution is the grouping of the document's selected objects -the proper nouns- into classes. The human analyst designs a template which includes the different instances of the same entity present in the document, and then compares it with the system's response, this being another grouping of objects into co-reference classes. From this comparison we draw the system's quality evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "La",
"sec_num": null
},
{
"text": "The evaluation procedure chosen is based on the measures used in MUC conferences (MUC-7, 1997 ) (Chinchor, 1997) (Hirschman, 1997) . In MUC evaluations, the results of co-reference analysis are classified according to three categories: COR - Correct. A result is correct when there is full coincidence between the contents of the template and the system's response.",
"cite_spans": [
{
"start": 81,
"end": 93,
"text": "(MUC-7, 1997",
"ref_id": null
},
{
"start": 96,
"end": 112,
"text": "(Chinchor, 1997)",
"ref_id": "BIBREF1"
},
{
"start": 113,
"end": 130,
"text": "(Hirschman, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "La",
"sec_num": null
},
{
"text": "Missing. We consider a result missing when it appears in the template but not in the response. SPU - Spurious. We consider a result spurious when it does not appear in the template although it is found in the system's response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
{
"text": "With the cardinals of these three classes we obtain two intermediate magnitudes which will be useful to calculate merit figures [5]: POS (possible) and ACT (actual). From them we derive recall (REC), precision (PRE) and the F measure (Rijsbergen, 1980) . We use the F measure with the control parameter β=1 to guarantee the same weight for recall and precision. Its general expression is:",
"cite_spans": [
{
"start": 234,
"end": 252,
"text": "(Rijsbergen, 1980)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
{
"text": "F = (β² + 1)·PRE·REC / (β²·PRE + REC)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
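With β=1 the expression reduces to the harmonic mean of precision and recall. A direct transcription of the formula, using a hypothetical helper name:

```python
def f_measure(pre: float, rec: float, beta: float = 1.0) -> float:
    """F = (beta^2 + 1) * PRE * REC / (beta^2 * PRE + REC)."""
    if pre == 0.0 and rec == 0.0:
        return 0.0  # guard: both measures zero
    b2 = beta * beta
    return (b2 + 1.0) * pre * rec / (b2 * pre + rec)

# With beta = 1, recall and precision carry the same weight:
print(f_measure(0.8, 0.6))  # 2*0.8*0.6/(0.8+0.6) = 0.6857...
```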
{
"text": "From the cardinals of classes COR, MIS and SPU we obtain the measures REC, PRE and F, the last one with parameter β=1. With all these data we fill in table I-1 of Annex I, where each line shows the metrics and data of a document. Table 4 .1 shows overall results.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
{
"text": "We have carried out the evaluation of the calculation of replicantes bearing in mind the use we intend to make of that relation. We have counted the cardinals of the co-reference classes among the proper nouns included in the text only for those classes whose elements adopt more than one form. Let us think, for example, of a text which had three classes of co-reference formed by the names [6]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
{
"text": "[(Madrid, 2)] [(José María Aznar, 1), (Aznar, 3)] [(Sumo Pontífice, 2), (Juan Pablo II, 2), (Karol Wojtyla, 1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
{
"text": "The first class refers to the entity in a single way; the second class includes two ways of referring to the entity and the third one three. For the evaluation of co-reference between nouns we consider that there are four references to the second entity and five to the third, which makes a total of nine nouns, although we know that the system is not prepared to identify as co-referential the elements included in the third class. So, the result of the evaluation would be: ACT=9, COR=6, MIS=3, SPU=3 because, of the nine elements subject to consideration (POS), the system has recognized nine (ACT): six of them correctly located in their respective classes (COR), three placed in spurious classes (SPU) and three missing from their correct classes (MIS). Briefly, we evaluate the resolution of co-reference between proper names without taking into account how the problem is solved by our algorithm; in this way the evaluation does not reflect the quality of the algorithm in calculating replicancia, but rather the resolution of co-reference between proper nouns, which is the aim pursued.",
"cite_spans": [
{
"start": 476,
"end": 482,
"text": "ACT=9,",
"ref_id": null
},
{
"start": 483,
"end": 489,
"text": "COR=6,",
"ref_id": null
},
{
"start": 490,
"end": 496,
"text": "MIS=3,",
"ref_id": null
},
{
"start": 497,
"end": 502,
"text": "SPU=3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIS -",
"sec_num": null
},
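The counts in this example can be checked with the scoring arithmetic of the previous section (POS = COR + MIS, ACT = COR + SPU); the formulas REC = COR/POS and PRE = COR/ACT are the standard MUC-style definitions, assumed here to match the truncated expressions in the annex:

```python
def scores(cor: int, mis: int, spu: int):
    """MUC-style counts: POS = COR + MIS, ACT = COR + SPU,
    REC = COR / POS, PRE = COR / ACT."""
    pos = cor + mis
    act = cor + spu
    rec = cor / pos if pos else 0.0
    pre = cor / act if act else 0.0
    return pos, act, rec, pre

# Worked example from the text: COR=6, MIS=3, SPU=3
pos, act, rec, pre = scores(6, 3, 3)
print(pos, act, round(rec, 3), round(pre, 3))  # 9 9 0.667 0.667
```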
{
"text": "The algorithm conceived does not calculate co-reference, but rather the replicancia between proper names. Replicancia and co-reference coincide neither extensionally nor intensionally. As a consequence, the referents of nouns linked by the replicancia relation are not necessarily bound by an identity relation, among other things because replicancia is not an equivalence relation, whereas co-reference is. The result is that one class of replicancia may contain names which refer to different entities and, conversely, may fail to contain names which refer to the entity associated with the class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Despite what has been said, as the calculation of replicancia is much simpler than the calculation of co-references, it is interesting, and even convenient, to calculate replicancia between nouns in limited contexts. Table I -1 in Annex I shows the results of the experiment. Under the line showing the total figures, we have added three lines with the average, median and mean deviation of the data obtained from each of the 100 documents analyzed. We can notice that the median takes values between eight and nine points above the average, which means that in most documents the co-reference between proper nouns is successfully decided. The negative burden comes from the variance, close to 25% of the total recall and precision values, which, along with the difference between average and median, leads us to think that most of the mistakes are concentrated in a few documents, in which precision and recall can be much smaller than expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table I",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In Figure 5 .1 we have represented the histogram of the values obtained for the F measure, the values included in the right column of table I-1. It can be noted that more than half of the documents subject to evaluation obtain an F measure over 0.9, and more than two thirds are over 0.8.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The statistical analysis of the system's quality measures is not enough to guarantee the representativeness of the sample. It is also necessary to analyze its composition. The 100 documents of the sample were obtained at random during January 1998. The five sources used are different but all of them are relevant newspapers. Moreover, we have selected news under two different sections: domestic and international. The diversity of the documents which form the sample and the homogeneity of the measures obtained strengthen the hypothesis of the results' validity. The mistakes detected are mainly due to two causes: the presence of co-referents which are not replicantes and the presence of replicantes which are not co-referents, as sometimes occurs with initials. Other errors, not attributable to the algorithm, are due to faults in the automatic system we have used for the extraction of proper nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "[2] We will use Courier New font to write the names of the predicates defined in Prolog which take part in the replicancia calculation algorithm. [3] The canonic form of a proper noun is its representation in small letters without accentuation. [4] In the last case the whole word P1 in the longer noun is assimilated to only the initial of P2. The rest of the letters in P2 must have some correspondence in the first noun so that we can consider both nouns as replicantes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "[5] When the names of these measures appear in arithmetical expressions, they must be taken as the cardinals of the respective classes. [6] Each name shows the number of times it appears in the relevant text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Nymble: a High-Performance Learning Name-finder",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, D., Miller, S., Schwartz, R. and Weischedel, R. (1998): \"Nymble: a High-Performance Learning Name-finder\". March 1998. In http://xxx.lanl.gov/ps/cmp-lg/9803003",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "MUC-7 Named Entity Task Definition. Dry Run Version. Version 3.5",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N. (1997): MUC-7 Named Entity Task Definition. Dry Run Version. Version 3.5. Sep. 1997. In http://www.muc.saic.com/",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Evaluating Message Understanding Systems: An Analysis of the Third Message Understanding Conference (MUC-3)\". Computational Linguistics",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N., Hirschman, L. and Lewis, D. (1993): \"Evaluating Message Understanding Systems: An Analysis of the Third Message Understanding Conference (MUC-3)\". Computational Linguistics, 1993, Vol. 19, No. 3.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Noun-Phrase Analysis in Unrestricted Text for Information Retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D. and Zhai, C. (1996): \"Noun-Phrase Analysis in Unrestricted Text for Information Retrieval\". May 1996. In http://xxx.lanl.gov/ps/cmp-lg/9605019",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "MUC-7 Coreference Task Definition. Version 3.0. July",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschman, L. (1997): MUC-7 Coreference Task Definition. Version 3.0. July 1997. In http://www.muc.saic.com/",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Call For Participation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Rau",
"suffix": ""
}
],
"year": 1990,
"venue": "Seventh Message Understanding System and Message Understanding Conference (MUC-7)",
"volume": "33",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacobs, P. and Rau, L. (1990): \"SCISOR: Extracting Information from On-line News\". Communications of the ACM, November 1990, Vol. 33, No. 11. MUC-7 CFP (1997): \"Seventh Message Understanding System and Message Understanding Conference (MUC-7). Call For Participation\". 1997. In ftp://ftp.muc.saic.com/pub/MUC/participation/call-for-participation",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adaptive Vector Space Text Filtering for Monolingual and Cross-Language Applications",
"authors": [
{
"first": "D",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oard, D. W. (1996): Adaptive Vector Space Text Filtering for Monolingual and Cross-Language Applications. PhD thesis, University of Maryland, College Park, August 1996. In http://www.ee.umd.edu/medlab/filter/papers/thesis.ps.gz",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building a Generation Knowledge Source using Internet-Accessible Newswire",
"authors": [
{
"first": "D",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radev, D. and McKeown, K. (1997): \"Building a Generation Knowledge Source using Internet-Accessible Newswire\". Feb. 1997. In http://xxx.lanl.gov/ps/cmp-lg/9702014",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Information Retrieval. Butterworths. London",
"authors": [
{
"first": "C",
"middle": [],
"last": "Rijsbergen",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rijsbergen, C. (1980): Information Retrieval. Butterworths. Londres, 1980.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Selective Text , Utilization and Text Traversal",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "43",
"issue": "",
"pages": "483--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G. and Allan, J. (1995): \"Selective Text , Utilization and Text Traversal\". Int. J. Human- Computer Studies (1995) 43, pp 483-497.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Name Searching and Information Retrieval",
"authors": [
{
"first": "P",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dozier",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thompson, P. and Dozier, C. (1997): \"Name Searching and Information Retrieval\". June 1997. In http://xxx.lanl.gov/ps/cmp-lg/9706017",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "code included. The documents come from five different newspapers, all of them spanish and available on electronic form. The newspapers, in alphabetical order, are: ABCe, El Mundo, E1 Pals Digital, El Perirdico On Line and",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Possible. The number of elements contained in the template that contribute to the final scoring. It is calculated through the following expression: POS = COR + MIS ACT -Actual. The number of elements included in the system's response. It is calculated as follows: ACT = COR + SPU Once we have gathered the data relative to the classes of responses, we are prepared to calculate the measures of the system's quality. The metrics chosen are the ones normally used in Information Retrieval systems: REC -Recall. A quantity measure of the elements of the answer-key included in the response of the system. Its expression is: Precision. A measure of the quantity of elements of the response included also in the template. Its expression is:",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Histogram of the values ofFmeasureThe last five linles of table I-1, Annex I, register the total results distributed according to the documents origin.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "replicante 2 to the predicate which resolves the replicancia relation. Replicancia is a diadic, reflexive, simetric, but not necessarily transitive relation. It is reflexive because every noun is a replicante of itself; simetric because if noun A is a replicante of noun B, noun B will also be a replicante of noun A; it is not transitive because the replicancia relation is established attending only to the nouns form, not to their referents, and it is possible that the same noun denotes two or more entities depending on its context of utilization. For example, let us say that noun A designates to Jos6 Salazar Fernfindez, noun B to JSF and name C to Juan Sfinchez Figueroa; in this case A is a replicante of B and B a replicante of C, but A is not a replicante of C. do not ask the predicate replicante to recognize as replicantes the pseudonyms, diminutives, nicknames, abreviations and familiar names, although the most frequent tokens can be manually introduced as if they had previously been learned by the system. Two proper nouns are replicantes when any of the following assumptions is satisfied: a) Both nouns or their canonic forms are identicaP, as in Josd Maria Aznar and Jose Maria Aznar. One of them coincides with the initials of the other, be it in singular as in Uni6n Europea and UE, or in plural as Estados Unidos and EE UU. We also admit nouns with nexus as in the case of Boletin Oficial del Estado and BOE. The shorter noun is contained in the longer one and among the lexemes not shared there are no nexes. Under this rule it is admitted that Julio P6rez de Lucas is a replicante of Pgrez de Lucas; nevertheless, Caja de Madrid is not a replicante of Madrid because the part not shared, Caja de, includes the nexus de. 
d) Every word of the shorter noun, N2, is a version of some word included in the longer noun, N1, in the same relative order, although there can be words in N1 which have no version inN2.A word P2 is a version of another word P1 when their canonic forms are identical, when P2 is the initial of P1 or when the initials of both coincide 4. According to this fourth rule, the noun JM Pacheco is a replicante of Juan Manuel Pacheco Fern6ndez; to verify it we compare the lexemes of the longer noun, one by one, with the lexemes which form the second name. In the comparison of Juan with JM Pacheco we identify the letter J as a version of Juan, after which remains M Pacheco as a rest. In the comparison between Manuel and M Pacheco we identify the letter M as a version of Manuel, after which remains Pacheco as the new rest that can be identified with the same lexeme in the first noun. As the new rest is empty, we decide that both names are replicantes of eachother although in the first noun there are lexemes left unmatched.This definition of replicante makes it possible to identify the following list of nouns as",
"content": "<table><tr><td>We replicantes of the first:</td></tr><tr><td>[Jos~ Luis Martfnez L6pez, JL Marffnez, J.L.</td></tr><tr><td>Martfnez, J Martfnez, Luis Martlnez, Jos~</td></tr><tr><td>Marffnez, Martfnez, JL M L]</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF1": {
"text": "1. Summary of proper nouns core terence analysis",
"content": "<table><tr><td colspan=\"6\">DOCUMENT POS ACT COR MIS SPU</td><td colspan=\"2\">REC PRE</td><td>F</td></tr><tr><td>AB1501</td><td>9</td><td>8</td><td>7</td><td>2</td><td>1</td><td colspan=\"2\">77,8 87,5 82,4</td></tr><tr><td>ZEDILLO</td><td>20</td><td>20</td><td>18</td><td>2</td><td>2</td><td colspan=\"2\">90,0 90,0 90,0</td></tr><tr><td>TOTAL</td><td colspan=\"5\">1523 1512 1296 227 216</td><td colspan=\"2\">85,1 85,7 85,4</td></tr><tr><td>Mean</td><td>15</td><td>15</td><td>13</td><td>2</td><td colspan=\"3\">2! 84,6 86,2</td><td>85,0</td></tr><tr><td>Median</td><td colspan=\"2\">12 12,5</td><td>10</td><td>1</td><td colspan=\"2\">1 93,3</td><td>94</td><td>93,33</td></tr><tr><td>Std. Deviation</td><td>12,93</td><td>13</td><td colspan=\"2\">12 3,3</td><td colspan=\"3\">3,3 20,2 18,6</td><td>19,19</td></tr><tr><td>ABC</td><td>362</td><td>352</td><td>288</td><td>74</td><td colspan=\"3\">64 79,6 81,8</td><td>80,7</td></tr><tr><td>El Mundo</td><td>181</td><td>181</td><td>158</td><td>23</td><td colspan=\"3\">23' 87,3 87,3</td><td>87,3</td></tr><tr><td>El Pals</td><td>434</td><td>432</td><td>377</td><td>57</td><td colspan=\"3\">55 86,9 87,3</td><td>87,1</td></tr><tr><td>El Peri6dico</td><td>350</td><td>354</td><td>289</td><td>61</td><td colspan=\"3\">65 82,6 81,6</td><td>82,1</td></tr><tr><td>La Vanguardia</td><td>196</td><td>193</td><td>184</td><td>12</td><td colspan=\"3\">9 93,9 95,3</td><td>94,6</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "5_ ..... .1.0_ .... 1.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ............ J_-'. .... 9. ............. 6 .... .4. 0_ .... 1_2_ .... _1_2. \" q ..... _5 ..... 5_ . _ 5_ .8 , 3_ _ _5_ _8, ~ 5.5 .... _36 .... _37. ___2, .... !Q___!_1._7_2_,2__7_02.7__1,7 __J2 .... ! ..... 1_ .... ! ...... -]4 .... i~ .... i .... 6 .... ~_0_ ..... 9 ..... _9. ......... Q ..... Q ._tO_O_.95_ .... J9 .... !9. .... !",
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"7\">between proper names resolution 7</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"7\">D IpONIACTICORIMtSI,gptTIrF.ClprF.</td><td>F u</td></tr><tr><td/><td>8</td><td>7 24</td><td>2 7</td><td>1 77.8 87.5 82.,1 5 77,4 82,~ 80,C</td><td>1 2</td><td>12 10</td><td colspan=\"2\">121 1~</td><td/><td>q c</td><td>3 1</td><td>75.0 75.0 '5~ 90,0 90,0 t0.0</td></tr><tr><td/><td>9</td><td>8</td><td>1</td><td>1 88,9 885 88S</td><td>3 4</td><td>6 23</td><td>6 23</td><td>I</td><td colspan=\"2\">' 2~ \"</td><td>1</td><td>83,3 83,3 t3,31 100 1013 100</td></tr><tr><td/><td>1</td><td>1</td><td>3</td><td>] 13 25,0 100 40.( ___</td><td colspan=\"7\">1 5_ .... _1_o_ ..... _91 ..... ~ ---ii---i: 6 20 201 ~</td><td>90,0 10C )4,71 ....... -~6~ io.o 40.0</td></tr><tr><td/><td/><td/><td/><td/><td>7</td><td>19</td><td colspan=\"4\">17' 1:7</td><td>63,2 70,( 56.7i</td></tr><tr><td/><td/><td>7</td><td>4</td><td>3 63,6 70,0 66,7 I</td><td>8</td><td>27</td><td colspan=\"4\">271 2z</td><td>3</td><td>3! 88,9 88,9 ~8.91</td></tr><tr><td/><td/><td/><td>0</td><td>13' 100 100 100</td><td>9</td><td>6</td><td colspan=\"2\">6</td><td/><td>(</td><td>0</td><td>0 100 10( 100</td></tr><tr><td/><td/><td colspan=\"3\">J#. ..... ! ..... 1.9_9.,9.. 99,9_. 9_0,9_</td><td colspan=\"7\">. 
8,3!</td></tr><tr><td/><td/><td colspan=\"3\">7~ 14 13 12,5 13,3 12,9</td><td>1</td><td>15</td><td colspan=\"2\">16</td><td/><td>(</td><td>9 lO 40,0 37,5 38,7</td></tr><tr><td/><td/><td>15</td><td>0</td><td>\u00a3 100 100 100</td><td>2</td><td>2</td><td colspan=\"2\">2</td><td/><td>;</td><td>0</td><td>0 100 100 100</td></tr><tr><td/><td/><td>10</td><td>2</td><td>~ 83,3 83,3 83,3</td><td>3</td><td>6</td><td colspan=\"2\">6</td><td/><td>\u00a2</td><td>0</td><td>0 100 100 100</td></tr><tr><td/><td/><td>4</td><td>3</td><td>1 57,1 80,13166,7</td><td colspan=\"2\">.41 29</td><td colspan=\"2\">29</td><td/><td>2(</td><td>9</td><td>9 69,0 69,0 59,0</td></tr><tr><td colspan=\"2\">1_5! ..... _4_ ..... _4</td><td colspan=\"3\">..4 .... 9. .... O_._J_OP___!_O_~ __1_0_0</td><td colspan=\"2\">.5 ~ 32</td><td colspan=\"2\">32</td><td/><td>3:</td><td>0</td><td>13 100 100 100</td></tr><tr><td>16</td><td>5</td><td>5</td><td>0</td><td>0 100 1013 1013</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">17 18 19 .29 ..... !_8_ .... 1._8 15 15 25 25 5 4</td><td colspan=\"3\">15 12 13 13 48,0 48\u00a3 48\u00a3 0 0 100 lOC 1013 2 3 2 40,0 50\u00a3 44A 18 0 13 100 10( 10(</td><td colspan=\"6\">I 'i 151 ' _0__]__. _2_3_ .... _2_3] 2 ;9 [ 14</td><td>8 2</td><td>95,5 95,5 95,5 42,9 40,t3 41,4 91,3 91,3 91,3</td></tr><tr><td>21 22 23 i 24</td><td>21 5 9 18 20 21 5 9</td><td colspan=\"3\">~/i --6 .... 6---i66-ii56 5 0 13 100 100 10( -iiS~ 0 el 100 100 10( 1\"~ 1 3 94,4 85,0 89,.*</td><td>'1 / '2 r3 '4</td><td>35 7 7 21</td><td colspan=\"3\">351--J: 7 8 20</td><td colspan=\"2\">.... 2 ....... 0 0 11 11 10 47,6 50,( 48,~ 9~f,-~--9L~ ~:,,~-0 100 IOC 10( 1 100 87,5 93,2</td></tr><tr><td colspan=\"2\">.2_5_. ~ ..._2_4... 
2_4 26 12 13 27 17 17 28 19 19 29 4 4</td><td>1( 17 18 3</td><td>2 0 1 1</td><td>.83\u00a3 3 83,3 76,9 80,( \u00a3 100 100 10( 1 94,7 94,7 94,~ 1 75,0 75,13 75,(</td><td>Y6 Y7 v8 T9</td><td>30 50 30 26</td><td colspan=\"2\">30 50 28 26</td><td/><td>2 4 2 2</td><td>2 3 2 1</td><td>2 93,3 93,3 932 3 94,0 94,0 94,( 13 93,3 100 96,( 96,2 96,2 965</td></tr><tr><td colspan=\"2\">39 ...... _5_ ..... 31 15 15 32 15 15</td><td colspan=\"3\">_.3 .... 2_ .... _2_ . .6_ 02_ _ 69 g 60_,( 15 0 0 100 IOC 10( 15 0 0 100 lOC lO(</td><td>~1 ~2</td><td>2 12</td><td colspan=\"2\">2 12</td><td/><td>1</td><td>0 0</td><td>9!,7...J_0_0 9_5_,'_, C lO0 lO0 lO( C 100 100 10(</td></tr><tr><td colspan=\"2\">33 34 3_5_ ..... .4_4____4__5. 22 24 15 14</td><td colspan=\"3\">15 13 3#_ .... _9_ _ . . _1_0_ . .7 9~_5. . 7_7_ , ~. 7_8_ ~ 7 9 68,2 62,5 655 2 1 86,7 92,9 89,;</td><td>~4 g5</td><td>2</td><td>2</td><td/><td/><td>1</td><td>1</td><td>75,0 75,13 75,( 66,7 66,7 66,;</td></tr><tr><td colspan=\"4\">36 37 38 39 .49..__1._5 .... 1__5 31 32 48 48 8 8 29 27 41 42 43 44 4..e ~6 ~7 ~8 ~9 50. 29 3( 16 18 19 19i. 11( ~ 26 47 8 1~ 10 5 1 0 1( 0 4 19 19 3 18 18 16 2 11 8 8 3 5 ~ 4 1 20 2( 20 0 7 7 6 1 29 30. 27. 2</td><td>6 83,9 81,2 822 1 97,9 97, g 97,! 13 100 100 101 8 65,5 70,4 672 ._ JO_O__. l.O_O., l_OJ 2 100 88,9 94, 4 78,9 78,9 78,! 3 84,2 84,2 84,1 2 88,9 88,9 88,! .6_0,9.. 69,0._690 ( 72.7 1013 84.: 1' 80,0 80,13 80,1 0 100 1013 101 1 85,7 85,7 85, 3. 93,1 90,\u00a3.91,</td><td colspan=\"7\">i# -( --ibO---fOC -]-Oi g7 13 13] 0 ( 100 lOC 10( g8 4 Z 2 ( 50,0 IOC 66,\" g9 8 8 ~ 0 0 100 10\u00a3 101 . lgf _ Jg! 0 0 100 10( 101 91 6 6 0 0 100 100 101 92 4 4 0 0 100 100 101 93 2 2 0 0 100 100 101 94 17 17 1 .... Q_ .... 0 l_O_O___l_O__O __l_O_q 0 13 100 100 10~ 0 96 8 8 13 100 100 101 97 16 16 1 0 13 100 100 101 98 2 2 0 C 100 1013 101 99 5 2 90,0 90,13i 90,' 10t 20 2 1</td></tr><tr><td colspan=\"5\">7 Each line stands for a docucment</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}