{
"paper_id": "Y02-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:43:38.519496Z"
},
"title": "Heuristic-based Korean Coreference Resolution for Information Extraction",
"authors": [
{
"first": "Euisok",
"middle": [],
"last": "Chung",
"suffix": "",
"affiliation": {},
"email": "eschung@etri.re.kr"
},
{
"first": "Soojong",
"middle": [],
"last": "Lim",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bo-Hyun",
"middle": [],
"last": "Yun",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The information extraction is to delimit in advance, as part of the specification of the task, the semantic range of the output and to filter information from large volumes of texts. The most representative word of the document is composed of named entities and pronouns. Therefore, it is important to resolve coreference in order to extract the meaningful information in information extraction. Coreference resolution is to find name entities co-referencing real-world entities in the documents. Results of coreference resolution are used for name entity detection and template generation. This paper presents the heuristic-based approach for coreference resolution in Korean. We constructed the heuristics expanded gradually by using the corpus and derived the salience factors of antecedents as the importance measure in Korean. Our approach consists of antecedents selection and antecedents weighting. We used three kinds of salience factors that are used to weight each antecedent of the anaphor. The experiment result shows 80% precision.",
"pdf_parse": {
"paper_id": "Y02-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "The information extraction is to delimit in advance, as part of the specification of the task, the semantic range of the output and to filter information from large volumes of texts. The most representative word of the document is composed of named entities and pronouns. Therefore, it is important to resolve coreference in order to extract the meaningful information in information extraction. Coreference resolution is to find name entities co-referencing real-world entities in the documents. Results of coreference resolution are used for name entity detection and template generation. This paper presents the heuristic-based approach for coreference resolution in Korean. We constructed the heuristics expanded gradually by using the corpus and derived the salience factors of antecedents as the importance measure in Korean. Our approach consists of antecedents selection and antecedents weighting. We used three kinds of salience factors that are used to weight each antecedent of the anaphor. The experiment result shows 80% precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information extraction(IE) systems take texts containing natural language as input and produce database templates relevant to a particular application. IE system must create templates describing the relevant entities that are reported on. This requires determining when two or more templates describe the same entity, as templates created from conferencing words to be merged. Thus, it is difficult to resolve coference in order to extract more reliable information. Results of coreference resolution are used for the clue of template generation. In coreference resolution, there are two kinds of problems such as anaphora resolution and name aliases recognition. The name aliases could be resolved by lexical pattern matching or synonym dictionary (Huyck, C. (1998) . Fukumoto, J., Masui (1998) ). However, the anaphora resolution has more complexities of natural language. It has been studied conservatively in the discourse part of natural language processing. Recently, several proposals addressed that using limited knowledge is better than using heavy linguistic and domain knowledge (Lappin, S. and Leass, H. (1994) . Baldwin, F. B. (1995) . Mitmov, R. (1998) ).",
"cite_spans": [
{
"start": 749,
"end": 766,
"text": "(Huyck, C. (1998)",
"ref_id": "BIBREF4"
},
{
"start": 769,
"end": 795,
"text": "Fukumoto, J., Masui (1998)",
"ref_id": "BIBREF2"
},
{
"start": 1090,
"end": 1122,
"text": "(Lappin, S. and Leass, H. (1994)",
"ref_id": "BIBREF6"
},
{
"start": 1125,
"end": 1146,
"text": "Baldwin, F. B. (1995)",
"ref_id": "BIBREF1"
},
{
"start": 1149,
"end": 1166,
"text": "Mitmov, R. (1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents the heuristic-based approach with limited knowledge such as pattern rules, preference rules, and conditional rules. The resolution procedure is to find antecedents and then to evaluate the weight of antecedents with the heuristics. In this paper, we focus on the anaphora resolution. In Korean, an anaphora consists of 'pronoun' and 'demonstrative pronoun + noun phrases' .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many researches have been performed to solve the problem of coreference resolution. One is the research of anaphora resolution which is based on the discourse theory such as centering theory (Lappin, S. and Leass, H. (1994) . Baldwin, F. B. (1995) . Mitmov, R. (1998) ), another is the information extraction system in order to apply to MUC(message understanding conference) (Huyck, C. (1998) . Yangarber, R. and Grishman, R. (1998) . Urbanowicz, R. A. and Nettleton, D. J. (1998) . Humphreys,K., Gaizauskas, R. Azzam, S., Huyck, C., Mitchell, B., Cunningham, H. and Wilks, Y. (1998) . Lin, D. (1998) . Fukumoto, J., Masui, F., Shimohata, M., and Sasaki, M. (1998) . Aone, C., Halverson, L., Hampton, T., and Ramos-Santacruz, M. (1998) ). With a view of the methodology, it could be divided with rule-based approaches using limited knowledges such as lexical patterns (Huyck, C. (1998) . Fukumoto, J., Masui, F., Shimohata, M., and Sasaki, M. (1998) ) and heuristics (Lappin, S. and Leass, H. (1994) . Baldwin, F. B. (1995) . Mitmov, R. (1998) ), knowledge based approach using semantic network (Yangarber, R. and Grishman, R. (1998) . Humphreys,K., Gaizauskas, R. Azzam, S., Huyck, C., Mitchell, B., Cunningham, H. and Wilks, Y. (1998)) , and Hybrid approaches which integrate knowledge based and machine learning approaches (Urbanowicz, R. A. and Nettleton, D. J. (1998) . Lin, D. (1998) ). Another is statistical approach (Kehler, A. (1997) ). In-the case of coreference resolution, the statistical approach is rare since the phenomenon of coreference is generally inter-sentential problem than intra-sentential thing. Thus, it makes the statistical modeling of coreference very difficult.",
"cite_spans": [
{
"start": 191,
"end": 223,
"text": "(Lappin, S. and Leass, H. (1994)",
"ref_id": "BIBREF6"
},
{
"start": 226,
"end": 247,
"text": "Baldwin, F. B. (1995)",
"ref_id": "BIBREF1"
},
{
"start": 250,
"end": 267,
"text": "Mitmov, R. (1998)",
"ref_id": "BIBREF8"
},
{
"start": 375,
"end": 392,
"text": "(Huyck, C. (1998)",
"ref_id": "BIBREF4"
},
{
"start": 395,
"end": 432,
"text": "Yangarber, R. and Grishman, R. (1998)",
"ref_id": "BIBREF10"
},
{
"start": 435,
"end": 480,
"text": "Urbanowicz, R. A. and Nettleton, D. J. (1998)",
"ref_id": "BIBREF9"
},
{
"start": 483,
"end": 583,
"text": "Humphreys,K., Gaizauskas, R. Azzam, S., Huyck, C., Mitchell, B., Cunningham, H. and Wilks, Y. (1998)",
"ref_id": "BIBREF3"
},
{
"start": 586,
"end": 600,
"text": "Lin, D. (1998)",
"ref_id": "BIBREF7"
},
{
"start": 603,
"end": 664,
"text": "Fukumoto, J., Masui, F., Shimohata, M., and Sasaki, M. (1998)",
"ref_id": "BIBREF2"
},
{
"start": 667,
"end": 735,
"text": "Aone, C., Halverson, L., Hampton, T., and Ramos-Santacruz, M. (1998)",
"ref_id": "BIBREF0"
},
{
"start": 868,
"end": 885,
"text": "(Huyck, C. (1998)",
"ref_id": "BIBREF4"
},
{
"start": 888,
"end": 949,
"text": "Fukumoto, J., Masui, F., Shimohata, M., and Sasaki, M. (1998)",
"ref_id": "BIBREF2"
},
{
"start": 967,
"end": 999,
"text": "(Lappin, S. and Leass, H. (1994)",
"ref_id": "BIBREF6"
},
{
"start": 1002,
"end": 1023,
"text": "Baldwin, F. B. (1995)",
"ref_id": "BIBREF1"
},
{
"start": 1026,
"end": 1043,
"text": "Mitmov, R. (1998)",
"ref_id": "BIBREF8"
},
{
"start": 1095,
"end": 1133,
"text": "(Yangarber, R. and Grishman, R. (1998)",
"ref_id": "BIBREF10"
},
{
"start": 1136,
"end": 1237,
"text": "Humphreys,K., Gaizauskas, R. Azzam, S., Huyck, C., Mitchell, B., Cunningham, H. and Wilks, Y. (1998))",
"ref_id": "BIBREF3"
},
{
"start": 1326,
"end": 1372,
"text": "(Urbanowicz, R. A. and Nettleton, D. J. (1998)",
"ref_id": "BIBREF9"
},
{
"start": 1375,
"end": 1389,
"text": "Lin, D. (1998)",
"ref_id": "BIBREF7"
},
{
"start": 1425,
"end": 1443,
"text": "(Kehler, A. (1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "The approach of using limited rules does not depend on the massive linguistic knowledge or domain knowledge but it only depends on the simple heuristics. The general resolution procedure is the selection of antecedent candidates, the ranking of the candidates, and the decision of the candidate of an anaphoric word. In each step, it uses the empirical heuristics. The typical heuristic-based approach is the dynamic coreference model generation such as (Lappin, S. and Leass, H. (1994) ). It divides the antecedents into the intra and inter sentential types, and evaluates the salience factors of the antecedents. The approach shows 86% precision in the computer manual domain.",
"cite_spans": [
{
"start": 454,
"end": 486,
"text": "(Lappin, S. and Leass, H. (1994)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Another heuristic-based research is the file card approach. It makes first discourse model of the anaphora and then resolves the model by the file card operation such as antecedents grouping, deleting and weighting with the morphological and syntactic analysis. It shows the 73% precision in the Wall Street Journal (Baldwin, F. B. (1995) ). The best report, 89.7% precision, is the approach that uses the antecedent indicator which is antecedents weighting types (Kehler, A. (1997) ). It depends on the heuristics and syntactic pattern of the anaphora contexts. However, in MUC, the heuristic approaches did not report affirmative result (Huyck, C. (1998) . Fukumoto, J., Masui, F., Shimohata, M., and Sasaki, M. (1998) ).",
"cite_spans": [
{
"start": 316,
"end": 338,
"text": "(Baldwin, F. B. (1995)",
"ref_id": "BIBREF1"
},
{
"start": 464,
"end": 482,
"text": "(Kehler, A. (1997)",
"ref_id": "BIBREF5"
},
{
"start": 639,
"end": 656,
"text": "(Huyck, C. (1998)",
"ref_id": "BIBREF4"
},
{
"start": 659,
"end": 720,
"text": "Fukumoto, J., Masui, F., Shimohata, M., and Sasaki, M. (1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "According to the previous researches, the heuristic-based approaches presented good results. Therefore, we use the approach. Furthermore, since a coreference resolution is the part of information extraction and the independence and conciseness of the module is mostly important, the approach of the using limited knowledge is appropriate for the information extraction system. Only the half of the cases have the antecedent. Therefore, we should divide it into the formal and informal phenomena. The coreference resolution depends on the name entity since the resolution procedure is the part of the information extraction system and is following the name entity recognition step. Therefore, the antecedents of the coreference are the name entities. In table 1, the formal coreference is the ordinary case, but the informal coreference could not be resolved with the formal approach. Thus, the informal coreference resolution approach is different with the formal approach. In this paper, we focus on the formal phenomenon, but we consider the informal coreference resolution, too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "This paper presents the heuristic-based approach for coreference resolution in Korean. It is similar to the previous approaches in the viewpoint of resolution procedure (Lappin, S. and Leass, H. (1994) . Baldwin, K B. (1995) . Mitmov, R. (1998) ). However, we devised the heuristics to expand gradually by using the corpus and the derived salience factors of antecedents in Korean.",
"cite_spans": [
{
"start": 169,
"end": 201,
"text": "(Lappin, S. and Leass, H. (1994)",
"ref_id": "BIBREF6"
},
{
"start": 204,
"end": 224,
"text": "Baldwin, K B. (1995)",
"ref_id": null
},
{
"start": 227,
"end": 244,
"text": "Mitmov, R. (1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution Based on Limited Knowledge",
"sec_num": "4"
},
{
"text": "The procedure for coreference resolution has three essential steps. First is the name entity recognition which is performed in the name entity module of information extraction system. It recognizes the antecedents and anaphora. In this paper, we don't described this process. Second, antecedents selection process is to select antecedent list per each anaphor. It consists of two steps such as antecedents grouping and eliminating using lexical patterns and disused lexical list. Finally, in the antecedent weighting process, each antecedent is weighted with salience factors. The most weighted antecedent, summation of all weights, is selected as the result. This steps is described in table 2. Table 2 . the coreference resolution steps",
"cite_spans": [],
"ref_spans": [
{
"start": 696,
"end": 703,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Resolution Procedure",
"sec_num": "4.1"
},
{
"text": "The heuristics is derived empirically from the training data. First, we find the anaphora candidates and name entities. In each anaphor, we choose the features which is the criteria to select antecedent. The features is derived and is structured. It could make us to find the salience factors and selectional restrictions to the antecedents in Korean. Furthermore, the count of features is used as the weight of each salience factor. With this analysis, we devised the coreference resolution approach for the antecedents selection and antecedents weighting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics for Coreference Resolution",
"sec_num": "4.2"
},
{
"text": "The antecedents selection process consists of two steps, antecedents grouping and antecedents eliminating. The antecedent grouping is applied when splitting antecedents exists per the anaphor. Therefore, the antecedents should be grouped to link up with the anaphor. For example, the antecedents hanguk, ilbon, U.S., Korea, Japan)\" could be grouped to the anaphor \"01 LI-245(i naradeul, these nations)\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics for Coreference Resolution",
"sec_num": "4.2"
},
{
"text": "The antecedents eliminating is applied to exclude an anaphora that has not the antecedent obviously. It usually occurred when the antecedent is not kind of name entity. We found the lexical items that could be clues to refer to sentences or events such as \"n z1 k(greohan, like that)\", \"ntl (greon, like that)\", etc. In figure 1, antecedents grouping and anaphor eliminating is represented. We described antecedents group as NE-SET. NE-SET create by the syntactic pattern such as parallel structure and the derived NE-SETs merge by the inner common NEs. Then, anaphor eliminating is processed. Exactly, it is to filter informal anaphoric lexical items such as \"naital (greohan, like that)\", \".D. (greon, like that)\". The antecedent weighting step is processing by using heuristics. We derived three kinds of salience factors that is to weight each antecedent of the anaphor. In table 3, the heuristics for the antecedent weighting is described. The morphological pattern rules using the similarity between antecedent and anaphor have three types of patterns such as case marker, affix and partial lexical item. The weight is determined in each pattern whether matched or not. The preference rules represent antecedent features as the sentence constituents such as subject and object, the distance from the anaphor and the frequency of antecedent itself. The constituents factor is from the centering theory. Thus, if the antecedent is subject or object, it could be topic word and it has a possibility of the antecedent. The conditional rules are used in the antecedents selection step. Thus, if an anaphor is person type, only the antecedents of person type is selected above all. In this paper, we use name entity types for the semantic compatibility such as person, location, organization, artifact and titile. In the following we shall illustrate them by examples. (-ga(-i) , subject marker)' and `-2-(-eul, object marker), etc. 
If the anaphor and antecedent have the same case marker, it is possible to consider them as coreference. n antecedent : 501(monggol-in-deul-i, Mongolians)\" n anaphor \"n50 (geudeul-i, they)\" -Affix : Number(sl or pl) is determined by suffixes such as \"--2-(-2)(deul(eul), -s(es)). If antecedent and anaphor have same suffix, we regard them as the same. However, this is not absolute rule since there are many cases to conflict with the rule even if they are discorded. 3 Positive case: n antecedent : \" g -E-21 501(monggol-in-deul-i, Mongolians)\" n anaphor : 5 0 (g-deul-i, they)\" 3 Negative case: n antecedent: \" g -a \u00b0I 0 I (monggol gukmin-i, Mongolians)\" n anaphor: \"D. 5 0 I (geudeul-i, they)\"",
"cite_spans": [
{
"start": 1869,
"end": 1877,
"text": "(-ga(-i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics for Coreference Resolution",
"sec_num": "4.2"
},
{
"text": "-Partial lexical item : Korean is an agglutinative language, thus the compound noun is overflowed. Therefore, it needs the partial lexical matching. n antecedent: \"g z 21 5 0 I (monzzol-gukmin-deul-i, Mongolians)\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics",
"sec_num": null
},
{
"text": "n anaphor: \"D. g z a! 5 g (geu monkkol-in-deul-en, the Mongolians)\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics",
"sec_num": null
},
{
"text": "\u2022 Preference rules -Subject : this is similar to English. If the antecedent is subject or object, it gains more weight. Thus, if the antecedent is subject, the word could be the topic in text. It means that the topic is possible to be antecedent. -Frequency : the most important word is usually repetitive in text. This could be topic word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics",
"sec_num": null
},
{
"text": "The antecedent having high frequency gains more weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics",
"sec_num": null
},
{
"text": "\u2022 Conditional rules -Reflexive pronoun : if the anaphor is reflexive pronoun, the recency salience factor is more important, since there is no inter-sentential case. -Syntactic pattern : this is based on the similarity of syntactic pattern between the contexts of anaphor and antecedents such as \"ANT and x ... ANA and y 4 ANT = ANA\" -Category restriction : this is semantic compatibility. It could be determined by name entity module. n antecedent : ?.; AI /PERSON (kim-su-hee-ssi, person name) n anaphor : g/PERSON (geu saram, the man) Figure 2 . antecedents weighting and selection",
"cite_spans": [],
"ref_spans": [
{
"start": 538,
"end": 546,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Heuristics",
"sec_num": null
},
{
"text": "In figure 2, antecedents weighting and selection is described. After antecedent grouping and anaphor eliminating, the coreference resolution uses heuristics to determine the appropriate antecedent. First, it list up the antecedents candidates considering semantic compatibility with conditional rules. Then, informal antecedents such as common noun is added to the antecedents by using context lexical patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics",
"sec_num": null
},
{
"text": "Antecedent Selection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Candidates Selection",
"sec_num": null
},
{
"text": "The context lexical pattern is the lexical items surrounding the name entity and anaphor such as trigram or bigram lexical patterns. Finally, antecedent weighting is processed by salience factors that are morphological pattern rules and preference rules. After the weighting, the most weighted antecedent selection is the coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Work",
"sec_num": null
},
{
"text": "This paper devised the method to extend heuristics iteratively for coreference resolution. In figure 3, workbench for iterative heuristic acquisition is described. Coreference Heuristic Extractor has a role to derive heuristics empirically. The context buffer is used for thedata structure in coreference module. It maintains the intermediate results of coreference resolution such as NE-SET and antecedents candidates. In the experiment, we trained our heuristics with 78 articles having 138 anaphoric lexical items. The domains of the articles are economy and performance. We tagged name entities and coreferences in the articles. In this evaluation, we excluded the temporal anaphora and non-referential anaphora. The average of antecedents per the anaphor is 12.45. We tried to test the coreference resolution with only heuristics. Thus, the tagged article has already name entity group and anaphora selection. Therefore, the test depends on the salience factors' weight and conditional rule usage. First test use the conditional rule that is category restriction such as type matching between antecedent and anaphor. Second test don't use it. The result of both tests is described in table 4. We can resolve the coreference at 80% precision even if we use only simple Heuristics. However, we assumed the name entity type matching. If we developed a coreference module, the result would lower a little since the semantic compatibility could not be implemented completely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "From the evaluation, we found that the heuristics is not general. In table 4, the weights of test I and test2 are different. Thus, the best result of one weight is not in the another test. We should change the weight to get the best precision. The source of the trouble is the small set of the training data. However, the coreference phenomenon is very sparseness. It is the enormous work to construct appropriate test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "In addition, we could not find the reflexive pronoun and syntactic pattern matched coreference. This is also the problem of data sparseness, and the feature of Korean would be the source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "In comparison with foreign research, 89.5% precision, we cannot achieve the better result. The reason of that is first, we use only compact heuristic rules. Second, the training data is too small to cover the general coreference phenomenon. In the future, we try to reduce the gap of the performance. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "This paper presents the heuristic-based approach for coreference resolution in Korean. We organized the heuristics to be expanded gradually using the corpus and derived the salience factors of antecedents in Korean. The coreference resolution approach consists of antecedents selection and antecedents weighting. We derived three kinds of salience factors that are used to weight each antecedent of the anaphor. The experiment result shows 80% precision. In the future work, we will consider the temporal coreference and discourse structure. These are the main causes of the coreference method now. The massive heuristic training and experiment will be processed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SRA: Description of the 1E2 System used for MUC-7",
"authors": [
{
"first": "C",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Halverson",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hampton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ramos-Santacruz",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aone, C., Halverson, L., Hampton, T., and Ramos-Santacruz, M. 1998. \"SRA: Description of the 1E2 System used for MUC-7,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "CogNIAC: A discourse processing engine, a dissertation in computer and information science",
"authors": [
{
"first": "F",
"middle": [
"B"
],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baldwin, F. B. 1995. CogNIAC: A discourse processing engine, a dissertation in computer and information science.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Old Electric Industry: Description of the Old System as Used for MUC-7",
"authors": [
{
"first": "J",
"middle": [],
"last": "Fukumoto",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Masui",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Shimohata",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fukumoto, J., Masui, F., Shimohata, M.; and Sasaki, M. 1998. \"Old Electric Industry: Description of the Old System as Used for MUC-7,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "University of Sheffield : Description of the LaSIE-II system as used for MUC-7",
"authors": [
{
"first": "K",
"middle": [],
"last": "Humphreys",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Azzam",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Huyck",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Humphreys,K., Gaizauskas, R. Azzam, S., Huyck, C., Mitchell, B., Cunningham, H. and Wilks, Y. 1998. \"University of Sheffield : Description of the LaSIE-II system as used for MUC-7,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Description of the American University in Cairo's System Used for MUC-7",
"authors": [
{
"first": "C",
"middle": [],
"last": "Huyck",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huyck, C. 1998. \"Description of the American University in Cairo's System Used for MUC-7,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Probabilistic Coreference in Information Extraction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (SIGDAT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kehler, A. 1997. \"Probabilistic Coreference in Information Extraction,\" In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (SIGDAT).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An algorithm for pronominal anaphora resolution",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "535--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lappin, S. and Leass, H. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535-561.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using collocation statistics in information extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference (MUC-7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. 1998. \"Using collocation statistics in information extraction,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust pronoun resolution with limited knowledge",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mitmov",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th International Conference on Computational Linguistics, COLING-98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitmov, R. 1998. \"Robust pronoun resolution with limited knowledge,\" In Proceedings of the 17th International Conference on Computational Linguistics, COLING-98, Montreal.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "University of Durham: Description of the LOLITA system as used in MUC-7",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Urbanowicz",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Nettleton",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference (MUC-7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urbanowicz, R. A. and Nettleton, D. J. 1998. \"University of Durham: Description of the LOLITA system as used in MUC-7,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "NYU: Description of the Proteus /PET system as used for MUC-7 ST",
"authors": [
{
"first": "R",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference (MUC-7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangarber, R. and Grishman, R. 1998. \"NYU: Description of the Proteus /PET system as used for MUC-7 ST,\" In Proceedings of the Seventh Message Understanding Conference (MUC-7), Columbia, MD.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "antecedents grouping and anaphor eliminating"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Workbench for Iterative Heuristic Acquisition"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">using antecedents selectional</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">restriction rule antecedent</td><td/><td/><td/><td>Informal Coreference(antecedent is</td></tr><tr><td/><td>candidates list up</td><td/><td/><td/><td/><td>not NE) : using context lexical patterns</td></tr><tr><td/><td/><td/><td/><td/><td/><td>antecedents addition . . .</td></tr><tr><td/><td>MUM 110 araiiiMIIIIIIn 0</td><td/><td/><td/><td/><td>11110=11111111111111111 MIK\u00b0A</td></tr><tr><td>ntecedents candidates</td><td/><td/><td colspan=\"2\">=&gt; Conditional rules C</td><td>Antecedents candidates</td><td>1111510101 Mill1101211111111111 Illn........ . ..... cam , .... \u2022\" . INUallIL INK01111</td></tr><tr><td/><td>REF</td><td/><td/><td/><td/><td>REF</td></tr><tr><td/><td colspan=\"2\">Antecedent weighting</td><td/><td/><td/><td>Antecedent selection</td></tr><tr><td/><td/><td/><td/><td/><td/><td>,0011 11.4</td></tr><tr><td/><td colspan=\"2\">11111211011111111111111=</td><td/><td/><td/><td>11151100111111111111M111 Agellill\u2022IIIIIInN11111 151011M1</td></tr><tr><td>Antecedent candidates.</td><td colspan=\"2\">11111110t* mem y.. imintommu lipmeneromm</td><td colspan=\"2\">(=&gt; Morphological pattern rules</td><td>selection</td><td>\u2022\u2022\u2022\u2022\u2022\u2022\u2022\u2022</td><td>... 11111115110 ----.. MilliffOREIn ILOWISZKOBINI IlliillNIIIIIMIIIMIMM .</td></tr><tr><td/><td colspan=\"2\">_\u00ae_ NIKON</td><td colspan=\"2\">and preference rules</td><td/><td>11111111.1111L 31IME01111</td></tr><tr><td/><td>REF</td><td/><td/><td/><td/><td>REF</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">NE/C0</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">tagged</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">document</td></tr><tr><td/><td>n</td><td colspan=\"2\">T7 I-c ig 5--1 Gil 311 11 ,</td><td colspan=\"3\">Ta CE. 
(Cheol-su-ga Young-Hee-ege Chak-ul Ju-ot-da ,</td></tr><tr><td/><td colspan=\"5\">Cheolsu gives a book to Younghee)</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Recency : this is similar to English. The distance between antecedents and anaphor is important factor................"
},
"TABREF5": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "heuristics and result in the evaluation"
}
}
}
}