{
"paper_id": "W99-0104",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:08:56.633191Z"
},
"title": "Knowledge-Lean Coreference Resolution and its Relation to Textual Cohesion and Coherence",
"authors": [
{
"first": "Sanda",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Southern Methodist University Dallas",
"location": {
"postCode": "75275-0122",
"region": "TX"
}
},
"email": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Maiorano",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Southern Methodist University Dallas",
"location": {
"postCode": "75275-0122",
"region": "TX"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present a new empirical method for coreference resolution, implemented in the COCKTAIL system. The results of COCKTAIL are used for lightweight abduction of cohesion and coherence structures. We show that referential cohesion can be integrated with lexical cohesion to produce pragmatic knowledge. Upon this knowledge coherence abduction takes place.",
"pdf_parse": {
"paper_id": "W99-0104",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present a new empirical method for coreference resolution, implemented in the COCKTAIL system. The results of COCKTAIL are used for lightweight abduction of cohesion and coherence structures. We show that referential cohesion can be integrated with lexical cohesion to produce pragmatic knowledge. Upon this knowledge coherence abduction takes place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference evaluation was introduced as a new domain-independent task at the 6th Message Understanding Conference (MUC-6) in 1995. The task focused on a subset of coreference, namely the identity coreference, established between nouns, pronouns and noun phrases (including proper names) that refer to the same entity. In defining the coreference task (cf. (Hirschman and Chinchor, 1997)), special care was taken to use the coreference output not only for supporting Information Extraction (IE), the central task of the MUCs, but also to create means for research on coreference and discourse phenomena independent of IE.",
"cite_spans": [
{
"start": 350,
"end": 384,
"text": "(cf. (Hirschman and Chinchor, 1997))",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Annotated corpora were made available, using SGML tagging within the text stream. The annotated texts served as training examples for a variety of coreference resolution methods, which had to focus not only on precision and recall, but also on robustness. Two general classes of approaches were distinguished. The first class is characterized by adaptations of previously known reference algorithms (e.g. (Lappin and Leass, 1994), (Brennan et al., 1987)) to the scarce syntactic and semantic knowledge available in an IE system (e.g. (Kameyama, 1997)).",
"cite_spans": [
{
"start": 403,
"end": 427,
"text": "(Lappin and Leass, 1994)",
"ref_id": null
},
{
"start": 430,
"end": 452,
"text": "(Brennan et al., 1987)",
"ref_id": "BIBREF5"
},
{
"start": 530,
"end": 546,
"text": "(Kameyama, 1997)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The second class is based on statistical and machine learning techniques that rely on the tagged corpora to extract features of the coreferential relations (e.g. (Aone and Bennett, 1994) (Kehler, 1997) ).",
"cite_spans": [
{
"start": 162,
"end": 186,
"text": "(Aone and Bennett, 1994)",
"ref_id": null
},
{
"start": 187,
"end": 201,
"text": "(Kehler, 1997)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the past two MUC competitions, the high scoring systems achieved a recall in the high 50's to low 60's and a precision in the low 70's (cf. (Hirschman et al., 1998)). A study¹ of the contribution of each form of coreference to the overall performance shows that generally, proper name anaphora resolution has the highest precision (69%), followed by pronominal reference (62%). The worst precision is obtained by the resolution of definite nominal anaphors (46%). However, these results need to be contrasted with the distribution of coreferential links in the tagged corpora. The majority of coreference links (38.42%) connect names of people, organizations or locations. In addition, 19.68% of the tagged coreference links are accounted for by appositives. Only 16.35% of the tagged coreferences are pronominal.",
"cite_spans": [
{
"start": 135,
"end": 168,
"text": "(cf. (Hirschman et al., 1998))",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Nominal anaphors account for 25.55% of the coreference links, and their resolution is generally poorly represented in IE systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to the distribution of coreference links in newswire texts, a coreference module that is merely capable of handling recognition of appositives with high precision and incorporates rules of name alias identification can achieve a baseline coreference precision up to 58.1%, without sophisticated syntactic or discourse information. Precision increase is obtained by extending high-performance pronoun resolution methods (e.g. (Lappin and Leass, 1994)) to nominal coreference as well. Such enhancements rely on semantic and discourse knowledge.",
"cite_spans": [
{
"start": 430,
"end": 454,
"text": "(Lappin and Leass, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper we describe COCKTAIL, a high-performance coreference resolution system that operates on a mixture of heuristics that combine semantic and discourse information. The resulting coreference chains are shown to contribute to the derivation of cohesive chains and coherence graphs. Both cohesive and coherence structures are considered, partly because of their incremental complexity and partly because of the tradition (started with (Hobbs, 1979)) of studying the interaction of coreference and coherence. Section 2 presents COCKTAIL and the coreference methods it is built upon. Sections 3 and 4 describe the derivation of the cohesion and coherence structures. ¹The study, reported in (Kameyama, 1997), was performed on the coreference module of SRI's FASTUS (Appelt et al., 1993), an IE system representative of today's IE technology.",
"cite_spans": [
{
"start": 212,
"end": 228,
"text": "(Kameyama, 1997)",
"ref_id": "BIBREF24"
},
{
"start": 615,
"end": 628,
"text": "(Hobbs, 1979)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Coreference resolution relies on a combination of linguistic and cognitive aspects of language. Linguistic constraints are provided mostly by the syntactic modeling of language, whereas computational models of discourse bring forward the cognitive assumptions of anaphora resolution. Three different methods of combining anaphoric constraints are known to date. The first one integrates anaphora resolution in computational models of discourse interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "2"
},
{
"text": "Dynamic properties of discourse, especially focusing and centering, are invoked as the primary basis for identifying antecedents. Such computational methods were presented in (Grosz et al., 1995) and (Webber, 1988).",
"cite_spans": [
{
"start": 175,
"end": 195,
"text": "(Grosz et al., 1995)",
"ref_id": null
},
{
"start": 200,
"end": 214,
"text": "(Webber, 1988)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "2"
},
{
"text": "A second category of approaches combines a variety of syntactic, semantic and discourse factors as a multi-dimensional metric for ranking antecedent candidates. Anaphora resolution is determined by a composite of several distinct scoring procedures, each of which scores the prominence of the candidate with respect to a specific type of information. The systems described in (Asher and Wada, 1988), (Carbonell and Brown, 1988) and (Rich and Luperfoy, 1988) are examples of the mixed evaluation strategy.",
"cite_spans": [
{
"start": 377,
"end": 399,
"text": "(Asher and Wada, 1988)",
"ref_id": null
},
{
"start": 400,
"end": 427,
"text": "(Carbonell and Brown, 1988)",
"ref_id": "BIBREF6"
},
{
"start": 432,
"end": 457,
"text": "(Rich and Luperfoy, 1988)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "2"
},
{
"text": "Alternatively, other discourse-based methods consider coreference resolution a by-product of the recognition of coherence relations between sentences. Such methods were presented in (Hobbs et al., 1993) and (Wilensky, 1978). Although AI-complete, this approach has the appeal that it resolves the most complicated cases of coreference, uncovered by syntactic or semantic cues. We have revisited these methods by setting the relation between coreference and coherence on empirical grounds.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "(Hobbs et al., 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "2"
},
{
"text": "Two tendencies characterize current pronominal coreference algorithms. The first one makes use of the advances in parsing technology or of the availability of large parsed corpora (e.g. Treebank (Marcus et al., 1993)) to produce algorithms inspired by Hobbs' baseline method (Hobbs, 1978). For example, the Resolution of Anaphora Procedure (RAP) introduced in (Lappin and Leass, 1994) combines syntactic information with agreement and salience constraints. Recently, a probabilistic approach to pronominal coreference resolution was also devised (Ge et al., 1998), using the parsed data available from Treebank. The knowledge-based method of Lappin and Leass produces better results. Nevertheless, RAPSTAT, a version of RAP obtained by using statistically measured preference patterns for the antecedents, produced a slight enhancement of performance over RAP.",
"cite_spans": [
{
"start": 199,
"end": 218,
"text": "(Marcus et al., 1993)",
"ref_id": null
},
{
"start": 278,
"end": 291,
"text": "(Hobbs, 1978)",
"ref_id": null
},
{
"start": 344,
"end": 349,
"text": "(RAP)",
"ref_id": null
},
{
"start": 365,
"end": 389,
"text": "(Lappin and Leass, 1994)",
"ref_id": null
},
{
"start": 551,
"end": 568,
"text": "(Ge et al., 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronominal Coreference",
"sec_num": "2.1"
},
{
"text": "Other pronominal resolution approaches promote knowledge-poor methods (Mitkov, 1998), either by using an ordered set of general heuristics or by combining scores assigned to candidate antecedents. The CogNIAC algorithm (Baldwin, 1997) uses six heuristic rules to resolve coreference, whereas the algorithm presented in (Mitkov, 1998) is based on a limited set of preferences (e.g. definiteness, lexical reiteration or immediate reference). Both these algorithms rely only on part-of-speech tagging of texts and on patterns for NP identification. Their performance (close to 90% for certain types of pronouns) indicates that full syntactic knowledge is not required by certain forms of pronominal coreference.",
"cite_spans": [
{
"start": 70,
"end": 85,
"text": "(Mitkov, 1998)",
"ref_id": null
},
{
"start": 221,
"end": 236,
"text": "(Baldwin, 1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronominal Coreference",
"sec_num": "2.1"
},
{
"text": "The same claim is made in (Kennedy and Boguraev, 1996) and (Kameyama, 1997), where algorithms approximating RAP for poorer syntactic input obtain precision of 75% and 71%, respectively, a surprisingly small precision decay from RAP's 86%. These results prompted us to devise COCKTAIL, a coreference resolution system, as a mixture of heuristics performing on various syntactic, semantic and discourse cues. COCKTAIL is a composite of heuristics learned from the tagged corpora, which has the following novel characteristics:",
"cite_spans": [
{
"start": 26,
"end": 54,
"text": "(Kennedy and Boguraev, 1996)",
"ref_id": null
},
{
"start": 59,
"end": 75,
"text": "(Kameyama, 1997)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronominal Coreference",
"sec_num": "2.1"
},
{
"text": "1. COCKTAIL covers both nominal and pronoun coreference, but distinct sets of heuristics operate for different forms of anaphors. We have devised separate heuristics for reflexive, possessive, relative, 3rd person and 1st person pronouns. Similarly, definite nominals are treated differently than bare or indefinite nominals. 2. COCKTAIL performs semantic checks between antecedents and anaphors. These checks combine sortal constraints from WordNet with co-occurrence information from (a) Treebank and (b) conceptual 3. In COCKTAIL antecedents are sought not only in the accessible text region, but also throughout the current coreference chains. In this way cohesive information, represented in coreference chains, is employed in the resolution process. 4. The heuristics of COCKTAIL allow for lexicalizations (e.g. when the anaphor is an adjunct of a communication verb) and simplified coherence cues (e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronominal Coreference",
"sec_num": "2.1"
},
{
"text": "when the anaphor is the subject of the verb add, the antecedent may be a preceding subject of a communication verb). To exemplify some COCKTAIL heuristics that resolve pronominal coreference, we first present heuristics applicable for reflexive pronouns and then we list heuristics for possessive pronouns and 3rd person pronoun resolution. Brevity imposes the omission of heuristics for other forms of pronoun resolution. COCKTAIL operates by successively applying the following heuristics to the pronoun Pron (cf. Table 1). The antecedents produced by COCKTAIL are boldfaced, whereas the referring expressions are emphasized. Both referring expressions and resolved antecedents are underlined. Precision results are listed in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 510,
"end": 511,
"text": "(",
"ref_id": null
},
{
"start": 512,
"end": 519,
"text": "Table 1",
"ref_id": null
},
{
"start": 722,
"end": 729,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "Antecedents of reflexive pronouns are always sought in the same sentence. Antecedents of other types of pronouns are sought in preceding sentences too, starting from the immediately preceding sentence. Inside the sentence, the search for a specific word is performed from the current position towards the beginning of the sentence, whereas in the preceding sentences, the search starts at the beginning of the sentence and proceeds in a left-to-right fashion. Examples: Before Pennzoil's court fight with Texaco over the Getty purchase, Mr. Liedtke, one of the ploy's foremost practitioners, portrayed himself as something of an oil-patch rube, a notable feat considering his diplomas from Amherst College and Harvard Business School. The woman who is known to me as hard-working and responsible, clearly isn't herself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "Unlike many of her peers, most of whom are males in their 30s, she never takes herself too seriously. The same search order was used in (Kameyama, 1997). From now on, we indicate this search by Search1. COCKTAIL's test of semantic consistency blends together information available from WordNet and statistics gathered from Treebank. Different consistency checks are modeled for each of the heuristics. We detail here the check that applies to heuristic H1Pos, which resolves the possessive from the first example listed in Table 3 : There is a sense s1 of Noun1 and a sense s0 of Noun0 such that a common concept is found in their glosses. Cases 2 and 3 extend to synsets obtained through derivational morphology as well (e.g. nominalizations). For cases 2 and 3 COCKTAIL reinforces the coreference hypothesis by using a possessive similarity metric based on Resnik's similarity measures for noun groups (Resnik, 1995). From a subset of Treebank, we collect all possessives, and measure whether the similarity class of Noun0, Noun1 and their eventual common concept is above a threshold produced off-line.",
"cite_spans": [
{
"start": 246,
"end": 262,
"text": "(Kameyama, 1997)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 625,
"end": 632,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "Other pronominal coreference heuristics employ Search2, a search procedure that enhances Search1, since it prefers antecedents that are immediately succeeded by relative pronouns. This search is incorporated in COCKTAIL's heuristics that resolve 3rd person pronominal coreference. From our initial experiments, we do not see the need for special semantic consistency checks, since all heuristics performed with precision in excess of 90%. Part of this is explained by our usage of pleonastic filters and of recognizers of idiomatic usage. Table 5 illustrates some of the successful coreference resolutions.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 556,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "He says that in many years as a banker he has grown accustomed to \"dealing with honest people 99% of the time.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "Sen. Byrd takes pains to reassure the voter that he will see to it that the trade picture improves. A nurse who deals with the new patient admits she isn't afraid of her temper. (cf. (Poesio and Vieira, 1998)). In the tagged corpora, we have found only 20.93% of the nominal coreference cases to be definites, the majority (78.85%) being bare nominals², and only 1.32% were indefinites. However, more than 50% of the nominal referring expressions were names of people, organizations or locations. Adding to this, 15.22% of nominal coreference links are accounted for by appositives. Based on this evidence, COCKTAIL implements special rules for name alias identification and for robust recognition of appositions. Moreover, the heuristics for nominal coreference resolution apply Search3, an enhancement of Search2 that searches starting with the coreference chains, and then with the accessible text. To resolve nominal coreference, COCKTAIL successively applies the following heuristics: Heuristic 1 Nominal (H1Nom): if (Noun is the head of an appositive) then Pick the preceding NP. Examples: The subject is sensitive at a time when IBM is laying off thousands of employees. Mr. Iacocca led Chrysler through one of the largest stock sales ever for a U.S. industrial company, raising $1.78 billion. Chrysler is using most of the proceeds to reduce its $4.4 billion unfunded pension liability. We read where the Clinton White House is seeking a deputy to chief of staff Mack McLarty to impose some disciplined coherence on the place's rambunctious young staff. Table 6 : Examples of nominal coreference. The term repetition indicator, when consistency checks apply. For this heuristic, consistency checks are conservative, imposing that either the adjuncts be identical, coreferring, or the adjunct of the referent be less specific than the antecedent. Specificity principles apply also to H5Nom, where hyponymy is promoted, similarly to (Poesio and Vieira, 1998). Heuristic H3Nom allows coreference between \"the Securities and Exchange Commission\" and \"the commission\" but it bans links between \"Reardon Steel Co.\" and \"tons of steel\".",
"cite_spans": [
{
"start": 185,
"end": 202,
"text": "(Poesio and Vieira, 1998)",
"ref_id": null
},
{
"start": 1914,
"end": 1939,
"text": "(Poesio and Vieira, 1998)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1539,
"end": 1546,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "Many times coreferring nominals also share semantic relations (e.g. synonymy). Heuristic H5Nom identifies such cases, by applying consistency checks. Based on experiments with the coreference module of FASTUS, where this heuristic was initially implemented, we require that the most frequent senses of nouns be promoted. The same precedence of frequent senses is implemented in the assignment of categories, defined as the immediate WordNet hypernym. The category of proper names is dictated by the proper name recognizer, allowing such categories as Person, Organization or Location. In this way, coreference between \"IBM\" and \"the wounded computer giant\" can be established, since sense 3 of noun giant is Organization, the category of \"IBM\". Similar category-based semantic checks allow the recognition of the antecedent of proceeds from the second example listed in Table 6 . The hypernym of proceeds is gain, whose gloss genus is amount, the category of $1.78 billion. Semantic checks are also required in H7Nom and H8Nom, heuristics that rely on derivational morphology. The first example from Table 6 is resolved by H7Nom, since discussion, the nominalization of discuss, has the category communication, a hypernym of subject. The antecedent is the object of the verb discuss.",
"cite_spans": [],
"ref_spans": [
{
"start": 850,
"end": 857,
"text": "Table 6",
"ref_id": null
},
{
"start": 1075,
"end": 1082,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "The last heuristic, H9Nom identifies coreferring links with coerced entities of nominals. Coercions are obtained as paths of meronyms or hypernyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "(Harabagiu, 1998) discusses a coercion methodology based on WordNet and Treebank. Since in our test corpus there we very few cases of metonymic anaphors, Table 7 (Cardie and Wagstatf, 1999) and (McCarthy and Lehnert, 1995) . Our results show that high-precision empirical techniques can be ported from pronominal coreference resolution to the more difficult problem of nominal coreference.",
"cite_spans": [
{
"start": 162,
"end": 189,
"text": "(Cardie and Wagstaff, 1999)",
"ref_id": null
},
{
"start": 194,
"end": 222,
"text": "(McCarthy and Lehnert, 1995)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "30",
"sec_num": null
},
{
"text": "The heuristics encoded in COCKTAIL make light use of textual cohesion, i.e. the property of texts to \"stick together\" by using related words. Both pronominal and nominal coreference resolution heuristics use cohesion cues indicated by term repetition, while nominal coreference relies on semantic relations between anaphors and their antecedents. In addition, coreference chains are a form of textual cohesion, known as referential cohesion (cf. (Halliday and Hasan, 1976)).",
"cite_spans": [
{
"start": 439,
"end": 470,
"text": "(cf. (Halliday and Hasan, 1976))",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Cohesion",
"sec_num": "3"
},
{
"text": "Until now, lexical cohesion, arising from semantic connections between words, was successfully used as the only form of textual cohesive structure, known as lexical chains³. (³Definition introduced in (Halliday and Hasan, 1976) and (Morris and Hirst, 1991).) At present there are three methods of generating lexical chains. The first one, implemented in the TextTiling algorithm (Hearst, 1997), counts the frequencies of term repetitions and is an ideal, lightweight tool for segmenting texts. The second method adds knowledge from semantic dictionaries (e.g. Roget's Thesaurus in the work of (Morris and Hirst, 1991) or WordNet in the methods presented in (Barzilay and Elhadad, 1997), (Hirst and St-Onge, 1998)). Besides term repetition, this approach recognizes relations between text words that are connected in the dictionaries with predefined patterns. This method was applied for the generation of text summaries, the recognition of the intentional structure of texts and the detection of malapropisms. The third method is based on a path-finding algorithm detailed in (Harabagiu and Moldovan, 1998). This method creates a richer",
"cite_spans": [
{
"start": 515,
"end": 527,
"text": "Hirst, 1991)",
"ref_id": null
},
{
"start": 567,
"end": 590,
"text": "(Barzilay and Elhadad, 1997)",
"ref_id": null
},
{
"start": 593,
"end": 618,
"text": "(Hirst and St-Onge, 1998)",
"ref_id": "BIBREF18"
},
{
"start": 980,
"end": 1010,
"text": "(Harabagiu and Moldovan, 1998)",
"ref_id": "BIBREF12"
},
{
"start": 1068,
"end": 1095,
"text": "(Halliday and Hasan, 1976)",
"ref_id": null
},
{
"start": 1100,
"end": 1124,
"text": "(Morris and Hirst, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Cohesion",
"sec_num": "3"
},
{
"text": "structure, useful for the abduction of coherence relations from the knowledge encoded in WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Cohesion",
"sec_num": "3"
},
{
"text": "Here we describe a new cohesion structure that (a) incorporates both lexical and referential cohesion and (b) produces a unique chain that contains not only single words, but also textual entities encompassing head-adjunct lists. We use the finite-state parses of FASTUS for recognizing these entities, but the method extends to any basic phrasal parser⁴.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Cohesion",
"sec_num": "3"
},
{
"text": "We produce this novel cohesive structure to exploit the close relation between text cohesion and coherence. It is known (cf. (Harabagiu, 1999)) that cohesion, as a surface indicator of text coherence, can indicate the lexico-semantic knowledge upon which coherence is inferred. Our aim is to use this cohesive chain for producing axiomatic knowledge for CICERO, a TACITUS-like system that abducts coherence relations. TACITUS (Hobbs et al., 1993) is a successful abductive system when provided with extensive pragmatic and linguistic knowledge. CICERO is designed as a lightweight version of TACITUS, that performs reliable abductions, with minimal knowledge and effective searches. Translating all the lexical, morphological, syntactic and semantic ambiguities from texts would make the search intractable. Our solution for CICERO is to use a cohesive chain to create manageable knowledge upon which the abduction can be performed. Section 4 describes this knowledge and the operation of CICERO.",
"cite_spans": [
{
"start": 430,
"end": 448,
"text": "(Hobbs et al., 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Cohesion",
"sec_num": "3"
},
{
"text": "Our cohesive chain is a linked structure consisting of three parts: (1) the connected text entity, (2) its incoming and outgoing pointers and (3) a lexico-semantic graph containing paths of WordNet concepts and relations. The lexico-semantic structure is later translated into the axiomatic knowledge that supports coherence inference. To exemplify the cohesion chain, we use the following text, spanned by the coreference chains produced with COCKTAIL: [Toys R Us] [executive officer, chairman, executive, president]. We would like to obtain richer lexico-semantic information, thus we build a cohesion chain that contains larger textual entities. To recognize the entities, we use the coreference chains and the following parse, produced by FASTUS: #<PHRASE(ORGANIZATION-NAME):\"Toys R Us\"> #<PHRASE(BASIC):\"named\"> #<PHRASE(PERSON-NAME):\"Michael Goldstein\"> #<PHRASE(NG):\"chief executive officer\">",
"cite_spans": [
{
"start": 452,
"end": 463,
"text": "[Toys R Us]",
"ref_id": null
},
{
"start": 464,
"end": 513,
"text": "[executive officer, chairman, executive, president]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Cohesion",
"sec_num": "3"
},
{
"text": "#<PHRASE(NG):\"years of speculation\"> #<PHRASE(PREP):\"about\"> #<PHRASE(BASIC):\"will succeed\"> #<PHRASE(PERSON-NAME):\"Charles Lazarus, the toy retailer's founder and chief architect\"> #<PHRASE(PERSON-NAME):\"Robert Nakasone, formerly vice chairman\"> #<PHRASE(COMMA):\",\"> #<PHRASE(BASIC):\"widely regarded\"> #<PHRASE(PREP):\"as\"> #<PHRASE(NG):\"the other serious contender\"> #<PHRASE(NG):\"the top executive's job\"> #<PHRASE(COMMA):\",\"> #<PHRASE(BASIC):\"named\"> #<PHRASE(NG):\"president and chief operating officer,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8<P~5~(~)",
"sec_num": null
},
{
"text": "both new positions\"> Textual entities are either basic phrases contained in the coreference chains or lists of phrases collected from the parse, by scanning for all NGs or NAME-phrases directly connected to a verb phrase through Subject, Object or prepositional relations. For example, as the phrase \"Toys R Us\" is the antecedent from a coreference chain, its corresponding textual entity is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8<P~5~(~)",
"sec_num": null
},
{
"text": "[\"Toys R Us\" -Subject-> \"named\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8<P~5~(~)",
"sec_num": null
},
{
"text": "[\"named\" -Object1-> \"Michael Goldstein\"] [\"named\" -Object2-> \"chief executive officer\"] The cohesion chain for our text is illustrated in Figure 1 . The algorithm that generates cohesion chains is: Algorithm Cohesion-Chain-Builder 1. if (current NG belongs to a coreference chain)",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 132,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "8<P~5~(~)",
"sec_num": null
},
{
"text": "Create its textual entity TE and place it on the chain 2. if (the antecedent is already in the chain)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8<P~5~(~)",
"sec_num": null
},
{
"text": "Place the coreference pointer between the two TEs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8<P~5~(~)",
"sec_num": null
},
{
"text": "Populate the lexico-semantic structure(TE)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "if (the coreference is not an appositive)",
"sec_num": "3."
},
{
"text": "The derivation of the lexico-semantic structure (LSS) follows the steps: We base our consideration of textual coherence on the definitions introduced in (Hobbs, 1985). The formal definition of relations that capture the coherence between textual assertions is based on the relations between the states they infer, their changes and their logical connections. States, changes and logical connections can be retrieved from pragmatic knowledge, accessible in lexical knowledge bases like WordNet. The complex structure of our cohesion chains helps guide these inferences. For each textual unit, defined from the parse of the text, axiomatic knowledge is produced. The acquisition of axiomatic knowledge is cued by the concepts and relations from the LSS portion of the cohesion chain, and is mined from WordNet. CICERO, our system, adds to this knowledge axioms that feature the characteristics of every coherence relation. CICERO's job is to abduct the coherence structure of a text. To do so, it follows the steps: For the text illustrated in Section 3, this procedure generates the coherence graph illustrated in Figure 2 . We exemplify the operation of CICERO on this text by presenting the way it derives the Elaboration relation between the textual unit from the first sentence that announces the nomination of Michael Goldstein (TUa) and the textual unit from the same sentence that deals with the succession of Charles Lazarus (TUb). First, CICERO generates the knowledge upon which the abductions can be performed. This knowledge is represented in axiomatic form, using the notation previously implemented in TACITUS. In this formalism each text unit represents an event or a state, thus has a special variable e associated with it. Events are lexicalized by verbs, which are mapped into predicates verb(e,x,y), where x represents the subject of the event, and y represents its object (in the case of intransitive verbs, y is not attached to a predicate, whereas in the case of bitransitive verbs, y is mapped into y1 and y2). Moreover, predicates from the text are related to other predicates, derived from a knowledge base. These relations are captured in first order predicate calculus. For example, the pragmatic knowledge used for the derivation of the Elaboration relation between TUa and TUb is: TUa: assign(ea, xa). In the next step, all coherence relations are hypothesized, and the cost of their abduction is obtained. The appendix lists the LISP function created on the fly by CICERO that produces the abduction of the Elaboration relation. Because of the computational expense, an intermediary step simplifies the axiomatic knowledge. The appendix also lists the full abduction and its cost. CICERO is a system still under development, and at present we did not evaluate the precision of its results.",
"cite_spans": [
{
"start": 153,
"end": 166,
"text": "(Hobbs, 1985)",
"ref_id": "BIBREF21"
},
{
"start": 1841,
"end": 1848,
"text": "(e,z,y)",
"ref_id": null
},
{
"start": 2343,
"end": 2365,
"text": "TU,: assiqn( e~ , z~ )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1129,
"end": 1137,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "if (the ooreferen~ is not an appositive)",
"sec_num": "3."
},
{
"text": "We have introduced a new empirical method for coreference resolution, implemented in the COCgTtIL system. The results of this algorithm are used to \u2022 guide the abduction of coherence relations, as performed in our ClC~0 system. In an intermediary step, a rich cohesion structure is produced. This novel relation between coreference and coherence contrasts with the traditional view that coreference is a by-product of coherence resolution. Moreover, we reiterate the belief that coherence builds up from cohesion. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": " A CM, 38(11) :39-41.Ruslan Mitkov. 1998 Con: 22.0 2ro8 .e~ IIO-SPECI~'rZOIIS :1~ 0 ~:rmg u:lLeo 2.0: ((CO~lJ~ FA E2 F.).I0.00.0)((ASSOI~-FOSZTZ~ E2 l),S.00,1) (0~pl'Yoposrrza E2 A), 6.00o 1)",
"cite_spans": [
{
"start": 1,
"end": 6,
"text": "A CM,",
"ref_id": null
},
{
"start": 7,
"end": 13,
"text": "38(11)",
"ref_id": null
},
{
"start": 28,
"end": 40,
"text": "Mitkov. 1998",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Communication of the",
"sec_num": null
},
{
"text": "3 Cost: 24.400002 fz~8 e~ ~_ltrquLT1/H in I uJLng mrA~ 1.1:(L~ilIME-P0$XI' XUM El A)(U, AII011ATXOM E2 Z44)(M0-SPECOLITXO~J E2 A) (co~ E1 E2 \u00a31) S Cost: 10 tz~m ezpiadJa 8 ~ ix 3 using azl~ 4.0: ((ll0-MPEC0UTI0ml E2 A), 10.00. 0) 01AM~ I~. A)~.'Jrx0g E! i~.)(J~-P0SI'rxIMI !~ A) (Cling. El El Zl)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "cIm-s~q~cuu'l~lnls E2 A)",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating automated and mam~ acqui~tioa of anaphom resolution strat~e&",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"W"
],
"last": "Bennett",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the $Sth Annual Meeting of the A~odation for Computational f, ingu~. tics (ACL.gT)",
"volume": "",
"issue": "",
"pages": "122--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinatsu Aone and Scott W. Bennett. 1997. Evaluating automated and mam~ acqui~tioa of anaphom res- olution strat~e& In Proceedings of the $Sth Annual Meeting of the A~odation for Computational f, ingu~. tics (ACL.gT), pages 122-129, Madrid, Spain.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The SRI MUC-5 JV-FASTUS Information Extraction System",
"authors": [
{
"first": "Douglas",
"middle": [
"E"
],
"last": "Appelt",
"suffix": ""
},
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Beat",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Israel",
"suffix": ""
},
{
"first": "Megumi",
"middle": [],
"last": "Kameyama",
"suffix": ""
},
{
"first": "Mabry",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Fifth Me.uage Understanding Conference (MOC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas E. Appelt, Jerry R. Hobbs, John Beat, David Israel, Megumi Kameyama and Mabry Tyson. 1993. The SRI MUC-5 JV-FASTUS Information Extraction System. In Proceedings of the Fifth Me.uage Under- standing Conference (MOC-5).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A computational account of syntactic, semantic and discourse principles for anap.hora resolution",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Henri",
"middle": [],
"last": "Wad&",
"suffix": ""
}
],
"year": 1988,
"venue": "Journal of Semantics",
"volume": "6",
"issue": "",
"pages": "309--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and Henri Wad& 1988. A computational account of syntactic, semantic and discourse principles for anap.hora resolution. Journal of Semantics, 6:309- 344.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CogNIAC: high precision coreference with limited knowledge and linguistic resources",
"authors": [
{
"first": "Brack",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the A CL '97/EA CL '97 Workshop on Operutional factors in practical, robust anaphora reJolution",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brack Baldwin. 1997. CogNIAC: high precision corefer- ence with limited knowledge and linguistic resources. In Proceedings of the A CL '97/EA CL '97 Workshop on Operutional factors in practical, robust anaphora reJ- olution, pages 38-45, Madrid, Spain.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using Lexical Chains for Text Summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedinga of the A CL '97/BA CL '97 Workshop on Intelligent Scalable Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Michael Elhadad. 1997. Using Lex- ical Chains for Text Summarization. In Proceedinga of the A CL '97/BA CL '97 Workshop on Intelligent Scal- able Text Summarization, Madrid, Spain.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A centering approach to pronouns",
"authors": [
{
"first": "Susan",
"middle": [
"E"
],
"last": "Brennan",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [
"Walker"
],
"last": "Friedman",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the P.Sth Annual Meeting of the ACL (ACL-87)",
"volume": "",
"issue": "",
"pages": "155--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan E. Brennan, Marilyn Walker Friedman and Carl J. Pollard. 1987. A centering approach to pronouns. In Proceedings of the P.Sth Annual Meeting of the ACL (ACL-87), pages 155-162.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Anaphora Resolution: A Multi-Strategy Approach",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the l~h International Conference on Com.m~-rational Linguistics",
"volume": "",
"issue": "",
"pages": "96--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaime Carbonell and Richard Brown. 1988. Anaphora Resolution: A Multi-Strategy Approach. In Proceed- ings of the l~h International Conference on Com.m~- rational Linguistics, pages 96-101.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Noun phrase coreference as clustering",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kiri",
"middle": [],
"last": "Wagstaff",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceed imJs of the Joint Conference on Bmpirical Methods in NI, P and Very Large Corpor\u00a2",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Cardie and Kiri Wagstaff. 1999. Noun phrase coreference as clustering. In Proceed imJs of the Joint Conference on Bmpirical Methods in NI, P and Very Large Corpor\u00a2",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Anaphora Resolution: A Multi-Strategy Approach",
"authors": [
{
"first": "Niyu",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charuiak",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 6th Workshop on Very Large CoP. pore, (coLnvG/ACL 'gS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niyu Ge, John Gale and Eugene Charuiak. 1998. Anaphora Resolution: A Multi-Strategy Approach. In Proceedings of the 6th Workshop on Very Large CoP. pore, (coLnvG/ACL 'gS).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Centering. A Framework for Modeling the Local Coherence of Discourse",
"authors": [
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Grca",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J. grca, Aravind K. Joshi and Scott Weinstein. 1995. Centering. A Framework for Modeling the Local Coherence of Discourse. Computational Linguistics, 21(2).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cohesion in Eno glisk Longman",
"authors": [
{
"first": "M",
"middle": [
"A K"
],
"last": "Halliday",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hassan",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.A.K. Halliday and 1~ Hassan. 1976. Cohesion in Eno glisk Longman, London.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deriving metonymic coerdons from WordNet",
"authors": [
{
"first": "Sanda",
"middle": ["M"],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Worlc~hop of the OsmJe of WordaVet in Natural .Language Pro-ce~ir~ SysternJ, CO LING.A CI, \"gs",
"volume": "",
"issue": "",
"pages": "142--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanda M. Harabagiu. 1998. Deriving metonymic coer- dons from WordNet. In Proceedings of the Worlc~hop of the OsmJe of WordaVet in Natural .Language Pro- ce~ir~ SysternJ, CO LING.A CI, \"gs, pages 142-148.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Parallel System for Text Inference Using Marker Propagations. IBB8 Tmnaactions on Pandlel and'D~ib~ ,ff/sl~s",
"authors": [
{
"first": "Sanda",
"middle": ["M"],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "Dan",
"middle": ["I"],
"last": "Moldovan",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "9",
"issue": "",
"pages": "729--747",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanda M. Harabagiu and Dan I. Moldovan. 1998. A Par- allel System for Text Inference Using Marker Propaga- tions. IBB8 Tmnaactions on Pandlel and'D~ib~ ,ff/sl~s, 9(8):729--747.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "From Lexical Cohesion to Textual Coherence:. A Data Driven Persoective. lm ternational Journal of Pattern Recofnition and Artitidal",
"authors": [
{
"first": "Sanda",
"middle": ["M"],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 1999,
"venue": "Intelligence",
"volume": "13",
"issue": "2",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sands M. Harabagiu. 1999. From Lexical Cohesion to Textual Coherence:. A Data Driven Persoective. lm ternational Journal of Pattern Recofnition and Arti- tidal Intelligence, 13(2):1-18.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages",
"authors": [
{
"first": "Marti",
"middle": ["A"],
"last": "Hearst",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Lin~tics",
"volume": "23",
"issue": "1",
"pages": "33--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma~ A. Hearst. 1997. TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages. Computa- tional Lin~tics, 23(1):33--64.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coreference Task D,~nition",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coreference Task D,~nition.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The role of Annotated Training Data",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Itirshman",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynette Itirshman, Patricia Robinson, John Burger and Marc Vilain. 1998. The role of Annotated Training Data.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lexical Chains as Representations of Context for the Detection and Correction of Malapropism",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "St-Onge",
"suffix": ""
}
],
"year": 1998,
"venue": "WordNet -An Elec. tronic Lexical Databaze",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graeme Hirst and David St-Onge. 1998. Lexical Chains as Representations of Context for the Detection and Correction of Malapropism. In WordNet -An Elec. tronic Lexical Databaze, Edited by Christiane Fell- baum, MIT Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Resolving pronoun references",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
}
],
"year": null,
"venue": "Lingua",
"volume": "44",
"issue": "",
"pages": "311--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs. Resolving pronoun references. Lingua, 44:311-338.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Coherence and coreferen'ce. Cognitive",
"authors": [
{
"first": "Jerry",
"middle": ["R"],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1979,
"venue": "Science",
"volume": "3",
"issue": "1",
"pages": "67--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs. 1979. Coherence and coreferen'ce. Cog- nitive Science, 3(1):67-90.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On the coherence and structure of discourse",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs. 1985. On the coherence and structure of discourse Technical Report CSLI-85-37, Stanford University.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Interpretation as abduction",
"authors": [
{
"first": "Jerry",
"middle": ["R"],
"last": "Hobbs",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stickel",
"suffix": ""
},
{
"first": "Douglas",
"middle": ["E"],
"last": "Appelt",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 1993,
"venue": "Artificial InteUigence",
"volume": "63",
"issue": "",
"pages": "69--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry It. Hobbs, Mark Stickel, Doug.E. Appelt, and Paul Martin. 1993. Interpretation as abduction. Artificial InteUigence; 63:69--142.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An algorithm for pronominal anaphora resolution",
"authors": [
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Herbert",
"middle": [],
"last": "Learn",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "535--662",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shalom LapPin and Herbert Learn. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4)'535-662.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Recognizing Re~erential Li, I~: An Information Extraction Perspective",
"authors": [
{
"first": "Megumi",
"middle": [],
"last": "Kameyama",
"suffix": ""
}
],
"year": 1997,
"venue": "Proe~-~-inos o/the Workshop on Operational Factorm in Practica~ Robu~rt Anaphom Resolution for Unre.drict~d Texts, (A CL-97/BA OL. 97)",
"volume": "",
"issue": "",
"pages": "46--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Megumi Kameyama. 1997. Recognizing Re~erential Li, I~: An Information Extraction Perspective. In Proe~-~-inos o/the Workshop on Operational Factorm in Practica~ Robu~rt Anaphom Resolution for Un- re.drict~d Texts, (A CL-97/BA OL. 97), pages 46-53, Madrid, Spa=.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "ProbabilLstic Coreference in Information Extraction",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 1997,
"venue": "Pro,:ee~_ings of the Second Con/erenc~ on Empirical Methods in Natural Language PreceJsin9 ($IGDA T)",
"volume": "",
"issue": "",
"pages": "163--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kehler. 1997. ProbabilLstic Coreference in In- formation Extraction. In Pro,:ee~_ings of the Second Con/erenc~ on Empirical Methods in Natural Lan- guage PreceJsin9 ($IGDA T), pages 163-173.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Aaaphora for everyone: Pronomln~d anaphora resolution without a parser",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Kennedy End Braulmir",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bagure~v",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc_eedings of the 16th International Conferenc~ on Computational Linguis-tic~ (COLhVQ.96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Kennedy end Braulmir Bagure~v. 1996. Aaaphora for everyone: Pronomln~d anaphora reso- lution without a parser. In Proc_eedings of the 16th International Conferenc~ on Computational Linguis- tic~ (COLhVQ.96).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Rhetorical Structure ~ I Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": ["C"],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": ["A"],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Te~",
"volume": "8",
"issue": "2",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William (2. Mann and Sandra A. Thompson 1988. Rhetorical Structure ~ I Toward a functional theory of text organization. Te~ 8:243-281.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Building & large annotated corpm of English: The Penn Tteebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguis",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Marcus, B. Santorini and M.A. Mar~/-t~ewics. 1993. Building & large annotated corpm of En- glish: The Penn Tteebank. Computational Linguis- 19(2):313-330, 1993.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Us-Lug ded~'on trees f-~ corefereace resolution",
"authors": [
{
"first": "Joseph",
"middle": ["F"],
"last": "McCarthy",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Lehnert",
"suffix": ""
}
],
"year": 1995,
"venue": "Pro-eenlin~ o/the IJth lntmmtior~ Joint Con/.~,.-.~e on Artificial Intdligen~ (IJOAI-95)",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph F. McCarthy and Weady Lelmert. 1995. Us- Lug ded~'on trees f-~ corefereace resolution. In Pro- eenlin~ o/the IJth lntmmtior~ Joint Con/.~,.-.~e on Artificial Intdligen~ (IJOAI-95), pages 1050-1055.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Discourm~ strategies for gemeragmg natural-language text",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1985,
"venue": "Artificial Intdligenee",
"volume": "198",
"issue": "",
"pages": "1--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy McKeown. 198,5. Discourm~ strategies for gem- eragmg natural-language text. Artificial Intdligenee, 27:1--41, 1985.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "WordNet: A Le~d Database",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A Le~d Database.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "COCKTAIL doesn't employ semantic consistency checks for this form of pronominal coreference res-",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Figure 1: Cohesion chain",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Figure 2: Coherence graph",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "L~qOME-P0~Frx011 I~ A)(~m~n El. !~ El) 10 Cost: 211.400002 #oa expud/q IIO--SI~,ODL~I\"208 :i~ 3",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>Pron is reflezive) then apply successively:</td></tr><tr><td>oHenristic 1-Reflexive(H1R)</td></tr><tr><td>Search for PN, the closest proper name from Prgn</td></tr><tr><td>in the same sentence, in right to left order.</td></tr><tr><td>if (PN agrees in number and gender with Pron)</td></tr><tr><td>if (PN belongs to core/erence chain CC)</td></tr><tr><td>Pick Pron:</td></tr><tr><td>o Heuristie 4-Reflezive(H4R)</td></tr><tr><td>Search/or then Pick Noun.~</td></tr><tr><td>Resolution examples for reflexive pronouns are il-</td></tr><tr><td>lustrated in</td></tr></table>",
"text": "Search for Pron\" the closest prenoun from Pron in the same sentence, in right to left order. if (Pron\" agrees in number and gender with Pron) Noun.e, the dosest noun .from Pron in the same sentence, in right to left order. if (Noun.c a#n~a in number and gender with Pron)",
"num": null,
"type_str": "table"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>Heuristic</td><td>HIR H2R H3R H4K</td></tr><tr><td>Precision on a test</td><td/></tr><tr><td>set of I00 randomly</td><td>95% 92% 98% 89%</td></tr><tr><td>selected pronouns</td><td/></tr></table>",
"text": "Examples of reflexive pronouns",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"text": "Coreference precision (reflexive pronovns)",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"3\">if (Noun belongs to coref~</td><td colspan=\"2\">chain CC)</td></tr><tr><td colspan=\"5\">Ronald Reagan sends him-a list of h/s film roles.</td></tr><tr><td colspan=\"5\">The 20-minute tiigfit )~elps him forget h/s troubles.</td></tr><tr><td colspan=\"4\">The president renewed h/s promise to veto</td><td/></tr><tr><td colspan=\"2\">\"tax-rate increases.\"</td><td/><td/><td/></tr><tr><td colspan=\"5\">Table 3: Examples of possessive pronouns</td></tr><tr><td>Precision on</td><td/><td/><td/><td/></tr><tr><td>100 random pronouns</td><td>96%</td><td>93%</td><td>78%</td><td>86%</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">o Henris6\u00a2. l-Pouessive(H IPos )</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Searchl /or a posses~ve comb'uct of the form</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">[.ounl's ,~n2],</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">if ([Pron nouno] and</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">[nounl's noun2] agree in gender, n-tuber and</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">are semantically consistent)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">then if (noun2 belonga to coreyerence chain CC)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">and there is andement from CC which is</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">closest to Pron in Tezt, Pick that dement.</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Pick noun,.</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">oHcur/sfc 2-pouess/ve(H2Pos)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Senrchl for PN, the closest proper name from Pron</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">if (PN agrees in number and gender with Pron)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">if (PN belongs to corefe~n~ chain CC)</td></tr><tr><td/><td/><td/><td/><td>then ~</td><td>the dement from CC ~hich is</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">closest to Pron in 
Tezt</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">else Pick PN.</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">oHeuris6c 3-Possessive(H3Pce).</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Search for Pron\" the closest pronoun .from Pron .</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">if (Pron\" egre~ in number and fender e~h Pron).</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">if (Pron' belongs to coreferen~ chain CC)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">and there is an dement from CC which is</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">closest to Pron in Text, Pick that element.</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">else Pick Pron'</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">oHenrist~ ~.Possessiee(H4Pos)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Search for Noun, the closest eammon noun from Iron</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">if (Noun agrees in number and gender with Pron)</td></tr></table>",
"text": "Searchl. This search is employed by heuristics for possessive pronoun resolution:Oif (Pron is possessive) (i.e. we have a sequence [Pron nouno], where nouno is the head of the NP containing Pron) then apply suco,~_~sieely: and there is an element from CC which is closes~ to Pron in Tezt, Pick that element. else Pick Noun Examples and precision results are listed inTable 3 and Table 4, respectively.The timing of Mr. Shad's departure is likely to depend on how rapidly the Senate Banking Committee moves to confirm his successor.",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "Coreference precision (possessive pronouns) Given a possessive pronotm in a sequence [Pron Noon0], the antecedent Ante of Pron is semanti. cal]y consistent if the same possessive relationship can be established between Ante and Noono. the problem is that the possessive relation semantically corresponds to an open list of relations. For example, Nouno may be a feature of Ante. Ante may own Noono or Ante may have pe, formed the action lexical/zed by the nominali~-~on Nouno.",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>\u2022 ~</td></tr></table>",
"text": ". For this heuristic, we have to test whether from the possessive [Ante Nount] we can grant the pos~_~ve [Ante Noone] as well. There axe three cases that allow us to do so: \u2022 ~ase 1 Nount and Nouno corder. \u2022 Case ~Theceis ase~se ss of Nounx and asense so of Nouno such that a synonym of Noun~ i or of its immediate hypernym is found in the gloss of Noon~ or vicevers&",
"num": null,
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table><tr><td>: Examples of 3rd person pronouns</td></tr><tr><td>2.2 Nominal Coreference</td></tr><tr><td>Noun phrases can represent referring expressions in</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF8": {
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">such that head(PN)-Noun</td></tr><tr><td/><td colspan=\"3\">if (PN belongs to coreference chain CG)</td></tr><tr><td/><td colspan=\"3\">and there is an element from CC which is</td></tr><tr><td/><td colspan=\"3\">closest to Noun in Text, Pick that element.</td></tr><tr><td/><td colspan=\"2\">else Pick PN.</td><td/></tr><tr><td/><td colspan=\"3\">o Houristie 4-Nominal( H 4N om)</td></tr><tr><td/><td colspan=\"3\">Searchs ]or a proper name PN with the same</td></tr><tr><td/><td colspan=\"2\">category as Noun</td><td/></tr><tr><td/><td colspan=\"3\">if (PN belongs to core-ference chain CC)</td></tr><tr><td/><td colspan=\"3\">and there is an element from CO which is</td></tr><tr><td/><td colspan=\"3\">closest to Noun in Tezt, Pick that element.</td></tr><tr><td/><td colspan=\"2\">else Pick PN.</td><td/></tr><tr><td/><td colspan=\"3\">oHeuristic 5-Nominai(H5Nom)</td></tr><tr><td/><td colspan=\"3\">Searchs Noun\" a spnenym or hyponyrn of Noun</td></tr><tr><td/><td colspan=\"3\">if (Noun' belongs to core/erence chain CC)</td></tr><tr><td/><td colspan=\"3\">and there is an element fl'om CO which is</td></tr><tr><td/><td colspan=\"3\">closest to Noun in Text, Pick that dement.</td></tr><tr><td/><td colspan=\"2\">else Pick Noun'.</td><td/></tr><tr><td/><td colspan=\"3\">oH. 
euristic 6-Nominal(H6Nom )</td></tr><tr><td/><td colspan=\"3\">Searchs for Noun either in definites or</td></tr><tr><td/><td colspan=\"3\">in NPs having adjuncts in coreyerence chain CU)</td></tr><tr><td/><td colspan=\"3\">if Ante 8emantieally consistent with Noun</td></tr><tr><td/><td colspan=\"3\">if (Ante belongs to core/erenee chain UC)</td></tr><tr><td/><td colspan=\"3\">and there is an dement from UU which is</td></tr><tr><td/><td colspan=\"3\">closest to Noun in Text, Pick that element:</td></tr><tr><td/><td colspan=\"2\">else Pick Ante.</td><td/></tr><tr><td/><td colspan=\"3\">oHeuristic 7-Nomine/(H7Nom)</td></tr><tr><td/><td colspan=\"3\">if (Noun or one ol his hz~n~nrts</td></tr><tr><td/><td colspan=\"3\">or holonyms is a nominalization N)</td></tr><tr><td/><td colspan=\"3\">then Search/or the verb V deriving N</td></tr><tr><td/><td colspan=\"3\">or one o/ its synen~ns)</td></tr><tr><td/><td colspan=\"3\">then P/ok NP, the closest adjunct o/V</td></tr><tr><td/><td colspan=\"3\">if (NP belongs to \u00a2ore!erence chain 00)</td></tr><tr><td/><td colspan=\"3\">az~d there is an dement from CO which is</td></tr><tr><td/><td colspan=\"3\">closest to Noun in Te~, Pick that element.</td></tr><tr><td/><td colspan=\"2\">else Pick NP</td><td/></tr><tr><td/><td colspan=\"3\">oHeuristi\u00a2 &amp;N0m/na/(H8Nom)</td></tr><tr><td/><td colspan=\"3\">if (Noun is the head o/a prepositional</td></tr><tr><td/><td colspan=\"3\">phrase preceded by a nominalization N)</td></tr><tr><td/><td colspan=\"3\">then Search/or the verb V deriving N</td></tr><tr><td>o Heuristic P..Nor~inal(H2Nom)</td><td colspan=\"3\">or one oI its s~um~ns) if (Noun\" is on adjunct o/ V) and</td></tr><tr><td>if (Noun belongs to an NP, Searchs /or NP'</td><td colspan=\"3\">(Noun\" and Noun have the same category</td></tr><tr><td>such that Noun'ffiaame_name(head(NP),head(NP'))</td><td colspan=\"3\">\u2022 if (Noun' belongs to \u00a2ore/erenea chain CC)</td></tr><tr><td>or 
Noun'--same.name(adj(NP),adj(Ne'))) then if (Noun' belongs to core/erence chain CO)</td><td colspan=\"3\">and there is an dement from CC which is closest to ~Voen in Text, Pick that dement~</td></tr><tr><td>then Pick the element ~vm CC which is closest to Noun in Text. else Pick Noun: oHeuristif. 3-Nominal(H3Nom) if Noun is the head of an NP then Searchs for proper name PN</td><td colspan=\"3\">else Pick Noun\" oHouristi~ 9-Nominal(H9Nom) Searchs Jar Noun', a metonymp whose coercion is Noun Pick Noun'</td></tr><tr><td>well. 2We count as bare nominals coreferring adjuncts as</td><td>me</td><td>o</td><td>p. es</td></tr></table>",
"text": "York would;t discuss his compensation package which could easily reach into seven figures.",
"num": null,
"type_str": "table"
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>I Heuristic Precision on [[ 98% I[ H1Nom i H2Nom H3Nom j H4Nom 1 95% 82% 88%</td></tr><tr><td>100 random [['HSNo.m He~om ti7Nom HSNom</td></tr></table>",
"text": "lists the precision of the other heuristics only.",
"num": null,
"type_str": "table"
},
"TABREF10": {
"html": null,
"content": "<table/>",
"text": "Nom!~nal coreference precisionThe empirical \u2022 methods employed in COCKTAIL are an alternative to the inductive approaches described in",
"num": null,
"type_str": "table"
},
"TABREF12": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Lfor every re/at/on r(wt,,n2) from a TE</td></tr><tr><td colspan=\"2\">if(there is st a sense of wl a.d s2 a sense of</td></tr><tr><td colspan=\"2\">wa such that the same relation r'(ws,w4)/8 found</td></tr><tr><td>in a gtoss ~</td><td>the hierorchies of ,z~' or w~' )</td></tr><tr><td colspan=\"2\">Add relation r\" to LS$</td></tr></table>",
"text": "~.for every word tv in a TE if (there is a concept C in LSS such that there is a collocation [~1 c] in a gloss from the hierarchy(w)) Add to to LSS 3. if (word w is already in LSS) Add ne~a connection to w in LSS For example, in the first TE illustrated in Figure I, we have the relation Object(name, CEO). We find art Object relation also in the gloss of appoint, the hypernym of sense 3 of verb name. The new Obo ]ect relation connect verb assume with the synset {duty, responsibility, obligation}. A hypernym of CEO is manager, collocating with position in the gloss of managership. Noun position belongs to the hierarchy of duty, thus the new Object relation can be added to the LSS.",
"num": null,
"type_str": "table"
}
}
}
}