{
"paper_id": "P93-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:52:25.864090Z"
},
"title": "A LANGUAGE-INDEPENDENT ANAPHORA RES()LUTION SYSTEM FOR UNDERSTANDING MULTILINGUAL TEXTS",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": "",
"affiliation": {
"laboratory": "Systems Research and Applications (SRA)",
"institution": "",
"location": {
"addrLine": "2000 15th Street North Arlington",
"postCode": "22201",
"region": "VA"
}
},
"email": "aonec@sra.com"
},
{
"first": "Douglas",
"middle": [],
"last": "Mckee",
"suffix": "",
"affiliation": {
"laboratory": "Systems Research and Applications (SRA)",
"institution": "",
"location": {
"addrLine": "2000 15th Street North Arlington",
"postCode": "22201",
"region": "VA"
}
},
"email": "mckeed@sra.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a new discourse module within our multilingual NLP system. Because of its unique data-driven architecture, the discourse module is language-independent. Moreover, the use of hierarchically organized multiple knowledge sources makes the module robust and trainable using discourse-tagged corpora. Separating discourse phenomena from knowledge sources makes the discourse module easily extensible to additional phenomena.",
"pdf_parse": {
"paper_id": "P93-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a new discourse module within our multilingual NLP system. Because of its unique data-driven architecture, the discourse module is language-independent. Moreover, the use of hierarchically organized multiple knowledge sources makes the module robust and trainable using discourse-tagged corpora. Separating discourse phenomena from knowledge sources makes the discourse module easily extensible to additional phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes a new discourse module within our multilingual natural language processing system which has been used for understanding texts in English, Spanish and Japanese (el. [1, 2] )) The following design principles underlie the discourse module:",
"cite_spans": [
{
"start": 185,
"end": 188,
"text": "[1,",
"ref_id": null
},
{
"start": 189,
"end": 191,
"text": "2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Language-independence: No processing code depends on language-dependent facts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Extensibility: It is easy to handle additional phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Robustness: The discourse module does its best even when its input is incomplete or wrong.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Trainability: The performance can be tuned for particular domains and applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following, we first describe the architecture of the discourse module. Then, we discuss how its performance is evaluated and trained using discoursetagged corpora. Finally, we compare our approach to other research. 1 Our system has been used in several data extraction tasks and a prototype nlachine translation systeln. ",
"cite_spans": [
{
"start": 223,
"end": 224,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our discourse module consists of two discourse processing submodules (the Discourse A dministralor and the Resolution Engine), and three discourse knowledge bases (the Discourse Knowledge Source KB, the Discourse Phenomenon KB, and the Discourse Domain KB). The Discourse Administrator is a development-time tool for defining the three discourse KB's. The Resolution Engine, on the other hand, is the run-time processing module which actually performs anaphora resolution using these discourse KB's. The Resolution Engine also has access to an external discourse data structure called the global discourse world, which is created by the top-level text processing controller. The global discourse world holds syntactic, semantic, rhetorical, and other information about the input text derived by other parts of the system. The architecture is shown in Figure i ",
"cite_spans": [],
"ref_spans": [
{
"start": 851,
"end": 859,
"text": "Figure i",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discourse Architecture",
"sec_num": "2"
},
{
"text": "There are four major discourse data types within the global discourse world: Discourse World (DW), [)is-course Clause (DC), Discourse Marker (DM), and File Card (FC), as shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 187,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discourse Data Structures",
"sec_num": "2.1"
},
{
"text": "The global discourse world corresponds to an entire text, and its sub-discourse worlds correspond to subcomponents of the text such as paragraphs. Discourse worlds form a tree representing a text's structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Data Structures",
"sec_num": "2.1"
},
{
"text": "A discourse clause is created for each syntactic structure of category S by the semantics module. It can correspond to either a full sentence or a part of a flfll sentence. Each discourse clause is typed according to its syntactic properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Data Structures",
"sec_num": "2.1"
},
{
"text": "A discourse marker (cf. Kamp [14] , or \"discourse entity\" in Ayuso [3] ) is created for each noun or verb in the input sentence during semantic interpietation. A discourse marker is static in that once it is introduced to the discourse world, the information within it is never changed.",
"cite_spans": [
{
"start": 29,
"end": 33,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 67,
"end": 70,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Data Structures",
"sec_num": "2.1"
},
{
"text": "Unlike a discourse marker, a file card (cf. Heim [11] , \"discourse referent\" in Karttunen [15] , or \"discourse entity\" in Webber [19] ) is dynamic in a sense that it is continually updated as the discourse processing proceeds. While an indefinite discourse marker starts a file card, a definite discourse marker updates an already existing file card corresponding to its antecedent. In this way, a file card keeps track of all its co-referring discourse markers, and accumulates semantic information within them.",
"cite_spans": [
{
"start": 49,
"end": 53,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 90,
"end": 94,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 129,
"end": 133,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Data Structures",
"sec_num": "2.1"
},
{
"text": "Our discourse module is customized at development time by creating and modifying the three discourse KB's using the Discourse Administrator. First, a discourse domain is established for a particular NLP application. Next, a set of discourse phenomena which should be handled within that domain by the discourse module is chosen (e.g. definite NP, 3rd person pronoun, etc.) because some phenomena may not be necessary to handle for a particular application domain. Then, for each selected discourse phenomenon, a set of discourse knowledge sources are chosen which are applied during anaphora resolution, since different discourse phenomena require different sets of knowledge sources. Guindon el al. [10] ). A filter is used to eliminate impossible hypotheses, while an orderer is used to rank possible hypotheses in a preference order. The KS tree is shown in Figure 3 . Each KS contains three slots: ks-flmction, ks-data, and ks-language. The ks-function slot contains a functional definition of the KS. For example, the functional definition of the Syntactic-Gender filter defines when the syntactic gender of an anaphor is compatible with that of an antecedent hypothesis. A ks-data slot contains data used by ks-function. The separation of data from function is desirable because a parent KS can specify ks-function while its sub-KS's inherit the same ks-function but specify their own data. For example, in languages like English and Japanese, the syntactic gender of a pronoun imposes a semantic gender restriction on its antecedent. An English pronoun \"he\", for instance, can never refer to an NP whose semantic gender is female like \"Ms. Smith\". The top-level Semantic-Gender KS, then, defines only ks-flmction, while its sub-KS's for English and Japanese specify their own ks-data and inherit the same ks-function. A ks-language slot specifies languages if a particular KS is applicable for specific languages.",
"cite_spans": [
{
"start": 700,
"end": 704,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 861,
"end": 869,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discourse Administrator",
"sec_num": "2.2"
},
{
"text": "Most of the KS's are language-independent (e.g. all the generators and the semantic type filters), and even when they are language-specific, the function ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Administrator",
"sec_num": "2.2"
},
{
"text": "The discourse phenomenon KB contains hierarchically organized discourse phenomenon objects as shown in Figure 4 . Each discourse phenomenon object has four slots (alp-definition, alp-main-strategy, dp-backup-strategy, and dp-language) whose values can be inherited. The dp-definilion of a discourse phenomenon object specifies a definition of the discourse phenomenon so that an anaphoric discourse marker can be classified as one of the discourse phenomena. The dp-main-strategy slot specifies, for each phenomenon, a set of KS's to apply to resolve this particular discourse phenomenon. The alp-backupstrategy slot, on the other hand, provides a set of backup strategies to use in case the main strategy fails to propose any antecedent hypothesis. The dplanguage slot specifies languages when the discourse phenomenon is only applicable to certain languages (e.g. Japanese \"dou\" ellipsis).",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discourse Phenomenon KB",
"sec_num": "2.2.2"
},
{
"text": "When different languages use different sets of KS's for main strategies or backup strategies for the same discourse phenomenon, language specific dp-mainstrategy or dp-backup-strategy values are specified. For example, when an anaphor is a 3rd person pronoun in a partitive construction (i.e. 3PRO-Partitive-Parent) 2, Japanese uses a different generator for the main strategy (Current-and-Previous-DC) than English and Spanish (Current-and-Previous-Sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Phenomenon KB",
"sec_num": "2.2.2"
},
{
"text": "2e.g. \"three of them\" ill English, \"tres de ellos\" in Spanish, \"uchi san-nin\" in Japaamse Because the discourse KS's are independent of discourse phenomena, the same discourse KS can be shared by different discourse phenomena. For example, the Semantic-Superclass filter is used by both Definite-NP and Pronoun, and the Recency orderer is used by most discourse phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Phenomenon KB",
"sec_num": "2.2.2"
},
{
"text": "The discourse domain KB contains discourse domain objects each of which defines a set of discourse phenomena to handle [n a particular domain. Since texts in different domains exhibit different sets of discourse phenomena, and since different applications even within the same domain may not have to handle the same set of discourse phenomena, the discourse domain KB is a way to customize and constrain the workload of the discourse module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Domain KB",
"sec_num": "2.2.3"
},
{
"text": "The Resolution Engine is the run-time processing module which finds the best antecedent hypothesis for a given anaphor by using data in both the global discourse world and the discourse KB's. The Resolution Engine's basic operations are shown in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Resolution Engine",
"sec_num": null
},
{
"text": "The Resolution Engine uses the discourse phenomenon KB to classify an anaphor as one of the discourse phenomena (using dp-definition values) and to determine a set of KS's to apply to the anaphor (using dp-main-strategy values). The Engine then applies the generator KS to get an initial set of hypotheses and removes those that do not pass tile filter When there is more than one hypothesis, orderer KS's are invoked. However, when more than one orderer KS could apply to the anaphor, we face the problem of how to combine the preference values returned by these multiple orderers. Some anaphora resolution systems (cf. Carbonell and Brown [6] , l~ich and LuperFoy [16] , Rimon el al. [17] ) assign scores to antecedent hypotheses, and the hypotheses are ranked according to their scores. Deciding the scores output by the orderers as well as the way the scores are combined requires more research with larger data. In our current system, therefore, when there are multiple hypotheses left, the most \"promising\" orderer is chosen for each discourse phenomenon. In Section 3, we discuss how we choose such an orderer for each discourse phenomenon by using statistical preference. In the future, we will experiment with ways for each orderer to assign \"meaningful\" scores to hypotheses.",
"cite_spans": [
{
"start": 641,
"end": 644,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 666,
"end": 670,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 686,
"end": 690,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Antecedents",
"sec_num": "2.3.1"
},
{
"text": "When there is no hypothesis left after the main strategy for a discourse phenomenon is performed, a series of backup strategies specified in the discourse phenomenon KB are invoked. Like the main strut-egy, a backup strategy specifies which generators, filters, and orderers to use. For example, a backup strategy may choose a new generator which generates more hypotheses, or it may turn off some of the filters used by the main strategy to accept previously rejected hypotheses. How to choose a new generator or how to use only a subset of filters can be determined by training the discourse module on a corpus tagged with discourse relations, which is discussed in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Antecedents",
"sec_num": "2.3.1"
},
{
"text": "Thus, for example, in order to resolve a 3rd person pronoun in a partitive in an appositive (e.g. anaphor ID=1023 in Figure 7) , the phenomenon KB specifies the following main strategy for Japanese: generator = Head-NP, filters = {Semantic-Amount, Semantic-Class, Semantic-Superclass}, orderer = Recency. This particular generator is chosen because in almost every example in 50 Japanese texts, this type of anaphora has its antecedent in its head NP. No syntactic filters are used because the anaphor has no useful syntactic information. As a backup strategy, a new generator, Adjacent-NP, is chosen in case the parse fails to create an appositive relation between the antecedent NP ID=1022 and the anaphor. After each anaphor resolution, the global discourse world is updated as it would be in File Change Semantics (cf. Helm [11] ), and as shown in Figure 6 . First, the discourse marker for the anaphor is incorporated into the file card to which its antecedent discourse marker points so that the co-referring discourse markers point to the same file card. Then, the semantics information of the file card is updated so that it reflects the union of the information from all the co-referring discourse markers. In this way, a file card accumulates more information as the discourse processing proceeds.",
"cite_spans": [
{
"start": 828,
"end": 832,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 117,
"end": 126,
"text": "Figure 7)",
"ref_id": "FIGREF10"
},
{
"start": 852,
"end": 860,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Finding Antecedents",
"sec_num": "2.3.1"
},
{
"text": "The motivation for having both discourse markers and file cards is to make the discourse processing a monotonic operation. Thus, the discourse processing does not replace an anaphoric discourse marker with its antecedent discourse marker, but only creates or updates file cards. This is both theoretically and computationally advantageous because the discourse processing can be redone by just retracting the file cards and reusing the same discourse markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Antecedents",
"sec_num": "2.3.1"
},
{
"text": "Advantages of Our Approach Now that we have described the discourse module in detail, we summarize its unique advantages. First, it is the only working language-independent discourse system we are aware of. By \"language-independent,\" we mean that the discourse module can be used for different languages if discourse knowledge is added for a new language. Second, since the anaphora resolution algorithm is not hard-coded in the Resolution Engine, but is kept in the discourse KB's, the discourse module is extensible to a new discourse phenomenon by choosing existing discourse KS's or adding new discourse KS's which the new phenomenon requires.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "Making the discourse module robust is another important goal especially when dealing with real-world input, since by the time the input is processed and passed to the discourse module, the syntactic or semantic information of the input is often not as accurate as one would hope. The discourse module must be able to deal with partial information to make a decision. By dividing such decision-making into multiple discourse KS's and by letting just the applicable KS's fire, our discourse module handles partial information robustly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "Robustness of the discourse module is also manifested when the imperfect discourse KB's or an inaccurate input cause initial anaphor resolution to fail. When the main strategy fails, a set of backup strategies specified in the discourse phenomenon KB provides alternative ways to get the best antecedent hypothesis. Thus, the system tolerates its own insufficiency in the discourse KB's as well as degraded input in a robust fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "In order to choose the most effective KS's for a particular phenomenon, as well as to debug and track progress of the discourse module, we must be able to evaluate the performance of discourse processing. To perform objective evaluation, we compare the results of running our discourse module over a corpus with a set of manually created discourse tags. Examples of discourse-tagged text are shown in Figure 7 . The metrics we use for evaluation are detailed in Figure 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 409,
"text": "Figure 7",
"ref_id": "FIGREF10"
},
{
"start": 462,
"end": 470,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating and Training the Discourse Module",
"sec_num": "3"
},
{
"text": "We evaluate overall performance by calculating recall and precision of anaphora resolution results. The higher these measures are, the better the discourse module is working. In addition, we evaluate the discourse performance over new texts, using blackbox evaluation (e.g. scoring the results of a data extraction task.) To calculate a generator's failure vale, a filter's false positive rate, and an orderer's effectiveness, the algorithms in Figure 9 are used. 3",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 453,
"text": "Figure 9",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Evaluating the Discourse Module",
"sec_num": "3.1"
},
{
"text": "The uniqueness of our approach to discourse analysis is also shown by the fact that our discourse module can be trained for a particular domain, similar to the ways grammars have been trained (of. Black For each discourse phenomenon, given anaphor and antecedent pairs in the corpus, for each filter, calculate how often the filter incorrectly eliminates the antecedents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing Main Strategies",
"sec_num": "3.2"
},
{
"text": "For each anaphor exhibiting a given discourse phenomenon in the corpus, given the remaining antecedent hypotheses for the anaphor, for each applicable orderer, test if the orderer chooses the correct antecedent as the best hypothesis. In order to determine, for each discourse phenomenon, the most effective combination of generators, filters, and orderers, we evaluate overall performance of the discourse module (cf. Section 3.1) at different rate settings. We measure particular generators, filters, and orders for different phenomena to identify promising strategies. We try to minimize the failure rate and the false positive rate while minimizing the average number of hypotheses that the generator suggests and maximizing the number of hypotheses that the filter eliminates. As for orderers, those with highest effectiveness measures are chosen for each phenomenon. The discourse module is \"trained\" until a set of rate settings at which the overall performance of the discourse module becomes highest is obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing Main Strategies",
"sec_num": "3.2"
},
{
"text": "Our approach is more general than Dagan and Itai [7] , which reports on training their anaphora resolution component so that \"it\" can be resolved to its correct antecedent using statistical data on lexical relations derived from large corpora. We will certainly incorporate such statistical data into our discourse KS's.",
"cite_spans": [
{
"start": 49,
"end": 52,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing Main Strategies",
"sec_num": "3.2"
},
{
"text": "If the main strategy for resolving a particular anaphor fails, a backup strategy that includes either a new set of filters or a new generator is atternpted. Since backup strategies are eml)loyed only when the main strategy does not return a hypothesis, a backup strategy will either contain fewer filters than the main strategy or it will employ a generator that returns more hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining Backup Strategies",
"sec_num": "3.3"
},
{
"text": "If the generator has a non-zero failure rate 4, a new generator with more generating capability is chosen from the generator tree in the knowledge source KB as a backup strategy. Filters that occur in the main strategy but have false positive rates above a certain threshold are not included in the backup strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining Backup Strategies",
"sec_num": "3.3"
},
{
"text": "Our discourse module is similar to Carbonell and Brown [6] and Rich and LuperFoy's [16] work in using multiple KS's rather than a monolithic approach (cf. Grosz, Joshi and Weinstein [9] , Grosz and Sidner [8] , Hobbs [12] , Ingria and Stallard [13] ) for anaphora resolution. However, the main difference is that our system can deal with multiple languages as well as multiple discourse phenomena 5 because of our more fine-grained and hierarchically organized KS's. Also, our system can be evaluated and tuned at a low level because each KS is independent of discourse phenomena and can be turned off and on for automatic evaluation. This feature is very important because we use our system to process real-world data in different domains for tasks involving text understanding.",
"cite_spans": [
{
"start": 55,
"end": 58,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 83,
"end": 87,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 182,
"end": 185,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 205,
"end": 208,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 217,
"end": 221,
"text": "[12]",
"ref_id": "BIBREF11"
},
{
"start": 244,
"end": 248,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Zero failure rate means that tile hypotheses generated by a generator always contained tile correct antecedent.SCarbonell and Brown's system handles only intersentential 3rd person pronotms and some defilfite NPs, and Rich and LuperFoy's system handles only 3rd person pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Murasaki Project: Multilingual Natural Language Understanding",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Hatte",
"middle": [],
"last": "Blejer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Flank",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Mckee",
"suffix": ""
},
{
"first": "Sandy",
"middle": [],
"last": "Shinn",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the ARPA Human Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinatsu Aone, Hatte Blejer, Sharon Flank, Douglas McKee, and Sandy Shinn. The Murasaki Project: Multilingual Natural Lan- guage Understanding. In Proceedings of the ARPA Human Language Technology Workshop, 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SRA: Description of the SOLOMON System as Used for MUC-4",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Mckee",
"suffix": ""
},
{
"first": "Sandy",
"middle": [],
"last": "Shinn",
"suffix": ""
},
{
"first": "Hatte",
"middle": [],
"last": "Blejer",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of Fourth Message Understanding Conferencc",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinatsu Aone, Doug McKee, Sandy Shinn, and Hatte Blejer. SRA: Description of the SOLOMON System as Used for MUC-4. In Pro- ceedings of Fourth Message Understanding Con- ferencc (MUC-4), 1992.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Discourse Entities in JANUS",
"authors": [
{
"first": "Damaris",
"middle": [],
"last": "Ayuso",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of 27th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Damaris Ayuso. Discourse Entities in JANUS. In Proceedings of 27th Annual Meeting of the ACL, 1989.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Development and Evaluation of a Broad-(:',overage Probablistic Grammar of English-Language Computer Manuals",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of 30lh Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezra Black, John Lafferty, and Salim Roukos. Development and Evaluation of a Broad- (:',overage Probablistic Grammar of English- Language Computer Manuals. In Proceedings of 30lh Annual Meeting of the ACL, 1992.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Centering Approach to Pronouns",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Brennan",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of 25th Annual Meeting of the A(,'L",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Brennan, Marilyn Friedman, and Carl Pollard. A Centering Approach to Pronouns. In Proceedings of 25th Annual Meeting of the A(,'L, 1987.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Anaphora Resolution: A Multi-Strategy Ap-/)roach",
"authors": [
{
"first": "G",
"middle": [],
"last": "Jairne",
"suffix": ""
},
{
"first": "Ralf",
"middle": [
"D"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12lh International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jairne G. Carbonell and Ralf D. Brown. Anaphora Resolution: A Multi-Strategy Ap- /)roach. In Proceedings of the 12lh International Conference on Computational Linguistics, 1988.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic Acquisition of Constraints for the Resolution of Anaphora References and Syntactic Ambiguities",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Itai",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan and Alon Itai. Automatic Acquisition of Constraints for the Resolution of Anaphora References and Syntactic Ambiguities. In Pro- ceedings of the 13th International Conference on Computational Linguistics, 1990.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Attentions, Intentions and the Structure of Discourse",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Crosz",
"suffix": ""
},
{
"first": "Candace",
"middle": [
"L"
],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "Computational Linguistics",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Crosz and Candace L. Sidner. Atten- tions, Intentions and the Structure of Discourse. Computational Linguistics, 12, 1986.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Providing a Unified Account of Definite Noun Phrases in Discourse",
"authors": [
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of 21st Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Providing a Unified Account of Def- inite Noun Phrases in Discourse. In Proceedings of 21st Annual Meeting of the ACL, 1983.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Structure of User-Adviser Dialogues: Is there Method in their Madness?",
"authors": [
{
"first": "Raymonde",
"middle": [],
"last": "Guindon",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Stadky",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Brunnet",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Conner",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of 24th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymonde Guindon, Paul Stadky, Hans Brun- net, and Joyce Conner. The Structure of User- Adviser Dialogues: Is there Method in their Madness? In Proceedings of 24th Annual Meet- ing of the ACL, 1986.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Semantics of Definite and Indefinite Noun Phrases",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Helm",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Helm. The Semantics of Definite and In- definite Noun Phrases. PhD thesis, University of Massachusetts, 1982.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pronoun Resolution",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hohbs",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hohbs. Pronoun Resolution. Technical Report 76-1, Department of Computer Science, City College, City University of New York, 1976.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Computational Mechanism for Pronominal Reference",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Stallard",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of 27th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Ingria and David Stallard. A Computa- tional Mechanism for Pronominal Reference. In Proceedings of 27th Annual Meeting of the ACL, 1989.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Theory of Truth and Semantic Representation",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
}
],
"year": 1981,
"venue": "Formal Methods in the Study of Language. Mathematical Centre",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Kamp. A Theory of Truth and Semantic Representation. In J. Groenendijk et al., edi- tors, Formal Methods in the Study of Language. Mathematical Centre, Amsterdam, 1981.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discourse Referents",
"authors": [
{
"first": "Lauri",
"middle": [],
"last": "Karttunen",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauri Karttunen. Discourse Referents. In J. Mc- Cawley, editor, Syntax and Semantics 7. Aca- demic Press, New York, 1976.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An Architecture for Anaphora Resolution",
"authors": [
{
"first": "Elaine",
"middle": [],
"last": "Rich",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Luperfoy",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the Second Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elaine Rich and Susan LuperFoy. An Architec- ture for Anaphora Resolution. In Proceedings of the Second Conference on Applied Natural Lan- guage Processing, 1988.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Advances in Machine Translation Research in IBM",
"authors": [
{
"first": "Mort",
"middle": [],
"last": "Rimon",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Mccord",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Schwall",
"suffix": ""
},
{
"first": "Pilar",
"middle": [],
"last": "Mart~nez",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedzngs of Machine Translation Summit IIl",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mort Rimon, Michael C. McCord, Ulrike Schwall, and Pilar Mart~nez. Advances in Ma- chine Translation Research in IBM. In Proceed- zngs of Machine Translation Summit IIl, 1991.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evaluating Discourse Processing Algorithms",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of 27th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A. Walker. Evaluating Discourse Pro- cessing Algorithms. In Proceedings of 27th An- nual Meeting of the ACL, 1989.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Formal Approach to Discourse Anaphora",
"authors": [
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 1978,
"venue": "Bolt, Beranek, and Newman",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie Webber. A Formal Approach to Dis- course Anaphora. Technical report, Bolt, Be- ranek, and Newman, 1978.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ".... . . . . . . . . . . . . . . . . . . . r ............ . ....... o ....................................."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Discourse World, Discourse Clause, Discourse Marker, and File Card definitions are shared. In this way, much of the discourse knowledge source KB is sharable across different languages."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ".. _-._~_-' ~, ~,-,~-~ ....................."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 4: Discourse Phenomenon KB"
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Resolution Engine Operations KS's. If only one hypothesis rernains, it is returned as the anaphor's referent, but there may be more than one hypothesis or none at all."
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "'s: { DM-I DM-2} semantics: PatienL101 ^ Person.102"
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Updating Discourse World 2.3.2 Updating the Global Discourse World"
},
"FIGREF8": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ",Tile remaining antecedent hypotheses\" are the hypotheses left after all the filters are applied for all anaphor.Overall Performance: Recall = No~I, Precision = N\u00a2/Nh I Number of anaphors in input Arc. Number of correct resolutions Nh Number of resolutions attempted Filter: Recall = OPc/IPc, ['recision = OPc/OP IP OP OF~ 1 -OP/IP -or~/IF~ Number of correct pairs in input Number of pairs in input Number of pairs output and passed by filter Number of correct pairs output by filter Fraction of input pairs filtered out Fraction of correct answers filtered out (false positive rate) Generator: Recall = N\u00a2/I, ['recision = Nc/Nh I Nh gc Nh/I 1 -N~/I Number of anaphors in input Number of hypotheses in input Number of times correct answer in output Average number of hypotheses Fraction of correct answers not returned (failure rate) Orderer: I Number of anaphors in input N\u00a2 Number of correct answers output first Nc/I Success rate (effectiveness) Metrics used for Evaluating and Training Discourse For each discourse phenomenon, given anaphor and antecedent pairs in the corpus, calculate how often the generator fails to generate the antecedents."
},
"FIGREF9": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Algorithms for Evaluating Discourse Knowledge Sources <DM ID=-I000>T 1 ' ~'.~.~4S]~<./DM> (<DM ID=1001 Type=3PARTA [The AIDS Surveillance Corru~ttee of the Health and Welfare Ministry (Chairman, Prof\u00a2.~or Emeritus Junlchi Sh/okawa), on the 6~h, newly COnfirmed 7 AIDS patients (of them 3 arc dead) and 17 iafec~d pcop!\u00a2.] <DM IDol 020 Typc-~DNP Ref=1000>~'/',: ~-?'~)~ ~ ~,:.~.~\" J~D M > (7)-~ \"k~<DM ID=1021>IKIJ~.</DM>~<DM lD=1022 Type=BE Ref=1021> ~[~']~.:~'~</DM> (<DM ID=1023 Type=3PARTA Ref=1021>5 </DM>~-'Jx) . <DM ID=I02AType-ZPARTF Ref=1020></DM>--j ~, ~'-~.~'~.~1~)~. <DM"
},
"FIGREF10": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Discourse Tagged Corpora [4]). As Walker [lS] reports, different discourse algorithms (i.e. Brennan, Friedman and Pollard's centering approach [5] vs. Hobbs' algorithm [12]) perform differently on different types of data. This suggests that different sets of KS's are suitable for different domains."
},
"TABREF1": {
"type_str": "table",
"text": "is originated ; semantic concepts which correspond to globM topics of the text ; the corresponding character position in the text ; ~ list of discourse clauses in the current DW ; a list of DWs subordinate to the current one (defframe discourse-clause (discourse-d~ta-structure ; D(: discourse-markers ; ~ list of discourse m~rkers in the current D(:~ syntax ; ~n f-structure for the current DC DM ........ .........",
"content": "<table><tr><td>(defframe discourse-world (discourse-d*ta-structure)</td><td>; DW</td><td/></tr><tr><td>date</td><td>date of the text</td><td/></tr><tr><td colspan=\"3\">location topics position discourse-clauses s u b-discou rse-worlds~ ; loc~tion parse-tree ; ~ p~rse tree of this S</td></tr><tr><td>semantics</td><td colspan=\"2\">; ~ semantic (KB) object representing</td><td>the current DC</td></tr><tr><td>position</td><td>; the corresponding</td><td colspan=\"2\">character position in the text</td></tr><tr><td>d~te</td><td colspan=\"2\">; date of the current DC~</td></tr><tr><td>loca.tion</td><td colspan=\"2\">; Ioco.tlon of the current D(2</td></tr><tr><td>subordinate-discourse-clsuse</td><td colspan=\"2\">; a DC,\" subordinate to the current D(:</td></tr><tr><td>coordin~te-dlscourse-clattses)</td><td colspan=\"3\">; coordinate DC's which a conjoined sentence consists of</td></tr><tr><td colspan=\"4\">II (dell position .... discourse-clause di ........... syntax semantics file card) ;Jr ker(dl d ture' ; the corresponding ; a pointer b~ck to DC: character position in the text ; an f-structure for the current DM ; a semantic (KB) object ; a pointer to the file card (deffr&amp;me file-card (discourse-d~t~-structure) ; FC:</td></tr><tr><td>co-referring-discou rse-m~r kers</td><td colspan=\"2\">a list of co-referring DM's</td></tr><tr><td>u pd ated-semantic-info)</td><td colspan=\"3\">; a semantic (KB) object which contains cumulative sem&amp;ntlcs</td></tr></table>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "DM> se encuentran nueve nin~os menores de 13 an'os.",
"content": "<table><tr><td/><td/><td colspan=\"2\">ID=1025 Typc--ZPARTF Ref=1020&gt;&lt;/DM&gt;</td></tr><tr><td colspan=\"4\">&lt;[}M ID=I026&gt;~J~,&lt;/DM&gt; (&lt;DM ID=1027 Typc=JDEL Ref=1026&gt;~</td></tr><tr><td colspan=\"4\">[4 of ~ 7 ~:wly discovered patients were male homosexuals&lt;t022&gt;</td></tr><tr><td colspan=\"4\">(of them&lt;1023&gt; 2 are dead), I is heterosexual woaran, and 2 (ditto l)</td></tr><tr><td colspan=\"3\">are by contaminated blood product.]</td></tr><tr><td colspan=\"3\">La Comisio~n de Te'cnicos</td><td>del SIDA informo' dyer</td></tr><tr><td colspan=\"4\">de que existen &lt;DM ID=2000&gt;196 enfermos de</td></tr><tr><td colspan=\"4\">&lt;DM ID=2OOI&gt;SIDA&lt;/DM&gt;&lt;/DM&gt; en la Comunidad</td></tr><tr><td>Valenciana.</td><td colspan=\"3\">De &lt;DM ID=2002 Type=PRO Reffi000&gt;ellos</td></tr><tr><td colspan=\"4\">&lt;/DM&gt;, 147 corresponden a Valencia; 34, a Alicante;</td></tr><tr><td colspan=\"2\">y 15, a Castello'n.</td><td colspan=\"2\">Mayoritariamente</td><td>&lt;DM ID=2003</td></tr><tr><td colspan=\"4\">Type=DNP Ref=2001&gt;la enfermedad&lt;/DM&gt; afecta a &lt;DM</td></tr><tr><td colspan=\"4\">ID=2004 Type=GEN~Ios hombres&lt;/DM&gt;, con 158 cases.</td></tr><tr><td colspan=\"4\">Entre &lt;DN ID=2OOfi Type=DNP Ref=2OOO&gt;los afectados</td></tr><tr><td>&lt;/</td><td/><td/></tr></table>",
"num": null,
"html": null
}
}
}
}