{
"paper_id": "H94-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:30:04.492043Z"
},
"title": "Pattern Matching in a Linguistically-Motivated Text Understanding System",
"authors": [
{
"first": "Damaris",
"middle": [
"M"
],
"last": "Ayuso",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Systems and Technologies",
"location": {
"addrLine": "70 Fawcett St",
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "dayuso@bbn.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An ongoing debate in text understanding efforts centers on the use of pattern-matching techniques, which some have characterized as \"designed to ignore as much text as possible,\" versus approaches which primarily employ rules that are domain-independent and linguistically-motivated. For instance, in the message-processing community, there has been a noticeable pulling back from large-coverage grammars to the point where, in some systems, traditional models of syntax and semantics have been completely replaced by domain-specific finite-state approximations. In this paper we report on a hybrid approach which uses such domain-specific patterns as a supplement to domain-independent grammar rules, domain-independent semantic rules, and automatically hypothesized domain-specific semantic rules. The surprising result, as measured on TIPSTER test data, is that domain-specific pattern matching improved performance, but only slightly, over more general linguistically-motivated techniques.",
"pdf_parse": {
"paper_id": "H94-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "An ongoing debate in text understanding efforts centers on the use of pattern-matching techniques, which some have characterized as \"designed to ignore as much text as possible,\" versus approaches which primarily employ rules that are domain-independent and linguistically-motivated. For instance, in the message-processing community, there has been a noticeable pulling back from large-coverage grammars to the point where, in some systems, traditional models of syntax and semantics have been completely replaced by domain-specific finite-state approximations. In this paper we report on a hybrid approach which uses such domain-specific patterns as a supplement to domain-independent grammar rules, domain-independent semantic rules, and automatically hypothesized domain-specific semantic rules. The surprising result, as measured on TIPSTER test data, is that domain-specific pattern matching improved performance, but only slightly, over more general linguistically-motivated techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Virtually all systems which participated in the Fifth Message Understanding Conference, MUC-5 [1] , used finite-state (FS) pattern matching to some extent. Two useful tasks that this approach is well suited for are:",
"cite_spans": [
{
"start": 94,
"end": 97,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "1. treating application-specific simple constructions that may not belong in a general grammar of the language, and 2. detecting constructions which, though grammatical, may be found more reliably using domain-specific patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "For example, special-purpose FS subgrammars were used widely to efficiently and reliably recognize equipment names and company names. This illustrates (1) above. An illustration of (2) appears in the complex sentence below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Daio Paper Corp. said it will set up a cardboard factory in Ho Chi Minh City, Vietnam, jointly with state-run Cogido Paper Manufacturing Company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "It is easy for any parser to err in not attaching the modifier \"jointly\" to \"set up,\" and thereby miss the fact that a joint venture is being reported. One might argue that the sentence includes a discontiguous constituent (\"set up ... jointly\"). Nevertheless, it is easy to write a general pattern to deal with the discontiguous constituent correctly for this domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Finite-state parsers perform simple operations, and they are fast.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In data-extraction applications, where much of the input can be safely ignored, they provide an easy means to skip text without deep analysis. Some of the best-performing systems in MUC-5 relied heavily on the use of finite-state pattern-matching in crucial system components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "However, there are several advantages in maintaining broad linguistically-based coverage of a language, even in a data-extraction task. First, it allows for well-defined linguistic structures to be recognized and represented in a domain-independent way. This provides a level of linguistic representation that can be used by other general linguistic components such as a domain-independent discourse processor. In fact, this is a representational level which will probably be evaluated in the next Message Understanding Conference, MUC-6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Secondly, general linguistic coverage provides application independence. Different applications, such as data detection (information retrieval) can use the linguistic representations for various purposes. Achieving a synergistic operation of data-extraction and data-detection systems is one of the key goals of ARPA's TIPSTER Phase II project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Another intuitive advantage is portability. When porting a system to a new application, a base level of understanding is achieved very quickly before having to add domain-specific patterns. This is possible because the bulk of the processing work is done by the domain-independent rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "BBN's data-extraction system, PLUM [2] , showed consistently high-ranking performance in the MUC-3 [3] , MUC-4 [4] , and MUC-5 evaluations. We added two new finite-state patternmatching modules to PLUM between MUC-4 and MUC-5, expecting a substantial payoff in performance. The surprising result, as measured on TIPSTER test data, was that although domain-specific pattern matching improved performance, in the English domains it was only a slight improvement over more general, linguistically-motivated techniques.",
"cite_spans": [
{
"start": 35,
"end": 38,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 99,
"end": 102,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 111,
"end": 114,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the next section we further discuss the movement towards FS approximations in the community. We then describe the role of finite-state pattern-matching in BBN's PLUM system in more detail. Finally we present experiments used to measure the resulting effect in PLUM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Text processing systems participating in the MUC evaluations (most recently, MUC-5) perform linguistic processing to various levels. Some systems may attempt to do a deep level of understanding whenever possible [5] , whereas others use more shallow \"skimming\" techniques [6] , focusing only on information of interest and ignoring all other text. Similarly, systems span the spectrum in their use of finite-state pattern-matching instead of the more traditional, general, syntactic and semantic processing.",
"cite_spans": [
{
"start": 212,
"end": 215,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 272,
"end": 275,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Shift in the Community",
"sec_num": "2."
},
{
"text": "There are several reasons for the recent shift to increased use of FS approximations. Work was published on deriving finite-state approximations from more general grammars [7] . Then in MUC-3 it became evident that, in certain data-extraction tasks, a system which ignored much of the input text but focused attention on the items of interest could perform as well as other systems which emphasized deeper understanding of all the text. Once the problem of data-extraction was perceived to only require the understanding of small fractions of the input text, some systems evolved to do more shallow processing and the use of finite-state approximations increased.",
"cite_spans": [
{
"start": 172,
"end": 175,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Shift in the Community",
"sec_num": "2."
},
{
"text": "It should be noted that incorporating finite-state elements can result in advantages that are important for achieving operational data-extraction systems. The simplicity of the finite-state formalism makes FS rules more easily understandable (and thus, modifiable) by non-experts. Since parsing finite-state grammars can be done very efficiently, another advantage is fast processing, which is desirable in many real applications. Although both of these systems (along with BBN's PLUM) were top performers in MUC-5, they now lack a large-coverage domain-independent syntactic and semantic model. Rather, they rely on intensive analysis of domain corpora in order to encode patterns in each new domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Shift in the Community",
"sec_num": "2."
},
{
"text": "BBN's PLUM has a traditional and general-purpose processing core, where morphological analysis, syntactic parsing, semantic interpretation, and discourse analysis take place. Purely syntactic parse structures and general semantic interpretations are created during processing. When porting to a new domain, we can use our automatic procedures for learning lexical-semantic case-frame information from annotated data [10] to quickly obtain domain-specific understanding without using finite-state approximations. This then becomes the initial system on which more detailed development is based.",
"cite_spans": [
{
"start": 416,
"end": 420,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Role in PLUM's Architecture",
"sec_num": "3."
},
{
"text": "During the development for TIPSTER, we added to the core PLUM system two new optional processing modules which use finite-state patterns for the following specific purposes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Role in PLUM's Architecture",
"sec_num": "3."
},
{
"text": "1. detect domain-specific simple constructions that can be identified on the basis of shallow lexical information, and Figure 1 shows PLUM's architecture. Parallel possible paths are indicated where the optional pattern-matching modules appear. The two new modules, the Lexical Pattern Matcher and the Sentence-Level Pattern Matcher, use the same core finite-state pattern-matching processor, SPURT, which is described in the next section.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Role in PLUM's Architecture",
"sec_num": "3."
},
{
"text": "The Lexical Pattern Matcher operates before parsing but after part-of-speech tagging to recognize constructions which can be detected based on component words, their parts-of-speech, and simple properties of their lexical semantics. This is used primarily for structures that could be part of the grammar, but can be more efficiently recognized by a finite-state machine. Examples are corporation/organization names, person names, and measurements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Role in PLUM's Architecture",
"sec_num": "3."
},
{
"text": "The Sentence-Level Pattern Matcher replaced our former fragment combining component which sought to attach contiguous fragments based on syntactic and semantic properties. The new patternmatching component applies FS patterns to the fragments of the parsed and semantically interpreted input; the matched patterns' associated actions may modify or add new semantic information at the level of the sentence. That is, semantic relationships may be added between objects in potentially widely separated fragments of the sentence, thereby handling the example of the discontiguous constituent presented earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Role in PLUM's Architecture",
"sec_num": "3."
},
{
"text": "We defined our first version of the FS pattern-matcher and FS grammar syntax for a gisting application [11] . The problem there was to extract key information (e.g., plane-id, command) from the output of a speech recognizer whose input was (real) air-traffic controller and pilot interactions. This initial version of the pattern-matcher was also utilized, for the purpose of detecting company names, in the PLUM configuration used for the initial TIPSTER evaluations.",
"cite_spans": [
{
"start": 103,
"end": 107,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SPURT: A Finite-State Pattern-Action Language",
"sec_num": "4."
},
{
"text": "Before MUC-5 we made the FS grammar syntax more powerful (though still finite-state) to give the rule-writer more flexibility. We also introduced optimizations to the parser and added an action component to the rules. The resulting utility is named SPURT. We first used SPURT for applying sentence-level patterns, and later replaced the simple company-name recognizer by SPURT to perform general lexically-based pattern matching of various types of constructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SPURT: A Finite-State Pattern-Action Language",
"sec_num": "4."
},
{
"text": "SPURT rules are finite-state patterns which can be used to search for complex patterns of information in a sentence and build semantic structures from that information. A SPURT rule has a :PATTERN component which is the expansion (the \"right-hand side\") of a finite-state grammar rule. When the SPURT rules are read in at system load time, they are compiled into a network of nodes and arcs. Arcs coming out of a node indicate multiple possible next states. Nodes contain tests, so that if the test at the end-node of an arc is successful when applied to the input at the pointer, that arc is traversed. The parser simply matches an input against the network, performing a depth-first search, and selecting a path that matches the maximal amount of input. At each decision point, arcs are tried in an order which favors paths that consume a maximal amount of input in a meaningful way (e.g., the parser only follows \"don't-care\" arcs when other possibilities are exhausted). Once a successful parse of the whole input is found, the search is terminated. The resulting path is then traversed to execute the corresponding actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SPURT: A Finite-State Pattern-Action Language",
"sec_num": "4."
},
{
"text": "The Lexical Pattern Matcher applies SPURT patterns after morphological analysis but prior to parsing. (Footnote: In all-paths mode, the parser can be used to find arc probabilities based on training data. This was used in the gisting application, but has not yet been used in PLUM.) The input consists of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexically-based SPURT",
"sec_num": "4.1."
},
{
"text": "word tokens with part-of-speech information. A pattern can test on a token's word component, its part-of-speech, its semantic type, or a top-level predicate in its lexical semantics. When a pattern is matched, the action component identifies substrings of the matched sequence to add to the temporary lexicon. These temporary definitions are active for the duration of the message.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexically-based SPURT",
"sec_num": "4.1."
},
{
"text": "For example, a pattern for recognizing company names could match a sequence such as (\"Bridgestone\" NP) (\"Sports\" NPS) (\"Co.\" NP), where NP is the tag for proper nouns, and NPS for plural proper nouns; the pattern's action results in this sequence being replaced by the singular token (\"Bridgestone Sports Co.\" NP), which is, as a side effect, defined as a lexical entry having semantic type CORPORATION.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexically-based SPURT",
"sec_num": "4.1."
},
{
"text": "The input to Sentence-Level SPURT is a sentence object which has already been processed through the fragment semantic interpreter. Its fragments' parse nodes have already been annotated with a semantic interpretation. SPURT's parser actually operates on the leaf elements (the nodes corresponding to the terminals, or words) of the fragment parses. The \"pointer\" can move along the input either at the word level, or at the level of higher structures, achieved by matching nodes that are ancestors of the leaf nodes. Thus patterns can test on words or phrases. When a word is matched, the parse pointer moves to the next word's leaf node; if a phrase is matched, it is moved to the next possible word not spanned by the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-Level SPURT",
"sec_num": "4.2."
},
{
"text": "A pattern can test both syntactic and semantic information associated with the parse nodes. When a pattern is matched, the action component specifies new semantic information to be added to particular parse nodes (and thus to the fragment in which each node is contained). The new information is allowed to include predicates connecting semantic structures across different fragments; this is something the fragment semantic interpreter is unable to do, as it is a compositional operation on the individual, independent parse fragments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-Level SPURT",
"sec_num": "4.2."
},
{
"text": "Below is an example of a sentence-level rule which will match the example given in the introduction. This pattern matches sequences of the type described in the accompanying figure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-Level SPURT",
"sec_num": "4.2."
},
{
"text": "In order to measure the impact of the new FS components, we ran our MUC-5 English configurations (for English joint ventures and English microelectronics) on two test sets. The first is test data used for the TIPSTER 18-month evaluation; the second is data that was released for training, but was kept blind. For each pair of domain and test set, we ran experiments in each of 4 modes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "\u2022 Baseline: our default configuration, using both FS components;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "\u2022 No Lexical FS: turn off lexical FS processing, except for the old company-name recognizer;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "\u2022 No Sentence FS: turn off sentence-level FS processing; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "\u2022 No FS: turn off both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "The default configurations in the two domains share the same processing modules, the same general domain-independent grammar and semantic rules, and the same company-name recognizer. Each configuration contains its own set of domain-specific lexical-semantic definitions. A lexical-semantic definition contains the word's semantic type and (optionally) case-frames identifying semantic tests on possible arguments to the word. The semantic interpreter uses these rules in compositionally assigning semantics to parse-trees. For EJV, the initial version of the lexical semantics was automatically generated from training data [10]; it was then modified manually as needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "Although we tested both domains, we consider the test on EME to be more representative of the effects of the new modules. Most of the EJV development preceded the existence of the modules. In fact, for EJV we added no new rules to the lexical FS component. EME development, however, was able to take advantage of the new utilities almost from the start. It made heavier use of the front-end rules for some of the tricky technical constructions in that domain; it should be noted that even then, the impact of the lexical FS was minimal in that domain. The Japanese domains, JJV and JME, made heavy use of the sentence-level patterns. FS patterns for JJV gave us a quick gain in performance, but the price paid was having little carryover to the JME domain once that development began. We did not test those domains without the FS components. Based on our experience, if multiple Japanese domains are expected, we would undoubtedly build a robust domain-independent core of semantic rules, which in the long run maximizes re-usability and minimizes effort for each new domain. We utilized FS patterns because our Japanese expert wanted to explore the capabilities, and limits, of pattern-matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "Finite-state pattern-matching has already been shown to be useful and valuable in data-extraction applications. Its full possible impact is still being investigated. For example, several groups are trying to find automatic ways to derive FS patterns in order to surmount the porting problem they pose in systems that heavily depend on them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "However, maintaining a wide-coverage linguistic core can result in excellent data-extraction capability as has been evidenced by PLUM's performance in the government-sponsored MUC evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Perhaps the most interesting result was that domain-specific patterns, though in principle very powerful, added relatively little to the performance of the linguistically motivated components. Error rate was improved by at most 3 percentage points. Nevertheless, the PLUM data-extraction system's performance was among the highest of all systems participating in MUC-5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "While this one case study does not prove the relative efficacy of domain-specific patterns versus domain-independent, linguistically motivated processing, it does suggest that more research and development in linguistically motivated syntactic and semantic processing is promising even in the short term, not just in long range research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Proceedings of the Fifth Message Understanding Conference (MUC-5)",
"authors": [],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Fifth Message Understanding Conference (MUC-5), August 1993, to appear.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BBN PLUM: MUC-5 System Description",
"authors": [
{
"first": "",
"middle": [],
"last": "The PLUM System Group",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Fifth Message Understanding Conference (MUC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The PLUM System Group. BBN PLUM: MUC-5 System De- scription. To appear in Proceedings of the Fifth Message Un- derstanding Conference (MUC-5), August 1993.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Proceedings of the Third Message Understanding Conference (MUC-3)",
"authors": [],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Third Message Understanding Conference (MUC-3), Morgan Kaufmann Publishers Inc., May 1991.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Proceedings of the Fourth Message Understanding Conference (MUC-4)",
"authors": [],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Fourth Message Understanding Confer- ence (MUC-4), Morgan Kaufmann Publishers Inc., June 1992.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "New York University: Description of the Proteus System as Used for MUC-5",
"authors": [
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sterling",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Fifth Message Understanding Conference (MUC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grishman, R. and Sterling J. New York University: Descrip- tion of the Proteus System as Used for MUC-5. To appear in Proceedings of the Fifth Message Understanding Conference (MUC-5), August 1993.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Description of the CIRCUS System Used for MUC-5",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lehnert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Fifth Message Understanding Conference (MUC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lehnert, W., McCarthy, J., Soderland, S., Riloff, E., Cardie, C., Peterson, J., and Feng, F. UMASS/HUGHES: Descrip- tion of the CIRCUS System Used for MUC-5. To appear in Proceedings of the Fifth Message Understanding Conference (MUC-5), August 1993.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finite-State Approximations of Grammars. Proceedings of the Speech and Natural Language Workshop",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "20--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira, F. Finite-State Approximations of Grammars. Pro- ceedings of the Speech and Natural Language Workshop, pages 20-25. Morgan Kaufmann Publishers Inc., June 1990.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "GE-CMU: Description of the SHOGUN System Used for MUC-5",
"authors": [
{
"first": "P",
"middle": [],
"last": "Jacobs",
"suffix": ""
}
],
"year": 1993,
"venue": "To appear in Proceedings of the Fifth Message Understanding Conference (MUC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacobs, P. (Contact). GE-CMU: Description of the SHOGUN System Used for MUC-5. To appear in Proceedings of the Fifth Message Understanding Conference (MUC-5), August 1993.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The SRI MUC-5 JV-FASTUS Information Extraction System",
"authors": [
{
"first": "D",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hobbs",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Israel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kameyama",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1993,
"venue": "To appear in Proceedings of the Fifth Message Understanding Conference (MUC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Appelt, D., Hobbs, J., Bear, J., Israel, D., Kameyama, M., and Tyson, M. The SRI MUC-5 JV-FASTUS Information Extrac- tion System. To appear in Proceedings of the Fifth Message Understanding Conference (MUC-5), August 1993.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Partial Parsing: A Report on Work in Progress. Proceedings of the Speech and Natural Language Workshop",
"authors": [
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ayuso",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bobrow",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Boisen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Palmucci",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "204--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weischedel, R., Ayuso, D., Bobrow, R., Boisen, S., Ingria, R., and Palmucci, J. Partial Parsing: A Report on Work in Progress. Proceedings of the Speech and Natural Language Workshop, pages 204-209. Morgan Kaufmann Publishers Inc., Feb 1991.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gisting Conversational Speech",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rohlicek",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ayuso",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bobrow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Boulanger",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Gish",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Jeanrenaud",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Siu",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of International Conference of Acoustics, Speech, and Signal Processing (ICASSP)",
"volume": "2",
"issue": "",
"pages": "113--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohlicek, R., Ayuso, D., Bates, M., Bobrow, R., Boulanger, A., Gish, H., Jeanrenaud, P., Meteer, M., Siu, M., \"Gisting Conversational Speech,\" in Proceedings of International Con- ference of Acoustics, Speech, and Signal Processing (ICASSP), Mar. 23-26, 1992, Vol.2, pp. 113-116.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "shows the rules used to match the example above. The first sub-rule, NP-PLUS, finds sequences of tokens that have been tagged as proper nouns. The XXX-CO rule finds sequences of the type {'the'} [proper-noun]+ {[proper-noun-plural]} [corp-designator]. The :TERM-PRED operator appearing in this rule allows for other simple tests on the tokens. In this case, the corp-designator? test tries to match one of a list of possible company designators, e.g., \"Corp.\". The CO-INSTANCE rule determines the existence of a company name if one of many company patterns matches. If there is a match, the pattern assigns the tag tag-string to the sequence, and the action component creates a lexical entry for it. The lexical entry is assigned type CORPORATION and assigned the predicate NAME-OF relating the entry to a string created out of the words in the matched sequence. Finally, the top-level rule CO finds multiple instances of companies in the input. Lexical Pattern Example",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Sequences of the type [anyword]* [joint-word] [anyword]* [activity-corporation-or-venture-np] [anyword]*, where [joint-word] (or *JOINT-WORDS* as specified below) is one of a list of words such as \"jointly\" and \"together\". The operator :AND-ENV introduces tests on phrases in a parse tree: :CAT indicates the phrase category; because some phrase types are recursive, :LOW (or other values) is used to indicate which level of the recursive structure is the one to be looked at; and :CONCEPT indicates the semantic type that is desired of that phrase. The simple action component of this rule adds the semantic type JOINT-VENTURE to the parse-node where the joint-word occurred. In effect this indicates there is a joint venture in the sentence. Note that this pattern makes no decisions regarding the role, if any, that [activity-corporation-or-venture-np] plays in the joint venture. (def-top-patt JOINT-JOINT-VENTURE joint-word-tag)))) Sentence-Level Pattern Example",
"num": null,
"uris": null
},
"TABREF2": {
"text": "shows the difference in ERR for the various modes. ERR was the primary error measure used in MUC-5; to show improved performance, the goal is to minimize this measure. The new FS components, as evidenced by the Base results, improved ERR by at most 3 percentage points.",
"content": "<table><tr><td/><td>Base</td><td>No Lex</td><td>No Sent</td><td>No FS</td></tr><tr><td>EJV-1</td><td>66</td><td>66</td><td>68</td><td>68</td></tr><tr><td>EJV-2</td><td>68</td><td>68</td><td>70</td><td>70</td></tr><tr><td>EME-1</td><td>59</td><td>60</td><td>61</td><td>62</td></tr><tr><td>EME-2</td><td>62</td><td>63</td><td>63</td><td>63</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Impact of New FS Components: Numbers are ERR measures",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}