{
"paper_id": "E99-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:37:19.884563Z"
},
"title": "Full Text Parsing using Cascades of Rules: an Information Extraction Perspective",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Ciravegna",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Lavelli",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes an approach to full parsing suitable for Information Extraction from texts. Sequences of cascades of rules deterministically analyze the text, building unambiguous structures. Initially basic chunks are analyzed; then argumental relations are recognized; finally modifier attachment is performed and the global parse tree is built. The approach was proven to work for three languages and different domains. It was implemented in the IE module of FACILE, an EU project for multilingual text classification and IE.",
"pdf_parse": {
"paper_id": "E99-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes an approach to full parsing suitable for Information Extraction from texts. Sequences of cascades of rules deterministically analyze the text, building unambiguous structures. Initially basic chunks are analyzed; then argumental relations are recognized; finally modifier attachment is performed and the global parse tree is built. The approach was proven to work for three languages and different domains. It was implemented in the IE module of FACILE, an EU project for multilingual text classification and IE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most successful approaches in IE (Appelt et al., 1993; Grishman, 1995; Aone et al., 1998 ) make very poor use of syntactic information. They are generally based on shallow parsing for the analysis of (non-recursive) NPs and Verbal Groups (VGs). After this step, regular patterns are applied in order to trigger primitive actions that fill template(s); meta-rules are applied to patterns to cope with different syntactic clausal forms (e.g., passive forms). If we consider the most complex MUC-7 task (i.e., the Scenario Template task (MUC7, 1998)), the current technology is not able to provide results near an operational level (expected F(1)=75%; the best system scored about 50% (Aone et al., 1998) ). One of the limitations of the current technology is the inability to extract (and to represent) syntactic relations among elements in the sentence, i.e. grammatical functions and thematic roles. Scenario Template recognition needs the correct treatment of syntactic relations at both sentence and text level (Aone et al., 1998) . Full parsing systems are generally able to correctly model syntactic relations, but they tend to be slow (because of huge search spaces) and brittle (because of gaps in the grammar). The use of big grammars partially solves the problem of gaps but worsens the problem of huge search spaces and makes grammar modifications difficult (Grishman, 1995) . Grammar modifications always have to be taken into account: many domain-specific texts present idiosyncratic phenomena that require non-standard rules. Often such phenomena are limited to some cases only (e.g., some types of coordination apply to people only and not to organizations). Inserting generic rules for such structures introduces (useless) extra complexity into the search space and, when applied indiscriminately (e.g., on classes other than people), can worsen the system's results. Moreover, it is not clear how semantic restrictions can be introduced into (big) generic grammars.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Appelt et al., 1993;",
"ref_id": "BIBREF2"
},
{
"start": 55,
"end": 70,
"text": "Grishman, 1995;",
"ref_id": "BIBREF4"
},
{
"start": 71,
"end": 88,
"text": "Aone et al., 1998",
"ref_id": "BIBREF1"
},
{
"start": 683,
"end": 702,
"text": "(Aone et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 1014,
"end": 1033,
"text": "(Aone et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 1368,
"end": 1384,
"text": "(Grishman, 1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "In this paper we propose an approach to full parsing for IE based on cascades of rules. The approach is inspired by the use of finite-state cascades for parsing (e.g., (Abney, 1996) uses them in a project for inducing lexical dependencies from corpora). Our work is interesting from an IE perspective because it proposes:",
"cite_spans": [
{
"start": 168,
"end": 181,
"text": "(Abney, 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "\u2022 a method for efficiently and effectively performing full parsing on texts;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "\u2022 a way of organizing generic grammars that simplifies changes, insertion of new rules and especially integration of domain-oriented rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The approach proposed in this paper for parsing has been extended to the whole architecture of an IE system: the lexical (lexical normalization and preparsing), semantic (default reasoning and template filling) and discourse modules are based on the same approach. The system has been developed as part of FACILE (Ciravegna et al., 1999) , a successfully completed project funded by the European Union. FACILE deals with text classification and information extraction from texts in the financial domain. The proposed approach has been tested mainly for Italian, but proved to work also for English and, as of the time of this writing, partially for Russian. Applications and demonstrators have been built in four different domains.",
"cite_spans": [
{
"start": 314,
"end": 338,
"text": "(Ciravegna et al., 1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "In this paper we first introduce the adopted formalism and then go into details on grammar organization and on the different steps through which parsing is accomplished. Finally we present some experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Every lexical element a in the input sentence w is abstractly represented by means of elementary objects, called tokens. A token T is associated with three structures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 [T]dep is a dependency tree for a, i.e. a tree representing syntactic dependencies between a and other lexical elements (its dependees) in w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 [T]feat is a feature structure representing syntactic and semantic information needed to combine a with other elements in the input.",
"cite_spans": [
{
"start": 2,
"end": 5,
"text": "[T]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 [T]lf is a Quasi Logical Form (QLF) providing a semantic interpretation for the combination of a with its dependees.",
"cite_spans": [
{
"start": 2,
"end": 5,
"text": "[T]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "Rules operate on tokens, and can therefore access all three structures above; rules incrementally build and update these structures. Lexical, syntactic and semantic constraints can thus be used in rules at any level. The whole IE approach can be based on the same formalism and rule types, as lexical, syntactic and semantic information can be processed uniformly. The general form of a rule is a triple \u27e8\u03b3\u03c3\u03b4, FT, FA\u27e9, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 \u03b3\u03c3\u03b4 is a non-empty string of tokens, called the rule pattern; \u03c3 is called the rule core and is non-empty; \u03b3, \u03b4 are called the rule context and may be empty;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 FT is a set of boolean predicates, called rule test, defined over tokens in the rule pattern;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 FA is a set of elementary operations, called rule action, defined over tokens in the rule core only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "The postfix, unary operators \"*\" (Kleene star) and \"?\" (optionality operator) can be used in the rule patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
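As a minimal illustration of the structures just described, the token's three structures and the rule triple \u27e8\u03b3\u03c3\u03b4, FT, FA\u27e9 can be sketched as Python dataclasses (all names here are illustrative, not taken from the FACILE implementation):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Token:
    """One lexical element; carries the three structures described above."""
    word: str
    dep: dict = field(default_factory=dict)    # [T]dep: dependency tree built so far
    feat: dict = field(default_factory=dict)   # [T]feat: syntactic/semantic features
    lf: list = field(default_factory=list)     # [T]lf: quasi-logical form

@dataclass
class Rule:
    """A rule <gamma sigma delta, FT, FA>: pattern, tests, actions."""
    gamma: List[str]       # left context (may be empty)
    sigma: List[str]       # rule core (non-empty)
    delta: List[str]       # right context (may be empty)
    tests: List[Callable]  # FT: boolean predicates over pattern tokens
    actions: List[Callable]  # FA: operations over core tokens only

    def pattern(self) -> List[str]:
        """The full rule pattern gamma-sigma-delta."""
        return self.gamma + self.sigma + self.delta
```

The split into `gamma`/`sigma`/`delta` mirrors the constraint that tests may inspect the whole pattern while actions touch only the core.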
{
"text": "A basic data structure, called the token chart, is processed and dynamically maintained. This is a directed graph whose vertices are tokens and whose arcs represent binary relations from some (finite) basic set. Initially, the token chart is a chain-like graph with tokens ordered as the corresponding lexical elements in w, i.e. arcs initially represent lexical adjacency between tokens. During processing, arcs may be rewritten so that the token chart becomes a more general kind of graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "For a rule to apply, a path \u03c6 must be found in the token chart that, when viewed as a string of tokens, satisfies the two following conditions: (i) \u03c6 is matched by \u03b3\u03c3\u03b4; and (ii) all the boolean predicates in FT hold when evaluated on \u03c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "When a rule applies, the elementary operations in FA are executed on the tokens of \u03c6 matching the core of the rule. The effect of action execution is that [T]dep, [T]feat and [T]lf are updated for the appropriate matching tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
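The matching-and-execution cycle above can be sketched as follows. This is a deliberately simplified illustration over a chain-shaped chart: patterns are plain category strings, the "*" and "?" operators are omitted, and all names are ours rather than the paper's.

```python
def apply_rule(tokens, pattern, core_slice, tests, actions):
    """Scan the chart for a contiguous path matching `pattern`; if all
    tests hold on the matched window, run the actions on the core tokens."""
    n, m = len(tokens), len(pattern)
    for i in range(n - m + 1):
        window = tokens[i:i + m]
        if all(tok["cat"] == cat for tok, cat in zip(window, pattern)):
            if all(test(window) for test in tests):
                core = window[core_slice]   # actions see the core only
                for action in actions:
                    action(core)
                return True
    return False

# Example: record a determiner as a dependee of the following noun.
chart = [{"cat": "Det", "word": "the"},
         {"cat": "N", "word": "increase", "dep": []}]
apply_rule(chart,
           pattern=["Det", "N"],
           core_slice=slice(0, 2),
           tests=[],
           actions=[lambda core: core[1]["dep"].append(core[0]["word"])])
```

After the call, the noun's `dep` list contains the determiner, mimicking an update to [T]dep.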
{
"text": "Rules are grouped into cascades that are finite, ordered sequences of rules. Cascades represent elementary logical units, in the sense that all rules in a cascade deal with some specific construction (e.g., subcategorization of verbs). From a functional point of view, a cascade is composed of three segments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 sl contains rules that deal with idiosyncratic cases for the construction at hand;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 s2 contains rules dealing with the regular cases;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
{
"text": "\u2022 s3 contains default rules that fire only when no other rule can be successfully applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation and Rules",
"sec_num": "2"
},
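The three segments can be sequenced as in the following sketch, where `apply_rule` is any function returning True when a rule fired on the tokens (an illustrative abstraction, not the actual interpreter):

```python
def run_cascade(tokens, s1, s2, s3, apply_rule):
    """Run one cascade: idiosyncratic rules (s1) first, then regular rules
    (s2); default/recovery rules (s3) fire only if nothing else applied."""
    fired = False
    for rule in s1 + s2:          # s1 rules take precedence over s2
        if apply_rule(tokens, rule):
            fired = True
    if not fired:                 # s3: only when no s1/s2 rule applied
        for rule in s3:
            if apply_rule(tokens, rule):
                break
    return fired
```

The ordering makes the precedence explicit: domain-specific rules shadow the regular ones, and defaults act purely as a fallback.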
{
"text": "The parsing model is strongly influenced by IE needs. Its aim is to build the sufficient IE approximation (SIEA) of the correct parse tree for each sentence, i.e. a complete parse tree where all the relations relevant for template filling are represented, while other relations are left implicit. The parser assumes that there is one and only one correct parse tree for each sentence and therefore also only one SIEA. Parsing is based on the application of a fixed sequence of cascades of rules. It is performed in three steps using different grammars: \u2022 chunking (analysis of NPs, VGs and PPs);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": "3"
},
{
"text": "\u2022 subcategorization frame analysis (for verbs, nominalizations, etc.);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": "3"
},
{
"text": "\u2022 modifier attachment. The first two steps are syntax driven and based on generic grammars. Most rules in such grammars are general syntactic rules, even if they strongly rely on the semantic information provided by a foreground lexicon (see also Section 3.2). During modifier attachment, mainly semantic patterns are used. At the end of these three steps the SIEA is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": "3"
},
{
"text": "We use deterministic dependency parsing 1 operating on a specific linear path within the token chart (the parsing path); at the beginning the parsing path is equal to the initial token chart. When a rule is successfully applied, the parsing path is modified so that only the head token is visible for the application of the following rules; this means that the other involved elements are no longer available to the parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": "3"
},
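The parsing-path reduction described here can be sketched as follows. This is a simplified illustration: in the actual system the dependents would be recorded in the head's [T]dep structure rather than as bare words.

```python
def reduce_path(path, start, end, head_index):
    """Replace path[start:end] with its head token: dependents are recorded
    under the head and removed from the path, so later rules see only the
    head (deterministic, no backtracking)."""
    span = path[start:end]
    head = span[head_index]
    head.setdefault("dep", []).extend(
        t["word"] for j, t in enumerate(span) if j != head_index)
    return path[:start] + [head] + path[end:]

path = [{"word": "the"}, {"word": "board"}, {"word": "has"}, {"word": "decided"}]
path = reduce_path(path, 0, 2, head_index=1)  # chunk "the board" under "board"
```

After the call only the heads remain visible on the path, which is exactly why the other elements "are no longer available to the parser".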
{
"text": "Chunking is accomplished in a standard way. In Figure 1 an example of chunk recognition is shown. 2",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Chunking",
"sec_num": "3.1"
},
{
"text": "A-structure analysis is concerned with the recognition of argumental dependencies between chunks. All kinds of modifier dependencies (e.g., PP attachment) are disregarded during this step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "More precisely: let w be the input sentence. A dependency tree for w is a tree representing all predicate-argument and predicate-modifier syntactic relations between the lexical elements in w. The A-structure for w is a tree forest obtained from the dependency tree by unattaching all nodes that represent modifiers. A-structures are associated with the token that represents the semantic head of w (Tsent in the following). A-structure analysis associates to Tsent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "\u2022 [Tsent]dep: the A-structure spanning the whole sentence;",
"cite_spans": [
{
"start": 2,
"end": 9,
"text": "[Tsent]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "1 Even if we use dependency parsing, in this paper we will make reference to constituency-based structures (e.g., PPs) because most readers are more acquainted with them than with dependency structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "2 In this paper we use literal English translations of Italian examples. A-structure analysis is performed without recursion by the successive application of three sequences of rule cascades: 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "\u2022 The first sequence of cascades performs analysis of basic (i.e., non-recursive) sentences. It does so using the subcategorization frames of available chunks. Three cascades of rules are involved: one for the subcategorization frames of NPs and PPs, one for those of VGs, and one for combining complementizers with their clausal arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "\u2022 The second sequence of cascades performs analysis of dependencies between basic sentences. This sequence processes all sentential arguments and all incidentals by employing only two cascades of rules, without any need for recursion. This sequence is applied twice, i.e. it recognizes structures with a maximum of two nested sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "\u2022 The third sequence of cascades performs recovery analysis. During this step all tree fragments not yet connected together are merged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "Tokens not recognized as arguments at the end of A-structure analysis are marked as modifiers and left unattached in the resulting A-structure. They will be attached in the parse tree during modifier attachment (see Section 3.3). We adopt a highly lexicalized approach. In a pure IE perspective the information for A-structure analysis is provided by a foreground lexicon (Kilgarriff, 1997) . Foreground lexica provide detailed information about words relevant for the domain (e.g., subcategorization frame, relation with the ontology); for words outside the foreground lexicon a large background lexicon provides generic information. The term subcategorization frame is used here in a restricted sense. 3 The order of the cascades in the sequences depends on the intrasentential structure of the specific language coped with. Finally, the recovery sequence collapses the two constituents above into a single sentence structure; the CP is considered a clausal modifier (as it was not subcategorized by anything) and is left unattached in the A-structure.",
"cite_spans": [
{
"start": 371,
"end": 389,
"text": "(Kilgarriff, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A-structure Analysis",
"sec_num": "3.2"
},
{
"text": "is the token associated with [has decided].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "[Tsent]dep is integrated with the search space for each unattached modifier (see Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "The way A-structures are produced is interesting for a number of reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "First of all generic grammars are used to cope with generic linguistic phenomena at sentence level. Secondly we represent syntactic relations in the sentence (i.e., grammatical functions and thematic roles); such relations allow a better treatment of linguistic phenomena than possible in shallow approaches (Aone et al., 1998; Kameyama, 1997) .",
"cite_spans": [
{
"start": 308,
"end": 326,
"text": "(Aone et al., 1998;",
"ref_id": null
},
{
"start": 327,
"end": 342,
"text": "Kameyama, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "The initial generic grammar is designed to cover the most frequent phenomena in a restrictive sense. Additional rules can be added to the grammar (when necessary) for coping with the uncovered phenomena, especially domain-specific idiosyncratic forms. The limited size of the grammar makes modifications simple (the A-structure grammar for Italian contains 66 rules). The deterministic approach combined with the use of sequences, cascades and segments makes grammar modifications simple, as changes in a cascade (e.g., rule addition/modification) influence only the following part of the cascade or the following cascades. This makes the writing and debugging of grammars easier than in recursive approaches (e.g., context-free grammars), where changes to a rule can influence the application of any rule in the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "The grammar organization in cascades and segments allows a clean definition of the grammar parts. Each cascade copes with a specific phenomenon (modularity of the grammar). All the rules for the specific phenomenon are grouped together and are easy to check.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "The segment/cascade structure is suitable for coping with the idiosyncratic phenomena of restricted corpora. As a matter of fact domainoriented corpora can differ from the standard use of language (such as those found in generic corpora) in two ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "\u2022 in the frequency of the constructions for a specific phenomenon;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "\u2022 in presenting different (idiosyncratic) constructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "Coping with different frequency distributions is conceptually easy by using deterministic parsing and cascades of rules, as it is just necessary to change the rule order within the cascade coping with the specific phenomenon, so that more frequently applied rules are first in the cascade. Coping with idiosyncratic constructions requires the addition of new rules. Adding new rules in highly modularized small grammars is not complex.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
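Tuning a cascade to a new frequency distribution then amounts to a simple reordering, as in this sketch (rule names and fire counts are hypothetical; counts would be collected on the training corpus):

```python
def reorder_by_frequency(cascade, fire_counts):
    """Place more frequently firing rules earlier in the cascade, so the
    deterministic parser tries the likeliest analyses first."""
    return sorted(cascade, key=lambda rule: fire_counts.get(rule, 0),
                  reverse=True)

counts = {"passive": 3, "active": 40, "cleft": 1}
cascade = reorder_by_frequency(["passive", "active", "cleft"], counts)
```

Because rules are tried in order and the first match wins, this reordering changes efficiency and preference behaviour without touching any rule body.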
{
"text": "Finally, from the point of view of grammar organization, defining segments is more than just having ordered cascades. Generic rules (in s2) are separated from domain-specific ones (in s1); rules covering standard situations (in s2) are separated from recovery rules (in s3). In s2, rules are generic and deal with unmarked cases. In principle s2 and s3 are units portable across applications without changes. Domain-dependent rules are grouped together in s1 and are the resources the application developer works on for adapting the grammar to the specific corpus needs (e.g., coping with idiosyncratic cases). Such rules generally use contexts and/or introduce domain-dependent (semantic) constraints in order to limit their application to well defined cases. s1 rules are applied before the standard rules, so idiosyncratic constructions take precedence over standard forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "Segments also help in parsing robustly. s3 deals with unexpected situations, i.e. cases that could prevent the parser from continuing. For example, the presence of unknown words is coped with after chunking by a cascade trying to guess the word's lexical class. If every strategy fails, a recovery rule includes the unknown word in the immediately preceding chunk so as to let the parser continue. Recovery rules are applied only when rules in s1 and s2 do not fire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "At the end of the A-structure recognition Tsent",
"sec_num": null
},
{
"text": "The aim of modifier attachment is to find the correct position for attaching relevant modifiers in the parse tree and to add the proper semantic relations between each modifier and its modifiee in [Tsent]lf. Afterwards, the possible SP for each modifier is computed. The SP for a modifier Tm is a path in the token chart connecting Tm with other elements in [Tsent]dep and, ultimately, with Tsent. Then rules are applied to filter out elements in SPs according to syntactic constraints (e.g., NPs or PPs can be modified by a relative clause, but VGs can not).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": "3.3"
},
{
"text": "After SPs have been computed, modifiers are attached using a sequence of cascades of rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": "3.3"
},
{
"text": "A first cascade, composed mainly of generic syntactic rules, attaches subordinates (e.g., relative clauses). Many of these rules are somewhat similar to A-structure recognition rules. They are truly syntactic rules recognizing part of the subcategorization frame of subordinated verbs, using semantic information provided by the foreground lexicon. Note however that they are applied to SPs, not to the parsing path (as A-structure rules are).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": "3.3"
},
{
"text": "Other cascades are used to attach different types of modifiers, such as PPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": "3.3"
},
{
"text": "Such rules mainly involve semantic constraints. For example, the rule shown in Figure 3 can recognize una crescita dei profitti del 20% (lit. an increase of profits of/by 20%). 4 Generally rules involve two elements (i.e. the modifier and the modifiee), taking into account intervening elements (such as other adjuncts) that do not have further associated conditions. The example above, instead, is more complex as it introduces constraints also on one of the intervening adjuncts (i.e., on T3). Such a domain-oriented rule solves a recurring ambiguity in the domain of company financial results. As a matter of fact, of/by 20% could modify both nouns from a syntactic and semantic point of view. Rules for modifier attachment are easy to write. The SP allows complex cases to be reduced to simple ones. For example the rule in Figure 3 also applies to:",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": "3.3"
},
{
"text": "\u2022 an increase in 1997 of profits of/by 20%;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": "3.3"
},
{
"text": "\u2022 an increase, news report, of profits of/by 20% \u2022 an increase, considering the inflation rate, of profits (both gross and net) of/by 20%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": null
},
{
"text": "Patterns are usually developed having in mind the simplest case, i.e. a sequence of contiguous chunks in the sentence (such as in [an increase] [of profits] [of/by 20%]) that can be interleaved by other non-relevant chunks. Conceptually this step is very similar to that used by shallow parsing approaches such as in (Grishman, 1997) . Note however that rules are not applied on a list of contiguous chunks, but on the search space (where the parse tree and related syntactic relations are available). Parse-tree based modifier attachment is less error prone than attachment performed on flat chunk structures (as used in shallow parsing). For example it is possible to avoid cases of attachments violating syntactic constraints, as would be the case in attaching of/by 20% to inflation rate; the parse tree allows the ambiguity to be solved by attaching of/by 20% to increase.",
"cite_spans": [
{
"start": 317,
"end": 333,
"text": "(Grishman, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": null
},
{
"text": "an increase, considering the inflation rate, of profits (both gross and net) of/by 20% and 20% to inflation rate (see Figure 4) (Kameyama, 1997) . The obtained final parse tree is very close to the SIEA mentioned at the beginning of Section 3. In this tree all the A-structures are correctly built and all the modifiers are attached. Modifiers relevant for the IE application are attached in the correct position in the tree and a valid semantic relation is established with the modifiee in [Tsent]lf. Irrelevant modifiers are attached in a default position in the tree (the lowest possible attachment) and a null semantic relation is established with the modifiee. The only difference between the produced tree and the SIEA is in the A-structure, where all the relations are captured (and not only those relevant for the domain). Modeling the argumental structures of irrelevant constituents as well can be useful in order to correctly assign salience at discourse level, for example when interesting relations involve elements that are dependees of irrelevant verbs.",
"cite_spans": [
{
"start": 130,
"end": 146,
"text": "(Kameyama, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 118,
"end": 127,
"text": "Figure 4)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Modifier attachment",
"sec_num": null
},
{
"text": "The approach to parsing proposed in this paper was implemented in the IE module of FACILE (Ciravegna et al., 1999) , an EU project for multilingual text classification and IE. It was tested on four domains and in three languages. In particular for Italian one application (about bond issues) has been fully developed and two others have reached the level of demonstration (management succession and company financial results). For English a demonstrator for the field of economic indicators was developed. A Russian demonstrator for bond issues was developed up to the level of modifier attachment. The approach to rule writing and organization adopted for parsing (i.e., the type of rules, cascades, segments, and available primitives and rule interpreter) was extended to the whole architecture of the IE module. The lexical (lexical normalization and preparsing), semantic (default reasoning and template filling) and discourse levels are likewise organized in the same way.",
"cite_spans": [
{
"start": 90,
"end": 114,
"text": "(Ciravegna et al., 1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Remarks and Conclusions",
"sec_num": "4"
},
{
"text": "Given that the approach to parsing proposed in this paper is strongly influenced by IE needs, it is difficult to evaluate it by means of the standard tools used in the parsing community. Approximate indications can be provided by the effectiveness in recognizing A-structures and by the measures on the overall IE tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remarks and Conclusions",
"sec_num": "4"
},
{
"text": "Effectiveness in recognizing A-structures was experimentally verified for Italian on a corpus of 95 texts in the domain of bond issues: 33 texts were used for training, 62 for testing. Results were: P=97, R=83 on the training corpus and P=95, R=71 on the test corpus. In our opinion the high precision demonstrates the applicability of the method. The lower recall shows the difficulty of building complete foreground lexica, a well-known problem in IE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remarks and Conclusions",
"sec_num": "4"
},
{
"text": "Concerning the effectiveness of the IE process, in the Italian application on bond issues the system reached P=80, R=72, F(1)=76 on the 95 texts used for development (33 ANSA agency news, 20 \"Il Sole 24 ore\" newspaper articles, 42 Radiocor agency news; 10,472 words in all). Table 1 shows the kind of template used for this application. Effectiveness was automatically calculated by comparing the system results against a user-defined tagged corpus via the MUC scorer (Douthat, 1998) . The development cycle of the application was organised as follows: resources (grammars, lexicon and knowledge base) were developed by carefully inspecting the first 33 texts of the corpus. Then the system was compared against the whole corpus (95 texts) with the following results: Recall=51, Precision=74, F(1)=60. Note that the corpus used for training was composed only of ANSA news, while the test corpus included 20 \"Il Sole 24 ore\" newspaper articles and 42 Radiocor agency news (i.e., texts quite different from ANSA's in both terminology and length). Finally resources were tuned on the whole corpus, mainly by focusing on the texts that did not reach sufficient results in terms of R&P. The system analyzed 1,125 words/minute on a Sparc Ultra 5 with 128M RAM (whole IE process). The template slots and their filler types are: issuer (a template element), kind of bond (a label), amount (a monetary amount), currency (a string from the text), announcement date, placement date, interest date, maturity and average duration (temporal expressions), global rate and first rate (strings from the text). Table 1 : The template to be filled for bond issues.",
"cite_spans": [
{
"start": 469,
"end": 484,
"text": "(Douthat, 1998)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1634,
"end": 1641,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Remarks and Conclusions",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "Giorgio Satta has contributed to the whole work on parsing for IE via cascades of rules. The authors would like to thank him for the constant help and the fruitful discussions in the last two years. He also provided useful comments to this paper. The FACILE project (LE 2440) was partially funded by the European Union in the framework of the Language Engineering Sector. The English demonstrator was developed by Bill Black, Fabio Rinaldi and David Mowatt (Umist, Manchester) as part of the FACILE project. The Russian demonstrator was developed by Nikolai Grigoriev (Russian Academy of Sciences, Moscow).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Partial parsing via finitestate cascades",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the ESSLI '96 Robust Parsing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 1996. Partial parsing via finite- state cascades. In Proceedings of the ESSLI '96 Robust Parsing Workshop.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SRA: description of the IE 2 system used for MUC-7",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Lauren",
"middle": [],
"last": "Halverson",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Hampton",
"suffix": ""
},
{
"first": "Mila",
"middle": [],
"last": "Ramos-Santacruz",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinatsu Aone, Lauren Halverson, Tom Hamp- ton, and Mila Ramos-Santacruz. 1998. SRA: description of the IE 2 system used for MUC-7. In Proceedings of the Seventh Message Understanding Conference (MUC-7), http://www.muc.saic.com/.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "FAS-TUS: A finite-state processor for information extraction from real-world text",
"authors": [
{
"first": "Douglas",
"middle": [
"E"
],
"last": "Appelt",
"suffix": ""
},
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Israel",
"suffix": ""
},
{
"first": "Mabry",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas E. Appelt, Jerry R. Hobbs, John Bear, David Israel, and Mabry Tyson. 1993. FAS- TUS: A finite-state processor for information extraction from real-world text. In Proceed- ings of the Thirteenth International Joint Con- ference on Artificial Intelligence, Chambery, France.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FACILE: Classifying texts integrating pattern matching and information extraction",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Ciravegna",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Lavelli",
"suffix": ""
},
{
"first": "Nadia",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Gilardoni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Mazza",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Matiasek",
"suffix": ""
},
{
"first": "William",
"middle": [
"J"
],
"last": "Black",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Rinatdi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mowatt",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Ciravegna, Alberto Lavelli, Nadia Mann, Luca Gilardoni, Silvia Mazza, Massimo Ferraro, Johannes Matiasek, William J. Black, Fabio Ri- natdi, and David Mowatt. 1999. FACILE: Clas- sifying texts integrating pattern matching and information extraction. In Proceedings of the Sixteenth International Joint Conference on Ar- tificial Intelligence, Stockholm, Sweden. Aaron Douthat. 1998. The message un- derstanding conference scoring software user's manual. In Proceedings of the Seventh Message Understanding Conference (MUC-7}, http://www.muc.saic.com/.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The NYU system for MUC-6 or where's syntax? In Sixth message understanding conference MUC-6",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Grishman. 1995. The NYU system for MUC-6 or where's syntax? In Sixth mes- sage understanding conference MUC-6. Morgan Kaufmann Publishers.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Information Extraction: a multidisciplinary approach to an emerging technology",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Grishman. 1997. Information extraction: Techniques and challenges. In M. T. Pazienza, editor, Information Extraction: a multidisci- plinary approach to an emerging technology. Springer Verlag.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Recognizing referential links: An information extraction perspective",
"authors": [
{
"first": "Megumi",
"middle": [],
"last": "Kameyama",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL/EACL Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Megumi Kameyama. 1997. Recognizing referen- tial links: An information extraction perspec- tive. In Mitkov and Boguraev, editors, Proceed- ings of ACL/EACL Workshop on Operational Factors in Practical, Robust Anaphora Resolu- tion for Unrestricted Texts, Madrid, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Foreground and background lexicons and word sense disambiguation for information extraction",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 1997,
"venue": "In International Workshop on Lexically Driven Information Extraction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff. 1997. Foreground and back- ground lexicons and word sense disambiguation for information extraction. In International Workshop on Lexically Driven Information Ex- traction, Frascati, Italy.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proceedings of the Seventh Message Understanding Conference (MUC-7}. SAIC",
"authors": [
{
"first": "",
"middle": [],
"last": "Muc7",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC7. 1998. Proceedings of the Seventh Message Understanding Conference (MUC-7}. SAIC, http://www.muc.saic.com/.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The Italian sentence used as an example."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "dep and [Tsent]t$ are used to determine the correct attachments and are also modified during this step. Modifier attachment is performed in two steps: first all the possible attachments are computed for each modifier (its search space, SP). Here mainly generic syntactic rules are used. Then the correct attachment in the search space is determined for each modifier, applying domain-specific rules. The rules always modify both [Tsent]dep and [Tsent]ty. Only modifiers relevant for the IE task are attached in the proper position. Other modifiers are attached in a default position in [T~ent] d~p. Initially modifiers are attached in the lowest position in [Tsent]dep. No semantic relation is hypothesized between each modifier and the rest of the sentence in [Ts~nt]t/. Given: \u2022 T~: a modifier token derived from chunk n, \u2022 Tn-l: the token, derived from chunk n -1, immediately preceding n in the sentence, in right branching languages (such as Italian) the lowest possible attachment for T,~ is in the position of modifier of Tn-x."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "An example of modifier attachment rule."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Violation of syntactic constraints (in the third example above) profit to increase"
},
"TABREF2": {
"html": null,
"num": null,
"text": "The rule that recognizes [the issue of bonds]np.",
"type_str": "table",
"content": "<table><tr><td>PATTERN</td><td>TEST</td><td/><td>ACTION</td><td>Matched Input</td></tr><tr><td>T1</td><td>[T1]yeat.cat=NP</td><td/><td>Depend ant (IT1] aep, [T3] aep)</td><td>\"the issue\"</td></tr><tr><td/><td/><td/><td>[T1]yeat.subcat.int-arg=[T3]yeat</td></tr><tr><td/><td/><td/><td>[T1]t/.patient =[7\"3]ty.head</td></tr><tr><td>T2*</td><td>[T2]/eat.cat=ld j unct</td><td/><td/></tr><tr><td>T3</td><td>[T3]yeat.cat=PP</td><td/><td/><td>\"of bonds\"</td></tr><tr><td/><td colspan=\"2\">[T3]]eat=[T1]]e~t.subcat.int-arg</td><td/></tr><tr><td/><td colspan=\"2\">Figure 2: a nomi-</td><td colspan=\"2\">The result after the application of the first se-quence is: [ACME has decided]ip</td></tr><tr><td colspan=\"3\">nalization -the internal argument is realized</td><td/></tr><tr><td colspan=\"2\">as a PP marked by of;</td><td/><td/></tr><tr><td colspan=\"3\">-[ACME has decided]ip: [ACME]np is the ex-</td><td/></tr><tr><td colspan=\"3\">ternal argument of [has decided]vg;</td><td/></tr><tr><td colspan=\"2\">\u2022 [tells a press release]ip:</td><td>[a press</td><td/></tr><tr><td colspan=\"3\">release]np is the external argument of</td><td/></tr><tr><td>[tells]vg;</td><td/><td/><td/></tr><tr><td colspan=\"3\">\u2022 [start the issue of bonds for 12</td><td/></tr><tr><td colspan=\"3\">million Euro]ip: [issue]np is the internal</td><td/></tr><tr><td colspan=\"3\">argument of [start]v9; [for 12 million</td><td/></tr><tr><td colspan=\"3\">Euro]pp is a modifier (as it is not subcatego-</td><td/></tr><tr><td colspan=\"3\">rized by other tokens) and is unattached in</td><td/></tr><tr><td colspan=\"2\">the A-structure;</td><td/><td/></tr><tr><td colspan=\"3\">-[diversify its obligation in the</td><td/></tr><tr><td>market]ip:</td><td colspan=\"2\">[its obligation]np is the</td><td/></tr><tr><td colspan=\"3\">internal argument of [diversify]v9; [in</td><td/></tr><tr><td colspan=\"2\">the market]pp is a modifier;</td><td/><td/></tr><tr><td colspan=\"3\">\u2022 [to start the 
issue of bonds for 12</td><td/></tr><tr><td colspan=\"3\">million Euro]ep: [tO]compl gets its argu-</td><td/></tr><tr><td colspan=\"2\">ment [start ...lip;</td><td/><td/></tr><tr><td colspan=\"3\">\u2022 [so to diversify its obligation in</td><td/></tr><tr><td colspan=\"2\">the market]cp:</td><td/><td/></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "dep that -from a syntactic point of view -can be modified by Tn. The initial SP for Tn is given by the path in [T~nt]d~p connecting Ta-1 T1] dev, [Tb] dev) [ T1] l] .increased-by=[Ta]t/.head",
"type_str": "table",
"content": "<table><tr><td>PATTERN</td><td>TEST</td><td>ACTION</td></tr><tr><td>TI</td><td>[T1] t/.head=TO-INCREASE</td><td>Dependant ([</td></tr><tr><td>T2*</td><td>[T2]/eat.cat =Adjunct</td><td/></tr><tr><td>T3</td><td>[T3]i/.head=PROFIT</td><td/></tr><tr><td/><td>[T3]/oor.cat=VV</td><td/></tr><tr><td/><td>[Ta]/~,~t.marked=' 'of\"</td><td/></tr><tr><td>T4*</td><td>[T4]Ieat.Cat :Adjunct</td><td/></tr><tr><td>T5</td><td>[Tb] I/.head:PERCENTAGE</td><td/></tr><tr><td/><td>[Tb]/eat.cat=PP</td><td/></tr><tr><td/><td>[Ts]feat.marked: \" of/by''</td><td/></tr></table>"
}
}
}
}