{
"paper_id": "W96-0211",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:59:33.790578Z"
},
"title": "Automating Feature Set Selection for Case-Based Learning of Linguistic Knowledge",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853-7501",
"region": "NY"
}
},
"email": "cardie@cs.cornell.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper addresses the issue of \"algorithm vs. representation\" for case-based learning of linguistic knowledge. We first present empirical evidence that the success of case-based learning methods for natural language processing tasks depends to a large degree on the feature set used to describe the training instances. Next, we present a technique for automating feature set selection for case-based learning of linguistic knowledge. Given as input a baseline case representation, the method modifies the representation in response to a number of predefined linguistic biases by adding, deleting, and weighting features appropriately. We apply the linguistic bias approach to feature set selection to the problem of relative pronoun disambiguation and show that the case-based learning algorithm improves as relevant biases are incorporated into the underlying instance representation. Finally, we argue that the linguistic bias approach to feature set selection offers new possibilities for case-based learning of natural language: it simplifies the process of instance representation design and, in theory, obviates the need for separate instance representations for each linguistic knowledge acquisition task. More importantly, the approach offers a mechanism for explicitly combining the frequency information available from corpus-based techniques with linguistic bias information employed in traditional linguistic and knowledge-based approaches to natural language processing.",
"pdf_parse": {
"paper_id": "W96-0211",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper addresses the issue of \"algorithm vs. representation\" for case-based learning of linguistic knowledge. We first present empirical evidence that the success of case-based learning methods for natural language processing tasks depends to a large degree on the feature set used to describe the training instances. Next, we present a technique for automating feature set selection for case-based learning of linguistic knowledge. Given as input a baseline case representation, the method modifies the representation in response to a number of predefined linguistic biases by adding, deleting, and weighting features appropriately. We apply the linguistic bias approach to feature set selection to the problem of relative pronoun disambiguation and show that the case-based learning algorithm improves as relevant biases are incorporated into the underlying instance representation. Finally, we argue that the linguistic bias approach to feature set selection offers new possibilities for case-based learning of natural language: it simplifies the process of instance representation design and, in theory, obviates the need for separate instance representations for each linguistic knowledge acquisition task. More importantly, the approach offers a mechanism for explicitly combining the frequency information available from corpus-based techniques with linguistic bias information employed in traditional linguistic and knowledge-based approaches to natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Standard symbolic machine learning techniques have been successfully applied to a number of tasks in natural language processing (NLP). Examples include the use of decision trees for syntactic analysis (Magerman, 1995), coreference (Aone and Bennett, 1995; McCarthy and Lehnert, 1995), and cue phrase identification (Litman, 1994); the use of inductive logic programming for learning semantic grammars and building Prolog parsers (Zelle and Mooney, 1994; Zelle and Mooney, 1993); the use of conceptual clustering algorithms for relative pronoun resolution (Cardie, 1992a; Cardie, 1992b); and the use of case-based learning techniques for lexical tagging tasks (Cardie, 1993a; Daelemans et al., submitted). In theory, both statistical and machine learning techniques can significantly reduce the knowledge-engineering effort for building large-scale NLP systems: they offer an automatic means for acquiring robust heuristics for a host of lexical and structural disambiguation tasks. It is well-known in the machine learning community, however, that the success of a learning algorithm depends critically on the representation used to describe the training and test instances (Almuallim and Dietterich, 1991; Langley and Sage, in press). Unfortunately, the task of designing an appropriate instance representation --also known as feature set selection --can be extraordinarily difficult, time-consuming, and knowledge-intensive (Quinlan, 1983). This poses a problem for current statistical and machine learning approaches to natural language understanding where a new instance representation is typically required for each linguistic task tackled.",
"cite_spans": [
{
"start": 202,
"end": 218,
"text": "(Magerman, 1995)",
"ref_id": "BIBREF13"
},
{
"start": 233,
"end": 257,
"text": "(Aone and Bennett, 1995;",
"ref_id": null
},
{
"start": 258,
"end": 285,
"text": "McCarthy and Lehnert, 1995)",
"ref_id": "BIBREF13"
},
{
"start": 318,
"end": 332,
"text": "(Litman, 1994)",
"ref_id": null
},
{
"start": 433,
"end": 457,
"text": "(Zelle and Mooney, 1994;",
"ref_id": null
},
{
"start": 458,
"end": 481,
"text": "Zelle and Mooney, 1993)",
"ref_id": null
},
{
"start": 560,
"end": 575,
"text": "(Cardie, 1992a;",
"ref_id": null
},
{
"start": 576,
"end": 589,
"text": "Cardie 1992b)",
"ref_id": null
},
{
"start": 664,
"end": 679,
"text": "(Cardie, 1993a;",
"ref_id": null
},
{
"start": 680,
"end": 708,
"text": "Daelemans et al., submitted)",
"ref_id": "BIBREF4"
},
{
"start": 1180,
"end": 1239,
"text": "(Almuallim and Dietterich, 1991, Langley and Sage, in press",
"ref_id": null
},
{
"start": 1433,
"end": 1448,
"text": "(Quinlan, 1983)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This paper addresses the role of the underlying instance representation for one class of symbolic machine learning algorithm as applied to natural language understanding tasks, that of case-based learning (CBL). In general, case-based learning algorithms (e.g., instance-based learning (Aha et al., 1991), case-based reasoning (Riesbeck and Schank, 1989; Kolodner, 1993), memory-based reasoning (Stanfill and Waltz, 1986)) solve problems by first creating a case base of previous problem-solving episodes. Then, when a new problem is encountered, the \"most similar\" case is retrieved from the case base and used to solve the novel problem. The retrieved case can either be used directly or after one or more modifications to adapt it to the current problem-solving situation. Case-based learning algorithms have been used in NLP for context-sensitive parsing (Simmons and Yu, 1992), for text categorization (Riloff and Lehnert, 1994); for lexical tagging tasks like part-of-speech tagging and semantic feature tagging (Daelemans et al., submitted; Cardie, 1994; Cardie, 1993a); for semantic interpretation (e.g., concept extraction (Cardie, 1994; Cardie, 1993a)); and for a number of low-level language acquisition tasks, including stress acquisition (Daelemans et al., 1994) and grapheme-to-phoneme conversion (Bosch and Daelemans, 1993). In the sections below, we first present empirical evidence that the success of case-based learning methods for natural language processing tasks depends to a large degree on the feature set used to describe the training instances. Next, we present a technique for automating feature set selection for case-based learning of linguistic knowledge. Given as input a baseline instance representation comprised of both relevant and irrelevant attributes, the method modifies the representation in response to any of a number of predefined linguistic biases.
More specifically, the technique uses linguistic biases to discard irrelevant features from the representation, to add new features to the representation, and to weight features appropriately. We then apply the linguistic bias approach to feature set selection in one natural language learning task --the relative pronoun (RP) disambiguation task from Cardie (1992a, 1992b). Experiments indicate that the case-based learning algorithm improves on the relative pronoun task as relevant biases are incorporated into the underlying instance representation. Furthermore, using the modified instance representation, the case-based learning algorithm is able to outperform a set of hand-coded heuristics designed for the same task.",
"cite_spans": [
{
"start": 205,
"end": 210,
"text": "(CBL)",
"ref_id": null
},
{
"start": 286,
"end": 304,
"text": "(Aha et al., 1991)",
"ref_id": "BIBREF0"
},
{
"start": 328,
"end": 341,
"text": "(Riesbeck and",
"ref_id": null
},
{
"start": 342,
"end": 371,
"text": "Schank, 1989, Kolodner, 1993)",
"ref_id": null
},
{
"start": 397,
"end": 423,
"text": "(Stanfill and Waltz, 1986)",
"ref_id": null
},
{
"start": 859,
"end": 881,
"text": "(Simmons and Yu, 1992)",
"ref_id": null
},
{
"start": 909,
"end": 934,
"text": "(Riloffand Lehnert, 1994)",
"ref_id": null
},
{
"start": 1020,
"end": 1062,
"text": "(Daelemans et al., submitted, Cardie, 1994",
"ref_id": null
},
{
"start": 1063,
"end": 1078,
"text": ", Cardie, 1993a",
"ref_id": null
},
{
"start": 1135,
"end": 1148,
"text": "(Cardie, 1994",
"ref_id": null
},
{
"start": 1149,
"end": 1164,
"text": ", Cardie, 1993a",
"ref_id": null
},
{
"start": 1255,
"end": 1279,
"text": "(Daelemans et al., 1994)",
"ref_id": "BIBREF3"
},
{
"start": 1325,
"end": 1341,
"text": "Daelemans, 1993)",
"ref_id": null
},
{
"start": 2249,
"end": 2262,
"text": "Cardie (1992a",
"ref_id": null
},
{
"start": 2263,
"end": 2279,
"text": "Cardie ( , 1992b",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Finally, we argue that the linguistic bias approach to feature set selection offers new possibilities for case-based learning of natural language:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "\u2022 It provides a natural mechanism for combining the frequency information available from corpus-based NLP techniques with linguistic bias information employed in traditional linguistic and knowledge-based approaches to language processing. The development of computational models of language processing that combine frequencies and linguistic biases has been identified by Pereira (1994) as an important area of research in corpus-based NLP.",
"cite_spans": [
{
"start": 376,
"end": 391,
"text": "(Pereira, 1994)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "\u2022 The linguistic bias approach to feature set selection simplifies and shortens the process of designing an appropriate instance representation for individual natural language learning tasks. System developers can safely include features for all available knowledge sources in the baseline instance representation --the irrelevant ones will be discarded automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "\u2022 By adopting the automated approach to feature set selection for CBL of linguistic knowledge, the same underlying instance representation can, in theory, be used across many linguistic knowledge acquisition tasks. A separate instance representation need not be designed each time we want to apply the learning algorithm to a new problem in natural language understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The remainder of the paper is organized as follows. The section below describes the basic case-based learning algorithm used throughout the paper. The following section examines the role of the underlying instance representation in case-based learning of natural language by comparing the accuracy of the CBL algorithm on a number of natural language learning tasks using different instance representations. Next, we present the linguistic bias approach to feature set selection and apply the technique to the relative pronoun disambiguation task. We conclude with a discussion of the general implications of the linguistic bias approach to feature set selection for case-based learning of natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Throughout the paper, we employ a simple knearest neighbor case-based learning algorithm. In addition, we assume that the learning algorithm is embedded in a parser or larger NLP system and, hence, has access to all knowledge sources that are available to the NLP system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
{
"text": "In case-based approaches to natural language understanding, the goal of the training phase is to collect a set of cases that describe ambiguity resolution episodes for a particular problem in text analysis. To do this, a small set of sentences is first selected randomly from an annotated training corpus. Next, the sentence analyzer processes the training sentences and creates a training case every time an instance of the ambiguity occurs. To learn heuristics for prepositional phrase attachment, for example, the parser would create a case whenever it recognizes a prepositional phrase. Each case is a set of features, or attribute-value pairs, that encode the context in which the ambiguity was encountered. In general, the context features represent the state of the parser at the point of the ambiguity. In addition, each case is annotated with one or more pieces of \"class\" information that describe how the ambiguity was resolved in the current example. We will refer to these as solution features. For lexical tagging tasks, for example, the class information is the syntactic or semantic category associated with the current word; for structural attachment decisions, the class information indicates the position of the preferred attachment point. As cases are created, they are stored in a case base.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
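The case structure just described (context features as attribute-value pairs, plus optional solution features) can be sketched in Python. This is an illustrative sketch only, not the original CIRCUS implementation; the function name and feature names are hypothetical:

```python
# Illustrative sketch of a case: context features are attribute-value
# pairs capturing parser state at the ambiguity; the solution field
# holds the "class" information for how the ambiguity was resolved.
def make_case(context, solution=None):
    """Build a case from context features and an optional solution."""
    return {"context": dict(context), "solution": solution}

# A (hypothetical) lexical tagging training case:
training_case = make_case(
    context={"prev-word-pos": "determiner", "word-morphology": "ed-ending"},
    solution="past-participle",
)

# A problem case is identical except that the solution part is missing.
problem_case = make_case(
    context={"prev-word-pos": "determiner", "word-morphology": "ed-ending"},
)
```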
{
"text": "After training, the system can use the case base to resolve ambiguities in novel sentences. Whenever the sentence analyzer encounters an ambiguity, it creates a problem case, automatically filling in its context portion based on the state of the natural language system at the point of the ambiguity. The structure of a problem case is identical to that of a training case except that the solution part of the case is missing. Next, the case retrieval algorithm compares the problem case to those stored in the case base, finds the most similar training case, and then uses the class information to resolve the current ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
{
"text": "The experiments described below employ the following case retrieval algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
{
"text": "1. Compare the problem case, X, to each case, Y, in the case base and calculate for each pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
{
"text": "\\sum_{i=1}^{|N|} match(X_{N_i}, Y_{N_i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
{
"text": "where N is the set of features used to describe all instances, N_i is the ith feature in the ordered set, X_{N_i} is the value of N_i in the problem case, Y_{N_i} is the value of N_i in the training case, and match(a, b) is a function that returns 1 if a and b are equal and 0 otherwise. 2. Return the k highest-scoring cases plus any ties. 3. Let the retrieved cases vote on the predicted class (solution) value and use that value to resolve the ambiguity for X. We use a simple majority vote and break ties randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
{
"text": "The case retrieval algorithm is essentially a simple k-nearest neighbors algorithm, with minor modifications to handle symbolic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Case-Based Learning Algorithm",
"sec_num": null
},
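Taken together, the retrieval steps above amount to an overlap-metric k-nearest-neighbor classifier. A minimal Python sketch follows; the case layout and function names are assumptions for illustration, not the original system:

```python
import random
from collections import Counter

def similarity(problem, case, features):
    """Overlap metric: number of features on which the two cases agree."""
    return sum(1 for f in features if problem.get(f) == case.get(f))

def retrieve_and_vote(problem, case_base, features, k=10):
    """Retrieve the k highest-scoring cases plus any ties, then take a
    simple majority vote on the solution, breaking ties randomly."""
    scored = sorted(case_base,
                    key=lambda c: similarity(problem, c["context"], features),
                    reverse=True)
    # Any case tied with the k-th best score is also retrieved.
    kth = scored[min(k, len(scored)) - 1]
    cutoff = similarity(problem, kth["context"], features)
    retrieved = [c for c in scored
                 if similarity(problem, c["context"], features) >= cutoff]
    votes = Counter(c["solution"] for c in retrieved)
    top = max(votes.values())
    return random.choice([cls for cls, n in votes.items() if n == top])
```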
{
"text": "This section explores the role of the instance representation in case-based learning of natural language. In particular, it should be clear that the basic case-based learning algorithm will perform poorly when cases contain many irrelevant attributes (Aha et al., 1991; Aha, 1989). Unfortunately, deciding which features are important for a particular learning task is difficult, especially when interactions among potentially relevant features are unpredictable.",
"cite_spans": [
{
"start": 250,
"end": 267,
"text": "(Aha et al., 1991",
"ref_id": "BIBREF0"
},
{
"start": 268,
"end": 279,
"text": ", Aha, 1989",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Representation in Case-Based Learning of Linguistic Knowledge",
"sec_num": null
},
{
"text": "In previous work (Cardie, 1994), for example, we applied the above case-based learning algorithm to a number of problems in sentence analysis both with and without mechanisms for feature set selection. Table 1 summarizes our results for simultaneous part-of-speech and semantic class (i.e., word sense) tagging. 1 Details regarding the experiments are included as part of Table 1. It shows that tagging accuracy increases significantly when access to the available feature set is appropriately limited. More specifically, each tagging decision is initially described in the case representation in terms of 33 features: 22 local context features encode syntactic and semantic information for the words within a five-word window centered on the current word; 11 global context features encode information for any major syntactic constituents that have been recognized (e.g., semantic class and concept activation information for the subject, direct object, verb). The general idea behind the representation of context is to include any information available to the parser that might be useful for inferring the part of speech and semantic features of the current word. Results for the CBL algorithm using all 33 features are shown in the column labeled \"w/o feature selection.\" Intuitively, however, it seems that very different subsets of the feature set may be useful for part-of-speech prediction and semantic class prediction. Not surprisingly, the accuracy of the CBL algorithm increases when task-specific subsets of the original feature set are used instead of all of the available features (see the last column of Table 1).",
"cite_spans": [
{
"start": 17,
"end": 31,
"text": "(Cardie, 1994)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 207,
"end": 214,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 377,
"end": 384,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1626,
"end": 1633,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The Role of Representation in Case-Based Learning of Linguistic Knowledge",
"sec_num": null
},
{
"text": "The task-specific subsets for the lexical tagging experiments of Table 1 were obtained automatically using the C4.5 decision tree algorithm (Quinlan, 1992) as described in Cardie (1993b). Very briefly, in addition to storing training cases in the case base, we use them to train a decision tree for each of the selected lexical tasks. Features that appear in the pruned decision tree are assumed to be relevant to the task; features that are missing from the tree are assumed to be unnecessary for the task. The feature sets proposed by C4.5 reduce the number of attributes used in the case retrieval algorithm from 33 to an average of 14, 11, and 15 features for part-of-speech, general semantic class, and specific semantic class tagging, respectively. 2 In addition, this automated approach to feature selection outperforms feature sets chosen by hand (Cardie, 1993b): the automated approach locates features that human experts consider mildly relevant to the task at best, but that, in practice, provide statistically reliable cues for the prediction (Footnote 1: Word senses were represented in terms of a two-level domain-specific semantic feature hierarchy.)",
"cite_spans": [
{
"start": 140,
"end": 155,
"text": "(Quinlan, 1992)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The Role of Representation in Case-Based Learning of Linguistic Knowledge",
"sec_num": null
},
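The filtering idea above, keeping only the features that survive into a pruned decision tree, can be sketched without C4.5 itself. The following ID3-style stand-in is an assumption for illustration (the original work used C4.5 with its standard pruning): it collects the features a tree would actually test and treats the rest as irrelevant, with a minimum-gain threshold as a crude stand-in for pruning.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, f):
    """Information gain of splitting on feature f."""
    n = len(labels)
    remainder = 0.0
    for v in set(r[f] for r in rows):
        sub = [lab for r, lab in zip(rows, labels) if r[f] == v]
        remainder += len(sub) / n * entropy(sub)
    return entropy(labels) - remainder

def used_features(rows, labels, features, min_gain=0.1):
    """Features an ID3-style tree would test; the rest are deemed
    irrelevant, mirroring the C4.5-based selection described above."""
    if len(set(labels)) <= 1 or not features:
        return set()
    gains = {f: info_gain(rows, labels, f) for f in features}
    best = max(gains, key=gains.get)
    if gains[best] < min_gain:  # crude stand-in for pruning
        return set()
    used = {best}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        used |= used_features([rows[i] for i in idx],
                              [labels[i] for i in idx],
                              [f for f in features if f != best], min_gain)
    return used
```

Features that never appear in the (pruned) tree are simply dropped from the case representation before retrieval.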
{
"text": "2 A more sophisticated variation of this approach has been used by Daelemans et al. (1993) to provide weights on features rather than to eliminate features. It is able to improve on our semantic feature tagging results by a few percentage points. (MUC-5, 1994). A relatively small corpus was used because the domain-specific semantic class tags and the tags for another lexical tagging task (not described here) were not available as part of any existing annotated corpus and had to be provided manually. The results presented are 10-fold cross validation averages using the same breakdown of training/test set cases for each experiment. The parser used to generate training and test cases was the CIRCUS system (Cardie and Lehnert, 1991; Lehnert, 1990). The case retrieval algorithm was modified slightly to prefer cases among the top k = 10 cases that match the current word. A more detailed description of the experiments and an analysis of the results can be found in Cardie (1993a, 1994).",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "Daelemans et M. (1993)",
"ref_id": null
},
{
"start": 245,
"end": 258,
"text": "(MUC-5, 1994)",
"ref_id": null
},
{
"start": 711,
"end": 737,
"text": "(Cardie and Lehnert, 1991;",
"ref_id": null
},
{
"start": 738,
"end": 752,
"text": "Lehnert, 1990)",
"ref_id": "BIBREF13"
},
{
"start": 972,
"end": 984,
"text": "Cardie(1993a",
"ref_id": null
},
{
"start": 985,
"end": 999,
"text": "Cardie( , 1994",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Representation in Case-Based Learning of Linguistic Knowledge",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[illegible equation/figure residue in source PDF]",
"eq_num": ""
}
],
"section": "The Role of Representation in Case-Based Learning of Linguistic Knowledge",
"sec_num": null
},
{
"text": "(Figure 1 caption: prev and fol refer to the preceding and following lexical items; gen-sem and spec-sem refer to general and specific semantic class values; cn refers to concept/case-frame activation; morphol refers to the morphology of the word to be tagged; s, do, v, and last-constit refer to the subject, direct object, verb, and last low-level constituent (i.e., noun phrase, verb, prepositional phrase), respectively.) task. Among the features deemed most important for part-of-speech tagging, for example, were the general semantic class of the two preceding words, the general semantic class of the following word, and the semantic class of the subject of the current clause. This is in addition to more obviously relevant features: e.g., the morphology of the current word, the part of speech of the preceding and following words. A histogram of the relevant features for part-of-speech tagging across the 10 folds of the cross-validation experiments is shown in Figure 1. Based on the experiments described in this section, we can conclude that the overall accuracy of case-based learning of linguistic knowledge depends to a large degree on the feature set used in the case representation. Moreover, automatic approaches to feature set selection can outperform feature sets chosen manually by taking advantage of statistical relationships in the data that are difficult for humans to predict and that may be idiosyncrasies of the task and data set at hand.",
"cite_spans": [],
"ref_spans": [
{
"start": 982,
"end": 990,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Role of Representation in Case-Based Learning of Linguistic Knowledge",
"sec_num": null
},
{
"text": "We saw in the last section that the performance of case-based learning algorithms degrades when features irrelevant to the learning task are included in the underlying instance representation. As a result, the basic CBL algorithm for lexical tagging tasks was augmented with a decision tree algorithm whose job it was to discard irrelevant features from the case representation. This section presents a new technique for feature set selection for case-based learning of natural language. The new approach is potentially more powerful than the decision tree method in that it can improve a baseline case representation in three ways rather than one:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "1. It discards irrelevant features from the representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "2. It determines the relative importance of relevant features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "3. It has a limited capability for adding new features when the existing ones are inadequate for the learning task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "Furthermore, the algorithm relies on an inductive bias that may be more appropriate to problems in natural language understanding than the information gain metric used in the C4.5 decision tree system: our linguistic bias approach to feature set selection automatically and explicitly encodes any of a predefined set of linguistic biases and cognitive processing limitations into a baseline instance representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "Thus far, we have incorporated three such biases into the feature set selection algorithm: (1) a recency bias, (2) a restricted memory bias, and (3) a subject accessibility bias. Modifications to the instance representation in response to these biases either directly or indirectly change the feature set used to describe all instances. Direct changes to the representation are made by adding or deleting features; indirect changes modify a weight associated with each feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "In the paragraphs below, we describe these biases and show how they can be used to modify the case representation for the task of relative pronoun (RP) disambiguation. The goal of the learning algorithm for relative pronoun disambiguation is: (1) to determine whether the wh-word is being used as a relative pronoun, and, if it is, (2) to determine which constituents comprise the antecedent. In the sentence \"I saw the boy who won the contest,\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "for example, the CBL system must decide that \"who\" is a relative pronoun that refers to \"the boy.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "The baseline instance representation for the relative pronoun task is similar to the one used for the lexical tagging tasks. The main difference is that additional global context features are included in the case representation --namely, the parser includes one attribute-value pair for every constituent in the clause that precedes the relative pronoun. Figure 2 shows a portion of three relative pronoun disambiguation cases using the baseline case representation. Each constituent is described in terms of its syntactic class and its position in the sentence as it was encountered by the CIRCUS parser. The value for each feature provides the phrase's semantic class. The class information assigned to each case describes the location of the correct antecedent. Note that no attachment decisions have been made by the parser; these will be made by the learning algorithm as needed. In our current implementation, the learning algorithm, rather than the parser, is also responsible for interpreting any conjunctions and appositives that are part of the antecedent as shown in sentences S2 and S3 of Figure 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 363,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1098,
"end": 1106,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "The case representation for the RP task creates a minor problem for the CBL algorithm: no two instances are guaranteed to have the same features. Sentences that exhibit a direct object, for example, will have a \"direct object\" feature; sentences that have no direct object will contain no \"direct object\" feature. As a result, we require that all instances be described in terms of a normalized set of features. To do this, the algorithm keeps track of every attribute that occurs in the training instances and pads each case with an explicit \"missing\" value for any feature that does not apply to the particular instance. Unfortunately, this means that most of the features in a normalized case will be one of these \"missing features.\" To ensure that the case retrieval algorithm focuses on features that are present rather than missing from the problem case, we also modify the original case retrieval algorithm to award full credit for matches on features present in the problem case and to allow partial credit for matches on missing features. This is accomplished by associating with each feature a weight that indicates the importance of the feature in determining case similarity and by using a reduced weight for missing features. 2. Compare the problem case, P, to each training case, T, in the case base and calculate, for each pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
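The normalization step described above can be sketched as follows: collect every attribute seen across the training instances, then pad each case with an explicit value for absent attributes. The sentinel value and function names are illustrative assumptions:

```python
MISSING = "*missing*"  # sentinel for absent attributes (illustrative name)

def normalized_feature_set(training_contexts):
    """Union of every attribute that occurs in any training instance."""
    features = set()
    for ctx in training_contexts:
        features.update(ctx)  # iterating a dict yields its keys
    return sorted(features)

def normalize(ctx, features):
    """Pad a case's context with an explicit value for absent attributes."""
    return {f: ctx.get(f, MISSING) for f in features}
```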
{
"text": "\\sum_{i=1}^{|N|} w_{N_i} \\cdot match(P_{N_i}, T_{N_i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "where N is the normalized feature set, Ni is the ith feature in N, PN~ is the value of Ni in the problem case, TN~ is the value of Ni in the training case, and match(a, b) is a function that returns 1 if a and b are equal and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "3. Return the case with the highest score as well as all ties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "4. Let the retrieved cases vote on the value of the antecedent. Again, we use a simple majority vote and break ties randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
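{
"text": "Taken together, the weighted similarity score and retrieval steps 2-4 above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature names, the case layout, and the partial-credit weight for missing features (the paper tested several values) are all assumptions.

```python
import random
from collections import Counter

MISSING = '<missing>'   # filler value added by feature-set normalization
MISSING_WEIGHT = 0.2    # partial credit for matches on missing features (assumed value)

def score(problem, case, weights):
    # Weighted match count: full credit (the feature's weight) for a match on
    # a feature present in the problem case, partial credit for a match on a
    # feature that is missing from it.
    total = 0.0
    for f, w in weights.items():
        p, t = problem.get(f, MISSING), case.get(f, MISSING)
        if p == t:
            total += w if p != MISSING else MISSING_WEIGHT
    return total

def retrieve_and_vote(problem, training, weights):
    # Steps 2-4: score every training case, retrieve the highest-scoring
    # case(s) (ties included), then let the retrieved cases take a simple
    # majority vote on the antecedent, breaking vote ties randomly.
    scored = [(score(problem, c['features'], weights), c) for c in training]
    best = max(s for s, _ in scored)
    winners = [c for s, c in scored if s == best]
    votes = Counter(c['antecedent'] for c in winners)
    top = max(votes.values())
    return random.choice([a for a, n in votes.items() if n == top])
```

In the baseline representation every weight is one; the bias experiments below only change the contents of the weights dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},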
{
"text": "3A number of other values for the missing features weight were tested as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic and Cognitive Biases for Feature Set Selection",
"sec_num": null
},
{
"text": "Results using this 1-nearest neighbor CBL algorithm for relative pronoun disambiguation using the baseline case representation are shown in Table 2 . For these experiments, we drew training and test cases (241 instances) from MUC-3 texts that describe Latin American terrorist events (Chinchor et al., 1993) . As above, all results are 10-fold cross validation averages and the parser used to generate training and test cases was the CIRCUS system. The performance of the CBL algorithm is compared to that of: (1) a default rule that always chooses the most recent phrase as the antecedent, and (2) a set of hand-coded heuristics developed for the same task specifically for use in the terrorism domain. Chi-square significance tests indicate: (1) that the hand-coded heuristics perform better (at the 95% level) than the default rule and (2) that the CBL system is not significantly different from either the default rule or the hand-coded heuristics. In the sections below, we describe the recency bias, the restricted memory bias, and the subject accessibility bias in turn. We show how each bias can be used to automatically modify the baseline case representation and measure the effects of those modifications on the learning algorithm's ability to predict relative pronoun antecedents. Experiments will show that the changes in representation engender a 21.7% increase in accuracy, raising the performance of the CBL algo-rithm from 69.2% correct to 84.2%. In all experiments below, the same ten training and test set combinations as in the baseline experiments of Table 2 will be used. This procedure ensures that differences in performance are not attributable to the random partitions chosen for the test set.",
"cite_spans": [
{
"start": 284,
"end": 307,
"text": "(Chinchor et al., 1993)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1572,
"end": 1579,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "118",
"sec_num": null
},
{
"text": "In processing language, people consistently show a bias towards the use of the most recent information (e.g., Ffazier and Fodor (1978) , Gibson (1990) , Kimball: (1973) , Nicol (1988) ). In particular, Cuetos and Mitchell (1988) , Frazier and Fodor (1978) , and others have investigated the importance of recency in finding the antecedents of relative pronouns. They found that there is a preference for choosing the most recent noun phrase in sentences of the form NP V NP OF-PP, with ambiguous relative pronoun antecedents, e.g.:",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "Ffazier and Fodor (1978)",
"ref_id": null
},
{
"start": 137,
"end": 150,
"text": "Gibson (1990)",
"ref_id": "BIBREF10"
},
{
"start": 153,
"end": 168,
"text": "Kimball: (1973)",
"ref_id": "BIBREF11"
},
{
"start": 171,
"end": 183,
"text": "Nicol (1988)",
"ref_id": "BIBREF16"
},
{
"start": 202,
"end": 228,
"text": "Cuetos and Mitchell (1988)",
"ref_id": null
},
{
"start": 231,
"end": 255,
"text": "Frazier and Fodor (1978)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "The journalist interviewed the daughter of the colonel who had had the accident.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "In addition, Gibson et al. (1993) looked at phrases of the form: NP1 PREP NP2 OF NP3 RELATIVE-CLAUSE,. E.g.,",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "Gibson et al. (1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "..the lamps near the paintings of the house that was damaged in the flood. ...the lamps near the painting of the houses that was damaged in the flood. ...the lamp near the paintings of the houses that was damaged in the flood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "He found that the most recent noun phrase (NP3) was initially preferred as the antecedent and that recognizing antecedents in the NP2 and NP1 positions were significantly harder than recognizing the most recent noun phrase as the antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "We translate this recency bias into representational changes for the training and problem cases in two ways. The first is a direct modification to the attributes that comprise the case representation, and the second modifies the weights to indicate a constituent's distance from the relative pronoun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "In the first approach, we label the each constituent feature by its position relative to the relative pronoun. This establishes a right-to-left labeling of constituents rather than the left-to-right labeling that the baseline representation incorporates. In Figure 3 , for example, \"in Congress\" receives the attribute ppl in the right-to-left labeling because it is a prepositional phrase one position to the left of \"who.\" Similarly, \"the hardliners\" receives the attribute np2 because it is a noun phrase two positions to the left of \"who.\" The right-to-left ordering yields a different feature set and, hence, a different case representation. For ex-ample, the right-to-left labeling assigns the same antecedent value (i.e., ppP) to both of the following sentences:",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "\u2022 \"it was a message from the hardliners in Congress, who...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "\u2022 \"it was from the hardliners in Congress, who...\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "The baseline (left-to-right) representation, on the other hand, labels the antecedents with distinct attributes --do-ppl and v-ppl, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "In the second approach to incorporating the recency bias, we increment the weight associated with each constituent as a function of its proximity to the relative pronoun (see Table 3 ). The feature associated with the constituent farthest from the relative pronoun receives a weight of one, and the weights are increased by one for each subsequent constituent. All features added to the case as a result of feature normalization (not shown in Table 3) receive a weight of one. ",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
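{
"text": "A minimal sketch of these two recency encodings, assuming constituents arrive as (syntactic-class, semantic-class) pairs in sentence order up to the relative pronoun (the pair format and the attribute-naming scheme are assumptions, not the paper's implementation):

```python
def right_to_left_features(constituents):
    # Label each constituent by its distance to the left of the relative
    # pronoun: the nearest pp becomes 'pp1', an np two positions away 'np2'.
    return {'%s%d' % (syn, dist): sem
            for dist, (syn, sem) in enumerate(reversed(constituents), 1)}

def recency_weights(constituents):
    # Recency weighting: the constituent farthest from the relative pronoun
    # gets weight one; weights increase by one toward the pronoun.
    n = len(constituents)
    return {'%s%d' % (syn, dist): n - dist + 1
            for dist, (syn, _) in enumerate(reversed(constituents), 1)}
```

Features added later by normalization would simply receive weight one, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},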
{
"text": "v 1 do 1 do-ppl 1 Re- cency weight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "The results of experiments that use each of the recency representations separately and in a combined form are shown in Table 4 . To combine the two implementations of the recency bias, we first relabel the attributes of a case using the right-toleft labeling and then initialize the weight vector using the recency weighting procedure described above. The table shows that the recency weighting representation alone tends to degrade prediction of relative pronoun antecedents as compared to the baseline CBL system. Both the right-to-left labeling and combined representations improve performance --they perform significantly better than the default heuristic, but do not yet exceed the level of the hand-coded heuristics. The final row of results will be described below.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "As shown in Table 4 , the combined recency bias outperforms the right-to-left labeling despite the fact that the recency weighting tends to lower the accuracy of relative pronoun antecedent prediction when used alone. The right-to-left labeling appears to provide a representation of the local context of the relative pronoun that is critical for finding antecedents. The disappointing perfor-",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Incorporating the Recency Bias",
"sec_num": null
},
{
"text": "baseline representation: mance of the recency weighting representation, on the other hand, may be caused by (1) its lack of such a representation of local context, and (2) its bias against antecedents that are distant from the relative pronoun (e.g., \"...to help especially those people living in the Patagonia region of Argentina, who are being treated inhumanely...\"). Nineteen of the 241 cases have antecedents that include the often distant subject of the preceding clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence:",
"sec_num": null
},
{
"text": "Furthermore, the recency bias performs well in spite of the fact that the baseline representation already provides a built-in recency bias. The baseline represents the constituent that precedes the relative pronoun up to three times in the baseline representation --as a constituent feature (e.g., \"direct object\") and via the \"last constituent\" global context features. 4 The last row in Table 4 shows the performance of the baseline representation when this built-in bias is removed by discarding the last-constituent features.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Sentence:",
"sec_num": null
},
{
"text": "Psychological studies have determined that people can remember at most seven plus or minus two items at any one time (Miller, 1956) . More recently, Carpenter (1983, 1980) show that working memory capacity affects a subject's ability to find the referents of pronouns over vary-4This means that when the constituent immediately preceding \"who\" in the problem case and a training case match, that constituent accounts for a greater percentage of the similarity score than does any other constituent.",
"cite_spans": [
{
"start": 117,
"end": 131,
"text": "(Miller, 1956)",
"ref_id": "BIBREF14"
},
{
"start": 149,
"end": 171,
"text": "Carpenter (1983, 1980)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Restricted Memory Bias",
"sec_num": null
},
{
"text": "ing distances. King and Just (1991) show that differences in working memory capacity can cause differences in the reading time and comprehension of certain classes of relative clauses. Moreover, it has been hypothesized that language learning in humans is successful precisely because limits on information processing capacities allow children to ignore much of the linguistic data they receive (Newport, 1990) . Some computational language learning systems (e.g., Elman (1990) ) actually build a short term memory directly into the architecture of the system. Our baseline case representation does not necessarily make use of this restricted memory bias, however. Each case is described in terms of the normalized feature set, which contains an average of 38.8 features. Unfortunately, incorporating the restricted memory limitations into the case representation is problematic. Previous restricted memory studies (e.g., short term memory studies) do not state explicitly what the memory limit should be --it varies from five to nine depending on the cognitive task and depending on the size and type of the \"chunks\" that have to be remembered. In addition, the restricted memory bias alone does not state which chunks, or features, to keep and which to discard.",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "King and Just (1991)",
"ref_id": null
},
{
"start": 395,
"end": 410,
"text": "(Newport, 1990)",
"ref_id": null
},
{
"start": 465,
"end": 477,
"text": "Elman (1990)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Restricted Memory Bias",
"sec_num": null
},
{
"text": "To apply the restricted memory bias to the baseline case representation, we let n represent the memory limit and, in each of five runs, set n to one of five, six, seven, eight, or nine. Then, for each test case, the system randomly chooses n features from the normalized feature set, sets the weights associated with those features to one, and sets the remaining weights to zero. This effectively dis- Table 5 . The first column of results shows the effect of memory limitations on the baseline representation. In general, the restricted memory bias with random feature selection degrades the ability of the system to predict relative pronoun antecedents although none of the changes is statistically significant. This is not surprising given that the current implementation of the bias is likely to discard relevant features as well as irrelevant features. We expect that this bias will have a positive impact on performance only when it is combined with linguistic biases that provide feature relevancy information. This is, in fact, the case: the final column in Table 5 shows the effect of restricted memory limitations on the combined recency representation. To incorporate the restricted memory bias and the combined recency bias into the baseline case representation, we (1) apply the right-to-left labeling, (2) rank the features of the case according to the recency weighting, and (3) keep the n features with the highest weights (where n is the memory limit). Ties are broken randomly. We expected: the merged representation to perform rather well because the combined recency bias representation worked well on its own and because the restricted memory (RM) bias essentially discards features that are distant from the relative pronoun and rarely included in the antecedent. As shown in the last column of Table 5 , four out of five RM/recency variations posted higher accuracies than the combined recency representation. 
In fact, three of the RM/recency representations now outperform the original baseline representation (shown in boldface) at the 95% significance level. (Until this point, the best representation had been the combined recency representation, which significantly outperformed the default heuristic, but not the baseline case representation.)",
"cite_spans": [],
"ref_spans": [
{
"start": 402,
"end": 409,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1066,
"end": 1073,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1817,
"end": 1824,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Incorporating the Restricted Memory Bias",
"sec_num": null
},
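{
"text": "The random variant of the restricted memory bias described above can be sketched as follows (an illustrative reconstruction; the helper name is an assumption):

```python
import random

def random_memory_restriction(feature_names, n):
    # Restricted memory with random feature selection: keep n randomly
    # chosen features at weight one and zero out the rest, effectively
    # discarding them from the similarity computation.
    names = list(feature_names)
    chosen = set(random.sample(names, min(n, len(names))))
    return {f: (1 if f in chosen else 0) for f in names}
```

The RM/recency merge differs only in how the n survivors are chosen: it ranks features by their recency weights and keeps the top n, breaking ties randomly, instead of sampling uniformly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Restricted Memory Bias",
"sec_num": null
},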
{
"text": "A number of studies in psycholinguistics have noted the special importance of the first item mentioned in a sentence. In particular, it has been shown that the accessibility of the first discourse object, which very often corresponds to the subject of the sentence, remains high even at the end of a sentence (Gernsbacher et al., 1989) . This subject accessibility bias is an example of a more general focus of attention bias. In vision learning problems, for example, the brightest object in view may be a highly accessible object for the learning agent; in aural tasks, very loud or high-pitched sounds may be highly accessible. We incorporate the subject accessibility bias into the baseline representation by increasing the weight associated with the constituent attribute that represents the subject of the clause preceding the relative pronoun whenever that feature is part of the normalized feature set. Table 6 shows the effects of allowing matches on the subject attribute to contribute two, five, seven, and ten times as much as they did in the baseline representation. The weights were chosen more or less arbitrarily. Results indicate that incorporation of the subject accessibility bias never improves performance of the learning algorithm, although dips in performance are never statistically significant. At first it may seem surprising that this bias does not result in a better representation. Like the recency bias, however, the baseline representation already encodes the subject accessibility bias by explicitly recognizing the subject as a major constituent of the sentence (i.e., \"s\") rather than by labeling it merely as a low-level noun phrase (i.e., \"np\"). 
It may be that this built-in encoding of the bias is adequate or that, like the restricted memory bias, additional modifications to the baseline representation are required before the subject accessibility bias can have a positive effect on the learning algorithm's ability to find relative pronoun antecedents. Table 7 shows the effects of merging the subject accessibility bias with both recency biases and the restricted memory bias (RM). The results in the first column (Baseline) are just the results from Table 6 : they indicate the performance of the baseline case representation with various levels of the subject accessibility bias. The second column shows the effect of incorporating the subject accessibility bias into the combined recency bias representation. To create this merged representation, we first establish the right-to-left labeling of features and then add together the weight vectors recommended by the recency weighting and subject accessibility biases. As was the case with the baseline representation, incorporation of the subject accessibility bias steadily decreases performance of the learning algorithm as the weight on the subject constituent is increased. None of the changes is statistically significant.",
"cite_spans": [
{
"start": 309,
"end": 335,
"text": "(Gernsbacher et al., 1989)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 911,
"end": 918,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 1993,
"end": 2000,
"text": "Table 7",
"ref_id": null
},
{
"start": 2192,
"end": 2199,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Incorporating the Subject Accessibility Bias",
"sec_num": null
},
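{
"text": "The subject accessibility bias as described above amounts to scaling a single feature weight. A minimal sketch (the attribute name for the preceding clause's subject is an assumption):

```python
def subject_accessibility(weights, subject_attr, factor):
    # Multiply the weight of the attribute representing the subject of the
    # clause preceding the relative pronoun; other weights are unchanged.
    # The experiments used factors of 2, 5, 7, and 10.
    return {a: (w * factor if a == subject_attr else w)
            for a, w in weights.items()}
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating the Subject Accessibility Bias",
"sec_num": null
},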
{
"text": "The remaining five columns of Table 7 show the effects of incorporating all three linguistic biases into the baseline case representation. To create this representation, we (1) relabel the attributes using the right-to-left labeling, (2) incorporate the subject and recency weighting representations by adding the weight vectors proposed by each bias, (3) apply the restricted memory bias by keeping only the n features with the highest weights (where n is the memory limit) and choosing randomly in case of ties. Results for these experiments indicate that some combinations of the linguistic bias parameters work very well together and others do not. In general, associating a weight of two with the subject constituent improves the accuracy of the learning algorithm as compared to the corresponding representation that omits the subject accessibility bias. (Compare the first and second rows of results). In particular, three representations (shown in italics) now outperform the best previous representation (which had the r-to-1 labeling, recency weighting, memory limit = 5 and achieved 81.7% correct). In addition, the best-performing representation now outperforms the hand-coded relative pronoun disambiguation rules (84.2% vs. 80.5%) at the 90% significance level.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating the Subject Accessibility Bias",
"sec_num": null
},
{
"text": "In summary, this section presented a linguistic bias approach to feature set selection and applied it to the problem of finding the antecedent of the 122 relative pronoun \"who.\" Our experiments showed that performance of the case-based learning algorithm steadily improved as each of the available linguistic biases was used to modify the baseline case representation. Although one would not expect monotonic improvement to continue forever, it is clear that explicit incorporation of linguistic biases into the case representation can improve the learning algorithm performance for the relative pronoun disambiguation task. Table 8 summarizes these results. When all three biases are included in the case representation, the learning algorithm performs significantly better than the hand-coded rules (84.2% correct vs. 80.5% correct) at the 90% confidence level.",
"cite_spans": [],
"ref_spans": [
{
"start": 625,
"end": 632,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Incorporating the Subject Accessibility Bias",
"sec_num": null
},
{
"text": "It should be emphasized that modifications to the baseline case representation in response to each of the individual linguistic biases are performed automatically by the CBL system, subject to the constraints provided in Table 9 . Upon invocation of the CBL algorithm, the user need only specify (1) the names of the biases to incorporate into the case representation, and (2) any parameters required for those biases (e.g., the memory limit for the restricted memory bias).",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": null
},
{
"text": "In addition, the linguistic bias approach to feature set selection relies on the following general procedure when incorporating more than one linguistic bias into the baseline representation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": null
},
{
"text": "1. First, incorporate any bias that relabels attributes (e.g., r-to-1 labeling). 2. Then, incorporate biases that modify feature weights by adding the weight vectors proposed by each bias (e.g., recency weighting, subject accessibility bias). 3. Finally, incorporate biases that discard features (e.g., restricted memory bias), but give preference to those features assigned the highest weights in Step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": null
},
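{
"text": "The three-step combination procedure above might be composed as follows, a sketch under the same assumptions as before (constituents as (syntactic-class, semantic-class) pairs, right-to-left attribute names, additive weight merging), not the system's actual code:

```python
import random

def combine_biases(constituents, subject_attr, subject_weight, memory_limit):
    n = len(constituents)
    features, weights = {}, {}
    # Step 1: relabel attributes right-to-left ('pp1', 'np2', ...),
    # assigning recency weights as we go (farthest constituent gets 1).
    for dist, (syn, sem) in enumerate(reversed(constituents), 1):
        attr = '%s%d' % (syn, dist)
        features[attr] = sem
        weights[attr] = n - dist + 1
    # Step 2: add the weight vector proposed by the subject accessibility bias.
    if subject_attr in weights:
        weights[subject_attr] += subject_weight
    # Step 3: restricted memory -- keep only the memory_limit highest-weighted
    # features, preferring high weights and breaking ties randomly.
    attrs = list(weights)
    random.shuffle(attrs)
    keep = sorted(attrs, key=lambda a: -weights[a])[:memory_limit]
    return ({a: features[a] for a in keep}, {a: weights[a] for a in keep})
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": null
},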
{
"text": "Thus far, we have implemented just three linguistic biases, all of which represent broadly applicable cognitive processing limitations. We expect that additional biases will be needed to handle new natural language learning tasks, but that, in general, a relatively small set of linguistic biases should be adequate for handling large number of problems in natural language learning. Examples of other useful linguistic biases to make available include: minimal attachment, right association, lexical preference biases, and a syntactic structure identity bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": null
},
{
"text": "One important problem that we have not addressed is how to select automatically the combination of linguistic biases that will achieve the Table 7 : Additional Results for the Subject Accessibility Bias Representation. (% correct, *'s indicate significance with respect to the original baseline result shown in boldface, \u2022 ~ p = 0.05, ** --, p = 0.01; RM refers to the memory limit). memory limit Weight factor, attribute associated with object of focus, e.g., the subject best performance for a particular natural language learning task. Our current approach assumes that the expert knowledge of computational linguists is easier to apply at the level of linguistic bias selection than at the feature set selection level -so at the very least, this expert knowledge can be used to seed the bias selection algorithm. For the relative pronoun task, for example, we assumed that all three linguistic biases were relevant and then exhaustively enumerated all combinations of the biases, choosing the combination that performed best in cross-validation testing. Because this method will get quickly out of hand as additional biases are included or parameters tested, future work should investigate less costly alternatives to linguistic bias selection.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": null
},
{
"text": "In addition, we have tested the linguistic bias approach to feature selection on just one natural language learning task. We believe, however, that it offers a generM approach for case-based learning of natural language. In theory, it allows system developers to use the same underlying case representation for a variety of problems in NLP rather than developing a new representation as each new task is tackled. The underlying case representation only has to change when new knowledge sources become available to the NLP system in which the CBL system is embedded. Hence, the baseline case representation is parser-dependent (i.e., NLP system-dependent) rather than task-dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject",
"sec_num": null
},
{
"text": "In particular, we are currently applying the linguistic bias CBL approach to the problem of general pronoun resolution. While it appears that our existing linguistic bias set will be of use, we believe that the CBL system will benefit from additional linguistic biases. Centering constraints (see Brennan et al., 1987) , for example, can be encoded as linguistic biases and applied to the pronoun resolution task to increase system performance.",
"cite_spans": [
{
"start": 297,
"end": 318,
"text": "Brennan et al., 1987)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subject",
"sec_num": null
},
{
"text": "Furthermore, we have focused on applying the linguistic bias approach to feature set selection for case-based learning algorithms only. In future work, we plan to investigate the use of the approach for feature selection in conjunction with other standard machine learning algorithms. Here we expect that very different manipulations of the baseline case representation will be needed to implement the linguistic biases presented in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject",
"sec_num": null
},
{
"text": "Finally, the viability of both the linguistic bias approach to feature set selection and the general CBL approach to natural language learning must be tested using much larger corpora. Experiments on case-based part-of-speech tagging by researchers at Tilburg University (Daelemans et al., submitted) , however, indicate that the CBL approach to natural language learning will scale to 124 much larger data sets.",
"cite_spans": [
{
"start": 271,
"end": 300,
"text": "(Daelemans et al., submitted)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subject",
"sec_num": null
},
{
"text": "In summary, this paper begins to address the issue of \"algorithm vs. representation\" for casebased learning of linguistic knowledge. We have shown empirically that the feature set used to describe training and test instances plays an important role for a number of tasks in natural language understanding. In addition, we have presented an automated approach to feature set selection for case-based learning of linguistic knowledge. The approach takes a baseline case representation and modifies it in response to one of three linguistic biases by adding, deleting, and weighting features appropriately. We applied the technique to the task of relative pronoun disambiguation and found that the case-based learning algorithm improves as relevant biases are used to modify the underlying case representation. Finally, we have argued that the linguistic bias approach to feature set selection offers new possibilities for case-based learning of natural language. It simplifies the process of designing an appropriate instance representation for individual natural language learning tasks because system developers can safely include in the baseline instance representation features for all available knowledge sources. In the long run, it may obviate the need for separate instance representations for each linguistic knowledge acquisition task. More importantly, the linguistic bias CBL approach to natural language learning offers a mechanism for explicitly combining the frequency information available from corpus-based techniques with linguistic bias information employed in traditional linguistic and knowledge-based approaches to natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject",
"sec_num": null
},
{
"text": "gence Approach. Morgan Kaufmann, San Mateo, CA. (Quinlan, 1992) J. R. Quinlan. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. (Riesbeck and Schank, 1989) C. Riesbeck and R. Schank. 1989. Inside Case-Based Reasoning. Erlbaum, Northvale, NJ. (Riloff and Lehnert, 1994) ",
"cite_spans": [
{
"start": 48,
"end": 63,
"text": "(Quinlan, 1992)",
"ref_id": null
},
{
"start": 70,
"end": 167,
"text": "Quinlan. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. (Riesbeck and",
"ref_id": null
},
{
"start": 168,
"end": 181,
"text": "Schank, 1989)",
"ref_id": null
},
{
"start": 185,
"end": 197,
"text": "Riesbeck and",
"ref_id": null
},
{
"start": 198,
"end": 279,
"text": "R. Schank. 1989. Inside Case-Based Reasoning. Erlbaum, Northvale, NJ. (Riloff and",
"ref_id": null
},
{
"start": 280,
"end": 294,
"text": "Lehnert, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subject",
"sec_num": null
}
],
"back_matter": [
{
"text": "Annual Meeting of the A CL, pages 122-129. Association for Computational Linguistics. (Bosch and Daelemans, 1993) A. van den Bosch and W. Daelemans. 1993 . Data-oriented methods for grapheme-to-phoneme conversion. ",
"cite_spans": [
{
"start": 97,
"end": 113,
"text": "Daelemans, 1993)",
"ref_id": null
},
{
"start": 138,
"end": 153,
"text": "Daelemans. 1993",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Instance-Based Learning Algorithms",
"authors": [
{
"first": "",
"middle": [],
"last": "Aha",
"suffix": ""
}
],
"year": 1991,
"venue": "Machine Learning",
"volume": "6",
"issue": "",
"pages": "37--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aha et al., 1991) D. Aha, D. Kibler, and M. Al- bert. 1991. Instance-Based Learning Algo- rithms. Machine Learning, 6(1):37-66.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Instance-Based Learning Algorithms",
"authors": [
{
"first": "D",
"middle": [],
"last": "Aha",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Sixth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "387--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Aha, 1989) D. Aha. 1989. Instance-Based Learn- ing Algorithms. In Proceedings of the Sixth In- ternational Conference on Machine Learning, pages 387-391, Cornell University, Ithaca, NY. Morgan Kaufmann.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Evaluating Automated and Manual Acquisition of Anaphora Resolution Strategies",
"authors": [
{
"first": "H",
"middle": [],
"last": "Almuallim",
"suffix": ""
},
{
"first": "T",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the 33rd in Parsing: Restrictions on the Use of the Late Closure Strategy in Spanish",
"volume": "30",
"issue": "",
"pages": "73--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Almuallim and Dietterich, 1991) H. Almual- lim and T. G. Dietterich. 1991. Learning With Many Irrelevant Features. In Proceedings of the Ninth National Conference on Artificial Intel- ligence, pages 547-552, Anaheim, CA. AAAI Press / MIT Press. (Aone and Bennett, 1995) Chinatsu Aone and William Bennett. 1995. Evaluating Au- tomated and Manual Acquisition of Anaphora Resolution Strategies. In Proceedings of the 33rd in Parsing: Restrictions on the Use of the Late Closure Strategy in Spanish. Cognition, 30(1):73-105. (Daelemans et al., 1994) W. Daelemans,",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Acquisition of Stress: A Data-Oriented Approach",
"authors": [
{
"first": "G",
"middle": [],
"last": "Durieux",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gillis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "3",
"pages": "421--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Durieux, and S. Gillis. 1994. The Acquisition of Stress: A Data-Oriented Approach. Compu- tational Linguistics, 20(3):421-451. (Daelemans et al., submitted)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Individual Differences in Working Memory and Reading",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Berck",
"suffix": ""
},
{
"first": "Gillis",
"middle": [
"S"
],
"last": "Submitted ; M. Daneman",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of Verbal Learning and Verbal Behavior",
"volume": "19",
"issue": "",
"pages": "450--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans, J. Zavrel, Berck P., and Gillis S. submitted. Memory-Based Part of Speech Tagging. Tilburg University. (Daneman and Carpenter, 1980) M. Dane- man and P. A. Carpenter. 1980. Individual Differences in Working Memory and Reading. Journal of Verbal Learning and Verbal Behav- ior, 19:450-466. (Daneman and Carpenter, 1983)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "dividual Differences in Integrating Information Between and Within Sentences. Journal of Experimental Psychology: Learning, Memory, and Cognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Daneman",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "9",
"issue": "",
"pages": "561--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Daneman and P. A. Carpenter. 1983. In- dividual Differences in Integrating Information Between and Within Sentences. Journal of Ex- perimental Psychology: Learning, Memory, and Cognition, 9:561-584.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finding Structure in Time",
"authors": [
{
"first": ";",
"middle": [
"J"
],
"last": "Elman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elman, 1990) J. Elman. 1990. Finding Structure in Time. Cognitive Science, 14:179-211.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Sausage Machine: A New Two-Stage Parsing Model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Frazier",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Fodor",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "6",
"issue": "",
"pages": "291--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Frazier and Fodor, 1978)L. Frazier and J. D. Fodor. 1978. The Sausage Machine: A New Two-Stage Parsing Model. Cognition, 6:291- 325.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Building and Accessing Clausal Representations: The Advantage of First Mention Versus the Advantage of Clause Recency",
"authors": [
{
"first": "(",
"middle": [],
"last": "Gernsbacher",
"suffix": ""
}
],
"year": 1989,
"venue": "Journal of Memory and Language",
"volume": "28",
"issue": "",
"pages": "735--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Gernsbacher et al., 1989) M. A. Gernsbacher, D. J. Hargreaves, and M. Beeman. 1989. Build- ing and Accessing Clausal Representations: The Advantage of First Mention Versus the Advan- tage of Clause Recency. Journal of Memory and Language, 28:735-755.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Cross-linguistic Attachment Preferences: Evidence from English and Spanish",
"authors": [
{
"first": "(",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 1993,
"venue": "Sixth Annual CUNY Sentence Processing Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Gibson et al., 1993) E. Gibson, N. Pearlmutter, E. Canseco-Gonzalez, and G. Hickok. 1993. Cross-linguistic Attachment Preferences: Evi- dence from English and Spanish. In Sixth An- nual CUNY Sentence Processing Conference, University of Massachusetts, Amherst, MA. Only abstract in the Sentence Processing Con- ference proceedings. Full manuscript to appear in journal.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Recency Preferences and Garden-Path Effects",
"authors": [
{
"first": ";",
"middle": [
"E"
],
"last": "Gibson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Twelfth Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gibson, 1990) E. Gibson. 1990. Recency Prefer- ences and Garden-Path Effects. In Proceedings of the Twelfth Annual Conference of the Cog- nitive Science Society, Massachusetts Institute of Technology, Cambridge, MA. Lawrence Erl- baum Associates.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Individual Differences in Syntactic Processing: The Role of Working Memory",
"authors": [
{
"first": ";",
"middle": [
"J"
],
"last": "Kimball",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kimball",
"suffix": ""
}
],
"year": 1973,
"venue": "Journal of Memory and Language",
"volume": "2",
"issue": "",
"pages": "580--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimball, 1973) J. Kimball. 1973. Seven Prin- ciples of Surface Structure Parsing in Natural Language. Cognition, 2:15-47. (King and Just, 1991)J. King and M. A. Just. 1991. Individual Differences in Syntactic Pro- cessing: The Role of Working Memory. Journal of Memory and Language, 30:580-602.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Case-Based Reasoning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kolodner",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Kolodner, 1993) J. Kolodner. 1993. Case-Based Reasoning. Morgan Kaufmann, San Mateo, CA. (Langley and Sage, in press)",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Litman, 1994) Diane J. Litman. 1994. Classifying Cue Phrases in Text and Speech Using Machine Learning",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langley",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sage ; David",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Magerman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"G"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lehnert",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence",
"volume": "4",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Langley and S. Sage. in press. Scaling to domains with irrelevant features. In R. Greiner, editor, Computational learning theory and natu- ral learning systems, volume 4. The MIT Press, Cambridge, MA. (Lehnert, 1990) W. Lehnert. 1990. Sym- bolic/Subsymbolic Sentence Analysis: Exploit- ing the Best of Two Worlds. In J. Barnden and J. Pollack, editors, Advances in Connec- tionist and Neural Computation Theory, pages 135-164. Ablex Publishers, Norwood, NJ. (Litman, 1994) Diane J. Litman. 1994. Classify- ing Cue Phrases in Text and Speech Using Ma- chine Learning. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 806-813. AAAI Press / MIT Press. (Magerman, 1995) David M. Magerman. 1995. Statistical Decision-Tree Models for Parsing. In Proceedings of the 33rd Annual Meeting of the ACL, pages 276-283. Association for Computa- tional Linguistics. (McCarthy and Lehnert, 1995) Joseph F. McCarthy and Wendy G. Lehnert. 1995. Using Decision Trees for Coreference Res- olution. In C. Mellish, editor, Proceedings of the Fourteenth International Conference on Artifi- cial Intelligence, pages 1050-1055.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information",
"authors": [
{
"first": ";",
"middle": [
"G A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1956,
"venue": "Psychological Review",
"volume": "63",
"issue": "1",
"pages": "81--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, 1956) G. A. Miller. 1956. The Magical Number Seven, Plus or Minus Two: Some Lim- its on our Capacity for Processing Information. Psychological Review, 63(1):81-97.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "1990) E. Newport. 1990. Maturational Constraints on Language Learning",
"authors": [],
"year": 1994,
"venue": "Proceedings of the Fifth Message Understanding Conference (MUC-5}. Morgan Kaufmann",
"volume": "14",
"issue": "",
"pages": "11--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC, 1994) 1994. Proceedings of the Fifth Mes- sage Understanding Conference (MUC-5}. Mor- gan Kaufmann, San Marco, CA. (Newport, 1990) E. Newport. 1990. Maturational Constraints on Language Learning. Cognitive Science, 14:11-28.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coreference Processing During Sentence Comprehension",
"authors": [
{
"first": ";",
"middle": [
"J"
],
"last": "Nicol",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nicol",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicol, 1988) J. Nicol. 1988. Coreference Pro- cessing During Sentence Comprehension. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Frequencies vs Biases: Machine learning problems in natural language processing",
"authors": [
{
"first": ";",
"middle": [
"F"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Eleventh International Conference on Machine Learning",
"volume": "380",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira, 1994) F. Pereira. 1994. Frequencies vs Biases: Machine learning problems in natu- ral language processing. In Proceedings of the Eleventh International Conference on Machine Learning, page 380, Rutgers University, New Brunswick, NJ. Morgan Kaufmann.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning Efficient Classification Procedures and Their Application to Chess End Games",
"authors": [
{
"first": ";",
"middle": [
"J R"
],
"last": "Quinlan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1983,
"venue": "Machine Learning: An Artificial lntelli",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan, 1983) J. R. Quinlan. 1983. Learning Ef- ficient Classification Procedures and Their Ap- plication to Chess End Games. In R. S. Michal- ski, J. G. Carbonell, and T. M. Mitchell, ed- itors, Machine Learning: An Artificial lntelli-",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Histogram of Relevant Context Features for Part-of-Speech Tagging. (In the graph,",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "weighted nearest-neighbor case retrieval algorithm:1. Set the weight, wl, associated with each feature, f, in the normalized feature set3: w I = 0.2 if ff is missing from the (unnormalized) problem case, w/ = 1 otherwise.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Results for Lexical Tagging Using Case-Based Learning With and Without Feature Set Selection. (All experiments draw training and test cases from a base set of 120 sentences from the MUC/TIPSTER Joint Ventures corpus"
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>class:</td><td>s</td></tr><tr><td colspan=\"2\">(The antecedent is the subject.)</td></tr><tr><td>S2:</td><td>[I] [thank] [Nike] [andl [Reebok] [,] who ...</td></tr><tr><td colspan=\"2\">features: (s human) (v exists) (do name) (do-up1 name) (prevl-syntactic-type comma)...</td></tr><tr><td>class:</td><td>do -t-do-up1</td></tr><tr><td colspan=\"2\">(The antecedent involves two constituents.)</td></tr><tr><td>83:</td><td>[I] [thank] [our sponsor] [,] [GE] [,] who ...</td></tr><tr><td colspan=\"2\">features: (s human) (v exists) (do entity) (do-up1 name) (prevl-syntactic-type comma)...</td></tr><tr><td>class:</td><td>do-up1 V do</td></tr><tr><td colspan=\"2\">(There are two semantically legal antecedents.)</td></tr><tr><td/><td>Figure 2: Baseline Instance Representation.</td></tr></table>",
"num": null,
"text": "The man] [from Oklahoma] [,] who ... features: (s human) (s-ppl location) (pvevl-syntactic-type comma) ..."
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>% correct)</td><td/><td/></tr><tr><td>CBL</td><td>Default</td><td>i~-</td></tr><tr><td>Algorithm</td><td>Strategy</td><td>Heuristics</td></tr><tr><td>w/o feature</td><td/><td/></tr><tr><td>set selection</td><td/><td/></tr><tr><td>76.2</td><td>74.3</td><td>80.5</td></tr></table>",
"num": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>Feature</td><td>Base-</td></tr><tr><td/><td/><td>line</td></tr><tr><td/><td/><td>weight</td></tr><tr><td>It</td><td>s</td><td>1</td></tr><tr><td>was</td><td/><td/></tr><tr><td>the hardliners</td><td/><td/></tr><tr><td>in Congress</td><td/><td/></tr><tr><td>who...</td><td/><td/></tr></table>",
"num": null,
"text": "Incorporating the Recency Bias by Modifying the Weight Vector."
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>(s entity) (v exists) (do human) (do-ppl entity) (prevl-syntactic-type</td></tr><tr><td>prep.phrase) ... (class do)</td></tr><tr><td>(s entity) (v exists) (np2 human) (ppl entity) (prevl-syntactic-type</td></tr><tr><td>prep-phrase) ... (class np2)</td></tr><tr><td>Figure 3: Incorporating the Recency Bias Using a Right-to-Left Labeling.</td></tr></table>",
"num": null,
"text": "to-left labeling: [It[ [was] [the hardliners] [in Congress] [,] who ..."
},
"TABREF6": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">[ Case Representation [ % Correct [</td></tr><tr><td>Baseline Representation</td><td>76.2</td></tr><tr><td>(no feature selection)</td><td/></tr><tr><td>R-to-L Labeling</td><td>79.2</td></tr><tr><td>Recency Weighting</td><td>75.8</td></tr><tr><td>R-to-L + RecWt</td><td>80.0</td></tr><tr><td>Hand-Coded Heuristics</td><td>80.5</td></tr><tr><td>Default Heuristic</td><td>74.3</td></tr><tr><td>Baseline Representation</td><td>69.2</td></tr><tr><td>w/o built-in recency bias</td><td/></tr></table>",
"num": null,
"text": "Results for the Recency Bias Representations."
},
"TABREF7": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Memory Limit</td><td>Baseline</td><td>R-to-L + RecWt</td></tr><tr><td>none</td><td>776.2</td><td>80.0</td></tr><tr><td/><td>78.3</td><td>81.2\"</td></tr><tr><td/><td>74.2</td><td>81.2\"</td></tr><tr><td/><td>76.2</td><td>80.0</td></tr><tr><td/><td>75.8</td><td>80.4</td></tr><tr><td/><td>75.0</td><td>81.7*</td></tr><tr><td colspan=\"2\">cards all but the n selected features from the case</td><td/></tr><tr><td colspan=\"2\">representation. Results for the restricted mem-</td><td/></tr><tr><td>ory bias representation are shown in</td><td/><td/></tr></table>",
"num": null,
"text": "Results for the Restricted Memory Bias Representation. (% correct, *'s indicate significance with respect to the original baseline result shown in boldface, * ~ p = 0.05)"
},
"TABREF8": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">(% correct)</td></tr><tr><td>Baseline</td><td>76.2</td></tr><tr><td>Baseline, SubjWt=2</td><td>75.0</td></tr><tr><td>Baseline, SubjWt=5</td><td>74.2</td></tr><tr><td>Baseline, SubjWt=7</td><td>73.7</td></tr><tr><td colspan=\"2\">Baseline, SubjWt=10 73.3</td></tr></table>",
"num": null,
"text": ""
},
"TABREF10": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Case Representation</td><td colspan=\"2\">% Correct</td></tr><tr><td colspan=\"2\">Baseline w/o Built-in Recency Bias</td><td/><td>69.2</td></tr><tr><td colspan=\"3\">Default Heuristic: Choose Most Recent Phrase</td><td>74.3</td></tr><tr><td>Baseline</td><td/><td/><td>76.2</td></tr><tr><td>Baseline</td><td/><td/></tr><tr><td colspan=\"2\">+ Recency Bias</td><td/><td>80.0</td></tr><tr><td colspan=\"2\">Hand-Coded Heuristics</td><td/><td>80.5</td></tr><tr><td>Baseline</td><td/><td/></tr><tr><td colspan=\"2\">+ Recency Bias</td><td/></tr><tr><td colspan=\"2\">+ Restricted Memory Bias (limit=5)</td><td/><td>81.7</td></tr><tr><td>Baseline</td><td/><td/></tr><tr><td colspan=\"2\">+ Recency Bias</td><td/></tr><tr><td colspan=\"2\">+ Restricted Memory Bias (limit=5)</td><td/></tr><tr><td colspan=\"2\">+ Subject Accessibility Bias (subj wt=2)</td><td/><td>84.2</td></tr><tr><td>Table</td><td colspan=\"2\">9: Linguistic Bias Modifications.</td></tr><tr><td>Bias</td><td>Assumptions</td><td colspan=\"2\">Parameters</td></tr><tr><td>Recency</td><td>Attribute names indicate</td><td colspan=\"2\">Function mapping original</td></tr><tr><td>(r-to-1 labeling)</td><td>recency</td><td colspan=\"2\">attribute names to new</td></tr><tr><td/><td/><td colspan=\"2\">attribute names</td></tr><tr><td>Recency</td><td>Attributes in original case</td><td>None</td></tr><tr><td>(recency weighting)</td><td>are provided in inverse</td><td/></tr><tr><td/><td>recency order</td><td/></tr><tr><td>Restricted Memory</td><td>None</td><td/></tr><tr><td>Focus of Attention</td><td>None</td><td/></tr><tr><td>(subject accessibility)</td><td/><td/></tr></table>",
"num": null,
"text": "Summary of Linguistic Bias Results."
},
"TABREF11": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"3\">E. Riloff and W. Lehn-</td></tr><tr><td colspan=\"3\">ert. 1994. Information extraction as a basis for</td></tr><tr><td colspan=\"3\">high-precision text classification. ACM Trans-</td></tr><tr><td colspan=\"3\">actions on Information Systems, 12(3):296-333.</td></tr><tr><td colspan=\"3\">(Simmons and Yu, 1992) Robert F. Simmons and</td></tr><tr><td colspan=\"3\">Yeong-Ho Yu. 1992. The Acquisition and Use</td></tr><tr><td colspan=\"3\">of Context-Dependent Grammars for English.</td></tr><tr><td colspan=\"2\">Computational Linguistics, 18(4):391-418.</td><td/></tr><tr><td>(Stanfill and Waltz, 1986) C.</td><td>Stanfill</td><td>and</td></tr><tr><td>D.</td><td/><td/></tr></table>",
"num": null,
"text": "Waltz. 1986. Toward Memory-based Reasoning. Communications of the ACM, 29:1213- 1228. (Zelle and Mooney, 1993 J. Zelle and R. Mooney. 1993. Learning Semantic Grammars with Constructive Inductive Logic Programming. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 817-822, Washington, DC. AAAI Press / MIT Press. (Zelle and Mooney, 1994) J. Zelle and R. Mooney. 1994. Inducing Deterministic Prolog Parsers from Treebanks: A Machine Learning Approach. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 748-753, Seattle, WA. AAAI Press / MIT Press."
}
}
}
}