{
"paper_id": "E03-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:25:13.269581Z"
},
"title": "Evaluating and Combining Approaches to Selectional Preference Acquisition",
"authors": [
{
"first": "Carsten",
"middle": [],
"last": "Brockmann",
"suffix": "",
"affiliation": {},
"email": "carsten.brockmann@ed.ac.uk"
},
{
"first": "Mireila",
"middle": [],
"last": "Lapata",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Previous work on the induction of selectional preferences has been mainly carried out for English and has concentrated almost exclusively on verbs and their direct objects. In this paper, we focus on class-based models of selectional preferences for German verbs and take into account not only direct objects, but also subjects and prepositional complements. We evaluate model performance against human judgments and show that there is no single method that overall performs best. We explore a variety of parametrizations for our models and demonstrate that model combination enhances agreement with human ratings.",
"pdf_parse": {
"paper_id": "E03-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Previous work on the induction of selectional preferences has been mainly carried out for English and has concentrated almost exclusively on verbs and their direct objects. In this paper, we focus on class-based models of selectional preferences for German verbs and take into account not only direct objects, but also subjects and prepositional complements. We evaluate model performance against human judgments and show that there is no single method that overall performs best. We explore a variety of parametrizations for our models and demonstrate that model combination enhances agreement with human ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Selectional preferences or constraints are the semantic restrictions that a word imposes on the environment in which it occurs. A verb like eat typically takes animate entities as its subject and edible entities as its object. Selectional preferences can most easily be observed in situations where they are violated. For example, in the sentence \"The mountain eats sincerity.\" both subject and object preferences for the verb eat are violated. The problem of quantifying the degree to which a given predicate (e.g., eat) semantically fits its arguments has received a lot of attention within computational linguistics. Several approaches have been developed for the induction of selectional preferences, and almost all of them rely on the availability of large machine-readable corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Probably the most primitive corpus-based model of selectional preferences is co-occurrence frequency. Inspection in a corpus of the types of nouns eat admits as its objects will reveal that food, meal, meat, or lunch are frequent complements, whereas river, mountain, or moon are rather unlikely. The obvious disadvantage of the frequency-based approach is that no generalizations emerge with respect to the observed preferences as it embodies no notion of semantic relatedness or proximity Ideally, one would like to infer from the corpus that eat is semantically congruent with food-related objects and incongruent with natural objects. Another related limitation of the frequency-based account is that it cannot make any predictions for words that never occurred in the corpus. A zero co-occurrence count might be due to insufficient evidence or might reflect the fact that a given word combination is inherently implausible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the above reasons, most approaches model the selectional preferences of predicates (e.g., verbs, nouns, adjectives) by combining observed frequencies with knowledge about the semantic classes of their arguments. The classes can be induced directly from the corpus (Pereira et al., 1993; Brown et al., 1992; Lapata et al., 2001) or taken from a manually crafted taxonomy (Resnik, 1993; Li and Abe, 1998; Clark and Weir, 2002; Ciaramita and Johnson, 2000; Abney and Light, 1999) . In the latter case the taxonomy is used to provide a mapping from words to conceptual classes, and in most cases WordNet (Miller et al., 1990 ) is employed for this purpose.",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Pereira et al., 1993;",
"ref_id": "BIBREF17"
},
{
"start": 291,
"end": 310,
"text": "Brown et al., 1992;",
"ref_id": "BIBREF2"
},
{
"start": 311,
"end": 331,
"text": "Lapata et al., 2001)",
"ref_id": "BIBREF11"
},
{
"start": 374,
"end": 388,
"text": "(Resnik, 1993;",
"ref_id": "BIBREF18"
},
{
"start": 389,
"end": 406,
"text": "Li and Abe, 1998;",
"ref_id": "BIBREF12"
},
{
"start": 407,
"end": 428,
"text": "Clark and Weir, 2002;",
"ref_id": "BIBREF5"
},
{
"start": 429,
"end": 457,
"text": "Ciaramita and Johnson, 2000;",
"ref_id": "BIBREF4"
},
{
"start": 458,
"end": 480,
"text": "Abney and Light, 1999)",
"ref_id": "BIBREF0"
},
{
"start": 604,
"end": 624,
"text": "(Miller et al., 1990",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although most approaches agree on how selectional preferences must be represented, i.e., as a mapping cv : (p,r,c) -> a that maps each predicate p and the semantic class c of its argument with respect to role r to a real number a (Light and Greiff, 2002) , there is little agreement on how selectional preferences must be modeled (e.g., whether to use a probability model or not) and evaluated (e.g., whether to use a task-based evaluation or not). Furthermore, previous work has almost exclusively focused on verbal selectional preferences in English with the exception of Lapata et al. (1999 Lapata et al. ( , 2001 , who look at adjectivenoun combinations, again for English. Verbs tend to impose stricter selectional preferences on their arguments than adjectives or nouns and thus provide a natural test bed for models of selectional preferences. However, research on verbal selectional preferences has been relatively narrow in scope as it has primarily focused on verbs and their direct objects, ignoring the selectional preferences pertaining to subjects and prepositional complements.",
"cite_spans": [
{
"start": 230,
"end": 254,
"text": "(Light and Greiff, 2002)",
"ref_id": "BIBREF13"
},
{
"start": 574,
"end": 593,
"text": "Lapata et al. (1999",
"ref_id": "BIBREF10"
},
{
"start": 594,
"end": 616,
"text": "Lapata et al. ( , 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The induction of selectional preferences typically addresses two related problems: (a) finding an appropriate class that best fits the predicate in question and (b) coming up with a statistical model or a measure that estimates how well a predicate fits its arguments. Resnik (1993) defines selectional association, an informationtheoretic measure of semantic fit of a particular semantic class c as an argument to a predicate p. Li and Abe (1998) use the Minimum Description Length (MDL) principle to select the the appropriate class c, Clark and Weir (2002) employ hypothesis testing. Abney and Light (1999) propose Hidden Markov Models as a way of deriving selectional preferences over words, senses, or even classes, whereas Ciaramita and Johnson (2000) use Bayesian Belief Networks to quantify selectional preferences.",
"cite_spans": [
{
"start": 269,
"end": 282,
"text": "Resnik (1993)",
"ref_id": "BIBREF18"
},
{
"start": 430,
"end": 447,
"text": "Li and Abe (1998)",
"ref_id": "BIBREF12"
},
{
"start": 538,
"end": 559,
"text": "Clark and Weir (2002)",
"ref_id": "BIBREF5"
},
{
"start": 587,
"end": 609,
"text": "Abney and Light (1999)",
"ref_id": "BIBREF0"
},
{
"start": 729,
"end": 757,
"text": "Ciaramita and Johnson (2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although there is no standard way to evaluate different approaches to selectional preferences, two types of evaluation are usually conducted: task-based evaluation and comparisons against human judgments. Word sense disambiguation results are reported by Resnik (1997) , Abney and Light (1999) , Ciaramita and Johnson (2000) and Carroll and McCarthy (2000) (however, on a different data set). Among the first three approaches, Ciaramita and Johnson (2000) obtain the best results. Li and Abe (1998) evaluate their system on the task of prepositional phrase attachment, whereas Clark and Weir (2002) use pseudodisambiguation,' a somewhat artificial task, and show that their approach outperforms Li and Abe (1998) and Resnik (1993) .",
"cite_spans": [
{
"start": 255,
"end": 268,
"text": "Resnik (1997)",
"ref_id": "BIBREF19"
},
{
"start": 271,
"end": 293,
"text": "Abney and Light (1999)",
"ref_id": "BIBREF0"
},
{
"start": 296,
"end": 324,
"text": "Ciaramita and Johnson (2000)",
"ref_id": "BIBREF4"
},
{
"start": 329,
"end": 356,
"text": "Carroll and McCarthy (2000)",
"ref_id": "BIBREF3"
},
{
"start": 427,
"end": 455,
"text": "Ciaramita and Johnson (2000)",
"ref_id": "BIBREF4"
},
{
"start": 481,
"end": 498,
"text": "Li and Abe (1998)",
"ref_id": "BIBREF12"
},
{
"start": 577,
"end": 598,
"text": "Clark and Weir (2002)",
"ref_id": "BIBREF5"
},
{
"start": 695,
"end": 712,
"text": "Li and Abe (1998)",
"ref_id": "BIBREF12"
},
{
"start": 717,
"end": 730,
"text": "Resnik (1993)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another way to evaluate a model's performance is agreement with human ratings. This can be done by selecting predicate-argument structures randomly, using the model to predict the degree of semantic fit and then looking at how well the ratings correlate with the model's predictions (Resnik, 1993; Lapata et al., 1999; Lapata et al., 2001) . This approach seems more appropriate for languages for which annotated corpora with word senses are not available. It is more direct than disambiguation which relies on the assumption that models of selectional preferences have to infer the appropriate semantic class and therefore perform disambiguation as a side effect. It is also more natural than pseudo-disambiguation which relies on artificially constructed data sets. Large-scale comparative studies have not, however, assessed the strengths and weaknesses of the proposed methods as far as modeling human data is concerned.",
"cite_spans": [
{
"start": 283,
"end": 297,
"text": "(Resnik, 1993;",
"ref_id": "BIBREF18"
},
{
"start": 298,
"end": 318,
"text": "Lapata et al., 1999;",
"ref_id": "BIBREF10"
},
{
"start": 319,
"end": 339,
"text": "Lapata et al., 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we undertake such a comparative study by looking at selectional preferences of German verbs. In contrast to previous work, we take into account not only verbs and their direct objects, but also subjects and prepositional complements. We focus on three previously well-studied models, Resnik's (1993) selectional association, Li and Abe's (1998) MDL and Clark and Weir's (2002) probability estimation method. For comparison, we also employ two models that do not incorporate any notion of semantic class, namely cooccurrence frequency and conditional probability.",
"cite_spans": [
{
"start": 299,
"end": 314,
"text": "Resnik's (1993)",
"ref_id": "BIBREF18"
},
{
"start": 340,
"end": 359,
"text": "Li and Abe's (1998)",
"ref_id": "BIBREF12"
},
{
"start": 368,
"end": 391,
"text": "Clark and Weir's (2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of this paper, we briefly review the models of selectional preferences we consider (Section 2). Section 3 details our experiments, evaluation methodology, and reports our results. Section 4 offers some discussion and concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Co-occurrence Frequency. We can quantify the semantic fit between a verb and its arguments by simply counting f (v,r.n), the number of times a noun n co-occurs with a verb v in a grammatical relation r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "Conditional Probability. As we discuss below, most class-based approaches to selectional preferences rely on the estimation of the conditional probability P(nlv, r), where n is represented by its corresponding classes in the taxonomy. Here we concentrate solely on the nouns as attested in the corpus without making reference to a taxonomy and estimate the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(n v, r) = f (v, r.n) f (v, ' r) P(Idr,n) = f (v, r, n) f (r,n) A(v,r,c) = Ti P(Clv.r) =EP(clv, 0 1\u00b0g p(c) P(clv,r)logP p (c(lcv) 'r) E f (v, r,n f(v,r,c) = ) nEsyn(0 cn(n)",
"eq_num": "(6)"
}
],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "In (1) it is the verb that imposes the semantic preferences on its arguments, whereas in (2) selectional preferences are expressed in the other direction, i.e. arguments select for their predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
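{
"text": "To make the two frequency-based estimates concrete, here is a minimal Python sketch (ours, not part of the original paper; it assumes the (v, r, n) triples have already been extracted from the parsed corpus, and all names are illustrative):\n\nfrom collections import Counter\n\ndef make_estimators(triples):\n    # raw counts over (verb, relation, noun) triples\n    f_vrn = Counter(triples)\n    f_vr = Counter((v, r) for v, r, n in triples)\n    f_rn = Counter((r, n) for v, r, n in triples)\n    def p_n_given_vr(v, r, n):  # equation (1)\n        return f_vrn[v, r, n] / f_vr[v, r] if f_vr[v, r] else 0.0\n    def p_v_given_rn(v, r, n):  # equation (2)\n        return f_vrn[v, r, n] / f_rn[r, n] if f_rn[r, n] else 0.0\n    return p_n_given_vr, p_v_given_rn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},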
{
"text": "Selectional Association. Resnik (1993) was the first to propose a measure of the the semantic fit of a particular semantic class c as an argument to a verb v. Selectional association (see 3and 4)represents the contribution of a particular semantic class c to the total quantity of information provided by a verb about the semantic classes of its argument, when measured as the relative entropy between the prior distribution of classes P(c) and the posterior distribution P(clv,r) of the argument classes for a particular verb v. The latter distribution is estimated as shown in (5).",
"cite_spans": [
{
"start": 25,
"end": 38,
"text": "Resnik (1993)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "(4) f (c) P(clv,r) = v,r, (5) f (v, r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "The estimation of P(clv, r) would be a straightforward task if each word was always represented in the taxonomy by a single concept or if we had a corpus labeled explicitly with taxonomic information. Lacking such a corpus we need to take into consideration the fact that words in a taxonomy may belong to more than one conceptual class. Counts of verb-argument configurations are constructed for each conceptual class by dividing the contribution of the argument by the number of classes it belongs to (Resnik, 1993): where syn(c) is the synset of concept c, i.e., the set of synonymous words that can be used to denote the concept (for example, syn((beve r age)) = {beverage, drink, drinkable, potable}), and cn(n) is the set of concepts that can be denoted by noun n (more formally, cn(n) = {c n c syn(c)}).",
"cite_spans": [
{
"start": 503,
"end": 518,
"text": "(Resnik, 1993):",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
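{
"text": "The pieces above fit together as follows; a minimal Python sketch (ours; cn and prior are assumed accessors for the noun-to-concept mapping and the class prior P(c), and all names are illustrative):\n\nimport math\nfrom collections import Counter\n\ndef class_counts(triples, cn):\n    # equation (6): split each noun's count evenly over its classes cn(n)\n    f_vrc = Counter()\n    for v, r, n in triples:\n        classes = cn(n)\n        for c in classes:\n            f_vrc[v, r, c] += 1.0 / len(classes)\n    return f_vrc\n\ndef selectional_association(v, r, f_vrc, f_vr, prior):\n    # equation (5): posterior P(c|v,r) from the distributed counts\n    posterior = {c: f_vrc[vv, rr, c] / f_vr[v, r]\n                 for (vv, rr, c) in f_vrc if (vv, rr) == (v, r)}\n    # equations (3) and (4): per-class KL terms, normalized by their sum D\n    terms = {c: p * math.log(p / prior(c)) for c, p in posterior.items() if p > 0}\n    d = sum(terms.values())\n    return {c: t / d for c, t in terms.items()} if d else {}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},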
{
"text": "Tree Cut Models. Li and Abe (1998) use MDL to select from a hierarchy a set of classes that represent the selectional preferences for a given verb. These preferences are probabilities of the form P(n r) where n is a noun represented by a class in the taxonomy, v is a verb and r is an argument slot. Li and Abe's algorithm operates on thesaurus-like hierarchies where each leaf node stands for a noun, each internal node stands for the class of nouns below it, and a noun is uniquely represented by a leaf node. Li and Abe derive a separate model for each verb by partitioning the leaf nodes (i.e., nouns) of the thesaurus tree and associating a probability with each class in the partition.",
"cite_spans": [
{
"start": 17,
"end": 34,
"text": "Li and Abe (1998)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "More formally, a tree cut model M is defined as a pair of a tree cut F, which is a set of classes ci , c2, , ck, and a parameter vector 0 specifying a probability distribution over the members of F with the constraint that the probabilities sum to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},
{
"text": "To select the tree cut model that best tits the data, Li and Abe (1998) employ the MDL principle (Rissanen, 1978) by considering the cost in bits of describing both the model itself and the observed data (in our case verb-argument combinations).",
"cite_spans": [
{
"start": 54,
"end": 71,
"text": "Li and Abe (1998)",
"ref_id": "BIBREF12"
},
{
"start": 97,
"end": 113,
"text": "(Rissanen, 1978)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "Given a data sample S encoded by a tree cut model /12/ = (F, 0) with tree cut F and estimated parameters 6, the total description length in bits L(M,S) is given by equation 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "L(M,S) = log1G log IS -E logPia(nlv, r) nes (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "where IQ is the cardinality of the set of all possible tree cuts, k is the number of classes on the cut F, 1,51 is the sample size, and 13,1, 4-(n r) is the probability of a noun, which is estimated by distributing the probability of a given class equally among the nouns that can be denoted by it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "Pia(clv,r) (9) Vn syn(c) : Pft(n1 = Isyn(c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
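{
"text": "A sketch (ours; illustrative names, and it assumes every noun in the sample is covered by some class on the cut) of how (8) and (9) translate into code:\n\nimport math\n\ndef description_length(cut, f_vrc, f_vr, sample, syn, n_cuts, v, r):\n    # equation (8): log|G| + (k/2) log|S| minus the data log-likelihood;\n    # sample is the list of nouns observed with (v, r)\n    k = len(cut)\n    cost = math.log2(n_cuts) + (k / 2.0) * math.log2(len(sample))\n    def p_noun(n):\n        # equation (9): class probability split evenly over its synset\n        for c in cut:\n            if n in syn(c):\n                return (f_vrc[v, r, c] / f_vr[v, r]) / len(syn(c))\n        return 0.0\n    return cost - sum(math.log2(p_noun(n)) for n in sample)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},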
{
"text": "Class-based Probability. Clark and Weir (2002) are, strictly speaking, not concerned with the induction of selectional preferences but with the problem of estimating conditional probabilities of the form shown in (1) in the face of sparse data. However, their probability estimation method can be naturally applied to the selectional preference acquisition problem as it is suited not only for the estimation of the appropriate probabilities but also for finding a suitable class for the predicates of interest. Clark 7and Weir obtain the probability P( v P(c v, r) using Bayes' theorem:",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "Clark and Weir (2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "v, = P(vIc , r) P(clr) P(v1r) They suggest the following way for finding a set of concepts c' (where c' denotes the set of concepts dominated by c', including c' itself) as a generalization for concept c (where c can be either n or one of its hypernyms): Initially, c' is set to c, then c' is set to successive hypernyms of c until a node in the hierarchy is reached where P(c' v, r) changes significantly. This is determined by comparing estimates of P (c v, r) for each child c of c' using hypothesis testing. The null hypothesis is that the probabilities p(v c , r) are the same for each child c' , of c'. If there is a significant difference between them, the null hypothesis is rejected and classes that are lower in the hierarchy than c' are used. Selecting the right level of generalization crucially depends on the type of statistic used (in their experiments Clark and Weir use the Pearson chi-square statistic X 2 and the log-likelihood chi-square statistic G2 ). The appropriate level of significance a can be tuned experimentally.",
"cite_spans": [
{
"start": 23,
"end": 29,
"text": "P(v1r)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
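{
"text": "The search for a generalization class can be sketched as follows (ours, not the authors' implementation; hypernym, children, and counts are assumed accessors over the hierarchy, and scipy's chi2_contingency supplies the X² and G² statistics):\n\nfrom scipy.stats import chi2_contingency\n\ndef generalize(c, v, r, hypernym, children, counts, alpha=0.05):\n    # climb from c towards the root until the children of the next\n    # hypernym differ significantly in how often they occur with (v, r)\n    node = c\n    while hypernym(node) is not None:\n        parent = hypernym(node)\n        table = []\n        for child in children(parent):\n            with_v, total = counts(child, v, r)  # v-count and total count below child\n            if total > 0:\n                table.append([with_v, total - with_v])\n        if len(table) >= 2:\n            # lambda_='log-likelihood' yields G2 instead of Pearson's X2\n            _, p, _, _ = chi2_contingency(table, lambda_='log-likelihood')\n            if p < alpha:\n                return node  # significant difference: stay below parent\n        node = parent\n    return node",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of Selectional Preferences",
"sec_num": "2"
},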
{
"text": "Once a suitable class is found, the similarityclass probability P is estimated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v, r) = P(vIr) ,",
"eq_num": "(11)"
}
],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "where [v, r, c] denotes the class chosen for concept c in relation r to verb v, P denotes a relative frequency estimate, and C the set of concepts in the hierarchy. The denominator is a normalization factor. Again, since we are not dealing with word sense disambiguated data, counts for each noun are distributed evenly among all senses of the noun (see (5)).",
"cite_spans": [
{
"start": 6,
"end": 15,
"text": "[v, r, c]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EP(cilv,r) =1 i=1",
"sec_num": null
},
{
"text": "In our experiments, we compared the performance of the five methods discussed above against human judgments. Before discussing the details of our evaluation we present our general experimental setup (e.g., the corpora and hierarchy used) and the different types of parameters we explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
{
"text": "All our experiments were conducted on data obtained from the German Siiddeutsche Zeitung (SZ) corpus, a 179 million word collection of newspaper texts. The corpus was parsed using the grammatical relation recognition component of SMES, a robust information extraction core system for the processing of German text (Neumann et al., 1997) . SMES incorporates a tokenizer that maps the text into a stream of tokens. The tokens are then analyzed morphologically (compound recognition, assignment of part-of-speech tags), and a chunk parser identifies phrases and clauses by means of finite state grammars. The grammatical relations recognizer operates on the output of the parser while exploiting a large subcategorization lexicon. Although SMES recognizes a variety of grammatical relations, in our experiments we focused solely on relations of the form (v,r,n) where r can be a subject, direct object, or prepositional object (see the examples in Table 2 ).",
"cite_spans": [
{
"start": 314,
"end": 336,
"text": "(Neumann et al., 1997)",
"ref_id": "BIBREF16"
},
{
"start": 851,
"end": 858,
"text": "(v,r,n)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 945,
"end": 952,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
{
"text": "For the class-based models, the hierarchy available in GermaNet (Hamp and Feldweg, 1997) was used. The experiments reported in this paper make use of the noun taxonomy of Ger-maNet (version 3.0, 23,053 noun synsets), and the information encoded in it in terms of the hyponymy/hypernymy relation.",
"cite_spans": [
{
"start": 64,
"end": 88,
"text": "(Hamp and Feldweg, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
{
"text": "Certain modifications to the original GermaNet hierarchy were necessary for the implementation of Li and Abe's method (1998) . The GermaNet noun hierarchy is a directed acyclic graph (DAG) whereas their algorithm operates on trees. A solution to this problem is given by Li and Abe, who transform the DAG into a tree by copying each subgraph having multiple parents. An additional modification is needed since in GermaNet, nouns do not only occur as leaves of the hierarchy, but also at internal nodes. Following Wagner (2000) and McCarthy (2001) , we created a new leaf for each internal node, containing a copy of the internal node's nouns. This guarantees that all nouns are present at the leaf level.",
"cite_spans": [
{
"start": 98,
"end": 124,
"text": "Li and Abe's method (1998)",
"ref_id": null
},
{
"start": 513,
"end": 526,
"text": "Wagner (2000)",
"ref_id": "BIBREF22"
},
{
"start": 531,
"end": 546,
"text": "McCarthy (2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
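{
"text": "The leaf-copying step can be sketched as follows (ours; it assumes a dict-based tree where children maps a node to its child list and nouns_at maps a node to the nouns attached to it):\n\ndef add_leaf_copies(children, nouns_at):\n    # for every internal node that itself carries nouns, attach a fresh leaf\n    # holding a copy of those nouns, so all nouns end up at the leaf level\n    for node in list(children):\n        if children[node] and nouns_at.get(node):\n            leaf = (node, 'copy')  # fresh leaf id; the naming is arbitrary\n            children[node].append(leaf)\n            children[leaf] = []\n            nouns_at[leaf] = list(nouns_at[node])\n            nouns_at[node] = []",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},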
{
"text": "Finally, the algorithm requires that the employed hierarchy has a single root node. In Word-Net and GermaNet, nouns are not contained in a single hierarchy; instead they are partitioned according to a set of semantic primitives which are treated as the unique beginners of separate hierarchies. This means that an artificial concept (root) has to be created and connected to the existing top-level classes. Although WordNet has only nine classes without a hypernym, GermaNet contains 502. Of these, 125 have one or more daughters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
{
"text": "The number of classes below (root) has an immediate effect on the tree cut model: We therefore varied the the number of classes below (root) in order to observe how this affects the generalization outcome. We excluded from the hierarchy classes with less than or equal to 10, 20, and 30 hyponyms. This resulted in 49, 40, and 33 classes below (r o ot ). We also experimented with the full 125 classes (see Table 1 ). All of the class-based methods produce a value for each class c to which an argument noun n belongs. Since n can be ambiguous and its appropriate sense is not known, a unique class is typically chosen by simply selecting the class which maximizes the quantity of interest (see (3), (9), and (11)). An alternative is to consider the mean value over all classes. In our experiments, we compare the effect of these distinct selection procedures.",
"cite_spans": [],
"ref_spans": [
{
"start": 406,
"end": 413,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
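{
"text": "The two selection procedures amount to the following (a sketch of ours; cn and class_score are assumed accessors for a noun's classes and a model's per-class value):\n\ndef score_noun(n, v, r, cn, class_score, how='highest'):\n    # either take the best-scoring class for an ambiguous noun\n    # or average over all classes the noun can denote\n    scores = [class_score(v, r, c) for c in cn(n)]\n    if not scores:\n        return 0.0\n    return max(scores) if how == 'highest' else sum(scores) / len(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},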
{
"text": "Finally, for Clark and Weir's (2002) approach, two parameters are important for finding an appropriate generalization class: (a) the statistic for performing significance testing and (b) the a value for determining the significance level. Here, we experimented with the X2 and G2 statistics and ran our experiments for the following different a values: .0005, .05, .3, .75, and .995. The parameter settings we explored are shown in Table 1 .",
"cite_spans": [
{
"start": 13,
"end": 36,
"text": "Clark and Weir's (2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "3.1"
},
{
"text": "In order to evaluate the methods introduced in Section 2, we first established an independent measure of how well a verb fits its arguments by eliciting judgments from human subjects (Resnik, 1993; Lapata et al., 2001; Lapata et al., 1999) . In this section, we describe our method for assembling the set of experimental materials and collecting plausibility ratings for these stimuli.",
"cite_spans": [
{
"start": 183,
"end": 197,
"text": "(Resnik, 1993;",
"ref_id": "BIBREF18"
},
{
"start": 198,
"end": 218,
"text": "Lapata et al., 2001;",
"ref_id": "BIBREF11"
},
{
"start": 219,
"end": 239,
"text": "Lapata et al., 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eliciting Judgments on Selectional Preferences",
"sec_num": "3.2"
},
{
"text": "As mentioned earlier, co-occurrence triples of the form (v, r, n) were extracted from the output of SMES. In order to reduce the risk of ratings being influenced by verb/noun combinations unfamiliar to the participants, we removed triples that had a verb or a noun with fre-quency less than one per million Ten verbs were selected randomly for each grammatical relation. For each verb we divided the set of triples into three bands (High, Medium, and Low), based on an equal division of the range of log-transformed co-occurrence frequency, and randomly chose one noun from each band. The division ensured that the experimental stimuli represented likely and unlikely verb-argument combinations and enabled us to investigate how the different models perform with low/high counts. Example stimuli are shown in Table 2 . Our experimental design consisted of the factors grammatical relation (Re!), verb (Verb), and probability band (Band). The factors Re! and Band had three levels each, and the factor Verb had 10 levels. This yielded a total of Re! x Verb x Band = 3 x 10 x 3 = 90 stimuli. The 90 verb/noun pairs were paraphrased to create sentences. For the direct/PPobject sentences, one of 10 common human first names (five female, five male) was added as subject where possible, or else an inanimate subject which appeared frequently in the corpus was chosen.",
"cite_spans": [],
"ref_spans": [
{
"start": 809,
"end": 816,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Materials and Design.",
"sec_num": null
},
{
"text": "Magnitude Estimation (ME), a technique standardly used in psychophysics to measure judgments of sensory stimuli (Stevens, 1975) , which Bard et al. (1996) and Cowart (1997) have applied to the elicitation of linguistic judgments. ME has been shown to provide fine-grained measurements of linguistic acceptability which are robust enough to yield statistically significant results, while being highly replicable both within and across speakers.",
"cite_spans": [
{
"start": 112,
"end": 127,
"text": "(Stevens, 1975)",
"ref_id": "BIBREF21"
},
{
"start": 136,
"end": 154,
"text": "Bard et al. (1996)",
"ref_id": "BIBREF1"
},
{
"start": 159,
"end": 172,
"text": "Cowart (1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure. The experimental paradigm was",
"sec_num": null
},
{
"text": "ME requires subjects to assign numbers to a series of linguistic stimuli in a proportional fashion. Subjects are first exposed to a modulus item, to which they assign an arbitrary number. All other stimuli are rated proportionally to the modulus. In this way, each subject can establish their own rating scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure. The experimental paradigm was",
"sec_num": null
},
{
"text": "In the present experiment, the subjects were instructed to judge how acceptable the 90 sentences were in proportion to a modulus sentence. The experiment was conducted remotely over the Internet using WebExp 2.1 (Keller et al., 1998) , an interactive software package for administering web-based psychological experiments. Subjects first saw a set of instructions that explained the ME technique and included some examples, and had to fill in a short questionnaire including basic demographic information. Each subject saw 90 experimental stimuli. A random stimulus order was generated for each subject. ",
"cite_spans": [
{
"start": 212,
"end": 233,
"text": "(Keller et al., 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure. The experimental paradigm was",
"sec_num": null
},
{
"text": "Subjects. The experiment was completed by 61 volunteers, all self-reported native speakers of German. Subjects were recruited via postings to Usenet newsgroups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjects",
"sec_num": null
},
{
"text": "The data were first normalized by dividing each numerical judgment by the modulus value that the subject had assigned to the reference sentence. This operation creates a common scale for all subjects. Then the data were transformed by taking the decadic logarithm. This transformation ensures that the judgments are normally distributed and is standard practice for magnitude estimation data (Bard et al., 1996) . All analyses were conducted on the normalized, log-transformed judgments.",
"cite_spans": [
{
"start": 392,
"end": 411,
"text": "(Bard et al., 1996)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
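{
"text": "The preprocessing amounts to two lines of code; a minimal sketch (ours, with illustrative names):\n\nimport math\n\ndef normalize(raw_judgments, modulus_judgment):\n    # divide by the subject's modulus rating to put all subjects on a\n    # common scale, then take the decadic (base-10) logarithm\n    return [math.log10(j / modulus_judgment) for j in raw_judgments]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},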
{
"text": "Using correlation analysis we explored the linear relationship between the human judgments and the methods discussed in Section 2. As shown in Table 1 there are 30 distinct parameter instantiations for the class-based models. There are no parameters for co-occurrence frequency and conditional probability. Table 3 lists the best correlation coefficients per method, indicating the respective parameters where appropriate. For each grammatical relation, the optimal coefficient is emphasized.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 307,
"end": 314,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "In Table 3 , we also show how well humans agree in their judgments (inter-subject agreement, ISAgr) and thus provide an upper bound for the task which allows us to interpret how well the models are doing in relation to humans. We performed correlations on the elicited judgments using leave-one-out resampling (Weiss and Kulikowski, 1991) . We divided the set of the subjects' responses with size m into a set of size m -1 (i.e., the response data of all but one subject) and a set of size one (i.e., the response data of a single subject). We then correlated the mean rating of the former set with the rating of the latter. This was repeated m times and the average agreement is reported in Table 3 .",
"cite_spans": [
{
"start": 321,
"end": 338,
"text": "Kulikowski, 1991)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 692,
"end": 699,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
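{
"text": "The resampling procedure can be sketched as follows (ours; ratings is assumed to be an m x k array of m subjects by k items):\n\nimport numpy as np\n\ndef leave_one_out_agreement(ratings):\n    # correlate each subject's ratings with the mean ratings of the\n    # remaining m-1 subjects and report the average coefficient\n    m = ratings.shape[0]\n    rs = [np.corrcoef(ratings[i], np.delete(ratings, i, axis=0).mean(axis=0))[0, 1]\n          for i in range(m)]\n    return float(np.mean(rs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},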
{
"text": "As shown in Table 3 , all five models are significantly correlated with the human ratings, although the correlation coefficients are not as high as the inter-subject agreement (ISAgr). Selectional association (SelA) and conditional probability (CondP) reveal the highest overall correlations. CondP as expressed in (2) outperformed (1) which was excluded from further comparisons. As far as the individual argument relations are concerned, the similarity-class probability (SimC) performs best at modeling the selectional preferences for prepositional and direct objects. Clark and Weir's (2002) pseudo-disambiguation experiments also show that their method outperforms tree cut models (TCM) and SelA at modeling the semantic fit between verbs and their direct objects. Our results additionally generalize to PP-objects. SelA is the best predictor for subject-related selectional pref- With respect to the class selection method, better results are obtained when the highest class is chosen. This is true for SelA and SimC but not for TCM where the mean generally yields better performance. Recall from Section 3.1 that for TCM the number of classes below (root) was varied from 125 to 33. As can be seen from Table 3 , better results are obtained with 40 and 33 classes, i.e., with a relatively small number of classes below (root). Finally, in agreement with Clark and Weir, for SimC the best results were obtained with the G2 statistic. Also note that different a values seem to be appropriate for different argument relations.",
"cite_spans": [
{
"start": 572,
"end": 595,
"text": "Clark and Weir's (2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1210,
"end": 1217,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "An obvious question is whether a better fit with the experimental data can be obtained via model combination. As discussed earlier different models seem to provide complementary information when it comes to modeling different argument relations. A straightforward way to combine our different models is multiple linear regression. Recall that we have 30 variants of class-based models (only the best performing ones are shown in Table 3), some of which are expectedly highly correlated. After removing models with high intercorrelation (r > .99, 15 out of 30), principal components factor analysis (PCFA) was performed on all 90 items, keeping the factors that explained more than 5% of the variance (see Table 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 705,
"end": 712,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "3.4"
},
{
"text": "Multiple regression on all 90 observations with all four factors and forward selection (with Equation (12) was derived from the entire data set (i.e., 90 verb-argument combinations). Ideally, one would need to conduct another experiment with a new set of materials in order to determine whether (12) generalizes to unseen data. In default of a second experiment which we plan for the future, we investigated how well model combination performs on unseen data by using 10-fold crossvalidation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "3.4"
},
{
"text": "Our data set was split into 10 disjoint subsets each containing 9 items. We repeated the PCFA procedure and the multiple regression analysis 10 times, each time using 81 items as training data and the remaining 9 as test data. Then we performed a correlation analysis between the predicted values for the unseen items of each fold and the human ratings. Effectively, this analysis treats the whole data set as unseen. However notice that for each test/train set split we obtain different regression equations since the PCFA yields different factors for different data sets. Comparison between the estimated values and the human ratings yielded a correlation coefficient of .40 (p < .001) outperforming any single model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "3.4"
},
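{
"text": "A sketch of the cross-validated combination (ours; it substitutes scikit-learn's PCA, retaining components up to 95% of the variance, for the PCFA criterion used in the paper, so the factor selection differs slightly):\n\nimport numpy as np\nfrom sklearn.model_selection import KFold\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import LinearRegression\n\ndef crossvalidated_fit(X, y, n_splits=10):\n    # X: items x model-scores matrix, y: mean human ratings; refit the\n    # factor analysis and regression inside each fold, predict held-out items\n    preds = np.zeros_like(y, dtype=float)\n    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)\n    for train, test in folds.split(X):\n        pca = PCA(n_components=0.95).fit(X[train])\n        reg = LinearRegression().fit(pca.transform(X[train]), y[train])\n        preds[test] = reg.predict(pca.transform(X[test]))\n    return float(np.corrcoef(preds, y)[0, 1])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "3.4"
},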
{
"text": "In this paper, we evaluated five models for the acquisition of selectional preferences. We focused on German verbs and their subjects, direct objects, and PP-objects. We placed emphasis on classbased models of selectional preferences, explored their parameter space, and showed that the existing models, developed primarily for English, also generalize to German. We proposed to evaluate the different models against human ratings and argued that such an evaluation methodology allows us to assess the feasibility of the task and to compute performance upper bounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Our results indicate that there is no method which overall performs best; it seems that different methods are suited for different argument relations (i.e., SimC for objects, SelA for subjects). The more sophisticated class-based approaches do not always yield better results when compared to simple frequency-based models. This is in agreement with Lapata et al. (1999) who found that cooccurrence frequency is the best predictor of the plausibility of adjective-noun pairs. Model combination seems promising in that a better fit with experimental data is obtained. However, note that none of our models (including the ones obtained via multiple regression) seem to attain results reasonably close to the upper bound.",
"cite_spans": [
{
"start": 350,
"end": 370,
"text": "Lapata et al. (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In the future, we plan to consider web-based frequencies for our probability estimates (Keller et al., 2002) as well as Abney and Light's (1999) Hidden Markov Models and Ciaramita and Johnson's (2000) Bayesian Belief Networks. We will also expand our evaluation methodol- (12) ogy to adjective-noun and noun-noun combinations and conduct further rating experiments to cross-validate our combined models.",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "(Keller et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 120,
"end": 144,
"text": "Abney and Light's (1999)",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 200,
"text": "Ciaramita and Johnson's (2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "The task is to decide which of two verbs v1 and 1,2 is more likely to take a noun n as its object. The method being tested must reconstruct which of the unseen (vi, n) and (v2, n) is a valid verb-object combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hiding a semantic class hierarchy in a Markov model",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Light",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve Abney and Marc Light. 1999. Hiding a semantic class hierarchy in a Markov model. In Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing, pages 1-8, College Park, MD.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Magnitude estimation of linguistic acceptability. Language",
"authors": [
{
"first": "Ellen Gurman",
"middle": [],
"last": "Bard",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Antonella",
"middle": [],
"last": "Sorace",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "72",
"issue": "",
"pages": "32--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Gurman Bard, Dan Robertson, and Antonella So- race. 1996. Magnitude estimation of linguistic ac- ceptability. Language, 72(1):32-68.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "De Souza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Peter V. de Souza, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word sense disambiguation using automatically acquired verbal preferences",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "1-2",
"pages": "109--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll and Diana McCarthy. 2000. Word sense disambiguation using automatically acquired verbal preferences. Computers and the Humanities, 34(1- 2):109-114.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Explaining away ambiguity: Learning verb selectional restrictions with Bayesian networks",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "187--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Mark Johnson. 2000. Ex- plaining away ambiguity: Learning verb selectional restrictions with Bayesian networks. In Proceed- ings of the 18th International Conference on Com- putational Linguistics, pages 187-193, Saarbriicken, Germany.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Class-based probability estimation using a semantic hierarchy",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "2",
"pages": "187--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and David Weir. 2002. Class-based probability estimation using a semantic hierarchy. Computational Linguistics, 28(2): 187-206.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Experimental Syntax: Applying Objective Methods to Sentence Judgments",
"authors": [
{
"first": "Wayne",
"middle": [],
"last": "Cowart",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Cowart. 1997. Experimental Syntax: Apply- ing Objective Methods to Sentence Judgments. Sage Publications, Thousand Oaks, CA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "GermaNeta lexical-semantic net for German",
"authors": [
{
"first": "Birgit",
"middle": [],
"last": "Hamp",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Feldweg",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications at the 35th ACL and the 8th EACL",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a lexical-semantic net for German. In Proceedings of the Workshop on Automatic Information Extrac- tion and Building of Lexical Semantic Resources for NLP Applications at the 35th ACL and the 8th EACL, pages 9-15, Madrid, Spain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Web-Exp: A Java toolbox for web-based psychological experiments",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Steffan",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Konieczny",
"suffix": ""
},
{
"first": "Amalia",
"middle": [],
"last": "Todirascu",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller, Martin Corley, Steffan Corley, Lars Konieczny, and Amalia Todirascu. 1998. Web- Exp: A Java toolbox for web-based psychological experiments. Technical Report HCRC/TR-99, Hu- man Communication Research Centre, University of Edinburgh, UK.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using the web to overcome data sparseness",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Ourioupina",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller, Maria Lapata, and Olga Ourioupina. 2002. Using the web to overcome data sparse- ness. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 230-237, Philadelphia, PA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Determinants of adjective-noun plausibility",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "30--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Lapata, Scott McDonald, and Frank Keller. 1999. Determinants of adjective-noun plausibility. In Proceedings of the 9th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 30-36, Bergen, Norway.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating smoothing algorithms against plausibility judgments",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "346--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Lapata, Frank Keller, and Scott McDonald. 2001. Evaluating smoothing algorithms against plausibility judgments. In Proceedings of the 39th Annual Meeting of the Association for Com- putational Linguistics, pages 346-353, Toulouse, France.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Generalizing case frames using a thesaurus and the MDL principle",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Abe",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "2",
"pages": "217--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the MDL principle. Computational Linguistics, 24(2):217-244.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical models for the induction and use of selectional preferences",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Light",
"suffix": ""
},
{
"first": "Warren",
"middle": [],
"last": "Greiff",
"suffix": ""
}
],
"year": 2002,
"venue": "Cognitive Science",
"volume": "87",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Light and Warren Greiff. 2002. Statistical mod- els for the induction and use of selectional prefer- ences. Cognitive Science, 87:1-13.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Alternations, Subcategorization Frames and Selectional Preferences",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy. 2001. Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Alternations, Subcategorization Frames and Selectional Prefer- ences. Ph.D. thesis, University of Sussex, UK.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introduction to WordNet: An on-line lexical database",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Katherine",
"middle": [
"J"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "4",
"pages": "235--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexi- cal database. International Journal of Lexicogra- phy, 3(4):235-244.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An information extraction core system for real world German text processing",
"authors": [
{
"first": "Ginter",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Rolf",
"middle": [],
"last": "Backofen",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Baur",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Braun",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th ACL Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "209--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ginter Neumann, Rolf Backofen, Judith Baur, Markus Becker, and Christian Braun. 1997. An informa- tion extraction core system for real world German text processing. In Proceedings of the 5th ACL Con- ference on Applied Natural Language Processing, pages 209-216, Washington, DC.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributional clustering of English words",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Asso- ciation for Computational Linguistics, pages 183- 190, Columbus, OH.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Selection and Information: A Class-Based Approach to Lexical Relationships",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Stuart Resnik",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Stuart Resnik. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. thesis, University of Pennsylvania, Philadel- phia, PA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Selectional preferences and sense disambiguation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?",
"volume": "",
"issue": "",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1997. Selectional preferences and sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, pages 52-57, Washington, DC.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modeling by shortest data description",
"authors": [
{
"first": "Jorma",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 1978,
"venue": "Automatica",
"volume": "14",
"issue": "",
"pages": "465--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorma Rissanen. 1978. Modeling by shortest data de- scription. Automatica, 14:465-471.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Psychophysics: Introduction to Its Perceptual, Neural, and Social Prospects",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Stevens",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. S. Stevens. 1975. Psychophysics: Introduction to Its Perceptual, Neural, and Social Prospects. John Wiley & Sons, New York, NY.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Enriching a lexical semantic net with selectional preferences by means of statistical corpus analysis",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Workshop on Ontology Learning at the 14th ECM",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Wagner. 2000. Enriching a lexical semantic net with selectional preferences by means of statisti- cal corpus analysis. In Proceedings of the 1st Work- shop on Ontology Learning at the 14th ECM, pages 37-42, Berlin, Germany.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sholom",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Casimir",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kulikowski",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sholom M. Weiss and Casimir A Kulikowski. 1991. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, San Mateo, CA.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "p > .05 for removal from the model) yielded the regression equation in (12). The corresponding correlation coefficient is .47 (p < .001). Rating = .091 CondP \u00b1 .068 TCM +.103 SelA \u00b1 .052",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "Explored parameter settings number of classes, many of the cuts returned by MDL are over-generalizing at the (root) level.",
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Example stimuli (with log co-occurrence frequencies in the SZ corpus)",
"content": "<table><tr><td>Rating</td><td colspan=\"2\">ISAgr Freq</td><td>CondP</td><td>SelA</td><td>TCM</td><td>SimC</td></tr><tr><td>SUBJ</td><td>.790</td><td>.386*</td><td>.010</td><td>.408*</td><td>.281</td><td>.268</td></tr><tr><td/><td/><td/><td/><td>[highest]</td><td>[mean, 40 c.b.r.]</td><td>[mean, G2 , a = .75]</td></tr><tr><td>OBJ</td><td>.810</td><td>.360</td><td>.399*</td><td>.430*</td><td>.251</td><td>.611***</td></tr><tr><td/><td/><td/><td/><td>[mean]</td><td>[mean, 40 c.b.r.]</td><td>[highest, G2 , a = .05]</td></tr><tr><td>PP-OBJ</td><td>.820</td><td>.168</td><td>.335</td><td>.330</td><td>.319</td><td>.597***</td></tr><tr><td/><td/><td/><td/><td>[mean]</td><td>[mean, 33 c.b.r.]</td><td>[highest, G2 , a = .3]</td></tr><tr><td>overall</td><td>.810</td><td>.301**</td><td>.374***</td><td>.374***</td><td>.341*\"</td><td>.232*</td></tr><tr><td/><td/><td/><td/><td>[highest]</td><td>[mean, 40 c.b.r.]</td><td>[highest, G2 , a = .3]</td></tr><tr><td colspan=\"2\">* p &lt; .05</td><td colspan=\"2\">** p &lt; .01</td><td>*** p &lt; .001</td><td colspan=\"2\">c.b.r.: classes below (root)</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"text": "Best correlations between human ratings and selectional preference models",
"content": "<table><tr><td>Subjects. The experiment was completed by</td></tr><tr><td>61 volunteers, all self-reported native speakers of</td></tr><tr><td>German. Subjects were recruited via postings to</td></tr><tr><td>Usenet newsgroups.</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Principal component factors</td></tr><tr><td>erences, whereas co-occurrence frequency (Freq)</td></tr><tr><td>is the second best.</td></tr></table>"
}
}
}
}